Abstract

Reinforcement Learning (RL) is an optimal control method for regulating the behaviour of a dynamical system when the system model is unknown. This is a strong advantage for controlling systems, such as Water Distribution Networks (WDNs), for which a reliable model is difficult to obtain. When learning an optimal policy with RL, however, the exploration phase introduces a high degree of uncertainty into the system operation. Large-scale infrastructures such as WDNs require robust operation, since they cannot afford failures. This paper presents a model-free control method that provides safety during operation while learning an optimal policy. The method introduces a policy supervisor block into the control loop, which assesses the safety of the learned policy in real time.
The safety verification consists of evaluating the predicted trajectory on a simple linear model. In this model, only the fundamental linear dynamics are represented, and the system's dimensions need not be expressed with high accuracy. If the predicted trajectory violates the boundaries, the supervisor provides a safe control action instead. Simulation and experimental results demonstrate the applicability of the proposed method.
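The supervision scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a discrete-time nominal linear model x_{k+1} = A x_k + B u_k, box constraints on the state, and hypothetical names (`supervise`, `safe_action`, `horizon`) chosen for the sketch.

```python
import numpy as np

def supervise(policy_action, safe_action, x0, A, B, x_min, x_max, horizon=10):
    """Return the learned action if the trajectory predicted by the nominal
    linear model stays within bounds; otherwise fall back to safe_action."""
    x = x0
    for _ in range(horizon):
        x = A @ x + B @ policy_action       # propagate nominal linear dynamics
        if np.any(x < x_min) or np.any(x > x_max):
            return safe_action              # predicted violation: override policy
    return policy_action                    # trajectory stays safe: pass through

# Toy single-tank example (hypothetical numbers): level x, inflow u.
A = np.array([[1.0]])
B = np.array([[1.0]])
x0 = np.array([0.5])
u_learned = np.array([0.2])                 # would overflow the 0..1 bound
u_safe = np.array([0.0])                    # conservative fallback action
u = supervise(u_learned, u_safe, x0, A, B, x_min=0.0, x_max=1.0, horizon=5)
```

In this toy run the predicted level reaches 1.1 within three steps, so the supervisor returns the safe action; a policy action of zero inflow would be passed through unchanged.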
Original language: English
Journal: IEEE publication
Vol/issue: 978-1-6654-3643-4
Status: Published - Aug. 2021

Fingerprint

Dive into the research topics of 'Safe Reinforcement Learning Control for Water Distribution Networks'. Together they form a unique fingerprint.
