Abstract

Reinforcement Learning (RL) is a widely used method for solving optimal control problems without prior knowledge of the system. However, the use of RL for the control of industrial applications is still limited. One reason for this limited applicability is the difficulty of learning the system behaviour under poor experimental conditions. This paper proposes two methods to cope with scenarios in which the collected data does not contribute to learning in linear systems. The first method identifies the periods where learning is inefficient and pauses the policy update; the second method reduces the approximation space so that learning can continue. The proposed methods are validated in a simulation environment of a water distribution network. Both methods show similar performance and provide reliable operation during steady state or under poor experimental conditions.
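The first method can be illustrated with a minimal sketch. The abstract does not specify how uninformative periods are detected, so the excitation measure below (sample variance of recent inputs against a fixed threshold) and all names are illustrative assumptions, not the paper's actual criterion:

```python
import random

def excitation_level(samples):
    """Crude informativeness measure: sample variance of recent inputs.
    (Illustrative assumption; the paper's detection criterion may differ.)"""
    if len(samples) < 2:
        return 0.0
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)

def run_episode(inputs, threshold=1e-3, window=10):
    """Count steps that would trigger a policy update vs. a learning pause."""
    updates, pauses = 0, 0
    history = []
    for u in inputs:
        history.append(u)
        recent = history[-window:]
        if excitation_level(recent) >= threshold:
            updates += 1   # data is informative: update the policy
        else:
            pauses += 1    # poor excitation: pause learning, keep current policy
    return updates, pauses

# Steady-state input (no excitation) vs. a varying, informative input.
flat = [1.0] * 20
rng = random.Random(0)
noisy = [rng.gauss(0.0, 1.0) for _ in range(20)]
```

With a constant (steady-state) input sequence the sketch pauses every step, while a varying input keeps the updates running, which mirrors the behaviour the abstract attributes to the first method.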
Original language: English
Journal: EUCA Conference Editorial Board
Volume: 978-94-6384-236-5
Number of pages: 6
Status: Published - Jun. 2021

Fingerprint

Dive into the research topics of 'Real-Time Reinforcement Learning Control in Poor Experimental Conditions'. Together they form a unique fingerprint.
