Mixing Loop Control using Reinforcement Learning

Anders Overgaard, Carsten Kallesøe, Jan Dimon Bendtsen, Brian Kongsgaard Nielsen

Publication: Contribution to journal › Conference article in journal › Research › peer review


Abstract

In hydronic heating systems, a mixing loop is used to control the temperature and pressure. The task of the mixing loop is to provide enough heat power for comfort while minimizing the cost of heating the building. Control strategies for mixing loops are often limited by the fact that they are installed in a wide range of different buildings and locations without being properly tuned. To address this problem, the reinforcement learning method known as Q-learning is investigated. To improve the convergence rate, this paper introduces a Gaussian kernel backup method and a generic model for pre-simulation. The method is tested via high-fidelity simulation of different types of residential buildings located in Copenhagen. It is shown that the proposed method performs better than well-tuned industrial controllers.
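The abstract describes tabular Q-learning in which each temporal-difference update is spread to neighbouring states through a Gaussian kernel. The sketch below illustrates that idea in a minimal form; the state discretization (outdoor-temperature bins), action set, reward, and all hyperparameters are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch: Q-learning with a Gaussian kernel backup over a
# discretized state grid. Names, grid, and hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

states = np.linspace(-10.0, 20.0, 31)   # assumed state centres (e.g. outdoor temp bins)
n_actions = 11                           # assumed discrete forward-temperature set-points
Q = np.zeros((len(states), n_actions))

alpha, gamma, epsilon = 0.1, 0.95, 0.1   # assumed learning rate, discount, exploration
sigma = 2.0                              # assumed kernel width on the state grid


def kernel_weights(s_idx: int) -> np.ndarray:
    """Gaussian weights of every state centre relative to the visited state."""
    d = states - states[s_idx]
    w = np.exp(-0.5 * (d / sigma) ** 2)
    return w / w.max()                   # weight 1 at the visited state itself


def q_update(s_idx: int, a: int, r: float, s_next_idx: int) -> None:
    """Spread the TD update to neighbouring states via the kernel,
    instead of updating only the single visited cell."""
    td_target = r + gamma * Q[s_next_idx].max()
    td_error = td_target - Q[s_idx, a]
    Q[:, a] += alpha * kernel_weights(s_idx) * td_error


def epsilon_greedy(s_idx: int) -> int:
    """Standard epsilon-greedy action selection over the Q-table."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(Q[s_idx].argmax())
```

The kernel backup lets a single observed transition update the value estimates of nearby states as well, which is one plausible way such a method could speed up convergence when experience is sparse.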

Original language: English
Article number: 05013
Book series: E3S Web of Conferences
Volume: 111
Number of pages: 7
ISSN: 2267-1242
DOI
Status: Published - 2019
Event: E3S Web of Conferences 111: CLIMA 2019 CONGRESS - Bucharest, Romania
Duration: 26 May 2019 – 29 May 2019

Conference

Conference: E3S Web of Conferences 111
Country/Territory: Romania
City: Bucharest
Period: 26/05/2019 – 29/05/2019
