Mixing Loop Control using Reinforcement Learning

Publication: Contribution to journal › Conference article in journal › Research › peer review


Abstract

In hydronic heating systems, a mixing loop is used to control the temperature and pressure. The task of the mixing loop is to provide enough heat power for comfort while minimizing the cost of heating the building. Control strategies for mixing loops are often limited by the fact that they are installed in a wide range of different buildings and locations without being properly tuned. To solve this problem, the reinforcement learning method known as Q-learning is investigated. To improve the convergence rate, this paper introduces a Gaussian kernel backup method and a generic model for pre-simulation. The method is tested via high-fidelity simulation of different types of residential buildings located in Copenhagen. It is shown that the proposed method performs better than well-tuned industrial controllers.
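The abstract's key idea, spreading each Q-learning update to neighbouring states with Gaussian weights to speed up convergence, can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: the state discretization, kernel width, and all variable names below are assumptions made for the example.

```python
import numpy as np

# Toy sketch (assumed setup, not the paper's code): tabular Q-learning over a
# discretized 1-D state (e.g. a supply-temperature bin), where the TD error of
# one experience is "backed up" to nearby states with Gaussian weights, so a
# single sample updates a whole neighbourhood of the Q-table.

N_STATES, N_ACTIONS = 50, 3      # discretization sizes (illustrative)
ALPHA, GAMMA, SIGMA = 0.1, 0.95, 2.0  # learning rate, discount, kernel width

Q = np.zeros((N_STATES, N_ACTIONS))

def gaussian_kernel_backup(Q, s, a, r, s_next):
    """One Q-learning step whose TD error is propagated to all states,
    weighted by a Gaussian kernel centred on the visited state s."""
    td_error = r + GAMMA * Q[s_next].max() - Q[s, a]
    states = np.arange(N_STATES)
    weights = np.exp(-0.5 * ((states - s) / SIGMA) ** 2)  # peak 1.0 at s
    Q[states, a] += ALPHA * weights * td_error
    return td_error

# Single illustrative update: state 25, action 1, reward -1.0, next state 26.
err = gaussian_kernel_backup(Q, s=25, a=1, r=-1.0, s_next=26)
```

The visited state receives the full step `ALPHA * td_error`, while neighbours receive progressively smaller steps, which is what lets nearby, as-yet-unvisited states benefit from each sample.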

Original language: English
Article number: 05013
Book series: E3S Web of Conferences
Volume: 111
Number of pages: 7
ISSN: 2267-1242
DOI: 10.1051/e3sconf/201911105013
Status: Published - 2019
Event: E3S Web of Conferences 111: CLIMA 2019 CONGRESS - Bucharest, Romania
Duration: 26 May 2019 - 29 May 2019

Conference

Conference: E3S Web of Conferences 111
Country: Romania
City: Bucharest
Period: 26/05/2019 - 29/05/2019

Fingerprint

Reinforcement learning · Hot water heating · Heating · Controllers · Costs · Temperature · Simulation

Cite this

@inproceedings{0e56e0fceb2246f9aee84c7a1f8df370,
title = "Mixing Loop Control using Reinforcement Learning",
abstract = "In hydronic heating systems, a mixing loop is used to control the temperature and pressure. The task of the mixing loop is to provide enough heat power for comfort while minimizing the cost of heating the building. Control strategies for mixing loops are often limited by the fact that they are installed in a wide range of different buildings and locations without being properly tuned. To solve this problem the reinforcement learning method known as Q-learning is investigated. To improve the convergence rate this paper introduces a Gaussian kernel backup method and a generic model for pre-simulation. The method is tested via high-fidelity simulation of different types of residential buildings located in Copenhagen. It is shown that the proposed method performs better than well tuned industrial controllers.",
author = "Anders Overgaard and Carsten Kalles{\o}e and Bendtsen, {Jan Dimon} and Nielsen, {Brian Kongsgaard}",
year = "2019",
doi = "10.1051/e3sconf/201911105013",
language = "English",
volume = "111",
journal = "E3S Web of Conferences",
issn = "2267-1242",
publisher = "EDP Sciences",

}

Mixing Loop Control using Reinforcement Learning. / Overgaard, Anders; Kallesøe, Carsten; Bendtsen, Jan Dimon; Nielsen, Brian Kongsgaard.

In: E3S Web of Conferences, Vol. 111, 05013, 2019.


TY - GEN

T1 - Mixing Loop Control using Reinforcement Learning

AU - Overgaard, Anders

AU - Kallesøe, Carsten

AU - Bendtsen, Jan Dimon

AU - Nielsen, Brian Kongsgaard

PY - 2019

Y1 - 2019

N2 - In hydronic heating systems, a mixing loop is used to control the temperature and pressure. The task of the mixing loop is to provide enough heat power for comfort while minimizing the cost of heating the building. Control strategies for mixing loops are often limited by the fact that they are installed in a wide range of different buildings and locations without being properly tuned. To solve this problem the reinforcement learning method known as Q-learning is investigated. To improve the convergence rate this paper introduces a Gaussian kernel backup method and a generic model for pre-simulation. The method is tested via high-fidelity simulation of different types of residential buildings located in Copenhagen. It is shown that the proposed method performs better than well tuned industrial controllers.

AB - In hydronic heating systems, a mixing loop is used to control the temperature and pressure. The task of the mixing loop is to provide enough heat power for comfort while minimizing the cost of heating the building. Control strategies for mixing loops are often limited by the fact that they are installed in a wide range of different buildings and locations without being properly tuned. To solve this problem the reinforcement learning method known as Q-learning is investigated. To improve the convergence rate this paper introduces a Gaussian kernel backup method and a generic model for pre-simulation. The method is tested via high-fidelity simulation of different types of residential buildings located in Copenhagen. It is shown that the proposed method performs better than well tuned industrial controllers.

U2 - 10.1051/e3sconf/201911105013

DO - 10.1051/e3sconf/201911105013

M3 - Conference article in Journal

VL - 111

JO - E3S Web of Conferences

JF - E3S Web of Conferences

SN - 2267-1242

M1 - 05013

ER -