Mixing Loop Control using Reinforcement Learning

Research output: Contribution to journal › Conference article in Journal › Research › peer-review


Abstract

In hydronic heating systems, a mixing loop is used to control the temperature and pressure. The task of the mixing loop is to provide enough heat power for comfort while minimizing the cost of heating the building. Control strategies for mixing loops are often limited by the fact that they are installed in a wide range of different buildings and locations without being properly tuned. To solve this problem, the reinforcement learning method known as Q-learning is investigated. To improve the convergence rate, this paper introduces a Gaussian kernel backup method and a generic model for pre-simulation. The method is tested via high-fidelity simulation of different types of residential buildings located in Copenhagen. It is shown that the proposed method performs better than well-tuned industrial controllers.
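The core idea of a Gaussian kernel backup in tabular Q-learning can be illustrated with a small sketch: instead of applying the temporal-difference update only to the visited state, the update is spread to neighbouring states with Gaussian weights, which can speed up convergence when the value function is smooth. The state discretisation, parameter values, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Tabular Q-learning over a 1-D discretised state space (e.g. a measured
# temperature binned into 50 cells) with 5 discrete control actions.
n_states, n_actions = 50, 5
Q = np.zeros((n_states, n_actions))
alpha, gamma, sigma = 0.1, 0.95, 2.0  # learning rate, discount, kernel width
states = np.arange(n_states)

def kernel_weights(s, sigma):
    """Gaussian weights centred on the visited state s.

    The visited state gets weight 1; neighbours get exponentially
    smaller weights with distance.
    """
    w = np.exp(-0.5 * ((states - s) / sigma) ** 2)
    return w / w.max()

def q_update(s, a, r, s_next):
    """One Q-learning backup, smeared over neighbouring states."""
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    # Standard Q-learning would update only Q[s, a]; the kernel backup
    # updates the whole column for action a, weighted by proximity to s.
    Q[:, a] += alpha * kernel_weights(s, sigma) * td_error
```

With `sigma → 0` this reduces to ordinary Q-learning; larger `sigma` trades per-state accuracy for faster generalisation across nearby states.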

Original language: English
Article number: 05013
Book series: E3S Web of Conferences
Volume: 111
Number of pages: 7
ISSN: 2267-1242
DOI: 10.1051/e3sconf/201911105013
Publication status: Published - 2019
Event: E3S Web of Conferences 111: CLIMA 2019 CONGRESS - Bucharest, Romania
Duration: 26 May 2019 - 29 May 2019

Conference

Conference: E3S Web of Conferences 111
Country: Romania
City: Bucharest
Period: 26/05/2019 - 29/05/2019


Cite this

@inproceedings{0e56e0fceb2246f9aee84c7a1f8df370,
title = "Mixing Loop Control using Reinforcement Learning",
abstract = "In hydronic heating systems, a mixing loop is used to control the temperature and pressure. The task of the mixing loop is to provide enough heat power for comfort while minimizing the cost of heating the building. Control strategies for mixing loops are often limited by the fact that they are installed in a wide range of different buildings and locations without being properly tuned. To solve this problem, the reinforcement learning method known as Q-learning is investigated. To improve the convergence rate, this paper introduces a Gaussian kernel backup method and a generic model for pre-simulation. The method is tested via high-fidelity simulation of different types of residential buildings located in Copenhagen. It is shown that the proposed method performs better than well-tuned industrial controllers.",
author = "Anders Overgaard and Carsten Kalles{\o}e and Bendtsen, {Jan Dimon} and Nielsen, {Brian Kongsgaard}",
year = "2019",
doi = "10.1051/e3sconf/201911105013",
language = "English",
volume = "111",
journal = "E3S Web of Conferences",
issn = "2267-1242",
publisher = "EDP Sciences",

}

Mixing Loop Control using Reinforcement Learning. / Overgaard, Anders; Kallesøe, Carsten; Bendtsen, Jan Dimon; Nielsen, Brian Kongsgaard.

In: E3S Web of Conferences, Vol. 111, 05013, 2019.
