TY - JOUR
T1 - Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep reinforcement learning
AU - Li, Sichen
AU - Hu, Weihao
AU - Cao, Di
AU - Chen, Zhe
AU - Huang, Qi
AU - Blaabjerg, Frede
AU - Liao, Kaiji
PY - 2023/9/15
Y1 - 2023/9/15
N2 - Reliable and cost-effective operation of a multiple-microgrid (MMG) system depends on skillful management of its energy resources. Traditional energy management approaches are physics-model-based and rely on precise system parameters (e.g., line parameters) of the electricity and heat networks. Establishing these precise parameters is difficult in practice because they depend on a variety of factors. In this context, this paper proposes a physics-model-free control framework for the heat-electricity energy management of MMGs, consisting of a proposed surrogate model and a multi-agent deep reinforcement learning (MADRL) approach. An important step is to use historical data to train a surrogate model in a supervised manner so that it can imitate realistic power and thermal flow calculations. Meanwhile, the energy management problem is reformulated as a Markov game and solved by the proposed MADRL-based approach, in which each MG controller is modeled as an agent with a specific objective. A historical-trajectory representation, a parameter-space technique, and a deep dense architecture are introduced into the MADRL to overcome the negative impact of time-series input states on the decision-making process and to construct an efficient exploration mechanism that overcomes inefficient optimization of the MMG system in a multi-agent setting. During the MADRL training period, the trained surrogate models are integrated into the MADRL environment, so that an optimal energy management strategy can be developed through continuous interaction with the surrogate models. The proposed surrogate-model-enabled MADRL approach reduces the reliance on precise physical system models and prevents the trial-and-error training process from affecting the real system. Simulation results demonstrate the effectiveness of the proposed control framework.
AB - Reliable and cost-effective operation of a multiple-microgrid (MMG) system depends on skillful management of its energy resources. Traditional energy management approaches are physics-model-based and rely on precise system parameters (e.g., line parameters) of the electricity and heat networks. Establishing these precise parameters is difficult in practice because they depend on a variety of factors. In this context, this paper proposes a physics-model-free control framework for the heat-electricity energy management of MMGs, consisting of a proposed surrogate model and a multi-agent deep reinforcement learning (MADRL) approach. An important step is to use historical data to train a surrogate model in a supervised manner so that it can imitate realistic power and thermal flow calculations. Meanwhile, the energy management problem is reformulated as a Markov game and solved by the proposed MADRL-based approach, in which each MG controller is modeled as an agent with a specific objective. A historical-trajectory representation, a parameter-space technique, and a deep dense architecture are introduced into the MADRL to overcome the negative impact of time-series input states on the decision-making process and to construct an efficient exploration mechanism that overcomes inefficient optimization of the MMG system in a multi-agent setting. During the MADRL training period, the trained surrogate models are integrated into the MADRL environment, so that an optimal energy management strategy can be developed through continuous interaction with the surrogate models. The proposed surrogate-model-enabled MADRL approach reduces the reliance on precise physical system models and prevents the trial-and-error training process from affecting the real system. Simulation results demonstrate the effectiveness of the proposed control framework.
KW - Combined heat and power
KW - Energy management of multiple microgrids
KW - Multi-agent deep reinforcement learning
KW - Sparse variational Gaussian processes
UR - http://www.scopus.com/inward/record.url?scp=85164480644&partnerID=8YFLogxK
U2 - 10.1016/j.apenergy.2023.121359
DO - 10.1016/j.apenergy.2023.121359
M3 - Journal article
SN - 0306-2619
VL - 346
SP - 1
EP - 16
JO - Applied Energy
JF - Applied Energy
M1 - 121359
ER -