TY - JOUR
T1 - Coordinated carbon capture systems and power-to-gas dynamic economic energy dispatch strategy for electricity–gas coupled systems considering system uncertainty
T2 - An improved soft actor–critic approach
AU - Zhang, Bin
AU - Wu, Xuewei
AU - Ghias, Amer M.Y.M.
AU - Chen, Zhe
PY - 2023/5/15
Y1 - 2023/5/15
N2 - Due to uncertainties in renewable energy generation and load demands, traditional energy dispatch schemes for an integrated electricity–gas system (IEGS) depend considerably on explicit mathematical forecast models. In this study, a novel data-driven deep reinforcement learning method is applied to solve the IEGS dynamic dispatch problem with the objectives of minimizing carbon emissions and operating costs. Moreover, flexible operation of the carbon capture system and power-to-gas facility is proposed to attain low operating costs. The IEGS dynamic dispatch problem is formulated as a Markov game, and a soft actor–critic (SAC) algorithm is applied to learn the optimal dispatch solution. To improve training efficiency and convergence, prioritized experience replay (PER) is employed. In the simulations, the proposed PER–SAC algorithm exhibits faster and more stable learning performance than the deep Q-network and standard SAC. In contrast to a modified sequential quadratic programming method based on uncertainty prediction, the proposed method reduces the target cost by 11.62% when the prediction error exceeds 10%. On the same hardware platform, the computational time of the scenario-analysis solution is 4.58 times that of training the PER–SAC method. Finally, simulation results under different scenarios demonstrate that the PER–SAC-based dispatch strategy has satisfactory generalization and adaptability.
AB - Due to uncertainties in renewable energy generation and load demands, traditional energy dispatch schemes for an integrated electricity–gas system (IEGS) depend considerably on explicit mathematical forecast models. In this study, a novel data-driven deep reinforcement learning method is applied to solve the IEGS dynamic dispatch problem with the objectives of minimizing carbon emissions and operating costs. Moreover, flexible operation of the carbon capture system and power-to-gas facility is proposed to attain low operating costs. The IEGS dynamic dispatch problem is formulated as a Markov game, and a soft actor–critic (SAC) algorithm is applied to learn the optimal dispatch solution. To improve training efficiency and convergence, prioritized experience replay (PER) is employed. In the simulations, the proposed PER–SAC algorithm exhibits faster and more stable learning performance than the deep Q-network and standard SAC. In contrast to a modified sequential quadratic programming method based on uncertainty prediction, the proposed method reduces the target cost by 11.62% when the prediction error exceeds 10%. On the same hardware platform, the computational time of the scenario-analysis solution is 4.58 times that of training the PER–SAC method. Finally, simulation results under different scenarios demonstrate that the PER–SAC-based dispatch strategy has satisfactory generalization and adaptability.
KW - Deep reinforcement learning
KW - Low-carbon and economic dispatch
KW - Prioritized experience replay
KW - Soft actor–critic
KW - Electricity–gas coupled system
U2 - 10.1016/j.energy.2023.126965
DO - 10.1016/j.energy.2023.126965
M3 - Journal article
SN - 0360-5442
VL - 271
JO - Energy
JF - Energy
M1 - 126965
ER -