TY - GEN
T1 - A Multi-Agent Deep Reinforcement Learning based Voltage Control on Power Distribution Networks
AU - Zhang, Bin
AU - Ghias, Amer M. Y. M.
AU - Chen, Zhe
PY - 2022/11/5
Y1 - 2022/11/5
AB - Due to the high penetration of renewable energy in distribution networks, the exponential growth in the volume of collected data and control variables makes it difficult for centralized control methods to achieve real-time voltage regulation. In addition, hardware constraints (e.g., communication equipment) limit their application in practice. Therefore, a model-free multi-agent deep reinforcement learning (MADRL) voltage control strategy is developed in this paper. The proposed MADRL control strategy follows a framework of centralized training and distributed execution. We apply the deep deterministic policy gradient algorithm so that each agent controls its corresponding PV inverter in a distributed manner, while during training each agent can observe the other agents' information to improve learning. Simulations on a 33-bus distribution network illustrate the effectiveness of the proposed method, and its superiority is further validated by comparison with traditional methods.
KW - Training
KW - Deep learning
KW - Renewable energy sources
KW - Asia
KW - Distribution networks
KW - Reinforcement learning
KW - Real-time systems
UR - https://ieeexplore.ieee.org/document/10003515/
U2 - 10.1109/ISGTAsia54193.2022.10003515
DO - 10.1109/ISGTAsia54193.2022.10003515
M3 - Conference article in proceedings
SN - 979-8-3503-9967-7
T3 - Innovative Smart Grid Technologies - Asia (ISGT Asia), IEEE
SP - 761
EP - 765
BT - 2022 IEEE PES Innovative Smart Grid Technologies - Asia (ISGT Asia)
PB - IEEE
T2 - 2022 IEEE PES Innovative Smart Grid Technologies - Asia (ISGT Asia)
Y2 - 1 November 2022 through 5 November 2022
ER -