Abstract

This paper proposes a learning-based finite control set model predictive control (FCS-MPC) scheme to improve the performance of DC-DC buck converters interfaced with constant power loads in a DC microgrid (DC-MG). An approach based on deep reinforcement learning (DRL) is presented to address one of the ongoing challenges in FCS-MPC of such converters: the optimal design of the weighting coefficients in the FCS-MPC objective function of each converter. A deep deterministic policy gradient (DDPG) method is employed to learn the optimal weighting-coefficient design policy, with the DRL problem formulated as a Markov decision process. A DRL agent is trained for each converter in the MG, and the weighting coefficients are obtained from reward computation through interactions between the MG and the agent. The proposed strategy is fully distributed, with agents exchanging data with one another, making this a multi-agent DRL problem. The proposed control scheme offers several advantages, including independence of the converter control system from operating-point conditions, plug-and-play capability, and robustness against MG uncertainties and unknown load dynamics.
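To make the weighting-coefficient question concrete, the sketch below shows one FCS-MPC step for a buck converter feeding a constant power load: each switching state in the finite control set is evaluated against a weighted one-step-ahead cost, and the weight on the current term is exactly the kind of coefficient the paper's DDPG agent would tune. All parameter values, the discretization, and the cost structure here are illustrative assumptions, not the paper's actual model.

```python
def fcs_mpc_step(i_L, v_C, v_ref, lam_i,
                 Vin=48.0, L=1e-3, C=1e-3, P_cpl=10.0, Ts=1e-5):
    """One FCS-MPC step for a buck converter with a constant power load.

    Evaluates the finite control set {0, 1} (switch off/on) and returns
    the switching state minimizing a weighted one-step-ahead cost.
    lam_i is the weighting coefficient on the current term; in the
    paper's scheme it would be supplied by a trained DDPG agent
    (illustrative parameter values, not from the paper).
    """
    best_u, best_cost = None, float("inf")
    for u in (0, 1):  # finite control set of the converter switch
        # Semi-implicit Euler one-step prediction of the dynamics:
        # update the inductor current first, then use it for the voltage.
        i_next = i_L + Ts / L * (u * Vin - v_C)
        v_next = v_C + Ts / C * (i_next - P_cpl / v_C)  # CPL draws P/v
        # Weighted objective: voltage tracking plus weighted current effort
        cost = (v_ref - v_next) ** 2 + lam_i * i_next ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u, best_cost
```

For example, with the capacitor voltage below the reference the minimizer selects the "on" state (raising the inductor current), while a voltage above the reference yields the "off" state; changing `lam_i` shifts this trade-off, which is why a fixed hand-tuned value ties the controller to one operating point.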
Original language: English
Article number: 5399
Journal: Energies
Volume: 15
Issue: 15
Pages (from-to): 1-21
Number of pages: 21
ISSN: 1996-1073
DOI
Status: Published - Jul 2022

Fingerprint

Dive into the research topics of 'Learning-Based Model Predictive Control of DC-DC Buck Converters in DC Microgrids: A Multi-Agent Deep Reinforcement Learning Approach'. Together they form a unique fingerprint.
