Safe Reinforcement Learning Using Wasserstein Distributionally Robust MPC and Chance Constraint

Arash Bahari Kordabad, Rafael Wisniewski, Sebastien Gros

Publication: Contribution to journal › Journal article › Research › peer review

8 Citations (Scopus)
194 Downloads (Pure)

Abstract

In this paper, we address the chance-constrained safe Reinforcement Learning (RL) problem using function approximators based on Stochastic Model Predictive Control (SMPC) and Distributionally Robust Model Predictive Control (DRMPC). We use Conditional Value at Risk (CVaR) to measure the risk of constraint violation and thereby quantify safety. To provide a policy that is safe by construction, we first propose using a parameterized nonlinear DRMPC scheme at each time step. The DRMPC scheme optimizes a finite-horizon cost function subject to the worst-case constraint violation over an ambiguity set. As the ambiguity set, we use a statistical ball around the empirical distribution whose radius is measured by the Wasserstein metric. Unlike sample-average-approximation SMPC, DRMPC provides a probabilistic guarantee on the out-of-sample risk and requires fewer samples of the disturbance. Q-learning is then used to optimize the DRMPC parameters to achieve the best closed-loop performance. Path planning with obstacle avoidance for a Wheeled Mobile Robot (WMR) is considered to illustrate the efficiency of the proposed method.
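As a brief sketch in generic notation (not necessarily the paper's exact formulation), a distributionally robust chance constraint enforced through CVaR over a Wasserstein ambiguity set can be written as

\[
\mathcal{P}_\epsilon := \left\{ \mathbb{P} : W\big(\mathbb{P}, \hat{\mathbb{P}}_N\big) \le \epsilon \right\},
\qquad
\sup_{\mathbb{P} \in \mathcal{P}_\epsilon} \mathrm{CVaR}_\beta^{\mathbb{P}}\!\left[ h(x_k, u_k, w_k) \right] \le 0,
\]

where \(\hat{\mathbb{P}}_N\) is the empirical distribution built from \(N\) disturbance samples, \(W\) is the Wasserstein metric, \(\epsilon\) is the ball radius, \(h\) is the constraint function, and \(\mathrm{CVaR}_\beta^{\mathbb{P}}[Z] = \inf_{\tau \in \mathbb{R}} \{ \tau + \tfrac{1}{\beta}\, \mathbb{E}^{\mathbb{P}}[\max(Z - \tau, 0)] \}\). Since CVaR upper-bounds the Value at Risk, satisfying this bound implies the chance constraint \(\mathbb{P}[h(x_k, u_k, w_k) \le 0] \ge 1 - \beta\) for every distribution in the ambiguity set. The symbols \(\epsilon\), \(\beta\), \(h\), and \(w_k\) here are illustrative placeholders rather than the paper's notation.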
Original language: English
Article number: 9982609
Journal: IEEE Access
Volume: 10
Pages (from-to): 130058-130067
Number of pages: 10
ISSN: 2169-3536
DOI
Status: Published - 1 Jan 2022
