Online Model-free Safety Verification for Markov Decision Processes Without Safety Violation

Abhijit Mazumdar*, Rafal Wisniewski, Manuela L Bujorianu

*Corresponding author for this work

Research output: Contribution to conference proceeding › Article in proceeding › Research › peer-review

Abstract

In this paper, we consider the problem of safety assessment for Markov decision processes without explicit knowledge of the model. We aim to learn the probabilistic safety specifications associated with a given policy without compromising the safety of the process. To accomplish our goal, we characterize a subset of the state-space called the proxy set, which contains the states that are close, in a probabilistic sense, to the forbidden set consisting of all unsafe states. We compute the safety function using the single-step temporal difference method. To this end, we relate the safety function computation to value function estimation via temporal difference learning. Since the given control policy could be unsafe, we use a safe baseline subpolicy to generate data for learning. We then use an off-policy temporal difference learning method with importance sampling to learn the safety function corresponding to the given policy. Finally, we demonstrate our results using a numerical example.
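The abstract's core idea of estimating a safety function with off-policy temporal difference learning can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 3-state chain, the policies `pi` and `mu`, and the truncated horizon are all invented for the example. The safety function is treated as the probability of reaching the forbidden set, estimated by TD(0) with per-step importance-sampling ratios, while trajectories are generated by a safer behaviour policy `mu`.

```python
import random

# Illustrative 3-state chain; state 2 is the forbidden (absorbing) state.
STATES = [0, 1, 2]
FORBIDDEN = {2}

def step(state, action):
    # Simulator only; the learner never reads this model (model-free).
    # Action 0 stays among safe states; action 1 risks the forbidden state.
    if action == 0:
        return random.choice([0, 1])
    return random.choices([1, 2], weights=[0.7, 0.3])[0]

# Target policy pi (possibly unsafe) and safe baseline behaviour policy mu,
# both hypothetical, given as action probabilities per safe state.
pi = {0: [0.5, 0.5], 1: [0.5, 0.5]}
mu = {0: [0.9, 0.1], 1: [0.9, 0.1]}

def td_is_safety(episodes=5000, alpha=0.05, horizon=20):
    """Off-policy TD(0) with importance sampling: estimate, for each state,
    the probability under pi of entering the forbidden set, using data
    generated by the safe behaviour policy mu."""
    S = {s: 0.0 for s in STATES}
    for s in FORBIDDEN:
        S[s] = 1.0                        # forbidden states are unsafe w.p. 1
    for _ in range(episodes):
        x = 0
        for _ in range(horizon):
            if x in FORBIDDEN:
                break
            a = random.choices([0, 1], weights=mu[x])[0]
            rho = pi[x][a] / mu[x][a]     # importance-sampling ratio
            x_next = step(x, a)
            target = S[x_next]            # S[forbidden] = 1 encodes the event
            S[x] += alpha * rho * (target - S[x])
            x = x_next
    return S
```

Because updates are scaled by `rho = pi(a|x) / mu(a|x)`, the estimate converges toward the safety function of the target policy `pi` even though only the safe policy `mu` ever acts in the environment, which is the mechanism the abstract relies on to avoid safety violations during learning.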
Original language: English
Title of host publication: Online Model-free Safety Verification for Markov Decision Processes Without Safety Violation
Number of pages: 5
Publication date: 25 Jun 2024
Pages: 1-5
Publication status: Published - 25 Jun 2024
