Active Learning of Markov Decision Processes for System Verification

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceedings › Research › peer-review

19 Citations (Scopus)
665 Downloads (Pure)

Abstract

Formal model verification has proven to be a powerful tool for verifying and validating the properties of a system. Central to this class of techniques is the construction of an accurate formal model for the system being investigated. Unfortunately, manual construction of such models can be a resource-demanding process, and this shortcoming has motivated the development of algorithms for automatically learning system models from observed system behaviors. Recently, algorithms have been proposed for learning Markov decision process representations of reactive systems based on alternating sequences of input/output observations. While alleviating the problem of manually constructing a system model, the collection/generation of observed system behaviors can also prove demanding. Consequently, we seek to minimize the amount of data required. In this paper we propose an algorithm for learning deterministic Markov decision processes from data by actively guiding the selection of input actions. The algorithm is empirically analyzed by learning system models of slot machines, and it is demonstrated that the proposed active learning procedure can significantly reduce the amount of data required to obtain accurate system models.
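The paper itself specifies the actual learning procedure; the sketch below is only a minimal, hedged illustration of the general idea of actively choosing input actions while estimating a deterministic-transition MDP from alternating input/output observations. The ToySlotMachine system, the least-tried-action heuristic, and all names in the code are assumptions made for illustration and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' algorithm): count-based active input
# selection while estimating output distributions of a deterministic-transition
# MDP from alternating input/output observations.
import random
from collections import defaultdict


class ToySlotMachine:
    """Illustrative reactive system (assumption): inputs advance an internal
    state deterministically; the observed output symbol is stochastic."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = 0

    def step(self, action):
        # Deterministic state update, stochastic output (toy assumption).
        self.state = (self.state + action + 1) % 3
        output = "win" if self.rng.random() < 0.1 * (self.state + 1) else "lose"
        return self.state, output


def actively_learn(system, actions, num_steps=1000):
    """Estimate per-(state, action) output distributions, always querying the
    least-tried action in the current state (a simple active heuristic)."""
    counts = defaultdict(lambda: defaultdict(int))  # (state, action) -> output -> count
    tries = defaultdict(int)                        # (state, action) -> number of visits
    state = system.state
    for _ in range(num_steps):
        # Active step: pick the input we have the least data about here.
        action = min(actions, key=lambda a: tries[(state, a)])
        next_state, output = system.step(action)
        counts[(state, action)][output] += 1
        tries[(state, action)] += 1
        state = next_state
    # Normalize counts into estimated output probabilities.
    return {sa: {o: c / tries[sa] for o, c in outs.items()}
            for sa, outs in counts.items()}


if __name__ == "__main__":
    model = actively_learn(ToySlotMachine(), actions=[0, 1])
    for (state, action), dist in sorted(model.items()):
        print(f"state={state} action={action} -> {dist}")
```

In this toy setting, steering each query toward the least-explored (state, action) pair spreads the observation budget evenly over the model, which is the intuition behind reducing the amount of data needed for an accurate estimate.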
Original language: English
Title of host publication: International Conference on Machine Learning and Applications (ICMLA)
Publication date: 12 Dec 2012
Pages: 289-294
ISBN (Print): 978-1-4673-4651-1
DOIs
Publication status: Published - 12 Dec 2012
