Active Learning of Markov Decision Processes for System Verification

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

11 Citations (Scopus)
374 Downloads (Pure)

Abstract

Formal model verification has proven a powerful tool for verifying and validating the properties of a system. Central to this class of techniques is the construction of an accurate formal model of the system under investigation. Unfortunately, manually constructing such models can be a resource-demanding process, and this shortcoming has motivated the development of algorithms for automatically learning system models from observed system behaviors. Recently, algorithms have been proposed for learning Markov decision process representations of reactive systems from alternating sequences of input/output observations. While this alleviates the problem of manually constructing a system model, collecting or generating the observed system behaviors can itself prove demanding; consequently, we seek to minimize the amount of data required. In this paper we propose an algorithm for learning deterministic Markov decision processes from data by actively guiding the selection of input actions. The algorithm is empirically analyzed by learning system models of slot machines, and we demonstrate that the proposed active learning procedure can significantly reduce the amount of data required to obtain accurate system models.
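The core idea in the abstract — actively guiding which input action to try next so that fewer observations are needed — can be illustrated with a minimal sketch. This is not the paper's algorithm; the class name, the least-tried selection heuristic, and the slot-machine example inputs are all assumptions made for illustration.

```python
from collections import defaultdict

class ActiveDMDPLearner:
    """Illustrative sketch: estimate per-(state, input) output
    distributions of a reactive system, preferring the input action
    that has been tried fewest times in the current state."""

    def __init__(self, actions):
        self.actions = actions
        # state -> action -> number of times tried
        self.counts = defaultdict(lambda: defaultdict(int))
        # (state, action) -> output symbol -> observation count
        self.obs = defaultdict(lambda: defaultdict(int))

    def choose_action(self, state):
        # Active guidance: pick the least-explored input in this state.
        return min(self.actions, key=lambda a: self.counts[state][a])

    def record(self, state, action, output):
        # Incorporate one observed input/output step.
        self.counts[state][action] += 1
        self.obs[(state, action)][output] += 1

    def estimated_dist(self, state, action):
        # Empirical output distribution for a (state, input) pair.
        total = self.counts[state][action]
        return {o: c / total for o, c in self.obs[(state, action)].items()}
```

Under this heuristic, data collection is steered away from already well-sampled (state, input) pairs, which is the intuition behind needing fewer observations than passive, uniformly random input selection.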
Original language: English
Title of host publication: International Conference on Machine Learning and Applications (ICMLA)
Publication date: 12 Dec 2012
Pages: 289-294
ISBN (Print): 978-1-4673-4651-1
DOI: 10.1109/ICMLA.2012.158
Publication status: Published - 12 Dec 2012


Cite this

Chen, Y., & Nielsen, T. D. (2012). Active Learning of Markov Decision Processes for System Verification. In International Conference on Machine Learning and Applications (ICMLA) (pp. 289-294) https://doi.org/10.1109/ICMLA.2012.158