Project Details
Description
Probabilistic Decision Graphs (PDGs) combine elements of probabilistic graphical
models and ordered binary decision diagrams (OBDDs). PDGs were first
proposed by M. Bozga and O. Maler in the context of probabilistic verification.
We have introduced a slightly generalized version of PDGs, and investigated
their properties from a reasoning-under-uncertainty point of view. A special
focus of this work is a comparison with Bayesian networks with regard to the
efficiency of probabilistic inference, and to the probabilistic independence
models that can be represented by the graphical structures in the
two frameworks. The results show that PDGs and Bayesian networks represent
different kinds of independence structures: whereas independence structures encoded
by Bayesian networks are given in terms of subsets of random variables,
the independence structures encoded by PDGs are given in terms of partitions
of the state space. For probability distributions exhibiting the latter type of
independence patterns, one can obtain significantly more efficient representations
(with regard to complexity of probabilistic inference) with PDGs than
with Bayesian networks.
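The distinction above can be made concrete with a small, purely illustrative sketch (the variable names, states, and probabilities are hypothetical, not taken from the project): a conditional distribution P(Y | X) that depends on X only through a partition of X's state space, which is the kind of independence pattern the text says PDGs can exploit more compactly than a Bayesian network's full conditional probability table.

```python
# Hypothetical example: X has four states, but P(Y=1 | X) is constant within
# each block of a two-block partition of X's state space. A full CPT would
# need one parameter per state of X; the partition-based form needs one per
# block. All numbers here are made up for illustration.

# Marginal distribution of X over its four states.
p_x = {"x0": 0.1, "x1": 0.2, "x2": 0.3, "x3": 0.4}

# Partition of X's state space into two blocks, A and B.
partition = {"x0": "A", "x1": "A", "x2": "B", "x3": "B"}

# One parameter per partition block instead of one per state of X.
p_y1_given_block = {"A": 0.9, "B": 0.2}

def p_y1():
    """Marginal P(Y=1) computed from the partition-based parameters."""
    return sum(p_x[x] * p_y1_given_block[partition[x]] for x in p_x)

print(round(p_y1(), 3))  # 0.1*0.9 + 0.2*0.9 + 0.3*0.2 + 0.4*0.2 = 0.41
```

The independence here is not between subsets of variables (Y still depends on X), but between Y and the identity of X's state *within* each partition block; with many such variables, exploiting this structure can shrink both the representation and the cost of inference.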
We have also developed and implemented a learning algorithm for PDGs, and have conducted experiments in which both PDG and Bayesian network
models were learned from real-life data sets. The goal of the experiments was to
determine whether, for some types of data, the learning algorithms for the two
frameworks would produce models with significantly different inference complexity.
Somewhat surprisingly, the results showed that the learned models in the
two representation languages exhibited quite similar complexity characteristics.
Status: Active

Effective start/end date: 19/05/2010 → …
Funding
 <no name>
Projects
Learning Bayesian Networks
Nielsen, J. D., Nielsen, T. D., Zeng, Y., Jensen, F. V. & Nielsen, S. H.
19/05/2010 → …
Project: Research