Abstract
In this paper we propose a scalable importance sampling algorithm for computing Gaussian mixture posteriors in conditional linear Gaussian Bayesian networks. Our contribution is based on a stochastic gradient ascent procedure that takes as input a stream of importance sampling weights, so that a mixture of Gaussians is dynamically updated without storing the full sample. The algorithm is designed following a Map/Reduce approach and therefore scales with the available computing resources. An implementation of the proposed algorithm is available as part of the AMIDST open-source toolbox for scalable probabilistic machine learning (http://www.amidsttoolbox.com).
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | 100 |
| Journal | International Journal of Approximate Reasoning |
| Volume | 100 |
| Pages (from-to) | 115-134 |
| Number of pages | 20 |
| ISSN | 0888-613X |
| DOIs | |
| Publication status | Published - 1 Sept 2018 |
Keywords
- Bayesian networks
- Conditional linear Gaussian models
- Gaussian mixtures
- Importance sampling
- Scalable inference