Scalable importance sampling estimation of Gaussian mixture posteriors in Bayesian networks

Darío Ramos-López, Andrés Masegosa, Antonio Salmerón, Rafael Rumí, Helge Langseth, Thomas Dyhre Nielsen, Anders Læsø Madsen

Publication: Contribution to journal › Journal article › Research › peer review

1 Citation (Scopus)

Abstract

In this paper we propose a scalable importance sampling algorithm for computing Gaussian mixture posteriors in conditional linear Gaussian Bayesian networks. Our contribution is based on using a stochastic gradient ascent procedure taking as input a stream of importance sampling weights, so that a mixture of Gaussians is dynamically updated with no need to store the full sample. The algorithm has been designed following a Map/Reduce approach and is therefore scalable with respect to computing resources. The implementation of the proposed algorithm is available as part of the AMIDST open-source toolbox for scalable probabilistic machine learning (http://www.amidsttoolbox.com).
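To illustrate the idea sketched in the abstract, the following is a minimal, hypothetical one-dimensional sketch (not the paper's conditional linear Gaussian setting and not the AMIDST implementation): samples are drawn from a fixed proposal, importance weights are computed against an assumed target density, and each weighted sample updates the parameters of a Gaussian mixture with a decaying stochastic-gradient step, so the full sample never needs to be stored. All names (`target`, `gauss_pdf`, the learning-rate schedule) are illustrative choices, not part of the original work.

```python
import math
import random

def gauss_pdf(x, mu, var):
    """Density of a univariate Gaussian N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def target(x):
    """Hypothetical (unnormalised) posterior: a two-component Gaussian mixture."""
    return 0.3 * gauss_pdf(x, -2.0, 0.5) + 0.7 * gauss_pdf(x, 3.0, 1.0)

random.seed(0)

# Mixture estimate updated online: component weights, means, variances (K = 2).
w = [0.5, 0.5]
mu = [-1.0, 1.0]
var = [2.0, 2.0]

prop_mu, prop_var = 0.0, 25.0  # broad Gaussian proposal distribution
lr = 0.05                      # initial stochastic-gradient step size

for t in range(20000):
    x = random.gauss(prop_mu, math.sqrt(prop_var))
    iw = target(x) / gauss_pdf(x, prop_mu, prop_var)  # importance weight
    # Responsibility of each component for this sample under the current fit.
    dens = [w[k] * gauss_pdf(x, mu[k], var[k]) for k in range(2)]
    total = sum(dens)
    if total == 0.0:
        continue  # sample far in the tails; densities underflowed
    step = lr / (1.0 + t * lr)  # decaying Robbins-Monro step size
    for k in range(2):
        r = iw * dens[k] / total  # importance-weighted responsibility
        # Stochastic-approximation updates toward the weighted sufficient statistics.
        w[k] += step * (r - iw * w[k])
        mu[k] += step * r * (x - mu[k])
        var[k] += step * r * ((x - mu[k]) ** 2 - var[k])
        var[k] = max(var[k], 1e-3)  # keep variances positive
    # Renormalise the mixture weights after each update.
    w = [max(v, 1e-6) for v in w]
    s = sum(w)
    w = [v / s for v in w]

# After consuming the stream, (w, mu, var) approximate the target mixture.
```

Because each sample contributes once and is then discarded, memory use is constant in the stream length; a Map/Reduce version, as the abstract indicates, would run such updates on partitions of the sample stream in parallel and combine the resulting parameter estimates.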
Original language: English
Article number: 100
Journal: International Journal of Approximate Reasoning
Volume: 100
Pages (from-to): 115-134
Number of pages: 20
ISSN: 0888-613X
DOI: 10.1016/j.ijar.2018.06.004
Status: Published - 1 Sep 2018

Fingerprint

Importance sampling
Gaussian mixture
Bayesian networks
Stochastic gradient ascent
MapReduce
Open source
Machine learning
Computing resources

Keywords

Bayesian networks; Conditional linear Gaussian models; Gaussian mixtures; Importance sampling; Scalable inference

    Cite this

    Ramos-López, Darío ; Masegosa, Andrés ; Salmerón, Antonio ; Rumí, Rafael ; Langseth, Helge ; Nielsen, Thomas Dyhre ; Madsen, Anders Læsø. / Scalable importance sampling estimation of Gaussian mixture posteriors in Bayesian networks. In: International Journal of Approximate Reasoning. 2018 ; Vol. 100. pp. 115-134.
    @article{035458d61cc1421989a531864b43b5f9,
    title = "Scalable importance sampling estimation of Gaussian mixture posteriors in Bayesian networks",
    abstract = "In this paper we propose a scalable importance sampling algorithm for computing Gaussian mixture posteriors in conditional linear Gaussian Bayesian networks. Our contribution is based on using a stochastic gradient ascent procedure taking as input a stream of importance sampling weights, so that a mixture of Gaussians is dynamically updated with no need to store the full sample. The algorithm has been designed following a Map/Reduce approach and is therefore scalable with respect to computing resources. The implementation of the proposed algorithm is available as part of the AMIDST open-source toolbox for scalable probabilistic machine learning (http://www.amidsttoolbox.com).",
    keywords = "Bayesian networks, Conditional linear Gaussian models, Gaussian mixtures, Importance sampling, Scalable inference",
    author = "Dar{\'i}o Ramos-L{\'o}pez and Andr{\'e}s Masegosa and Antonio Salmer{\'o}n and Rafael Rum{\'i} and Helge Langseth and Nielsen, {Thomas Dyhre} and Madsen, {Anders L{\ae}s{\o}}",
    year = "2018",
    month = "9",
    day = "1",
    doi = "10.1016/j.ijar.2018.06.004",
    language = "English",
    volume = "100",
    pages = "115--134",
    journal = "International Journal of Approximate Reasoning",
    issn = "0888-613X",
    publisher = "Elsevier",

    }

    Scalable importance sampling estimation of Gaussian mixture posteriors in Bayesian networks. / Ramos-López, Darío; Masegosa, Andrés; Salmerón, Antonio; Rumí, Rafael; Langseth, Helge; Nielsen, Thomas Dyhre; Madsen, Anders Læsø.

    In: International Journal of Approximate Reasoning, Vol. 100, 100, 01.09.2018, pp. 115-134.


    TY - JOUR

    T1 - Scalable importance sampling estimation of Gaussian mixture posteriors in Bayesian networks

    AU - Ramos-López, Darío

    AU - Masegosa, Andrés

    AU - Salmerón, Antonio

    AU - Rumí, Rafael

    AU - Langseth, Helge

    AU - Nielsen, Thomas Dyhre

    AU - Madsen, Anders Læsø

    PY - 2018/9/1

    Y1 - 2018/9/1

    N2 - In this paper we propose a scalable importance sampling algorithm for computing Gaussian mixture posteriors in conditional linear Gaussian Bayesian networks. Our contribution is based on using a stochastic gradient ascent procedure taking as input a stream of importance sampling weights, so that a mixture of Gaussians is dynamically updated with no need to store the full sample. The algorithm has been designed following a Map/Reduce approach and is therefore scalable with respect to computing resources. The implementation of the proposed algorithm is available as part of the AMIDST open-source toolbox for scalable probabilistic machine learning (http://www.amidsttoolbox.com).

    AB - In this paper we propose a scalable importance sampling algorithm for computing Gaussian mixture posteriors in conditional linear Gaussian Bayesian networks. Our contribution is based on using a stochastic gradient ascent procedure taking as input a stream of importance sampling weights, so that a mixture of Gaussians is dynamically updated with no need to store the full sample. The algorithm has been designed following a Map/Reduce approach and is therefore scalable with respect to computing resources. The implementation of the proposed algorithm is available as part of the AMIDST open-source toolbox for scalable probabilistic machine learning (http://www.amidsttoolbox.com).

    KW - Bayesian networks

    KW - Conditional linear Gaussian models

    KW - Gaussian mixtures

    KW - Importance sampling

    KW - Scalable inference

    UR - http://www.scopus.com/inward/record.url?scp=85048544181&partnerID=8YFLogxK

    U2 - 10.1016/j.ijar.2018.06.004

    DO - 10.1016/j.ijar.2018.06.004

    M3 - Journal article

    VL - 100

    SP - 115

    EP - 134

    JO - International Journal of Approximate Reasoning

    JF - International Journal of Approximate Reasoning

    SN - 0888-613X

    M1 - 100

    ER -