Rethinking Bayesian Learning for Data Analysis: The art of prior and inference in sparsity-aware modeling

Lei Cheng, Feng Yin, Sergios Theodoridis, Sotirios Chatzis, Tsung-Hui Chang

Research output: Contribution to journal › Journal article › peer-review

70 Citations (Scopus)

Abstract

Sparse modeling for signal processing and machine learning has been a focus of scientific research for over two decades. Among other formulations, supervised sparsity-aware learning (SAL) follows two major paths: 1) discriminative methods, which establish a direct input-output mapping by optimizing a regularized cost function, and 2) generative methods, which learn the underlying distributions. The latter, more widely known as Bayesian methods, enable uncertainty evaluation with respect to the performed predictions. Furthermore, they can better exploit related prior information and, in principle, naturally introduce robustness into the model, owing to their unique capacity to marginalize out uncertainties related to the parameter estimates. Moreover, the hyperparameters (tuning parameters) associated with the adopted priors, which correspond to cost-function regularizers, can be learned from the training data rather than via costly cross-validation techniques, as is generally the case with discriminative methods.
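The contrast drawn in the abstract, a regularizer tuned by cross-validation versus prior hyperparameters learned from the training data, can be made concrete with a minimal sketch. The snippet below uses scikit-learn's LassoCV for the discriminative path and ARDRegression (sparse Bayesian learning with an automatic relevance determination prior) for the Bayesian path on synthetic sparse data; the data setup and both estimator choices are illustrative assumptions, not the paper's own experiments.

```python
# Minimal sketch (illustrative, not from the paper): discriminative vs.
# Bayesian sparsity-aware learning on synthetic sparse regression data.
import numpy as np
from sklearn.linear_model import LassoCV, ARDRegression

rng = np.random.default_rng(0)
n, d, k = 100, 50, 5                      # samples, features, true nonzeros
w_true = np.zeros(d)
w_true[:k] = rng.normal(size=k)           # sparse ground-truth weights
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

# Discriminative path: the l1 regularizer's strength is a tuning parameter,
# selected here by 5-fold cross-validation.
lasso = LassoCV(cv=5).fit(X, y)

# Bayesian path: the hyperparameters of the sparsity-promoting ARD prior are
# learned from the training data via evidence maximization, with no
# cross-validation; ard.sigma_ carries posterior uncertainty over the weights.
ard = ARDRegression().fit(X, y)

print("LASSO nonzero weights:", int(np.sum(lasso.coef_ != 0)))
print("ARD   nonzero weights:", int(np.sum(np.abs(ard.coef_) > 1e-3)))
```

Under this setup both paths recover a sparse weight vector, but only the Bayesian one additionally yields a posterior covariance over the estimates, which is the uncertainty evaluation the abstract highlights.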

Original language: English
Journal: IEEE Signal Processing Magazine
Volume: 39
Issue number: 6
Pages (from-to): 18-52
Number of pages: 35
ISSN: 1053-5888
Publication status: Published - Nov 2022

