Abstract
One of the simplest, yet most consistently well-performing, classes of classifiers is the naive Bayes model (a special class of Bayesian network models). However, these models rely on the (naive) assumption that all the attributes used to describe an instance are conditionally independent given the class of that instance. To relax this independence assumption, we have in previous work proposed a family of models called latent classification models (LCMs). LCMs are defined for continuous domains and generalize the naive Bayes model by using latent variables to model class-conditional dependencies between the attributes. In addition to providing good classification accuracy, the LCM model has several appealing properties, including a relatively small parameter space that makes it less susceptible to over-fitting. In this paper we take a first step towards generalizing LCMs to hybrid domains by proposing an LCM model for domains with binary attributes. We present algorithms for learning the proposed model, and we describe a variational approximation-based inference procedure. Finally, we empirically compare the accuracy of the proposed model to the accuracy of other classifiers for a number of different domains, including the problem of recognizing symbols in black and white images.
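To make the conditional-independence assumption concrete, the following is a minimal sketch of the Bernoulli naive Bayes baseline for binary attributes, i.e. the model that LCMs generalize, not the LCM itself. The function names and toy data are illustrative assumptions, not from the paper.

```python
import math

def train(X, y):
    """Estimate P(class) and P(attr=1 | class) with Laplace smoothing.

    Under the naive assumption, each binary attribute is modeled
    independently given the class, so only one Bernoulli parameter
    per (class, attribute) pair is needed.
    """
    classes = sorted(set(y))
    n_attrs = len(X[0])
    priors, likelihoods = {}, {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        priors[c] = len(rows) / len(X)
        likelihoods[c] = [
            # Laplace smoothing: add one pseudo-count per outcome
            (sum(x[j] for x in rows) + 1) / (len(rows) + 2)
            for j in range(n_attrs)
        ]
    return priors, likelihoods

def predict(priors, likelihoods, x):
    """Return the class maximizing the log joint probability of x."""
    best_c, best_score = None, -math.inf
    for c, prior in priors.items():
        score = math.log(prior)
        for xj, pj in zip(x, likelihoods[c]):
            # Product over attributes = the naive independence assumption
            score += math.log(pj if xj else 1.0 - pj)
        if score > best_score:
            best_c, best_score = c, score
    return best_c

# Toy binary data: two classes distinguished by attributes 0 and 2
X = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]]
y = [0, 0, 1, 1]
priors, likelihoods = train(X, y)
print(predict(priors, likelihoods, [1, 1, 0]))  # → 0
```

The LCM relaxes exactly the per-attribute product in `predict` by introducing latent variables that capture class-conditional dependencies between attributes.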
Field | Value
---|---
Original language | English
Journal | Pattern Recognition |
Volume | 42 |
Issue number | 11 |
Pages (from-to) | 2724-2736 |
ISSN | 0031-3203 |
DOIs | |
Publication status | Published - 2009 |