Multi-modal RGB–Depth–Thermal Human Body Segmentation

Cristina Palmero, Albert Clapés, Chris Bahnsen, Andreas Møgelmose, Thomas B. Moeslund, Sergio Escalera

Research output: Contribution to journal › Journal article › peer-review



This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The modalities are registered using a calibration device and a registration algorithm. Our baseline extracts regions of interest using background subtraction, defines a partitioning of the foreground regions into cells, computes a set of image features on those cells using different state-of-the-art feature extraction methods, and models the distribution of the descriptors per cell using probabilistic models. A supervised learning algorithm then fuses the output likelihoods over cells in a stacked feature vector representation. The baseline, using Gaussian Mixture Models for the probabilistic modeling and Random Forest for the stacked learning, is superior to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground truth of human segmentations.
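The two-stage scheme described in the abstract — per-cell probabilistic modeling of descriptors followed by a stacked supervised fusion — can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the authors' implementation: the cell count, descriptor dimensionality, and GMM/forest hyperparameters are placeholder assumptions, and the per-cell descriptors stand in for the real image features computed on each modality.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

N_CELLS = 4    # cells in the foreground-region grid (illustrative)
FEAT_DIM = 8   # per-cell descriptor dimensionality (illustrative)

# Synthetic per-cell descriptors for "human" and "background" training cells.
X_human = rng.normal(1.0, 0.5, size=(200, N_CELLS, FEAT_DIM))
X_bg = rng.normal(-1.0, 0.5, size=(200, N_CELLS, FEAT_DIM))

# Stage 1: a GMM per (class, cell) models the descriptor distribution.
gmms = {}
for cls, data in (("human", X_human), ("bg", X_bg)):
    for c in range(N_CELLS):
        gm = GaussianMixture(n_components=2, random_state=0)
        gm.fit(data[:, c, :])
        gmms[(cls, c)] = gm

def stacked_likelihoods(samples):
    """Stack per-cell log-likelihoods under both class GMMs into one vector."""
    feats = []
    for c in range(N_CELLS):
        for cls in ("human", "bg"):
            feats.append(gmms[(cls, c)].score_samples(samples[:, c, :]))
    return np.stack(feats, axis=1)  # shape: (n_samples, N_CELLS * 2)

# Stage 2: a Random Forest fuses the stacked likelihoods (stacked learning).
X_train = np.vstack([stacked_likelihoods(X_human), stacked_likelihoods(X_bg)])
y_train = np.array([1] * len(X_human) + [0] * len(X_bg))
rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(X_train, y_train)

train_acc = rf.score(X_train, y_train)
```

On this well-separated toy data the forest classifies essentially perfectly; the point is the structure: the GMM likelihoods become an intermediate feature vector, so the forest learns how much to trust each cell and class model rather than operating on raw descriptors.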
Original language: English
Journal: International Journal of Computer Vision
Issue number: 2
Pages (from-to): 217-239
Publication status: Published - 13 Apr 2016
