Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities

Mohammad Ahsanul Haque, Ruben B. Bautista, Fatemeh Noroozi, Kaustubh Kulkarni, Christian B. Laursen, Ramin Irani, Marco Bellantonio, Sergio Escalera, Gholamreza Anbarjafari, Kamal Nasrollahi, Ole Kæseler Andersen, Erika Geraldina Spaich, Thomas B. Moeslund

Publication: Contribution to book/anthology/report/conference proceeding › Conference article in proceedings › Research › Peer-reviewed

50 Citations (Scopus)
1046 Downloads (Pure)

Abstract

Pain is a symptom of many disorders associated with actual or potential tissue damage in the human body. Managing pain is not only a clinical duty but also highly costly. The most basic step of pain management is the assessment of pain. Traditionally, this has been accomplished by self-report or by visual inspection by experts. However, automatic pain assessment systems based on facial videos are rapidly evolving due to the need to manage pain in a robust and cost-effective way. Among the challenges of automatic pain assessment from facial video data, two issues are increasingly prevalent: first, exploiting both spatial and temporal information of the face to assess pain level, and second, incorporating multiple visual modalities to capture complementary facial information related to pain. Most works in the literature focus merely on exploiting spatial information from chromatic (RGB) video data in shallow learning scenarios. However, employing deep learning techniques for spatio-temporal analysis that consider Depth (D) and Thermal (T) along with RGB has high potential in this area. In this paper, we present the 'Multimodal Intensity Pain (MIntPAIN)' database, the first publicly available database for RGBDT pain level recognition in sequences. We provide first baseline results for the recognition of 5 pain levels by analyzing the individual visual modalities and their fusion with CNN and LSTM models. From the experimental evaluation, we observe that the fusion of modalities enhances the recognition performance of pain levels in comparison to individual modalities. In particular, the combination of RGB, D, and T in an early fusion fashion achieved the best recognition rate.
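The proceedings entry contains no code, but the abstract's description of early fusion of RGB, depth, and thermal frames followed by CNN and LSTM models can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch illustration, not the architecture reported in the paper: the class name EarlyFusionPainNet, all layer sizes, and the input resolution are assumptions chosen only to show channel-wise concatenation of the three modalities, per-frame CNN features, and an LSTM over the frame sequence.

import torch
import torch.nn as nn

class EarlyFusionPainNet(nn.Module):
    """Minimal sketch (not the paper's architecture): early fusion of
    RGB (3 ch), depth (1 ch) and thermal (1 ch) frames, a small per-frame
    CNN, and an LSTM over the frame sequence for 5 pain levels."""

    def __init__(self, num_classes=5, hidden_size=128):
        super().__init__()
        # Early fusion: frames are concatenated along the channel axis,
        # giving 3 (RGB) + 1 (D) + 1 (T) = 5 input channels per frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (batch*time, 64, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, rgb, depth, thermal):
        # rgb: (batch, time, 3, H, W); depth, thermal: (batch, time, 1, H, W)
        x = torch.cat([rgb, depth, thermal], dim=2)   # early fusion
        b, t, c, h, w = x.shape
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.lstm(feats)                     # temporal modelling
        return self.classifier(out[:, -1])            # logits over pain levels

# Example with random tensors standing in for a 16-frame RGBDT clip:
model = EarlyFusionPainNet()
rgb = torch.randn(2, 16, 3, 112, 112)
depth = torch.randn(2, 16, 1, 112, 112)
thermal = torch.randn(2, 16, 1, 112, 112)
logits = model(rgb, depth, thermal)                   # shape: (2, 5)

A late-fusion variant would instead run a separate network per modality and combine the per-modality features or scores at the end; the abstract reports that the early-fusion combination of RGB, D, and T gave the best recognition rate.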
Original language: English
Title: Proceedings - 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018
Number of pages: 8
Publisher: IEEE
Publication date: 5 Jun 2018
Pages: 250-257
ISBN (Electronic): 978-1-5386-2335-0
DOI
Status: Published - 5 Jun 2018
Event: 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018 - Xi'an, China
Duration: 15 May 2018 - 19 May 2018

Conference

Conference: 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018
Country/Territory: China
City: Xi'an
Period: 15/05/2018 - 19/05/2018
