Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities

Mohammad Ahsanul Haque, Ruben B. Bautista, Fatemeh Noroozi, Kaustubh Kulkarni, Christian B. Laursen, Ramin Irani, Marco Bellantonio, Sergio Escalera, Gholamreza Anbarjafari, Kamal Nasrollahi, Ole Kæseler Andersen, Erika Geraldina Spaich, Thomas B. Moeslund

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

4 Citations (Scopus)
383 Downloads (Pure)

Abstract

Pain is a symptom of many disorders associated with actual or potential tissue damage in the human body. Managing pain is not only a duty but also highly costly. The most fundamental step of pain management is the assessment of pain. Traditionally, it has been accomplished by self-report or visual inspection by experts. However, automatic pain assessment systems based on facial videos are rapidly evolving due to the need to manage pain in a robust and cost-effective way. Among the challenges of automatic pain assessment from facial video data, two issues are increasingly prevalent: first, exploiting both spatial and temporal information of the face to assess pain level, and second, incorporating multiple visual modalities to capture complementary facial information related to pain. Most works in the literature merely exploit spatial information from chromatic (RGB) video data in shallow learning scenarios. However, employing deep learning techniques for spatio-temporal analysis of Depth (D) and Thermal (T) data along with RGB has high potential in this area. In this paper, we present the first publicly available database for RGBDT pain level recognition in sequences, the 'Multimodal Intensity Pain' (MIntPAIN) database. We provide first baseline results for the recognition of 5 pain levels by analyzing independent visual modalities and their fusion with CNN and LSTM models. From the experimental evaluation, we observe that fusing modalities enhances the recognition of pain levels compared to using them in isolation. In particular, the combination of RGB, D, and T in an early fusion fashion achieved the best recognition rate.
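As a rough illustration of the early-fusion CNN+LSTM baseline described in the abstract, the sketch below stacks RGB, depth, and thermal frames into a single 5-channel input, extracts per-frame features with a small CNN, and aggregates them over time with an LSTM before classifying into 5 pain levels. This is a minimal PyTorch sketch under assumed layer sizes and names; it is not the authors' exact architecture or training setup.

```python
# Minimal, illustrative sketch (assumed architecture, not the paper's exact model)
# of early-fusion RGBDT pain-level recognition with a CNN + LSTM.
import torch
import torch.nn as nn


class EarlyFusionCNNLSTM(nn.Module):
    def __init__(self, num_levels: int = 5, feat_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        # Early fusion: RGB (3) + Depth (1) + Thermal (1) stacked into 5 channels per frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # -> (B*T, 64, 1, 1)
        )
        self.proj = nn.Linear(64, feat_dim)
        # The LSTM aggregates per-frame features over the temporal dimension.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_levels)  # 5 pain levels

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 5, H, W) clips with fused RGBDT channels
        b, t, c, h, w = frames.shape
        x = self.cnn(frames.reshape(b * t, c, h, w)).flatten(1)  # (B*T, 64)
        x = self.proj(x).reshape(b, t, -1)                       # (B, T, feat_dim)
        _, (h_n, _) = self.lstm(x)                               # last hidden state
        return self.head(h_n[-1])                                # (B, num_levels)


if __name__ == "__main__":
    model = EarlyFusionCNNLSTM()
    clip = torch.randn(2, 16, 5, 112, 112)  # 2 clips of 16 fused RGBDT frames
    print(model(clip).shape)                # torch.Size([2, 5])
```

In a late-fusion variant, by contrast, each modality would be processed by its own CNN (and possibly LSTM) stream and the predictions or features combined afterwards; the abstract reports that the early-fusion combination of RGB, D, and T performed best in their evaluation.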
Original language: English
Title of host publication: Proceedings of the 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG, 15-19 May 2018, Xi'an, China
Number of pages: 8
Publisher: IEEE
Publication date: 2018
Pages: 250-257
ISBN (Electronic): 978-1-5386-2335-0
DOIs: 10.1109/FG.2018.00044
Publication status: Published - 2018
Event: 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018 - Xi'an, China
Duration: 15 May 2018 - 19 May 2018

Conference

Conference: 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018
Country: China
City: Xi'an
Period: 15/05/2018 - 19/05/2018

Bibliographical note

DNRF121

Keywords

  • pain recognition
  • Deep Learning
  • database
  • Spatiotemporal
  • Multimodal
  • video
  • RGBDT
  • benchmark

Cite this

Haque, M. A., B. Bautista, R., Noroozi, F., Kulkarni, K., Laursen, C. B., Irani, R., ... Moeslund, T. B. (2018). Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities. In Proceedings of the 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG, 15-19 May 2018, Xi'an, China (pp. 250-257). IEEE. https://doi.org/10.1109/FG.2018.00044