Modeling, Recognizing, and Explaining Apparent Personality from Videos

Hugo Jair Escalante*, Heysem Kaya, Albert Ali Salah, Sergio Escalera, Yağmur Güçlütürk, Umut Güçlü, Xavier Baró, Isabelle Guyon, Julio C. S. Jacques Junior, Meysam Madadi, Stéphane Ayache, Evelyne Viegas, Furkan Gürpınar, Achmadnoer Sukma Wicaksana, Cynthia C. S. Liem, Marcel A. J. van Gerven, Rob van Lier

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

22 Citations (Scopus)

Abstract

Explainability and interpretability are two critical aspects of decision support systems. Despite their importance, researchers have only recently begun to explore these aspects. This paper provides an introduction to explainability and interpretability in the context of apparent personality recognition. To the best of our knowledge, this is the first effort in this direction. We describe a challenge we organized on explainability in first impressions analysis from video. We analyze in detail the newly introduced data set, the evaluation protocol, and the proposed solutions, and we summarize the results of the challenge. We investigate the issue of bias in detail. Finally, derived from our study, we outline research opportunities that we foresee will be relevant in this area in the near future.

Original language: English
Journal: IEEE Transactions on Affective Computing
Volume: 13
Issue number: 2
Pages (from-to): 894-911
Number of pages: 18
ISSN: 2371-9850
DOIs
Publication status: Published - 2022
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2010-2012 IEEE.

Keywords

  • algorithmic accountability
  • explainable computer vision
  • first impressions
  • multimodal information
  • personality analysis
