Abstract
Automatic prediction of personality traits is a subjective task that has recently received much attention. Specifically, automatic apparent personality trait prediction from multimodal data has emerged as a hot topic within the field of computer vision and, more particularly, the so-called "looking at people" sub-field. Considering "apparent" personality traits as opposed to real ones considerably reduces the subjectivity of the task. Real-world applications are encountered in a wide range of domains, including entertainment, health, human-computer interaction, recruitment and security. Predictive models of personality traits are useful for individuals in many scenarios (e.g., preparing for job interviews, preparing for public speaking). However, these predictions in and of themselves might be deemed untrustworthy without human-understandable supportive evidence. Through a series of experiments on a recently released benchmark dataset for automatic apparent personality trait prediction, this paper characterizes the audio and visual information that is used by a state-of-the-art model while making its predictions, so as to provide such supportive evidence by explaining the predictions made. Additionally, the paper describes a new web application, which gives feedback on apparent personality traits of its users by combining model predictions with their explanations.
Original language | English |
---|---|
Title of host publication | Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017 |
Number of pages | 9 |
Publisher | IEEE Signal Processing Society |
Publication date | 1 Jul 2017 |
Pages | 3101-3109 |
ISBN (Electronic) | 9781538610343 |
DOIs | |
Publication status | Published - 1 Jul 2017 |
Event | 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017 - Venice, Italy. Duration: 22 Oct 2017 → 29 Oct 2017 |
Conference
Conference | 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017 |
---|---|
Country/Territory | Italy |
City | Venice |
Period | 22/10/2017 → 29/10/2017 |
Series | Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017 |
---|---|
Volume | 2018-January |
Bibliographical note
Funding Information: Marcel van Gerven was supported by a VIDI grant (639.072.513) from the Netherlands Organization for Scientific Research and a GPU grant (GeForce Titan X) from the Nvidia Corporation. Hugo Jair Escalante was supported by CONACyT under grants CB2014-241306 and PN-215546. This work has also been partially supported by the Spanish projects TIN2015-66951-C2-2-R and TIN2016-74946-P (MINECO/FEDER, UE), by the European Commission Horizon 2020 granted project SEE.4C under call H2020-ICT-2015, by the CERCA Programme/Generalitat de Catalunya, and the NVIDIA GPU Grant Program.
Publisher Copyright:
© 2017 IEEE.