Audio-based Granularity-adapted Emotion Classification

Sven Ewan Shepstone, Zheng-Hua Tan, Søren Holdt Jensen

Publication: Contribution to journal › Journal article › Research › peer review

Abstract

This paper introduces a novel framework for combining the strengths of machine-based and human-based emotion classification. People's ability to tell similar emotions apart is known as emotional granularity, which can be high or low, and is measurable. This paper proposes granularity-adapted classification that can be used as a front-end to drive a recommender, based on emotions from speech. In this context, incorrectly predicting people's emotions could lead to poor recommendations, reducing user satisfaction. Instead of identifying a single emotion class, an adapted class is proposed, which is an aggregate of underlying emotion classes chosen based on granularity. In the recommendation context, the adapted class maps to a larger region in valence-arousal space, from which a list of potentially more similar content items is drawn and recommended to the user. To determine the effectiveness of adapted classes, we measured the emotional granularity of subjects and, for each subject, used their pairwise similarity judgments of emotion to compare the effectiveness of adapted classes versus single emotion classes taken from a baseline system. A customized Euclidean-based similarity metric is used to measure the relative proximity of emotion classes. Results show that granularity-adapted classification can improve the potential similarity by up to 9.6 percent.
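The core idea in the abstract can be sketched in code. Note this is a generic illustration, not the paper's method: the paper uses a customized Euclidean-based metric and granularity measured from subjects, whereas the coordinates, threshold, and function names below are hypothetical placeholders.

```python
import math

# Hypothetical emotion classes as (valence, arousal) centroids.
# Coordinates are illustrative only, not taken from the paper.
EMOTIONS = {
    "happy":   (0.8, 0.6),
    "content": (0.6, -0.3),
    "angry":   (-0.7, 0.7),
    "sad":     (-0.6, -0.5),
}

def distance(a, b):
    """Plain Euclidean distance between two emotion-class centroids."""
    (va, aa), (vb, ab) = EMOTIONS[a], EMOTIONS[b]
    return math.hypot(va - vb, aa - ab)

def adapted_class(predicted, granularity_threshold):
    """Aggregate the predicted class with all classes within the threshold.

    A lower-granularity user (who distinguishes fewer similar emotions)
    would get a larger threshold, hence a larger aggregate region in
    valence-arousal space from which to draw recommendations.
    """
    return {e for e in EMOTIONS
            if distance(predicted, e) <= granularity_threshold}
```

For example, a very small threshold leaves the single predicted class, while a large one aggregates every class — mirroring how the adapted class grows with decreasing granularity.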

Original language: English
Journal: IEEE Transactions on Affective Computing
Volume: 9
Issue number: 2
Pages (from-to): 176-190
Number of pages: 15
ISSN: 2371-9850
DOI: 10.1109/TAFFC.2016.2598741
Status: Published - 2018

Cite this

@article{5ddcbd9898dd4d6db672fdcfc1164ead,
title = "Audio-based Granularity-adapted Emotion Classification",
abstract = "This paper introduces a novel framework for combining the strengths of machine-based and human-based emotion classification. People's ability to tell similar emotions apart is known as emotional granularity, which can be high or low, and is measurable. This paper proposes granularity-adapted classification that can be used as a front-end to drive a recommender, based on emotions from speech. In this context, incorrectly predicting people's emotions could lead to poor recommendations, reducing user satisfaction. Instead of identifying a single emotion class, an adapted class is proposed, which is an aggregate of underlying emotion classes chosen based on granularity. In the recommendation context, the adapted class maps to a larger region in valence-arousal space, from which a list of potentially more similar content items is drawn and recommended to the user. To determine the effectiveness of adapted classes, we measured the emotional granularity of subjects and, for each subject, used their pairwise similarity judgments of emotion to compare the effectiveness of adapted classes versus single emotion classes taken from a baseline system. A customized Euclidean-based similarity metric is used to measure the relative proximity of emotion classes. Results show that granularity-adapted classification can improve the potential similarity by up to 9.6 percent.",
author = "Shepstone, {Sven Ewan} and Tan, Zheng-Hua and Jensen, {S{\o}ren Holdt}",
year = "2018",
doi = "10.1109/TAFFC.2016.2598741",
language = "English",
volume = "9",
pages = "176--190",
journal = "IEEE Transactions on Affective Computing",
issn = "2371-9850",
publisher = "IEEE Communications Society",
number = "2",

}

Audio-based Granularity-adapted Emotion Classification. / Shepstone, Sven Ewan; Tan, Zheng-Hua; Jensen, Søren Holdt.

In: IEEE Transactions on Affective Computing, Vol. 9, No. 2, 2018, p. 176-190.


TY - JOUR

T1 - Audio-based Granularity-adapted Emotion Classification

AU - Shepstone, Sven Ewan

AU - Tan, Zheng-Hua

AU - Jensen, Søren Holdt

PY - 2018

Y1 - 2018

N2 - This paper introduces a novel framework for combining the strengths of machine-based and human-based emotion classification. People's ability to tell similar emotions apart is known as emotional granularity, which can be high or low, and is measurable. This paper proposes granularity-adapted classification that can be used as a front-end to drive a recommender, based on emotions from speech. In this context, incorrectly predicting people's emotions could lead to poor recommendations, reducing user satisfaction. Instead of identifying a single emotion class, an adapted class is proposed, which is an aggregate of underlying emotion classes chosen based on granularity. In the recommendation context, the adapted class maps to a larger region in valence-arousal space, from which a list of potentially more similar content items is drawn and recommended to the user. To determine the effectiveness of adapted classes, we measured the emotional granularity of subjects and, for each subject, used their pairwise similarity judgments of emotion to compare the effectiveness of adapted classes versus single emotion classes taken from a baseline system. A customized Euclidean-based similarity metric is used to measure the relative proximity of emotion classes. Results show that granularity-adapted classification can improve the potential similarity by up to 9.6 percent.

AB - This paper introduces a novel framework for combining the strengths of machine-based and human-based emotion classification. People's ability to tell similar emotions apart is known as emotional granularity, which can be high or low, and is measurable. This paper proposes granularity-adapted classification that can be used as a front-end to drive a recommender, based on emotions from speech. In this context, incorrectly predicting people's emotions could lead to poor recommendations, reducing user satisfaction. Instead of identifying a single emotion class, an adapted class is proposed, which is an aggregate of underlying emotion classes chosen based on granularity. In the recommendation context, the adapted class maps to a larger region in valence-arousal space, from which a list of potentially more similar content items is drawn and recommended to the user. To determine the effectiveness of adapted classes, we measured the emotional granularity of subjects and, for each subject, used their pairwise similarity judgments of emotion to compare the effectiveness of adapted classes versus single emotion classes taken from a baseline system. A customized Euclidean-based similarity metric is used to measure the relative proximity of emotion classes. Results show that granularity-adapted classification can improve the potential similarity by up to 9.6 percent.

U2 - 10.1109/TAFFC.2016.2598741

DO - 10.1109/TAFFC.2016.2598741

M3 - Journal article

VL - 9

SP - 176

EP - 190

JO - IEEE Transactions on Affective Computing

JF - IEEE Transactions on Affective Computing

SN - 2371-9850

IS - 2

ER -