Emotion rendering in auditory simulations of imagined walking styles

Luca Turchet, Antonio Rodá

Research output: Contribution to journal › Journal article › Research › peer-review

2 Citations (Scopus)

Abstract

This paper investigated how different emotional states of a walker can be rendered and recognized by means of footstep sound synthesis algorithms. In a first experiment, participants were asked to render, according to
imagined walking scenarios, five emotions (aggressive, happy, neutral, sad, and tender) by manipulating the parameters of synthetic footstep sounds simulating various combinations of surface materials and shoe types.
The results made it possible to identify, for the involved emotions and sound conditions, the mean values and ranges of variation of two parameters: sound level and temporal distance between consecutive steps. The results were in accordance
with those reported in previous studies on real walking, suggesting that the expression of emotions in walking is independent of whether the motor activity is real or imagined. In a second experiment, participants were asked to identify
the emotions portrayed by walking sounds synthesized by setting the synthesis engine parameters to the mean values found in the first experiment. The results showed that the involved algorithms successfully conveyed
the emotional information at a level comparable with previous studies. Both experiments involved musicians and non-musicians, and in both experiments a similar general trend was found between the two groups.
Original language: English
Journal: IEEE Transactions on Affective Computing
Volume: PP
Issue number: 99
ISSN: 1949-3045
DOI: 10.1109/TAFFC.2016.2520924
Publication status: Published - 2016

Cite this

@article{5aa925babd5145d8961b70774cdb543c,
title = "Emotion rendering in auditory simulations of imagined walking styles",
abstract = "This paper investigated how different emotional states of a walker can be rendered and recognized by means of footstep sound synthesis algorithms. In a first experiment, participants were asked to render, according to imagined walking scenarios, five emotions (aggressive, happy, neutral, sad, and tender) by manipulating the parameters of synthetic footstep sounds simulating various combinations of surface materials and shoe types. The results made it possible to identify, for the involved emotions and sound conditions, the mean values and ranges of variation of two parameters: sound level and temporal distance between consecutive steps. The results were in accordance with those reported in previous studies on real walking, suggesting that the expression of emotions in walking is independent of whether the motor activity is real or imagined. In a second experiment, participants were asked to identify the emotions portrayed by walking sounds synthesized by setting the synthesis engine parameters to the mean values found in the first experiment. The results showed that the involved algorithms successfully conveyed the emotional information at a level comparable with previous studies. Both experiments involved musicians and non-musicians, and in both experiments a similar general trend was found between the two groups.",
author = "Luca Turchet and Antonio Rod{\'a}",
year = "2016",
doi = "10.1109/TAFFC.2016.2520924",
language = "English",
volume = "PP",
journal = "IEEE Transactions on Affective Computing",
issn = "1949-3045",
publisher = "IEEE",
number = "99",

}

Emotion rendering in auditory simulations of imagined walking styles. / Turchet, Luca; Rodá, Antonio.

In: IEEE Transactions on Affective Computing, Vol. PP, No. 99, 2016.


TY - JOUR

T1 - Emotion rendering in auditory simulations of imagined walking styles

AU - Turchet, Luca

AU - Rodá, Antonio

PY - 2016

Y1 - 2016

N2 - This paper investigated how different emotional states of a walker can be rendered and recognized by means of footstep sound synthesis algorithms. In a first experiment, participants were asked to render, according to imagined walking scenarios, five emotions (aggressive, happy, neutral, sad, and tender) by manipulating the parameters of synthetic footstep sounds simulating various combinations of surface materials and shoe types. The results made it possible to identify, for the involved emotions and sound conditions, the mean values and ranges of variation of two parameters: sound level and temporal distance between consecutive steps. The results were in accordance with those reported in previous studies on real walking, suggesting that the expression of emotions in walking is independent of whether the motor activity is real or imagined. In a second experiment, participants were asked to identify the emotions portrayed by walking sounds synthesized by setting the synthesis engine parameters to the mean values found in the first experiment. The results showed that the involved algorithms successfully conveyed the emotional information at a level comparable with previous studies. Both experiments involved musicians and non-musicians, and in both experiments a similar general trend was found between the two groups.

AB - This paper investigated how different emotional states of a walker can be rendered and recognized by means of footstep sound synthesis algorithms. In a first experiment, participants were asked to render, according to imagined walking scenarios, five emotions (aggressive, happy, neutral, sad, and tender) by manipulating the parameters of synthetic footstep sounds simulating various combinations of surface materials and shoe types. The results made it possible to identify, for the involved emotions and sound conditions, the mean values and ranges of variation of two parameters: sound level and temporal distance between consecutive steps. The results were in accordance with those reported in previous studies on real walking, suggesting that the expression of emotions in walking is independent of whether the motor activity is real or imagined. In a second experiment, participants were asked to identify the emotions portrayed by walking sounds synthesized by setting the synthesis engine parameters to the mean values found in the first experiment. The results showed that the involved algorithms successfully conveyed the emotional information at a level comparable with previous studies. Both experiments involved musicians and non-musicians, and in both experiments a similar general trend was found between the two groups.

U2 - 10.1109/TAFFC.2016.2520924

DO - 10.1109/TAFFC.2016.2520924

M3 - Journal article

VL - PP

JO - IEEE Transactions on Affective Computing

JF - IEEE Transactions on Affective Computing

SN - 1949-3045

IS - 99

ER -