Abstract
We propose a methodology for designing and evaluating environmental sounds for virtual environments, combining
physically modeled sound events with recorded soundscapes. Physical models provide feedback to users' actions,
while soundscapes reproduce the characteristic soundmarks of an environment. In this particular case, physical models
simulate the act of walking in the botanical garden of the city of Prague, while soundscapes reproduce the garden's
characteristic sounds. The designed auditory feedback was combined with a photorealistic reproduction of the same garden.
A between-subjects experiment with 126 participants was conducted, involving six experimental conditions,
including both uni- and bimodal stimuli (auditory and visual). The auditory stimuli consisted of several combinations of auditory
feedback, including static sound sources as well as self-induced interactive sounds simulated using physical models. Results show
that subjects' motion in the environment is significantly enhanced when dynamic sound sources and sounds of ego-motion are
rendered in the environment.
| Original language | English |
|---|---|
| Journal | Eurasip Journal on Audio, Speech, and Music Processing |
| Volume | 2010 |
| Number of pages | 10 |
| ISSN | 1687-4714 |
| DOIs | |
| Publication status | Published - 2010 |