This paper focuses on validating a perceptual distraction model that predicts a listener's perceived distraction caused by audio-on-audio interference. The model was originally trained with music targets and interferers using a simple setup of only two loudspeakers. Recently, the model was successfully validated in a complex personal sound-zone system with music targets and speech interferers. In this paper, a second round of validation was conducted by physically altering the sound-zone system and running listening experiments in both of its sound zones, thereby validating the model with a different sound-zone system and with both speech-on-music and music-on-speech stimulus sets. The results show that the model performs equally well in both zones, i.e., with both speech-on-music and music-on-speech stimuli, and comparably to the previous validation round. The calculated root mean squared errors in Zones A and B were 11.3% and 10.4%, respectively, compared with 11.0% in the previous validation round. The results further confirm that the distraction model can serve as a valuable tool for evaluating and optimizing the performance of personal sound-zone systems.
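For context on the reported error figures, root mean squared error (RMSE) between model predictions and listener ratings (both on a percentage distraction scale) can be computed as in the following minimal sketch. The function and the sample scores are illustrative assumptions, not the paper's actual data or implementation.

```python
import math

def rmse(predicted, observed):
    """Root mean squared error between two equal-length score sequences."""
    if len(predicted) != len(observed):
        raise ValueError("sequences must have equal length")
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
    )

# Hypothetical distraction scores in percent (not from the paper):
model_predictions = [20.0, 45.0, 70.0, 55.0]
listener_ratings = [25.0, 40.0, 80.0, 50.0]
print(rmse(model_predictions, listener_ratings))
```

An RMSE of about 11% on a 0–100% distraction scale, as reported for both zones, indicates that predictions deviate from mean listener ratings by roughly one tenth of the scale on average.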
|Conference||Acoustics '17 Boston|
|Period||25/06/2017 → 29/06/2017|
|Name||Meetings on Acoustics. Proceedings|