Deep-Learning-Based Audio-Visual Speech Enhancement in Presence of Lombard Effect

Daniel Michelsanti, Zheng-Hua Tan, Sigurdur Sigurdsson, Jesper Jensen

Publication: Contribution to journal › Journal article › Research › peer review

17 Citations (Scopus)

Abstract

When speaking in the presence of background noise, humans reflexively change their way of speaking in order to improve the intelligibility of their speech. This reflex is known as the Lombard effect. Collecting speech in Lombard conditions is usually hard and costly. For this reason, speech enhancement systems are generally trained and evaluated on speech recorded in quiet to which noise is artificially added. Since these systems are often used in situations where Lombard speech occurs, in this work we analyse the impact that the Lombard effect has on audio, visual and audio-visual speech enhancement, focusing on deep-learning-based systems, since they represent the current state of the art in the field.
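As a minimal illustration of how such noisy training material is typically produced (not the exact pipeline used in the paper), the Python sketch below scales a noise signal so that its mixture with clean speech reaches a chosen signal-to-noise ratio. The function name, the 16 kHz placeholder signals and the random data are illustrative assumptions.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add noise to clean speech at a target SNR (in dB).

    Both signals are 1-D arrays at the same sample rate; the noise is
    assumed to be at least as long as the speech and is truncated to match.
    """
    noise = noise[:len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so that 10*log10(speech_power / scaled_noise_power) equals snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example: create a -5 dB SNR mixture from random placeholder signals.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)   # 1 s of "speech" at 16 kHz (placeholder)
babble = rng.standard_normal(16000)  # placeholder noise
noisy = mix_at_snr(clean, babble, snr_db=-5.0)
```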

We conduct several experiments using an audio-visual Lombard speech corpus consisting of utterances spoken by 54 different talkers. The results show that training deep-learning-based models with Lombard speech is beneficial in terms of both estimated speech quality and estimated speech intelligibility at low signal-to-noise ratios, where the visual modality can play an important role in acoustically challenging situations. We also find that a performance difference between genders exists due to the distinct Lombard speech exhibited by males and females, and we analyse it in relation to acoustic and visual features. Furthermore, listening tests conducted with audio-visual stimuli show that the speech quality of the signals processed with systems trained using Lombard speech is statistically significantly better than that obtained using systems trained with non-Lombard speech at a signal-to-noise ratio of -5 dB. Regarding speech intelligibility, we find a general tendency towards a benefit of training the systems with Lombard speech.
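For concreteness, the following is a minimal, hypothetical sketch of the kind of mask-based audio-visual model the abstract alludes to: a network that fuses a noisy spectrogram with per-frame visual (lip-region) features and predicts a time-frequency mask. All layer sizes, names and the fusion strategy are illustrative assumptions and not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class AudioVisualMaskNet(nn.Module):
    """Hypothetical audio-visual mask estimator (illustrative only).

    Inputs:
        audio:  (batch, frames, n_freq)   -- noisy magnitude spectrogram
        visual: (batch, frames, n_visual) -- per-frame lip-region embedding
    Output:
        mask:   (batch, frames, n_freq)   -- values in [0, 1], applied to the
                                             noisy spectrogram at enhancement time
    """
    def __init__(self, n_freq=257, n_visual=128, hidden=256):
        super().__init__()
        self.audio_fc = nn.Linear(n_freq, hidden)
        self.visual_fc = nn.Linear(n_visual, hidden)
        self.rnn = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_freq)

    def forward(self, audio, visual):
        a = torch.relu(self.audio_fc(audio))
        v = torch.relu(self.visual_fc(visual))
        fused, _ = self.rnn(torch.cat([a, v], dim=-1))  # early fusion of the two modalities
        return torch.sigmoid(self.out(fused))           # bounded time-frequency mask

# Example forward pass with random tensors.
model = AudioVisualMaskNet()
mask = model(torch.randn(2, 100, 257), torch.randn(2, 100, 128))
print(mask.shape)  # torch.Size([2, 100, 257])
```

An audio-only or visual-only variant of such a model can be obtained by dropping the corresponding input branch, which is one way to compare the three modality configurations mentioned above.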
Original language: English
Journal: Speech Communication
Volume: 115
Pages (from-to): 38-50
Number of pages: 13
ISSN: 0167-6393
DOI
Status: Published - Dec. 2019
