Abstract
Humans tend to change their way of speaking when they are immersed in a noisy environment, a reflex known as the Lombard effect. Current speech enhancement systems based on deep learning do not usually take this change in speaking style into account, because they are trained on neutral (non-Lombard) speech utterances recorded under quiet conditions to which noise is artificially added. In this paper, we investigate the effects that the Lombard reflex has on the performance of audio-visual speech enhancement systems based on deep learning. The results show a performance gap of up to approximately 5 dB between systems trained on neutral speech and those trained on Lombard speech. This indicates the benefit of accounting for the mismatch between neutral and Lombard speech in the design of audio-visual speech enhancement systems.
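The training-data mismatch described in the abstract stems from the common practice of synthesizing noisy training utterances by adding noise to clean (neutral) recordings at a chosen signal-to-noise ratio. The sketch below illustrates only that generic mixing step, not the authors' actual pipeline; the function `mix_at_snr`, its parameters, and the placeholder signals are our own illustrative names.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise signal into a clean speech signal at a target SNR (in dB).

    Both inputs are 1-D float arrays at the same sampling rate; the noise is
    tiled or trimmed to the length of the clean signal before mixing.
    """
    # Match the noise length to the clean signal.
    if len(noise) < len(clean):
        reps = int(np.ceil(len(clean) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[: len(clean)]

    # Scale the noise so the clean-to-noise power ratio equals snr_db.
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12   # avoid division by zero
    target_noise_power = clean_power / (10 ** (snr_db / 10))
    noise = noise * np.sqrt(target_noise_power / noise_power)

    return clean + noise

# Example: create a noisy training utterance at 0 dB SNR
# (random signals stand in for a recorded utterance and a noise clip).
rng = np.random.default_rng(0)
clean_speech = rng.standard_normal(16000)
babble_noise = rng.standard_normal(16000)
noisy_speech = mix_at_snr(clean_speech, babble_noise, snr_db=0.0)
```

Because the clean signal in this procedure is typically neutral speech, the resulting training data never exhibits the Lombard speaking style that appears in real noisy conditions, which is the mismatch the paper quantifies.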
Original language | English |
---|---|
Title of host publication | ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
Number of pages | 5 |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Publication date | 17 Apr 2019 |
Pages | 6615-6619 |
Article number | 8682713 |
ISBN (Print) | 978-1-4799-8132-8 |
ISBN (Electronic) | 978-1-4799-8131-1 |
DOIs | |
Publication status | Published - 17 Apr 2019 |
Event | 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom. Duration: 12 May 2019 → 17 May 2019. https://2019.ieeeicassp.org/ |
Conference
Conference | 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
---|---|
Country/Territory | United Kingdom |
City | Brighton |
Period | 12/05/2019 → 17/05/2019 |
Internet address | https://2019.ieeeicassp.org/ |
Series | IEEE International Conference on Acoustics, Speech and Signal Processing. Proceedings |
---|---|
ISSN | 1520-6149 |
Keywords
- Audio-visual speech enhancement
- Lombard effect
- deep learning