Keyword spotting (KWS) is experiencing an upswing due to the pervasiveness of small electronic devices that allow speech interaction. Often, KWS systems are speaker-independent, meaning that any person (user or not) might trigger them. For applications like KWS for hearing assistive devices, this is unacceptable, as only the user should be allowed to operate them. In this paper, we propose a KWS system for hearing assistive devices that is robust to external speakers. A state-of-the-art deep residual network for small-footprint KWS is taken as the basis to build upon. Following a multi-task learning scheme, this system is extended to jointly perform KWS and own-voice/external-speaker detection with a negligible increase in the number of parameters. For the experiments, we generate, from the Google Speech Commands Dataset, a speech corpus emulating hearing aids as the capturing device. Our results show that this multi-task deep residual network achieves a relative KWS accuracy improvement of around 32% with respect to a system that does not deal with external speakers.
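The multi-task idea in the abstract (one shared backbone, two lightweight task heads, jointly trained) can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual architecture: the embedding dimension, number of keyword classes, and loss weight `lam` are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 64-dim shared embedding produced by the
# residual backbone, and 12 keyword classes (illustrative values only).
EMB_DIM, N_KEYWORDS = 64, 12

# Two small task heads on top of the shared embedding. Because the
# backbone is shared, the extra parameters are negligible.
W_kws = rng.normal(0, 0.1, (EMB_DIM, N_KEYWORDS))  # keyword-spotting head
w_ovd = rng.normal(0, 0.1, (EMB_DIM,))             # own-voice detection head

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multitask_forward(embedding):
    """Shared embedding -> (keyword posterior, own-voice probability)."""
    p_kws = softmax(embedding @ W_kws)  # which keyword was spoken
    p_ovd = sigmoid(embedding @ w_ovd)  # P(utterance is the user's own voice)
    return p_kws, p_ovd

def multitask_loss(p_kws, p_ovd, kws_label, ovd_label, lam=1.0):
    """Joint objective: keyword cross-entropy + weighted own-voice BCE."""
    ce = -np.log(p_kws[kws_label] + 1e-12)
    bce = -(ovd_label * np.log(p_ovd + 1e-12)
            + (1 - ovd_label) * np.log(1.0 - p_ovd + 1e-12))
    return ce + lam * bce

# One illustrative forward/loss evaluation on a random embedding.
emb = rng.normal(size=EMB_DIM)
p_kws, p_ovd = multitask_forward(emb)
loss = multitask_loss(p_kws, p_ovd, kws_label=3, ovd_label=1)
```

At inference time, the own-voice head acts as a gate: a keyword decision is only accepted when the own-voice probability is high enough, which is how the system rejects external speakers.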
|Status||Published - Sep. 2019|
|Event||Interspeech 2019 - Graz, Austria|
Duration: 15 Sep. 2019 → 19 Sep. 2019
|Period||15/09/2019 → 19/09/2019|
|Name||Proceedings of the International Conference on Spoken Language Processing|
Lopez-Espejo, I., Tan, Z-H., & Jensen, J. (2019). Keyword Spotting for Hearing Assistive Devices Robust to External Speakers. In Interspeech 2019 (pp. 3223-3227). ISCA. Proceedings of the International Conference on Spoken Language Processing. https://doi.org/10.21437/Interspeech.2019-2010