Abstract
Keyword Spotting (KWS) models are becoming increasingly integrated into various systems, e.g. voice assistants. To achieve satisfactory performance, these models typically rely on large amounts of labelled data, limiting their use to situations where such data is available. Self-supervised Learning (SSL) methods can mitigate this reliance by leveraging readily available unlabelled data. However, most SSL methods for speech have been studied on large models, which is not ideal since compact models are generally required for KWS. This paper explores the effectiveness of SSL on small models for KWS and establishes that SSL can enhance the performance of small KWS models when labelled data is scarce. We pretrain three compact transformer-based KWS models using Data2Vec and fine-tune them on a label-deficient setup of the Google Speech Commands dataset. We find that Data2Vec pretraining leads to a significant increase in accuracy, with absolute improvements of 8.22% to 11.18% in label-deficient scenarios.
Original language | English |
---|---|
Title | ICASSPW 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops, Proceedings |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Publication date | Aug 2023 |
Article number | 10193371 |
ISBN (Print) | 979-8-3503-0262-2 |
ISBN (Electronic) | 979-8-3503-0261-5 |
DOI | |
Status | Published - Aug 2023 |
Event | 2023 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops, ICASSPW 2023 - Rhodes Island, Greece. Duration: 4 Jun 2023 → 10 Jun 2023 |
Conference
Conference | 2023 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops, ICASSPW 2023 |
---|---|
Country/Territory | Greece |
City | Rhodes Island |
Period | 04/06/2023 → 10/06/2023 |
Sponsor | IEEE, IEEE Signal Processing Society |
Bibliographical note
Publisher Copyright: © 2023 IEEE.