Noise-Robust Target-Speaker Voice Activity Detection Through Self-Supervised Pretraining

Research output: Working paper/Preprint › Preprint


Abstract

Target-Speaker Voice Activity Detection (TS-VAD) is the task of detecting the presence of speech from a known target speaker in an audio frame. Recently, deep neural network-based models have shown good performance on this task. However, training these models requires extensive labelled data, which is costly and time-consuming to obtain, particularly if generalization to unseen environments is crucial. To mitigate this, we propose a causal, Self-Supervised Learning (SSL) pretraining framework, called Denoising Autoregressive Predictive Coding (DN-APC), to enhance TS-VAD performance in noisy conditions. We also explore various speaker conditioning methods and evaluate their performance under different noisy conditions. Our experiments show that DN-APC improves performance in noisy conditions, with a general improvement of approximately 2% in both seen and unseen noise. Additionally, we find that FiLM (Feature-wise Linear Modulation) conditioning provides the best overall performance. Representation analysis via t-SNE plots reveals robust initial representations of speech and non-speech from pretraining. This underscores the effectiveness of SSL pretraining in improving the robustness and performance of TS-VAD models in noisy environments.
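The abstract names FiLM conditioning as the best-performing way to inject the target-speaker identity into the model. The record does not give the paper's exact architecture, but the general FiLM idea is to predict a per-channel scale and shift from the speaker embedding and apply them to the acoustic features. A minimal numpy sketch, with all dimensions and weight names (`W_gamma`, `W_beta`, etc.) being hypothetical illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def film_condition(features, speaker_emb, W_gamma, W_beta):
    """FiLM: scale and shift each feature channel using affine
    parameters predicted from the speaker embedding.
    features: (frames, channels), speaker_emb: (emb_dim,)."""
    gamma = speaker_emb @ W_gamma  # per-channel scale, shape (channels,)
    beta = speaker_emb @ W_beta    # per-channel shift, shape (channels,)
    return gamma * features + beta  # broadcast over time frames

# Hypothetical sizes: 64-dim speaker embedding, 128 feature channels,
# 50 time frames of acoustic features.
emb_dim, channels, frames = 64, 128, 50
features = rng.standard_normal((frames, channels))
speaker_emb = rng.standard_normal(emb_dim)
W_gamma = rng.standard_normal((emb_dim, channels)) * 0.01
W_beta = rng.standard_normal((emb_dim, channels)) * 0.01

conditioned = film_condition(features, speaker_emb, W_gamma, W_beta)
print(conditioned.shape)  # (50, 128)
```

In a trained TS-VAD model the projections producing `gamma` and `beta` would be learned jointly with the rest of the network; the sketch only illustrates the conditioning mechanism itself.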
Original language: English
Publisher: arXiv
Number of pages: 13
DOIs
Publication status: Published - 6 Jan 2025

Bibliographical note

Submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing for possible publication. 12 pages, 4 figures, 5 tables
