Shouted Speech Compensation for Speaker Verification Robust to Vocal Effort Conditions

Santiago Prieto-Calero, Alfonso Ortega, Iván López Espejo, Eduardo Lleida

Publication: Contribution to book/anthology/report/conference proceeding › Conference article in proceeding › Research › peer review

1 Citation (Scopus)
13 Downloads (Pure)

Abstract

The performance of speaker verification systems degrades when the vocal effort conditions between enrollment and test (e.g., shouted vs. normal speech) differ. This is a likely situation in non-cooperative speaker verification tasks. In this paper, we present a study of different methods for linear compensation of embeddings that use Gaussian mixture models to cluster the shouted and normal speech domains. These compensation techniques are borrowed from the area of robustness for automatic speech recognition, and in this work we apply them to compensate for the mismatch between shouted and normal conditions in speaker verification. Before compensation, the shouted condition is automatically detected by means of logistic regression. The process is computationally light and is performed in the back-end of an x-vector system. Experimental results show that applying the proposed approach in the presence of vocal effort mismatch yields up to a 13.8% relative improvement in equal error rate with respect to a system that applies neither shouted speech detection nor compensation.
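To make the back-end flow the abstract describes more concrete, below is a minimal, hypothetical sketch in Python with scikit-learn: a logistic-regression detector for the shouted condition, followed by a GMM-guided linear shift that maps a shouted embedding toward the normal domain. The synthetic data, embedding dimensionality, one-to-one pairing of GMM components across domains, and the posterior-weighted mean-shift transform are all illustrative assumptions; the paper's actual linear compensation techniques are more elaborate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for x-vectors (512-dim is an assumption) with vocal
# effort labels: 0 = normal, 1 = shouted.
rng = np.random.default_rng(0)
normal_xvecs = rng.normal(0.0, 1.0, size=(200, 512))
shouted_xvecs = rng.normal(0.5, 1.2, size=(200, 512))

X = np.vstack([normal_xvecs, shouted_xvecs])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Step 1: logistic-regression detector for the shouted condition,
# operating directly on the embedding (as in the paper's back-end).
detector = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: model each vocal effort domain with a GMM over embeddings.
gmm_normal = GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(normal_xvecs)
gmm_shouted = GaussianMixture(n_components=4, covariance_type="diag",
                              random_state=0).fit(shouted_xvecs)

def compensate(xvec):
    """Illustrative linear compensation of a single test embedding.

    If the detector flags the embedding as shouted, shift it by the
    posterior-weighted difference between paired component means of the
    two domain GMMs. The component pairing and the mean-shift form are
    simplifying assumptions, not the paper's exact transform.
    """
    if detector.predict(xvec[None, :])[0] == 1:  # shouted detected
        post = gmm_shouted.predict_proba(xvec[None, :])[0]       # component posteriors
        shift = post @ (gmm_normal.means_ - gmm_shouted.means_)  # weighted mean shift
        return xvec + shift
    return xvec  # normal speech passes through unchanged

compensated = compensate(shouted_xvecs[0])
```

Both the detection and the compensation here are a handful of matrix operations on a single embedding, which is consistent with the abstract's claim that the process is computationally light and confined to the back-end.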

Original language: English
Title: Interspeech 2020
Number of pages: 5
Publication date: 2020
Pages: 1511-1515
DOI
Status: Published - 2020
Event: Interspeech 2020 - Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020

Conference

Conference: Interspeech 2020
Country/Territory: China
City: Shanghai
Period: 25/10/2020 - 29/10/2020
Name: Proceedings of the International Conference on Spoken Language Processing
ISSN: 1990-9772
