On Training Targets and Activation Functions for Deep Representation Learning in Text-Dependent Speaker Verification

Achintya Kumar Sarkar*, Zheng Hua Tan*

*Corresponding author

Publication: Contribution to journal › Journal article › Research › peer review

2 Citations (Scopus)
35 Downloads (Pure)

Abstract

Deep representation learning has gained significant momentum in advancing text-dependent speaker verification (TD-SV) systems. When designing deep neural networks (DNNs) for extracting bottleneck (BN) features, the key considerations include training targets, activation functions, and loss functions. In this paper, we systematically study the impact of these choices on the performance of TD-SV. For training targets, we consider speaker identity, time-contrastive learning (TCL), and auto-regressive prediction coding, with the first being supervised and the last two being self-supervised. Furthermore, we study a range of loss functions when speaker identity is used as the training target. With regard to activation functions, we study the widely used sigmoid function, the rectified linear unit (ReLU), and the Gaussian error linear unit (GELU). We experimentally show that GELU is able to reduce the error rates of TD-SV significantly compared to sigmoid, irrespective of the training target. Among the three training targets, TCL performs the best. Among the various loss functions, the cross-entropy, joint-softmax, and focal loss functions outperform the others. Finally, the score-level fusion of different systems is also able to reduce the error rates. To evaluate the representation learning methods, experiments are conducted on the RedDots 2016 challenge database, consisting of short utterances, for TD-SV systems based on the classic Gaussian mixture model-universal background model (GMM-UBM) and i-vector methods.
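The three activation functions compared in the abstract can be written down compactly. The following is an illustrative sketch (using the exact erf form of GELU), not the authors' implementation:

```python
import math

def sigmoid(x: float) -> float:
    """Logistic sigmoid: squashes inputs to the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x: float) -> float:
    """Rectified linear unit: zero for negative inputs, identity otherwise."""
    return max(0.0, x)

def gelu(x: float) -> float:
    """Gaussian error linear unit: x weighted by the standard normal CDF,
    i.e. GELU(x) = x * Phi(x) = 0.5 * x * (1 + erf(x / sqrt(2)))."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Unlike ReLU, GELU is smooth and slightly negative for small negative inputs, which is one commonly cited reason for its better behavior in deep networks.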

Original language: English
Journal: Acoustics
Volume: 5
Issue number: 3
Pages (from-to): 693-713
Number of pages: 21
DOI
Status: Published - Sep 2023

Bibliographical note

Publisher Copyright:
© 2023 by the authors.
