Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification

Achintya Kumar Sarkar, Zheng-Hua Tan, Hao Tang, Suwon Shon, James Glass

Research output: Contribution to journal › Journal article › Research › peer-review

20 Citations (Scopus)
95 Downloads (Pure)

Abstract

There are a number of studies on extracting bottleneck (BN) features from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases, and triphone states in order to improve the performance of text-dependent speaker verification (TD-SV). However, only moderate success has been achieved. A recent study introduced a time-contrastive learning (TCL) concept to exploit the non-stationarity of brain signals for classifying brain states. Speech signals have a similar non-stationarity property, and TCL has the further advantage of requiring no labeled data. We therefore present a TCL-based BN feature extraction method. The method uniformly partitions each speech utterance in a training dataset into a predefined number of multi-frame segments. Each segment in an utterance corresponds to one class, and class labels are shared across utterances. DNNs are then trained to discriminate all speech frames among the classes in order to exploit the temporal structure of speech. In addition, we propose a segment-based unsupervised clustering algorithm to re-assign class labels to the segments. TD-SV experiments were conducted on the RedDots challenge database. The TCL-DNNs were trained using speech data of fixed pass-phrases that were excluded from the TD-SV evaluation set, so the learned features can be considered phrase-independent. We compare the performance of the proposed TCL BN feature with that of short-time cepstral features and of BN features extracted from DNNs discriminating speakers, pass-phrases, speakers+pass-phrases, and monophones whose labels and boundaries are generated by three different automatic speech recognition (ASR) systems. Experimental results show that the proposed TCL-BN outperforms cepstral features and speaker+pass-phrase discriminant BN features, and that its performance is on par with that of ASR-derived BN features. Moreover, the clustering method improves the TD-SV performance of TCL-BN and ASR-derived BN features over their standalone counterparts. We further study the TD-SV performance of fusing cepstral and BN features.
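The uniform segmentation and shared labeling described in the abstract can be made concrete with a short sketch. The Python fragment below is a minimal illustration, not the authors' code: the function name tcl_labels, the 10-class setting, and the 39-dimensional random "MFCC" features are all assumptions for demonstration. It partitions each utterance into contiguous multi-frame segments and labels every frame in segment k with the shared class k, yielding frame-level training targets for a TCL-DNN.

```python
# Minimal sketch of uniform time-contrastive labeling, assuming each
# utterance is a (num_frames, feat_dim) array of short-time features.
import numpy as np

def tcl_labels(num_frames: int, num_classes: int) -> np.ndarray:
    """Uniformly partition an utterance into `num_classes` contiguous
    multi-frame segments and label every frame in segment k with class k.
    The label set {0, ..., num_classes-1} is reused for every utterance,
    so classes are shared across the corpus."""
    # np.array_split tolerates lengths that are not exact multiples of
    # num_classes by making the leading segments one frame longer.
    segments = np.array_split(np.arange(num_frames), num_classes)
    labels = np.empty(num_frames, dtype=np.int64)
    for k, idx in enumerate(segments):
        labels[idx] = k
    return labels

# Example: build frame-level (feature, label) pairs for DNN training.
rng = np.random.default_rng(0)
utterances = [rng.standard_normal((n, 39)) for n in (230, 187, 305)]  # fake MFCCs
X = np.concatenate(utterances)
y = np.concatenate([tcl_labels(len(u), num_classes=10) for u in utterances])
assert X.shape[0] == y.shape[0]  # one shared 10-way label per frame
```

The paper's segment-based unsupervised clustering step would then re-assign these initial uniform labels (e.g., by grouping acoustically similar segments); that refinement is omitted from this sketch.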

Original language: English
Article number: 8708955
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 27
Issue number: 8
Pages (from-to): 1267-1279
Number of pages: 13
ISSN: 2329-9290
Publication status: Published - Aug 2019

Keywords

  • Speaker verification
  • Deep learning
  • Bottleneck features
  • GMM-UBM
  • Time-contrastive learning
  • DNNs
