Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification

Achintya Kumar Sarkar, Zheng-Hua Tan, Hao Tang, Suwon Shon, James Glass

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

A number of studies have extracted bottleneck (BN) features from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases, and triphone states in order to improve the performance of text-dependent speaker verification (TD-SV). However, only moderate success has been achieved. A recent study introduced the concept of time-contrastive learning (TCL) to exploit the non-stationarity of brain signals for the classification of brain states. Speech signals have a similar non-stationarity property, and TCL has the further advantage of requiring no labeled data. We therefore present a TCL-based BN feature extraction method. The method uniformly partitions each speech utterance in a training dataset into a predefined number of multi-frame segments. Each segment in an utterance corresponds to one class, and class labels are shared across utterances. DNNs are then trained to discriminate all speech frames among these classes in order to exploit the temporal structure of speech. In addition, we propose a segment-based unsupervised clustering algorithm to re-assign class labels to the segments. TD-SV experiments were conducted on the RedDots challenge database. The TCL-DNNs were trained on speech data of fixed pass-phrases that were excluded from the TD-SV evaluation set, so the learned features can be considered phrase-independent. We compare the performance of the proposed TCL-BN features with that of short-time cepstral features and of BN features extracted from DNNs discriminating speakers, pass-phrases, speakers+pass-phrases, and monophones whose labels and boundaries are generated by three different automatic speech recognition (ASR) systems. Experimental results show that the proposed TCL-BN outperforms cepstral features and speaker+pass-phrase discriminant BN features, and that its performance is on par with that of ASR-derived BN features. Moreover, the clustering method improves the TD-SV performance of TCL-BN and ASR-derived BN features over their standalone counterparts. We further study the TD-SV performance of fusing cepstral and BN features.
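
As a concrete illustration of the labeling scheme described in the abstract, the following is a minimal Python sketch of the uniform partitioning step: each utterance is split into a fixed number of contiguous multi-frame segments, every frame in segment i receives class label i, and the same label set is shared across utterances. Function and variable names here are illustrative assumptions, not taken from the paper.

import numpy as np

def tcl_frame_labels(num_frames, num_classes):
    # Uniformly partition the utterance's frames into num_classes
    # contiguous segments. np.array_split makes the leading segments
    # one frame longer when num_frames is not an exact multiple of
    # num_classes.
    segments = np.array_split(np.arange(num_frames), num_classes)
    labels = np.empty(num_frames, dtype=np.int64)
    for class_id, segment in enumerate(segments):
        labels[segment] = class_id  # same label set reused across utterances
    return labels

# Example: a 10-frame utterance partitioned into 3 TCL classes.
print(tcl_frame_labels(10, 3))  # [0 0 0 0 1 1 1 2 2 2]

A DNN with a bottleneck layer would then be trained to classify each frame into one of these segment classes, and the bottleneck-layer activations would serve as the TCL-BN features.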

Original language: English
Article number: 8708955
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 27
Issue number: 8
Pages (from-to): 1267-1279
Number of pages: 13
ISSN: 2329-9290
DOI: 10.1109/TASLP.2019.2915322
Publication status: Published - Aug 2019

Keywords

  • Speaker verification
  • Deep learning
  • Bottleneck features
  • GMM-UBM
  • Time-contrastive learning
  • DNNs

Cite this

@article{09344e3e65454d0f96842e0e5c217594,
title = "Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification",
abstract = "A number of studies have extracted bottleneck (BN) features from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases, and triphone states in order to improve the performance of text-dependent speaker verification (TD-SV). However, only moderate success has been achieved. A recent study introduced the concept of time-contrastive learning (TCL) to exploit the non-stationarity of brain signals for the classification of brain states. Speech signals have a similar non-stationarity property, and TCL has the further advantage of requiring no labeled data. We therefore present a TCL-based BN feature extraction method. The method uniformly partitions each speech utterance in a training dataset into a predefined number of multi-frame segments. Each segment in an utterance corresponds to one class, and class labels are shared across utterances. DNNs are then trained to discriminate all speech frames among these classes in order to exploit the temporal structure of speech. In addition, we propose a segment-based unsupervised clustering algorithm to re-assign class labels to the segments. TD-SV experiments were conducted on the RedDots challenge database. The TCL-DNNs were trained on speech data of fixed pass-phrases that were excluded from the TD-SV evaluation set, so the learned features can be considered phrase-independent. We compare the performance of the proposed TCL-BN features with that of short-time cepstral features and of BN features extracted from DNNs discriminating speakers, pass-phrases, speakers+pass-phrases, and monophones whose labels and boundaries are generated by three different automatic speech recognition (ASR) systems. Experimental results show that the proposed TCL-BN outperforms cepstral features and speaker+pass-phrase discriminant BN features, and that its performance is on par with that of ASR-derived BN features. Moreover, the clustering method improves the TD-SV performance of TCL-BN and ASR-derived BN features over their standalone counterparts. We further study the TD-SV performance of fusing cepstral and BN features.",
keywords = "Speaker verification, Deep learning, Bottleneck features, GMM-UBM, Time-contrastive learning, DNNs",
author = "Sarkar, {Achintya Kumar} and Zheng-Hua Tan and Hao Tang and Suwon Shon and James Glass",
year = "2019",
month = "8",
doi = "10.1109/TASLP.2019.2915322",
language = "English",
volume = "27",
pages = "1267--1279",
journal = "IEEE/ACM Transactions on Audio, Speech, and Language Processing",
issn = "2329-9290",
publisher = "IEEE Signal Processing Society",
number = "8",

}

Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification. / Sarkar, Achintya Kumar; Tan, Zheng-Hua; Tang, Hao; Shon, Suwon; Glass, James.

In: IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 27, No. 8, 8708955, 08.2019, p. 1267-1279.

Research output: Contribution to journal › Journal article › Research › peer-review

TY - JOUR

T1 - Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification

AU - Sarkar, Achintya Kumar

AU - Tan, Zheng-Hua

AU - Tang, Hao

AU - Shon, Suwon

AU - Glass, James

PY - 2019/8

Y1 - 2019/8

N2 - A number of studies have extracted bottleneck (BN) features from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases, and triphone states in order to improve the performance of text-dependent speaker verification (TD-SV). However, only moderate success has been achieved. A recent study introduced the concept of time-contrastive learning (TCL) to exploit the non-stationarity of brain signals for the classification of brain states. Speech signals have a similar non-stationarity property, and TCL has the further advantage of requiring no labeled data. We therefore present a TCL-based BN feature extraction method. The method uniformly partitions each speech utterance in a training dataset into a predefined number of multi-frame segments. Each segment in an utterance corresponds to one class, and class labels are shared across utterances. DNNs are then trained to discriminate all speech frames among these classes in order to exploit the temporal structure of speech. In addition, we propose a segment-based unsupervised clustering algorithm to re-assign class labels to the segments. TD-SV experiments were conducted on the RedDots challenge database. The TCL-DNNs were trained on speech data of fixed pass-phrases that were excluded from the TD-SV evaluation set, so the learned features can be considered phrase-independent. We compare the performance of the proposed TCL-BN features with that of short-time cepstral features and of BN features extracted from DNNs discriminating speakers, pass-phrases, speakers+pass-phrases, and monophones whose labels and boundaries are generated by three different automatic speech recognition (ASR) systems. Experimental results show that the proposed TCL-BN outperforms cepstral features and speaker+pass-phrase discriminant BN features, and that its performance is on par with that of ASR-derived BN features. Moreover, the clustering method improves the TD-SV performance of TCL-BN and ASR-derived BN features over their standalone counterparts. We further study the TD-SV performance of fusing cepstral and BN features.

AB - A number of studies have extracted bottleneck (BN) features from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases, and triphone states in order to improve the performance of text-dependent speaker verification (TD-SV). However, only moderate success has been achieved. A recent study introduced the concept of time-contrastive learning (TCL) to exploit the non-stationarity of brain signals for the classification of brain states. Speech signals have a similar non-stationarity property, and TCL has the further advantage of requiring no labeled data. We therefore present a TCL-based BN feature extraction method. The method uniformly partitions each speech utterance in a training dataset into a predefined number of multi-frame segments. Each segment in an utterance corresponds to one class, and class labels are shared across utterances. DNNs are then trained to discriminate all speech frames among these classes in order to exploit the temporal structure of speech. In addition, we propose a segment-based unsupervised clustering algorithm to re-assign class labels to the segments. TD-SV experiments were conducted on the RedDots challenge database. The TCL-DNNs were trained on speech data of fixed pass-phrases that were excluded from the TD-SV evaluation set, so the learned features can be considered phrase-independent. We compare the performance of the proposed TCL-BN features with that of short-time cepstral features and of BN features extracted from DNNs discriminating speakers, pass-phrases, speakers+pass-phrases, and monophones whose labels and boundaries are generated by three different automatic speech recognition (ASR) systems. Experimental results show that the proposed TCL-BN outperforms cepstral features and speaker+pass-phrase discriminant BN features, and that its performance is on par with that of ASR-derived BN features. Moreover, the clustering method improves the TD-SV performance of TCL-BN and ASR-derived BN features over their standalone counterparts. We further study the TD-SV performance of fusing cepstral and BN features.

KW - Speaker verification

KW - Deep learning

KW - Bottleneck features

KW - GMM-UBM

KW - Time-contrastive learning

KW - DNNs

UR - http://www.scopus.com/inward/record.url?scp=85066748638&partnerID=8YFLogxK

U2 - 10.1109/TASLP.2019.2915322

DO - 10.1109/TASLP.2019.2915322

M3 - Journal article

VL - 27

SP - 1267

EP - 1279

JO - IEEE/ACM Transactions on Audio, Speech, and Language Processing

JF - IEEE/ACM Transactions on Audio, Speech, and Language Processing

SN - 2329-9290

IS - 8

M1 - 8708955

ER -