Incorporating Pass-Phrase Dependent Background Models for Text-Dependent Speaker Verification

Achintya Kumar Sarkar, Zheng-Hua Tan

Research output: Contribution to journal › Journal article › Research › peer-review

3 Citations (Scopus)

Abstract

In this paper, we propose pass-phrase dependent background models (PBMs) for text-dependent (TD) speaker verification (SV) to integrate the pass-phrase identification process into the conventional TD-SV system, where a PBM is derived from a text-independent background model through adaptation using the utterances of a particular pass-phrase. During training, pass-phrase specific target speaker models are derived from the particular PBM using the training data for the respective target model. While testing, the best PBM is first selected for the test utterance in the maximum likelihood (ML) sense and the selected PBM is then used for the log likelihood ratio (LLR) calculation with respect to the claimant model. The proposed method incorporates the pass-phrase identification step in the LLR calculation, which is not considered in conventional standalone TD-SV systems. The performance of the proposed method is compared to conventional text-independent background model based TD-SV systems using either Gaussian mixture model (GMM)-universal background model (UBM) or Hidden Markov model (HMM)-UBM or i-vector paradigms. In addition, we consider two approaches to build PBMs: speaker-independent and speaker-dependent. We show that the proposed method significantly reduces the error rates of text-dependent speaker verification for the non-target types: target-wrong and impostor-wrong while it maintains comparable TD-SV performance when impostors speak a correct utterance with respect to the conventional system. Experiments are conducted on the RedDots challenge and the RSR2015 databases that consist of short utterances.
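The test-time procedure described in the abstract — first select the pass-phrase dependent background model (PBM) that maximises the likelihood of the test utterance, then score the claimant model against that PBM with a log-likelihood ratio (LLR) — can be sketched as below. This is only an illustrative outline, not the paper's implementation: each model here is a single diagonal-covariance Gaussian for brevity (the paper uses GMM-UBM, HMM-UBM, and i-vector systems), and all names, shapes, and the threshold are hypothetical.

```python
import math

def gauss_loglik(frames, mean, var):
    """Average per-frame log-likelihood under a diagonal-covariance Gaussian."""
    total = 0.0
    for x in frames:
        for xi, mu, v in zip(x, mean, var):
            total += -0.5 * (math.log(2 * math.pi * v) + (xi - mu) ** 2 / v)
    return total / len(frames)

def verify(frames, target, pbms, threshold=0.0):
    """Return (best_pbm_index, llr, accepted) for one test utterance.

    frames: list of feature vectors; target: (mean, var) of the claimant
    model; pbms: list of (mean, var) pass-phrase dependent background models.
    """
    # Step 1: ML selection of the pass-phrase dependent background model.
    scores = [gauss_loglik(frames, m, v) for (m, v) in pbms]
    best = max(range(len(pbms)), key=lambda i: scores[i])
    # Step 2: LLR of the claimant model against the selected PBM.
    llr = gauss_loglik(frames, *target) - scores[best]
    return best, llr, llr > threshold
```

Because the PBM is chosen per utterance, the LLR implicitly carries the pass-phrase identification decision, which is the integration the abstract refers to.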
Original language: English
Journal: Computer Speech and Language
Volume: 47
Pages (from-to): 259-271
Number of pages: 13
ISSN: 0885-2308
DOIs: 10.1016/j.csl.2017.07.010
Publication status: Published - 1 Jan 2018

Keywords

  • Pass-phrase dependent background models (PBMs)
  • GMM-UBM
  • HMM-UBM
  • I-vector
  • Text-dependent
  • Speaker verification

Cite this

@article{111377fd28ec41af8e302850b26a660a,
title = "Incorporating Pass-Phrase Dependent Background Models for Text-Dependent Speaker verification",
abstract = "In this paper, we propose pass-phrase dependent background models (PBMs) for text-dependent (TD) speaker verification (SV) to integrate the pass-phrase identification process into the conventional TD-SV system, where a PBM is derived from a text-independent background model through adaptation using the utterances of a particular pass-phrase. During training, pass-phrase specific target speaker models are derived from the particular PBM using the training data for the respective target model. While testing, the best PBM is first selected for the test utterance in the maximum likelihood (ML) sense and the selected PBM is then used for the log likelihood ratio (LLR) calculation with respect to the claimant model. The proposed method incorporates the pass-phrase identification step in the LLR calculation, which is not considered in conventional standalone TD-SV systems. The performance of the proposed method is compared to conventional text-independent background model based TD-SV systems using either Gaussian mixture model (GMM)-universal background model (UBM) or Hidden Markov model (HMM)-UBM or i-vector paradigms. In addition, we consider two approaches to build PBMs: speaker-independent and speaker-dependent. We show that the proposed method significantly reduces the error rates of text-dependent speaker verification for the non-target types: target-wrong and impostor-wrong while it maintains comparable TD-SV performance when impostors speak a correct utterance with respect to the conventional system. Experiments are conducted on the RedDots challenge and the RSR2015 databases that consist of short utterances.",
keywords = "Pass-phrase dependent background models (PBMs), GMM-UBM, HMM-UBM, I-vector, Text-dependent, Speaker verification",
author = "Sarkar, {Achintya Kumar} and Zheng-Hua Tan",
year = "2018",
month = "1",
day = "1",
doi = "10.1016/j.csl.2017.07.010",
language = "English",
volume = "47",
pages = "259--271",
journal = "Computer Speech and Language",
issn = "0885-2308",
publisher = "Academic Press",

}

Incorporating Pass-Phrase Dependent Background Models for Text-Dependent Speaker verification. / Sarkar, Achintya Kumar; Tan, Zheng-Hua.

In: Computer Speech and Language, Vol. 47, 01.01.2018, p. 259-271.


TY - JOUR

T1 - Incorporating Pass-Phrase Dependent Background Models for Text-Dependent Speaker verification

AU - Sarkar, Achintya Kumar

AU - Tan, Zheng-Hua

PY - 2018/1/1

Y1 - 2018/1/1

N2 - In this paper, we propose pass-phrase dependent background models (PBMs) for text-dependent (TD) speaker verification (SV) to integrate the pass-phrase identification process into the conventional TD-SV system, where a PBM is derived from a text-independent background model through adaptation using the utterances of a particular pass-phrase. During training, pass-phrase specific target speaker models are derived from the particular PBM using the training data for the respective target model. While testing, the best PBM is first selected for the test utterance in the maximum likelihood (ML) sense and the selected PBM is then used for the log likelihood ratio (LLR) calculation with respect to the claimant model. The proposed method incorporates the pass-phrase identification step in the LLR calculation, which is not considered in conventional standalone TD-SV systems. The performance of the proposed method is compared to conventional text-independent background model based TD-SV systems using either Gaussian mixture model (GMM)-universal background model (UBM) or Hidden Markov model (HMM)-UBM or i-vector paradigms. In addition, we consider two approaches to build PBMs: speaker-independent and speaker-dependent. We show that the proposed method significantly reduces the error rates of text-dependent speaker verification for the non-target types: target-wrong and impostor-wrong while it maintains comparable TD-SV performance when impostors speak a correct utterance with respect to the conventional system. Experiments are conducted on the RedDots challenge and the RSR2015 databases that consist of short utterances.

AB - In this paper, we propose pass-phrase dependent background models (PBMs) for text-dependent (TD) speaker verification (SV) to integrate the pass-phrase identification process into the conventional TD-SV system, where a PBM is derived from a text-independent background model through adaptation using the utterances of a particular pass-phrase. During training, pass-phrase specific target speaker models are derived from the particular PBM using the training data for the respective target model. While testing, the best PBM is first selected for the test utterance in the maximum likelihood (ML) sense and the selected PBM is then used for the log likelihood ratio (LLR) calculation with respect to the claimant model. The proposed method incorporates the pass-phrase identification step in the LLR calculation, which is not considered in conventional standalone TD-SV systems. The performance of the proposed method is compared to conventional text-independent background model based TD-SV systems using either Gaussian mixture model (GMM)-universal background model (UBM) or Hidden Markov model (HMM)-UBM or i-vector paradigms. In addition, we consider two approaches to build PBMs: speaker-independent and speaker-dependent. We show that the proposed method significantly reduces the error rates of text-dependent speaker verification for the non-target types: target-wrong and impostor-wrong while it maintains comparable TD-SV performance when impostors speak a correct utterance with respect to the conventional system. Experiments are conducted on the RedDots challenge and the RSR2015 databases that consist of short utterances.

KW - Pass-phrase dependent background models (PBMs)

KW - GMM-UBM

KW - HMM-UBM

KW - I-vector

KW - Text-dependent

KW - Speaker verification

UR - http://www.scopus.com/inward/record.url?scp=85029045929&partnerID=8YFLogxK

U2 - 10.1016/j.csl.2017.07.010

DO - 10.1016/j.csl.2017.07.010

M3 - Journal article

VL - 47

SP - 259

EP - 271

JO - Computer Speech and Language

JF - Computer Speech and Language

SN - 0885-2308

ER -