Channel robust speaker verification via bayesian blind stochastic feature transformation

Kwok Kwong Yiu, Man Wai Mak, Sun Yuan Kung

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review


In telephone-based speaker verification, the channel conditions can vary significantly from session to session. It is therefore desirable to estimate the channel conditions online and compensate for the acoustic distortion without prior knowledge of the channel characteristics. Because no a priori knowledge is used, the estimation accuracy depends greatly on the length of the verification utterances. This paper extends the Blind Stochastic Feature Transformation (BSFT) algorithm that we recently proposed to handle the short-utterance scenario. The idea is to estimate a set of prior transformation parameters from a development set in which the verification utterances cover a wide variety of channel conditions. The prior transformations are then incorporated into the online estimation of the BSFT parameters in a Bayesian (maximum a posteriori) fashion. The resulting transformation parameters therefore depend on both the prior transformations and the verification utterances: for short (long) utterances, the prior transformations play a more (less) important role. We refer to the extended algorithm as Bayesian BSFT (BBSFT) and applied it to the 2001 NIST SRE task. Results show that Bayesian BSFT outperforms BSFT for utterances shorter than or equal to 4 seconds.
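The length-dependent trade-off described in the abstract can be sketched as a standard MAP interpolation between a prior estimate and an utterance-level maximum-likelihood estimate. The function below is a hypothetical illustration (the variable names, the relevance factor `tau`, and the interpolation form are assumptions, not the paper's actual BBSFT equations):

```python
import numpy as np

def map_transform(b_prior, b_ml, n_frames, tau=50.0):
    """Hypothetical MAP interpolation of feature-transformation parameters.

    b_prior  -- prior transformation parameters estimated offline from a
                development set covering many channel conditions
    b_ml     -- maximum-likelihood estimate from the verification utterance
    n_frames -- number of observed feature frames (proxy for utterance length)
    tau      -- assumed relevance factor controlling the prior's influence

    With few frames (short utterance), alpha is small and the estimate stays
    close to the prior; with many frames, the utterance-level ML estimate
    dominates -- mirroring the behaviour the abstract describes.
    """
    alpha = n_frames / (n_frames + tau)
    return alpha * np.asarray(b_ml) + (1.0 - alpha) * np.asarray(b_prior)

# Short vs. long utterance under the same prior/ML estimates:
b_prior = np.zeros(3)          # prior says "no compensation"
b_ml = np.ones(3)              # utterance evidence says "shift by 1"
short_est = map_transform(b_prior, b_ml, n_frames=10)    # near the prior
long_est = map_transform(b_prior, b_ml, n_frames=5000)   # near the ML estimate
```

This interpolation form is the same relevance-factor scheme used in MAP adaptation of GMMs; whether BBSFT uses exactly this weighting is not stated in the record above.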
Original language: English
Title of host publication: 9th European Conference on Speech Communication and Technology
Number of pages: 4
Publication status: Published - 1 Dec 2005
Event: 9th European Conference on Speech Communication and Technology - Lisbon, Portugal
Duration: 4 Sep 2005 - 8 Sep 2005


Conference: 9th European Conference on Speech Communication and Technology

ASJC Scopus subject areas

  • Engineering (all)
