Abstract
To improve the reliability of telephone-based speaker verification systems, channel compensation is indispensable. However, it is also important to ensure that the channel compensation algorithms in these systems suppress channel variations while enhancing inter-speaker distinction. This paper addresses this problem with a blind feature-based transformation approach in which the transformation parameters are determined online without any a priori knowledge of channel characteristics. Specifically, a composite statistical model formed by the fusion of a speaker model and a background model is used to represent the characteristics of the enrollment speech. Based on the difference between the claimant's speech and the composite model, a stochastic-matching-style approach is proposed to transform the claimant's speech to a region close to the enrollment speech. As a result, the algorithm can estimate the transformation online without the need to detect handset types. Experimental results based on the 2001 NIST evaluation set show that the proposed transformation approach achieves significant improvements in both equal error rate and minimum detection cost as compared with cepstral mean subtraction and Z-norm.
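The abstract describes the method only at a high level. The sketch below is one plausible reading of it, assuming diagonal-covariance GMMs for the speaker and background models, fusion by pooling their components with halved weights, and a stochastic-matching estimate of a single additive cepstral bias; the function names (`composite_gmm`, `estimate_bias`) and all implementation details are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch (assumed details, not the paper's exact algorithm): blind
# estimation of an additive cepstral bias by stochastic matching against a
# composite GMM built from a speaker model and a background model.
import numpy as np
from sklearn.mixture import GaussianMixture


def composite_gmm(speaker_gmm: GaussianMixture,
                  background_gmm: GaussianMixture) -> GaussianMixture:
    """Fuse two diagonal-covariance GMMs by pooling their components
    with halved mixture weights (one plausible fusion scheme)."""
    gmm = GaussianMixture(
        n_components=speaker_gmm.n_components + background_gmm.n_components,
        covariance_type="diag")
    gmm.weights_ = np.concatenate([0.5 * speaker_gmm.weights_,
                                   0.5 * background_gmm.weights_])
    gmm.means_ = np.vstack([speaker_gmm.means_, background_gmm.means_])
    gmm.covariances_ = np.vstack([speaker_gmm.covariances_,
                                  background_gmm.covariances_])
    gmm.precisions_cholesky_ = 1.0 / np.sqrt(gmm.covariances_)
    return gmm


def estimate_bias(features: np.ndarray, gmm: GaussianMixture,
                  n_iter: int = 5) -> np.ndarray:
    """EM-style maximum-likelihood estimate of a channel bias b such that
    features + b best fit the composite model (stochastic matching)."""
    bias = np.zeros(features.shape[1])
    inv_var = 1.0 / gmm.covariances_            # (K, D) diagonal precisions
    for _ in range(n_iter):
        shifted = features + bias               # currently compensated frames
        gamma = gmm.predict_proba(shifted)      # (T, K) component posteriors
        # Closed-form update: precision-weighted average of (mu_k - x_t).
        num = (np.einsum('tk,kd->d', gamma, inv_var * gmm.means_)
               - np.einsum('tk,kd,td->d', gamma, inv_var, features))
        den = np.einsum('tk,kd->d', gamma, inv_var)
        bias = num / den
    return bias
```

In this sketch the compensated features `features + bias` would then be scored against the speaker and background models in the usual likelihood-ratio fashion; the paper's transformation may well be richer than a single additive bias, so this should be read as a starting point rather than a reconstruction of the proposed algorithm.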
| Original language | English |
| --- | --- |
| Pages (from-to) | 117-126 |
| Number of pages | 10 |
| Journal | Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology |
| Volume | 42 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2006 |
Keywords
- Speaker verification
- Feature transformation
- Blind channel compensation
- Acoustic mismatch
ASJC Scopus subject areas
- Information Systems
- Signal Processing
- Electrical and Electronic Engineering