Abstract
In speech and audio applications, the short-term signal spectrum is often represented using mel-frequency cepstral coefficients (MFCCs) computed from a windowed discrete Fourier transform (DFT). Windowing reduces spectral leakage, but the variance of the spectrum estimate remains high. An elegant extension of the windowed DFT is the so-called multitaper method, which uses multiple time-domain windows (tapers) with frequency-domain averaging. Multitapers have received little attention in speech processing even though they produce low-variance features. In this paper, we propose the multitaper method for MFCC extraction with a practical focus. We first provide a detailed statistical analysis of MFCC bias and variance using autoregressive process simulations on the TIMIT corpus. For speaker verification experiments on the NIST 2002 and 2008 SRE corpora, we consider three Gaussian mixture model (GMM) based classifiers: GMM with a universal background model (GMM-UBM), GMM with support vector machine (GMM-SVM), and GMM with joint factor analysis (GMM-JFA). Multitapers improve MinDCF over the baseline windowed DFT by a relative 20.4% (GMM-SVM) and 13.7% (GMM-JFA) on the interview-interview condition in NIST 2008. The GMM-JFA system further reduces MinDCF by 18.7% on telephone data. With these improvements and generally noncritical parameter selection, multitaper MFCCs are a viable candidate for replacing conventional MFCCs.
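As a rough illustration of the frequency-domain averaging described above, the sketch below computes multitaper MFCCs for a single frame in Python. It assumes Slepian (DPSS) tapers combined with eigenvalue weighting and a generic triangular mel filterbank; the taper count, time-bandwidth product, filterbank design, and other parameters are illustrative assumptions, not the exact configuration studied in the paper.

```python
import numpy as np
from scipy.fft import dct
from scipy.signal.windows import dpss


def mel_filterbank(n_mels, n_fft, sr):
    """Generic triangular mel filterbank (illustrative, not the paper's exact design)."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fb[i - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    return fb


def multitaper_mfcc_frame(frame, sr=16000, n_fft=512, n_tapers=6,
                          time_bandwidth=4.0, n_mels=27, n_ceps=13):
    """Multitaper MFCCs for one frame: average several tapered periodograms
    instead of a single windowed DFT, then apply the usual
    mel filterbank -> log -> DCT steps."""
    # Orthogonal Slepian (DPSS) tapers and their energy-concentration eigenvalues.
    tapers, eigvals = dpss(len(frame), time_bandwidth, n_tapers, return_ratios=True)

    # Per-taper periodograms, combined by eigenvalue weighting
    # (uniform weights are another common choice).
    periodograms = np.abs(np.fft.rfft(tapers * frame, n=n_fft)) ** 2
    weights = eigvals / eigvals.sum()
    mt_spectrum = weights @ periodograms

    # Conventional MFCC pipeline applied to the low-variance spectrum estimate.
    log_mel = np.log(mel_filterbank(n_mels, n_fft, sr) @ mt_spectrum + 1e-12)
    return dct(log_mel, type=2, norm='ortho')[:n_ceps]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.standard_normal(400)  # e.g., one 25-ms frame at 16 kHz
    print(multitaper_mfcc_frame(frame))
```

In a full front end, the usual pre-emphasis, framing, and feature normalization steps would surround this; only the spectrum-estimation step differs from the single-window baseline.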
Original language | English |
---|---|
Article number | 6175110 |
Pages (from-to) | 1990-2001 |
Number of pages | 12 |
Journal | IEEE Transactions on Audio, Speech, and Language Processing |
Volume | 20 |
Issue number | 7 |
DOIs | |
Publication status | Published - Apr 2012 |
Externally published | Yes |
Keywords
- Mel-frequency cepstral coefficient (MFCC)
- multitaper
- small-variance estimation
- speaker verification
ASJC Scopus subject areas
- Acoustics and Ultrasonics
- Electrical and Electronic Engineering