Abstract
We explore a method to boost the discriminative capabilities of the probabilistic linear discriminant analysis (PLDA) model without losing its generative advantages. We present sequential projection and training steps that lead to a classifier operating in the original i-vector space but discriminatively trained in a low-dimensional PLDA latent subspace. We use the extended Baum-Welch technique to optimize the model with respect to two objective functions for discriminative training. One is the well-known maximum mutual information objective; the other is a new objective that we propose to approximate the language detection cost. We evaluate performance on the NIST language recognition evaluation (LRE) 2015 and on our development dataset compiled from utterances of previous LREs. We improve the detection cost by 10% and 6% relative to our fine-tuned generative and discriminative baselines, respectively, and by 10% over the best of our previously reported results. The proposed cost-function approximation and PLDA subspace training are applicable to a broad range of tasks.
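The following is a minimal, hypothetical sketch (not the authors' code) of the idea the abstract describes: i-vectors are projected into a low-dimensional PLDA latent subspace, class log-likelihoods are computed there, and a maximum mutual information (MMI) objective, the log posterior of the correct language, is evaluated. The function names, the spherical-Gaussian scoring, and the flat-prior assumption are illustrative simplifications only.

```python
import numpy as np

def project_to_latent(ivectors, V):
    """Project i-vectors (N x D) into a PLDA latent subspace via V (D x d)."""
    return ivectors @ V

def class_log_likelihoods(latent, means, log_prec):
    """Spherical-Gaussian log-likelihood of each latent vector under each class.

    latent: (N, d), means: (C, d), log_prec: scalar log precision.
    The additive constant is dropped; it cancels in the posterior.
    """
    diff = latent[:, None, :] - means[None, :, :]   # (N, C, d)
    sq = np.sum(diff ** 2, axis=-1)                 # (N, C)
    d = latent.shape[1]
    return 0.5 * d * log_prec - 0.5 * np.exp(log_prec) * sq

def mmi_objective(loglik, labels):
    """Average log posterior of the correct class (MMI criterion, flat priors)."""
    log_post = loglik - np.logaddexp.reduce(loglik, axis=1, keepdims=True)
    return np.mean(log_post[np.arange(len(labels)), labels])

# Toy usage with random data standing in for real i-vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 400))      # 100 i-vectors of dimension 400
V = rng.normal(size=(400, 10))       # projection to a 10-dim latent subspace
means = rng.normal(size=(3, 10))     # 3 language classes
labels = rng.integers(0, 3, size=100)
scores = class_log_likelihoods(project_to_latent(X, V), means, 0.0)
print(mmi_objective(scores, labels))
```

In the paper, this kind of objective (and the proposed detection-cost approximation) is optimized with the extended Baum-Welch technique rather than the plain evaluation shown here; the sketch only illustrates the subspace projection and the MMI criterion.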
Original language | English |
---|---|
Pages (from-to) | 588-597 |
Number of pages | 10 |
Journal | IEEE/ACM Transactions on Audio Speech and Language Processing |
Volume | 25 |
Issue number | 3 |
DOIs | |
Publication status | Published - Mar 2017 |
Externally published | Yes |
Keywords
- Discriminative training
- factor analysis
- language detection
- language identification
- PLDA
ASJC Scopus subject areas
- Computer Science (miscellaneous)
- Acoustics and Ultrasonics
- Computational Mathematics
- Electrical and Electronic Engineering