Discriminating languages in a probabilistic latent subspace

Aleksandr Sizov, Kong Aik Lee, Tomi Kinnunen

Research output: Conference presentation (presented paper, not published in journal/proceedings/book) · Academic research · peer-reviewed

2 Citations (Scopus)

Abstract

We explore a method to boost the discriminative capabilities of the Probabilistic Linear Discriminant Analysis (PLDA) model without losing its generative advantages. To this end, we focus on the low-dimensional latent subspace of the PLDA model. We optimize the model with respect to the Maximum Mutual Information (MMI) criterion and with respect to our own objective function, an approximation to the detection cost function. We evaluate performance on the NIST Language Recognition Evaluation 2015 data. Our model trains faster and is more accurate than both the generative PLDA and the discriminative LDA baselines, with 12% and 4% relative improvements in the average detection cost, respectively. The proposed method is applicable to a broad range of closed-set tasks.
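As a rough illustration of the kind of criterion involved, the sketch below shows a closed-set MMI objective written as the average log-posterior of the true language given per-class scores. This is a minimal sketch under simple assumptions (uniform class prior, generic PLDA-style log-likelihood scores), not the authors' implementation; the function and variable names are hypothetical.

    import numpy as np

    def log_softmax(scores, axis=-1):
        # Numerically stable log-softmax: subtract the max before exponentiating.
        m = np.max(scores, axis=axis, keepdims=True)
        return scores - m - np.log(np.sum(np.exp(scores - m), axis=axis, keepdims=True))

    def mmi_objective(scores, labels):
        """Closed-set MMI objective to be maximized.

        scores: (n_trials, n_languages) log-likelihood scores, e.g. the
                score of each trial against each language class.
        labels: (n_trials,) integer index of the true language per trial.
        """
        # With a uniform class prior, the posterior of each language is the
        # softmax of the per-class scores; MMI then amounts to maximizing
        # the average log-posterior of the correct class.
        log_post = log_softmax(scores, axis=1)
        return np.mean(log_post[np.arange(len(labels)), labels])

    # Example usage: 3 trials, 2 candidate languages.
    # mmi_objective(np.array([[2.0, 0.5], [0.1, 1.2], [0.3, 0.2]]),
    #               np.array([0, 1, 0]))

A gradient-based optimizer would maximize such an objective with respect to the latent-subspace parameters that produce the scores.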

Original language: English
Pages: 81-88
Number of pages: 8
DOIs
Publication status: Published - Jun 2016
Externally published: Yes
Event: Speaker and Language Recognition Workshop, Odyssey 2016, Bilbao, Spain
Duration: 21 Jun 2016 - 24 Jun 2016

Conference

Conference: Speaker and Language Recognition Workshop, Odyssey 2016
Country/Territory: Spain
City: Bilbao
Period: 21/06/16 - 24/06/16

ASJC Scopus subject areas

  • Signal Processing
  • Software
  • Human-Computer Interaction
