Discriminative speaker embedding with serialized multi-layer multi-head attention

Hongning Zhu, Kong Aik Lee, Haizhou Li

Research output: Journal article publication › Journal article › Academic research › peer-review

9 Citations (Scopus)

Abstract

In this paper, a serialized multi-layer multi-head attention mechanism is proposed for extracting neural speaker embeddings in the text-independent speaker verification task. Most recent approaches apply a single attention layer to aggregate frame-level features. Inspired by the Transformer network, the proposed serialized attention contains a stack of self-attention layers. Unlike parallel multi-head attention, the attentive statistics are aggregated in a serialized manner to generate the utterance-level embedding, which is propagated to the next layer through a residual connection. An input-aware query is further proposed for each utterance, derived with statistics pooling. To evaluate the quality of the learned speaker embeddings, the proposed serialized attention mechanism is applied to two widely used neural speaker embedding architectures and validated on several benchmark datasets covering various languages and acoustic conditions, including VoxCeleb1, SITW, and CN-Celeb. Experimental results demonstrate that serialized attention achieves better speaker verification performance.
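The abstract describes three ingredients: attentive statistics pooling (a weighted mean and standard deviation over frame-level features), an input-aware query derived from statistics pooling, and serialized aggregation of the per-layer utterance statistics. The sketch below is a minimal, simplified interpretation of those ideas, not the authors' implementation: the projection matrices `W_q` and `W_k`, the single-head scoring, and the additive accumulation of per-layer statistics are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_stats_pool(H, W_q, W_k):
    # H: (T, d) frame-level features for one utterance
    # input-aware query: project the utterance's pooled statistics
    # (concatenated mean and std over time) down to dimension d
    q = np.concatenate([H.mean(0), H.std(0)]) @ W_q        # (d,)
    scores = (H @ W_k) @ q / np.sqrt(q.shape[0])           # (T,) scaled scores
    a = softmax(scores)                                    # frame attention weights
    mu = a @ H                                             # attention-weighted mean, (d,)
    var = np.maximum(a @ (H ** 2) - mu ** 2, 1e-9)         # weighted variance, clipped
    return np.concatenate([mu, np.sqrt(var)])              # attentive statistics, (2d,)

rng = np.random.default_rng(0)
T, d, L = 50, 8, 3                                         # frames, feature dim, layers
H = rng.standard_normal((T, d))

# serialized aggregation (simplified): each layer contributes its attentive
# statistics to a running utterance-level embedding via residual addition
emb = np.zeros(2 * d)
for _ in range(L):
    W_q = rng.standard_normal((2 * d, d))                  # illustrative per-layer weights
    W_k = rng.standard_normal((d, d))
    emb = emb + attentive_stats_pool(H, W_q, W_k)

print(emb.shape)  # (16,)
```

In the paper the residual connection also propagates information between the stacked self-attention layers; here only the layer-by-layer accumulation of statistics is shown, to keep the sketch short.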

Original language: English
Pages (from-to): 89-100
Number of pages: 12
Journal: Speech Communication
Volume: 144
DOIs
Publication status: Published - Oct 2022
Externally published: Yes

Keywords

  • Attention mechanism
  • Serialized attention
  • Speaker embeddings
  • Speaker verification

ASJC Scopus subject areas

  • Software
  • Modelling and Simulation
  • Communication
  • Language and Linguistics
  • Linguistics and Language
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

