Abstract
In this paper, a serialized multi-layer multi-head attention mechanism is proposed for extracting neural speaker embeddings in the text-independent speaker verification task. Most recent approaches apply a single attention layer to aggregate frame-level features. Inspired by the Transformer network, the proposed serialized attention contains a stack of self-attention layers. Unlike parallel multi-head attention, we propose to aggregate the attentive statistics in a serialized manner to generate the utterance-level embedding, which is propagated to the next layer through a residual connection. We further propose an input-aware query for each utterance, derived with statistics pooling. To evaluate the quality of the learned speaker embeddings, the proposed serialized attention mechanism is applied to two widely used neural speaker embedding architectures and validated on several benchmark datasets covering various languages and acoustic conditions, including VoxCeleb1, SITW, and CN-Celeb. Experimental results demonstrate that serialized attention achieves better speaker verification performance.
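The serialized aggregation described above can be sketched in a few lines. This is a minimal, single-head, projection-free illustration, not the authors' implementation: the use of the pooled mean as the input-aware query, the layer count, and all function names are assumptions made for clarity. Each layer computes attention weights over frames, pools attentive statistics (weighted mean and standard deviation), adds them to a running utterance embedding, and feeds the attended mean back to the frame features via a residual connection.

```python
import math


def stats_pool(frames):
    """Statistics pooling: per-dimension mean and std over T frames."""
    T, D = len(frames), len(frames[0])
    mean = [sum(f[d] for f in frames) / T for d in range(D)]
    std = [math.sqrt(max(sum((f[d] - mean[d]) ** 2 for f in frames) / T, 1e-12))
           for d in range(D)]
    return mean, std


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def serialized_attention(frames, num_layers=3):
    """Serialized attentive statistics pooling (simplified sketch).

    frames: list of T frame-level feature vectors of dimension D.
    Returns a 2*D utterance embedding (concatenated mean/std statistics,
    accumulated across layers in a serialized manner).
    """
    D = len(frames[0])
    utt = [0.0] * (2 * D)  # running utterance-level embedding
    for _ in range(num_layers):
        # Input-aware query: here, the mean from statistics pooling
        # of the current frame features (an assumed simplification).
        query, _ = stats_pool(frames)
        scores = softmax([sum(x * q for x, q in zip(f, query)) / math.sqrt(D)
                          for f in frames])
        # Attentive statistics: attention-weighted mean and std.
        w_mean = [sum(w * f[d] for w, f in zip(scores, frames)) for d in range(D)]
        w_std = [math.sqrt(max(sum(w * (f[d] - w_mean[d]) ** 2
                                   for w, f in zip(scores, frames)), 1e-12))
                 for d in range(D)]
        # Serialized aggregation: add this layer's statistics to the embedding.
        utt = [u + s for u, s in zip(utt, w_mean + w_std)]
        # Residual connection: propagate the attended mean to the next layer.
        frames = [[f[d] + w_mean[d] for d in range(D)] for f in frames]
    return utt
```

In contrast to parallel multi-head attention, where all heads pool the same input independently and their outputs are concatenated, each layer here sees features refined by the previous layer, which is the serialized behavior the abstract describes.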
Original language | English |
---|---|
Pages (from-to) | 89-100 |
Number of pages | 12 |
Journal | Speech Communication |
Volume | 144 |
DOIs | |
Publication status | Published - Oct 2022 |
Externally published | Yes |
Keywords
- Attention mechanism
- Serialized attention
- Speaker embeddings
- Speaker verification
ASJC Scopus subject areas
- Software
- Modelling and Simulation
- Communication
- Language and Linguistics
- Linguistics and Language
- Computer Vision and Pattern Recognition
- Computer Science Applications