Parameter-efficient Fine-tuning of Speaker-Aware Dynamic Prompts for Speaker Verification

Zhe Li, Man Wai Mak, Hung Yi Lee, Helen Meng

Research output: Journal article publication › Conference article › Academic research › peer-review

Abstract

Prompt tuning can effectively reduce the number of tunable parameters in pre-trained Transformers. However, it is weak at capturing speaker traits because the prompts can easily overfit the adaptation utterances, resulting in poor generalization to unseen speakers. This paper introduces a prompt pool comprising learnable prompts to tackle this issue. Unlike the traditional method, which learns a fixed set of prompts for every training utterance, our method uses a dynamic selection strategy to pick the best-matching prompts in the pool for tuning, so that each prompt is tuned by its closely matched speakers. The objective is to make the prompts in the pool form speaker clusters, enhancing speaker prediction in the downstream classifier while maintaining the plasticity of the pre-trained Transformers. Our experiments on language mismatch in speaker verification demonstrate that the dynamic prompt pool provides a memory- and computation-efficient solution for fine-tuning pre-trained Transformers.
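The abstract describes a key-based dynamic selection over a pool of learnable prompts. The paper itself gives no code here, so the following is only a minimal PyTorch sketch of how such a selection step could look, assuming a hypothetical `DynamicPromptPool` module in which each prompt is paired with a learnable key, the top-k prompts are chosen by cosine similarity to an utterance-level query embedding, and a key-matching loss pulls selected keys toward their queries to encourage speaker-like clusters. All names, shapes, and hyperparameters (`pool_size`, `prompt_len`, `top_k`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicPromptPool(nn.Module):
    """Hypothetical sketch: a pool of learnable prompts with learnable keys.

    For each utterance, the prompts whose keys best match the utterance-level
    query embedding are selected and prepended to the Transformer input, so
    each prompt is tuned mainly by utterances from similar speakers.
    """

    def __init__(self, pool_size=16, prompt_len=4, dim=768, top_k=4):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim) * 0.02)
        self.keys = nn.Parameter(torch.randn(pool_size, dim) * 0.02)
        self.top_k = top_k

    def forward(self, query, frame_feats):
        # query: (B, dim) utterance-level embedding used to match prompt keys
        # frame_feats: (B, T, dim) frame-level features fed to the Transformer
        q = F.normalize(query, dim=-1)                # (B, dim)
        k = F.normalize(self.keys, dim=-1)            # (P, dim)
        sim = q @ k.t()                               # (B, P) cosine similarities
        top_sim, idx = sim.topk(self.top_k, dim=-1)   # best-matching prompts per utterance

        selected = self.prompts[idx]                  # (B, top_k, prompt_len, dim)
        selected = selected.reshape(query.size(0), -1, self.prompts.size(-1))

        # Prepend the selected prompts to the frame sequence.
        x = torch.cat([selected, frame_feats], dim=1)

        # Key-matching loss: pull selected keys toward their queries so the
        # pool gradually organizes into speaker-like clusters.
        match_loss = (1.0 - top_sim).mean()
        return x, match_loss
```

In use, only the prompt pool, its keys, and the downstream classifier would be updated while the pre-trained Transformer stays frozen, which is what keeps the adaptation memory- and computation-efficient; the exact losses and selection rule in the paper may differ from this sketch.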

Original language: English
Pages (from-to): 2675-2679
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
DOIs
Publication status: Published - Sept 2024
Event: 25th Interspeech Conference 2024 - Kos Island, Greece
Duration: 1 Sept 2024 → 5 Sept 2024

Keywords

  • parameter-efficient tuning
  • pre-trained Transformer
  • prompt pool
  • prompt tuning
  • speaker verification

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modelling and Simulation
