Collaborative Contrastive Learning for Hypothesis Domain Adaptation

Jen-Tzung Chien, I-Ping Yeh, Man-Wai Mak

Research output: Journal article publication › Conference article › Academic research › peer-review

2 Citations (Scopus)

Abstract

Achieving desirable performance in speaker recognition under severe domain mismatch is challenging. The challenge becomes even harder when the source data are missing. To enhance low-resource speaker representation, this study deals with a practical scenario, called hypothesis domain adaptation, where a model trained on a source domain is adapted, as a hypothesis, to a significantly different target domain without access to the source data. To pursue a domain-invariant representation, this paper proposes a novel collaborative hypothesis domain adaptation (CHDA), in which dual encoders are collaboratively trained to estimate pseudo source data that are then utilized to maximize domain confusion. Combined with contrastive learning, CHDA is further enhanced by strengthening both domain matching and speaker discrimination. Experiments on cross-language speaker recognition show the merit of the proposed method.
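The abstract only names the two ingredients of CHDA, domain confusion driven by pseudo source data and contrastive learning for speaker discrimination, without spelling out the loss functions. The sketch below is a rough, hypothetical PyTorch rendering of those two terms under assumed definitions; the function names, the supervised-contrastive formulation, and the uniform-target confusion objective are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(embeddings, speaker_labels, temperature=0.1):
    """Pull embeddings of the same speaker together and push different speakers
    apart (an assumed supervised contrastive formulation, not the paper's exact loss)."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                      # pairwise cosine similarities
    pos_mask = speaker_labels.unsqueeze(0) == speaker_labels.unsqueeze(1)
    pos_mask.fill_diagonal_(False)                     # exclude self-pairs as positives
    logits = sim - torch.eye(len(z), device=z.device) * 1e9  # remove self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob * pos_mask).sum(dim=1).div(pos_counts).mean()


def domain_confusion_loss(target_emb, pseudo_source_emb, domain_classifier):
    """Encourage domain-invariant embeddings by driving a domain classifier
    toward maximal confusion between target and pseudo-source embeddings."""
    x = torch.cat([pseudo_source_emb, target_emb], dim=0)
    logits = domain_classifier(x).squeeze(-1)
    # Uniform 0.5 targets correspond to the classifier being unable to tell
    # pseudo-source from target, i.e. maximum domain confusion.
    return F.binary_cross_entropy_with_logits(logits, torch.full_like(logits, 0.5))
```

In a full training loop, one would presumably add both terms to the encoder objective, e.g. `loss = supervised_contrastive_loss(z, y) + lam * domain_confusion_loss(z_tgt, z_pseudo, D)`, with the pseudo-source embeddings produced by the collaboratively trained dual encoders described in the abstract; the weighting `lam` and the classifier `D` are likewise assumptions for illustration.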

Original language: English
Pages (from-to): 3225-3229
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
DOIs
Publication status: Published - Sept 2024
Event: 25th Interspeech Conference 2024 - Kos Island, Greece
Duration: 1 Sept 2024 - 5 Sept 2024

Keywords

  • collaborative learning
  • contrastive learning
  • domain adaptation
  • speaker verification

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modelling and Simulation

