Intramodal and intermodal fusion for audio-visual biometric authentication

Ming Cheung Cheung, Man Wai Mak, Sun Yuan Kung

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

2 Citations (Scopus)

Abstract

This paper proposes a multiple-source multiple-sample fusion approach to identity verification. Fusion is performed at two levels: intramodal and intermodal. In intramodal fusion, the scores of multiple samples (e.g. utterances or video shots) obtained from the same modality are linearly combined, where the combination weights are dependent on the difference between the score values and a user-dependent reference score obtained during enrollment. This is followed by intermodal fusion in which the means of intramodal fused scores obtained from different modalities are fused. The final fused score is then used for decision making. This two-level fusion approach was applied to audio-visual biometric authentication, and experimental results based on the XM2VTSDB corpus show that the proposed fusion approach can achieve an error rate reduction of up to 83%.
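The two-level fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual method: the exact form of the score-dependent combination weights is not given in the abstract, so an exponential decay in the distance from the user-dependent reference score is assumed here, and equal weighting is assumed for the intermodal stage. All function names and parameter values are hypothetical.

```python
import numpy as np

def intramodal_fuse(scores, ref_score, alpha=1.0):
    """Linearly combine the scores of multiple samples (e.g. utterances
    or video shots) from one modality. Weights depend on the difference
    between each score and a user-dependent reference score obtained
    during enrollment (exponential decay assumed for illustration)."""
    scores = np.asarray(scores, dtype=float)
    weights = np.exp(-alpha * np.abs(scores - ref_score))
    weights /= weights.sum()          # normalize so weights sum to 1
    return float(np.dot(weights, scores))

def intermodal_fuse(modality_scores):
    """Fuse the intramodal fused scores from different modalities
    (equal weighting assumed here)."""
    return float(np.mean(modality_scores))

# Hypothetical verification attempt: three utterances and two video shots
audio = intramodal_fuse([0.8, 0.6, 0.7], ref_score=0.75)
video = intramodal_fuse([0.5, 0.9], ref_score=0.6)
final = intermodal_fuse([audio, video])
decision = "accept" if final > 0.65 else "reject"  # threshold is illustrative
```

Because the weights are normalized, each intramodal fused score is a convex combination of the sample scores, and the final score lies between the two modalities' fused scores.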
Original language: English
Title of host publication: 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, ISIMP 2004
Pages: 25-28
Number of pages: 4
Publication status: Published - 1 Dec 2004
Event: 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, ISIMP 2004 - Hong Kong, China
Duration: 20 Oct 2004 - 22 Oct 2004

Conference

Conference: 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, ISIMP 2004
Country/Territory: Hong Kong
City: Hong Kong
Period: 20/10/04 - 22/10/04

ASJC Scopus subject areas

  • General Engineering
