On consistent fusion of multimodal biometrics

S. Y. Kung, Man Wai Mak

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

7 Citations (Scopus)

Abstract

Audio-visual (AV) biometrics offer complementary information sources, and the use of both voice and facial images for biometric authentication has recently become economically feasible. Multi-modality adaptive fusion, which combines audio and visual information, therefore offers an effective tool for substantially improving classification performance. In terms of implementation, we propose to integrate an audio classifier (based on Gaussian mixture models) and a visual classifier (based on FaceIT, a commercially available software package) into a well-established mixture-of-experts fusion architecture. In addition, a consistent fusion strategy is introduced as a baseline fusion scheme, which establishes the lower bound of the "consistent region" in the FAR-FRR ROC. Our simulation results indicate that the prediction performance of the proposed adaptive fusion schemes falls in the consistent region. More importantly, the notion of consistent fusion can also facilitate the selection of the best modalities to fuse.
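The sketch below illustrates the kind of score-level fusion the abstract describes: an audio expert built from Gaussian mixture models (client model vs. a background model) combined with a face-matcher score. It is not the authors' implementation; the feature dimensions, the synthetic data, the placeholder `face_score` function, and the fixed fusion weight are all assumptions made for illustration, whereas the paper uses FaceIT scores and an adaptive mixture-of-experts gate.

```python
# Minimal sketch (not the authors' implementation) of score-level
# audio-visual fusion with a GMM-based audio expert.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# --- Audio expert: GMM log-likelihood ratio (client model vs. background) --
dim = 12                                            # assumed feature dimension
client_feats = rng.normal(0.5, 1.0, (500, dim))     # synthetic enrolment data
background_feats = rng.normal(0.0, 1.0, (2000, dim))

client_gmm = GaussianMixture(n_components=8, covariance_type="diag",
                             random_state=0).fit(client_feats)
background_gmm = GaussianMixture(n_components=8, covariance_type="diag",
                                 random_state=0).fit(background_feats)

def audio_score(utterance):
    """Average per-frame log-likelihood ratio: client GMM vs. background GMM."""
    return float(np.mean(client_gmm.score_samples(utterance)
                         - background_gmm.score_samples(utterance)))

# --- Visual expert: placeholder for a commercial face matcher's output -----
def face_score():
    """Stand-in similarity score; a real system would query the face matcher."""
    return float(rng.normal(1.0, 0.5))

# --- Fusion: fixed-weight combination of the two expert scores -------------
# The paper's mixture-of-experts architecture learns this weighting
# adaptively; a fixed weight is used here only to show the combination step.
W_AUDIO = 0.6

def fused_score(utterance):
    return W_AUDIO * audio_score(utterance) + (1.0 - W_AUDIO) * face_score()

test_utterance = rng.normal(0.5, 1.0, (300, dim))   # synthetic test features
score = fused_score(test_utterance)
print("fused score:", score)
print("accept" if score > 0.0 else "reject")        # threshold set per FAR/FRR target
```

In practice the decision threshold on the fused score would be chosen from the FAR-FRR ROC, which is also where the paper's notion of a "consistent region" is defined.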
Original language: English
Title of host publication: 2006 IEEE International Conference on Acoustics, Speech, and Signal Processing - Proceedings
Volume: 5
Publication status: Published - 1 Dec 2006
Event: 2006 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2006 - Toulouse, France
Duration: 14 May 2006 – 19 May 2006

Conference

Conference: 2006 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2006
Country/Territory: France
City: Toulouse
Period: 14/05/06 – 19/05/06

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
