A sparse representation method of bimodal biometrics and palmprint recognition experiments

Yong Xu, Zizhu Fan, Minna Qiu, Dapeng Zhang, Jing Yu Yang

Research output: Journal article publication › Journal article › Academic research › peer-review

30 Citations (Scopus)

Abstract

In this paper, we propose a sparse representation method for bimodal biometrics. The proposed method first accomplishes feature-level fusion by concatenating the samples of the two biometric traits into a single real vector. The method then builds on the observation that an approximate representation of the test sample can be more useful for classification than the test sample itself, and uses this approximate representation to classify the test sample. Concretely, the proposed method expresses the test sample as a weighted sum of its neighbors drawn from the set of training samples, and performs classification on the basis of this representation. A variety of experiments demonstrate that the proposed approximate representation achieves higher accuracy. The method rests on the following reasonable assumption: the test sample is likely to come from one of the classes to which its neighbors belong. In this paper, we also formally analyze the difference between the proposed method and conventional appearance-based methods, and show that the proposed method represents the test sample more accurately than conventional appearance-based methods do.
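The pipeline described in the abstract (feature-level fusion, then a weighted-neighbor approximate representation, then classification) can be illustrated with a minimal sketch. This is not the authors' implementation: the neighbor count `k`, the least-squares weight fitting, and the per-class residual decision rule are all illustrative assumptions standing in for the paper's specific formulation.

```python
import numpy as np

def fuse(trait_a, trait_b):
    # Feature-level fusion: concatenate the two biometric trait
    # samples (e.g. palmprint + face features) into one real vector.
    return np.concatenate([trait_a, trait_b])

def classify(test, X_train, y_train, k=3):
    """Classify a fused test vector via a weighted sum of its
    k nearest training samples (an illustrative stand-in for the
    paper's approximate-representation scheme)."""
    # 1. Select the k nearest neighbors of the test sample.
    dists = np.linalg.norm(X_train - test, axis=1)
    idx = np.argsort(dists)[:k]
    N = X_train[idx]            # neighbors, shape (k, d)
    labels = y_train[idx]
    # 2. Fit weights w so that N.T @ w approximates the test sample
    #    (least-squares solution; the weighted sum of neighbors).
    w, *_ = np.linalg.lstsq(N.T, test, rcond=None)
    # 3. For each class among the neighbors, reconstruct the test
    #    sample using only that class's weighted neighbors; the class
    #    with the smallest reconstruction residual wins.
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        recon = N[mask].T @ w[mask]
        res = np.linalg.norm(test - recon)
        if res < best_res:
            best, best_res = c, res
    return best
```

This mirrors the abstract's assumption directly: the predicted class is one of the classes the neighbors belong to, chosen by how well that class's contribution to the weighted sum represents the test sample.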
Original language: English
Pages (from-to): 164-171
Number of pages: 8
Journal: Neurocomputing
Volume: 103
DOIs
Publication status: Published - 1 Mar 2013

Keywords

  • Bimodal biometrics
  • Biometrics
  • Linear discriminant analysis
  • Principal component analysis
  • Sparse representation

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
