Local generic representation for face recognition with single sample per person

Pengfei Zhu, Meng Yang, Lei Zhang, Il Yong Lee

Research output: Journal article publication › Conference article › Academic research › peer-review

19 Citations (Scopus)

Abstract

Face recognition with a single sample per person (SSPP) is a very challenging task because, in such a scenario, it is difficult to predict the facial variations of a query sample from the gallery samples. Considering that different parts of the human face contribute differently to face recognition, and that intra-class facial variations can be shared across different subjects, we propose a local generic representation (LGR) based framework for face recognition with SSPP. A local gallery dictionary is built by extracting neighboring patches from the gallery dataset, while an intra-class variation dictionary is built from an external generic dataset to predict the possible facial variations (e.g., illumination, pose, expression, and disguise). LGR minimizes the total representation residual of the query sample over the local gallery dictionary and the generic variation dictionary, using correntropy to measure the representation residual of each patch. Half-quadratic analysis is adopted to solve the optimization problem. LGR takes advantage of both patch-based local representation and generic variation representation, showing leading performance in face recognition with SSPP.
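The coding-and-classification scheme described above can be sketched in NumPy. This is a simplified illustration, not the authors' implementation: it codes each query patch over a combined dictionary of local gallery patches and a shared generic variation dictionary via l2-regularized least squares (the paper's actual coding model and half-quadratic iterations are more elaborate), then weights each patch's class-wise residual by a Gaussian correntropy kernel on its overall residual, so poorly represented patches (e.g., occluded ones) count less. All function names, the regularization parameter `lam`, and the kernel width `sigma` are illustrative assumptions.

```python
import numpy as np

def code_patch(y, D, V, lam=1e-3):
    """Code query patch y over the combined dictionary [D | V].
    D: local gallery dictionary for this patch (one column per gallery sample).
    V: generic intra-class variation dictionary (shared across patches).
    Uses l2-regularized least squares as a simplified stand-in for the
    paper's coding model (assumption, not the authors' exact objective)."""
    A = np.hstack([D, V])
    a = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return a[:D.shape[1]], a[D.shape[1]:]  # gallery coeffs, variation coeffs

def classify(y_patches, gallery_patches, V, labels, sigma=1.0):
    """Identify the query from its patches.
    y_patches[k]: query patch k; gallery_patches[k]: its local dictionary;
    labels: class id of each gallery column (same order for every patch)."""
    labels = np.asarray(labels)
    n_patch = len(y_patches)
    alphas, betas = [], []
    res_all = np.empty(n_patch)
    for k in range(n_patch):
        a, b = code_patch(y_patches[k], gallery_patches[k], V)
        alphas.append(a)
        betas.append(b)
        res_all[k] = np.sum((y_patches[k]
                             - gallery_patches[k] @ a - V @ b) ** 2)
    # Correntropy-style patch weights: Gaussian kernel on each patch's
    # representation residual, so badly coded patches are down-weighted.
    w = np.exp(-res_all / (2.0 * sigma ** 2))
    # Class-wise weighted reconstruction residual over all patches.
    classes = np.unique(labels)
    scores = []
    for c in classes:
        mask = labels == c
        s = 0.0
        for k in range(n_patch):
            rc = (y_patches[k]
                  - gallery_patches[k][:, mask] @ alphas[k][mask]
                  - V @ betas[k])
            s += w[k] * np.sum(rc ** 2)
        scores.append(s)
    return classes[int(np.argmin(scores))]
```

As a toy usage: with two patches in R^4, two gallery subjects (one sample each), and a single-atom variation dictionary, a query whose patches coincide with subject 0's patches is assigned to class 0, since subject 0's columns reconstruct it with near-zero weighted residual.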
Original language: English
Pages (from-to): 34-50
Number of pages: 17
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9005
DOIs
Publication status: Published - 1 Jan 2015
Event: 12th Asian Conference on Computer Vision, ACCV 2014 - Singapore, Singapore
Duration: 1 Nov 2014 - 5 Nov 2014
Conference number: 12

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
