Sparse cost-sensitive classifier with application to face recognition

Jiangyue Man, Xiaoyuan Jing, Dapeng Zhang, Chao Lan

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

12 Citations (Scopus)

Abstract

The sparse representation technique has been successfully employed for face recognition. Although current sparse representation based classifiers achieve high classification accuracy, they implicitly assume that all misclassifications incur the same loss. In many real-world applications, however, different misclassifications can lead to different losses. Motivated by this concern, we propose a sparse cost-sensitive classifier for face recognition. Our approach uses a probabilistic model of sparse representation to estimate the posterior probabilities of a test sample, computes the misclassification losses from these posterior probabilities, and then predicts the class label that minimizes the expected loss. Experimental results on the public AR and FRGC face databases validate the efficacy of the proposed approach.
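The pipeline described in the abstract (sparse coding, posterior estimation, cost-sensitive decision) can be illustrated with a minimal sketch. The following Python code is an assumption-laden illustration, not the authors' exact formulation: it uses scikit-learn's Lasso as the L1 solver and a softmax over negative class-wise reconstruction residuals as the posterior model, both of which are illustrative choices.

```python
# Minimal sketch of cost-sensitive prediction from sparse-representation
# "posteriors". Assumptions (not from the paper): Lasso as the L1 solver,
# softmax of negative class residuals as the posterior estimate.
import numpy as np
from sklearn.linear_model import Lasso


def predict_cost_sensitive(D, labels, y, cost, alpha=0.01):
    """D: (d, n) dictionary of training faces as columns, labels: (n,) class ids,
    y: (d,) test sample, cost: (C, C) matrix where cost[i, j] is the loss of
    predicting class j when the true class is i."""
    # Sparse coding of the test sample over the training dictionary.
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y)
    x = lasso.coef_

    classes = np.unique(labels)
    # Class-wise reconstruction residuals, as in standard SRC.
    residuals = np.array([
        np.linalg.norm(y - D[:, labels == c] @ x[labels == c]) for c in classes
    ])

    # Assumed posterior model: softmax of negative residuals.
    post = np.exp(-residuals)
    post /= post.sum()

    # Expected loss of predicting each class j: sum_i post[i] * cost[i, j].
    expected_loss = cost.T @ post
    return classes[np.argmin(expected_loss)]


# Toy usage with random data and a hypothetical asymmetric cost matrix.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 30))
labels = np.repeat(np.arange(3), 10)
y = D[:, 5] + 0.1 * rng.standard_normal(50)
cost = np.array([[0, 1, 5], [1, 0, 5], [1, 1, 0]], dtype=float)
print(predict_cost_sensitive(D, labels, y, cost))
```

With an asymmetric cost matrix like the one above, the decision can differ from the plain minimum-residual rule whenever a cheap-to-confuse class has a posterior close to that of an expensive-to-confuse one, which is the behavior the paper targets.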
Original language: English
Title of host publication: ICIP 2011
Subtitle of host publication: 2011 18th IEEE International Conference on Image Processing
Pages: 1773-1776
Number of pages: 4
DOIs
Publication status: Published - 1 Dec 2011
Event: 2011 18th IEEE International Conference on Image Processing, ICIP 2011 - Brussels, Belgium
Duration: 11 Sept 2011 - 14 Sept 2011

Conference

Conference: 2011 18th IEEE International Conference on Image Processing, ICIP 2011
Country/Territory: Belgium
City: Brussels
Period: 11/09/11 - 14/09/11

Keywords

  • Cost-sensitive learning
  • face recognition
  • sparse cost-sensitive classifier
  • sparse representation

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing

