Region-based feature fusion for facial-expression recognition

Cigdem Turan, Kin Man Lam

Research output: Conference article published in proceeding or book › Academic research › peer-review

17 Citations (Scopus)


In this paper, we propose a feature-fusion method based on Canonical Correlation Analysis (CCA) for facial-expression recognition. In our method, features are extracted separately from the eye and mouth windows, which are correlated with each other in representing a facial expression. For each window, two effective descriptors, namely Local Phase Quantization (LPQ) and the Pyramid of Histogram of Oriented Gradients (PHOG), are employed to form a low-level representation. The features are then projected into a coherent subspace using CCA so as to maximize their correlation. In our experiments, the Extended Cohn-Kanade dataset is used; its face images span seven emotions, namely anger, contempt, disgust, fear, happiness, sadness, and surprise. Experimental results show that our method achieves excellent accuracy for facial-expression recognition.
Original language: English
Title of host publication: 2014 IEEE International Conference on Image Processing, ICIP 2014
Number of pages: 5
ISBN (Electronic): 9781479957514
Publication status: Published - 28 Jan 2014

Keywords

  • Canonical Correlation Analysis
  • facial expression recognition
  • feature fusion
  • Local Phase Quantization
  • Pyramid of Histogram of Oriented Gradients

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
