Manifold learning has been successfully applied to facial expression recognition by modeling different expressions as a smooth manifold embedded in a high-dimensional space. However, the assumption of a single manifold is still arguable and therefore does not necessarily guarantee the best classification accuracy. In this paper, a generalized framework for modeling and recognizing facial expressions on multiple manifolds is presented, which assumes that different expressions may reside on different manifolds of possibly different dimensionalities. The intrinsic features of each expression are first learned separately, and the genetic algorithm (GA) is then employed to obtain a near-optimal dimensionality of each expression manifold from the classification viewpoint. Classification is performed under a newly defined criterion based on the minimum reconstruction error on the manifolds. Extensive experiments on both the Cohn-Kanade and Feedtum databases show the effectiveness of the proposed multiple-manifold-based approach.
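The minimum-reconstruction-error criterion can be illustrated with a minimal sketch. Here per-class linear PCA subspaces stand in for the paper's manifold learning, and a fixed per-class dimensionality stands in for the GA search; the function names and the choice of linear PCA are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fit_class_subspaces(X_by_class, dims):
    """Learn one subspace per expression class, each with its own dimensionality.

    X_by_class: dict mapping class label -> (n_samples, n_features) array.
    dims: dict mapping class label -> subspace dimensionality (here fixed;
          the paper tunes these per-class dimensionalities with a GA).
    """
    models = {}
    for label, X in X_by_class.items():
        mu = X.mean(axis=0)
        # SVD of the centered data yields the principal directions (PCA),
        # used here as a linear stand-in for a learned expression manifold.
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        models[label] = (mu, Vt[:dims[label]])
    return models

def classify(x, models):
    """Assign x to the class whose subspace reconstructs it with minimum error."""
    best_label, best_err = None, np.inf
    for label, (mu, V) in models.items():
        # Project x onto the class subspace and reconstruct it.
        recon = (x - mu) @ V.T @ V + mu
        err = np.linalg.norm(x - recon)
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```

A query is compared against every class model, so each class keeps its own mean and basis; this mirrors the multiple-manifold assumption that each expression has its own low-dimensional structure.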
Keywords

- Facial expression recognition
- Multiple manifolds
ASJC Scopus subject areas
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence