Abstract
Manifold learning is an effective dimensionality reduction technique for face feature extraction, which, generally speaking, tends to preserve the local neighborhood structures of the given samples. However, the neighbors of a sample often comprise more inter-class data than intra-class data, which is undesirable for classification. In this paper, we address this problem by proposing a subclass-center based manifold preserving projection (SMPP) approach, which aims at preserving the local neighborhood structure of subclass-centers instead of the given samples. We show theoretically, from a probabilistic perspective, that the neighbors of a subclass-center comprise more intra-class data than inter-class data and are thus more desirable for classification. To take full advantage of class separability, we further propose the discriminant SMPP (DSMPP) approach, which incorporates the subclass discriminant analysis (SDA) technique into SMPP. In contrast to related discriminant manifold learning methods, DSMPP is formulated as a dual-objective optimization problem, for which we present an analytical solution. Experimental results on the public AR, FERET and CAS-PEAL face databases demonstrate that the proposed approaches outperform related manifold learning and discriminant manifold learning methods in classification performance.
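The abstract does not give implementation details, but the core idea can be illustrated with a minimal sketch, assuming that subclasses are obtained by k-means clustering within each class and that the projection is learned from an LPP-style heat-kernel graph built over the subclass centers rather than over the raw samples. The function name `smpp_projection` and all parameter choices below are hypothetical, not the authors' specification.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def smpp_projection(X, y, n_subclasses=2, n_neighbors=5, n_components=2, sigma=1.0):
    """Sketch: learn a linear projection that preserves the local neighborhood
    structure of per-class subclass centers (assumed k-means centers)."""
    y = np.asarray(y)

    # 1. Compute subclass centers by clustering each class separately.
    centers = []
    for c in np.unique(y):
        Xc = X[y == c]
        k = min(n_subclasses, len(Xc))
        centers.append(KMeans(n_clusters=k, n_init=10).fit(Xc).cluster_centers_)
    C = np.vstack(centers)                     # one row per subclass center

    # 2. Heat-kernel affinities on the k-nearest-neighbor graph of the centers.
    A = kneighbors_graph(C, min(n_neighbors, len(C) - 1),
                         mode='distance', include_self=False).toarray()
    W = np.where(A > 0, np.exp(-A ** 2 / (2 * sigma ** 2)), 0.0)
    W = np.maximum(W, W.T)                     # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # graph Laplacian over the centers

    # 3. LPP-style generalized eigenproblem: minimize a' C' L C a s.t. a' C' D C a = 1.
    M1, M2 = C.T @ L @ C, C.T @ D @ C
    M2 += 1e-6 * np.eye(M2.shape[0])           # small regularizer for stability
    _, vecs = eigh(M1, M2)                     # eigenvalues in ascending order
    return vecs[:, :n_components]              # projection matrix (d x n_components)
```

Samples would then be projected as `X @ smpp_projection(X, y)` before nearest-neighbor classification; the discriminant variant (DSMPP) described in the abstract would additionally bring in an SDA-based between-subclass objective, which this sketch does not attempt to reproduce.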
| Original language | English |
| --- | --- |
| Pages (from-to) | 709-717 |
| Number of pages | 9 |
| Journal | Pattern Recognition Letters |
| Volume | 33 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 15 Apr 2012 |
Keywords
- Discriminant SMPP (DSMPP)
- Dual-objective optimization
- Face recognition
- Manifold learning
- Subclass discriminant analysis
- Subclass-center based manifold preserving projection (SMPP)
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence