Abstract
Classical linear discriminant analysis has undergone considerable development and has recently been extended to a variety of settings. In this paper, a novel discriminant subspace learning method called sparse tensor discriminant analysis (STDA) is proposed, which further extends the recently presented multilinear discriminant analysis to the sparse case. By introducing the L1 and L2 norms into the objective function of STDA, we obtain multiple interrelated sparse discriminant subspaces for feature extraction. Since there is no closed-form solution, the k-mode optimization technique and L1-norm sparse regression are combined to iteratively learn the optimal sparse discriminant subspaces along the different modes of the tensors. Moreover, each non-zero element of each subspace is selected from the most important variables/factors, so STDA has the potential to outperform other discriminant subspace methods. Extensive experiments on face databases (Yale, FERET, and CMU PIE) and the Weizmann action database show that the proposed STDA algorithm achieves the most competitive performance among the compared tensor-based methods, particularly in small-sample-size cases.
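
The abstract describes an alternating scheme: for each tensor mode, a discriminant projection is learned while the other modes' projections are held fixed, with sparsity induced by an L1/L2-penalised regression step. The Python sketch below is not the authors' code; it only illustrates one plausible form of such a scheme for second-order tensors (images), using scikit-learn's `ElasticNet` for the combined L1/L2 regression. All function names, the initialisation, and the penalty values are hypothetical choices made for illustration.

```python
# Minimal sketch of an alternating sparse tensor discriminant scheme
# (assumed structure, not the paper's reference implementation).
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import ElasticNet


def _project_other_mode(Xs, U_other, mode):
    """Project each 2nd-order sample along the *other* mode; the columns of
    each result are mode-`mode` fibres."""
    if mode == 0:
        return [X @ U_other for X in Xs]          # shape (d1, r2) each
    return [(U_other.T @ X).T for X in Xs]        # shape (d2, r1) each


def _scatter(Ps, y):
    """Between- and within-class scatter matrices of the mode fibres in Ps."""
    mean_all = np.mean(Ps, axis=0)
    d = Ps[0].shape[0]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Pc = [P for P, yi in zip(Ps, y) if yi == c]
        mean_c = np.mean(Pc, axis=0)
        diff = mean_c - mean_all
        Sb += len(Pc) * diff @ diff.T
        for P in Pc:
            E = P - mean_c
            Sw += E @ E.T
    return Sb, Sw


def sparse_mode_projection(Xs, y, U_other, mode, r, alpha=0.01, l1_ratio=0.5):
    """One k-mode step: dense discriminant directions, then an elastic-net
    regression onto their scores to obtain a sparse projection matrix."""
    Ps = _project_other_mode(Xs, U_other, mode)
    Sb, Sw = _scatter(Ps, y)
    d = Sb.shape[0]
    # Dense discriminant directions via a regularised generalised eigenproblem.
    _, V = eigh(Sb, Sw + 1e-3 * np.eye(d))
    V_dense = V[:, ::-1][:, :r]                   # top-r directions
    # Stack all mode fibres as rows and regress each dense discriminant score
    # on them with an L1 + L2 (elastic-net) penalty to induce sparsity.
    D = np.concatenate([P.T for P in Ps], axis=0)
    scores = D @ V_dense
    U = np.zeros((d, r))
    for j in range(r):
        enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                          fit_intercept=False, max_iter=5000)
        enet.fit(D, scores[:, j])
        U[:, j] = enet.coef_ / (np.linalg.norm(enet.coef_) + 1e-12)
    return U


def stda_sketch(Xs, y, r1, r2, n_iter=5):
    """Alternate the sparse k-mode step over the two modes of image samples."""
    d1, d2 = Xs[0].shape
    U1, U2 = np.eye(d1)[:, :r1], np.eye(d2)[:, :r2]   # simple initialisation
    for _ in range(n_iter):
        U1 = sparse_mode_projection(Xs, y, U2, mode=0, r=r1)
        U2 = sparse_mode_projection(Xs, y, U1, mode=1, r=r2)
    return U1, U2
```

In this hypothetical layout, each sample is projected by the fixed mode's current matrix, mode fibres are pooled to form scatter matrices, and sparsity comes from regressing the dense discriminant scores with an elastic-net penalty, mirroring the L1/L2 combination mentioned in the abstract.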
| Original language | English |
|---|---|
| Article number | 6518139 |
| Pages (from-to) | 3904-3915 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 22 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 17 Sept 2013 |
Keywords
- Face recognition
- Feature extraction
- Linear discriminant analysis
- Sparse projections
ASJC Scopus subject areas
- Software
- Computer Graphics and Computer-Aided Design