Fast kernel Fisher discriminant analysis via approximating the kernel principal component analysis

Jinghua Wang, Qin Li, Jia You, Qijun Zhao

Research output: Journal article, peer-reviewed

23 Citations (Scopus)


Kernel Fisher discriminant analysis (KFDA) extracts a nonlinear feature from a sample by evaluating as many kernel functions as there are training samples; its computational cost therefore grows with the size of the training set. In this paper we propose a more efficient approach to nonlinear feature extraction, FKFDA (fast KFDA). FKFDA consists of two parts. First, we select a subset of the training samples according to two criteria derived from approximating kernel principal component analysis (AKPCA) in the kernel feature space. Then, treating the selected training samples as nodes, we formulate FKFDA to improve the efficiency of nonlinear feature extraction. In FKFDA, the discriminant vectors are expressed as linear combinations of the nodes in the kernel feature space, so extracting a feature from a sample requires evaluating only as many kernel functions as there are nodes. The proposed FKFDA therefore has a much faster feature extraction procedure than naive kernel-based methods. Experimental results on face recognition and benchmark classification datasets suggest that FKFDA produces features with good classification performance.
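The speed-up described in the abstract comes from evaluating kernels against a small set of nodes rather than the whole training set. The following is a minimal sketch of that idea only; the node-selection criteria (AKPCA) and the discriminant coefficients from the paper are not reproduced here, so `nodes` and `alpha` below are placeholders, not the paper's actual procedure.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """Gaussian (RBF) kernel between the rows of a and the rows of b."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def extract_feature(x, nodes, alpha, gamma=0.5):
    """Project sample x onto a discriminant vector expressed as a linear
    combination of nodes in the kernel feature space:
        f(x) = sum_i alpha_i * k(node_i, x)
    Cost: one kernel evaluation per node, not per training sample."""
    k = rbf_kernel(nodes, x[None, :], gamma).ravel()
    return float(alpha @ k)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))   # full training set (500 samples)
nodes = X_train[:20]                  # placeholder for AKPCA-selected nodes
alpha = rng.normal(size=20)           # placeholder discriminant coefficients
x = rng.normal(size=8)                # a test sample
f = extract_feature(x, nodes, alpha)  # 20 kernel evaluations instead of 500
```

A naive kernel method would instead need coefficients over all 500 training samples, making each feature extraction 25 times more expensive in kernel evaluations here.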
Original language: English
Pages (from-to): 3313-3322
Number of pages: 10
Issue number: 17
Publication status: Published - 1 Oct 2011


Keywords

  • Fast kernel Fisher discriminant analysis
  • Fisher discriminant analysis
  • Kernel Fisher discriminant analysis
  • Nonlinear feature extraction
  • Pattern classification

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
