Evaluate dissimilarity of samples in feature space for improving KPCA

Xu Yong, Dapeng Zhang, Jian Yang, Jin Zhong, Jingyu Yang

Research output: Journal article › Academic research › peer-review

38 Citations (Scopus)

Abstract

In the feature space, each eigenvector produced by KPCA is a linear combination of all samples in the training set, so the computational efficiency of KPCA-based feature extraction falls as the training set grows. In this paper, we propose a novel KPCA-based feature extraction method built on the assumption that an eigenvector can be approximated by a linear combination of a subset of the training samples ("nodes"). The new method selects maximally dissimilar samples as nodes, so that the eigenvectors retain as much information about the training set as possible. By evaluating dissimilarity through the distance between training samples in the feature space, we devise a simple and efficient algorithm that identifies the nodes and produces a sparse KPCA. Experimental results show that the proposed method also achieves high classification accuracy.
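The abstract does not reproduce the algorithm itself, but the idea can be sketched. The sketch below assumes an RBF kernel, uses the standard identity ||φ(x) − φ(y)||² = k(x,x) − 2k(x,y) + k(y,y) to measure feature-space dissimilarity, and adopts a greedy farthest-point rule for picking maximally dissimilar nodes; the function names (rbf_kernel, select_nodes, sparse_kpca) and the greedy rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def feature_space_dist(X, gamma=1.0):
    # ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y); for RBF, k(x,x) = 1
    K = rbf_kernel(X, X, gamma)
    return np.sqrt(np.maximum(2.0 - 2.0 * K, 0.0))

def select_nodes(X, m, gamma=1.0):
    # Greedy farthest-point selection in feature space (an assumed rule):
    # each new node maximizes its distance to the nodes chosen so far.
    D = feature_space_dist(X, gamma)
    nodes = [0]                      # arbitrary starting sample
    min_dist = D[0].copy()
    for _ in range(m - 1):
        nxt = int(np.argmax(min_dist))
        nodes.append(nxt)
        min_dist = np.minimum(min_dist, D[nxt])
    return np.array(nodes)

def sparse_kpca(X, m, n_components, gamma=1.0):
    # KPCA restricted to the m selected nodes: eigenvectors are linear
    # combinations of the nodes only, so cost no longer scales with n.
    nodes = select_nodes(X, m, gamma)
    K = rbf_kernel(X[nodes], X[nodes], gamma)
    ones = np.full((m, m), 1.0 / m)
    Kc = K - ones @ K - K @ ones + ones @ K @ ones   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    # Scale so each eigenvector has unit norm in the feature space
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return nodes, alphas

# Usage: project all samples onto the sparse kernel principal components
# (centering of the projection kernel is omitted here for brevity).
X = np.random.randn(200, 10)
nodes, alphas = sparse_kpca(X, m=30, n_components=5)
features = rbf_kernel(X, X[nodes]) @ alphas
```

The point of the node restriction is visible in the last line: projecting a sample costs m kernel evaluations instead of n, which is where the efficiency gain over full KPCA comes from.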
Original language: English
Pages (from-to): 479-495
Number of pages: 17
Journal: International Journal of Information Technology and Decision Making
Volume: 10
Issue number: 3
DOIs
Publication status: Published - 1 May 2011

Keywords

  • Feature extraction
  • Kernel methods
  • Kernel PCA

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
