Improvement of the kernel minimum squared error model for fast feature extraction

Jinghua Wang, Peng Wang, Qin Li, Jia You

Research output: Journal article · Academic research · Peer-reviewed

4 Citations (Scopus)

Abstract

The kernel minimum squared error (KMSE) model expresses the feature extractor as a linear combination of all the training samples in the high-dimensional kernel space. To extract a feature from a sample, KMSE must evaluate as many kernel functions as there are training samples, so the computational cost of KMSE-based feature extraction grows linearly with the size of the training set. In this paper, we propose an efficient kernel minimum squared error (EKMSE) model for two-class classification. The proposed EKMSE expresses each feature extractor as a linear combination of nodes, which form a small portion of the training samples. To extract a feature from a sample, EKMSE only needs to evaluate as many kernel functions as there are nodes. As the nodes are commonly much fewer than the training samples, EKMSE is much faster than KMSE in feature extraction. EKMSE can achieve the same training accuracy as the standard KMSE, and it also avoids the overfitting problem. We implement the EKMSE model using two algorithms. Experimental results show the feasibility of the EKMSE model.
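The cost contrast described in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm for selecting nodes or fitting coefficients; it only shows why an extractor built on a small node set (here a hypothetical 10 of 200 samples, with made-up coefficients) needs far fewer kernel evaluations per feature than one built on the full training set:

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # Gaussian (RBF) kernel between two vectors.
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_feature(x, support, coeffs, bias=0.0, gamma=0.5):
    # A KMSE-style extractor: f(x) = sum_i coeffs[i] * k(support[i], x) + bias.
    # Cost is one kernel evaluation per vector in `support`.
    return sum(c * rbf_kernel(x, s, gamma) for c, s in zip(coeffs, support)) + bias

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 5))    # 200 training samples (illustrative data)
alphas = rng.normal(size=200)        # KMSE uses a coefficient per training sample
nodes = train[:10]                   # hypothetical node subset (selection not shown)
betas = rng.normal(size=10)          # EKMSE uses a coefficient per node

x = rng.normal(size=5)
f_kmse = kernel_feature(x, train, alphas)   # 200 kernel evaluations
f_ekmse = kernel_feature(x, nodes, betas)   # only 10 kernel evaluations
```

The per-sample extraction cost is O(N) kernel evaluations for KMSE versus O(M) for EKMSE with M nodes, which is the source of the speedup claimed in the abstract.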
Original language: English
Pages (from-to): 53-59
Number of pages: 7
Journal: Neural Computing and Applications
Volume: 23
Issue number: 1
DOIs
Publication status: Published - 1 Jul 2013

Keywords

  • Efficient kernel minimum squared error
  • Feature extraction
  • Kernel minimum squared error
  • Machine learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
