Abstract
The kernel minimum squared error (KMSE) model expresses the feature extractor as a linear combination of all the training samples in the high-dimensional kernel space. To extract a feature from a sample, KMSE must therefore evaluate as many kernel functions as there are training samples, so the computational cost of KMSE-based feature extraction grows linearly with the size of the training set. In this paper, we propose an efficient kernel minimum squared error (EKMSE) model for two-class classification. EKMSE expresses each feature extractor as a linear combination of nodes, which form a small subset of the training samples. To extract a feature from a sample, EKMSE needs to evaluate only as many kernel functions as there are nodes. Since the nodes are typically far fewer than the training samples, EKMSE extracts features much faster than KMSE. EKMSE achieves the same training accuracy as the standard KMSE and also avoids the overfitting problem. We implement the EKMSE model using two algorithms. Experimental results demonstrate the feasibility of the EKMSE model.
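The per-sample cost contrast described above can be illustrated with a minimal sketch. Note that the RBF kernel choice, the coefficient vectors `alpha`/`beta`, and the way the nodes are selected below are illustrative assumptions only; the paper's two algorithms for determining the nodes and their weights are not reproduced here.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kmse_feature(x, train_samples, alpha, gamma=1.0):
    """KMSE-style feature: a linear combination of kernel evaluations
    against all n training samples (n kernel calls per test sample)."""
    return sum(a * rbf_kernel(x, s, gamma)
               for a, s in zip(alpha, train_samples))

def ekmse_feature(x, nodes, beta, gamma=1.0):
    """EKMSE-style feature: the extractor is expressed over a small
    node subset of the training samples, so only m << n kernel
    evaluations are needed per test sample."""
    return sum(b * rbf_kernel(x, v, gamma)
               for b, v in zip(beta, nodes))

# Toy usage: n = 200 training samples, m = 10 nodes. The coefficients
# and node set here are random placeholders, not the paper's method.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
alpha = rng.normal(size=200)                # hypothetical KMSE weights
nodes, beta = X[:10], rng.normal(size=10)   # hypothetical node set

x_test = rng.normal(size=5)
f_full = kmse_feature(x_test, X, alpha)      # 200 kernel evaluations
f_fast = ekmse_feature(x_test, nodes, beta)  # 10 kernel evaluations
```

With 10 nodes instead of 200 training samples, the per-sample extraction cost drops by a factor of 20, which is the efficiency gain the abstract describes.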
| Original language | English |
| --- | --- |
| Pages (from-to) | 53-59 |
| Number of pages | 7 |
| Journal | Neural Computing and Applications |
| Volume | 23 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Jul 2013 |
Keywords
- Efficient kernel minimum squared error
- Feature extraction
- Kernel minimum squared error
- Machine learning
ASJC Scopus subject areas
- Artificial Intelligence
- Software