Abstract
The number of Support Vectors (SVs) in an SVM is usually large, which makes its classification speed substantially slower than that of many other approaches. Fewer SVs mean greater sparseness and faster classification, so reducing the number of SVs without loss of generalization performance is a significant problem both theoretically and practically. Based on the sparsity of SVs, it is proven that, when clustering the original SVs, the minimal upper bound on the error between the original decision function and the fast decision function is achieved by K-means clustering of the original SVs in input space. A new algorithm, the Fast Decision algorithm of Support Vector Machine (FD-SVM), is then proposed: it employs K-means to compress a dense SV set into a sparse one, taking the cluster centers as the new SVs, and then builds a quadratic programming model that minimizes the classification gap between SVM and FD-SVM to obtain the optimal coefficients of the new sparse SVs. Experiments on toy and real-world data sets demonstrate that, compared with the original SVM, the number of SVs decreases and classification speed increases, while the loss of accuracy is acceptable at the 0.05 significance level.
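The pipeline the abstract describes (train an SVM, K-means-cluster its SVs, refit coefficients for the reduced set) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the paper's quadratic programming step is replaced here by its simplest unconstrained form, a least-squares fit of the new coefficients on the training points, and the cluster count `k = 10`, the RBF bandwidth, and the helper names (`rbf`, `fast_decision`) are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

# Toy data and a standard RBF-kernel SVM (stand-in for the "original SVM").
X, y = make_classification(n_samples=400, n_features=2, n_redundant=0,
                           random_state=0)
gamma = 1.0
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

def rbf(A, B, gamma):
    """Pairwise RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

sv = clf.support_vectors_          # dense original SV set
dual = clf.dual_coef_.ravel()      # alpha_i * y_i for each original SV

# Step 1 (per the abstract): K-means the original SVs in input space;
# the cluster centers become the new, sparse SV set. k is an assumption.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(sv)
Z = km.cluster_centers_

# Step 2: refit coefficients beta for the new SVs by minimizing the gap
# between the original decision function f(x) and the fast one f_z(x).
# The paper formulates this as a QP; the unconstrained least-squares
# version below is the simplest stand-in for that step.
K_orig = rbf(X, sv, gamma)         # (n_samples, n_original_SVs)
K_new = rbf(X, Z, gamma)           # (n_samples, k)
target = K_orig @ dual             # f(x) - b evaluated on the data
beta, *_ = np.linalg.lstsq(K_new, target, rcond=None)

def fast_decision(Xq):
    """FD-SVM-style decision values using only the k cluster centers."""
    return rbf(Xq, Z, gamma) @ beta + clf.intercept_[0]

print(Z.shape, beta.shape)         # sparse SVs and their coefficients
```

Classifying with `fast_decision` requires `k` kernel evaluations per query instead of one per original SV, which is the source of the speed-up the abstract reports; how much accuracy this costs depends on `k` and the data.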
Original language | English |
---|---|
Pages (from-to) | 2181-2186 |
Number of pages | 6 |
Journal | Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology |
Volume | 33 |
Issue number | 9 |
DOIs | |
Publication status | Published - 1 Sept 2011 |
Keywords
- Fast classification
- K-means clustering
- Quadratic Programming (QP)
- Sparsity
- Support Vector Machine (SVM)
ASJC Scopus subject areas
- Electrical and Electronic Engineering