Fast decision using SVM for incoming samples

Zhan Cheng Zhang, Shi Tong Wang, Zhao Hong Deng, Fu Lai Korris Chung

Research output: Journal article publication › Journal article › Academic research › peer-review

7 Citations (Scopus)


The number of Support Vectors (SVs) in an SVM is usually large, which makes classification substantially slower than with many other approaches. Fewer SVs mean greater sparseness and higher classification speed, so reducing the number of SVs without loss of generalization performance is a significant problem both theoretically and practically. Based on the sparsity of SVs, it is proven that, when clustering the original SVs, the minimal upper bound on the error between the original decision function and the fast decision function is achieved by K-means clustering of the original SVs in the input space. A new algorithm, the Fast Decision algorithm of Support Vector Machine (FD-SVM), is then proposed: it employs K-means to compress a dense SV set into a sparse one, using the cluster centers as the new SVs, and then builds a quadratic programming model that minimizes the classification gap between SVM and FD-SVM to obtain the optimal coefficients of the new sparse SVs. Experiments on toy and real-world data sets demonstrate that, compared with the original SVM, the number of SVs decreases and classification speed increases, while the loss of accuracy is acceptable at the 0.05 significance level.
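The two-step scheme described in the abstract (compress the SV set with K-means, then refit coefficients for the cluster centers) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kernel parameter, number of clusters, and dataset are arbitrary choices, and an unconstrained least-squares fit stands in for the paper's quadratic programming model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Train a standard RBF-kernel SVM on a toy problem.
X, y = make_classification(n_samples=400, n_features=2, n_redundant=0,
                           random_state=0)
gamma = 1.0
svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

def rbf(A, B):
    # Pairwise RBF kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Step 1: compress the dense SV set with K-means; the cluster
# centers become the new, sparse "support vectors".
k = 10
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(
    svm.support_vectors_).cluster_centers_

# Step 2: choose coefficients for the new SVs so the compressed
# decision function matches the original one on the training set.
# (The paper solves a QP for this; a least-squares fit is used
# here as a simpler stand-in.)
f_orig = svm.decision_function(X)  # original SVM scores
beta, *_ = np.linalg.lstsq(rbf(X, centers), f_orig - svm.intercept_,
                           rcond=None)

def fd_decision(Xnew):
    # Fast decision function: k kernel evaluations per sample
    # instead of one per original SV.
    return rbf(Xnew, centers) @ beta + svm.intercept_

# The compressed classifier should agree with the SVM on most points.
agree = np.mean(np.sign(fd_decision(X)) == np.sign(f_orig))
print(f"{len(svm.support_vectors_)} SVs -> {k} centers, "
      f"agreement {agree:.2%}")
```

Classification cost drops from one kernel evaluation per original SV to one per cluster center, which is the source of the speed-up the abstract reports.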
Original language: English
Pages (from-to): 2181-2186
Number of pages: 6
Journal: Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology
Issue number: 9
Publication status: Published - 1 Sept 2011


Keywords

  • Fast classification
  • K-means clustering
  • Quadratic Programming (QP)
  • Sparsity
  • Support Vector Machine (SVM)

ASJC Scopus subject areas

  • Electrical and Electronic Engineering


