Abstract
Kernel learning is a powerful framework for nonlinear data modeling. Using the kernel trick, a number of problems have been formulated as semidefinite programs (SDPs). These include Maximum Variance Unfolding (MVU) (Weinberger et al., 2004) in nonlinear dimensionality reduction, and Pairwise Constraint Propagation (PCP) (Li et al., 2008) in constrained clustering. Although in theory SDPs can be efficiently solved, the high computational cost of numerically processing the large linear matrix inequality constraints has rendered the SDP approach unscalable. In this paper, we show that a large class of kernel learning problems can be reformulated as semidefinite-quadratic-linear programs (SQLPs), which contain only a simple positive semidefinite constraint, a second-order cone constraint, and a number of linear constraints. These constraints are much easier to process numerically, and the speedup over previous approaches is at least of order m^2.5, where m is the matrix dimension. Experimental results are also presented to show the superb computational efficiency of our approach.
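For context, the MVU problem mentioned in the abstract is usually written as an SDP over the kernel matrix K: maximize the variance trace(K) subject to K being positive semidefinite, centered, and isometric on neighboring points. The following is a minimal sketch of that original SDP formulation (not the paper's SQLP reformulation) in Python with cvxpy; the synthetic data, neighbor pairs, and solver choice are illustrative assumptions.

```python
# Minimal sketch of the standard MVU SDP (Weinberger et al., 2004),
# written with cvxpy. Data and neighbor structure are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m = 20                                  # number of points (matrix dimension)
X = rng.standard_normal((m, 3))         # toy input coordinates

# Squared distances to preserve, here on a simple chain of neighbor pairs.
pairs = [(i, (i + 1) % m) for i in range(m)]
d2 = {(i, j): float(np.sum((X[i] - X[j]) ** 2)) for i, j in pairs}

K = cp.Variable((m, m), PSD=True)       # learned kernel matrix, K >= 0
constraints = [cp.sum(K) == 0]          # centering: sum_ij K_ij = 0
for i, j in pairs:                      # isometry on neighbor pairs:
    constraints.append(K[i, i] + K[j, j] - 2 * K[i, j] == d2[(i, j)])

# MVU maximizes the variance of the embedding, i.e. trace(K).
prob = cp.Problem(cp.Maximize(cp.trace(K)), constraints)
prob.solve(solver=cp.SCS)
print("optimal variance (trace K):", prob.value)
```

Generic interior-point SDP solvers handle the m x m matrix inequality here at a cost that grows quickly with m, which is precisely the bottleneck the paper's SQLP reformulation is designed to avoid.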
| Original language | English |
| --- | --- |
| Title of host publication | Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference |
| Pages | 1964-1972 |
| Number of pages | 9 |
| Publication status | Published - 1 Dec 2009 |
| Externally published | Yes |
| Event | 23rd Annual Conference on Neural Information Processing Systems, NIPS 2009 - Vancouver, BC, Canada; 7 Dec 2009 → 10 Dec 2009 |
Conference
| Conference | 23rd Annual Conference on Neural Information Processing Systems, NIPS 2009 |
| --- | --- |
| Country/Territory | Canada |
| City | Vancouver, BC |
| Period | 7/12/09 → 10/12/09 |
ASJC Scopus subject areas
- Information Systems