Neural Process Lett (2011) 33:31–44
DOI 10.1007/s11063-010-9162-9
First and Second Order SMO Algorithms for LS-SVM
Classifiers
Jorge López · Johan A. K. Suykens
Published online: 5 December 2010
© Springer Science+Business Media, LLC. 2010
Abstract Least squares support vector machine (LS-SVM) classifiers have traditionally been trained with conjugate gradient algorithms. In this work, completing the study by Keerthi et al., we explore the applicability of the SMO algorithm for solving the LS-SVM problem, comparing First Order and Second Order working set selections and concentrating on the RBF kernel, which is the most usual choice in practice. It turns out that, over the full range of possible hyperparameter values, Second Order working set selection is altogether more convenient than First Order. In any case, whichever selection scheme is used, the number of kernel operations performed by SMO appears to scale quadratically with the number of patterns. Moreover, asymptotic convergence to the optimum is proved, and the rate of convergence is shown to be linear for both selections.
Keywords Least squares support vector machines · Sequential minimal optimization ·
Support vector classification · Working set selection
1 Introduction
Least squares support vector machines (LS-SVMs) were introduced in [1] as a simplification to support vector machines (SVMs) [2], where the inequality constraints are forced to become equality constraints and a least squares loss function is taken. In a binary classification context, we have a sample of N preclassified patterns {X_i, y_i}, i = 1, ..., N, where the outputs
J. López
Departamento de Ingeniería Informática and Instituto de Ingeniería del Conocimiento,
Universidad Autónoma de Madrid, C/Francisco Tomás y Valiente 11, 28049 Madrid, Spain
J. A. K. Suykens
Department of Electrical Engineering, ESAT-SCD/SISTA, Katholieke Universiteit Leuven,
Kasteelpark Arenberg 10, Heverlee, 3001 Leuven, Belgium
e-mail: Johan.Suykens@esat.kuleuven.be