Dual unification of bi-class Support Vector Machine formulations

L. González a, C. Angulo b, F. Velasco a and A. Català b

a COSDE Group, Depto. de Economía Aplicada I, Universidad de Sevilla, E-41018, Sevilla, Spain
b GREC, Universitat Politècnica de Catalunya, E-08800, Vilanova i Geltrú, Spain

Abstract

Support Vector Machine (SVM) theory was originally developed on the basis of a linearly separable binary classification problem, and other approaches to this problem have been introduced later. In this paper it is demonstrated that all these approaches admit the same dual problem formulation in the linearly separable case and that all their solutions are equivalent. For the non-linearly separable case, all the approaches can also be formulated as a single dual optimization problem; their solutions, however, are not equivalent. Discussions and remarks in the article point to an in-depth comparison between SVM formulations and their associated parameters.

Key words: SVM; large margin principle; bi-classification; optimization; convex hull.

1 Introduction

Support Vector Machines are learning machines which implement the structural risk minimization inductive principle to obtain good generalization from a limited number of learning patterns. The theory was originally developed by V. Vapnik on the basis of a linearly separable binary classification problem with signed outputs ±1 [1]. SVM exhibits sound theoretical properties and behavior in binary classification problems [2]. Many papers generalize the original bi-class approach to multi-classification problems [3,4] through different algorithms, such as 1-v-r SVM or 1-v-1 SVM. This paper unifies the known dual formulations for bi-class SVM approaches and improves their generalization ability when the proposed approach is used for multi-classification problems.
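For reference, the linearly separable problem underlying all the approaches unified here can be stated, in the standard notation of [1] (the symbols $w$, $b$ and $\alpha_i$ below follow that convention and are not defined in this section), as the large-margin primal problem together with its Wolfe dual:

\begin{align}
&\min_{w,\,b} \ \tfrac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i\left(\langle w, x_i\rangle + b\right) \ge 1, \quad i = 1,\dots,\ell, \\
&\max_{\alpha} \ \sum_{i=1}^{\ell} \alpha_i \;-\; \tfrac{1}{2}\sum_{i=1}^{\ell}\sum_{j=1}^{\ell} \alpha_i \alpha_j\, y_i y_j\, \langle x_i, x_j\rangle \quad \text{s.t.} \quad \alpha_i \ge 0, \quad \sum_{i=1}^{\ell} \alpha_i y_i = 0.
\end{align}

It is dual problems of this form, arising from the different bi-class formulations, that are shown to coincide in the linearly separable case.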