A Linear Combination of Classifiers via Rank Margin Maximization

Claudio Marrocco, Paolo Simeone, and Francesco Tortorella
DAEIMI - Università degli Studi di Cassino
Via G. Di Biasio 43, 03043 Cassino (FR), Italia
{c.marrocco,paolo.simeone,tortorella}@unicas.it

Abstract. The method we present aims at building a weighted linear combination of already trained dichotomizers, where the weights are determined to maximize the minimum rank margin of the resulting ranking system. This is particularly suited for real applications where it is difficult to exactly determine key parameters such as costs and priors. In such cases ranking is needed rather than classification. A ranker can be seen as a more basic system than a classifier, since it orders the samples according to the value assigned to each of them by the classifier. Experiments on popular benchmarks, along with a comparison with other typical rankers, show how effective the approach can be.

Keywords: Margin, Ranking, Combination of Classifiers.

1 Introduction

Many effective classification systems adopted in a variety of real applications make proficient use of combining techniques to solve two-class problems. As a matter of fact, the combination of classifiers is a reliable technique to improve the overall performance, since it exploits the strengths of the classifiers to be combined while reducing the effects of their weaknesses. Moreover, the fusion of already available classifiers gives the user the opportunity to obtain an optimized system simply and quickly, using them as building blocks and thus avoiding restarting the design of a new classification system from scratch. Several methods have been proposed to combine classifiers [11] and, among them, one of the most common techniques is certainly the linear combination of the outputs of the classifiers.
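As a minimal illustration of such a linear fusion (a sketch in which all function and variable names are ours, not taken from the paper), the following Python fragment combines the real-valued outputs of already trained dichotomizers with fixed weights and ranks the samples by their combined score:

```python
import numpy as np

def combined_score(outputs, weights):
    """Weighted linear combination of one sample's dichotomizer outputs."""
    return float(np.dot(weights, outputs))

def rank_samples(output_matrix, weights):
    """Rank samples (rows) by decreasing combined score."""
    scores = output_matrix @ weights
    return np.argsort(-scores)

# Toy example: 3 samples scored by 2 dichotomizers.
outputs = np.array([[0.9, 0.2],
                    [0.1, 0.8],
                    [0.5, 0.5]])
weights = np.array([0.7, 0.3])
order = rank_samples(outputs, weights)  # samples ordered by combined score
```

Here the weights are fixed by hand; the method of the paper instead chooses them to maximize the minimum rank margin of the combined system.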
Extended studies have been conducted on this issue [8], in particular on the weighted averaging strategies that are the basis of some popular algorithms such as Bagging [2] and Boosting [7]. Boosting techniques build a classifier as a convex combination of several weak classifiers; each of them is in turn generated by dynamically reweighting the training samples on the basis of the classification results provided by the weak classifiers already constructed. Such an approach has proved to be really effective in obtaining classifiers with good generalization characteristics. In this regard, the work of Schapire et al. [13] has analyzed the boosting approach in terms of margin maximization, where the margin is a measure of the accuracy confidence of a classifier, which can be considered an important indicator of its generalization capacity. They calculated an upper bound on the generalization

E.R. Hancock et al. (Eds.): SSPR & SPR 2010, LNCS 6218, pp. 650–659, 2010.
© Springer-Verlag Berlin Heidelberg 2010
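As a minimal sketch of the classification margin mentioned above, under the usual convention that labels lie in {-1, +1} and the margin of a sample is its label times the combined classifier's score (the names below are ours, not from the paper):

```python
import numpy as np

def margins(scores, labels):
    """Per-sample margins: label * score, with labels in {-1, +1}.
    A positive margin means the sample is correctly classified, and
    larger margins indicate higher confidence."""
    return labels * scores

def minimum_margin(scores, labels):
    """Smallest margin over the set: the quantity margin-maximizing
    combiners seek to make as large as possible."""
    return float(np.min(labels * scores))

# Toy example: combined scores and true labels for 3 samples.
scores = np.array([0.8, -0.3, 0.4])
labels = np.array([1, -1, 1])
# margins are [0.8, 0.3, 0.4], so the minimum margin is 0.3
```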