J.S. Marques et al. (Eds.): IbPRIA 2005, LNCS 3523, pp. 59–66, 2005.
© Springer-Verlag Berlin Heidelberg 2005
Dynamic and Static Weighting in Classifier Fusion
Rosa M. Valdovinos¹, J. Salvador Sánchez², and Ricardo Barandela¹

¹ Instituto Tecnológico de Toluca, Av. Tecnológico s/n, 52140 Metepec, México
{li_rmvr,rbarandela}@hotmail.com
² Dept. Llenguatges i Sistemes Informàtics, Universitat Jaume I, 12071 Castelló, Spain
sanchez@uji.es
Abstract. When a Multiple Classifier System is employed, one of the most popular methods to accomplish the classifier fusion is simple majority voting. However, when the performance of the ensemble members is not uniform, the efficiency of this type of voting is negatively affected. In this paper, a comparison between simple and weighted voting (both dynamic and static) is presented. New weighting methods, mainly in the direction of the dynamic approach, are also introduced. Experimental results with several real-problem data sets demonstrate the advantages of the weighting strategies over the simple voting scheme. When comparing the dynamic and the static approaches, results show that dynamic weighting is superior to the static strategy in terms of classification accuracy.
1 Introduction
A multiple classifier system (MCS) is a set of individual classifiers whose decisions are combined when classifying new patterns. There are many different reasons for combining multiple classifiers to solve a given learning problem [6], [12]. First, MCSs try to exploit the different local behavior of the individual classifiers to improve the accuracy of the overall system. Second, in some cases an MCS might not be better than the single best classifier, but it can diminish or eliminate the risk of picking an inadequate single classifier. Another reason for using an MCS arises from the limited representational capability of learning algorithms: it is possible that the classifier space considered for the problem does not contain the optimal classifier.
Let D = {D_1, ..., D_h} be a set of classifiers. Each classifier assigns an input feature vector x to one of the c problem classes. The output of an MCS is an h-dimensional vector containing the decisions of each of the h individual classifiers:

[D_1(x), ..., D_h(x)]^T                                  (1)
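As an illustrative sketch of Eq. (1), the decision vector can be formed by querying each member of the ensemble in turn. The three classifiers below are hypothetical stand-ins (simple threshold rules), not classifiers from the paper:

```python
# Sketch: collecting the decisions of h individual classifiers into the
# vector [D_1(x), ..., D_h(x)]^T of Eq. (1). The classifiers here are
# hypothetical placeholders, each mapping a feature vector to a label.

def d1(x):          # e.g. a threshold rule on the first feature
    return 0 if x[0] < 0.5 else 1

def d2(x):          # e.g. a threshold rule on the second feature
    return 0 if x[1] < 0.3 else 1

def d3(x):          # e.g. a rule on the sum of the features
    return 0 if sum(x) < 1.0 else 1

classifiers = [d1, d2, d3]          # D = {D_1, ..., D_h}, with h = 3

def decision_vector(x):
    """Return [D_1(x), ..., D_h(x)] for an input feature vector x."""
    return [d(x) for d in classifiers]

print(decision_vector([0.7, 0.1]))  # one label per individual classifier
```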
It is accepted that there are two main strategies in combining classifiers: selection and fusion. In classifier selection, each individual classifier is supposed to be an expert in a part of the feature space and therefore, we select only one classifier to label the input vector x. In classifier fusion, each component is supposed to have knowledge of the whole feature space and correspondingly, all individual classifiers decide the label of the input vector.
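For instance, fusion by plain majority voting over the individual decisions can be sketched as follows (the vote lists are illustrative, not results from the paper):

```python
from collections import Counter

# Sketch: classifier fusion by simple majority vote. Every classifier
# labels the input, and the most frequent label wins.

def majority_vote(decisions):
    """Return the class label receiving the most votes.

    Ties are broken by the first label reaching the top count,
    following Counter.most_common ordering.
    """
    return Counter(decisions).most_common(1)[0][0]

print(majority_vote([1, 0, 1]))  # two of three classifiers vote class 1
```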
Focusing on the fusion strategy, the combination can be made in many different
ways. The simplest one employs the majority rule in a plain voting system [4]. More