Applied Soft Computing Journal 82 (2019) 105580
Maximizing diversity by transformed ensemble learning
Shasha Mao a, Jia-Wei Chen b,a,∗, Licheng Jiao a, Shuiping Gou a, Rongfang Wang a
a Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, China
b School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
highlights
• A novel ensemble learning method is proposed to balance diversity and individual accuracy.
• The whole process can be implemented as linear transformations of the individuals.
• The derived objective function can be efficiently solved by an ADMM-like algorithm.
article info
Article history:
Received 10 July 2018
Received in revised form 8 June 2019
Accepted 11 June 2019
Available online xxxx
Keywords:
Ensemble learning
Linear transformation
Majority vote
Weighted ensemble
ADMM
abstract
The diversity and the individual accuracies in an ensemble system are usually two conflicting objectives, a fact ignored by most early ensemble learning algorithms. To alleviate this issue, this paper proposes a novel weighted ensemble learning method that maximizes the diversity and the individual accuracy simultaneously. More specifically, in the proposed framework, the combination of multiple base learners is formulated as a linear transformation of these base learners, and the optimal weights are obtained by pursuing the optimal projective direction of the linear transformation. The derived objective function can then be efficiently solved by the alternating direction method of multipliers (ADMM). Finally, the proposed method is verified on UCI datasets and face databases, and the experimental results show that it effectively improves performance compared with other ensemble methods.
© 2019 Elsevier B.V. All rights reserved.
1. Introduction
Over the past decades, ensemble learning [1,2] has proved successful in image classification and segmentation [3,4], face recognition [5–8], and medical image analysis [9,10]. Essentially, this success stems from the main spirit of ensemble learning: combining multiple base learners to form the final prediction [11]. Recently, ensemble learning has also been applied to data stream analysis [12] and classification [13].
Generally, in an ensemble system, base learners are independently generated and then combined to form the final prediction. Each learner, known as an individual, can be generated by numerous methods. Breiman [14] proposed the bagging method, which introduces the bootstrap algorithm [15] to generate multiple
diverse classifiers. Based on the spirit of bagging, Breiman proposed random forest [16], where a decision tree is taken as the base learner and each node is split on a subset of randomly selected features.
∗ Corresponding author. E-mail address: jawaechan@gmail.com (J.-W. Chen).
Recently, Kantchelian et al. [17]
proposed two novel algorithms for systematically computing evasions for tree ensembles to distinguish two similar instances from different categories. In contrast to bagging, Freund and Schapire [18–20] proposed Adaptive Boosting (AdaBoost), which generates diverse individual classifiers by adaptively selecting training samples based on the predictions of the individuals.
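The bagging-style generation of individuals described above can be sketched as follows. This is a toy illustration only: the synthetic 1-D data, the brute-force decision-stump learner, and all parameters (200 samples, 15 learners, 10% label noise) are made up for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data: label 1 when x > 0.5, with roughly 10% label noise.
X = rng.uniform(0.0, 1.0, size=200)
y = (X > 0.5).astype(int)
flip = rng.random(200) < 0.1
y[flip] = 1 - y[flip]

def train_stump(X, y):
    """Fit a threshold classifier (decision stump) by brute-force search."""
    thresholds = np.linspace(0.0, 1.0, 51)
    accs = [np.mean((X > t).astype(int) == y) for t in thresholds]
    return thresholds[int(np.argmax(accs))]

# Bagging: train each base learner on a bootstrap resample of the data.
n_learners = 15
stumps = []
for _ in range(n_learners):
    idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
    stumps.append(train_stump(X[idx], y[idx]))

def predict(X):
    # Combine the individuals by majority vote.
    votes = np.stack([(X > t).astype(int) for t in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)

ensemble_acc = np.mean(predict(X) == y)
```

The bootstrap resampling is what makes the individuals diverse: each stump sees a slightly different training set and therefore learns a slightly different threshold.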
When the individuals have been generated, they are combined by a score-level or a decision-level ensemble to make the final decision on each testing sample. In most early methods [14,21,22], all individuals are treated equally and combined by averaging. Later, Zhou et al. [8,23] proposed selective ensemble learning and Zhang et al. [24] proposed sparse ensemble learning. In these methods, a weight is assigned to each individual to indicate its contribution to the final decision, and the final decision is made from the weighted ensemble score.
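The score-level (equal-weight and weighted) and decision-level combinations can be contrasted with a minimal sketch. The scores and the weight vector below are made-up values for illustration; the weights are arbitrary, not ones learned by any of the cited methods.

```python
import numpy as np

# Hypothetical class-1 scores of three base learners on four test samples.
scores = np.array([[0.9, 0.4, 0.6, 0.2],
                   [0.8, 0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.9, 0.4]])

# Score-level ensemble with equal weights (simple averaging).
avg_pred = (scores.mean(axis=0) > 0.5).astype(int)

# Weighted score-level ensemble: each learner contributes according to
# its weight (arbitrary here, not learned).
w = np.array([0.7, 0.2, 0.1])
weighted_pred = (w @ scores > 0.5).astype(int)

# Decision-level ensemble: majority vote over hard decisions.
votes = (scores > 0.5).astype(int)
majority_pred = (votes.sum(axis=0) >= 2).astype(int)
```

Note that the weighted prediction differs from the majority vote on the second sample: the heavily weighted first learner scores it low, which pulls the weighted ensemble score below the threshold even though two of the three learners vote for class 1. This is exactly why the choice of weights matters.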
Kuncheva et al. [25] proposed a majority voting strategy, also known as the decision-level ensemble. Recently, Van et al. [26] proposed
https://doi.org/10.1016/j.asoc.2019.105580
1568-4946/© 2019 Elsevier B.V. All rights reserved.