Computers and Electrical Engineering 85 (2020) 106639
A weighted voting ensemble of efficient regularized extreme learning machine✩
Mohanad Abd Shehab a, Nihan Kahraman b,∗
a Electrical Engineering Department, Engineering College, Mustansiriyah University, Baghdad, Iraq
b Department of Electronics and Communication Engineering, Yildiz Technical University, Istanbul, Turkey
Article info
Article history:
Received 3 April 2019
Revised 4 January 2020
Accepted 12 March 2020
Keywords:
Extreme learning machines
Ensemble
PRESS
SVD
Weighted majority voting
Face recognition
Abstract
The exact evaluation of Extreme Learning Machine (ELM) compactness is difficult due to the randomness in the number of hidden layer nodes and in the weight and bias values. To overcome this randomness, and related problems such as overfitting and large variance, a selective weighted voting ensemble model based on regularized ELM (RELM) is investigated; it can strongly enhance overall performance, including accuracy, variance and time consumption. An efficient Prediction Sum of Squares (PRESS) criterion that utilizes Singular Value Decomposition (SVD) is proposed to address slow execution. Furthermore, an ensemble pruning approach based on the eigenvalues of the input weight matrix is developed. In this work, the weights of the ensemble base classifiers are calculated with the same PRESS error metric used to solve for the output weight vector (β) in RELM, which reduces computational cost and space requirements. Different state-of-the-art learning approaches and various well-known facial expression, face and object recognition benchmark datasets were examined in this work.
© 2020 Elsevier Ltd. All rights reserved.
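For reference, the PRESS criterion the abstract refers to is the standard leave-one-out prediction error; a sketch in assumed notation (H is the N × L hidden layer output matrix, t_i the i-th target, t̂_i the fitted value, h_ii a diagonal entry of the HAT matrix, C the regularization parameter), not necessarily the exact form used in the paper:

% Standard PRESS (leave-one-out) statistic for a ridge/RELM fit; notation assumed.
\mathrm{PRESS} = \sum_{i=1}^{N}\left(\frac{t_i - \hat{t}_i}{1 - h_{ii}}\right)^{2},
\qquad
\mathrm{HAT} = H\left(H^{\top}H + \tfrac{I}{C}\right)^{-1}H^{\top}.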
1. Introduction
Extreme Learning Machine (ELM) is a learning scheme for feedforward neural networks with a sufficient number of hidden neurons (L) and almost any nonlinear activation function. It uses arbitrarily chosen input weights and biases without any tuning. With a proper L, ELM can universally approximate any continuous function on any compact input set with zero or arbitrarily small error [1]. ELM has been proven an efficient and effective learning algorithm for classification, regression and many other tasks [1–5]. Many enhancements incorporated into this algorithm have enabled less human intervention, high computational scalability, small learning error, and good generalization ability at extremely fast learning speed, without the need for parameter adjustment; as a result, ELM has gained considerable importance in the scientific field [6,7]. However, producing a more robust and distinctive hidden layer is still a hot topic in the ELM community.
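To make the basic ELM training step concrete, the following minimal Python/NumPy sketch (our illustration; the function names, tanh activation and least-squares solver are assumptions, not the authors' implementation) draws random input weights and biases once and solves only the output weights:

import numpy as np

def elm_train(X, T, L=100, seed=0):
    # X: (N, d) input samples; T: (N, m) targets (e.g., one-hot class labels).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))      # random input weights, never tuned
    b = rng.standard_normal(L)                    # random hidden-layer biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta              # scores; argmax gives the class

Only beta is learned; W and b stay at their random draw, which is what makes ELM training fast but also what introduces the randomness discussed next.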
The ELM method has two parameters that are set by the user: the number of hidden layer neurons (L) and the variance of the hidden layer input weights (w). Improper initialization of (L) and/or (w) may impair the performance of the ELM model.
The random initialization of parameters, especially the weights and biases, may increase the complexity of the hidden layer and cause ill-conditioning. This can affect the model's generalization ability and may weaken the robustness of ELM against variations.
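A standard remedy for such ill-conditioning, consistent with the regularized ELM (RELM) used in this paper, is the ridge solution for the output weights; via the SVD H = UΣVᵀ it can be written without an explicit matrix inversion (a sketch in assumed notation, with regularization parameter C and singular values σ_k):

% Ridge/RELM output weights and their SVD form; notation assumed.
\beta = \left(H^{\top}H + \tfrac{I}{C}\right)^{-1} H^{\top} T
      = V \,\mathrm{diag}\!\left(\frac{\sigma_k}{\sigma_k^{2} + 1/C}\right) U^{\top} T .

The 1/C term bounds the amplification of small singular values, which is exactly what stabilizes the ill-conditioned case; the same SVD also yields the HAT diagonal h_ii = Σ_k u_ik² σ_k²/(σ_k² + 1/C) cheaply for many values of C, presumably the kind of saving the efficient PRESS criterion exploits.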
✩ This paper is for regular issues of CAEE. Reviews processed and recommended for publication to the Editor-in-Chief by Associate Editor Zhihong Man.
∗ Corresponding author.
E-mail addresses: mohanadshehab@uomustansiriyah.edu.iq (M.A. Shehab), nicoskun@yildiz.edu.tr (N. Kahraman).
https://doi.org/10.1016/j.compeleceng.2020.106639