Soft Computing
https://doi.org/10.1007/s00500-022-07745-x
MATHEMATICAL METHODS IN DATA SCIENCE
A combination of ridge and Liu regressions for extreme learning machine
Hasan Yıldırım¹ · M. Revan Özkale²
Accepted: 9 December 2022
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022
Abstract
Extreme learning machine (ELM), a type of feedforward neural network, has been widely used to obtain beneficial insights
in various disciplines and real-world applications. Despite advantages such as speed and high adaptability, ELM suffers from
instability in the presence of multicollinearity, and additional improvements are needed to overcome this. Regularization is
one of the best remedies for these drawbacks. Although ridge and Liu regressions have proved to be effective regularization
methods for the ELM algorithm, each has its own characteristics, such as the form of the tuning parameter, the level of
shrinkage, or the norm of the coefficients. Instead of focusing on only one of these regularization methods, we propose a
combination of ridge and Liu regressions in a unified form for the ELM context as a remedy to the aforementioned
drawbacks. To investigate the performance of the proposed algorithm, comprehensive comparisons have been carried out on
various real-world data sets. The results show that the proposed algorithm generalizes better than ELM and its variants based
on ridge and Liu regressions, RR-ELM and Liu-ELM. The generalization gain of the proposed algorithm over ELM is
remarkable compared to RR-ELM and Liu-ELM, and it increases as the number of nodes increases. The proposed
algorithm outperforms ELM on all data sets and for all node numbers in that it yields a smaller coefficient norm and a smaller
standard deviation of the norm. Additionally, the proposed algorithm can be applied to both regression and classification
problems.
Keywords Extreme learning machine · Regularization · Liu regression · Tikhonov regularization · Multicollinearity
1 Introduction
Feed-forward neural networks (FNNs) are regarded as powerful tools in machine learning due to their adaptability
to complex learning problems. However, difficulties arise in FNNs when choosing parameters such as the learning rate,
momentum, number of epochs, stopping criteria, input weights, and biases. Therefore, Huang et al. (2006) proposed a
learning algorithm called extreme learning machine (ELM), which overcomes slow learning speed and overfitting.

✉ Hasan Yıldırım
hasanyildirim@kmu.edu.tr

M. Revan Özkale
mrevan@cu.edu.tr

1 Department of Mathematics, Karamanoğlu Mehmetbey University, 70100 Karaman, Turkey

2 Department of Statistics, Çukurova University, 01330 Adana, Turkey

The logic
behind ELM is to generate the main network parameters, such as the input weights and biases, randomly and to train a
single hidden layer feed-forward network (SLFN) by solving a classic linear system. This logic gives ELM extra speed and
improves its learning and generalization performance.
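The random-weights-plus-linear-solve idea described above can be sketched in a few lines of numpy. This is a minimal illustration under assumed choices (tanh activation, 40 hidden nodes, pseudoinverse solution), not the authors' implementation; the function names are hypothetical:

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """Minimal ELM sketch: random, fixed input weights and biases;
    output weights obtained in closed form via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random biases (never trained)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: approximate y = x1 + x2 + x3.
X = np.random.default_rng(1).uniform(-1, 1, (200, 3))
y = X.sum(axis=1)
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
```

Because only `beta` is fitted, and in closed form, training reduces to one linear solve; this is the source of ELM's speed. It is also where multicollinearity in the hidden-layer matrix `H` causes the instability that ridge- and Liu-type regularization targets.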
In recent years, ELM has attracted considerable attention from researchers and is widely used in real-world
applications. Many studies on ELM have been published in different research areas, either to demonstrate its performance
or to adapt it to the application area for more accurate results. Some examples are as follows: telecommunications, for
developing a robust and precise indoor positioning system (IPS) (Zou et al. 2016) and for the evaluation of intrusion
detection mechanisms (Ahmad et al. 2018); neuroscience, for concept drift learning (Mirza and Lin 2016), for
discriminating preictal and interictal brain states in intracranial EEG (Song and Zhang 2016), and for pathological
brain detection (Lu et al. 2017); robotics, for building an