Applied Soft Computing 71 (2018) 747–782
A dynamic metaheuristic optimization model inspired by biological
nervous systems: Neural network algorithm
Ali Sadollah a, Hassan Sayyaadi a,∗, Anupam Yadav b
a School of Mechanical Engineering, Sharif University of Technology, Tehran 11155-9567, Iran
b Department of Sciences and Humanities, National Institute of Technology Uttarakhand, Srinagar (Garhwal) 246174, India
Article info
Article history:
Received 9 January 2018
Received in revised form 8 June 2018
Accepted 18 July 2018
Available online 21 July 2018
Keywords:
Neural network algorithm
Artificial neural networks
Metaheuristics
Global optimization
Iterative convergence
Abstract
In this research, a new metaheuristic optimization algorithm, inspired by biological nervous systems and artificial neural networks (ANNs), is proposed for solving complex optimization problems. The proposed method, named the neural network algorithm (NNA), is developed based on the unique structure of ANNs. The NNA benefits from the complex structure of ANNs and their operators in order to generate new candidate solutions. In terms of convergence proof, the relationship between improvised exploitation and each parameter under an asymmetric interval is derived, and the iterative convergence of the NNA is proved theoretically. In this paper, the NNA, with its interconnected computing units, is examined on 21 well-known unconstrained benchmarks with dimensions 50–200 to evaluate its performance against state-of-the-art algorithms and recent optimization methods. Besides, several constrained engineering design problems have been investigated to validate the efficiency of the NNA in searching the feasible region of constrained optimization problems. Requiring no effort to fine-tune initial parameters and being statistically superior distinguish the NNA from other reported optimizers. It can be concluded that ANNs and their particular structure can be successfully utilized and modeled as a metaheuristic optimization method for handling optimization problems.
© 2018 Elsevier B.V. All rights reserved.
1. Introduction
Among optimization approaches, metaheuristic optimization algorithms have shown their capability of finding near-optimal solutions to numerical real-valued test problems. In contrast,
analytical approaches may not detect the optimal solution within
a reasonable computational time, especially when the global min-
imum is surrounded by many local minima.
Metaheuristic algorithms are usually inspired by observing phe-
nomena and rules seen in nature such as the Genetic Algorithm (GA)
[1], the Simulated Annealing (SA) [2], the Particle Swarm Optimiza-
tion (PSO) [3], the Harmony Search (HS) [4], and so forth.
The GA is based on the genetic process of biological organisms [5]. Over many generations, natural populations evolve according to the principles of natural selection, i.e., survival of the fittest. In the GA, a potential solution to a problem is represented as a set of parameters. Each independent variable is represented by a gene. Combining the genes produces a chromosome, which represents a solution (individual).
∗ Corresponding author. E-mail address: sayyaadi@sharif.edu (H. Sayyaadi).
During the reproduction phase, individuals are selected from the population and recombined. Having selected two parents, their chromosomes are recombined, typically using a crossover mechanism. Also, in order to maintain population diversity, a mutation operator is applied to some individuals [1]. The GA has been utilized for solving various optimization problems in the literature and is a well-known optimization method [6–9].
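The GA cycle described above (fitness-based selection, crossover of two parent chromosomes, and mutation for diversity) can be sketched as follows. This is a minimal illustrative example, not the NNA or any implementation from this paper; the sphere objective and all parameter values (population size, rates, tournament size) are assumptions chosen for the sketch.

```python
import random

def sphere(x):
    """Assumed test objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def ga_minimize(dim=5, pop_size=20, generations=100,
                crossover_rate=0.9, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    # Each chromosome is a list of genes (real-valued decision variables).
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sphere)          # rank by fitness
        new_pop = [pop[0], pop[1]]    # elitism: keep the two fittest
        while len(new_pop) < pop_size:
            # Tournament selection of two parents (survival of the fittest)
            p1 = min(rng.sample(pop, 3), key=sphere)
            p2 = min(rng.sample(pop, 3), key=sphere)
            # Single-point crossover recombines the parent chromosomes
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, dim)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Mutation perturbs some genes to maintain population diversity
            child = [g + rng.gauss(0, 0.1) if rng.random() < mutation_rate
                     else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=sphere)

best = ga_minimize()
```

Because of elitism, the best fitness in the population never worsens between generations; selection pressure and mutation then drive it toward the optimum.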
The origins of SA lie in the analogy between optimization and a physical annealing process [2]. Annealing refers to an analogy with thermodynamics, specifically with the way that metals cool and anneal. The SA is basically hill climbing, except that instead of picking the best move, it picks a random move. If the selected move improves the solution, it is always accepted. Otherwise, the algorithm makes the move anyway with some probability less than one. The probability decreases exponentially with the badness of the move, i.e., the amount by which the solution is worsened. A parameter T is used to determine this probability. At
higher values of T, uphill moves are more likely to occur. As T tends
to zero, they become more and more unlikely. In a typical SA opti-
mization, T starts with a high value and then, its value is gradually
decreased according to an annealing schedule [2]. The SA is useful
in finding global optima in the presence of large numbers of local
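The acceptance rule and annealing schedule described above can be sketched as follows. This is an illustrative one-dimensional example with an assumed geometric cooling schedule and assumed parameter values, not an implementation from [2] or from this paper; it returns the final accepted state rather than tracking the best-so-far.

```python
import math
import random

def simulated_annealing(f, x0, t_start=10.0, t_end=1e-3, alpha=0.95, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    t = t_start
    while t > t_end:
        # Pick a random move (not the best move, as in hill climbing)
        cand = x + rng.uniform(-1.0, 1.0)
        fc = f(cand)
        delta = fc - fx  # "badness": amount by which the solution worsens
        # Improving moves are always accepted; worsening moves are accepted
        # with probability exp(-delta/T), which shrinks as T tends to zero.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fc
        t *= alpha       # annealing schedule: T decreases gradually
    return x, fx

# Assumed toy objective: a parabola with its minimum at x = 3.
x_best, f_best = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-10.0)
```

Early on, the high T lets the search accept uphill moves and escape local minima; as T decays, the process degenerates into pure hill climbing around the region it has settled in.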
https://doi.org/10.1016/j.asoc.2018.07.039