Applied Soft Computing 12 (2012) 1765–1786
Momentum coefficient for promoting accuracy and convergence speed of
evolutionary programming
Yousef Alipouri a, Javad Poshtan a, Yagub Alipouri b, Mohammad Reza Alipour c,∗

a Electrical Engineering Department, Iran University of Science and Technology, Tehran, Iran
b Department of Civil Engineering, Amirkabir University of Technology, Tehran, Iran
c Tuberculosis and Lung Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
Article info
Article history:
Received 8 August 2010
Received in revised form 11 August 2011
Accepted 10 January 2012
Available online 18 February 2012
Keywords:
Evolutionary programming
Gathering point
Mean value
Momentum Coefficient Evolutionary Programming
Abstract
Many practical problems reduce to solving optimization problems, and many methods have been introduced for this purpose. The need for algorithms that find global minima quickly and accurately is ever increasing. One promising approach is a heuristic, iterative method called Evolutionary Programming (EP), a computational optimization method that has been implemented in many practical applications. Many papers have demonstrated the capability of this algorithm on a variety of optimization problems, and these studies have opened a vast and interesting field of research. Recently, many methods have been proposed for improving the performance of EP in finding the optimum point of functions or applications; however, EP has shortcomings that cause slow convergence on some functions, especially multimodal ones. By overcoming these shortcomings, EP could become more effective in the optimization research field. This paper introduces new methods for overcoming these disadvantages and improving the performance of EP. One of these methods, which achieves the best results on the cost functions, changes the search procedure by adding a new factor to offspring production that pulls offspring toward a gathering point (the mean value of the parents). This method was tested on 50 well-known test functions from the literature and compared with state-of-the-art algorithms on twenty-two new cost functions. Finally, a hybrid of Classical EP (CEP) and Momentum Coefficient Evolutionary Programming (MCEP), called IMCEP (Improved Momentum Coefficient Evolutionary Programming), is introduced. The reported results show the efficiency of MCEP and IMCEP.
© 2012 Elsevier B.V. All rights reserved.
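The gathering-point idea from the abstract can be illustrated with a short sketch: each offspring is a Gaussian mutation of its parent, plus an extra term that pulls it toward the mean of the parent population. The function name `mcep_offspring` and the parameters `eta` (momentum coefficient) and `sigma` (mutation scale) are illustrative assumptions, not the paper's exact formulation.

```python
import random

def mcep_offspring(parents, eta=0.5, sigma=0.1):
    """Sketch of a momentum-coefficient EP step (illustrative, not the
    paper's exact update rule): Gaussian mutation of each parent, with
    an added pull toward the gathering point (mean of the parents)."""
    dim = len(parents[0])
    # Gathering point: component-wise mean of the parent population.
    mean = [sum(p[d] for p in parents) / len(parents) for d in range(dim)]
    offspring = []
    for p in parents:
        child = [
            x + random.gauss(0.0, sigma)  # classical EP-style Gaussian mutation
            + eta * (mean_d - x)          # momentum pull toward the gathering point
            for x, mean_d in zip(p, mean)
        ]
        offspring.append(child)
    return offspring
```

With `eta = 0` this reduces to plain Gaussian mutation; larger `eta` draws offspring more strongly toward the population mean, which is the mechanism the abstract credits for faster convergence on multimodal functions.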
1. Introduction
Darwinian evolution, proposed in 1859, is intrinsically a robust search and optimization mechanism. Darwin's principle of the "survival of the fittest" captured the popular imagination and can serve as a starting point for introducing evolutionary computation.
The theory of natural selection proposes that plants and animals
that exist today are the result of millions of years of adaptation to
the demands of the environment. Evolutionary computation (EC)
techniques abstract these evolutionary principles into algorithms
that may be used to search for optimal solutions to a problem. In a
search algorithm, a number of possible solutions to a problem are available, and the task is to find the best solution possible in a fixed amount of time.

∗ Corresponding author.
E-mail addresses: alipouri yousef@elec.iust.ac.ir (Y. Alipouri), jposhtan@iust.ac.ir (J. Poshtan), yagub.alipouri@aut.ac.ir (Y. Alipouri), alipourmr52@gmail.com (M.R. Alipour).
In the case of evolutionary computation, there are four historical
paradigms that have served as the basis for much of the activity of
the field: Genetic Algorithms (GA) [1], Genetic Programming (GP)
[2], Evolution Strategies (ES) [3], and Evolutionary Programming (EP) [4]. The basic differences between these paradigms lie
in the nature of the representation schemes, the reproduction and
mutation operators and the selection methods [5].
These methods have drawn much attention in the research
community in conjunction with parallel and/or distributed computations. EP, in particular, was initially studied as a method for generating artificial intelligence [6,7].
In the 1960s, Fogel developed EP, originally to solve problems in weather forecasting. He proposed a finite-space evolutionary model whose mutations were based on uniform stochastic distributions. In the 1990s, Fogel extended the ideas of EP to applications involving real number spaces, which marked the beginning of solving optimization problems in real number space. After being
doi:10.1016/j.asoc.2012.01.010