Local Learning and Search in Memetic Algorithms
Frederico G. Guimarães, Student Member, IEEE, Elizabeth F. Wanner, Student Member, IEEE,
Felipe Campelo, Student Member, IEEE, Ricardo H.C. Takahashi, Member, IEEE,
Hajime Igarashi, Member, IEEE, David A. Lowther, Member, IEEE, Jaime A. Ramírez, Member, IEEE
Abstract— The use of local search in evolutionary techniques
is believed to enhance the performance of the algorithms, giving
rise to memetic or hybrid algorithms. However, in many contin-
uous optimization problems the additional cost required by local
search may be prohibitive. Thus we propose the local learning of
the objective and constraint functions prior to the local search
phase of memetic algorithms, based on the samples gathered
by the population through the evolutionary process. The local
search operator is then applied over this approximated model.
We perform some experiments by combining our approach
with a real-coded genetic algorithm. The results demonstrate
the benefit of the proposed methodology for costly black-box
functions.
I. INTRODUCTION
The combination of local search operators and evolutionary techniques is argued to greatly improve the performance of the basic evolutionary technique, combining the global search ability of these methods with the advantages provided by local search techniques. This class of hybrid methods is known as memetic algorithms (MAs) [1], [2], [3]. The main idea of memetic evolution is that a given individual in the population may be improved through its own individual evolution.
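The memetic template described above can be sketched as follows. This is a minimal illustration, not the algorithm of this paper: the Gaussian mutation and the simple hill-climber stand in for whatever global and local operators a concrete MA would use.

```python
import random

def local_search(f, x, step=0.1, iters=20):
    """Hill-climbing refinement of a single individual (illustrative stand-in)."""
    best, fbest = x, f(x)
    for _ in range(iters):
        cand = [xi + random.gauss(0.0, step) for xi in best]
        fcand = f(cand)
        if fcand < fbest:
            best, fbest = cand, fcand
    return best

def memetic_step(f, population, elite_frac=0.2):
    """One generation: evolutionary variation plus local refinement of elites."""
    # Global phase: Gaussian mutation stands in for crossover/mutation operators.
    offspring = [[xi + random.gauss(0.0, 0.5) for xi in ind] for ind in population]
    pool = sorted(population + offspring, key=f)
    survivors = pool[:len(population)]
    # Memetic phase: refine the best fraction of survivors with local search.
    n_elite = max(1, int(elite_frac * len(survivors)))
    for i in range(n_elite):
        survivors[i] = local_search(f, survivors[i])
    return sorted(survivors, key=f)

# Usage: minimize the 3-D sphere function.
random.seed(0)
sphere = lambda x: sum(xi * xi for xi in x)
pop = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(20)]
for _ in range(30):
    pop = memetic_step(sphere, pop)
```

Note that every call of `local_search` spends extra evaluations of `f`; this is precisely the cost that becomes prohibitive when `f` is expensive.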
MAs were initially developed for combinatorial optimization problems [4], [5], [6], exploring local search operators highly specific to the problem being solved. In this context, MAs have outperformed the basic evolutionary techniques in many applications. It did not take long, however, for works dealing with continuous search spaces to appear in the literature [7], [8], [9]. In continuous spaces the local search phase consumes a great number of function evaluations. This characteristic is not critical for optimization problems in which the objective function evaluation is fast, a class that includes combinatorial optimization problems and some continuous optimization problems. Conversely, there are many real-world problems, particularly some engineering problems,
whose objective function demands much time to evaluate, from some seconds to minutes. Optimization problems associated with computer-aided design (CAD), in which the designer needs to model electromagnetic, thermal, or fluid phenomena leading to the implicit solution of differential or integral equations, often fall into this class [10]. Multiplying the time of a single evaluation by the thousands of evaluations required for one run of an evolutionary algorithm yields a computationally expensive optimization process. In this context, local search operators raise the computational cost to the point of making the employment of MAs prohibitive.

F.G. Guimarães, E.F. Wanner, and J.A. Ramírez are with the Department of Electrical Engineering, Federal University of Minas Gerais, Av. Antônio Carlos, 6627, Belo Horizonte, MG, 31270-010, Brazil (e-mail: fgg@ufmg.br, elizabeth@cpdee.ufmg.br, jramirez@ufmg.br).
R.H.C. Takahashi is with the Department of Mathematics, Federal University of Minas Gerais, Av. Antônio Carlos, 6627, Belo Horizonte, MG, Brazil (e-mail: taka@mat.ufmg.br).
D.A. Lowther is with the Department of Electrical and Computer Engineering, McGill University, Montreal, Canada (e-mail: david.lowther@mcgill.ca).
F. Campelo and H. Igarashi are with the Research Group of Informatics for System Synthesis, Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Japan (fax: +81 11-706-7670; e-mail: pinto@em-si.eng.hokudai.ac.jp).
We provide an alternative methodology for using MAs in continuous and costly optimization problems. It is based on the employment of local approximations before the local search phase. When an individual is selected for local search, it "builds" a local model of the function behavior, and the local search operator then uses the estimates provided by this model to enhance the individual. The local model is generated by learning the input-output mapping performed by the black-box function, based on the current and past samples gathered during the evolutionary process. Evolutionary algorithms can be viewed as adaptive sampling techniques, in the sense that they sample the search space while seeking the optimal solution. This adaptive sampling process is guided by the heuristic operators of the algorithm, which direct the search to the most promising regions. Since we are dealing with an expensive-to-evaluate function, each sample is very valuable. We may store all samples in an input-output data set, which represents all the knowledge acquired by the algorithm about the problem. Points from this data set in the vicinity of the individual may then be used to fit a given parameterized model, for instance a neural network, that provides a local approximation to the function.
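The archive-plus-local-model idea can be illustrated with a deliberately simple surrogate. Here, inverse-distance weighting over the k nearest archived samples is a cheap stand-in for the parameterized models (e.g., neural networks) mentioned above; the point is the workflow, not the specific model.

```python
import random

class SampleArchive:
    """Stores every (x, f(x)) pair evaluated during the run."""
    def __init__(self):
        self.points = []                  # list of (x, fx), x stored as a tuple
    def add(self, x, fx):
        self.points.append((tuple(x), fx))
    def neighbors(self, x, k):
        """The k archived points closest to x (Euclidean distance)."""
        return sorted(self.points,
                      key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]

def local_model(archive, x, k=8):
    """Local surrogate of f around x, fitted to the k nearest archived samples.

    Uses inverse-distance weighting as a stand-in for a trained model;
    local search can query the returned estimator instead of the true f."""
    pts = archive.neighbors(x, k)
    def estimate(q):
        num = den = 0.0
        for xi, fi in pts:
            d2 = sum((a - b) ** 2 for a, b in zip(xi, q))
            if d2 == 0.0:
                return fi                 # exact sample available at q
            w = 1.0 / d2
            num += w * fi
            den += w
        return num / den
    return estimate

# Usage: learn a local model of the 2-D sphere function from archived samples.
random.seed(1)
archive = SampleArchive()
for _ in range(50):
    x = [random.uniform(-2.0, 2.0) for _ in range(2)]
    archive.add(x, sum(xi * xi for xi in x))
surrogate = local_model(archive, [0.5, 0.5])
```

Because the archive is filled as a by-product of the evolutionary process, building `surrogate` costs no additional evaluations of the true function.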
The use of approximations to deal with costly functions is not new. A traditional approach is to sample the functions, generally at random, prior to the optimization [11], [12], [13]. A global approximation is built and the optimization is performed over this global model, which can be static or dynamic, a dynamic model being further refined during the optimization process. This approach is very limited, however: for more complex functions and higher dimensions, many samples are necessary to produce a good approximation, and the complexity of the model must increase in order to capture the global behavior of the function.
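A back-of-envelope count (ours, for illustration) makes this scaling problem concrete: covering the unit cube [0, 1]^d with n sample points per axis requires n^d samples, so dense global coverage becomes infeasible very quickly as the dimension grows.

```python
def grid_samples(d, n_per_axis=10):
    """Number of points in a regular grid over [0, 1]^d
    with n_per_axis points along each axis."""
    return n_per_axis ** d

for d in (1, 2, 5, 10):
    print(d, grid_samples(d))   # 10, 100, 100000, 10000000000
```

A local model sidesteps this count: it only needs enough samples to describe the function in the vicinity of the individual being refined.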
0-7803-9487-9/06/$20.00/©2006 IEEE
2006 IEEE Congress on Evolutionary Computation
Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada
July 16-21, 2006