Journal of Intelligent & Fuzzy Systems 36 (2019) 4957–4967
DOI:10.3233/JIFS-179042
IOS Press
Cellular Estimation Gaussian Algorithm
for Continuous Domain
Yoan Martínez-López (a), Julio Madera (a), Ansel Y. Rodríguez-González (b,c,*) and Stephen Barigye (d)
(a) Department of Computer Sciences, Faculty of Informatics, Camagüey University, Camagüey City, Camagüey, Cuba
(b) Mexican National Research Council (CONACyT), Mexico
(c) CICESE-UT3, Ciudad del Conocimiento, Tepic, Nayarit, México
(d) Department of Chemistry, McGill University, Montreal, Canada
Abstract. Optimization algorithms are important in pattern recognition and artificial intelligence problems, e.g., image recognition, face recognition, data analysis and optical recognition. Estimation of distribution algorithms (EDAs) are a class of optimization algorithms that replace the crossover and mutation operators of genetic algorithms with the estimation, and subsequent sampling, of a probability distribution learned from the selected individuals. However, a weakness of these algorithms is their efficiency in terms of the number of evaluations of the fitness function. In this paper, a Cellular Estimation Gaussian Algorithm (CEGA) for solving continuous optimization problems is proposed. CEGA combines evidence-based learning of independencies with decentralized schemes of local populations. The experimental results show that the proposal reduces the number of fitness-function evaluations required to find optima while remaining as effective as state-of-the-art algorithms on the same benchmark of continuous functions.
Keywords: Cellular EDA, learning, probabilistic graph model, Gaussian networks
1. Introduction
Estimation of Distribution Algorithms (EDAs) [1, 2] have been widely used to find solutions to discrete [3] and continuous [4] optimization problems. These algorithms replace the crossover and mutation operators of Genetic Algorithms (GAs) [5, 6] with the estimation, and subsequent sampling, of the probability distribution learned from the selected individuals. In many optimization problems there are dependencies between the variables, which are not inferred by most current optimization methods (Genetic Algorithms, Particle Swarm Optimization, etc.). EDAs detect these dependencies using statistical techniques. The main advantage of
* Corresponding author. Ansel Y. Rodríguez-González. E-mail: ansel@cicese.mx.
EDAs over GAs is that they estimate the values of the variables through a probability distribution, while Genetic Algorithms seek a solution by operating directly on the encoded variables.
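The estimate-then-sample loop described above can be sketched for the univariate continuous case roughly as follows. This is a generic UMDA_c-style sketch under our own assumptions about initialization (uniform in [-5, 5]) and selection (truncation), not the CEGA proposed in this paper:

```python
import random
import statistics

def gaussian_eda(fitness, dim, pop_size=50, sel_size=25, generations=100):
    """Minimal univariate Gaussian EDA (UMDA_c-style) minimizing `fitness`."""
    # Initial population: uniform in [-5, 5]^dim (an illustrative choice).
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the best sel_size individuals.
        pop.sort(key=fitness)
        selected = pop[:sel_size]
        # Estimation: fit one Gaussian per variable to the selected set.
        mu = [statistics.mean(ind[i] for ind in selected) for i in range(dim)]
        sd = [statistics.stdev(ind[i] for ind in selected) + 1e-9
              for i in range(dim)]
        # Sampling: draw the next population from the learned distribution.
        pop = [[random.gauss(mu[i], sd[i]) for i in range(dim)]
               for _ in range(pop_size)]
    return min(pop, key=fitness)

random.seed(0)
best = gaussian_eda(lambda x: sum(v * v for v in x), dim=3)
```

In contrast, a GA would recombine and mutate the encoded individuals directly, without ever building an explicit probability model.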
For continuous optimization problems, several EDAs have been proposed: UMDA_c [4], PBIL_c [7], MIMIC_c [8], EMNA (and its variants) [8–10] and PolyEDA [11]. However, a weakness of these algorithms, as of the EDAs for discrete optimization, is their efficiency in terms of the number of evaluations of the objective function. To deal with this weakness in discrete optimization, a new kind of EDA, the cellular EDA, was proposed [12, 13], which decentralizes the individuals of the population. To the best of our knowledge, however, this idea has not been applied to continuous optimization problems.
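To make the decentralization idea concrete, one toy scheme (our own illustration on a ring topology, not the algorithm proposed in this paper) updates each individual from a Gaussian estimated only over its local neighborhood:

```python
import random
import statistics

def cellular_gaussian_step(grid, fitness):
    """One generation of a toy cellular Gaussian EDA on a ring topology."""
    n, dim = len(grid), len(grid[0])
    new_grid = []
    for i, cell in enumerate(grid):
        # Local population: the cell and its two ring neighbors.
        neigh = [grid[(i - 1) % n], cell, grid[(i + 1) % n]]
        # Estimate a univariate Gaussian per variable from the neighborhood.
        mu = [statistics.mean(ind[d] for ind in neigh) for d in range(dim)]
        sd = [statistics.stdev(ind[d] for ind in neigh) + 1e-9
              for d in range(dim)]
        # Sample one candidate; keep it only if it improves on the cell.
        cand = [random.gauss(mu[d], sd[d]) for d in range(dim)]
        new_grid.append(cand if fitness(cand) < fitness(cell) else cell)
    return new_grid

def sphere(x):
    return sum(v * v for v in x)

random.seed(1)
grid = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(8)]
best_before = min(sphere(c) for c in grid)
for _ in range(40):
    grid = cellular_gaussian_step(grid, sphere)
best_after = min(sphere(c) for c in grid)
```

Because each cell only exchanges information with its neighbors, selection pressure spreads gradually through the grid, which is the mechanism cellular EDAs use to preserve diversity and reduce fitness evaluations.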
ISSN 1064-1246/19/$35.00 © 2019 – IOS Press and the authors. All rights reserved