424 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 9, NO. 4, AUGUST 2005
A Generic Framework for Constrained Optimization
Using Genetic Algorithms
Sangameswar Venkatraman and Gary G. Yen, Senior Member, IEEE
Abstract—In this paper, we propose a generic, two-phase frame-
work for solving constrained optimization problems using genetic
algorithms. In the first phase of the algorithm, the objective func-
tion is completely disregarded and the constrained optimization
problem is treated as a constraint satisfaction problem. The ge-
netic search is directed toward minimizing the constraint violation
of the solutions and eventually finding a feasible solution. A linear
rank-based approach is used to assign fitness values to the indi-
viduals. The solution with the least constraint violation is archived
as the elite solution in the population. In the second phase, the
simultaneous optimization of the objective function and the sat-
isfaction of the constraints are treated as a biobjective optimiza-
tion problem. We elaborate on how the constrained optimization
problem requires a balance of exploration and exploitation under
different problem scenarios and conclude that a nondominated ranking
between the individuals helps the algorithm explore further, while the
elitist scheme facilitates exploitation. We analyze the proposed
algorithm under different problem scenarios using Test Case Generator-2
and demonstrate its capability to perform well independent of various
problem characteristics. In addition, the proposed algorithm performs
competitively with state-of-the-art constrained optimization algorithms
on 11 test cases that are widely studied benchmark functions in the
literature.
Index Terms—Constrained optimization, constraint handling,
genetic algorithm (GA), hyperheuristic.
I. INTRODUCTION
MOST REAL-WORLD optimization problems involve
constraints. Consider an optimization problem such
as maximizing the profits of a particular production line. The
objective function to be maximized could be a function of
various manipulating variables, including but not limited to the
material consumption, the labor cost, the operating hours of the
machines, and many additional factors. If the raw materials,
manpower, and machines can be made available without limi-
tation then there is no limit to the profit that can be achieved.
However, in the face of real-world complications, these resources are
most likely limited, and the limits take the form of constraints imposed
on the optimization problem. The difficulty of a constrained
optimization problem arises from the limits on the decision variables,
the constraints involved, the interference among constraints, and the
interrelationship between the constraints and the objective function.
Manuscript received May 19, 2004; revised November 17, 2004. This work
was supported in part by the Center for Aircraft Systems/Support Infrastructure
(CASI) and in part by the Oklahoma City Air Logistics Center.
The authors are with the Intelligent Systems and Control Laboratory, School
of Electrical and Computer Engineering, Oklahoma State University, Stillwater,
OK 74078 USA (e-mail: gyen@ceat.okstate.edu).
Digital Object Identifier 10.1109/TEVC.2005.846817
As a numerical example, suppose we want to maximize a function
$f(x_1, x_2)$ of two bounded decision variables. In the absence of any
additional constraint, the optimum is attained when both variables sit
at their bounds. Now assume that an equality constraint is imposed on
these variables. Considering a resolution of up to two decimal places in
the discretized search space, only 50 of the 10 000 candidate solutions
satisfy the constraint; that is, the feasible space is only 0.5% of the
actual parameter space, and the best attainable objective function value
is reduced accordingly. The problem complexity can be greatly increased
by the number of constraints or the types of constraints. The general constrained
continuous-parameter optimization problem, as succinctly defined in
[29], is to search for $\vec{x}$ so as to

  minimize $f(\vec{x})$, $\vec{x} = (x_1, \ldots, x_n) \in \mathbb{R}^n$   (1)

where $\vec{x} \in \mathcal{F} \subseteq \mathcal{S}$. The objective
function $f$ is defined on the search space
$\mathcal{S} \subseteq \mathbb{R}^n$, and the set
$\mathcal{F} \subseteq \mathcal{S}$ defines the feasible region.
Usually, the search space $\mathcal{S}$ is an $n$-dimensional hyperbox
in $\mathbb{R}^n$. The domains of the variables are defined by their
lower and upper bounds as

  $l(i) \le x_i \le u(i), \quad 1 \le i \le n$   (2)

whereas the feasible region $\mathcal{F} \subseteq \mathcal{S}$ is
restricted by a set of $m \ge 0$ additional constraints

  $g_j(\vec{x}) \le 0, \quad j = 1, \ldots, q$   (3)

and

  $h_j(\vec{x}) = 0, \quad j = q + 1, \ldots, m.$   (4)
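To make formulation (1)-(4) concrete, the sketch below encodes a small
hypothetical problem (the functions `f`, `g1`, and `h1` are illustrative
choices, not the paper's own example), measures the total constraint
violation of a candidate, and counts the feasible candidates when the
search space is discretized to two decimal places, mirroring the
counting argument in the example above.

```python
import itertools

# Hypothetical instance of (1)-(4); these specific functions are
# illustrative assumptions, not taken from the paper.
def f(x1, x2):
    # Objective (1): minimize a simple quadratic bowl.
    return (x1 - 0.7) ** 2 + (x2 - 0.3) ** 2

def g1(x1, x2):
    # Inequality constraint (3): g1(x) <= 0.
    return x1 + x2 - 1.5

def h1(x1, x2):
    # Equality constraint (4): h1(x) = 0.
    return x1 - 2.0 * x2

def violation(x1, x2):
    """Total constraint violation: zero iff the candidate is feasible.
    Inequalities contribute max(0, g); equalities contribute |h|."""
    return max(0.0, g1(x1, x2)) + abs(h1(x1, x2))

# Discretize S = [0, 1) x [0, 1) at a resolution of 0.01
# (100 values per variable -> 10 000 candidates in total).
grid = [round(i * 0.01, 2) for i in range(100)]
feasible = [(a, b) for a, b in itertools.product(grid, grid)
            if violation(a, b) < 1e-9]

print(len(grid) ** 2)   # 10000 candidate solutions
print(len(feasible))    # 50 feasible ones, i.e., 0.5% of the space
print(min(feasible, key=lambda p: f(*p)))
```

Note how the equality constraint alone shrinks the feasible region to
0.5% of the discretized space, and how the constrained optimum of `f`
differs from its unconstrained minimizer at (0.7, 0.3), which violates
`h1`.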
The inequality constraints that take the value of zero at the global
optimum $\vec{x}^*$, i.e., $g_j(\vec{x}^*) = 0$, are called
the active constraints. In the following discussion and in the
remainder of this paper, without loss of generality we shall
consider the minimization of the objective function unless
specified otherwise. In addressing constrained optimization
problems in real-world scenarios, it can be argued that
obtaining a feasible solution (one that is usable under the
problem formulation) takes precedence over optimizing the
objective function (which minimizes the cost involved). There
are also problems with higher complexity in which finding a
single feasible solution itself can be a monumental task. These
problems are treated as constraint satisfaction problems and
various evolutionary algorithms have been proposed to solve
them effectively, e.g., [10]. The main challenge in constrained
1089-778X/$20.00 © 2005 IEEE