1-4244-2384-2/08/$20.00 ©2008 IEEE SMC 2008

Partial Decomposition and Parallel GA (PD-PGA) for Constrained Optimization

Ehab Z. Elfeky, School of IT&EE, UNSW@ADFA, Canberra, Australia, e.elfeky@adfa.edu.au
Ruhul A. Sarker, School of IT&EE, UNSW@ADFA, Canberra, Australia, r.sarker@adfa.edu.au
Daryl L. Essam, School of IT&EE, UNSW@ADFA, Canberra, Australia, daryl@tst.adfa.edu.au

Abstract— Solving large-scale constrained optimization problems is a challenging research topic in the optimization and computational intelligence domains. This paper examines how the computational task can be divided into smaller interacting components in order to effectively solve constrained optimization problems in the continuous domain. To divide the task, we propose problem decomposition, with GAs as the solution approach. We consider problems with a block-angular structure, with or without overlapping variables. We decompose not only the problem but also the chromosome, as appropriate for the different components of the problem. We also design a communication process for exchanging information between the components. This research shows an approach for dividing the computational tasks required to solve large-scale optimization problems so that they can be processed on parallel machines. A number of test problems have been solved to demonstrate the proposed approach, and the results are very encouraging.

Keywords— Large-scale constrained continuous optimization, parallel genetic algorithms.

I. INTRODUCTION

Researchers are still trying to define the term 'large-scale problems'; in particular, what does the word 'large' mean? Some researchers have defined largeness by the number of variables and/or constraints of a problem, while others have considered highly complex problems (even those with small numbers of variables and/or constraints) to be large-scale.
The first type could also be called computer dependent, as it depends mainly on the capabilities of the computer; the second type has been called problem dependent [1], where the difficulty may lie in the nature of the constraints or objective functions (linearity/non-linearity) or in the structure of the problem itself (completely decomposable or not). Large-scale optimization problems can also be classified by the nature of their variables, such as continuous or integer. Most current efforts focus on integer variables, because integrality gives the problem a special nature and thus gives researchers a chance to customize their algorithms to the problem under consideration. In contrast, this study targets problems with continuous decision variables, which is one of the challenges of this work. In this paper, we consider large numbers of variables and/or constraints, as well as non-linearity in the objective and/or constraint functions. Even for some small-scale problems, finding the exact optimal solution is not easy; indeed, it may be very difficult to achieve [2]. Finding the optimal solution of a large-scale problem is much more difficult; therefore, the main objective in large-scale optimization is to find an acceptable solution within a reasonable time limit. The addition of functional constraints makes large problems more challenging still. In this paper, we define our large-scale problem as an optimization problem with many decision variables and functional constraints. The problem we consider in this paper can be stated as follows:
LNLP:  min f(X)
       subject to  g_i(X) ≤ 0,      i = 1, 2, ..., m,
                   h_j(X) = 0,      j = 1, 2, ..., p,
                   L_i ≤ x_i ≤ U_i, i = 1, 2, ..., n.        (1)

where X ∈ R^n is the vector of decision variables, X = [x_1, x_2, ..., x_n]^T. The objective function is f(X), m is the number of inequality constraints, g_i(X) is the i-th inequality constraint, p is the number of equality constraints, and h_j(X) is the j-th equality constraint. Each decision variable x_i has a lower bound L_i and an upper bound U_i. Although we consider continuous variables in this research, the constraints and objective function are not assumed to have mathematical properties that would make the problem easy to solve, such as differentiability and convexity. Over the last few decades, researchers and practitioners have introduced many different approaches for solving large-scale problems. In the traditional optimization domain, these methods include decomposition approaches and problem-specific heuristics. In the classical decomposition approaches, the problem is divided into a number of smaller sub-problems by exploiting the problem structure, and each sub-problem is then solved independently [3]. These approaches are currently applicable only to certain classes of mathematical programming models.
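To make the formulation in (1) concrete, the following is a minimal sketch (not from the paper) of how a problem of this form can be represented in code, together with a total constraint-violation measure of the kind commonly used when GAs handle constraints. The class name `ConstrainedProblem`, the tolerance `eps` for equality constraints, and the toy example problem are all illustrative assumptions, not part of the authors' method.

```python
# Sketch of representing problem (1): minimize f(X) subject to
# g_i(X) <= 0, h_j(X) = 0, and bounds L_i <= x_i <= U_i.
# All names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class ConstrainedProblem:
    f: Callable[[List[float]], float]                                      # objective f(X)
    g: List[Callable[[List[float]], float]] = field(default_factory=list)  # g_i(X) <= 0
    h: List[Callable[[List[float]], float]] = field(default_factory=list)  # h_j(X) = 0
    bounds: List[Tuple[float, float]] = field(default_factory=list)        # (L_i, U_i)
    eps: float = 1e-4  # assumed tolerance for treating equalities as satisfied

    def violation(self, x: List[float]) -> float:
        """Total constraint violation; 0.0 means x is feasible."""
        v = sum(max(0.0, gi(x)) for gi in self.g)                 # inequality violations
        v += sum(max(0.0, abs(hj(x)) - self.eps) for hj in self.h)  # equality violations
        v += sum(max(0.0, lo - xi) + max(0.0, xi - hi)            # bound violations
                 for xi, (lo, hi) in zip(x, self.bounds))
        return v

# Toy example: min x1^2 + x2^2 s.t. x1 + x2 >= 1, written as 1 - x1 - x2 <= 0.
prob = ConstrainedProblem(
    f=lambda x: x[0] ** 2 + x[1] ** 2,
    g=[lambda x: 1.0 - x[0] - x[1]],
    bounds=[(0.0, 2.0), (0.0, 2.0)],
)
print(prob.violation([0.5, 0.5]))  # feasible point: prints 0.0
print(prob.violation([0.2, 0.2]))  # infeasible point: violation is 0.6
```

A scalar violation measure like this is one common way for a GA to compare infeasible individuals without assuming differentiability or convexity, which matches the assumptions stated above.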