Combined Mutation Differential Evolution to Solve Systems of Nonlinear Equations

Gisela C.V. Ramadas and Edite M.G.P. Fernandes

Department of Mathematics, School of Engineering, Polytechnic of Porto, 4200-072 Porto, Portugal
Algorithm R&D Center, University of Minho, 4710-057 Braga, Portugal

Abstract. This paper presents a differential evolution heuristic to compute a solution of a system of nonlinear equations through the global optimization of an appropriate merit function. Three different mutation strategies are combined to generate mutant points. Preliminary numerical results show the effectiveness of the presented heuristic.

Keywords: Nonlinear Equations, Differential Evolution, Combined Mutation
PACS: 02.60.Pn

INTRODUCTION

The primary goal of this paper is to show that an evolutionary heuristic, differential evolution, very popular in global optimization, can be as effective and efficient as classical methods in solving systems of nonlinear equations. We examine the behavior of different mutation strategies in the differential evolution context to solve a system of the form

f(x) = 0,  f(x) = (f_1(x), f_2(x), ..., f_n(x))^T,   (1)

where each f_i : Ω ⊆ R^n → R and Ω is a closed convex set, herein defined as [l, u] = {x : -∞ < l_i ≤ x_i ≤ u_i < ∞, i = 1, ..., n}. We assume that all functions f_i(x), i = 1, ..., n, are continuous in the search space, although differentiability may not be guaranteed. The motivation for this work comes mainly from the detection of feasibility in nonlinear optimization problems.

The best-known techniques for solving nonlinear equations are based on Newton's method [1, 2]; they require analytical or numerical first-derivative information. Newton's method is the most widely used algorithm for solving nonlinear systems of equations, but it is computationally expensive, in particular when n is large, since the Jacobian matrix must be formed and a system of linear equations solved at each iteration.
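As a concrete illustration of the per-iteration cost just described, the following minimal Python sketch (not taken from the paper; the function `newton_solve` and the 2×2 test system are illustrative assumptions) forms the n×n Jacobian and solves one linear system at every Newton step:

```python
# Illustrative sketch, assuming a square system f(x) = 0 with an available
# Jacobian. Each iteration forms J(x_k) and solves J dx = -f(x_k), which is
# the per-iteration cost the text refers to.
import numpy as np

def newton_solve(f, jac, x0, tol=1e-10, max_iter=50):
    """Basic Newton's method for a square nonlinear system f(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # form the n-by-n Jacobian and solve the linear system J dx = -f(x)
        dx = np.linalg.solve(jac(x), -fx)
        x = x + dx
    return x

# Hypothetical 2-by-2 test system: x1^2 + x2^2 = 1, x1 - x2 = 0
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = newton_solve(f, jac, x0=[1.0, 0.5])  # converges to (1/sqrt(2), 1/sqrt(2))
```

Starting from a different initial point, the same iteration may converge to the other root (-1/√2, -1/√2), illustrating the dependence on the initial approximation discussed below.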
Quasi-Newton methods, on the other hand, avoid the need to compute derivatives, the need to solve a full linear system per iteration, or both [3]. Thus Quasi-Newton methods have less expensive iterations than Newton's method, and their convergence properties are not very different from Newton's.

Problem (1) is equivalent to

min_{x ∈ Ω ⊆ R^n} M(x) ≡ Σ_{i=1}^{n} f_i(x)^2,   (2)

in the sense that the two problems have the same solutions. The required solutions are the global minima, and not just the local minima, of the function M(x), known as the merit function, over the set Ω. Problem (2) is similar to the usual least-squares problem, for which many iterative methods have been proposed. These methods basically assume that the objective function is twice continuously differentiable. However, the objective M in (2) is only once differentiable if some, or just one, of the f_i, i = 1, ..., n, are not differentiable. Thus, methods for solving the least-squares problem cannot be directly applied to solve (2).

When a global solution of a nonlinear optimization problem is required, Newton-type methods have some disadvantages compared with global search methods, because they search only locally for a solution: the final iterate depends heavily on the initial approximation, and the iterative process can be trapped in a local minimum. Preventing premature convergence to a local minimum while trying to locate a global solution of problem (2) is the goal of the present study.

Here, we aim to investigate the performance of a new version of the differential evolution (DE) algorithm when globally solving problem (2). DE is a population-based evolutionary algorithm introduced in 1997 by Storn and Price [4]. It is a simple, efficient and robust metaheuristic for searching promising regions and locating a global solution. Our proposal joins three mutation strategies.
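To make the merit-function approach concrete, here is a minimal, illustrative DE sketch that minimizes M(x) = Σ f_i(x)² over a box. The paper's specific three-strategy combination is not detailed in this excerpt, so this sketch merely alternates two classic mutations, DE/rand/1 and DE/best/1, as a simplified stand-in; all names and parameter values (`NP`, `F`, `CR`) are assumptions, not the authors' settings.

```python
# Hedged sketch of differential evolution applied to problem (2):
# minimize M(x) = sum_i f_i(x)^2 over the box [lower, upper].
import numpy as np

rng = np.random.default_rng(0)

def merit(f, x):
    return np.sum(f(x)**2)  # the merit function M(x) in problem (2)

def de_minimize(f, lower, upper, NP=30, F=0.7, CR=0.9, gens=300):
    n = len(lower)
    pop = rng.uniform(lower, upper, size=(NP, n))
    fit = np.array([merit(f, x) for x in pop])
    for g in range(gens):
        best = pop[np.argmin(fit)]
        for i in range(NP):
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                    3, replace=False)
            # alternate two classic mutation strategies generation by generation
            if g % 2 == 0:
                v = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1
            else:
                v = best + F * (pop[r2] - pop[r3])      # DE/best/1
            v = np.clip(v, lower, upper)                # keep mutants in the box
            # binomial crossover: mix mutant and target components
            mask = rng.random(n) < CR
            mask[rng.integers(n)] = True
            trial = np.where(mask, v, pop[i])
            # greedy selection: keep the trial point only if it is no worse
            ft = merit(f, trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    k = np.argmin(fit)
    return pop[k], fit[k]

# same illustrative 2-by-2 system: x1^2 + x2^2 = 1, x1 - x2 = 0
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
x_star, m_star = de_minimize(f, lower=np.array([-2.0, -2.0]),
                             upper=np.array([2.0, 2.0]))
```

Because DE maintains a population scattered over Ω and needs only merit-function values, it requires no derivatives and no initial approximation, which is exactly the advantage over Newton-type methods argued above.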
It combines two classic mutation strategies aiming to explore

11th International Conference of Numerical Analysis and Applied Mathematics 2013
AIP Conf. Proc. 1558, 582-585 (2013); doi: 10.1063/1.4825558
© 2013 AIP Publishing LLC 978-0-7354-1184-5/$30.00