IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 15, NO. 1, FEBRUARY 2011

Enhancing Differential Evolution Utilizing Proximity-Based Mutation Operators

Michael G. Epitropakis, Student Member, IEEE, Dimitris K. Tasoulis, Member, IEEE, Nicos G. Pavlidis, Vassilis P. Plagianakos, and Michael N. Vrahatis

Abstract—Differential evolution is a very popular optimization algorithm, and considerable research has been devoted to the development of efficient search operators. Motivated by the different manner in which various search operators behave, we propose a novel framework based on the proximity characteristics among the individual solutions as they evolve. Our framework incorporates information from neighboring individuals in an attempt to efficiently guide the evolution of the population toward the global optimum, without sacrificing the search capabilities of the algorithm. More specifically, the random selection of parents during mutation is modified by assigning to each individual a probability of selection that is inversely proportional to its distance from the mutated individual. The proposed framework can be applied to any mutation strategy with minimal changes. In this paper, we incorporate this framework into the original differential evolution algorithm, as well as into other recently proposed differential evolution variants. Through an extensive experimental study, we show that the proposed framework results in enhanced performance for the majority of the benchmark problems studied.

Index Terms—Affinity matrix, differential evolution, mutation operator, nearest neighbors.

I. Introduction

EVOLUTIONARY algorithms (EAs) are stochastic search methods that mimic evolutionary processes encountered in nature. The common conceptual basis of these methods is to evolve a population of candidate solutions by simulating the main processes involved in the evolution of the genetic material of organism populations, such as natural selection and biological evolution.
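The inverse-distance parent-selection rule described in the abstract can be sketched as follows. The choice of Euclidean metric and the normalization to a probability vector are illustrative assumptions; the paper's exact formulation is given in later sections.

```python
import numpy as np

def proximity_selection_probs(population, i):
    """Selection probabilities for the mutation parents of individual i.

    Each individual j != i receives a probability inversely
    proportional to its Euclidean distance from individual i;
    the mutated individual itself is excluded (probability 0).
    Metric and normalization are illustrative assumptions.
    """
    dists = np.linalg.norm(population - population[i], axis=1)
    # guard against division by zero for the individual itself
    inv = np.where(dists > 0.0, 1.0 / np.maximum(dists, 1e-12), 0.0)
    return inv / inv.sum()
```

Under such a rule, individuals close to the mutated one are more likely to serve as mutation parents, which is how neighborhood information can be injected into a mutation strategy without otherwise altering it.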
EAs can be characterized as global optimization algorithms. Their population-based nature allows them to avoid getting trapped in a local optimum and consequently gives them a better chance of finding globally optimal solutions. EAs have been successfully applied to a wide range of optimization problems, such as image processing, pattern recognition, scheduling, and engineering design [1], [2]. The most prominent EAs proposed in the literature are genetic algorithms [1], evolutionary programming [3], evolution strategies [4], genetic programming [5], particle swarm optimization (PSO) [6], and differential evolution [7], [8].

Manuscript received November 30, 2009; revised April 17, 2010, July 6, 2010, and September 14, 2010; accepted September 15, 2010. Date of publication January 6, 2011; date of current version February 25, 2011. This work was financially supported by the European Social Fund (ESF), the Operational Program for EPEDVM, and the Program Herakleitos II.
M. G. Epitropakis and M. N. Vrahatis are with the Department of Mathematics, University of Patras, Patras GR-26110, Greece (e-mail: mikeagn@math.upatras.gr; vrahatis@math.upatras.gr).
D. K. Tasoulis is with the Department of Mathematics, Imperial College London, London SW7 2AZ, U.K. (e-mail: d.tasoulis@imperial.ac.uk).
N. G. Pavlidis is with the Department of Management Science, Lancaster University, Lancaster, Lancashire LA1 4YW, U.K. (e-mail: n.pavlidis@imperial.ac.uk).
V. P. Plagianakos is with the Department of Computer Science and Biomedical Informatics, University of Central Greece, Lamia 35100, Greece (e-mail: vpp@math.upatras.gr).
All authors are members of the Computational Intelligence Laboratory (CILab), Department of Mathematics, University of Patras, Patras 26500, Greece.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TEVC.2010.2083670
In general, every EA starts by initializing a population of candidate solutions (individuals). The quality of each solution is evaluated using a fitness function, which represents the problem at hand. A selection process is applied at each iteration of the EA to produce a new set of solutions (population). The selection process is biased toward the most promising traits of the current population of solutions to increase their chances of being included in the new population. At each iteration (generation), the individuals are evolved through a predefined set of operators, such as mutation and recombination. This procedure is repeated until convergence is reached. The best solution found by this procedure is expected to be a near-optimum solution [2], [9].

Mutation and recombination are the two most frequently used operators and are referred to as evolutionary operators. The role of mutation is to modify an individual by small random changes to generate a new individual [2], [9]. Its main objective is to increase diversity by introducing new genetic material into the population, and thus avoid local optima. The recombination (or crossover) operator combines two, or more, individuals to generate new promising candidate solutions [2], [9]. The main objective of the recombination operator is to explore new areas of the search space [2], [10].

In this paper, we study the differential evolution (DE) algorithm, proposed by Storn and Price [7], [8]. This method has been successfully applied to a plethora of optimization problems [7], [11]–[19]. Without loss of generality, we only consider minimization problems. In this case, the objective is to locate a global minimizer of a function f (objective function).

Definition 1: A global minimizer x⋆ ∈ R^D of the real-valued function f : E → R is defined by

f(x⋆) ≤ f(x), ∀ x ∈ E

where the compact set E ⊆ R^D is a D-dimensional scaled translation of the unit hypercube.

1089-778X/$26.00 © 2011 IEEE
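The generic EA iteration described above (evaluate, select, recombine and mutate, repeat) can be sketched as below. The operator names, the best-so-far bookkeeping, and the fixed generation budget are illustrative assumptions, not the paper's algorithm.

```python
import random

def evolve(fitness, pop, select, recombine, mutate, generations=50):
    """Generic EA skeleton following the steps in the text:
    evaluate fitness, select promising parents, build the next
    population via recombination and mutation, and track the best
    solution found. All operator names are illustrative."""
    best = min(pop, key=fitness)
    for _ in range(generations):
        parents = select(pop)  # biased toward promising individuals
        pop = [mutate(recombine(random.choice(parents),
                                random.choice(parents)))
               for _ in pop]
        candidate = min(pop, key=fitness)
        if fitness(candidate) < fitness(best):
            best = candidate
    return best
```

For a minimization problem such as f(x) = x^2, supplying a truncation-style `select`, an averaging `recombine`, and a Gaussian-perturbation `mutate` yields a best-so-far solution that never degrades across generations.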