Differential Evolution Made Easy
Technical report no. 2005-01
Rasmus K. Ursem
ursem@stofanet.dk

Notes of use: Please cite my paper [15] if you use this work in your own research.

1 Introduction

Optimization techniques for numerical problems are very well explored, and several hundred algorithms exist. Despite more than four decades of extensive research and thousands of comparative studies, no algorithm has yet proven to be the overall best. In fact, Wolpert and Macready have shown that no algorithm is superior on all problems [17]. Although this is a strong result, it is mainly of theoretical interest: "all problems" literally means all problems, including those yielding an arbitrary but deterministic function value, i.e., problems with no correlation between neighboring solutions. Naturally, no algorithm is better than random search on such problems.

As mentioned, four decades of research have not yet nominated a single algorithm as the overall best. However, in 1995 Storn and Price suggested the "Differential Evolution" (DE) algorithm [12]. Since then, the algorithm has been tested on numerous artificial and real-world problems and has turned out to be a strong candidate for the title [2; 3; 4; 6; 11; 14; 16; 18; 20]. Published work, work by colleagues, and my own recent experience with DE on motor system identification [15] have convinced me to always try this algorithm first.

2 Differential Evolution Algorithm

Like other evolutionary algorithms (EAs), DE maintains a population (a set) of solutions to the optimization problem at hand. The main idea in DE is to use vector differences in the creation of new candidate solutions, whereas traditional EAs rely on random perturbation (mutation) of a solution and mixing of two or more solutions (recombination). Another major difference is that the three phases of a standard EA (selection, recombination, and mutation) are combined into one operation, which is carried out for each individual.
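The vector-difference idea can be sketched as follows. This is an illustrative sketch only: the three-random-vector ("rand/1") scheme and the function name de_mutant are common DE conventions assumed here, not necessarily the exact variant in the report's figures.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_mutant(pop, i, f=0.35):
    """Build an initial candidate for index i by vector-difference
    mutation: a base vector plus a scaled difference of two others.
    pop is an (n, dim) array; i is the current population index."""
    n = len(pop)
    # pick three distinct indices, all different from i
    choices = [j for j in range(n) if j != i]
    r1, r2, r3 = rng.choice(choices, size=3, replace=False)
    return pop[r1] + f * (pop[r2] - pop[r3])
```

Note how the step size adapts automatically: early on, the population is spread out and the differences pop[r2] - pop[r3] are large; as the population converges, the differences, and hence the mutations, shrink.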
In the standard EA, each phase is performed on the entire population. In contrast, the DE algorithm iterates through the population and, for each population index i, creates a potential candidate C[i] by vector addition (mutation) followed by a variant of uniform crossover (recombination). Selection is straightforward and very simple: the candidate solution C[i] replaces P[i] if it is better. Figure 1 illustrates the DE algorithm.

The key to success in DE is the creation of the candidate solution C[i]. Several schemes have been suggested so far (for variants, see [12] and [13]). Storn and Price suggest a two-point crossover scheme in which a number of consecutive genes are copied from the parent P[i]. I have experimented with a uniform crossover variant, described by the pseudocode in Figure 2, which has shown good performance in my previous work [15]. Figure 3 illustrates a two-dimensional example where the objective is to determine the parameters X1 and X2. In the general case of an N-dimensional problem, the final candidate C[i] will be a corner of the hypercube spanned by P[i] and the initial candidate C1[i].

The algorithm is quite robust with respect to the algorithmic parameters f and pc. Setting f = 0.35 and pc = 0.2 will generally give good convergence on a wide range of problems.
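A full generation of the scheme described above can be sketched as one pass over the population: mutate by vector differences, recombine by uniform crossover against the parent, and keep the candidate only if it improves. This is a hedged sketch under the assumptions stated in the comments (minimization, the "rand/1" mutation, and the convention of forcing at least one gene from the mutant so C[i] differs from P[i]); the helper name de_step is illustrative, not from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_step(pop, fitness, func, f=0.35, pc=0.2):
    """One DE generation over an (n, dim) population (minimization).
    For each index i: difference mutation, uniform crossover with
    the parent P[i], then greedy one-to-one selection."""
    n, dim = pop.shape
    for i in range(n):
        # mutation: base vector plus scaled difference of two others
        choices = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(choices, size=3, replace=False)
        mutant = pop[r1] + f * (pop[r2] - pop[r3])
        # uniform crossover: each gene comes from the mutant with
        # probability pc, otherwise from the parent; one gene is
        # forced from the mutant so the candidate differs from P[i]
        mask = rng.random(dim) < pc
        mask[rng.integers(dim)] = True
        cand = np.where(mask, mutant, pop[i])
        # selection: candidate replaces P[i] only if it is better
        cf = func(cand)
        if cf < fitness[i]:
            pop[i], fitness[i] = cand, cf
    return pop, fitness
```

Because the candidate takes each gene from either the mutant or the parent, it lands on a corner of the hypercube spanned by P[i] and the initial candidate, matching the geometric picture described above; the defaults f = 0.35 and pc = 0.2 follow the parameter recommendation in the text.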