FOCUS

Shuffle or update parallel differential evolution for large-scale optimization

Matthieu Weber · Ferrante Neri · Ville Tirronen

Published online: 14 September 2010
© Springer-Verlag 2010

Abstract  This paper proposes a novel algorithm for large-scale optimization problems. The proposed algorithm, namely shuffle or update parallel differential evolution (SOUPDE), is a structured population algorithm characterized by sub-populations employing a Differential Evolution logic. The sub-populations quickly exploit some areas of the decision space, thus drastically and quickly reducing the fitness value in the highly multi-variate fitness landscape. New search logics are introduced into the sub-population functioning in order to avoid diversity loss and thus premature convergence. Two simple mechanisms have been integrated in order to pursue this aim. The first, namely shuffling, consists of randomly rearranging the individuals over the sub-populations. The second consists of updating all the scale factors of the sub-populations. The proposed algorithm has been run on a set of various test problems for five levels of dimensionality and then compared with three popular meta-heuristics. Rigorous statistical and scalability analyses are reported in this article. Numerical results show that the proposed approach significantly outperforms the meta-heuristics considered in the benchmark and performs well despite the high dimensionality of the problems. The proposed algorithm balances well between exploitation and exploration and achieves a good performance over the various dimensionality values and test problems present in the benchmark, outperforming the reference algorithms considered in this study. In addition, the scalability analysis shows that, with respect to a standard Differential Evolution, the proposed SOUPDE algorithm enhances its performance as the dimensionality grows.
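The two mechanisms summarized in the abstract can be sketched in a few lines of Python. This is purely an illustration of the ideas, not the authors' implementation: the function names, the sub-population representation as plain lists, and the scale-factor range [0.1, 1.0] are all our assumptions.

```python
import random

def shuffle_subpopulations(subpops):
    """Shuffling mechanism (illustrative sketch): pool all individuals
    and randomly redistribute them over sub-populations of the original
    sizes, so no individual is lost or duplicated."""
    sizes = [len(sp) for sp in subpops]
    pool = [ind for sp in subpops for ind in sp]
    random.shuffle(pool)
    shuffled, start = [], 0
    for size in sizes:
        shuffled.append(pool[start:start + size])
        start += size
    return shuffled

def update_scale_factors(num_subpops, low=0.1, high=1.0):
    """Scale-factor update (illustrative sketch): draw a fresh scale
    factor F for every sub-population; the sampling range [0.1, 1.0]
    is an assumption, not taken from the paper."""
    return [random.uniform(low, high) for _ in range(num_subpops)]
```

Both operations are cheap relative to fitness evaluation, which is consistent with their role as lightweight diversity-preservation mechanisms on top of the per-sub-population Differential Evolution logic.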
Keywords  Differential evolution · Distributed algorithms · Large-scale optimization · Randomization · Scale factor update · Shuffling mechanism

This research is supported by the Academy of Finland, Akatemiatutkija 130600, Algorithmic Design Issues in Memetic Computing.

M. Weber (✉) · F. Neri · V. Tirronen
Department of Mathematical Information Technology, University of Jyväskylä, P.O. Box 35, 40014 Agora, Finland
e-mail: matthieu.weber@jyu.fi
F. Neri e-mail: ferrante.neri@jyu.fi
V. Tirronen e-mail: ville.tirronen@jyu.fi

Soft Comput (2011) 15:2089–2107
DOI 10.1007/s00500-010-0640-9

1 Introduction

According to common sense, for a given problem characterized by dimensionality n, an increase in the dimensionality results in an increase in the problem difficulty. Clearly, this increase is not linear with respect to the dimensionality but follows an exponential rule. In other words, if we double the dimensionality of a problem we do not just double its difficulty, but increase it many times. Without getting into a mathematical proof, we can consider an optimization problem characterized by a flat fitness landscape and a small basin of attraction within a hyper-cube whose side is 1/100 of the side of the entire domain. Under these conditions, we can consider this optimization problem to be solved when the optimization algorithm employed samples at least one point within the basin of attraction. If n = 1, the basin of attraction is 1/100 of the search domain, i.e., on average, a simple random search algorithm would need to sample 100 points before detecting the basin of attraction. This means that the problem for n = 1 is fairly easy. If n = 2, a random search algorithm