Best Practices in Measuring Algorithm Performance for Dynamic Optimization Problems

Hajer Ben-Romdhane · Enrique Alba · Saoussen Krichen

Received: date / Accepted: date

H. Ben-Romdhane
LARODEC Laboratory, ISG of Tunis, 41 Rue de la Liberté, Le Bardo, Tunisia
E-mail: hajer.ben.romdhan@hotmail.com

E. Alba
Universidad de Málaga, Boulevard Louis Pasteur s/n
E-mail: eat@lcc.uma.es

S. Krichen
FSJEG de Jendouba, Avenue de l'U.M.A, 8189 Jendouba, Tunisia
E-mail: saoussen.krichen@isg.rnu.tn

Abstract Dynamic optimization problems (DOPs) have attracted considerable attention due to the wide range of problems to which they can be applied. Much effort has been expended on modeling dynamic situations, proposing algorithms, and analyzing the results (too often in a merely visual way). However, numeric performance measures and their statistical validation have rarely been used in the literature. Most works on DOPs report only the best-of-generation fitness, because it is simple to compute. Although this measure identifies the best algorithm in terms of fitness, it reveals nothing about the actual strengths and weaknesses of each algorithm. In this article, we conduct a comparative study of algorithms with different search modes using several performance measures to demonstrate their relative advantages. We discuss the role of using different performance measures in drawing balanced conclusions about algorithms for dynamic optimization problems.

Keywords Dynamic Optimization Problems · Evolutionary Algorithms · Genetic Algorithms · Performance Measures

1 Introduction

In the last two decades, we have witnessed a growing interest in studying dynamic optimization problems (DOPs), as they have proven their usefulness in solving real-world complex changing tasks.
In fact, realistic applications typically involve uncertain scenarios, in the sense that one or more of the problem specifications change: the objective function, the problem parameters, and the problem constraints may all vary in time [32]. In such environments, optimization algorithms are not only required to optimize the problem in its current state, but also to adapt to the new optima whenever an environmental change is detected, and then to continuously track the moving optima throughout the whole optimization process.

Several approaches and techniques have been proposed over the years to solve DOPs [9], among them particle swarm optimization, cooperative strategies, and stochastic diffusion search. However, a great deal of attention has gone to evolutionary algorithms (EAs) due to their suitability for modeling natural evolution processes [4][22]. Although traditional EAs were originally designed to solve static optimization problems, several steps have been taken to adapt them to dynamic environments. These steps aim to enhance the ability of EAs to locate the moving optima in the landscape and to avoid premature convergence. Among the most common approaches, we can mention hyper-mutation [11][15], random immigrants [27][31], and the use of multiple populations [1][8].

The resulting EAs have been tested on well-known DOPs: dynamic job shop scheduling problems [5][7], dynamic knapsack problems [12][23], dynamic traveling salesman problems [13][14], etc. The other way of evaluating EAs is to build dynamic benchmark problems. Branke [6] developed the moving peaks benchmark problem, which consists of a number of peaks changing in height, width, and location. The XOR generator, introduced by Yang [29], creates the dynamic counterpart of a given stationary binary-encoded problem via the bitwise exclusive-or operator: solutions are evaluated after being XORed with an environment mask, and each environmental change flips a fraction of the mask's bits. Another important test problem generator is DF1, introduced by Morrison and De Jong [17].
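To make the XOR generator concrete, the following is a minimal sketch of the idea, assuming a binary-encoded fitness function; the function and parameter names (`make_xor_dop`, `rho` for change severity) are illustrative, not taken from [29]:

```python
import random

def make_xor_dop(static_fitness, n_bits, rho, seed=None):
    """Wrap a stationary binary fitness function into a dynamic one.

    Individuals x are evaluated as static_fitness(x XOR mask); each call
    to advance() flips round(rho * n_bits) bits of the mask, simulating
    an environmental change of severity rho in [0, 1].
    """
    rng = random.Random(seed)
    mask = [0] * n_bits  # initial environment: identical to the static problem

    def evaluate(x):
        # XOR the candidate solution with the current environment mask
        return static_fitness([xi ^ mi for xi, mi in zip(x, mask)])

    def advance():
        # Flip a fraction rho of the mask's bits (one environmental change)
        for i in rng.sample(range(n_bits), round(rho * n_bits)):
            mask[i] ^= 1

    return evaluate, advance

# Usage: a OneMax problem made dynamic with change severity 0.3
evaluate, advance = make_xor_dop(lambda x: sum(x), n_bits=10, rho=0.3, seed=1)
x = [1] * 10
before = evaluate(x)  # 10: the initial environment equals the static problem
advance()             # environmental change: 3 of the 10 mask bits flip
after = evaluate(x)   # 7: the same solution now scores lower
```

Note that the landscape structure of the static problem is preserved under XOR; only the location of the optimum moves, which is why this generator is popular for producing controlled dynamics from arbitrary stationary binary problems.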