Stochastic Reverse Hillclimbing and Iterated Local Search

Carlos Cotta
Dept. Lenguajes y Ciencias de la Computación, University of Málaga
Campus de Teatinos (3.2.49), 29071-Málaga (Spain)
ccottap@lcc.uma.es

Enrique Alba
Dept. Lenguajes y Ciencias de la Computación, University of Málaga
Campus de Teatinos (3.2.12), 29071-Málaga (Spain)
eat@lcc.uma.es

José Mª Troya
Dept. Lenguajes y Ciencias de la Computación, University of Málaga
Campus de Teatinos (3.2.14), 29071-Málaga (Spain)
troya@lcc.uma.es

Abstract- This paper analyzes the detection of stagnation states in iterated local search algorithms. This is done by considering elements such as the population size, the length of the encoding, and the number of observed non-improving iterations. The analysis isolates the features of the target problem within one parameter, for which three different estimations are given: two static a priori estimations and a dynamic approach. In the latter case, a stochastic reverse hillclimbing algorithm is used to extract information from the fitness landscape. The applicability of these estimations is studied and exemplified on different problems.

1 Introduction

The reverse hillclimbing (RHC) algorithm (designed by Jones and Rawlins [JR93]) is a very suitable tool for studying fitness landscapes. This algorithm can be used to determine the basin of attraction of a desired point of the search space. However, one of the drawbacks of this algorithm is that, in some situations, the size of this basin of attraction may be very large (even of the same order of magnitude as the whole search space). This is especially true in smooth landscapes. In fact, as pointed out by the authors, the more rugged the fitness landscape, the more efficient the algorithm is. This work presents a modification of the reverse hillclimbing algorithm with application to iterated local search. To be precise, the algorithm is used to extract some measures of the fitness landscape.
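To make the notion of "basin of attraction" concrete, the following sketch enumerates it by brute force: a point belongs to the basin of a target optimum if steepest-ascent hillclimbing from that point ends at the target. (RHC itself computes this set more cleverly, by tracing moves backwards from the target; the one-bit-flip neighborhood and ONE-MAX-style fitness below are illustrative assumptions, not taken from the paper.)

```python
from itertools import product

def fitness(x):
    # Illustrative smooth landscape (ONE-MAX); not from the paper.
    return sum(x)

def neighbors(x):
    # One-bit-flip neighborhood on binary strings (assumed).
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        yield tuple(y)

def hillclimb(x):
    # Steepest-ascent hillclimbing until a local optimum is reached.
    while True:
        best = max(neighbors(x), key=fitness)
        if fitness(best) <= fitness(x):
            return x
        x = best

def basin_of_attraction(target, n):
    # Brute-force reference definition of the basin of `target`.
    return {x for x in product((0, 1), repeat=n)
            if hillclimb(x) == target}
```

On this smooth landscape every point climbs to the all-ones string, so its basin is the entire search space, which illustrates the drawback noted above: on smooth landscapes the basin can be as large as the space itself.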
These measures are subsequently used to determine when to terminate the execution of the algorithm. This is done by calculating the probability of stagnation given that the algorithm has not yielded better solutions for a certain number of evaluations. Since local search algorithms are not appropriate for very rugged landscapes, the use of this stochastic version of the RHC algorithm is justified.

The remainder of the article is organized as follows: first, a mathematical analysis is carried out in section 2 to determine the probability of stagnation of a local search algorithm as a function of the parameters of the algorithm. This analysis isolates the features of the target problem into one measure, for which different estimations are given in section 3. First, two static estimations are discussed in subsection 3.1. Then, a dynamic estimation based on the use of a stochastic reverse hillclimbing algorithm is presented in subsection 3.2. These estimations are evaluated with respect to some common termination criteria in section 4. Finally, some conclusions are outlined in section 5.

2 A Probabilistic Analysis of Stagnation

The following analysis is valid for discrete (μ + λ)-LS algorithms. These algorithms maintain a population of μ individuals, create λ new individuals in each iteration, and select the best μ individuals from the union of the current and newly created populations for the next generation. Due to their elitist behavior, it is easy to see that the population will remain unchanged if the algorithm has stagnated. Hence, the analysis is based on this observation, i.e., we will calculate the probability of stagnation conditional on the population remaining unchanged for a certain number of iterations.

Initially, suppose that μ = 1 and λ = 1. Assume that no new individual has been accepted in the population after r iterations.
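The μ = 1, λ = 1 setting just described can be sketched as a simple elitist loop; this is only an illustrative implementation under assumed choices (binary encoding, one-bit-flip mutation), with the non-improvement counter r made explicit as the stopping condition:

```python
import random

def one_plus_one_ls(fitness, x, max_stale, seed=0):
    # (1+1)-LS sketch: one parent, one mutated child per iteration;
    # the better of the two survives (elitist selection).  The run
    # stops once the parent has survived `max_stale` consecutive
    # iterations unchanged -- the situation analyzed in the text.
    rng = random.Random(seed)
    stale = 0
    while stale < max_stale:
        y = list(x)
        i = rng.randrange(len(y))
        y[i] = 1 - y[i]              # one-bit-flip mutation (assumed)
        y = tuple(y)
        if fitness(y) > fitness(x):
            x, stale = y, 0          # improvement: reset the counter
        else:
            stale += 1               # population left unchanged
    return x
```

Because selection is elitist, the fitness of the surviving individual never decreases, which is what justifies using "population unchanged for r iterations" as the observable event in the analysis below.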
The probability of this situation (denoted by E_r) is

P(E_r) = Σ_{i=0}^{M_max} P(M_i) · P(E_r | M_i)                          (1)
       = Σ_{i=0}^{M_max} P(M_i) · ((M_max − i) / M_max)^r               (2)

where M_max is the total number of ways in which an individual can be mutated and P(M_i) is the probability that exactly i of these mutations produce an individual better than at least one individual in the population. The last term represents the conditional probability that none of these improving mutations is performed in r iterations. The measure P(M_i) is very important since it carries all the information about the problem being solved. Section 3 is entirely devoted to discussing several estimations of this parameter. Now, since P(E_r | M_0) = 1, using Bayes' Theorem to
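Once estimates of P(M_i) are available (the subject of section 3), equation (2) can be evaluated directly. A minimal sketch, where the distribution p_m over improving-mutation counts is a hypothetical placeholder for those estimations:

```python
def stagnation_evidence(p_m, r):
    # Equation (2): P(E_r) = sum_i P(M_i) * ((M_max - i) / M_max)^r,
    # where p_m[i] estimates P(M_i) and M_max = len(p_m) - 1 is the
    # number of distinct mutations of an individual.  p_m is assumed
    # to be a normalized probability distribution.
    m_max = len(p_m) - 1
    return sum(p * ((m_max - i) / m_max) ** r
               for i, p in enumerate(p_m))
```

As r grows, every term with i > 0 vanishes and P(E_r) tends to P(M_0), the probability that no improving mutation exists at all, which is the stagnation case the termination criterion must detect.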