A new approach to global optimization using a closed loop control system with fuzzy logic controller

B. Ustundag*, I. Eksin, A. Bir

Electrical & Electronics Engineering Faculty, İstanbul Technical University, 80626 Maslak, İstanbul, Turkey

Accepted 2 September 2002

Abstract

In this study, a new global optimization method that uses a closed loop control system is proposed. If the plant in a feedback control system with a reference input r is replaced by the objective function f(x), then the output of a properly designed controller approaches the solution of the equation f(x) − r = 0 at steady state. An algorithm is then designed such that the reference point and the objective function representing the plant are continuously changed within the control loop. This change is made in accordance with the steady-state controller output. The algorithm can find the global optimum point in a bounded feasible region. Although the new approach is applicable to the optimization of single- and multivariable non-linear objective functions, only results for some single-variable test functions are presented. The results of the new algorithm are compared with those of some well-known global optimization algorithms. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Global optimization; Feedback control system; Fuzzy logic controller; Root search algorithm

1. Introduction

Optimization is the act of obtaining the 'best' result under given circumstances. The problem of finding the best solution is of great importance to all fields of engineering and science, though the meaning of 'best' is often not clear. To simplify the problem, one resorts to a mathematical representation in which a measure of performance is given by f, some real-valued non-linear function of n parameters. The problem of finding the best solution can then be stated in more compact mathematical terms as follows.
Let X be a compact set called the feasible region and f be an objective function such that X ⊂ R^n and f : R^n → R^1. The minimization problem can then be defined as finding the point x* ∈ X such that

f* = min f(x), x ∈ X    (1)

where f* denotes the minimum value of f(x) [1]. Without loss of generality, it is sufficient to consider only minimization tasks, since maximizing f(·) is equivalent to minimizing −f(·); therefore, the general case may be treated as a minimization problem. There are many ways of accomplishing the optimization analytically, including derivative methods and Lagrange multipliers. However, there exists a large class of problems that involve a system whose performance function is either unknown or hopelessly cumbersome. In these cases, one must resort to a direct search method of optimization. The choice of search method is a common problem in optimization, since various methods exist, each with its own set of advantages and disadvantages [2,3]. Among the different classes of problems in which a search routine is used, the most difficult are those that have many local minimum points. Such problems require a search that is global in nature, i.e. one that takes the entire space into consideration, not simply a small part of it. The current methods related to global optimization can be separated into the following categories.

1.1. Non-sequential and sequential random search

In the simplest form of non-sequential random search, the solution space is divided by a large evenly spaced grid consisting of discrete values for each of the n parameters, and the function is evaluated at all possible combinations of these discrete values.

Advances in Engineering Software 33 (2002) 309–318, www.elsevier.com/locate/advengsoft

* Corresponding author. E-mail address: berk@cs.itu.edu.tr (B. Ustundag).
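The feedback root-search idea stated in the abstract can be sketched in a few lines. The fragment below is a minimal illustration, not the paper's algorithm: it substitutes a plain discrete-time integral controller for the fuzzy logic controller, and the function name, gain, and tolerance are assumptions made for the example. The controller integrates the error e = r − f(x), so a steady state is reached exactly when f(x) − r = 0.

```python
def closed_loop_root_search(f, r, x0=0.0, gain=0.05, steps=2000, tol=1e-9):
    """Drive x so that f(x) approaches the reference input r.

    A discrete-time integral controller accumulates the error
    e = r - f(x); at steady state e = 0, i.e. f(x) - r = 0.
    """
    x = x0
    for _ in range(steps):
        e = r - f(x)
        if abs(e) < tol:
            break
        x += gain * e  # integral action on the error signal
    return x

# Example: with r = 0 the loop settles on a root of f(x) = x**2 - 4;
# starting from x0 = 4 it converges to the root x = 2.
root = closed_loop_root_search(lambda x: x**2 - 4, r=0.0, x0=4.0)
```

Convergence of this simple loop depends on the gain and on the local slope of f, which is one reason the paper employs a more sophisticated (fuzzy) controller.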
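The grid form of non-sequential search described in Section 1.1 can be sketched as follows. This is a hypothetical illustration of the general technique; the function names and parameters are assumptions, not from the paper.

```python
import itertools

def grid_minimize(f, bounds, points_per_axis=21):
    """Exhaustively evaluate f on an evenly spaced grid.

    bounds: list of (low, high) pairs, one per parameter.
    Returns the grid point with the smallest objective value.
    """
    axes = [
        [lo + i * (hi - lo) / (points_per_axis - 1) for i in range(points_per_axis)]
        for lo, hi in bounds
    ]
    best_x, best_f = None, float("inf")
    for x in itertools.product(*axes):  # every combination of grid values
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example: a two-variable quadratic with its minimum at (1, -2)
x_star, f_star = grid_minimize(
    lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
    bounds=[(-5, 5), (-5, 5)],
)
```

The cost grows as points_per_axis**n, which is why such exhaustive grids are practical only for small n or coarse resolutions.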