ORIGINAL ARTICLE

Evolutionary boundary constraint handling scheme

Amir Hossein Gandomi · Xin-She Yang

Received: 15 March 2012 / Accepted: 22 June 2012 / Published online: 12 July 2012
© Springer-Verlag London Limited 2012

Abstract  The performance of an optimization tool is largely determined by the efficiency of the search algorithms used in the process, as well as by the proper handling of complex constraints. From the implementation point of view, an important part of the task of ensuring that an efficient algorithm works to its best capability is to handle the boundary constraints properly and effectively. As most studies in the literature have focused on the development of algorithms and on the performance evaluation and comparison of optimization algorithms, this crucial step has not been explored very well, and consequently only limited studies have been carried out in this field. This paper proposes a simple yet efficient evolutionary scheme for handling boundary constraints. The simplicity of this approach means that the proposed scheme is very easy to implement and thus suitable for many applications. We demonstrate this approach with an efficient algorithm, differential evolution, and we also compare it with other boundary constraint handling approaches on a wide set of benchmark problems. Based on statistical parameters, and especially mean values, the results obtained by the evolutionary scheme are better than the best known solutions obtained by the existing methods.

Keywords  Evolutionary scheme · Boundary constraint · Differential evolution · Benchmark

1 Introduction

Computational optimization has become increasingly popular in recent years because design optimization and its applications in engineering and industry have become ever more important, due to more stringent design requirements in modern engineering practice.
In addition, design problems of interest and importance nowadays are often much harder to solve, as we intend to consider more realistic, large-scale, and nonlinear optimization problems under limited resources, money, and time constraints [1]. Conventional optimization techniques cannot deal with such hard problems, and researchers have to search for better algorithms. One class of promising algorithms is metaheuristics, especially those based on evolutionary characteristics and swarm intelligence [1, 2]. In fact, metaheuristic algorithms have been one of the hot and active research areas in algorithm development concerning optimization. There are many reasons for the success of metaheuristics, and one of them is that metaheuristic algorithms have both deterministic and stochastic components. Deterministic components are often based on classical methods, while stochastic components are usually inspired by the evolutionary aspects of some biological systems. A good combination of both components can lead to a good combination of local search and global search, or of intensive local exploitation and global exploration [2, 3]. For optimization to be truly successful, an efficient algorithm is just one important core part. Another important part is the proper handling of constraints and limits.

A. H. Gandomi (corresponding author)
Young Researchers Club, Central Tehran Branch, Islamic Azad University, Tehran, Iran
e-mail: a.h.gandomi@gmail.com

X.-S. Yang
Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
e-mail: xy227@cam.ac.uk

X.-S. Yang
Mathematics and Scientific Computing, National Physical Laboratory, Teddington TW11 0LW, UK

Neural Comput & Applic (2012) 21:1449–1462
DOI 10.1007/s00521-012-1069-0
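For readers unfamiliar with the conventional boundary constraint handling approaches against which a new scheme is typically benchmarked, the three most common strategies can be sketched as follows. This is a generic illustration only, not the evolutionary scheme proposed in this paper; the function names and interfaces are our own.

```python
import random

def clamp(x, lo, hi):
    # Absorbing bound: move a violating coordinate to the nearest boundary.
    return max(lo, min(hi, x))

def reflect(x, lo, hi):
    # Reflecting bound: mirror the overshoot back into the feasible interval,
    # repeating in case the overshoot exceeds the interval width.
    while x < lo or x > hi:
        if x < lo:
            x = lo + (lo - x)
        else:
            x = hi - (x - hi)
    return x

def reinitialize(x, lo, hi, rng=random):
    # Random re-initialization: resample a violating coordinate uniformly
    # within the bounds; feasible coordinates are left unchanged.
    if x < lo or x > hi:
        return rng.uniform(lo, hi)
    return x
```

In a population-based algorithm such as differential evolution, one of these repair rules would be applied coordinate-wise to each trial vector after mutation and crossover, before fitness evaluation.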