ORIGINAL ARTICLE

Bat algorithm for constrained optimization tasks

Amir Hossein Gandomi · Xin-She Yang · Amir Hossein Alavi · Siamak Talatahari

Received: 28 January 2012 / Accepted: 21 June 2012 / Published online: 20 July 2012
© Springer-Verlag London Limited 2012

Abstract  In this study, we use a new metaheuristic optimization algorithm, called bat algorithm (BA), to solve constrained optimization tasks. BA is verified using several classical benchmark constrained problems. For further validation, BA is applied to three benchmark constrained engineering problems reported in the specialized literature. The performance of the bat algorithm is compared with that of various existing algorithms. The optimal solutions obtained by BA are found to be better than the best solutions provided by the existing methods. Finally, the unique search features used in BA are analyzed, and their implications for future research are discussed in detail.

Keywords  Bat algorithm · Constrained optimization · Metaheuristic algorithm

1 Introduction

Most real-world design optimization problems are highly nonlinear, involving many design variables under complex constraints. These constraints can be written either as simple bounds, such as ranges of material properties, or, more often, as nonlinear relationships. Nonlinearity in the objective function often produces a multimodal response landscape, while nonlinearity in the constraints leads to complex search domains. Local search algorithms such as hill-climbing and the Nelder–Mead downhill simplex method are likely to become trapped in a local optimum; they are therefore not generally suitable for solving complex optimization problems. In such cases, global algorithms are considered efficient tools for obtaining optimal solutions [1, 2]. Metaheuristic algorithms can be considered upper-level methodologies for solving specific optimization problems [3].
Two important characteristics of metaheuristics are intensification and diversification [4]. Intensification, also called exploitation, uses the information from the current best solutions: the algorithm searches the neighborhood of the current best solutions and selects the best candidates. Diversification, also called exploration, ensures that the algorithm can explore the search space efficiently, often by randomization. This is the essential step that allows the system to jump out of any local optima and to generate new solutions as diversely as possible.

The main purposes of developing modern metaheuristic algorithms are to solve problems faster, to solve large complex problems, and to obtain robust solutions [5–11]. Genetic algorithms (GA) and particle swarm optimization (PSO) are typical metaheuristic algorithms. The efficiency of metaheuristic algorithms can be attributed to the fact that they imitate the best features in nature. Obviously, the choice of optimization algorithm largely depends on the type of problem of interest and the expected quality of solution. For a specific category of problems, some algorithms may produce better results faster and more efficiently. A very promising recent development in the field of metaheuristic algorithms is the bat algorithm (BA), proposed

A. H. Gandomi (✉)
Young Researchers Club, Central Tehran Branch, Islamic Azad University, Tehran, Iran
e-mail: a.h.gandomi@gmail.com

X.-S. Yang
Mathematics and Scientific Computing, National Physical Laboratory, Teddington TW11 0LW, UK

A. H. Alavi
Young Researchers Club, Mashhad Branch, Islamic Azad University, Mashhad, Iran

S. Talatahari
Marand Faculty of Engineering, University of Tabriz, Tabriz, Iran

Neural Comput & Applic (2013) 22:1239–1255
DOI 10.1007/s00521-012-1028-9
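To illustrate the intensification–diversification balance described above, the following is a minimal, hedged sketch of a bat-algorithm-style search loop in Python. It is not the authors' implementation: the objective (`sphere`), the bounds, and all parameter values (`f_min`, `f_max`, fixed `loudness` and `pulse_rate`, the local-walk scale) are illustrative assumptions; the full algorithm also varies loudness and pulse rate over iterations, which is omitted here for brevity.

```python
import random

def sphere(x):
    """Illustrative unconstrained test objective: f(x) = sum(x_i^2), minimum 0 at origin."""
    return sum(xi * xi for xi in x)

def bat_search(obj, dim=2, n_bats=20, n_iter=200,
               f_min=0.0, f_max=2.0, loudness=0.5, pulse_rate=0.5,
               lb=-5.0, ub=5.0, seed=0):
    """Simplified bat-algorithm sketch: frequency-tuned global moves (diversification)
    plus local random walks around the best solution (intensification)."""
    rng = random.Random(seed)
    # Initialize bat positions at random within the bounds, with zero velocities.
    pos = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    fit = [obj(p) for p in pos]
    b = min(range(n_bats), key=lambda i: fit[i])
    best_pos, best_fit = pos[b][:], fit[b]

    for _ in range(n_iter):
        for i in range(n_bats):
            # Diversification: a frequency-tuned move pulling bat i toward the best bat.
            freq = f_min + (f_max - f_min) * rng.random()
            vel[i] = [v + (x - bp) * freq for v, x, bp in zip(vel[i], pos[i], best_pos)]
            cand = [min(max(x + v, lb), ub) for x, v in zip(pos[i], vel[i])]

            # Intensification: with some probability, take a small local random
            # walk around the current best solution instead (scale 0.01 is assumed).
            if rng.random() > pulse_rate:
                cand = [min(max(bp + 0.01 * rng.gauss(0.0, 1.0), lb), ub)
                        for bp in best_pos]

            f_cand = obj(cand)
            # Accept improving candidates with a probability tied to loudness.
            if f_cand <= fit[i] and rng.random() < loudness:
                pos[i], fit[i] = cand, f_cand
            if f_cand <= best_fit:
                best_pos, best_fit = cand[:], f_cand
    return best_pos, best_fit

best_x, best_f = bat_search(sphere)
print(best_f)  # converges toward 0 on this smooth test function
```

The key design point this sketch mirrors is that the two mechanisms are interleaved per bat per iteration: the velocity update supplies global exploration, while the conditional random walk around the best solution supplies local exploitation, with the loudness-gated acceptance moderating how greedily improvements are taken.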