Proceedings of GO 2005, pp. 1–5.

Variable Neighbourhood Search for the Global Optimization of Constrained NLPs

Leo Liberti 1 and Milan Dražić 2

1 DEI, Politecnico di Milano, P.zza L. da Vinci 32, 20133 Milano, Italy, liberti@elet.polimi.it
2 Faculty of Mathematics, University of Belgrade, Studentski trg 16, 11000 Belgrade, Serbia and Montenegro, mdrazic@matf.bg.ac.yu

Abstract: We report on the theory and implementation of a global optimization solver for general constrained nonlinear programming problems based on Variable Neighbourhood Search, and we give comparative computational results on several instances of continuous nonconvex problems. Compared to an efficient multi-start global optimization solver, the proposed VNS solver appears to be significantly faster.

Keywords: VNS, global optimization, nonconvex, constrained, NLP.

1. Introduction

This paper describes a Variable Neighbourhood Search (VNS) solver for the global solution of continuous constrained nonlinear programming problems (NLPs) in general form:

    min_{x ∈ R^n}  f(x)
    s.t.  l ≤ g(x) ≤ u                  (1)
          x^L ≤ x ≤ x^U.

In the above formulation, x are the problem variables, f : R^n → R is a possibly nonlinear function, g : R^n → R^m is a vector of m possibly nonlinear functions, l, u ∈ R^m are the constraint bounds (which may be set to ±∞ as needed), and x^L, x^U ∈ R^n are the variable bounds.

Previous work on Variable Neighbourhood Search applied to global optimization was restricted to box-constrained NLPs (m = 0 in the above formulation) [19]. To the best of our knowledge, a VNS solver for constrained global optimization targeted at problems in general form (1) has not been implemented yet. It is worth noting, however, that the box-constrained VNS solver described in [19] is currently being tested on a reformulation of constrained problems based on penalization of explicit constraints.

2. The Variable Neighbourhood Search algorithm

Variable Neighbourhood Search (VNS) is a relatively recent metaheuristic that relies on iteratively exploring neighbourhoods of increasing size to identify better local optima [6–8]. More precisely, VNS escapes from the current local minimum x* by starting local searches from points sampled in neighbourhoods of x* whose radius grows iteratively, until a local minimum better than the current one is found. These steps are repeated until a given termination condition is met.
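To make the scheme concrete, the shake-and-descend loop described above can be sketched as follows. This is a hypothetical illustration, not the solver implemented in this paper: the local solver here is a simple compass (coordinate) search, and the explicit constraints l ≤ g(x) ≤ u of formulation (1) are handled by a quadratic penalty (in the spirit of the penalization reformulation mentioned in the Introduction); all function and parameter names are our own.

```python
import random

def vns(f, g, l, u, xL, xU, kmax=10, max_iter=200, mu=1000.0, seed=0):
    """Minimal VNS sketch for min f(x) s.t. l <= g(x) <= u, xL <= x <= xU.

    Illustrative only: constraints are folded into the objective via a
    quadratic penalty with weight mu, and the local solver is a simple
    compass search with step halving.
    """
    rng = random.Random(seed)
    n = len(xL)

    def penalized(x):
        # quadratic penalty on violations of l <= g(x) <= u
        viol = sum(max(li - gi, 0.0) ** 2 + max(gi - ui, 0.0) ** 2
                   for gi, li, ui in zip(g(x), l, u))
        return f(x) + mu * viol

    def clip(x):
        # keep iterates inside the variable bounds
        return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, xL, xU)]

    def local_search(x):
        # compass search: try +/- steps along each coordinate, halve on failure
        step = [0.1 * (hi - lo) for lo, hi in zip(xL, xU)]
        fx = penalized(x)
        for _ in range(max_iter):
            improved = False
            for i in range(n):
                for s in (step[i], -step[i]):
                    y = clip(x[:i] + [x[i] + s] + x[i + 1:])
                    fy = penalized(y)
                    if fy < fx - 1e-12:
                        x, fx, improved = y, fy, True
            if not improved:
                step = [s / 2 for s in step]
                if max(step) < 1e-6:
                    break
        return x, fx

    # initial local minimum from the centre of the box
    x_best, f_best = local_search(clip([(lo + hi) / 2 for lo, hi in zip(xL, xU)]))
    k = 1
    while k <= kmax:
        # shake: sample in a neighbourhood of x_best with radius growing in k
        r = k / kmax
        x = clip([xi + r * (hi - lo) * rng.uniform(-1, 1)
                  for xi, lo, hi in zip(x_best, xL, xU)])
        x, fx = local_search(x)
        if fx < f_best - 1e-9:
            x_best, f_best, k = x, fx, 1   # improvement: recentre, reset k
        else:
            k += 1                          # no improvement: enlarge neighbourhood
    return x_best, f_best
```

For example, minimizing (x0 − 2)² + (x1 + 1)² subject to 3 ≤ x0 ≤ 10 over the box [−5, 5]² drives the iterates to the active-constraint optimum near (3, −1); because the penalty is inexact, the returned point is near-feasible rather than exactly feasible.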