Proceedings of the 2010 Winter Simulation Conference
B. Johansson, S. Jain, J. Montoya-Torres, J. Hugan, and E. Yücesan, eds.
CALIBRATING SIMULATION MODELS USING THE KNOWLEDGE GRADIENT WITH
CONTINUOUS PARAMETERS
Warren R. Scott
Operations Research and Financial Engineering
Princeton University
Princeton, NJ 08544, USA
Warren B. Powell
Operations Research and Financial Engineering
Princeton University
Princeton, NJ 08544, USA
Hugo P. Simão
Operations Research and Financial Engineering
Princeton University
Princeton, NJ 08544, USA
ABSTRACT
We describe an adaptation of the knowledge gradient, originally developed for discrete ranking and selection
problems, to the problem of calibrating continuous parameters for the purpose of tuning a simulator. The
knowledge gradient for continuous parameters uses a continuous approximation of the expected value of a single
measurement to guide the choice of where to collect information next. We show how to find the parameter
setting that maximizes the expected value of a measurement by optimizing a continuous but nonconcave
surface. We compare the method to sequential kriging for a series of test surfaces, and then demonstrate its
performance in the calibration of an expensive industrial simulator.
1 INTRODUCTION
We consider the problem of tuning the parameters of an expensive simulator to achieve specific performance
metrics. This problem requires searching over a continuous, multidimensional parameter space to find the
settings that produce the best results. Each measurement is time consuming and yields only a noisy
observation of performance. In this paper, we propose an algorithm that is asymptotically optimal while
also promising fast convergence.
We use as our algorithmic framework the correlated knowledge gradient algorithm presented in
(Frazier, Powell, and Dayanik 2009) which combines a model for the function being maximized with a
criterion for choosing the sampling decision based on the value of an observation. The knowledge gradient
policy is widely applicable and has produced promising results when maximizing a variety of standard test
functions. The strength of the policy lies in its implicit balancing of exploration and exploitation when
choosing the sampling decision, which in turn yields its convergence properties.
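To make the policy concrete, the following is a minimal sketch of the knowledge gradient for the simpler case of independent normal beliefs over a finite set of alternatives; the correlated version of (Frazier, Powell, and Dayanik 2009) generalizes the belief update, and the function name `kg_values` is ours, not from that paper.

```python
import numpy as np
from scipy.stats import norm

def kg_values(mu, sigma, noise_std):
    """Knowledge-gradient value of measuring each alternative once, for
    independent normal beliefs N(mu_x, sigma_x^2) and Gaussian measurement
    noise with standard deviation noise_std."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    # Predictive change in the belief about x from one noisy measurement:
    # sigma_tilde = sigma^2 / sqrt(sigma^2 + noise^2).
    sigma_tilde = sigma**2 / np.sqrt(sigma**2 + noise_std**2)
    # Best competing posterior mean for each alternative.
    best_other = np.array([np.max(np.delete(mu, i)) for i in range(len(mu))])
    zeta = -np.abs(mu - best_other) / sigma_tilde
    # f(z) = z * Phi(z) + phi(z): expected increment of the maximum.
    f = zeta * norm.cdf(zeta) + norm.pdf(zeta)
    return sigma_tilde * f

# The policy measures the alternative with the largest knowledge gradient.
mu_belief = [1.0, 1.2, 0.8]
sigma_belief = [0.5, 0.1, 0.9]
x_next = int(np.argmax(kg_values(mu_belief, sigma_belief, noise_std=0.3)))
```

In this toy example the policy chooses alternative 2, which has the lowest posterior mean but the highest uncertainty: a single measurement there has the greatest expected effect on which alternative looks best, illustrating the implicit exploration-exploitation tradeoff.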
A popular approach for optimizing expensive functions is Bayesian global optimization, which combines
a model with a sampling criterion to sequentially choose points to sample in order to maximize a function.
Common sampling criteria include the probability of improvement presented in (Kushner 1964) and expected
improvement criteria (see (Zhilinskas 1975); (Mockus 1993); (Locatelli 1997); (Jones, Schonlau, and Welch 1998);
(Huang, Allen, Notz, and Zeng 2006)). These sampling criteria can be calculated exactly under a kriging
or Gaussian process regression model (see (Matheron 1963); (Sacks, Welch, Mitchell, and Wynn 1989);
(Cressie 1990); (Kleijnen 2009); (Rasmussen and Williams 2006)). Gaussian process regression treats the
truth as a realization of a Gaussian process and is convenient for interpretation because it combines a
regression function with a distribution of uncertainty about that function. The correlated
knowledge gradient presented in (Frazier, Powell, and Dayanik 2009) computes the expected improvement
in the performance of a design as a result of a single measurement, for problems with a finite (and not
too large) number of potential measurements. In this paper, we present an adaptation of the knowledge
gradient for problems with multidimensional, continuous parameters. Our presentation is based on the work
in (Scott, Frazier, and Powell 2010).
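For concreteness, the following is a minimal sketch of Gaussian process regression paired with the expected improvement sampling criterion cited above; it is not the authors' implementation, and the squared-exponential kernel with fixed hyperparameters `ell`, `sf`, and `noise` is an illustrative assumption (in practice these are estimated from data).

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(X, y, Xstar, ell=1.0, sf=2.0, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP with a
    squared-exponential kernel, conditioned on observations (X, y)."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # train covariance plus jitter
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xstar, X)                       # test/train covariance
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = sf**2 - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mean, sd, best):
    """Expected improvement over the best observed value (maximization)."""
    z = (mean - best) / sd
    return (mean - best) * norm.cdf(z) + sd * norm.pdf(z)

# Noise-free samples of an unknown concave function on [0, 4].
X = np.array([0.0, 1.0, 3.0, 4.0])
y = -(X - 2.0) ** 2
grid = np.linspace(0.0, 4.0, 81)
mean, sd = gp_posterior(X, y, grid)
ei = expected_improvement(mean, sd, y.max())
x_next = grid[np.argmax(ei)]               # next point to sample
```

The criterion drives sampling away from the already-measured boundary points, where the posterior uncertainty collapses, and toward the unexplored interior; the knowledge gradient differs in that it values a measurement by its expected effect on the final implementation decision rather than on the best observed value.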
We begin by writing our model calibration problem as an optimization model. Next we describe the
knowledge gradient framework for sequentially searching for the best set of parameters, where the goal is fast
convergence in the face of noisy measurements. We sketch the logic used to handle continuous parameters by
978-1-4244-9864-2/10/$26.00 ©2010 IEEE