Proceedings of the 2018 Winter Simulation Conference
M. Rabe, A. A. Juan, N. Mustafee, A. Skoogh, S. Jain, and B. Johansson, eds.
UNBIASED METAMODELING VIA LIKELIHOOD RATIOS
Jing Dong
Graduate School of Business
Columbia University
New York, NY 10027, USA
M. Ben Feng
Department of Statistics and Actuarial Science
University of Waterloo
Waterloo, Ontario, CANADA
Barry L. Nelson
Department of Industrial Engineering and Management Sciences
Northwestern University
Evanston, IL 60208, USA
ABSTRACT
Metamodeling has been a topic of longstanding interest in stochastic simulation because of the usefulness of
metamodels for optimization, sensitivity analysis, and real- or near-real-time decision making. Experiment design is
the foundation of classical metamodeling: an effective experiment design uncovers the spatial relationships
among the design/decision variables and the simulation response; therefore, more design points, providing
better coverage of the space, are almost always better. However, metamodeling based on likelihood ratios (LRs)
turns the design question on its head: each design point provides an unbiased prediction of the response
at any other location in space, but perhaps with such inflated variance as to be counterproductive. Thus,
the question becomes more which design points to employ for prediction and less where to place them. In
this paper we take the first comprehensive look at LR metamodeling, categorizing both the various types
of LR metamodels and the contexts in which they might be employed.
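To make the LR prediction idea concrete, the following is a minimal sketch, not taken from the paper: the exponential input model, the rate parameters theta0 and theta1, and the identity response Y = X are illustrative assumptions. Samples drawn at one design point theta0, reweighted by the density ratio f(x; theta1) / f(x; theta0), yield an unbiased prediction of the mean response at theta1 with no new simulation.

```python
import numpy as np

# Illustrative LR-metamodeling sketch; the exponential input model, the
# rates below, and the identity response Y = X are assumptions for this
# example, not the paper's experiments.
rng = np.random.default_rng(42)
theta0, theta1 = 1.0, 0.8  # design-point rate and prediction-point rate
n = 200_000

# Simulate only at theta0: X ~ Exponential(rate=theta0), response Y = X.
x = rng.exponential(scale=1.0 / theta0, size=n)

# Likelihood ratio f(x; theta1) / f(x; theta0) for exponential densities.
lr = (theta1 / theta0) * np.exp(-(theta1 - theta0) * x)

# The LR-weighted average is unbiased for the mean response at theta1,
# which is 1 / theta1, even though no samples were drawn there.
est = float(np.mean(x * lr))
print(est)  # close to 1 / theta1 = 1.25
```

Moving theta1 farther from theta0 makes the weights heavy-tailed, so the variance of the weighted average can be much larger than that of direct simulation at theta1 — precisely the bias-free but variance-inflated tradeoff described above.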
1 INTRODUCTION
Simulation metamodeling—representing some aspect of the performance of a system that is described by
a stochastic simulation via a functional model—has been of interest since at least the 1960s; see Kleijnen
(1974, 1975) for one of the first comprehensive treatments. Early works focused on the mean
response and linear regression metamodels, with an emphasis on experiment designs that exploited the
advantages of simulation over a physical experiment; see for instance Schruben and Margolin (1978). There
has been substantial progress since then for different responses and different metamodel forms.
The value of metamodeling is that it draws statistical strength from simulations run at a number
of distinct design points to make better predictions at settings not yet simulated, or even at the design
points themselves. Once created, a metamodel can typically be evaluated with little computational effort,
while simulations at new settings take time. Further, the fitted metamodel can provide insight into system
behavior—e.g., the coefficients of a linear regression may be interpreted as rates of change with respect to the
design variables—or even be used for system optimization. Experiment design for fitting linear regression
metamodels, and more recently inference based on Gaussian process metamodels, are well-studied topics
in the simulation literature and beyond (Barton and Meckesheimer 2006).
Metamodeling inherently involves a bias-variance tradeoff: bias because the underlying functional
model, even if “fitted” optimally, is not of the same form as the true, unknown response surface; and
variance because the more flexible the base metamodel is, the more sensitive it is to the random simulation
978-1-5386-6572-5/18/$31.00 ©2018 IEEE