IIE Transactions (2006) 38, 445–461
Copyright © "IIE" ISSN: 0740-817X print / 1545-8830 online
DOI: 10.1080/07408170500440437

Designing experiments for robust-optimization problems: the V_s-optimality criterion

HILLA GINSBURG and IRAD BEN-GAL*
Department of Industrial Engineering, Tel-Aviv University, Tel-Aviv, 69978, Israel
E-mail: bengal@eng.tau.ac.il
*Corresponding author

Received August 2004 and accepted September 2005

We suggest an experimentation strategy for the robust design of empirically fitted models. The suggested approach is used to design experiments that minimize the variance of the optimal robust solution. The new design-of-experiment optimality criterion, termed V_s-optimal, prioritizes the estimation of a model's coefficients such that the variance of the optimal solution is minimized by the performed experiments. We discuss how the proposed criterion relates to known optimality criteria. We present an analytical formulation of the suggested approach for linear models and a numerical procedure for higher-order or nonpolynomial models. In comparison with conventional robust-design methods, our approach provides more information on the robust solution by numerically generating its multidimensional distribution. Moreover, in a case study, the proposed approach yields a better robust solution than these standard methods.

1. Introduction

In the last three decades, the Taguchi method of robust design has been widely applied to the design of various systems. In many cases, the exact underlying relationship between the design factors and the system response is unknown. Hence, there is a need to design and conduct experiments to gain information. The manner in which these experiments are performed clearly affects the obtained solution to the robust-design problem. Yet, there is no standard method by which to conduct these experiments. As a result, various experimentation strategies are used, depending on the applied methodology.
The Taguchi method proposes a set of experimental design matrices ("orthogonal arrays") to estimate the effects of the design factors and to select the combination that yields the highest Signal-to-Noise (S/N) ratio (Taguchi, 1978, 1986; Phadke, 1989). Later approaches use a standard canonical approach, in which the experimenter implements the following two-step procedure. First, the experimenter estimates an empirical response model for the unknown system by using conventional experimental matrices, such as factorial designs. These matrices are often based on known Design Of Experiment (DOE) optimality criteria, such as D-optimal designs (e.g., as in Myers and Montgomery (1995)). Second, he/she minimizes a loss function that is based on the estimated model and obtains its optimal solution. The canonical approach is thus problematic as long as the estimated model deviates from the "real" unknown model: if the estimated model is noisy, a different "optimal" solution will be obtained for each set of experimental results. We aim to address this problem at the experimental stage by combining the above two-step procedure into a unified DOE protocol. In particular, we suggest a DOE optimality criterion, termed V_s-optimal, that seeks to minimize the variance of the optimal solution rather than, for example, the variance of the regression coefficients, as done by the D-optimal criterion. The proposed criterion minimizes the variance of the solution by prioritizing the estimation of the various model coefficients. Thus, at each experimental stage, it indicates which coefficients should be estimated more accurately than others to obtain a consistent solution.

The area of robust optimization (Kouvelis and Yu, 1997; Xu and Albin, 2003) addresses the above problem caused by the canonical approach.
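As a toy illustration of why the canonical two-step procedure can be fragile (this example is ours, not from the paper), consider a hypothetical one-factor system with a quadratic response: repeatedly fitting a model to noisy runs of a fixed design and then minimizing each fitted loss yields a different "optimal" solution on every repetition, so the solution itself has nonzero variance.

```python
# Illustrative sketch of the canonical two-step procedure on a
# hypothetical one-factor quadratic response (our assumption, not the
# paper's case study). Step 1: fit an empirical quadratic model from
# noisy runs of a fixed design. Step 2: minimize the fitted loss.
import numpy as np

rng = np.random.default_rng(0)

def run_experiments(x, noise_sd=0.05):
    # Hypothetical unknown system with true optimum at x = 0.5,
    # observed with additive Gaussian noise.
    return (x - 0.5) ** 2 + 1.0 + rng.normal(0.0, noise_sd, size=np.shape(x))

design = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # fixed one-factor design

solutions = []
for _ in range(200):                       # repeat the whole procedure
    y = run_experiments(design)
    b2, b1, b0 = np.polyfit(design, y, 2)  # Step 1: fit y = b0 + b1*x + b2*x^2
    solutions.append(-b1 / (2.0 * b2))     # Step 2: minimizer of the fitted loss

solutions = np.array(solutions)
print("mean of fitted optima:", solutions.mean())
print("std of fitted optima: ", solutions.std())  # nonzero: the solution varies
```

The spread of the fitted optima across repetitions is exactly the solution variance that the V_s-optimal criterion targets at the design stage.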
Similar to the proposed approach, in robust optimization the coefficients of the response model are considered to be unknown and are therefore estimated and treated as random variables. However, the objective of robust optimization is to identify solutions that are insensitive to the estimation errors. A specific objective function is defined for loss minimization by the minimax criterion (Ben-Tal and Nemirovski, 1998). It requires the loss of the obtained solution to be small in the worst case, namely, over a family of response models whose coefficients are chosen from predefined confidence intervals.
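A minimal sketch of the minimax idea, under assumptions of ours (a hypothetical quadratic loss whose two coefficients are only known to lie in confidence intervals): evaluate each candidate solution against the worst coefficient combination in the box and pick the candidate whose worst-case loss is smallest.

```python
# Illustrative sketch (our example, not from the cited papers) of a
# grid-based minimax robust choice. The loss L(x; b) = b2*x**2 + b1*x
# is a hypothetical quadratic whose coefficients are only known to lie
# in confidence intervals.
import itertools

import numpy as np

# Hypothetical confidence intervals around the estimated coefficients
b1_interval = (-1.2, -0.8)   # estimated b1 = -1.0 +/- 0.2
b2_interval = (0.9, 1.1)     # estimated b2 =  1.0 +/- 0.1

candidates = np.linspace(0.0, 1.0, 101)   # candidate solutions

def worst_case_loss(x):
    # For a fixed x the loss is linear in (b1, b2), so its maximum over
    # the coefficient box is attained at one of the four corners.
    return max(b2 * x ** 2 + b1 * x
               for b1, b2 in itertools.product(b1_interval, b2_interval))

losses = [worst_case_loss(x) for x in candidates]
x_robust = candidates[int(np.argmin(losses))]
print("minimax solution:", x_robust)
```

Note that the minimax choice here differs from the nominal optimum (x = 0.5 for the estimated coefficients): guarding against the worst case in the coefficient box shifts the solution, which is the insensitivity-to-estimation-error behavior described above.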