European Journal of Operational Research 289 (2021) 595–610
Decision Support
Probabilistic sensitivity measures as information value
Emanuele Borgonovo (a,∗), Gordon B. Hazen (b), Victor Richmond R. Jose (c), Elmar Plischke (d)

a Bocconi University and BIDSA, Milan, Italy
b Northwestern University, Evanston, IL, USA
c Georgetown University, Washington, DC, USA
d Clausthal University of Technology, Clausthal-Zellerfeld, Germany
Article info
Article history:
Received 12 November 2019
Accepted 6 July 2020
Available online 12 July 2020
Keywords:
Decision support systems
Information value
Probabilistic sensitivity analysis
Rényi’s postulates
Abstract
Decision makers increasingly rely on forecasts or predictions generated by quantitative models. Best practices recommend that a forecast report be accompanied by a sensitivity analysis. A wide variety of probabilistic sensitivity measures have been suggested; however, model inputs may be ranked differently by different sensitivity measures. Is there some way to reduce this disparity by identifying which probabilistic sensitivity measures are most appropriate for a given reporting context? We address this question by postulating that importance rankings of model inputs generated by a sensitivity measure should correspond to the information value for those inputs in the problem of constructing an optimal report based on some proper scoring rule. While some sensitivity measures have already been identified as information value under proper scoring rules, we identify others and provide some generalizations. We address the general question of when a sensitivity measure has this property, presenting necessary and sufficient conditions. We directly examine whether sensitivity measures retain important properties such as transformation invariance and compliance with Rényi’s Postulate D for measures of statistical dependence. These results provide a means for selecting the most appropriate sensitivity measures for a particular reporting context and give the analyst reasonable justification for that selection. We illustrate these ideas using a large-scale probabilistic safety assessment case study used to support decision making in the design and planning of a lunar space mission.
© 2020 Elsevier B.V. All rights reserved.
1. Introduction
Forecasts or predictions generated by quantitative models support decision makers in areas ranging from business planning (Baucells & Borgonovo, 2013) to climate change modeling (Stehfest et al., 2019). Frequently, these models are built to estimate a key quantity of interest (Y), which is one of the inputs to a panel where representative agents conduct the decision-making process following a pre-established protocol (French & Argyris, 2018). The analyst who develops or implements the simulation is expected to provide a forecast of Y, which can be a point estimate, a quantile, or a cumulative distribution function of Y. Best practices recommend that such a report be accompanied by a sensitivity analysis that describes the level of uncertainty in the forecast and identifies which model inputs drive the forecast and are therefore candidates for additional information acquisition.
∗ Corresponding author.
E-mail addresses: emanuele.borgonovo@unibocconi.it (E. Borgonovo), gbh305@northwestern.edu (G.B. Hazen), vrj2@georgetown.edu (V.R.R. Jose), elmar.plischke@tu-clausthal.de (E. Plischke).
Common approaches to sensitivity analysis study the deterministic variation of the quantity of interest about a base value or best estimate. This type of analysis underlies popular tools such as tornado diagrams (Howard, 1988) and spider plots (Eschenbach, 1992). Analysts also have the option of assigning probability distributions to uncertain model inputs and of using these distributions to construct numerical measures of sensitivity, which we refer to as probabilistic sensitivity measures. However, the analyst may find a variety of such sensitivity measures to use, e.g., variance-based (Saltelli & Tarantola, 2002; Wagner, 1995), quantile-based (Browne, Fort, Iooss, & Le Gratiet, 2017), and distribution-based (Gamboa, Klein, & Lagnoux, 2018) measures, and some authors (Felli & Hazen, 1998; 1999; Oakley, 2009; Strong, Oakley, & Brennan, 2014) advocate the use of value of information.
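To make one member of this family concrete: the first-order variance-based index of input X_i is S_i = Var(E[Y | X_i]) / Var(Y), and it can be estimated with a standard "pick-freeze" Monte Carlo design. The sketch below is our own illustration, not taken from this paper: the toy model Y = X1 + 2·X2 + X1·X3 with independent uniform inputs, the sample size, and the seed are all assumptions chosen for the example.

```python
# Illustrative sketch: pick-freeze Monte Carlo estimate of first-order
# variance-based (Sobol') sensitivity indices S_i = Var(E[Y|X_i]) / Var(Y).
# The model and sample size are hypothetical choices, not from the paper.
import random

random.seed(42)
N = 20000  # Monte Carlo sample size (assumption)
d = 3      # number of model inputs

def model(x):
    # Toy quantity of interest: additive in X1, X2 plus an X1-X3 interaction.
    return x[0] + 2.0 * x[1] + x[0] * x[2]

# Two independent input samples A and B, each X_i ~ Uniform(0, 1).
A = [[random.random() for _ in range(d)] for _ in range(N)]
B = [[random.random() for _ in range(d)] for _ in range(N)]

yA = [model(x) for x in A]
mean_yA = sum(yA) / N
var_yA = sum((y - mean_yA) ** 2 for y in yA) / N

S = []
for i in range(d):
    # "Freeze" column i from A, redraw all other inputs from B:
    # Cov(Y_A, Y_C) then estimates Var(E[Y | X_i]).
    yC = [model([a[j] if j == i else b[j] for j in range(d)])
          for a, b in zip(A, B)]
    mean_yC = sum(yC) / N
    cov = sum(ya * yc for ya, yc in zip(yA, yC)) / N - mean_yA * mean_yC
    S.append(cov / var_yA)

print([round(s, 3) for s in S])  # X2 should rank first, then X1, then X3
```

For this toy model the analytic values are S1 ≈ 0.34, S2 ≈ 0.61, S3 ≈ 0.04, so the estimator recovers the importance ranking X2 > X1 > X3; a quantile- or distribution-based measure applied to the same model could, in general, rank the inputs differently, which is exactly the disparity discussed above.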
The very variety of available sensitivity measures could be a
stumbling block for the analyst: Which is the right one to use?
One of the difficulties associated with complex decision making
problems is that analysts may be required to work in contexts in
which there is no specified objective function or set of alternatives,
perhaps due to a decision having already been made (Eschenbach,
1992). In these cases, because there is no explicit comparison of
https://doi.org/10.1016/j.ejor.2020.07.010