Planning for success: The social approach to building Bayesian models

Krol Kevin Mathias, Cynthia Isenhour, Alex Dekhtyar, Judy Goldsmith, Beth Goldstein

March 14, 2007

Abstract

We introduce a new variant of Markov decision processes, called MDPs with action results, and a variant of dynamic Bayesian networks, called bowties, for modeling the effects of stochastic actions. Bowties grew out of our work on decision-support systems for advisors in the US social welfare system. Bowties, and our elicitation process for them, are designed to elicit dynamic Bayesian network fragments from domain experts who think narratively rather than quantitatively. Our elicitation process has worked well with the welfare case managers and other domain experts, in the sense of capturing consistent and validated models.

1 Introduction

Planning under uncertainty with constraints is a metaphor for our times and a computational problem of interest in AI, operations research, and decision science. Within AI, planning under uncertainty is most commonly modelled using Markov decision processes (MDPs). There are well-understood solvers, hereafter called planners, for MDPs. A major issue in using MDPs to solve real-world problems is building appropriate mathematical models of the phenomena of interest.

This paper addresses the problem of acquiring models from domain experts. We concentrate on the model-building process for the domain of the Kentucky “welfare-to-work” social welfare system. Those who guide welfare recipients (clients) through the Byzantine maze of requirements and constraints, and who negotiate plans with and for the clients, are called case managers. Our domain experts have primarily been case managers, with a few welfare agency managers.
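To make the MDP planners mentioned above concrete, the following is a minimal sketch of value iteration, one standard solution method for MDPs. The two-state toy MDP here is a hypothetical illustration, not a model from this work.

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Compute optimal state values for a small, fully specified MDP.

    P[s][a] is a list of (next_state, probability) pairs;
    R[s][a] is the immediate reward for taking action a in state s.
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(
                R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in actions
            )
            for s in states
        }
        # Stop when successive value functions are close enough.
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

# Toy example (hypothetical): an agent is "idle" or "working".
states = ["idle", "working"]
actions = ["stay", "switch"]
P = {
    "idle":    {"stay": [("idle", 1.0)],
                "switch": [("working", 0.8), ("idle", 0.2)]},
    "working": {"stay": [("working", 1.0)],
                "switch": [("idle", 1.0)]},
}
R = {
    "idle":    {"stay": 0.0, "switch": -1.0},
    "working": {"stay": 2.0, "switch": 0.0},
}
V = value_iteration(states, actions, P, R)
```

In this sketch, "working" ends up more valuable than "idle", since staying in it earns a steady reward. Real welfare-to-work models are far larger, which is precisely why eliciting their transition probabilities from domain experts is the hard part this paper addresses.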