Abstract— Quantitative methods of analysis have progressed faster than quantitative methods of capturing, representing, propagating, and analyzing uncertainty in the realm of computational thinking, adversely affecting the quality of both scientific computational analysis and important policy decisions. Uncertainty arises from incomplete model input information (aleatory uncertainty), incomplete model structure information (epistemic uncertainty), and incomplete understanding of model dynamics. We describe a work-in-progress computational approach, framework, and language, RiskModelica, that will: 1) support the representation, propagation, and calibration of aleatory uncertainty using probability theory, probability boxes, and the Dempster-Shafer theory of evidence; 2) develop reliable methodologies (algorithms, data acquisition and management procedures, software, and theory) for quantifying uncertainty in computer predictions; 3) support the exploration of epistemic uncertainty, utilizing causal analysis and static and dynamic program slicing to characterize the dependencies, causal relationships, and interactions of design decisions; and 4) enable subject matter experts to observe model characteristics under novel conditions of interest, as a way of gaining insight into uncertainties. These capabilities represent a revolutionary approach to quantitatively capturing, representing, propagating, and analyzing the uncertainties that arise in the process of computational thinking.

Index Terms— computer languages, emergent behavior, quantifying uncertainty, risk analysis

I. INTRODUCTION

Computational thinking is ubiquitous. Unfortunately, so are the risks associated with unsystematic management of uncertainty during the design, development, and use of computational models. One need only examine the conflicting model results in nearly every scientific endeavor to appreciate the problem.
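The abstract's first capability, representing aleatory uncertainty with probability boxes and Dempster-Shafer structures, can be made concrete with a minimal sketch. The code below is our own illustration, not part of RiskModelica: expert evidence is expressed as probability mass assigned to intervals rather than to points, and belief and plausibility then bracket the unknown probability of an event, which is exactly the bound information a probability box records. The function names and evidence values are hypothetical.

```python
# A minimal sketch of a Dempster-Shafer structure (illustrative only,
# not RiskModelica code). Evidence is a list of (interval, mass) pairs;
# belief and plausibility give lower and upper probability bounds.

def belief(focal_elements, lo, hi):
    """Lower bound: total mass of focal intervals wholly inside [lo, hi]."""
    return sum(m for (a, b), m in focal_elements if lo <= a and b <= hi)

def plausibility(focal_elements, lo, hi):
    """Upper bound: total mass of focal intervals intersecting [lo, hi]."""
    return sum(m for (a, b), m in focal_elements if a <= hi and b >= lo)

# Hypothetical expert evidence about a model parameter; masses sum to 1.
evidence = [((0.0, 0.4), 0.5), ((0.2, 0.6), 0.25), ((0.5, 1.0), 0.25)]

# The probability that the parameter lies in [0.0, 0.5] is known only
# to lie between these two bounds:
print(belief(evidence, 0.0, 0.5))        # → 0.5
print(plausibility(evidence, 0.0, 0.5))  # → 1.0
```

When every focal element shrinks to a point, belief and plausibility coincide and ordinary probability is recovered; wider intervals widen the gap between the bounds, which is how this representation keeps incertitude distinct from variability.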
Many risks arise because uncertainties are quantified in a supplementary, rather than an integrative, manner. There are two primary reasons they are not integrated: 1) uncertainties are often epistemic, and 2) no good general-purpose methods exist for capturing and propagating experts' characterizations of uncertainty in their models. The impact is profound. How can policy makers confidently make informed decisions involving billions of dollars and millions of people when poor management of uncertainty pervades model development and analysis? We present several work-in-progress methods for capturing and propagating characterizations of uncertainty in computational-thinking-based models, and for exploring uncertainties that emerge during model execution.

Manuscript received January 11, 2008. This work was supported in part by the National Science Foundation under Grant 0426971. The authors are with the Department of Computer Science, School of Engineering and Applied Science, University of Virginia, 151 Engineer's Way, P.O. Box 400740, Charlottesville, VA 22904-4740. Email: [mspiegel, rjg7v, reynolds]@cs.virginia.edu.

Modeling under uncertainty has been of paramount importance over the past half century, as quantitative methods of analysis have been developed to take advantage of computational resources. Simulation is gaining prominence as the proper tool of scientific analysis under circumstances where it is infeasible or impractical to study the system in question directly. According to a February 2006 report of the NSF Blue Ribbon Panel on Simulation-Based Engineering Science (SBES): “The development of reliable methodologies – algorithms, data acquisition and management procedures, software, and theory – for quantifying uncertainty in computer predictions stands as one of the most important and daunting challenges in advancing SBES” [1]. Its daunting nature is evident in the results of epidemiology studies conducted this century.
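The input sensitivity that makes such epidemiology studies daunting can be seen even in a caricature of an epidemic model. The sketch below is our own toy illustration, not drawn from the studies cited: a discrete-time SIR (susceptible-infected-recovered) model in which modest changes to the transmission rate produce large changes in the predicted epidemic size. All parameter values are hypothetical.

```python
# Toy discrete-time SIR model (illustrative only; parameters hypothetical).
# Shows how a +/-25% change in one input, the transmission rate, swings
# the predicted number of infections substantially.

def epidemic_size(beta, gamma=0.1, days=365, n=1_000_000, i0=10):
    """Cumulative infections predicted by a daily-step SIR model."""
    s, i, r = n - i0, i0, 0
    for _ in range(days):
        new_infections = beta * s * i / n   # transmissions this day
        new_recoveries = gamma * i          # recoveries this day
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return n - s  # everyone who left the susceptible pool was infected

# Vary the transmission rate +/-25% around a nominal 0.20 per day.
for beta in (0.15, 0.20, 0.25):
    print(beta, round(epidemic_size(beta)))
```

Even in this caricature, the predicted total swings substantially across a modest input range; real policy models have many more interacting uncertain inputs, which is precisely the kind of sensitivity at issue in the studies discussed below.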
Epidemiologists have addressed the question of government-level actions and reactions regarding the spread of infectious diseases such as smallpox and bird flu. Should a comprehensive vaccination program be initiated? How and to what degree should infected individuals be isolated, and for how long? The range of answers to these questions is broad and full of conflict. Recently, Elderd [2] has shown analytically that just four of the potentially hundreds of critical independent variables in these studies induce extreme sensitivity in model predictions, leading to serious conflict regarding remedial approaches involving billions of dollars and millions of people. Clearly, there is a need for robust uncertainty representation and analysis methods in computational thinking, so that scientists and policy makers can better understand and characterize the properties of the predictions they make based on their models.

Our envisioned solution builds on the acausal modeling language Modelica, producing a language we call “RiskModelica,” by incorporating novel methods for quantifying uncertainty formally and robustly, and for propagating that uncertainty through the modeling process and revealing its effects on model outcomes, for later use by scientists and policy makers. Further, semi-automated methods for exploring

Quantifying and Analyzing Uncertainty in Simulations to Enable User Understanding
Michael Spiegel, Ross Gore, and Paul F. Reynolds, Jr., University of Virginia