A cognitively plausible algorithm for causal inference

Gordon Briggs and Sangeet Khemlani
{gordon.briggs, sangeet.khemlani}@nrl.navy.mil
Navy Center for Applied Research in Artificial Intelligence
US Naval Research Laboratory, Washington, DC 20375 USA

Abstract

People without any advanced training can make deductions about abstract causal relations. For instance, suppose you learn that habituation causes seriation, and that seriation prevents methylation. The vast majority of reasoners infer that habituation prevents methylation. Cognitive scientists disagree on the mechanisms that underlie causal reasoning, but many argue that people can mentally simulate causal interactions. We describe a novel algorithm that makes domain-general causal inferences. The algorithm constructs small-scale iconic simulations of causal relations, and so it implements the “model” theory of causal reasoning (Goldvarg & Johnson-Laird, 2001; Johnson-Laird & Khemlani, 2017). It distinguishes between three different causal relations: causes, enabling conditions, and preventions. And, it can draw inferences about both orthodox relations (habituation prevents methylation) and omissive causes (the failure to habituate prevents methylation). To test the algorithm, we subjected participants to a large battery of causal reasoning problems and compared their performance to what the algorithm predicted. We found a close match between human causal reasoning and the patterns predicted by the algorithm.

Keywords: causation; mental models; reasoning; simulation

Introduction

People routinely make inferences about complex causal matters. For instance, consider the following description about a particular farm:

1. Flourishing weeds will cause a lack of nutrients.
   A lack of nutrients will prevent the vegetables from growing.
   The lack of vegetables will enable an early harvest.
   What is the relation between the growth of weeds and an early harvest?
Reasoners needn’t have a background in botany to infer a possible causal relation between the two events, such as in (2):

2. Flourishing weeds will cause an early harvest.

People’s inferences are systematic, and at least some errors are obvious, i.e., anyone who infers (3) from the information in the description above is grossly mistaken:

3. Flourishing weeds will prevent an early harvest.

How do people infer causal relations between events? Sometimes, perceptual cues may drive people to infer a causal connection between one event and another: if you observe that when a man flips a switch, a particular light goes off, it seems reasonable to infer a causal relation between the switch and the light. Indeed, the temporal contiguity of two events can be sufficient to imply causality (e.g., Lagnado & Sloman, 2006; Rottman & Keil, 2012). But the preceding farming example demonstrates that people can infer causal relations from descriptions, not just observations, and that they can do so in the absence of any explicit temporal information.

How do people make causal inferences? A popular approach in artificial intelligence simulates human causal reasoning using causal Bayes nets and a calculus developed by Pearl (2009). It allows precise calculations of conditional probabilities, e.g., the probability of an early harvest given flourishing weeds, P(early harvest | flourishing weeds), provided that relevant causal relations are translated into the notation of a graphical network. While the approach can distinguish between causes and mere associations, Pearl’s calculus cannot explain how reasoners infer novel causal relations where none had existed before, i.e., it cannot explain how people infer (2) from (1).

Cognitive scientists disagree on the mechanisms and representations that underlie causal reasoning (Ahn & Bailenson, 1996; Cheng, 1997; Sloman, 2005; White, 2014; Wolff & Barbey, 2015).
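To make concrete the kind of conditional-probability computation that causal Bayes nets support, the sketch below encodes the farm description as a four-node chain and computes P(early harvest | flourishing weeds) by brute-force enumeration. The graph structure follows (1), but all numeric probabilities are invented purely for illustration; this is not Pearl's full calculus, only the elementary inference it licenses.

```python
from itertools import product

# Toy causal Bayes net for the farm example in (1). All probabilities
# below are assumptions chosen for illustration.
#   W = flourishing weeds  ->  N = lack of nutrients
#   N -> V = vegetables grow    (N makes V unlikely: prevention)
#   V -> H = early harvest      (lack of V makes H likely)
p_w = 0.5                               # prior on flourishing weeds
p_n_given_w = {True: 0.9, False: 0.1}   # P(N | W)
p_v_given_n = {True: 0.1, False: 0.9}   # P(V | N)
p_h_given_v = {True: 0.2, False: 0.8}   # P(H | V)

def joint(w, n, v, h):
    """Probability of one full assignment, factored along the graph."""
    p = p_w if w else 1 - p_w
    p *= p_n_given_w[w] if n else 1 - p_n_given_w[w]
    p *= p_v_given_n[n] if v else 1 - p_v_given_n[n]
    p *= p_h_given_v[v] if h else 1 - p_h_given_v[v]
    return p

def prob(query, evidence):
    """P(query | evidence) by enumerating the joint distribution."""
    num = den = 0.0
    for w, n, v, h in product([True, False], repeat=4):
        world = {'W': w, 'N': n, 'V': v, 'H': h}
        if all(world[k] == val for k, val in evidence.items()):
            p = joint(w, n, v, h)
            den += p
            if all(world[k] == val for k, val in query.items()):
                num += p
    return num / den

print(prob({'H': True}, {'W': True}))   # harvest given weeds
print(prob({'H': True}, {'W': False}))  # harvest given no weeds
```

Under these toy numbers, flourishing weeds raise the probability of an early harvest, consistent with the inference in (2). But note what the network does not deliver: a verbal conclusion such as "weeds will cause an early harvest" as opposed to "weeds will enable an early harvest."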
Mental simulation is central to many psychological accounts of the process: theorists agree that people construct small-scale simulations to predict outcomes (Kahneman & Tversky, 1982), to understand mechanistic relations (Hegarty, 2004), to comprehend physical scenes (Battaglia, Hamrick, & Tenenbaum, 2013), to resolve inconsistent and contradictory information (Khemlani & Johnson-Laird, 2011), to deduce the consequences of sequences of events (Khemlani, Mackiewicz, Bucciarelli, & Johnson-Laird, 2013), and to make counterfactual inferences (Byrne, 2005; Galinsky & Moskowitz, 2000).

Recent approaches to modeling causal reasoning in AI and cognitive science face two overarching challenges: first, people distinguish between causal relations such as cause, enable, and prevent. They understand, for instance, that (4a) and (4b) mean different things:

4a. A lack of vegetables will cause an early harvest.
 b. A lack of vegetables will enable an early harvest.

Graphical networks have difficulty capturing the difference between the two relations. Various psychological theories have invoked the transmission of causal forces (Wolff, 2007), causal model structures (Sloman et al., 2009), and mental simulations of possibilities (Goldvarg & Johnson-Laird, 2001) to explain what different causal relations mean (for a review, see Khemlani, Barbey, & Johnson-Laird, 2014). But there exists no robust computational model that predicts what causal relations people generate from descriptions such as (1) above.
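The model theory's way of distinguishing cause, enable, and prevent can be sketched in a few lines: each relation corresponds to a distinct set of possibilities over the antecedent A and the outcome B (Goldvarg & Johnson-Laird, 2001). The composition function below, which conjoins two relations' possibilities and projects out the shared term, is our own illustrative reconstruction of how such simulations could yield a new relation, not the published algorithm.

```python
# A minimal sketch of the model theory of causation: each causal verb
# denotes a set of possibilities, i.e., pairs of truth values for the
# antecedent A and the outcome B. Each relation rules out exactly one
# of the four conjunctions.
RELATIONS = {
    'causes':   {(True, True), (False, True), (False, False)},   # A & not-B impossible
    'enables':  {(True, True), (True, False), (False, False)},   # not-A & B impossible
    'prevents': {(True, False), (False, True), (False, False)},  # A & B impossible
}

def compose(r1, r2):
    """Given r1(A, B) and r2(B, C), infer a relation between A and C by
    conjoining possibilities on the shared term B and projecting it out.
    An illustrative reconstruction, not the authors' algorithm."""
    ac = {(a, c)
          for a, b1 in RELATIONS[r1]
          for b2, c in RELATIONS[r2]
          if b1 == b2}
    matches = [name for name, poss in RELATIONS.items() if poss == ac]
    return matches[0] if matches else None

# Habituation causes seriation; seriation prevents methylation.
print(compose('causes', 'prevents'))   # -> prevents, as most reasoners infer
```

On this sketch, the possibilities compatible with "A causes B and B prevents C" project down to exactly the possibilities for "A prevents C," mirroring the inference that most reasoners draw in the habituation example from the abstract.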