The Role of Explanation in Very Simple Tasks

Eric G. Taylor (etaylor4@illinois.edu)
Department of Psychology, 603 East Daniel St., Champaign, IL 61820 USA

David H. Landy (dlandy@illinois.edu)
Department of Psychology, 603 East Daniel St., Champaign, IL 61820 USA

Brian H. Ross (bhross@illinois.edu)
Department of Psychology, 603 East Daniel St., Champaign, IL 61820 USA

Abstract

Much research on explanation has focused on the ability of explanations to draw upon relevant knowledge to aid in understanding some event or observation. However, explanations may also structure our understanding of events and related tasks more generally, even when they add no relevant information. In three experiments, we show that explanations affect performance in simple, binary decision tasks where they could not possibly add relevant information. Whereas people with no explanation for differences in event probabilities tended to "probability-match," people with an explanation tended to "over-match" (behave more normatively). The results suggest that explanations play a role in structuring our understanding of events, in addition to adding relevant information.

Keywords: explanation, probability matching, decision-making, understanding

Explanations support much intelligent behavior. We explain trends in the stock market in hopes of avoiding future economic woes, explain car failures to diagnose a problem, and even explain why works of art give us a chill just to enhance our appreciation (Keil, 2006). In recent years, cognitive scientists have begun to examine the importance of explanation (Lombrozo, 2006; Keil & Wilson, 2000), but despite agreement that explanations serve many goals, the empirical literature has focused on a limited set of tasks and functions. The purpose of this paper is to present a novel (and perhaps unintuitive) case where having an explanation changes performance, in order to suggest a broader utility of explanation than the literature currently recognizes.
Most work on explanation has examined cases where the explanation provides additional relevant information to help one understand the connection between an observation and other knowledge. For example, category learners often explain the correlations between an exemplar's properties to better understand the category structure (e.g., a bird nests in trees because it has wings), and this affects their applications of the category (e.g., Murphy & Wisniewski, 1989). Explanations also improve our understanding of social events, where we often call upon prior social experiences to make sense of others' behavior (Jones & Nisbett, 1972). Laboratory studies of how explanations draw upon relevant knowledge relate directly to cases in the real world, where, for example, explaining the cause of a social problem (e.g., homelessness, global warming) by incorporating knowledge of social structures affects how we might try to solve that problem.

A major goal of our research program on the role of explanation in cognition is to identify and explore the many ways that explanations can influence behavior. Although we are very interested in how explanations invoke relevant knowledge to help us understand events (Hummel, Landy, & Devnich, 2008; Hummel & Ross, 2006; Taylor, Landy, Ross, & Hummel, 2008), in this paper we investigate a different aspect of how explanations may influence performance. We consider whether explanations sometimes affect performance in very simple tasks without adding relevant information. Our novel theoretical claim is that explanations can affect performance without adding task-relevant information by providing general ways to organize an understanding of a situation or event. We evaluated this idea by examining how explanations affected behavior on a relatively low-level task, in which additional causal information is of no use.
In our view, the explanation served as a task frame, leading participants who received it to structure their understanding of the task differently from those without an explanation. We chose a binary prediction task, in which participants predict, over many trials, which of two outcomes will occur on the next trial. On these tasks, people tend to "probability match": they predict each outcome on roughly the proportion of trials on which that outcome actually occurs (for a review, see Vulkan, 2000). This behavior is non-normative, since predicting the most likely event on every trial maximizes correct predictions. We added explanations to this paradigm in the following way: Participants in the No Explanation condition were told they would be predicting which of two events would occur on the next trial, from trial to trial, and that one event was more likely than the other. Participants in the Explanation
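The gap between matching and maximizing can be made concrete with a little arithmetic: if the more likely event occurs with probability p and a predictor chooses it with probability s, expected accuracy is s·p + (1 − s)·(1 − p), so matching (s = p) yields p² + (1 − p)² while maximizing (s = 1) yields p. The sketch below illustrates this with an assumed 70/30 outcome split; the specific probabilities used in the experiments are not stated in this excerpt, so the numbers are purely illustrative.

```python
import random

def expected_accuracy(predict_p, event_p):
    """Expected proportion correct when the agent predicts the more
    likely event with probability predict_p, and that event actually
    occurs with probability event_p."""
    return predict_p * event_p + (1 - predict_p) * (1 - event_p)

def simulate(predict_p, event_p, n_trials=100_000, seed=1):
    """Monte Carlo check of the same quantity over many binary trials."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        prediction = rng.random() < predict_p   # agent's guess
        outcome = rng.random() < event_p        # actual event
        correct += prediction == outcome
    return correct / n_trials

p = 0.7  # illustrative probability of the more likely event
matching = expected_accuracy(p, p)      # probability matching: ~0.58
maximizing = expected_accuracy(1.0, p)  # always pick the likely event: 0.70
print(matching, maximizing, simulate(p, p))
```

The simulation agrees with the closed-form values, showing why always predicting the more likely event ("over-matching" toward s = 1) is the normative strategy the paper contrasts with matching.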