Advances in Cognitive Systems 3 (2014) 107–122    Submitted 9/2013; published 7/2014

Reasoning About Belief Revision to Change Minds:
A Challenge for Cognitive Systems

Will Bridewell    WILL.BRIDEWELL@NRL.NAVY.MIL
Naval Research Laboratory, 4555 Overlook Avenue SW, Washington, DC 20375 USA

Paul Bello    PAUL.BELLO@NAVY.MIL
Office of Naval Research, 875 N. Randolph Street, Arlington, VA 22203 USA

Abstract

In this paper, we explore the representational and inferential requirements for supporting a rich notion of belief revision. Our analysis extends beyond the typical case of a single agent revising its beliefs in light of new information into the realm of social engagement. More to the point, we argue that, although belief revision mechanisms surely operate at the level of single agents, we must also consider the need to lift an agent's understanding of the belief revision process to the knowledge level so that it can intentionally guide the revision processes of the other agents with whom it socially interacts. In exploring belief revision at the knowledge level, we identify reasons for rejecting classical formulations of the problem and identify constraints by which alternative accounts must abide.

1. Introduction

Belief revision is a common result of human dialog and the reason for many conversations. People chat about the world, learn new facts, and give up outdated or incorrect beliefs. Sometimes this process happens without obvious results, but other times we see it played out in arguments and discussions. Clearly, intelligent agents must be able to update their own beliefs and knowledge. Traditionally, artificial intelligence (AI) has taken a limited view of belief revision, seeing it as an automatic, formalized means for truth maintenance. Here, we claim that, as researchers move toward modeling socially aware cognitive systems, they must also change their view of belief revision: its purpose, its operation, and its flexibility.

Historically, AI inherited its view of belief revision from logicians, who strongly emphasize the importance of consistency (Alchourrón, Gärdenfors, & Makinson, 1985). In this work, beliefs are typically treated as elements of a theory about the world that may be expanded, contracted, and, when consistency is threatened by a new belief, revised. Carrying out revision involves removing from the theory the set of elements that imply a conflict with the new belief. Since there may be several such sets, the standard approach appeals to epistemic entrenchment (Gärdenfors & Makinson, 1988), which orders axioms based on logical entailment. Loosely stated, the general principle is to be conservative, removing as few beliefs as possible to maintain consistency; a toy illustration of this policy appears in the sketch below. The prominence given to not only consistency but also automaticity is shared by many systems in AI, including varieties of truth maintenance systems (de Kleer, 1986; McAllester, 1990).
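To make the conservatism and entrenchment ideas concrete, the following Python sketch revises a toy set of propositional beliefs: when more than one removal could restore consistency, the least entrenched belief is given up. The representation here (literals, (premises, conclusion) rules, numeric entrenchment ranks, and the revise function) is our illustrative assumption for this example, not the formulation of Alchourrón et al. (1985) or the mechanism of any particular truth maintenance system.

```python
# A minimal sketch of conservative, entrenchment-guided belief revision.
# The names and data structures below are illustrative assumptions,
# not the paper's mechanism or a faithful rendering of AGM revision.

def negate(lit):
    """Negation of a propositional literal: 'p' <-> '~p'."""
    return lit[1:] if lit.startswith("~") else "~" + lit


def revise(beliefs, rules, entrenchment, new_belief):
    """Accept new_belief and restore consistency as conservatively as possible.

    beliefs      -- set of literals the agent currently holds
    rules        -- list of (premises, conclusion) pairs; premises is a frozenset
    entrenchment -- dict mapping literal -> rank (higher = harder to give up)
    """
    revised = set(beliefs)
    revised.discard(negate(new_belief))      # drop any direct contradiction
    revised.add(new_belief)
    # If a rule's premises are all still held but its conclusion is now denied,
    # several removal sets would restore consistency (dropping any one premise).
    # Entrenchment picks among them: sacrifice the least entrenched premise.
    for premises, conclusion in rules:
        if premises <= revised and negate(conclusion) in revised:
            weakest = min(premises, key=lambda lit: entrenchment.get(lit, 0))
            revised.discard(weakest)
    return revised


if __name__ == "__main__":
    beliefs = {"bird(tweety)", "healthy(tweety)", "flies(tweety)"}
    rules = [(frozenset({"bird(tweety)", "healthy(tweety)"}), "flies(tweety)")]
    entrenchment = {"bird(tweety)": 3, "healthy(tweety)": 1, "flies(tweety)": 2}
    # Learning that Tweety does not fly removes 'flies(tweety)' outright, and of
    # the two premises that would re-derive it, only the less entrenched
    # 'healthy(tweety)' is dropped; 'bird(tweety)' survives.
    print(revise(beliefs, rules, entrenchment, "~flies(tweety)"))
    # -> {'bird(tweety)', '~flies(tweety)'}
```

The point of the sketch is only to show the policy the classical account commits to: revision is triggered automatically by the incoming belief, and the choice among competing removal sets is settled by a fixed ordering rather than by any reasoning about why the belief arrived or who supplied it.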