The Very Idea of Computer Self-Knowledge and Self-Deception

SANFORD C. GOLDBERG
Department of Philosophy, Grinnell College, Grinnell, IA 50112, USA

Abstract. Do computers have beliefs? I argue that anyone who answers in the affirmative holds a view that is incompatible with what I shall call the commonsense approach to the propositional attitudes. I make two claims. First, the commonsense view places important constraints on what can be acknowledged as a case of having a belief. Second, computers – at least those for which having a belief would be conceived as having a sentence in a belief box – fail to satisfy some of these constraints. This second claim can best be brought out in the context of an examination of the idea of computer self-knowledge and self-deception, but the conclusion is perfectly general: the idea that computers are believers, like the idea that computers could have self-knowledge or be self-deceived, is incompatible with the commonsense view. The significance of the argument lies in the choice it forces on us: whether to revise our notion of belief so as to accommodate the claim that computers are believers, or to give up on that claim so as to preserve our pretheoretic notion of the attitudes. We cannot have it both ways.

Key words: computer, intentionality, belief, self-knowledge, self-deception

1. The Commonsense View of the Mind

Let me begin with the presuppositions of what I am calling the "commonsense view of the mind". 1 I will focus on the views of Davidson as expressing such a view, since his is the most widely known of these views, though the differences between various versions need not concern us here. 2 To begin, such a view approaches matters from the assumption that the mind is the locus of propositional attitudes, and it sees the attribution of these attitudes as constrained in various ways.
The result is a conception of intentional states which imposes important restrictions on what counts as a case of having a belief (having a desire, being self-deceived, and so on).

A general constraint on belief-attribution derives from what, on the commonsense view, is perhaps the central point of belief-attribution: to explain (and so to understand) the behavior of other agents. 3 This generates what I will call the explanation constraint: nothing is to be considered an agent with beliefs 4 unless attributing beliefs is required for understanding some portion of the agent's behavioral repertoire. Minimally, this means that in cases where this repertoire would be at least as explicable (in some intuitive sense) without appeal to beliefs as it is by appealing

(e-mail: goldberg@ac.grin.edu)

Minds and Machines 7: 515–529, 1997.
© 1997 Kluwer Academic Publishers. Printed in the Netherlands.