A Cognitive Model of Argumentation

Kevin B. Korb, Richard McConachy and Ingrid Zukerman
Department of Computer Science
Monash University
Clayton, Victoria 3168, AUSTRALIA
email: {korb, ricky, ingrid}@cs.monash.edu.au

Abstract

In order to argue effectively one must have a grasp of both the normative strength of the inferences that come into play and the effect that the proposed inferences will have on the audience. In this paper we describe a program, NAG (Nice Argument Generator), that attempts to generate arguments that are both persuasive and correct. To do so NAG incorporates two models: a normative model, for judging the normative correctness of an argument, and a user model, for judging the persuasive effect of the same argument upon the user. The user model incorporates some of the common errors humans make when reasoning. In order to limit the scope of its reasoning during argument evaluation and generation, NAG explicitly simulates attentional processes in both the user and the normative models.

Introduction

In order to argue well one must have a grasp of both the normative strength of the inferences that come into play and the effect that the proposed inferences will have on the audience. Our program NAG (Nice Argument Generator) is intended to argue well — that is, to present arguments that are persuasive for an intended audience and also are as close to normatively correct as such persuasiveness allows. In order to develop such a system we have had to incorporate two models within NAG: a normative model, for judging the normative correctness of an argument, and a user model, for judging the persuasive effect of the same argument upon the user. The user model should ideally reflect all of the human cognitive heuristics and weaknesses that cognitive psychologists may establish as widespread, such as the failure to use base-rate information in inductive reasoning (Tversky and Kahneman, 1982a) and overconfidence (Lichtenstein et al., 1982).
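The base-rate neglect just cited can be made concrete with a small numerical sketch (our illustration, not the paper's): a normative Bayesian update contrasted with a user-model update that ignores the prior. The function names and the choice of treating a neglected prior as 0.5 are assumptions for illustration only.

```python
def posterior(prior, likelihood, false_positive_rate):
    """Normative Bayesian update: P(hypothesis | positive evidence)."""
    joint_true = prior * likelihood
    joint_false = (1 - prior) * false_positive_rate
    return joint_true / (joint_true + joint_false)

def neglected_posterior(prior, likelihood, false_positive_rate):
    """User-model update that ignores the base rate: the prior is
    effectively replaced by an uninformative 0.5 (a modeling assumption)."""
    return posterior(0.5, likelihood, false_positive_rate)

# A rare condition (1% base rate) with a fairly reliable test:
normative = posterior(0.01, 0.9, 0.1)            # about 0.083
perceived = neglected_posterior(0.01, 0.9, 0.1)  # 0.9
```

A user model embodying this bias would predict far more belief change from such evidence than the normative model licenses, which is exactly the kind of divergence the two-model design is meant to track.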
The normative model should ideally incorporate as many items of knowledge as we can muster, together with the best evaluative tools for judging their relationships. Neither the user being modeled by the system nor NAG itself is an unlimited cognitive agent, of course. In order to limit the scope of what might be drawn into consideration during argument evaluation and generation, we explicitly simulate attentional processes in both the user and the normative models.

In this paper we first sketch the overall architecture of NAG. We then describe the design features specific to implementing the psychological mechanisms mentioned, some possible directions for extending them, and the effects of such psychological modeling on NAG's argumentation.

An Overview of NAG

NAG is designed to analyze arguments and to compose its own arguments, intended to be persuasive for particular interlocutors. Given a user model, a context and a goal proposition, NAG produces an argument supporting the goal which, according to its user model, will be effective in bringing the user to a degree of belief in the goal proposition within a target range. When presented with an argument by the user, NAG responds either by agreeing or by presenting an effective counterargument. The system is composed of the following modules: Argument Generator, Abduction Engine, Argument Analyzer, and Argument Strategist (Figure 1).[1]

The Argument Strategist governs the argumentation process. In the first instance it receives either a goal proposition or a user argument. Given a goal proposition, it invokes the Generator to initiate the construction of an argument. The Generator uses the argumentative context and the goal to construct an Argument Skeleton. The Argument Skeleton forms the initial basis for the system's argument, which is represented as a Bayesian network we call an Argument Graph.
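The Argument Graph described above — propositions linked by inferences, represented as a Bayesian network — might be sketched minimally as follows. The class and field names are hypothetical; the paper does not specify data structures, and a full implementation would carry conditional probability tables rather than scalar link strengths.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A proposition in the Argument Graph, with a degree of belief in [0, 1]."""
    proposition: str
    belief: float = 0.5

@dataclass
class ArgumentGraph:
    """Directed graph of propositions; links carry inference strengths
    (a simplification of a Bayesian network's conditional probabilities)."""
    nodes: dict = field(default_factory=dict)   # name -> Node
    links: dict = field(default_factory=dict)   # (premise, conclusion) -> strength

    def add_node(self, name, proposition, belief=0.5):
        self.nodes[name] = Node(proposition, belief)

    def add_link(self, premise, conclusion, strength):
        self.links[(premise, conclusion)] = strength

    def premises_of(self, conclusion):
        """All premises directly supporting a conclusion."""
        return [p for (p, c) in self.links if c == conclusion]
```

Keeping separate belief values for the user and normative submodels over the same graph structure is what lets the Analyzer evaluate one argument against both models at once.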
The Strategist passes this initial argument to the Analyzer, which tests the effect of the argument on the goal proposition in both the user and the normative models, using Bayesian network propagation in the submodels corresponding to the Argument Graph (Pearl, 1988; Neapolitan, 1990), while taking into account the psychological mechanisms described below in The Psychology of Inference. In this way the Analyzer may discover that some of the premises employed are insufficiently supported in either the user or the normative model, or that an inference employed in the argument is weak. The Strategist uses the evaluation returned by the Analyzer to determine whether, and how, to strengthen the argument, for example by providing a weak premise in the argument to the Generator as a new goal, so that a supporting subargument may be built. The iterative process of invoking the Generator and the Analyzer continues until either an Argument Graph is generated which brings the original goal proposition into the target range for strength of belief, the Strategist is unable to fix a problem reported by the Analyzer, some operating constraint is violated which cannot be overcome (e.g., the overall complexity of the argument cannot be reduced to an acceptable level), or time runs out. Finally, if a suitable argument has been produced, the Strategist reports it to the user.

[1] For a more detailed description of NAG's architecture, see Zukerman, Korb and McConachy (1996) or McConachy, Zukerman and Korb (1996).
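The Strategist's generate-analyze cycle can be sketched as a simple control loop. This is our reconstruction of the process described above, not NAG's actual code: the function parameters (generate, analyze, strengthen) stand in for the Generator, Analyzer, and the Strategist's repair step, and a round budget stands in for the time limit and other operating constraints.

```python
def argue(goal, context, target_range, generate, analyze, strengthen,
          max_rounds=10):
    """Iterate the Generator and Analyzer until the goal belief falls in
    the target range, the argument cannot be repaired, or the round
    budget (a proxy for NAG's time limit) runs out."""
    lo, hi = target_range
    argument = generate(goal, context)           # initial Argument Skeleton
    for _ in range(max_rounds):
        report = analyze(argument)               # belief in goal + weak points
        if lo <= report["goal_belief"] <= hi:
            return argument                      # suitable argument found
        argument = strengthen(argument, report)  # e.g. support a weak premise
        if argument is None:                     # problem could not be fixed
            break
    return None                                  # no suitable argument produced
```

The key design point the loop exposes is that analysis, not generation, terminates the process: an argument is only reported once the model-predicted belief in the goal lands inside the target range.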