Arguing with Confidential Information

Nir Oren, Timothy J. Norman and Alun Preece 1

1 Department of Computing Science, University of Aberdeen, AB24 3UE, Scotland, email: noren,tnorman,apreece@csd.abdn.ac.uk

Abstract. While researchers have looked at many aspects of argumentation, an area often neglected is that of argumentation strategies: given multiple possible arguments that an agent can put forth, which should be selected, and in what circumstances? In this paper, we propose a heuristic that implements one such strategy. The heuristic assigns a utility cost to revealing information, as well as a utility to winning, drawing and losing an argument. An agent participating in a dialogue then attempts to maximise its utility. We present a formal argumentation framework in which this heuristic may operate, and show how it functions within the framework. Finally, we discuss how this heuristic may be extended in future work, and its relevance to argumentation theory in general.

1 Introduction

Argumentation has emerged as a powerful reasoning mechanism in many domains. One common dialogue goal is persuasion, where one or more participants attempt to convince the others of their point of view. This type of dialogue can be found in many areas, including distributed planning and conflict resolution, education, and models of legal argument. As the breadth of applications of argumentation has expanded, so has the sophistication of formal models designed to capture the characteristics of the domain. Prakken [11], for example, has focused on legal argumentation, and has identified four layers with which such an argumentation framework must concern itself:

The logical layer allows one to represent basic concepts such as facts about the world. Most commonly, this layer consists of some form of non-monotonic logic.

The dialectic layer represents argument-specific concepts such as the ability of one argument to defeat another.

The procedural layer governs the way in which argument takes place. Commonly, a dialogue game [17] is used to allow agents to interact with each other.

The heuristic layer contains the remaining parts of the system. Depending on the form of the underlying layers, these may include methods for deciding which arguments to put forth and techniques for adjudicating arguments.

While many researchers have focused on the lowest two layers (excellent surveys can be found in [3, 11, 12]), and investigation into various aspects of the procedural layer is ongoing (for example, [16, 6]), many open questions remain at the heuristic level.

In this paper, we propose a decision heuristic that allows an agent to decide which argument to put forth. The basis for our idea is simple: the agent treats some parts of its knowledge as more confidential than others and, while attempting to win the argument, tries to reveal as little of this secret information as possible. This concept often appears in adversarial argumentation under the guise of not mentioning information that might help an opponent's case, but it also arises in many other settings. For example, revealing trade secrets, even to win an argument, may damage an agent in the long term. The heuristic often emerges in negotiation dialogues, as well as in persuasion dialogues in hostile settings (such as takeover talks or some legal cases). Utilising this heuristic in arguments between computer agents can also be useful; revealing confidential information in an ongoing dialogue may damage an agent's chances of winning a future argument.

In the next section, we examine a few existing approaches to strategy selection, after which we discuss the theoretical foundations of our approach. We then present the heuristic, after which we see how it operates by means of an example.
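The heuristic introduced above can be illustrated with a rough sketch: each candidate utterance carries a confidentiality cost for the information it would reveal, and the agent weighs that cost against the utility of the dialogue outcome it expects the utterance to produce. All names, the outcome predictor, and the numeric utilities below are illustrative assumptions, not the paper's formal framework.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    argument: str        # the move the agent could make (illustrative)
    reveal_cost: float   # summed cost of the confidential facts it exposes
    outcome: str         # predicted dialogue result: "win", "draw" or "lose"

# Assumed utilities for winning, drawing and losing the argument.
OUTCOME_UTILITY = {"win": 10.0, "draw": 2.0, "lose": -5.0}

def best_move(candidates):
    """Pick the utterance maximising outcome utility minus revelation cost."""
    return max(candidates, key=lambda c: OUTCOME_UTILITY[c.outcome] - c.reveal_cost)

moves = [
    Candidate("cite trade secret", reveal_cost=9.0, outcome="win"),
    Candidate("cite public fact", reveal_cost=0.0, outcome="draw"),
]
print(best_move(moves).argument)  # cite public fact
```

Note how the cost of disclosure can outweigh the value of winning: drawing with public information (utility 2.0) beats winning by revealing a trade secret (10.0 - 9.0 = 1.0), which is exactly the trade-off the heuristic is meant to capture.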
We conclude the paper by looking at possible directions in which this work can be extended.

2 Background and related research

Argumentation researchers have long recognised the need for argument selection strategies; however, the area has only recently started receiving more attention. Moore, in his work with the DC dialectical system [7], suggested that an agent's argumentation strategy should take three things into account:

Maintaining the focus of the dispute.

Building its point of view or attacking the opponent's one.

Selecting an argument that fulfils the previous two objectives.

The first two items correspond to the military concept of a strategy, i.e. a high-level direction and goals for the argumentation process. The third item corresponds to an agent's tactics, which allow it to select a concrete action fulfilling its higher-level goals. While Moore's work focused on natural language argument, these requirements formed the basis of most subsequent research into agent argumentation strategies.

In 2002, Amgoud and Maudet [1] proposed a computational system that captures some of the heuristics for argumentation suggested by Moore. Their system requires very little from the argumentation framework: a preference ordering is needed over all possible arguments, and a level of prudence is assigned to each agent. An argument is assigned a strength based on how convoluted a chain of arguments is required to defend it. An agent can then follow a "build" or "destroy" strategy. When using the build strategy, an agent asserts arguments with a strength below its prudence level. If it cannot build, it switches to a destroy strategy, in which it attacks an opponent's arguments when it can. While the authors note that other strategies are reasonable, they make no mention of them.
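The build/destroy selection loop just described can be sketched roughly as follows. The strength measure, prudence threshold, and attack relation here are placeholder assumptions for illustration; Amgoud and Maudet define these formally over an argumentation framework.

```python
def choose_move(own_args, opponent_args, strength, attacks, prudence):
    """Return a ('build', arg) or ('destroy', arg) move, or None.

    strength: dict mapping each argument to its strength (placeholder measure)
    attacks:  set of (attacker, target) pairs (placeholder attack relation)
    """
    # Build: assert an own argument whose strength is within the prudence level.
    buildable = [a for a in own_args if strength[a] <= prudence]
    if buildable:
        return ("build", max(buildable, key=strength.get))
    # Destroy: otherwise attack an opponent argument, if any attack is available.
    for target in opponent_args:
        for a in own_args:
            if (a, target) in attacks:
                return ("destroy", a)
    return None

print(choose_move(["a1", "a2"], ["b1"],
                  {"a1": 1, "a2": 3}, {("a1", "b1")}, prudence=2))
# ('build', 'a1')
```

With prudence 2 the agent builds with a1 (strength 1); raise a2's prominence by dropping a1 and the agent falls back to attacking b1, mirroring the switch from build to destroy described above.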
Shortcomings of their approach include its basis in classical propositional logic and the assumption of unbounded rationality; computational limits may affect