A Reactive Approach to Explanation

Johanna D. Moore
University of California, Los Angeles
and USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292-6695

William R. Swartout
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292-6695

Abstract

Explanation is an interactive process, requiring a dialogue between advice-giver and advice-seeker. Yet current expert systems cannot participate in a dialogue with users. In particular, these systems cannot clarify misunderstood explanations, elaborate on previous explanations, or respond to follow-up questions in the context of the on-going dialogue. In this paper, we describe a reactive approach to explanation - one that can participate in an on-going dialogue and employs feedback from the user to guide subsequent explanations. Our system plans explanations from a rich set of explanation strategies, recording the system's discourse goals, the plans used to achieve them, and any assumptions made while planning a response. This record provides the dialogue context the system needs to respond appropriately to the user's feedback. We illustrate our approach with examples of disambiguating a follow-up question and producing a clarifying elaboration in response to a misunderstood explanation.

1 Introduction

Explanation requires a dialogue. Users need to be able to ask follow-up questions if they do not understand an explanation or want further elaboration. Answers to such questions must take into account the dialogue context. Studies of advisory consultations between humans bear out this observation, showing that explanation is an interactive process between explainer and advice-seeker [Pollack et al., 1982]. Studying student-teacher interactions, we found that advice-seekers frequently did not fully understand the instructor's response. They often asked follow-up questions requesting clarification, elaboration, or re-explanation.
In some cases, follow-up questions took the form of a well-articulated query; in other cases, the follow-up was a vaguely articulated mumble or sentence fragment. Often the instructor did not have much to go on, but still had to provide an appropriate response.

Unfortunately, current expert systems cannot participate in a dialogue with users. In particular, these systems cannot clarify misunderstood explanations, elaborate on previous explanations, or respond to follow-up questions in the context of the on-going dialogue. In part, the explanation components of current expert systems are limited because they are quite simple. However, even the more sophisticated generation techniques employed in computational linguistics are inadequate for responding to follow-up questions. The problem is that both expert system explanation and natural language generation systems view generating responses as a one-shot process. That is, a system is assumed to have one opportunity to produce a response that the user will find satisfactory.

This one-shot approach is clearly inconsistent with analyses of naturally occurring advisory dialogues. Moreover, if a system has only one opportunity to produce a text that achieves the speaker's goals without over- or under-informing, boring, or confusing the listener, then that system must have an enormous amount of detailed knowledge about the listener. This has led to the view that improvements in explanation will come from improvements in the user model, and considerable effort has been expended in representing a detailed model of the user - including the user's goals, what the user knows about the domain, how information should be presented to that user, and so forth [Appelt, 1981; McCoy, 1985; Paris, 1988; Kass and Finin, 1989].

The research described in this paper was supported by the Defense Advanced Research Projects Agency (DARPA) under NASA Ames cooperative agreement number NCC 2-520.
However, following Sparck Jones [Sparck Jones, 1984], we question whether it will be possible to build complete and correct user models. Further, by focusing on user models, researchers have ignored the rich source of guidance that people use in producing explanations, namely feedback from the listener [Ringle and Bruce, 1981]. By throwing out the one-shot assumption, we can make use of that guidance.

Thus, a reactive approach to explanation is required - one in which feedback from the user is an integral part of the explanation process. A reactive explanation facility should include the ability to: 1) accept feedback from the listener, 2) recover if the listener indicates he is not satisfied with the response, 3) answer follow-up questions taking into account previous explanations, not as independent questions, 4) offer further explanations

1504 Speech and Natural Language
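To make the abilities listed above concrete, here is a minimal, hypothetical sketch in Python (not the authors' implementation, which plans from a richer set of explanation strategies): every class and method name is illustrative only. The essential idea it captures is the recorded history of discourse goals, the plans used to achieve them, and planning assumptions, which later turns consult instead of treating each question as independent.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueEntry:
    """One turn in the dialogue record: the discourse goal the system
    pursued, the explanation strategy (plan) chosen to achieve it, and
    any assumptions made about the user while planning."""
    goal: str
    strategy: str
    assumptions: list = field(default_factory=list)

class ReactiveExplainer:
    """Toy reactive explainer: keeps a record of past explanations so
    feedback and follow-up questions are handled in context."""

    def __init__(self):
        self.history: list[DialogueEntry] = []

    def explain(self, goal, strategy, assumptions=()):
        # Record the goal, the plan used to achieve it, and the
        # assumptions made; this record is the dialogue context
        # consulted by later turns.
        self.history.append(DialogueEntry(goal, strategy, list(assumptions)))
        return f"[{strategy}] explanation for goal: {goal}"

    def handle_feedback(self, satisfied, alternative_strategy=None):
        # Abilities 1 and 2: accept feedback and recover when the
        # listener is not satisfied, by re-planning the most recent
        # goal with a different strategy.
        if satisfied or not self.history:
            return None
        last = self.history[-1]
        retry = alternative_strategy or "elaborate"
        return self.explain(last.goal, retry, last.assumptions)

    def follow_up(self, question):
        # Ability 3: interpret a follow-up question against the
        # recorded context rather than as an independent query.
        context = self.history[-1].goal if self.history else None
        return self.explain(f"{question} (in context of: {context})",
                            "clarify")
```

A caller would invoke `explain` for an initial answer, then `handle_feedback(satisfied=False)` to trigger a re-planned elaboration of the same goal, or `follow_up(...)` to answer a new question against the recorded context.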