An Integrated Environment for Knowledge Acquisition

Jim Blythe (blythe@isi.edu), Jihie Kim (jihie@isi.edu), Surya Ramachandran (surya@isi.edu), Yolanda Gil (gil@isi.edu)
Information Sciences Institute, University of Southern California, Marina del Rey, CA 90292

Best Paper, International Conference on Intelligent User Interfaces, 2001

ABSTRACT

This paper describes an integrated acquisition interface that includes several techniques previously developed to support users in various ways as they add new knowledge to an intelligent system. As a result of this integration, the individual techniques can take better advantage of the context in which they are invoked and provide stronger guidance to users. We describe the current implementation using examples from a travel planning domain, and demonstrate how users can add complex knowledge to the system.

1. INTRODUCTION

An important area of user interface research is the development of practical approaches that enable users to add new knowledge to an intelligent system, which would bring computers closer to meeting the challenge of end-user programming. These acquisition interfaces need many intelligent capabilities in order to support the complex dialogues they must conduct with the user, integrate the new knowledge with existing knowledge, and make appropriate generalizations. In past research, we developed several acquisition interfaces [10, 2, 18], all using EXPECT as an underlying framework for knowledge representation and reasoning [17]. Each interface addressed different issues and helped users in different ways as they added knowledge to a system, yet none could individually support a user adequately.
This paper presents an integrated acquisition interface that combines these approaches, providing stronger guidance to users. The paper begins with a brief overview of the individual pieces of knowledge acquisition research that our interface integrates. Then we show an example scenario in which a user interacts with the implemented tool. Next, we analyze the different kinds of knowledge that users need to specify and discuss the challenges they pose to a knowledge acquisition tool. We then describe the components of our implemented system in detail, highlighting the benefits of the integrated acquisition environment.

IUI'01, January 14-17, 2001, Santa Fe, New Mexico. Copyright 2001 ACM 1-58113-325-1/01/0001.

2. PREVIOUSLY DEVELOPED TOOLS AND TECHNIQUES

Our past research on knowledge acquisition can be described in terms of typical concerns that users may have about adding new knowledge to an intelligent system:

Users do not know formal languages. We have developed English-based editors that allow users to modify English paraphrases of the internal, more formal representations [3]. Users can only select the portions of the paraphrase that correspond to a valid expression in the internal language, and pick from a menu of suggested possible replacements for that portion. This approach enables the system to communicate with the user in English while circumventing the challenges of full natural language processing.

How do users know where to start? Intelligent systems use knowledge to perform tasks for the user.
If the acquisition tool has a model of those tasks, then it can reason about what kinds of knowledge it needs to acquire from the user. We have developed acquisition tools that reason about general task models and other kinds of pre-existing knowledge (such as domain-specific knowledge that is initially included in the system's knowledge base) in order to guide users to provide the knowledge that is relevant to those tasks [2]. Our work has concentrated on plan evaluation and assessment tasks, but could be used with other task models.

How do users know that they are adding the right things? Users need to know that they are providing knowledge that is useful to the system and whether they have given the system enough knowledge to do something on its own. Our approach is to use Interdependency Models that capture how the individual pieces of knowledge provided work together to reason about the task [10]. These Interdependency Models are derived automatically by the system, and are used to detect inconsistencies and missing knowledge that turn into follow-up questions to the user. Users are often not sure whether they are on the right track even if they have been making progress, and we have found that it is very useful to show the user some aspects of this Interdependency Model (for example, showing the substeps involved in doing a task) and how it changes over time.
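To make the idea concrete, the gap-detection step behind such follow-up questions can be sketched as follows. This is a minimal illustration, not EXPECT's actual implementation: the dictionary representation, the function names, and the travel-planning item names (`evaluate-trip`, `trip-cost`) are all hypothetical, chosen only to show how unresolved dependencies among knowledge items can be turned into questions for the user.

```python
# Hypothetical sketch (not EXPECT's code): each knowledge item lists
# the items it depends on; any dependency that no item defines is a
# gap, and each gap becomes a follow-up question to the user.

def missing_dependencies(knowledge_base):
    """Return item names that are referenced but never defined."""
    defined = set(knowledge_base)
    referenced = {dep for deps in knowledge_base.values() for dep in deps}
    return sorted(referenced - defined)

def follow_up_questions(knowledge_base):
    """Turn each detected gap into a question for the user."""
    return [f"You refer to '{gap}', but have not said how to obtain it. "
            f"Can you describe how to compute '{gap}'?"
            for gap in missing_dependencies(knowledge_base)]

# Toy travel-planning example: evaluating a trip depends on a cost
# estimate that the user has not yet provided.
kb = {
    "evaluate-trip": ["trip-cost", "trip-duration"],
    "trip-duration": [],
}
for question in follow_up_questions(kb):
    print(question)
```

A real Interdependency Model also tracks type mismatches and other inconsistencies among the items, but the same pattern applies: the model is recomputed as knowledge is added, and the remaining gaps drive the dialogue with the user.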