Comput. & Graphics, Vol. 19, No. 6, pp. 873-884, 1995. Copyright © 1995 Elsevier Science Ltd. Printed in Great Britain. All rights reserved. 0097-8493/95 $9.50 + 0.00. 0097-8493(95)00074-7

Technical Section

ADAPTIVITY IN GRAPHICAL USER INTERFACES: AN EXPERIMENTAL FRAMEWORK

L. MIGUEL ENCARNAÇÃO
University of Tübingen, Wilhelm-Schickard-Institute of Computer Science (WSI), Interactive Graphics Systems Lab (GRIS), Auf der Morgenstelle 10, C9, D-72076 Tübingen, Germany
e-mail: miguel@gris.informatik.uni-tuebingen.de
WWW: http://www.gris.informatik.uni-tuebingen.de/

Abstract—Several user and task modeling approaches have evolved during the past years and were applied to certain problem areas, showing different strengths and weaknesses. A qualitative comparison of these approaches and techniques is difficult since the application and experimentation environments vary. On the other hand, the integration of approved user modeling techniques with different application environments is usually difficult, if not impossible. We propose a framework that, in a first step, allows the direct comparison of results of different user and task modeling approaches in graphical user interfaces. The objective is the development of appropriate adaptive help systems for new and existing applications. The system is therefore designed as a client-server architecture to support multi-user operation. The implementation can be easily adapted to different application systems. Applications can be upgraded in a well-defined way, and with a minimal amount of effort, by using the approach and tools presented in this paper. A prototype implementation is presented, consisting of an interaction protocoling and managing kernel, a user evaluation module, and a corresponding adaptive help system applied to sample medical and CAD experimentation environments.
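The client-server organization summarized in the abstract — application clients reporting user interactions to a central protocoling kernel that keeps one interaction protocol per user — can be sketched in highly simplified form as follows. All class and method names here are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of the interaction-protocoling idea: clients report
# user interactions to a central server, which keeps one protocol (log)
# per user so that user-evaluation modules can later read it.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class InteractionEvent:
    """One protocolled user interaction."""
    user: str
    action: str            # e.g. "menu_select", "drag"
    target: str            # widget or object the action applied to
    timestamp: float


class ProtocolServer:
    """Collects interaction events from many clients (multi-user operation)."""

    def __init__(self) -> None:
        self._logs: Dict[str, List[InteractionEvent]] = {}

    def record(self, event: InteractionEvent) -> None:
        self._logs.setdefault(event.user, []).append(event)

    def protocol_for(self, user: str) -> List[InteractionEvent]:
        """Return the protocol a user-evaluation module would analyze."""
        return list(self._logs.get(user, []))


server = ProtocolServer()
server.record(InteractionEvent("alice", "menu_select", "File>Open", 0.0))
server.record(InteractionEvent("bob", "drag", "slice_viewer", 0.1))
server.record(InteractionEvent("alice", "drag", "cad_canvas", 0.2))

print(len(server.protocol_for("alice")))  # 2
```

Because the server only sees generic events, the same kernel can serve the medical and CAD experimentation environments mentioned above without modification.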
1. INTRODUCTION

During the last few years, different approaches have been proposed to model users and the tasks users want to accomplish at the interface to an application. These approaches were applied to all kinds of human-computer interfaces and nowadays focus especially on graphical user interfaces (GUIs). For the development of adaptive or self-adaptive systems and interfaces, professionals from the most distinct fields of research activity, such as human-computer interaction, cybernetics, artificial intelligence, and information sciences, introduced different techniques and systems to evaluate and learn from users' actions: some researchers proposed neural architectures, fuzzy logic, or fuzzy neural systems to model the user's behavior (for more information on these techniques see, e.g., refs [1-4]). A different approach is the use of paradigms from artificial intelligence research that are based on the exploitation of a priori knowledge, such as plan-based reasoning [5], rule-based knowledge representation, and the use of memory-based learning techniques [6].

These and other approaches and techniques proved to be more or less useful in a special application environment to support a certain amount of (self-)adaptivity of the system. But it is well known that the acceptance of an innovation depends strongly on its comparison with the status quo and with other (innovative) approaches addressing the same problems. Yet, the possibilities for qualitative comparisons among the approaches mentioned above, in the same experimentation environment and on several different application platforms, are limited, and only few attempts have focused on this difficulty.

The introduction of graphical user interfaces (GUIs) to almost all areas of computer-human interaction (see, e.g.,
refs [7-9]) brought a new dimension into research on user modeling: the behavior of the user at the interface is no longer restricted to the input data alone, but also comprises the way the user interacts with the system. While the input data is highly application-dependent, the way the data is put into the system is much more user-dependent, and thus extremely valuable for user modeling attempts. For this reason, we focus on graphical direct-manipulative user interfaces when talking about user interaction. The underlying concepts presented in this paper, however, are applicable to all kinds of human-computer interfaces, since the basic data structures are based on the definition of abstract actions and are therefore not restricted to GUI-specific features and functionalities.

We propose a framework that allows for a qualitative comparison among such approaches and that can easily be ported to different application platforms in order to obtain application-independent results. The final objective is the development of appropriate adaptive help systems for new and existing applications, as shown in Fig. 1.
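The notion of abstract actions described above can be illustrated with a minimal sketch: concrete GUI events are mapped onto application-independent abstract actions, so the same user model can be fed from different front ends. All names and the mapping table below are illustrative assumptions, not the paper's actual data structures:

```python
# Sketch (hypothetical names) of interface-independent abstract actions:
# several concrete GUI events map to one abstract action, decoupling the
# user model from GUI-specific features.

from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass(frozen=True)
class AbstractAction:
    """An interface-independent description of what the user did."""
    kind: str        # e.g. "invoke", "select", "navigate"
    operand: str     # the abstract object the action refers to


# Each front end supplies its own mapping from concrete events to
# abstract actions; the user model only ever sees AbstractAction values.
GUI_EVENT_MAP: Dict[Tuple[str, str], AbstractAction] = {
    ("button_press", "open_btn"): AbstractAction("invoke", "open_document"),
    ("menu_select", "File>Open"): AbstractAction("invoke", "open_document"),
    ("key", "Ctrl+O"): AbstractAction("invoke", "open_document"),
}


def abstract(event: str, widget: str) -> AbstractAction:
    """Translate a concrete GUI event into its abstract action."""
    return GUI_EVENT_MAP[(event, widget)]


# Three different concrete interactions collapse to one abstract action:
print(abstract("button_press", "open_btn") == abstract("key", "Ctrl+O"))  # True
```

Under this scheme, *which* abstract actions a user triggers characterizes the task, while *which concrete events* were used to trigger them (menu vs. shortcut vs. button) characterizes the user — the distinction the paragraph above draws between application-dependent input data and user-dependent interaction style.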