Interaction Modes for Augmented Reality Visualization

Hannah Slay, *Matthew Phillips, *Dr Rudi Vernik, Dr Bruce Thomas
Wearable Computer Lab
School of Computer and Information Science
University of South Australia
Mawson Lakes 5095, South Australia

*Defence Science and Technology Organisation
Salisbury 5108, South Australia

Slahy001@students.unisa.edu.au, Matthew.Phillips@dsto.defence.gov.au, Rudi.Vernik@dsto.defence.gov.au, Bruce.Thomas@unisa.edu.au

Abstract

In this paper we describe a novel use of augmented reality for information visualization. We detail the use of augmented reality as a component of InVision, an open framework for the development and deployment of visualization systems. The research discussed in this paper is part of an ongoing project into pervasive computing.

1 Introduction

The current trend towards pervasive computing [1] suggests that future work environments will comprise a range of information display and interaction devices. These will include conventional devices such as notepad computers and PDAs, together with 3D immersive displays, augmented reality approaches, and other personal display devices such as IBM's Bluetooth-enabled Linux watch [2]. Consistent approaches to interaction in these heterogeneous environments will be important if people are to make the best use of the information available. We believe effective utilization of 2D and 3D information is crucial to making this information accessible to people. This paper presents our investigation of extending interactions from a traditional desktop interaction paradigm to a tangible augmented reality paradigm. The visualization system we extended is InVision. InVision (Pattison T.R., Vernik R.J., Goddburn D.P.J. and Phillips M.P., 2001) is a component-based visualization environment developed by DSTO and research partners to investigate a range of issues related to the rapid assembly and deployment of adaptive visualization systems.
Issues being investigated include view management and sharing, novel visualization approaches, view interaction, process-based visualization, and intelligent agent support. InVision has been used as the research platform for the work discussed in this paper, which looks at the integration and use of Augmented Reality (AR) views.

Copyright © 2001, Commonwealth of Australia. This paper appeared at the Australian Symposium on Information Visualisation, Sydney, December 2001. Conferences in Research and Practice in Information Technology, Vol. 9. Peter Eades and Tim Pattison, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.

Augmented reality refers to the process of using a head mounted display to overlay computer-generated imagery onto the real world. This allows users to visualize three-dimensional objects in the real world and interact with them in a natural way. The computer-generated overlays provide additional information to the user to enhance the real world. Augmented reality approaches provide the potential for generating new types of visual forms. For example, the approach discussed in this paper makes use of tangible fiducial marker cards, see-through head mounted displays in the form factor of glasses, wearable computers, and position-orientation tracking to allow the generation of 3D images in space within real contextual surroundings. The potential benefits of these types of views over traditional virtual reality include better human interaction, contextual awareness, and a lower likelihood of simulator sickness. Moreover, these types of views can be generated anywhere, without the need for costly and rigid display and tracking infrastructure. This form of AR technology aligns well with the concept of pervasive computing, which aims to unleash the user from the computer and provide a rich, integrated ambient environment. Selection of objects is fundamental to interaction with them.
Many different selection methods have been suggested, but they can all be divided into two categories: ray casting and arm-extension. Ray casting allows the user to select objects at a distance by casting a virtual ray from their hand, head, or another pointing device; the user selects an object when a collision occurs between the ray and one of the virtual objects. Arm-extension techniques allow the user to select objects by providing a procedure to extend the virtual representation of their arm; once the arm has grown to a suitable length, selection can be performed on objects within the user's virtual reach. Most of this research has been performed where objects are large and out of reach. Some techniques can be used in both the ray casting and arm-extension categories, but most are not applicable to both. This paper discusses our research investigating the issues and problems related to interaction modes within
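To make the two categories concrete, the following Python sketch (hypothetical names, not code from InVision) illustrates both: a ray-casting picker that tests the pointing ray against bounding spheres and returns the nearest hit, and a Go-Go-style nonlinear mapping (Poupyrev et al. 1996) as one published example of arm-extension. The threshold and gain values are illustrative assumptions only.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a normalized ray to a bounding sphere, or None on a miss."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = center[0] - ox, center[1] - oy, center[2] - oz
    t = lx * dx + ly * dy + lz * dz              # closest approach along the ray
    if t < 0.0:
        return None                              # sphere is behind the pointer
    d2 = (lx * lx + ly * ly + lz * lz) - t * t   # squared distance ray-to-center
    if d2 > radius * radius:
        return None                              # ray passes outside the sphere
    return t - math.sqrt(radius * radius - d2)   # first intersection distance

def pick(origin, direction, scene):
    """Ray-casting selection: name of the nearest object the ray collides with."""
    best_name, best_t = None, float("inf")
    for name, (center, radius) in scene.items():
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None and t < best_t:
            best_name, best_t = name, t
    return best_name

def extend_arm(real_reach, threshold=0.5, gain=10.0):
    """Go-Go-style arm-extension: within `threshold` metres the virtual hand
    tracks the real hand one-to-one; beyond it, virtual reach grows
    quadratically so distant objects come within the user's virtual reach."""
    if real_reach < threshold:
        return real_reach
    return real_reach + gain * (real_reach - threshold) ** 2
```

For example, with a pointer at the origin looking down +z, `pick` over a scene of three spheres selects the nearest intersected one, while `extend_arm(0.7)` stretches a 0.7 m real reach to roughly 1.1 m of virtual reach.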