Position paper describing the pre-project status of a piece of work in human-computer interaction with ubiquitous systems. For consideration at the 3rd UK-UbiNet Workshop: Designing, evaluating and using ubiquitous computing systems, 9-11 February 2005, University of Bath. http://www.bath.ac.uk/comp-sci/hci/uk-ubinet.html

Using thought to interact with future augmented spaces

Shaun W. Lawson, School of Computing, Napier University, 10 Colinton Road, Edinburgh, Scotland, EH10 5DT. s.lawson@napier.ac.uk
Karla Felix Navarro, Department of Computer Systems, Faculty of Information Technology, University of Technology, Sydney, Australia. karla@it.uts.edu.au
David Benyon, School of Computing, Napier University, 10 Colinton Road, Edinburgh, Scotland, EH10 5DT. d.benyon@napier.ac.uk

1. Introduction

In the envisaged pervasive-computerized society of the future, microprocessor devices will be woven into the everyday fabric of our existence [1]. Significant progress has been made in a number of technological areas, edging us slowly towards Weiser's vision. Advances in software engineering and network systems methods that allow for standardized device-to-device ad-hoc communication, reconfiguration and service discovery, coupled with the commercial availability of handheld and embedded mote-like devices, have accelerated research in the field. Additionally, the human-computer interaction (HCI) community has reported an abundance of work envisaging the scenario of a user, perhaps equipped with a handheld PDA-type device, interacting in an ad-hoc manner with small numbers of other similarly equipped users, ambient displays, or software agents (e.g. [2,3]). However, one area that has been rather neglected by the HCI community is that of simultaneous user interaction with large numbers of devices.
Whilst there has been a good deal of interest in using wireless networks of devices for collecting and recording spatio-temporal information, very little work aims to present any information to a user as they move through a networked environment - instead, most proposed systems only gather data for offline perusal and interpretation. In fact, very little published literature describes work on allowing users to interact with large numbers of pervasive devices at all. This is in contrast to work in the software engineering and network technology communities, which have long since recognized the unique and new problems associated with the deployment of networks of hundreds or thousands (or more) of wirelessly networked devices. Though their own discussion is biased towards accommodating the needs of multiple users rather than multiple devices, Kray et al [4] highlight the sheer scale of the problem facing designers of interfaces to ubiquitous systems in terms of interactions, devices and social considerations (amongst others).

In this position paper we pose the question of exactly how we should propose to interact with future spaces that have been augmented with very large numbers of communicating devices, and whether we can envisage socially satisfactory solutions using novel or emerging approaches. In particular, we explore the potential use of augmented reality (AR) for presenting real-time displays of data acquired by a wireless network of pervasive devices, and how we may in future exploit the potential of Brain Computer Interfaces (BCIs) for interaction with these displays.

2. Augmented Spaces – seeing invisible devices

The emergence of inexpensive portable computing devices, combined with readily available see-through head mounted displays (HMDs), has enabled researchers to begin to develop wearable mobile augmented reality (AR) systems [5].
This in turn has led to a spate of prototype applications that support the autonomous visual tagging of, and interaction with, objects of interest in a user's field of view. On the one hand, we are wary of the social implications of this kind of technology - the prospect of being constantly fed with streams of digital information wherever we happen to be is perturbing. The end result of such a scenario is potentially an over-abundance of data, much of which might be spurious and unsolicited, being directed in a one-way fashion at the unfortunate user: a so-called information push scenario [6]. On the other hand, we see great scope for using wearable AR systems to solve one aspect of the problem of interacting with ubiquitous computing systems - AR could be used, for instance, to display clouds or mists of pixel data that directly show the information recorded by a wireless sensor network as the user physically moves through that network. We term this scenario an augmented space; similar concepts have recently been described in [7,8]. AR displays could also be used to allow users to mediate the flow of data (some of it perhaps unsolicited) from devices embedded in the infrastructure around them. If a simple object selection tool could be provided, then users could activate or de-activate devices, or groups of devices, in their vicinity. The use of object selection tools in mixed reality environments has been studied by a number of researchers (e.g. [9]) though, to-date the primary
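To make the augmented space scenario concrete, the following is a minimal sketch (not drawn from the paper itself) of the two interactions described above: mapping each sensor node's reading to the density of its AR "mist" overlay as the user moves through the network, and a crude object selection tool that activates or de-activates all devices within a selection radius. All names (SensorNode, overlay_opacity, toggle_in_radius) and the linear distance falloff are hypothetical illustrations, not part of any existing AR toolkit.

```python
import math
from dataclasses import dataclass

@dataclass
class SensorNode:
    """A hypothetical wirelessly networked device in the augmented space."""
    node_id: str
    x: float          # world position, metres
    y: float
    reading: float    # normalised sensor value in [0, 1]
    active: bool = True

def overlay_opacity(node: SensorNode, user_x: float, user_y: float,
                    max_range: float = 10.0) -> float:
    """Map a node's reading to the opacity of its AR 'mist' rendering.

    Nearer nodes with higher readings appear denser; inactive or
    out-of-range nodes are not rendered at all (opacity 0).
    """
    if not node.active:
        return 0.0
    dist = math.hypot(node.x - user_x, node.y - user_y)
    if dist > max_range:
        return 0.0
    falloff = 1.0 - dist / max_range   # simple linear distance falloff
    return node.reading * falloff

def toggle_in_radius(nodes, sel_x: float, sel_y: float, radius: float):
    """A simple object selection tool: flip the active state of every
    node inside the selection circle, returning the ids affected."""
    affected = []
    for n in nodes:
        if math.hypot(n.x - sel_x, n.y - sel_y) <= radius:
            n.active = not n.active
            affected.append(n.node_id)
    return affected
```

In a real system the opacity value would drive the HMD renderer, and the selection circle would be cast from the user's gaze or pointer; the point of the sketch is only that per-device state and a spatial query are enough to support both the "mist" display and user-mediated activation of nearby devices.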