D. Torricelli, I. Bernabucci, M. Goffredo, T. D'Alessio. Eye-reach: a multimodal interface based on user intention prediction. Gerontechnology 2008;7(2):225.

Multimodal systems represent an effective modality of interaction between humans and machines in both assistive and rehabilitative contexts 1. An important aspect is the extraction of user intentions, both to realize more intuitive interfaces and to deepen understanding of the cognitive aspects involved in rehabilitation. The potential impact of this technology on the clinical practice of physical medicine and rehabilitation is inversely proportional to the intrusiveness of the technology used to provide this kind of treatment. The possibility of monitoring movement intent with untethered technology, such as a video capture system, represents a step towards minimally intrusive assistive systems. The present work proposes a novel approach to system integration, based on the study of gaze as an estimator of intentionality, combined with a bio-inspired arm control module that assists the patient in upper-extremity reaching tasks. This proposal is thus a proof of concept for a system able to provide the patient with the electrical stimulation patterns necessary to perform functional movements in a cooperative way, thus promoting the possibility of user-driven functional electrical therapy.

Methods The system is composed of two main modules (Figure 1): (i) An intention prediction module, based on a neural approach 2, that estimates gaze direction by analysing images of the user's eyes captured by a commercial webcam. A simple training procedure is required to classify the gaze and determine the object to reach.
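The core of the intention prediction module — classifying gaze among a small set of desk objects after a brief per-user training phase — can be sketched as follows. This is a hedged illustration only: the paper uses a neural classifier on webcam eye images, whereas here a nearest-centroid classifier operates on synthetic "eye feature" vectors, and all dimensions, noise levels, and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N_OBJECTS = 4          # four objects on the desk, as in the experiment
FEATURE_DIM = 16       # hypothetical eye-image feature dimension

# Training phase (stand-in): the user looks at each object in turn and
# we store one mean feature vector per object.
centroids = rng.normal(size=(N_OBJECTS, FEATURE_DIM))

def sample_gaze(obj_id, noise=0.3):
    """Simulate an eye-feature vector recorded while gazing at obj_id."""
    return centroids[obj_id] + noise * rng.normal(size=FEATURE_DIM)

def classify_gaze(features):
    """Return the index of the object the gaze most likely targets."""
    dists = np.linalg.norm(centroids - features, axis=1)
    return int(np.argmin(dists))

# Evaluate the classification rate over simulated trials.
trials = [(obj, classify_gaze(sample_gaze(obj)))
          for obj in range(N_OBJECTS) for _ in range(50)]
rate = np.mean([true == pred for true, pred in trials])
print(f"classification rate: {rate:.2f}")
```

The same structure (train on labelled gaze samples, then pick the best-matching target) carries over when the centroid matching is replaced by the trained neural network of the actual system.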
(ii) An arm control module, based on a bio-inspired neural internal model of the arm 3, that generates the muscular stimulation patterns driving the impaired, FES-assisted (Functional Electrical Stimulation) arm towards the desired location on the table. To accomplish this, a specifically trained artificial neural network (a multilayer perceptron) takes the output of the intention prediction module and produces the FES patterns for the shoulder and elbow joint muscles. The experimental procedure is based on gaze detection: subjects sit in front of a working desk and are asked to look at one out of 4 objects placed on it, while a webcam captures their gaze direction. No visual interfaces (e.g. a computer monitor) are needed, with the aim of mimicking natural interaction with the environment. In the first phase, the stimulation patterns are fed to a synthetic arm model with Hill-based muscles.

Results and discussion Preliminary results have shown a classification rate of 95% for the gaze estimation, and a mean absolute error of 35 mm for the planar position of the endpoint in the bio-inspired simulated environment. Further experiments will test the method in a realistic context, in terms of the gaze classification rate over real objects and the percentage of successful reaches in the simulated context.

References
1. Oviatt SL. In: The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. Mahwah: Erlbaum; 2003; pp 286-304
2. Torricelli D, Goffredo M, Conforto S, Schmid M, D'Alessio T. Proceedings of the 2nd International Workshop on Biosignal Processing and Classification; pp 86-95
3. Bernabucci I, Conforto S, Capozza M, Accornero N, Schmid M, D'Alessio T. Journal of NeuroEngineering and Rehabilitation 2007;4:33

Keywords: human machine interaction, gaze tracking, FES, neural networks
Address: University Roma TRE, Italy; E: d.torricelli@uniroma3.it
Figure 1 Overview of the system
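The evaluation described above — a target on the desk mapped to joint commands for the shoulder and elbow, with the planar endpoint error measured in millimetres — can be sketched in miniature. This is a hedged stand-in, not the paper's method: the trained multilayer perceptron and Hill-based muscle dynamics are replaced by analytic two-link inverse and forward kinematics, and the link lengths and target positions are invented.

```python
import numpy as np

L_UPPER, L_FORE = 0.30, 0.25   # upper-arm and forearm lengths in metres (assumed)

def forward(shoulder, elbow):
    """Planar endpoint of a two-link arm for given joint angles (rad)."""
    x = L_UPPER * np.cos(shoulder) + L_FORE * np.cos(shoulder + elbow)
    y = L_UPPER * np.sin(shoulder) + L_FORE * np.sin(shoulder + elbow)
    return np.array([x, y])

def internal_model(target):
    """Stand-in for the trained network: analytic inverse kinematics."""
    x, y = target
    d2 = x * x + y * y
    cos_e = (d2 - L_UPPER**2 - L_FORE**2) / (2 * L_UPPER * L_FORE)
    elbow = np.arccos(np.clip(cos_e, -1.0, 1.0))
    shoulder = np.arctan2(y, x) - np.arctan2(
        L_FORE * np.sin(elbow), L_UPPER + L_FORE * np.cos(elbow))
    return shoulder, elbow

# Mean absolute endpoint error (mm) over four hypothetical desk targets,
# mirroring how the 35 mm figure is defined in the abstract.
targets = np.array([[0.35, 0.10], [0.30, 0.25], [0.20, 0.35], [0.40, -0.05]])
errors = [1000 * np.linalg.norm(forward(*internal_model(t)) - t) for t in targets]
print(f"mean endpoint error: {np.mean(errors):.2f} mm")
```

In the real system the inverse mapping is learned rather than analytic, and the commands are muscle stimulation patterns rather than joint angles, which is why the reported endpoint error is tens of millimetres rather than essentially zero as in this idealized sketch.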