A MULTIMODAL USER INTERFACE FOR GREEK SIGN LANGUAGE TRAINING

Georgios Stylianou
Department of Computer Science and Engineering, European University Cyprus
6 Diogenes str., P.O. Box 22006, Engomi 1516, Cyprus

Evangelos Englezakis
European University Cyprus
6 Diogenes str., P.O. Box 22006, Engomi 1516, Cyprus

ABSTRACT

Sign language is a language known mainly by the community of deaf people and by others connected to this community. It is based on a set of hand gestures that differ from language to language. Training more people in sign language is difficult because there are few adequate resources to support independent study. This constraint limits the mobility of deaf people and hinders efforts to raise the general public's awareness of the deaf. To date, a limited amount of research has addressed sign language recognition, using pattern recognition and image processing techniques to recognize hand gestures, but little has addressed sign language training. In this work, we demonstrate extensible software for Greek Sign Language (GSL) recognition and training that takes direct input from an inexpensive virtual glove. The software can be used within minutes and teaches the user the hand gestures of the GSL alphabet.

KEYWORDS

Multimodal interfaces, sign language, virtual glove, 3D interfaces.

1. INTRODUCTION

A sign language uses manual communication, body language and lip patterns instead of sound to convey meaning, simultaneously combining hand gestures; the orientation and movement of the hands, arms or body; and facial expressions to express a speaker's thoughts. Sign languages commonly develop in deaf communities, which include deaf (or hearing-impaired) people, interpreters, and the friends and families of deaf people. Contrary to common belief, sign languages are independent of oral languages and follow their own paths of development.
For example, British Sign Language and American Sign Language are quite different and mutually incomprehensible, even though the hearing people of Britain and America share the same oral language. Although the deaf community has used sign languages for many centuries, there is a shortage of sign language learning resources. Available resources consist of visual (image-based), textual or video-based descriptions and demonstrations. These are inadequate because they are not interactive and are incomplete due to gesture occlusion. Hence a person interested in learning a sign language must seek the help of an instructor. Consequently, owing to this lack of exposure to sign language, virtually no one outside the deaf community can communicate with or understand a deaf person, limiting the access of deaf people to public places such as banks, hospitals and public services, as well as their ability to meet and collaborate with people outside their community.

Previous research includes work carried out in the European projects VisiCast (VisiCast project, 2000; Verlinden et al., 2001) and eSign (2002), as well as work by independent researchers who mainly created tools for sign language recognition using camera-based tracking, pattern recognition and hidden Markov models (Davis and Shah, 1994; Gao et al., 2000; Starner and Pentland, 1995; Starner et al., 1998; Paschaloudi and Margaritis, 2006). State-of-the-art reviews summarize the gesture recognition techniques (Pavlovic et al., 1997; LaViola, 1999).

IADIS International Conference Computer Graphics and Visualization 2008