A Real-time Continuous Alphabetic Sign Language to Speech Conversion VR System

Rung-Huei Liang    Ming Ouhyoung
Communications & Multimedia Lab., Computer Science and Information Engineering Dept., National Taiwan University, Taipei, Taiwan
email: ming@csie.ntu.edu.tw, FAX: 886-2-3628167

Abstract

Many modes of communication are possible between human and computer, and gesturing is considered one of the most natural in a virtual reality system. Because of its intuitiveness and its potential to help the hearing impaired and the speaking impaired, we have developed a gesture recognition system. Considering the world-wide use of ASL (American Sign Language), this system focuses on recognizing a continuous flow of ASL alphabetic signs to spell a word, followed by speech synthesis, and adopts a simple and efficient windowed template matching strategy to achieve real-time, continuous recognition. In addition to the abduction and flex information in a gesture, we introduce the concept of a contact point to resolve the intrinsic ambiguities of some ASL gestures. Five tact switches, serving as contact points and sensed by an analogue-to-digital board, are sewn onto a glove cover to enhance the functions of a traditional data glove.

Keywords: gesture recognition, virtual reality applications

1. Introduction

The ways of communication have attracted much interest for different reasons. As early as the 1900s, scientists tried to unravel the mystery of communication between human beings and animals. In 1951, Keith and Hayes [1] conducted an experiment in which they tried to teach a chimpanzee named Viki to speak English; however, only four words (papa, mama, cup, and up) could be uttered after more than six years of training. In the 1960s, Lieberman [1] discovered that a chimpanzee is incapable of human speech for anatomical reasons.
Nevertheless, in 1976, Fouts, Chown, Kimble, and Couch [1] taught a chimpanzee named Ali ASL and showed that it could respond correctly to signed commands. The project also showed that, during training, Ali could respond correctly to spatial arrangements and learn the grammar of the requested response, an ordered sequence of signs. These results reveal the possibility of communicating with chimpanzees, and even human neonates, by gestures rather than by other means. Kunii [2] developed a system that translates natural language into sign language and then synthesizes it through corresponding computer animation, devoting much effort to the analysis of the grammar of natural