COMPUTER RECOGNITION OF THE GESTURES OF PEOPLE WITH DISABILITIES

Richard Foulds, Ph.D. and Andrew Moynahan, M.S.
Applied Science and Engineering Laboratories
University of Delaware/A.I. duPont Institute

The use of gesture-based communication systems by individuals with significant expressive communication difficulties holds promise, but is limited by the need for interpretation by the receiver of the message. This paper describes the development of a computer recognition technique that accommodates variations in articulation and classifies gestures within streams of dynamic data.

BACKGROUND

Gestural communication is believed to precede vocal communication in most individuals. Manual pointing, reaching and grasping movements, eyegaze, and facial expressions appear at early developmental levels (Kinsbourne, 1986). In some instances, the use of gestures is not replaced by speech, but instead develops fully into an articulation of language. Among people who are profoundly deaf and are unable to use the auditory channel to reinforce the development of speech, the use of sign language (e.g., American Sign Language, ASL) is common.

The use of gesture as a mode of augmentative communication is less common than its use as a full sign language, but has been discussed in several contexts (Foulds, 1990; Lloyd and Karlan, 1984). The examples below describe two instances in which gestural systems are used as alternatives to other augmentative communication techniques.

Example 1

An adolescent male with cerebral palsy has an extensive gesture vocabulary (more than 150 signs) comprising conventional ASL signs, modified ASL signs, and gestures of his own invention. The modified signs represent compromises in his precision and range of movement. Handshapes are important in differentiating one sign from another.
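The paper does not specify its recognition algorithm at this point, but the goal it states, classifying gestures while tolerating variations in articulation, can be illustrated with a common template-matching approach. The sketch below uses dynamic time warping (DTW), which compares a captured gesture trajectory against stored templates while allowing differences in speed and timing. All function names and the feature representation are hypothetical, chosen only for illustration.

```python
# Illustrative sketch only; not the technique described in this paper.
# A gesture is represented as a sequence of feature vectors (tuples),
# e.g. hand positions sampled over time.

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences.

    The warping allows one sequence to be locally stretched or
    compressed to match the other, which absorbs variations in the
    speed and timing of a gesture's articulation.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two feature vectors.
            d = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame in a
                                 cost[i][j - 1],      # skip a frame in b
                                 cost[i - 1][j - 1])  # align the frames
    return cost[n][m]

def classify(segment, templates):
    """Return the label of the stored template nearest to the segment."""
    return min(templates, key=lambda label: dtw_distance(segment, templates[label]))
```

A caller would record one template sequence per gesture in the user's vocabulary and then classify each incoming segment against that dictionary; more elaborate systems keep several templates per gesture to capture the articulation variations discussed below.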