Real Time Prediction of American Sign Language Using Convolutional Neural Networks

Shobhit Sinha, Siddhartha Singh, Sumanu Rawat, and Aman Chopra

Manipal Institute of Technology, Manipal 576104, Karnataka, India
shobhit.sinha19@gmail.com, singh.siddhartha23@gmail.com, sumanurawat12@gmail.com, amanchopra64@gmail.com

Abstract. American Sign Language (ASL) was developed in the early 19th century at the American School for the Deaf in the United States of America. It is a natural language inspired by French Sign Language and is used by around half a million people around the world, with a majority in North America. Deaf Culture views deafness as a difference in human experience rather than a disability, and ASL plays an important role in this experience. In this project, we have used Convolutional Neural Networks to create a robust model that recognizes 29 ASL characters (26 alphabet letters and 3 special characters). We further host our model locally over a real-time video interface, which provides predictions in real time and displays the corresponding English characters on the screen like subtitles. We view the application as a one-way translator from ASL to English for the alphabet. We conceptualize this whole procedure in our paper and explore some useful applications that can be implemented.

Keywords: American Sign Language · Convolutional Neural Network · Image processing · Video processing

1 Introduction

The National Institute on Deafness and Other Communication Disorders is an organization that conducts biomedical research on the processes of hearing, balance, smell, taste, voice, speech, and language. Its statistics indicate that, within the USA, 3 in every 1000 people are born with some degree of impaired hearing in one or both ears, and around 30 million people above the age of 12 have hearing loss in both ears. A number of solutions have been developed to ease communication for those who cannot hear clearly.
American Sign Language provides a wide array of symbols, actions, and movements that enable communication without sound. Its scope is vast; hence, for the purpose of conceptualization, we have concentrated our research on the ASL Alphabet dataset only. In this paper, we develop a Convolutional Neural Network model using the Keras library. After making sure that the CNN

© Springer Nature Singapore Pte Ltd. 2019
M. Singh et al. (Eds.): ICACDS 2019, CCIS 1045, pp. 22–31, 2019. https://doi.org/10.1007/978-981-13-9939-8_3