                 !    "      #$ %%&’%% 191 Image Processing based Language Converter for Deaf and Dumb S.N.Boraste K.J.Mahajan Abstract – This application helps the deaf and dumb person to communicate with the rest of the world using sign language. Suitable existing methods are integrated in this application. Computer recognition of sign language is an important research problem for enabling communication with hearing impaired people. The Computer based intelligent system will enable deaf & dumb people significantly to communicate with all other people using their natural hand gestures. Keywords- sign language, hand gestures, RGB, Binary conversion, opencv I. INTRODUCTION Deaf and Dumb people are usually deprived of normal communication with other people in the society. It has been observed that they find it really difficult at times to interact with normal people with their gestures, as only a very few of those are recognized by most people. Since people with hearing impairment or deaf people cannot talk like normal people so they have to depend on some sort of visual communication in most of the time. Sign Language is the primary means of communication in the deaf and dumb community. As like any other language it has also got grammar and vocabulary but uses visual modality for exchanging information. The problem arises when dumb or deaf people try to express themselves to other people with the help of these sign language grammars. This is because normal people are usually unaware of these grammars. As a result it has been seen that communication of a dumb person are only limited within his/her family or the deaf community. The importance of sign language is emphasized by the growing public approval and funds for international project. 
In this age of technology, the demand for a computer-based system from the deaf and dumb community is high. Researchers have been attacking the problem for quite some time now, and the results are showing some promise. Interesting technologies are being developed for speech recognition, but no real commercial product for sign recognition is available in the current market. To take this field of research to a higher level, this project was studied and carried out. The basic objective of this research was to develop a computer-based intelligent system that will enable deaf and dumb people to communicate with all other people using their natural hand gestures. The idea was to design and build an intelligent system using image processing, data mining, and artificial intelligence concepts that takes visual input of sign language hand gestures and generates easily recognizable output in the form of text and voice.

II. RELATED WORK

A literature survey was carried out as part of the project work. It provides a review of past research on image-processing-based language converters and related efforts by other researchers. This past research properly guides and justifies the scope and direction of the present effort.

Soumya Dutta and Bidyut B. Chaudhuri note that color-based target recognition is inherently difficult due to variation in the apparent color of targets under varying imaging conditions. A number of factors contribute to the problem, namely the color of incident daylight, the surface reflectance properties of the target, illumination geometry, and viewing geometry. A color image edge detection algorithm is proposed in their paper [1].

Christopher Lee and Yangsheng Xu developed a glove-based gesture recognition system that was able to recognize 14 letters from the hand alphabet, learn new gestures, and update the model of each gesture in the system online, at a rate of 10 Hz.
They developed a gesture recognition system based on Hidden Markov Models which can interactively recognize gestures and perform online learning of new gestures [2].

Etsuko Ueda, Yoshio Matsumoto, Masakazu Imai, and Tsukasa Ogasawara propose a novel method for hand-pose estimation that can be used for vision-based human interfaces. A "voxel model" is used in this system for integrating multiple viewpoints. At present, there are two major problems: accuracy and speed [3].

P. Subha Rajam and G. Balakrishnan propose a method that converts each of the 32 (2^5) combinations of binary numbers, representing the UP and DOWN positions of the five fingers, into a decimal number. They used a palm extraction method, a feature point extraction method, and a training phase [4].

T. Starner, J. Weaver, and A. Pentland proposed combining Dynamic Time Warping or Hidden Markov Models (HMMs) with a discriminative classifier for recognizing speech, handwriting, or sign language. Their system involved two proposed methods using 40 signs. The first method obtained 92% accuracy and the second method obtained 98% accuracy [5].

Fu-hua Chou and Yung-Chun Su present novel processing algorithms for gesture image detection and recognition. In this method, the static hand gestures of sign language are modeled by a Gaussian mixture model, and an unknown gesture image is identified by Gaussian model matching. Based on the presented static sign language detection and recognition algorithms, the correct recognition rate is about 94% on average [6].

Shreyashi Narayan Sawant presents the design and implementation of a real-time, vision-based sign language recognition system that recognizes 26 gestures from Indian Sign Language using MATLAB. Principal Component
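The binary finger-position encoding surveyed above (32 combinations of five UP/DOWN finger states mapped to decimal numbers) can be sketched in a few lines of Python. This is a minimal illustration of the encoding idea only, not the authors' implementation; the function name and the assumed bit ordering (thumb first, most significant bit first) are illustrative assumptions.

```python
def fingers_to_sign_index(fingers):
    """Map a 5-element UP/DOWN finger vector (1 = UP, 0 = DOWN) to 0..31.

    Assumed ordering: thumb, index, middle, ring, little finger,
    read as a binary number with the thumb as the most significant bit.
    """
    if len(fingers) != 5 or any(b not in (0, 1) for b in fingers):
        raise ValueError("expected five binary finger states")
    index = 0
    for bit in fingers:
        index = (index << 1) | bit  # shift in each bit, MSB first
    return index

# Example: index and middle fingers UP, others DOWN -> binary 01100 -> 12
print(fingers_to_sign_index([0, 1, 1, 0, 0]))  # prints 12
```

In a full pipeline, the five binary finger states would first be estimated from the segmented hand image (e.g. by the palm and feature-point extraction steps the survey mentions), and the resulting 0-31 index would then be looked up in a table of signs.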