International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 06 Issue: 04 | Apr 2019 www.irjet.net p-ISSN: 2395-0072
© 2019, IRJET | Impact Factor value: 7.211 | ISO 9001:2008 Certified Journal | Page 1907
Sign Language Interpreter using Image Processing and Machine Learning
Omkar Vedak¹, Prasad Zavre², Abhijeet Todkar³, Manoj Patil⁴
¹,²,³Student, Department of Computer Engineering, Datta Meghe College of Engineering, Mumbai University, Airoli, India
⁴Assistant Professor, Department of Computer Engineering, Datta Meghe College of Engineering, Mumbai University, Airoli, India
----------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Speech impairment is a disability which affects a person's ability to speak and hear. Such individuals use sign language to communicate with other people. Although it is an effective form of communication, it remains a challenge for people who do not understand sign language to communicate with speech-impaired individuals. The aim of this paper is to develop an application which translates sign language into English, in the form of text and audio, thus aiding communication with sign language users. The application acquires image data using the webcam of the computer; the images are pre-processed using a combinational algorithm, and recognition is performed using template matching. The translated text is then converted to audio. The database used for this system contains 6000 images of English alphabet signs, of which 4800 images were used for training and 1200 for testing. The system achieves 88% accuracy.
Key Words: Pre-processing, Feature Extraction, Edge
Detection, Classification.
1. INTRODUCTION
Sign language is an important part of life for deaf and mute people. They rely on it for everyday communication with their peers. A sign language consists of a well-structured code of signs and gestures, each of which has a particular meaning assigned to it. Sign languages have their own grammar and lexicon, and combine hand positions, hand shapes and hand movements.
People who know sign language can communicate with each other efficiently. However, communicating with people who do not understand sign language poses many problems. Communication is a very important part of our lives: we interact with our peers at offices, schools, hospitals and other public places. Deaf and mute people may find it difficult to express themselves in such situations because not everyone understands sign language. There are many highly talented people with speech impediments, and we feel that their disability should not become a hindrance to achieving their goals. Including them in the workforce will only improve the socio-economic development of the country.
Deaf and mute people usually depend on sign language
interpreters for communication. However, finding a good
interpreter is difficult and often expensive. Thus, a
computerized interpreter could be a reliable and cheaper
alternative. A system that can translate sign language into
plain text or audio can help in real-time communication. It
can also be used to provide interactive learning of sign
language.
There is no universal sign language for deaf people. Different countries use their own sign languages, although there are some striking similarities among them. It is still unclear how many sign languages exist in the world; some have received legal recognition and others have not. India's National Association of the Deaf estimates that there are 18 million people in India with hearing impairment. This paper discusses the implementation of a system which translates Indian Sign Language gestures into their English interpretation.
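The recognition approach outlined in the abstract (a pre-processed webcam image compared against a stored template database) can be sketched as follows. This is a minimal illustration of template matching, not the authors' implementation; the toy templates, labels and sum-of-squared-differences scoring are assumptions:

```python
import numpy as np

def match_template(image, templates):
    """Classify a pre-processed gesture image by comparing it against a
    dictionary of stored templates (label -> 2-D array) using the sum of
    squared differences; the closest template wins."""
    best_label, best_score = None, float("inf")
    for label, template in templates.items():
        score = np.sum((image.astype(float) - template.astype(float)) ** 2)
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Toy example: two 2x2 "templates" standing in for the letters A and B.
templates = {
    "A": np.array([[0, 255], [255, 0]]),
    "B": np.array([[255, 0], [0, 255]]),
}
query = np.array([[10, 250], [240, 5]])   # a noisy version of "A"
print(match_template(query, templates))   # -> A
```

In a real system the templates would be the pre-processed training images of each alphabet sign, and a similarity measure such as normalized cross-correlation is often preferred over raw squared differences.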
2. LITERATURE SURVEY
Several studies have addressed the translation of Indian Sign Language using deep learning. Some used an instrument-based approach, while others used a video-based approach.
In Ref. [1], Pham The Hai uses a Microsoft Kinect to translate Vietnamese Sign Language. In the proposed system, the person has to position himself within the Kinect's field of view and then perform sign language gestures. The system can recognize both static and dynamic gestures using a multiclass Support Vector Machine. During recognition, the gesture features are extracted, normalized and filtered based on Euclidean distance.
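The normalize-then-filter step described above can be sketched with NumPy. This is an illustrative reconstruction of the general technique, not the system in Ref. [1]; the feature vectors, labels and distance threshold are hypothetical:

```python
import numpy as np

def normalize(features):
    # Scale each feature vector to unit length so that distances compare
    # gesture shape rather than magnitude (e.g. signer's distance from the sensor).
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.where(norms == 0, 1.0, norms)

def filter_by_distance(query, candidates, labels, threshold):
    """Discard candidate gestures whose Euclidean distance from the
    normalized query exceeds the threshold (hypothetical value)."""
    q = query / np.linalg.norm(query)
    dists = np.linalg.norm(normalize(candidates) - q, axis=1)
    return [(label, float(d)) for label, d in zip(labels, dists) if d < threshold]

candidates = np.array([[2.0, 0.0], [0.0, 3.0]])   # stored gesture features
labels = ["hello", "bye"]
matches = filter_by_distance(np.array([1.0, 0.0]), candidates, labels, threshold=0.5)
print(matches)  # only "hello" survives the distance filter
```

Candidates that survive this filter would then be passed to the multiclass SVM for final classification.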
Purva Badhe [2] uses Fourier descriptors for feature extraction. The system translates Indian Sign Language gestures into English. To represent the boundary points, the Fourier coefficients are calculated using the Fast Fourier Transform (FFT) algorithm. Because the extracted data is too large to store directly, it is compressed using vector quantization and stored in a codebook. During testing, the code vector generated from a gesture is compared against the existing codebook and the gesture is recognized.
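The Fourier-descriptor idea in Ref. [2] can be illustrated as follows: the boundary of a hand contour, treated as a sequence of complex numbers, is transformed with an FFT and normalized to obtain translation- and scale-invariant features. This is a generic sketch of the technique, not the authors' code; the circular test contour and the number of coefficients kept are assumptions:

```python
import numpy as np

def fourier_descriptor(boundary, n_coeffs=8):
    """Compute an invariant Fourier descriptor from a closed contour
    given as an array of complex points x + 1j*y."""
    coeffs = np.fft.fft(boundary)
    coeffs[0] = 0.0                      # drop the DC term -> translation invariance
    mags = np.abs(coeffs)
    mags = mags / mags[1]                # divide by the first harmonic -> scale invariance
    return mags[1:n_coeffs + 1]

# A circular contour: the descriptor is unchanged when the contour is scaled.
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
contour = np.cos(t) + 1j * np.sin(t)
d1 = fourier_descriptor(contour)
d2 = fourier_descriptor(5.0 * contour)
print(np.allclose(d1, d2))  # True: scale invariance
```

In a full pipeline these descriptor vectors would be quantized against the codebook, so that each gesture is represented by the index of its nearest code vector.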