Multi-level Taxonomy Review for Sign Language Recognition:
Emphasis on Indian Sign Language
NIMRATVEER KAUR BAHIA∗ and RAJNEESH RANI∗, National Institute of Technology, India
With the phenomenal increase in image and video databases, there has been a corresponding rise in human-computer
interaction systems that recognize sign language. Sign language is a form of non-verbal communication in which
information is exchanged between two people using different gestures. Sign language recognition has been studied for
many languages; however, comparatively little work has been done for Indian Sign Language. This paper presents a review
of sign language recognition across multiple languages. Data acquisition methods are reviewed under four categories:
(a) glove-based, (b) Kinect-based, (c) leap motion controller and (d) vision-based. The pros and cons of each data
acquisition method are discussed, along with applications of sign language recognition.
Furthermore, this review constructs a coherent taxonomy that organizes modern research into three levels:
Level 1, Elementary (recognition of sign characters); Level 2, Advanced (recognition of sign words); and Level
3, Professional (sentence interpretation). The open challenges and issues at each level are also explored in this
research to provide valuable insight into the technological landscape. Publicly available data-sets for different
sign languages are likewise discussed. This review shows that significant exploration of communication
via sign recognition has been performed on static, dynamic, isolated and continuous gestures using various acquisition
methods. It is hoped that this study will enable readers to discover new pathways and gain the knowledge needed to carry
out further research in the domain of sign language recognition.
CCS Concepts: • Computing methodologies → Artificial intelligence.
Additional Key Words and Phrases: Indian Sign Language (ISL), Sign Language Recognition (SLR), Vision-based, Feature
extraction, Support vector machine (SVM), Region of Interest (ROI).
1 INTRODUCTION
1.1 Sign Language
Sign Language (SL) is a means of nonverbal communication used by the hearing-impaired and speech-impaired
community, who use hand gestures, facial expressions and body postures to express their feelings and emotions
and to convey messages. It is difficult for deaf people to express their feelings to hearing people; meanwhile,
most hearing people are unaware of the sign language used by the deaf community [91]. Among informative arm/hand gesture
categorizations, SL is considered the most coordinated and structured of the different kinds of gestures. SL involves
hand/arm gestures and signs that convey semantic meaning through facial expressions and other body postures
[114].
SL is not a universal language. According to the World Federation of the Deaf, more than 200 sign languages
exist worldwide. For instance, American Sign Language (ASL), British Sign Language (BSL), Australian
Sign Language (Auslan), French Sign Language (FSL), Indian Sign Language (ISL) and Japanese Sign Language (JSL)
are all different sign languages. Curiously, most nations that share a similar spoken language do not have similar
∗Both authors contributed equally to this research.
Authors’ address: Nimratveer Kaur Bahia, nimratbahia@gmail.com; Rajneesh Rani, ranir@nitj.ac.in, National Institute of Technology, Dr. B.R.
Ambedkar, Jalandhar, Punjab, India, 144 011.
ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government.
As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for
Government purposes only.
© 2022 Association for Computing Machinery.
2375-4699/2022/9-ART $15.00
https://doi.org/10.1145/3530259
ACM Trans. Asian Low-Resour. Lang. Inf. Process.