Hand Gesture Recognition Using Leap Motion Controller for Recognition of Arabic Sign Language

Bassem Khelil #1, Hamid Amiri #2
# University of Tunis El Manar, Electrical Engineering Department, National Engineering School of Tunis, SITI-LAB, Tunis, Tunisia
1 khelil.bassem@gmail.com
2 hamidlamiri@gmail.com

Abstract— The introduction of novel acquisition devices, such as the Leap Motion Controller (LMC) and the Microsoft Kinect, makes it possible to obtain a precise, informative description of the hand pose, which can be exploited for accurate gesture recognition, in particular hand gesture recognition of sign language. In this paper, we propose a novel pattern recognition method for recognizing symbols of the Arabic Sign Language (ArSL). Our proposal is based on the sparse data provided by the LMC sensor. The scheme extracts meaningful characteristics from these data, such as the angles between fingers, and feeds them to a classifier that decides which gesture is being performed, achieving high accuracy. We show that our approach successfully recognizes 28 static hand gestures of ArSL, covering the letters "alif" to "yah" and the digits 0-9. An experimental study of our approach is presented, and we show that the recognition rate can still be improved.

Keywords— Hand Gesture Recognition, Leap Motion Controller, SVM, Arabic Sign Language.

I. INTRODUCTION

A sign language is a communication method for deaf people. By using a sign language as an input interface to Information and Communications Technology (ICT) devices, it becomes possible for hearing-impaired people to interact with such devices, something which is hard to achieve with a conventional keyboard or touch pad. A sign language uses visual information conveyed by finger, hand and arm movements. At the same time, several finger gestures are combined with parts of the face, such as the line of sight and the mouth. Fingerspelling can represent each of the 28 letters of the Arabic alphabet with a distinct finger configuration.
In current research on Sign Language Recognition (SLR), recognition based on colored images, depth images and hand shapes is used [1]. Since the images must be captured while the user wears colored gloves [1], having to wear the glove is not practical. Image-based recognition also requires a long computation time to detect the hand and the fingers, so it takes a relatively long time to obtain the final recognition result. In the case of recognition with the Kinect sensor [2], a large space is required for skeletal tracking, so it is hard to recognize fingerspelling anywhere with the Kinect. Therefore, SLR is needed with a compact device such as the Leap Motion Controller, which can easily recognize the shape of the fingers or hands anywhere. In this project, we propose a hand gesture recognition approach using the LMC [3], [4]. The LMC provides skeletal tracking that recognizes the framework of the fingers and yields highly accurate data, such as fingertip positions, the positions of the finger bones and the angle of the thumb. In addition, the use of the LMC allows the recognition of 28 static hand gestures of ArSL, for the letters "alif" to "yah" and the digits 0-9, in real time.

This paper is organized as follows: Section 2 introduces the LMC. Section 3 reviews the literature on sign language recognition in detail. Section 4 details our proposed gesture recognition system using the LMC. Section 5 highlights the simulation results. The paper closes with a conclusion and future perspectives.

II. LEAP MOTION CONTROLLER

The LMC is a compact device that connects to a PC via USB. It uses InfraRed (IR) imaging to determine the position of predefined objects in a limited space in real time. It can thus sense hand and finger movements in the air above it; these movements are recognized and translated into actions by the approach to be developed. The sensor software analyzes the objects detected in the device's field of view.
It recognizes hands, fingers, and tools, and reports their discrete positions, gestures, and motion [4]. The controller's field of view is an inverted pyramid centered on the device, as represented in figure 1. The effective range of the controller extends from approximately 25 to 600 millimetres above the device. The controller itself is accessed and programmed through Application Programming Interfaces (APIs), with support for a variety of programming languages, ranging from C++ to Python and JavaScript. The positions of the recognized objects are acquired through these APIs. Both Cartesian and spherical coordinate systems are used to describe positions in the controller's sensory space.

Fig. 1 Field of view and coordinate system of the LMC
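As an illustration of how the positions reported through these APIs might be handled, the sketch below converts a Cartesian position (assuming the LMC convention of a y-axis pointing up, in millimetres) to spherical coordinates, and computes the angle between two finger direction vectors, the kind of angle-between-fingers feature used as input to a classifier. The axis convention and the example vectors are assumptions for illustration, not output of the actual device SDK.

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert a Cartesian position (y-axis up, millimetres) to spherical
    coordinates: (radius, inclination from the +y axis, azimuth in the
    x-z plane)."""
    r = math.sqrt(x * x + y * y + z * z)
    if r == 0.0:
        return 0.0, 0.0, 0.0
    inclination = math.acos(y / r)   # 0 when the point is directly above the device
    azimuth = math.atan2(z, x)       # angle in the horizontal plane
    return r, inclination, azimuth

def angle_between(u, v):
    """Angle in degrees between two 3-D direction vectors, e.g. the
    direction vectors of two adjacent fingers."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    cos_t = max(-1.0, min(1.0, dot / norm))  # clamp against rounding error
    return math.degrees(math.acos(cos_t))

# Hypothetical frame data: a palm position 200 mm above the device,
# and two finger direction vectors.
r, inc, az = cartesian_to_spherical(0.0, 200.0, 0.0)
spread = angle_between((0.0, 1.0, 0.0), (0.5, 0.8, 0.0))
```

A full feature vector for one gesture would concatenate such angles (and, optionally, the spherical fingertip positions) across all five fingers before passing them to the classifier.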