Indonesian Journal of Electrical Engineering and Computer Science
Vol. 28, No. 1, October 2022, pp. 346-357
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v28.i1.pp346-357
Journal homepage: http://ijeecs.iaescore.com

Online hand position detection and classification system using multiple classification algorithms

Ahmed Etihad Jaleel, Hesham Adnan Alabbasi
Computer Science Department, College of Education, Mustansiriyah University, Baghdad, Iraq

Article history:
Received Mar 19, 2022
Revised Jun 11, 2022
Accepted Jul 19, 2022

ABSTRACT
Hand position recognition is highly significant for human-computer interaction. Different kinds of devices and technologies can be used for data acquisition, each with its own specifications and accuracy; one of these devices is the Kinect V2 sensor. Three-dimensional locations of the skeleton joints are taken from the Kinect device to create three types of data: the first is raw joint-position data, the second is the angles between joints, and the third combines both types. These three types of data are used to train four classifiers: support vector machines, random forest, k-nearest neighbors, and multilayer perceptron. The experiments are performed on a dataset of 30,480 frames from 127 volunteers, and the saved trained models are used to predict and classify the eight hand positions in a real-time system. The results show that the proposed approach performs well, with high efficiency and accuracy reaching up to 99.07% in some cases, and the average time spent checking frames sequentially is very short, reaching 0.59×10⁻³ seconds in some cases. This system can be used in many applications, such as controlling robots or devices, comparing physical exercises, or monitoring the elderly and patients.

Keywords:
Hand position
Kinect sensor
K-nearest neighbors
Multilayer perceptron
Random forest
Skeleton
Support vector machines

This is an open access article under the CC BY-SA license.
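The feature-construction and training pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the joint triples used for angle features, the synthetic data, and the default classifier settings are all assumptions for demonstration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the 3D points a-b-c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical stand-in data: (n_frames, 25 Kinect joints, xyz), labels 0..7
# for the eight hand positions. Real data would come from the Kinect V2 SDK.
rng = np.random.default_rng(0)
positions = rng.normal(size=(400, 25, 3))
labels = rng.integers(0, 8, size=400)

# Feature type 1: raw joint positions, flattened per frame.
raw = positions.reshape(len(positions), -1)
# Feature type 2: angles at selected joints (these triples are illustrative).
triples = [(4, 5, 6), (8, 9, 10), (5, 6, 7), (9, 10, 11)]
angles = np.array([[joint_angle(f[i], f[j], f[k]) for i, j, k in triples]
                   for f in positions])
# Feature type 3: both combined.
combined = np.hstack([raw, angles])

# Train the four classifiers named in the abstract on one feature type.
X_train, X_test, y_train, y_test = train_test_split(
    combined, labels, random_state=0)
classifiers = {
    "SVM": SVC(),
    "Random forest": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=300),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, round(clf.score(X_test, y_test), 3))
```

With random data the scores are near chance; on real skeleton frames the same pipeline would be evaluated per feature type, as the paper does.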
Corresponding Author:
Ahmed Etihad Jaleel
Computer Science Department, College of Education, Mustansiriyah University
Falastin (Palestine) St., Baghdad-Resafa, Baghdad, 00964, Iraq
Email: ahmed.etihad@uomustansiriyah.edu.iq

1. INTRODUCTION
The Microsoft Kinect V2 sensor is used in many scientific fields because it is inexpensive, very accurate [1], [2], easy to set up, and fast. To extract skeleton position data, the Kinect provides the locations of 25 virtual anatomical joint trajectories, which can be extracted from the depth map with a per-pixel semantic segmentation algorithm [3]. With the ability to track six people, the Kinect sensor also comes with a powerful software development kit (SDK). Its technology has allowed many applications to be developed beyond the original scope of gaming, covering several categories such as detection of the human body or a part of it (the face, hands, or legs) and recognition of movements and gestures in the fields of sign language and gait recognition [4]-[9]. It is also used to monitor patients and the elderly, for healthcare or fall detection, and to alert those concerned, using one or several devices [10]-[12]; to monitor exercises with an avatar designed to teach and display movements and assess the correctness of their execution [10], [13]; and to control a robot as a whole, or a robotic arm, through gestures or imitation of movements [6], [14]. It can also be implemented in real-time applications [15] and used as a scanner for 3D printing [16]. Because artificial intelligence plays a large role in these areas, we apply multiple classification algorithms to three types of data extracted from the second version of the