International Journal of Reconfigurable and Embedded Systems (IJRES)
Vol. 13, No. 2, July 2024, pp. 286~295
ISSN: 2089-4864, DOI: 10.11591/ijres.v13.i2.pp286-295
Journal homepage: http://ijres.iaescore.com

Continuous hand gesture segmentation and acknowledgement of hand gesture path for innovative effort interfaces

Prashant Richhariya 1, Piyush Chauhan 1,2, Lalit Kane 1,3, Bhupesh Kumar Dewangan 4

1 School of Computer Science and Engineering, University of Petroleum and Energy Studies, Dehradun, India
2 Department of Computer Science and Engineering, JAIN (Deemed to be University), Bangalore, India
3 Department of Computer Science and Engineering, MIT World Peace University, Pune, India
4 Department of Computer Science and Engineering, OP Jindal University, Raigarh, India

Article history: Received Oct 12, 2022; Revised Sep 5, 2023; Accepted Sep 17, 2023

ABSTRACT

Human-computer interaction (HCI) has revolutionized the way we interact with computers, making interaction more intuitive and user-friendly. It is a dynamic field that has found applications in various industries, including multimedia and gaming, where hand gestures are at the forefront. The advent of ubiquitous computing has further heightened interest in using hand gestures as input. However, recognizing continuous hand gestures presents a distinct set of challenges, primarily stemming from the variable duration of gestures and the lack of clear starting and ending points. Our main objective is to propose a solution: a framework for “continuous palm motion analysis and retrieval” based on spatial-temporal and path knowledge. The framework harnesses the power of cognitive deep learning networks (DLNs), offering a significant advancement in the continuous hand gesture recognition domain. We conducted rigorous experiments on a diverse video dataset of hand gestures, achieving an F-score of up to 0.99.
Our framework has the potential to significantly enhance the accuracy and reliability of hand gesture recognition in real-world applications.

Keywords: Accuracy; Fault-tolerance; Human computer interaction; Optimization; Performance; Resource cost; Service level agreement violation rate

This is an open access article under the CC BY-SA license.

Corresponding Author: Bhupesh Kumar Dewangan, Department of Computer Science and Engineering, OP Jindal University, Punjipathra, Raigarh, Chhattisgarh 496109, India. Email: bhupesh.dewangan@gmail.com

1. INTRODUCTION

Effective communication involves not only spoken words but also gestures, which are essential for expressing meaning and boosting the expressiveness of communication. This applies to both the speaker and the audience. In the realm of human-computer interaction (HCI), gestures are instrumental in facilitating seamless interaction. Gestures serve as a bridge between the speaker’s intent and the audience’s understanding, forming the foundation of interaction [1]. When it comes to recognizing hand gestures, there are two primary approaches: non-vision-based and vision-based [2]. Among these, vision-based methods are particularly appealing due to their natural feel. Vision-based approaches can be further categorized as either active or passive. Active sensing techniques have emerged as a successful avenue for gesture recognition, notably through the use of devices such as the Microsoft Kinect V2 [3], [4] and Leap Motion cameras. These technologies offer a dynamic and responsive means of capturing gestures, making the recognition process more effective and accurate. In summary, effective communication relies not only on words but also on gestures, which are pivotal in both conveying and enhancing the overall message. In the context of HCI, gestures serve as a fundamental tool, bridging the gap between speakers and their audience. The methods