Journal of Innovative Image Processing (JIIP) (2021), Vol. 03, No. 01, Pages: 1-6
https://www.irojournals.com/iroiip/
DOI: https://doi.org/10.36548/jiip.2021.1.001
ISSN: 2582-4252 (online)
Submitted: 11.12.2020  Revised: 20.01.2021  Accepted: 09.02.2021  Published: 22.02.2021

3D Image Processing using Machine Learning based Input Processing for Man-Machine Interaction

Dr. Akey Sungheetha, Data Science SIG member, Computer Science and Engineering, School of Electrical Engineering and Computing, Adama Science and Technology University, Adama, Nazret, Ethiopia.

Dr. Rajesh Sharma R, Image Processing SIG member, Computer Science and Engineering, School of Electrical Engineering and Computing, Adama Science and Technology University, Adama, Nazret, Ethiopia.

Abstract: In various real-time applications, human-robot interaction (HRI) provides several assisted services. Robotic systems identify objects through digital visualization by converging a three-dimensional (3D) image into plane-based projections. Recognition errors occur when the projections in the various planes are misidentified during this convergence process. Such misidentifications can be reduced by an input processing scheme based on the projection technique. The conjoining indices are identified by projecting the input image in all possible dimensions and visualizing it. A machine learning algorithm improves the processing speed and the accuracy of recognition. Labeled analysis segregates intersections that lack conjoined indices. Errors are prevented by identifying non-correlating indices in the projections across the possible dimensions. The inputs are correlated with related inputs stored under labels, thereby preventing mismatches of the indices and deviations in the planes. The proposed model is verified using error, complexity, time, and recognition-ratio metrics.
Keywords: Space projection, digital visualization, human-robot interaction, dimension modeling, 3D images.

1. Introduction

Human-robot interactions are used to access any object within a specified time period without human involvement. Virtual reality objects are defined using three-dimensional (3D) image sequences in the Virtual Reality Modeling Language (VRML) [1]. Virtual objects and their scenes may be accessed using the VRML programming scheme. Based on the object, a virtual robot is generated to support the workspace environment. In web settings, the user may generate a series of virtual images with the help of VRML, which offers connectivity with a seemingly 3D scene through spinning, turning, viewing, and similar interactions [2]. Currently, conventional augmented reality devices use multipurpose environments or virtual reality headsets to create compelling images, sounds, and emotions by simulating the actual presence of a person in a virtual world. Augmented reality devices make communication possible between the user and the virtual functions and elements of the simulated environment. Quantitative data may be derived from the sensed object, and the location of the object may be observed in this area [3].

The object is acquired from the surroundings in a 3D view for the visualization process. Conversion processes are implemented because the robot may not recognize the object directly in its 3D view. The movement and position of the object in space may be detected using a robot configuration space. The changes in position are processed and determined continuously by sequential monitoring of the objects [4]. Based on its previous history, the sensor detects the nature of the object and the object is visualized. The position of the object in space is derived by implementing matching schemes that locate the object [5].
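The convergence of a 3D view into plane-based projections for matching, as outlined above, can be illustrated with a minimal sketch. The voxel-grid representation, the orthogonal-plane projection, and the overlap-based match score below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_to_planes(voxels):
    """Converge a 3D occupancy grid into its three plane-based projections."""
    return {
        "xy": voxels.any(axis=2).astype(np.uint8),  # collapse the Z axis
        "xz": voxels.any(axis=1).astype(np.uint8),  # collapse the Y axis
        "yz": voxels.any(axis=0).astype(np.uint8),  # collapse the X axis
    }

def match_score(proj, template):
    """Simple correlation index: overlap between a projection and a
    stored labeled template (intersection over union of occupied cells)."""
    inter = np.logical_and(proj, template).sum()
    union = np.logical_or(proj, template).sum()
    return inter / union if union else 0.0

# Toy object: a 2x2x2 cube placed in an 8x8x8 grid.
grid = np.zeros((8, 8, 8), dtype=np.uint8)
grid[3:5, 3:5, 3:5] = 1
planes = project_to_planes(grid)
# Each projection of the cube is a 2x2 square; a projection that matches
# its own stored template scores 1.0, an empty template scores 0.0.
assert planes["xy"].sum() == 4
assert match_score(planes["xy"], planes["xy"]) == 1.0
```

A real system would replace the toy occupancy grid with sensed depth data and compare each projection against labeled templates in every plane, so that an object misidentified in one plane can still be resolved by the others.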