International Journal of Engineering and Technical Research (IJETR) ISSN: 2321-0869, Volume-2, Issue-4, April 2014

Abstract — This paper presents a generalized framework for image-based visual servoing of an articulated arm to be deployed inside a simulated reactor-vessel environment. Wall tiles of the vessel, idealized as rectangular grids on a surface, are to be inspected, and an attempt is made to replace damaged tiles during shutdown periods of the machine. The vision-sensing methodology of the proposed arm is explained. The arm carries a camera at the wrist (eye-in-hand), and the control action takes place at joint level. Only preliminary results are illustrated.

Index Terms — In-vessel inspection, Kinematics, Manipulator deployment, Serial robot, Visual servoing.

I. INTRODUCTION

In recent years, a wide variety of applications involving autonomous robot behaviour in unknown environments have been developed. New-generation robots adapt to changing conditions in real time. Such behaviour is especially necessary when facing difficult practical tasks such as search-and-rescue missions, reconnaissance, surveillance, and inspection in complex and dangerous surroundings. For example, remote-handling robots used for inspection and maintenance of in-vessel components of fusion devices require a robust non-contact sensing system. In such instances robot vision is crucial, since it mimics the human sense of sight and allows non-contact measurement of the environment. The control inputs for the robot motors are produced by processing image data (e.g., extraction of contours, features, corners and other visual primitives). The basic purpose of visual control is to control the pose of the robot's end-effector relative to a target object or a set of target features. Visual servoing, or visual servo control (VSC), combines techniques from image processing, computer vision and control theory.
Using such an approach, systems with low-cost sensors and actuators can be developed. In VSC, information from the camera is used within the control loop to position the tracking device as required. The vision data may be acquired either from a camera mounted directly on the manipulator (eye-in-hand) or from a camera at a fixed location in the scene (eye-to-hand). The features on the image plane are servo-controlled to their goal positions. Among vision-based control schemes, there are two traditional approaches [1]: (i) position-based VSC and (ii) image-based VSC. In a position-based system, control is performed in task space based on the three-dimensional information retrieved from the image. Here, the camera pose is estimated from visual information, and the control design is a classical state-space design. The quality of the response depends on the quality of the pose estimation, which makes the control sensitive to camera-calibration errors. In an image-based system, feedback is defined in terms of image features, and the controller is designed to drive the image features towards a goal configuration; thus it implicitly solves the Cartesian motion-planning problem. The approach is therefore relatively robust to camera-calibration and target-modelling errors. Image-based approaches basically exploit 2-D visual measurements, such as points or lines tracked in the image during task execution. A robot has several links and joints, each requiring a positioning reference relative to a predefined origin point.

Manuscript received April 20, 2014. Ms. Madhusmita Senapati, Department of Mechanical Engineering, National Institute of Technology, Rourkela, India, Ph: 9040635247. Dr. J. Srinivas, Department of Mechanical Engineering, National Institute of Technology, Rourkela, India, 769008, Ph: +91-661-2462503. Dr. V. Balakrishnan, Institute for Plasma Research, Gandhinagar, India, Phone: +91-7923-962183.
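The image-based scheme outlined above is commonly realized with the classical control law v = -λ L⁺ (s - s*), where s is the vector of tracked image features, s* their goal configuration, L the interaction matrix, and λ a positive gain. The following Python sketch illustrates this standard law for point features; it is a minimal illustration under assumed feature coordinates, depths and gain, not the implementation used by the authors.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one point feature at
    normalized image coordinates (x, y) with depth Z, relating feature
    velocity to the 6-D camera velocity screw [vx vy vz wx wy wz]."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, goals, depths, lam=0.5):
    """Camera velocity screw v = -lam * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goals)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

At the goal configuration the feature error vanishes, so the commanded camera velocity is zero; away from the goal, the pseudo-inverse of the stacked interaction matrix yields the least-squares camera motion that drives the features towards their goal positions.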
The vision system defines image coordinates based on where the camera points, without regard to a fixed reference origin. Pixel locations within an image frame must therefore be mapped to the corresponding robot coordinates for proper visual robotic guidance. Several works relating to vision-guided robotic systems have been reported in the literature. As early as 1985, Sanderson et al. [1] proposed an adaptive control approach for the nonlinear, time-varying relationship between robot pose and image features in image-based servoing; they described detailed simulations of image-based visual servoing for a variety of 3-degree-of-freedom manipulators. Seaden and Ang [2] worked on relative target-object (rigid-body) pose estimation for vision-based control of industrial robots and developed and implemented a closed-form target-pose estimation algorithm. Feddema [3] applied an explicit feature-space trajectory generator and closed-loop joint control to overcome problems due to the low visual sampling rate, presenting experimental work on image-based visual servoing of a 4-degree-of-freedom robot. Hashimoto et al. [4] also illustrated simulations comparing position-based and image-based approaches. Korayem et al. [5] designed and simulated vision-based control and performance tests for a 3-P robot using Visual C++ software; a camera installed on the end-effector of the robot was used to find a target, and feature-based visual servoing of the end-effector was used to reach it. Jara et al. [6] employed Java to develop an interactive tool for industrial robot simulations. Pinto et al. [7] proposed an eye-on-hand system in which cameras are replaced by a 2-D laser range finder attached to a robotic manipulator executing a predefined path to produce grayscale images of the workstation. Fang et al. [8] proposed augmented reality for programming a robot, supporting trajectory planning and transformation into task-optimized executable robot paths.
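The pixel-to-robot coordinate correspondence noted at the start of this section is typically established in two steps: back-projecting a pixel into the camera frame with the camera intrinsics, then transforming that point into the robot base frame with a hand-eye calibration result. The sketch below illustrates these standard steps; the intrinsic matrix K and the transform T_base_cam are assumed quantities for illustration, not values from this work.

```python
import numpy as np

def pixel_to_camera(u, v, depth, K):
    """Back-project pixel (u, v) at known depth Z into the camera frame
    using the intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    x_n, y_n, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return np.array([x_n * depth, y_n * depth, depth])

def camera_to_base(p_cam, T_base_cam):
    """Express a camera-frame point in the robot base frame using a 4x4
    homogeneous transform (e.g., obtained from hand-eye calibration)."""
    return (T_base_cam @ np.append(p_cam, 1.0))[:3]
```

A pixel at the principal point maps onto the optical axis, and the homogeneous transform then accounts for where the wrist-mounted camera sits relative to the robot base, which is what allows image measurements to drive joint-level control.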
Therefore, the impact of pose estimation in visual

Visual Servoing and Motion Control of a Robotic Device in Inspection and Replacement Tasks

Madhusmita Senapati, J. Srinivas, V. Balakrishnan