OBSTACLE AVOIDANCE FOR A ROBOT MANIPULATOR BASED ON VISUAL FEEDBACK

Fernando J. Mendiburu*, Marcos R.A. Morais*, Antonio M.N. Lima*

*Graduate Program in Electrical Engineering - PPgEE, Department of Electrical Engineering - DEE, Universidade Federal de Campina Grande - UFCG, Rua Aprígio Veloso, 882, 58429-900, Bairro Universitário, Campina Grande, Paraíba, Brazil

Emails: fernando.mendiburu@ee.ufcg.edu.br, morais@dee.ufcg.edu.br, amnlima@dee.ufcg.edu.br

Abstract— In this article, a visual feedback control system with object occlusion handling is presented. The proposed system integrates a trajectory planning method into the controller of a five-degrees-of-freedom manipulator, aiming to simplify object handling and obstacle avoidance. The system targets the intelligent handling of several parts, generating obstacle-free paths whenever possible. The workspace information is provided by a redundant vision system: an RGB camera placed over the handler actuator and two fixed RGB-D sensors.

Keywords— Intelligent Automation, manipulator, path planning, RGB-D sensors.

1 Introduction

The integration of a trajectory planner for a robotic manipulator with a stereo vision system capable of finding the poses of obstacles and target objects is very important for the safe manipulation of objects. The implementation of a robotic vision system is a theoretically and practically challenging problem, and some applications of computational vision systems can be found in the literature (Mendiburu et al., 2013). The research is relevant from social, academic, and industrial perspectives: the development of automated systems improves product quality, fosters mass production, and reduces production time and cost. Quality improvement is provided by the high accuracy and repeatability of the robots. The use of handlers enables more efficient automation, increasing the reliability of the processes.
In the proposed system we use the Pegasus 880-RA2-1-B as the manipulator robot. The redundant vision system is composed of an IR camera, namely the IR-Syntek STK1160, placed at the arm actuator, and two RGB-D devices (Kinect sensors) statically positioned external to the manipulator and perpendicular to each other to extend the platform's field of view. The workspace is shown in Fig. 1, where the vision system integrated into the robotic platform can be seen: the RGB camera and the frontal and lateral RGB-D sensors. The system is robust, as an obstacle-free trajectory can be determined even when there are occlusions in the workspace.

Automatic workspace determination with computational vision alone is a complex problem due to variations in scene illumination. The use of an RGB-D sensor such as the Kinect simplifies the solution by capturing the scene depth with an infrared pair (CMOS camera/IR projector). The system shows robustness to changes in illumination that could otherwise degrade its performance. The emergence of RGB-D sensors like the Kinect has promoted an evolution in vision systems, mainly due to their low cost and remarkable technical features (Rakprayoon et al., 2011; Macknojia et al., 2013).

Figure 1: Workspace including the robot and vision system.

After workspace determination, a path for task execution has to be established. For the routing, we used a PRM-based algorithm called SBL (Sanchez and Latombe, 2001), with random exploration and fast convergence, which has already been used in several robotic manipulators, such as in (Liu et al., 2009) and (Guernane and Belhocine, 2005).

The article is organized as follows: Section 2 describes the platform used in the experimental tests. Section 3 discusses how the platform has been configured. Section 4 introduces the control strategy that exploits visual feedback for obstacle avoidance under object occlusion in the scene. Section 5 presents and discusses the results, which demonstrate the feasibility of the proposed solution to deal with object occlusion and to integrate visual-feedback trajectory planning for object handling and obstacle avoidance. Section 6 draws the conclusions.
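The depth-based workspace determination discussed above rests on the standard pinhole back-projection of the Kinect depth image into 3-D camera-frame points. The following is a minimal sketch of that step; the intrinsic parameters are nominal, uncalibrated Kinect v1 values assumed for illustration only, not the calibrated parameters of the paper's platform.

```python
import numpy as np

# Nominal Kinect v1 intrinsics (assumed values for illustration,
# not calibrated parameters from the platform described here).
FX, FY = 525.0, 525.0   # focal lengths, in pixels
CX, CY = 319.5, 239.5   # principal point, in pixels

def depth_to_points(depth_m):
    """Back-project a depth image (in metres) to 3-D camera-frame points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop invalid (zero-depth) pixels
```

The resulting point cloud, expressed in each sensor's frame, is what the two fixed Kinects contribute to the workspace model; fusing the frontal and lateral clouds requires the extrinsic calibration between the two sensors.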
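SBL itself is a bidirectional, lazy single-query member of the PRM family, operating in the manipulator's joint space. As a rough illustration of the PRM idea it builds on (sample random configurations, connect nearby collision-free ones, search the resulting roadmap), the sketch below plans in a 2-D workspace with one circular obstacle; the geometry, sampling parameters, and function names are assumptions for illustration, not the implementation used in the paper.

```python
import math
import random
from collections import deque

# Illustrative obstacle map: (centre, radius) pairs in the unit square.
OBSTACLES = [((0.5, 0.5), 0.2)]

def collision_free(p):
    """A point is free if it lies outside every circular obstacle."""
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def edge_free(p, q, step=0.02):
    """Check a straight edge by sampling intermediate points."""
    n = max(1, int(math.dist(p, q) / step))
    return all(collision_free((p[0] + (q[0] - p[0]) * t / n,
                               p[1] + (q[1] - p[1]) * t / n))
               for t in range(n + 1))

def prm_path(start, goal, n_samples=300, radius=0.25, seed=0):
    """Sample a roadmap and search it breadth-first for a free path."""
    rng = random.Random(seed)
    nodes = [start, goal] + [(rng.random(), rng.random())
                             for _ in range(n_samples)]
    nodes = [p for p in nodes if collision_free(p)]
    parent = {start: None}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        if p == goal:
            break
        for q in nodes:
            if q not in parent and math.dist(p, q) < radius and edge_free(p, q):
                parent[q] = p
                queue.append(q)
    if goal not in parent:
        return None          # roadmap did not connect start and goal
    path, p = [], goal
    while p is not None:
        path.append(p)
        p = parent[p]
    return path[::-1]
```

SBL improves on this basic scheme by growing two trees (from start and goal configurations) and deferring edge collision checks until a candidate path is found, which is what gives it the fast convergence mentioned above.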