International Conference on Competitive Manufacturing

Application of a Camera for Measuring Robot Position Accuracy

T. V. Light¹, I. A. Gorlach¹, A. Schönberg², R. Schmitt²
¹ Department of Mechatronics, Nelson Mandela Metropolitan University, South Africa
² Laboratory of Machine Tools and Production Engineering WZL, RWTH Aachen University, Germany

Abstract
Conventional calibration of industrial robots is carried out with special measurement equipment, which is expensive and requires skilled personnel to operate. This research explores a methodology which utilises a basic camera and a light source, attached to a robot end-effector, to measure robot position accuracy. In the experiments, robot positions were measured simultaneously with a camera, a 6-DOF measuring arm and an indoor GPS. The data obtained from the three measurement systems were then compared with the robot controller data, the robot having been calibrated prior to the experiments. The results indicate that the accuracy obtained with the camera method is within the range of the robot's accuracy. This methodology can therefore be used for basic robot calibration, its main advantages being user-friendliness and low cost.

Keywords
Robotics, Robot Calibration, Vision

1 INTRODUCTION
Industrial robots require regular calibration because they are subject to mechanical wear and environmental effects. According to Elatta et al. [1], calibration enhances the positioning accuracy of robots through software adjustments. Calibration determines the kinematic and dynamic parameters of a robot, which are then used for error compensation. Kinematic parameters describe the actual robot geometry and are used in inverse kinematic calculations to obtain the relative position and orientation of links and joints, while dynamic parameters describe the inertial behaviour of the robot and are used in its dynamic control. According to Judd and Knasinski [2], 95% of robot positioning inaccuracy arises from inaccuracy in the kinematic model description, i.e. geometric errors. Gong et al. [3] indicated that non-geometric errors may also play a significant role. Young and Pickin [4] and Alici and Shirinzadeh [5] showed that a wide variety of factors reduce accuracy, including hardware and software limitations, manufacturing tolerances, payload effects, compliance, elasticity and thermal effects.

During calibration, a robot is moved through a number of positions, which are accurately measured using external measuring systems. The measurements are then used to adjust the robot model parameters by numerical optimization [1]; a minimal sketch of this fitting step is given below. Measurement devices for robot calibration include laser trackers, theodolites, visual sensors and the recently developed indoor Global Positioning System (iGPS), presented by Maisano et al. [6] and Schmitt et al. [7]. These measurement systems vary in accuracy, user-friendliness and cost. According to Elatta et al. [1], conventional methods share a number of drawbacks, the chief one being the requirement of skilled personnel to perform the calibration.

In this research, an alternative method of determining a robot's position accuracy is explored. The approach is based on the application of a camera and a light source. Cameras are widely used in tracking and scanning systems that employ markers and laser beams. Tracking systems offer a wide range of capabilities for gaining spatiotemporal information about objects in the field of view of the cameras.
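To make the parameter-fitting step concrete, the following is a minimal sketch of calibration by nonlinear least squares. It assumes a simplified planar two-link arm; the forward model and its parameter set (link lengths and joint offsets) are illustrative only, not the kinematic model used in this work.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative planar two-link model: the parameters are the link
# lengths (l1, l2) and the joint-angle offsets (o1, o2). A real
# calibration would fit the full kinematic model (e.g. DH parameters).
def forward(params, joints):
    l1, l2, o1, o2 = params
    q1 = joints[:, 0] + o1
    q2 = joints[:, 1] + o2
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.column_stack([x, y])

def residuals(params, joints, measured):
    # Difference between modelled and externally measured tool positions
    return (forward(params, joints) - measured).ravel()

# Synthetic "measurements": the true parameters deviate from nominal
true_params = np.array([0.502, 0.398, 0.010, -0.020])
nominal = np.array([0.500, 0.400, 0.000, 0.000])
joints = np.random.default_rng(0).uniform(-1.5, 1.5, (30, 2))
measured = forward(true_params, joints)

fit = least_squares(residuals, nominal, args=(joints, measured))
print(fit.x)  # recovers the deviated parameters
```

In practice, the residuals would compare the poses measured by the external system against the poses predicted by the robot model from the commanded joint angles, but the structure of the optimization is the same.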
The measuring principle is based on intersecting rays and/or known geometry points in space, similar to the principle used, for example, in the iGPS. The Laboratory of Machine Tools and Production Engineering (WZL) of RWTH Aachen University uses an extended version of the ARToolkitPlus tracker, reported by Wagner and Schmalstieg [8] and known in the field of augmented reality, to evaluate tracking applications in combination with machine-vision tasks. The optical tracker uses a number of cameras to track whole scenes at low frame rates (6 fps), or a region of interest with rapid updates (370 fps). Ray intersections are then used to calculate the positions of the receivers, which are marked by patterns identified by the tracker (Fig. 1); a numerical sketch of this triangulation step is given below. The data is refined with the cornerSubPix function of OpenCV to determine the intrinsic parameters of the cameras in the bundling phase. The marker corners are established using the gradient-based approach of OpenCV, as reported by Lamers and Dornieden [9] (Fig. 2).
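As a minimal numerical sketch of the ray-intersection step described above, the receiver position can be estimated as the point minimising the sum of squared perpendicular distances to the rays cast from two or more calibrated cameras. The function name and example values below are illustrative only and are not part of the tracker's actual interface.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection of N rays in 3-D space.

    Each ray is origins[i] + t * directions[i]. The returned point
    minimises the sum of squared perpendicular distances to all rays,
    which is the standard closed-form multi-camera triangulation.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)       # unit ray direction
        P = np.eye(3) - np.outer(d, d)  # projector perpendicular to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two camera rays that converge near the point (1, 1, 1)
origins = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
directions = [np.array([1.0, 1.0, 1.0]), np.array([-1.0, 1.0, 1.0])]
print(intersect_rays(origins, directions))  # approx. [1. 1. 1.]
```

An application of cameras for tracking robots with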