International Journal of Mechanical & Mechatronics Engineering IJMME-IJENS Vol:15 No:06

An Image Based Visual Control Law for a Differential Drive Mobile Robot

Indrazno Siradjuddin, Indah Agustien Siradjuddin, and Supriatna Adhisuwignjo

Abstract—This paper presents the development of an Image Based Visual Servoing (IBVS) control law for differential drive mobile robot navigation using a single camera attached to the robot platform. Four point image features are used to compute the actuation control signals: the angular velocities of the right and left wheels. These control signals move the mobile robot to the desired position such that the error vector in the image space is minimised. The stability of the proposed IBVS control law is validated in the sense of Lyapunov stability. Simulations and real-time experiments have been carried out to verify the performance of the proposed control algorithm. The Visual Servoing Platform (ViSP) libraries were used to develop the simulation program. Real-time experiments were conducted on a differential drive mobile robot with a Beaglebone Black board as the main hardware controller.

Index Terms—Visual servoing, differential drive mobile robot, Beaglebone Black, robotics

I. INTRODUCTION

Dead reckoning is a popular control strategy for autonomous robot navigation [1], [2]. In the case of mobile robot control, dead reckoning relies on odometry sensors that measure the number of rotations of the robot wheels. Using this technique, the position and velocity of a mobile robot can be estimated. However, such a method is subject to estimation error due to wheel slip and discrepancies between the kinematic model and the real robot kinematics [3]. Alternatively, the use of a vision sensor is a promising way to improve robot navigation capabilities in both single-robot and collaborative tasks [4]; this technique is known as visual servoing.
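The dead-reckoning estimate described above can be sketched as a simple pose integration from incremental wheel travel. This is a minimal illustration of the general technique, not the paper's implementation; the function name, the midpoint integration choice, and the numeric values are assumptions for the example.

```python
import math

def odometry_update(x, y, theta, d_left, d_right, wheelbase):
    """Dead-reckoning pose update from incremental wheel travel.

    d_left, d_right: distances rolled by the left/right wheel since the
    last update (e.g. encoder ticks x wheel circumference / ticks per rev).
    wheelbase: distance between the two wheel centres.
    """
    d_center = (d_left + d_right) / 2.0        # forward displacement
    d_theta = (d_right - d_left) / wheelbase   # heading change
    # First-order midpoint integration of the unicycle model
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Equal wheel travel: the robot moves straight and keeps its heading
x, y, th = odometry_update(0.0, 0.0, 0.0, 0.1, 0.1, wheelbase=0.3)
```

Because the update only accumulates measured wheel rotations, any wheel slip enters the estimate directly and is never corrected, which is exactly the drift problem that motivates the vision-based approach.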
Visual servoing methods provide a reactive motion behavior using visual feedback information extracted from one or multiple cameras, with either direct or indirect computation of the visual feature error. Detailed reviews on visual servoing can be found in [5], [6]. In the direct method, the visual servoing control law output is computed directly from the visual features extracted from the camera image. This method is known as Image Based Visual Servoing (IBVS) [7], [8]. Typically, an IBVS scheme defines the reference signal in the image plane and maps the error vector in the image space to the robot actuation space. Usually, the target image features are extracted from the raw captured image to compress the salient information; thus the IBVS scheme is also known as a feature-based or 2D visual servoing scheme. One of the problems with the IBVS scheme is that the depth is difficult to estimate. In the indirect method, the extracted visual features are transformed using a pose estimation method to obtain the relative pose between the camera and the target. The visual servoing control law output is then computed from the 3D pose error between the camera and the target; such a system is known as Position Based Visual Servoing (PBVS) [9]. A PBVS scheme can therefore overcome the depth estimation issue of IBVS. Recently, a detailed comparison of the two basic visual servoing schemes, in the context of stability and robustness with respect to system modelling error, was presented in [10].

Indrazno Siradjuddin, PhD, Electrical Engineering Department, Malang State Polytechnic, Indonesia, indrazno@polinema.ac.id. Dr. Indah Agustien Siradjuddin, Informatics Engineering Department, Trunojoyo University, Indonesia, indah.agustien@if.trunojoyo.ac.id. Supriatna Adhisuwignjo, MT, Electrical Engineering Department, Malang State Polytechnic, Indonesia, supriatna@polinema.ac.id.
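The image-space-to-actuation mapping that characterises IBVS can be sketched with the classical point-feature control law v = -λ L⁺ e, where L is the interaction matrix and e the feature error. This is the textbook formulation, not the paper's specific derivation for the differential drive robot; the point depths Z (the quantity the text notes is hard to estimate) are assumed known here.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix of a normalised image point (x, y)
    at depth Z, relating feature motion to the camera velocity screw."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw v = -gain * pinv(L) @ e for stacked points."""
    e = (features - desired).reshape(-1)   # image-space error vector
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ e

# Four point features, slightly offset from the goal configuration
s = np.array([[0.11, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
s_star = np.array([[0.1, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)
```

With four points the stacked L is 8x6, so the pseudo-inverse gives a least-squares velocity; when the features reach their desired positions the error, and hence the commanded velocity, vanishes.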
In terms of camera configuration, both basic visual servoing schemes can be applied in eye-in-hand or eye-to-hand configurations. In the eye-in-hand configuration, one or multiple cameras are placed on the robot platform observing the target [11]. In contrast, in the eye-to-hand configuration, one or multiple cameras are placed permanently such that the movement of the robot and the target can be observed [12]. With respect to the mobile robot navigation field of study, many articles have focused on the design of PBVS-like methods [4], [13], [14]. This paper presents the analytical development of an Image Based Visual Servoing method for differential drive mobile robot navigation. The developed control law is implemented on a Beaglebone Black embedded system. The rest of the paper is organised as follows. Section 2 discusses the development of the proposed IBVS control algorithm, Section 3 presents the stability analysis, followed by a discussion of the IBVS robustness to camera calibration error in Section 4. Section 5 shows the experimental results, and Section 6 concludes the paper.

II. IBVS FOR A DIFFERENTIAL DRIVE MOBILE ROBOT

A. Differential Drive Mobile Robot Kinematics

The most popular type of indoor mobile robot is the differential drive system. This system uses two main wheels, each connected to its own motor. A third wheel rolls passively and balances the robot structure. To develop a simple model based on the differential drive constraints, only two parameters need to be measured. The first parameter is the distance between the centres of the left wheel and the right wheel, denoted as L. The second parameter is the wheel radius, r. The instantaneous changes

156306-7979-IJMME-IJENS © December 2015 IJENS
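From the two parameters just introduced, the wheelbase L and wheel radius r, the standard differential drive kinematic model maps the wheel angular velocities to the platform's body twist. The sketch below states this well-known model; the function name and numeric values are illustrative, not taken from the paper.

```python
def diff_drive_twist(omega_r, omega_l, r, L):
    """Body twist of a differential drive robot from its wheel rates.

    omega_r, omega_l: angular velocities of the right/left wheels (rad/s)
    r: wheel radius; L: distance between the two wheel centres.
    Returns (v, w): forward velocity and yaw rate of the platform.
    """
    v = r * (omega_r + omega_l) / 2.0   # mean rim speed drives the platform
    w = r * (omega_r - omega_l) / L     # speed difference turns the platform
    return v, w

# Equal wheel speeds produce pure translation (no rotation)
v, w = diff_drive_twist(2.0, 2.0, r=0.05, L=0.3)
```

Inverting these two relations yields the wheel angular velocities from a desired (v, w), which is the direction the proposed IBVS control law needs: mapping the commanded robot motion back to right and left wheel actuation signals.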