Planar Image Based Visual Servoing as a Navigation Problem *

Noah J. Cowan and Daniel E. Koditschek
Electrical Engineering and Computer Science
The University of Michigan
Ann Arbor, MI 48105-2110
E-mail: {ncowan, kod}@eecs.umich.edu

Abstract

We describe a hybrid planar image-based servo algorithm which, for a simplified planar convex rigid body, converges to a static goal for all initial conditions within the workspace of the camera. This is achieved using the sequential composition of a palette of continuous image-based controllers. Each sub-controller, based on a specified set of collinear feature points, is shown to converge for all initial configurations in which the feature points are visible. Furthermore, the controller guarantees that the body will maintain a "visible" orientation, i.e. the feature points will always be in view of the camera. This is achieved by introducing a change of coordinates from SE(2) to an image plane measurement of three points, and imposing a navigation function in that coordinate system. Our intuition suggests that appropriately generalized versions of these ideas may be extended to SE(3).

1 Introduction

Visual servoing describes a broad class of problems in which a robot is positioned with respect to a target using computer vision as the primary feedback sensor [4, 6, 7, 9, 14]. There are traditionally two approaches to visual servoing: 2D image-based (IB) and 3D position-based (PB). In image-based visual servoing the control objective is to minimize the perceived error (i.e. image plane error), whereas the objective in position-based visual servoing is to minimize the task space error. It is widely accepted that IB servoing is more robust with respect to calibration uncertainty than PB servoing, though to our knowledge this claim has never been formally justified.
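The distinction between the two error spaces can be made concrete with a small sketch. The function names, the planar pose parameterization (x, y, theta), and the numeric values below are illustrative assumptions, not quantities from this paper; the point is only that IB servoing regulates a pixel-space residual while PB servoing regulates a reconstructed task-space residual.

```python
import numpy as np

def image_error(y, y_goal):
    """Image-based (IB) error: difference between the current and goal
    image-plane feature coordinates (measured directly, in pixels)."""
    return np.asarray(y, dtype=float) - np.asarray(y_goal, dtype=float)

def task_error(pose, pose_goal):
    """Position-based (PB) error: difference between an estimated
    task-space pose (x, y, theta) and the goal pose."""
    e = np.asarray(pose, dtype=float) - np.asarray(pose_goal, dtype=float)
    # wrap the orientation component into (-pi, pi]
    e[2] = (e[2] + np.pi) % (2 * np.pi) - np.pi
    return e

# Illustrative values: the same positioning task, expressed in two spaces.
print(image_error([120.0, 85.0], [100.0, 80.0]))   # pixel-space residual
print(task_error([0.5, 0.2, 3.0], [0.4, 0.2, -3.0]))  # task-space residual
```

Note that the PB residual requires a pose estimate (hence a camera model and target geometry), whereas the IB residual is computed from raw image measurements; this is the usual argument for the calibration robustness of IB servoing.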
The disadvantage of IB servoing is that convergence is generally only guaranteed locally [18, 19] (with the exception of a few algorithms for point positioning and estimation problems [9, 17]). Conversely, one easily specifies an essentially globally convergent controller using task space error coordinates, but when using PB servoing the controller must also avoid occlusions or risk losing sight of the features necessary for pose estimation.

Figure 1: The objective is to drive the rigid body so that each feature aligns with the respective feature on a goal image, while avoiding collisions with ±ζ_E. (The figure depicts the camera frame (x_c, y_c), the body frame (x_b, y_b), the endpoints ±ζ_E, the feature points z_1, z_2, z_3, their image projections π_1, π_2, π_3, and the angle α.)

The first portion of the algorithm requires three features (depicted by +, × and ◦) visible on the body and on the goal, a condition not always satisfied. The "simple-minded" workaround is to "hallucinate" the occluded feature for the controller. We prefer to work only with the visible features available at each position, as depicted in Figure 2.

* This work was supported in part by the NSF under grant IRI-9510673.
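To make the image-plane measurement concrete, the following is a minimal sketch of how three collinear body-fixed feature points project onto a one-dimensional image under a planar pinhole camera. The function name, the camera geometry (camera at the origin, optical axis along +y, unit focal length), and the scalar edge offsets are all assumptions introduced for illustration, not the notation or model of this paper.

```python
import numpy as np

def project_features(pose, offsets, focal=1.0):
    """Project collinear body-fixed feature points onto a 1-D image
    (planar pinhole sketch: camera at the origin, looking along +y).

    pose    : (x, y, theta) of the body expressed in camera coordinates
    offsets : scalar offsets of each feature along the body's feature edge
    Returns the image coordinate of each feature.
    """
    x, y, theta = pose
    img = []
    for z in offsets:
        # feature position in the camera frame
        px = x + z * np.cos(theta)
        py = y + z * np.sin(theta)
        if py <= 0:
            # the feature is behind the camera: the "visibility" condition fails
            raise ValueError("feature not visible to the camera")
        img.append(focal * px / py)  # planar perspective projection
    return np.array(img)

# Three collinear features at offsets -0.1, 0, 0.1 along the edge,
# body centered 2 units in front of the camera, edge parallel to the image.
print(project_features((0.0, 2.0, 0.0), [-0.1, 0.0, 0.1]))
```

A change of coordinates of this kind, from the body pose in SE(2) to the image coordinates of three such points, is what allows a navigation function to be imposed directly on image measurements while the visibility condition is enforced.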