Calibration free path planning for visual servoing yielding straight line behaviour both in image and work space

Florian Schramm and Alain Micaelli
CEA, LIST - DTSI / SCRI
18 rt du Panorama, 92265 Fontenay-aux-Roses, France
micaelli@cea.fr

Guillaume Morel
Laboratoire de Robotique de Paris (CNRS), Université Paris VI
18 rt du Panorama, 92265 Fontenay-aux-Roses, France
{schramm,morel}@robot.jussieu.fr

Abstract— Trajectory planning for eye-in-hand visual servoing is usually performed either in the Euclidean work space of the robot or in the two-dimensional image space. However, planning in Euclidean space may lead to very inappropriate trajectories in image space, and vice versa. These difficulties are due to the perspective transformation of the camera, the loss of one dimension caused by the projection onto the image plane, and the fact that only a rough approximation of the camera parameters is available in practice. Hence, this paper proposes a planning scheme for image trajectories that ensures straight line behaviour both in image space and in work space, i.e. a single but arbitrarily chosen point in the image plane follows a straight line, as does the camera optical center in work space. In this way, trajectories become very compact and most of the above mentioned problems are avoided in a natural way. The only a priori information required by the algorithm is a set of matched image points (in pixels) from a current and a desired image, together with a depth value set for at least one position.

Index Terms— Visual Servoing, Path planning, Image based, Calibration free

* This work is partially supported by the Research Training Network FreeSub of the European Commission under contract HPRN-CT-2000-00032 to M. Schramm.

INTRODUCTION

Eye-in-hand image based visual servoing (IBVS) is a well known technique that allows for direct control of the motion of an object observed by a camera mounted at the robot end-effector [1], [2]. With this approach, a desired image is learnt once and for all when the camera is placed at a desired location with respect to the observed object. Then, from any initial location where the object can be seen by the camera, the controller can provide convergence towards the learnt desired image.

A number of theoretical and experimental studies have been published to evaluate the properties of this classical approach. In [3], it is shown that when the error between the initial and final configurations is large, IBVS may produce unnecessarily large motions of the robot, which can lead to failure. This is due to the fact that the camera's 3D displacement is not explicitly controlled. Furthermore, IBVS is not globally stable and can be proven stable only in a (yet unknown) neighborhood of the desired location [2], [4]. This has motivated the development of image based path planning algorithms, aimed at interpolating a path between an initial image and the final desired image. Such an interpolated path can then be fed to the real time controller, which allows the system to work with small errors.

A major difficulty in image based path planning lies in the fact that the planned path has to be feasible, i.e. it must correspond to a Euclidean path for the camera. However, it is desirable to avoid an explicit computation of this 3D path, since this would require explicit 3D reconstruction, which relies on knowledge of the camera model and/or the object model. Rather, we are here interested in robust path planning, i.e. path planning that does not use any camera model, nor any object model, while guaranteeing that the planned path corresponds to a Euclidean path.
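For concreteness, the classical IBVS control law referred to above (see, e.g., [1], [2]) typically takes the following form; this is the standard formulation from the literature, recalled here only to fix ideas, and is not the planning scheme proposed in this paper:

\[
v_c = -\lambda \, \widehat{L}_s^{+} \, (s - s^*),
\]

where $s$ and $s^*$ denote the current and desired image feature vectors, $\widehat{L}_s^{+}$ is the pseudo-inverse of an estimate of the interaction matrix, $\lambda > 0$ is a gain, and $v_c$ is the commanded camera velocity screw. Since only an estimate of the interaction matrix is available, convergence can be guaranteed only locally, which is precisely why feeding the controller with a planned image path, keeping the error $s - s^*$ small at all times, is attractive.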
Several methods can be found in the literature for IBVS planning. [5] uses image based control with a potential field method that takes into account manipulator joint limits and visibility restrictions. In [6], optimal control techniques are employed to design image motion compatible with joint limits and ensuring visibility, the cost function representing a time integral of energy. Similarly, [7] uses a navigation function, i.e. a special potential field function whose minimum is unique by construction. All these approaches require a calibrated camera and yield only numerical solutions based on iterative methods, with their inherent risk of convergence problems. Alternatively, the approaches presented in [8] and [9] are interesting because they are global (and thus proven to converge) and, most importantly, do not require any camera model. They decompose the projective motion parameters analytically in order to allow for a reparameterization of the involved Euclidean displacement, without explicitly computing any Euclidean entity. The projective displacement parameters can be reconstructed from the learnt initial and final images. However, further precautions must be taken in order to ensure visibility, which, particularly in the case of large rotations, may lead to heavy controller effort. To overcome this disadvantage, [10] proposed a planning scheme based on the same decomposition technique, but now ensuring visibility. Unfortunately, this solution occasionally introduces large geodesic displacements of unknown shape for the manipulator tip.
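To illustrate the kind of projective entity such uncalibrated approaches rely on, consider the standard relation for matched points belonging to a planar part of the object; this is a textbook relation given only as an illustration, and is not claimed to be the exact parameterization used in [8], [9]:

\[
p^* \sim G \, p, \qquad G \sim K \left( R + \frac{t \, n^{\top}}{d} \right) K^{-1},
\]

where $p$ and $p^*$ are the homogeneous pixel coordinates of a matched point in the current and desired images, $K$ is the (unknown) intrinsic camera matrix, $(R, t)$ is the Euclidean displacement between the two camera locations, and $(n, d)$ are the normal and distance of the object plane in the current camera frame. The projective homography $G$ can be estimated directly from pixel correspondences, so that a displacement can be parameterized and interpolated in projective terms without ever reconstructing $K$, $R$ or $t$ explicitly.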