2.5D Visual Servoing with a Fixed Camera¹

J. Chen, A. Behal, D. Dawson, and Y. Fang
Department of Electrical & Computer Engineering, Clemson University, Clemson, SC 29634-0915
email: jianc, abehal, ddawson, yfang@ces.clemson.edu

¹ This work is supported in part by U.S. NSF Grant DMI-9457967, ONR Grant N00014-99-1-0589, a DOC Grant, and an ARO Automotive Center Grant.

Abstract

In this paper, we investigate the translational and rotational motion of the end-effector of a robot under visual feedback from a fixed camera. We achieve an exponential stability result for the regulation of the end-effector to a desired position and orientation. Specifically, by utilizing visual information from one fixed camera, we capture the motion of 4 points located on a fictitious plane attached to the end-effector of the robot, which allows us to formulate the control problem for the 6 DOF motion of the end-effector in Cartesian space. By assuming knowledge of the camera intrinsic parameters, we obtain the rotational motion of the end-effector through a homography decomposition, while utilizing the pixel motion of one of the four points to obtain the translation information. The stability of the controller is proven through a Lyapunov-based stability analysis.

1 Introduction

Robotic systems employ sensor-based control strategies for efficient operation as well as to obtain robustness against disturbances and/or modeling uncertainties/inaccuracies. Typically, robots utilize encoders to sense joint movements; velocity information is obtained through tachometers or by employing a backwards difference algorithm on the joint positions. This approach works well for robots with a finite number of degrees of freedom, since a Jacobian matrix can be applied to the joint velocities to obtain the robot end-effector position/velocity in the task-space. However, for hyper-redundant robots (i.e., robots with ideally infinite degrees of freedom), it becomes difficult to estimate the forward kinematics of the robot (i.e., the task-space coordinates of the end-effector are not easily obtainable).
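For the finite-degree-of-freedom case above, this velocity mapping is a standard computation. The following is a minimal NumPy sketch, not from the paper: the two-link planar arm, link lengths, sampling period, and encoder readings are all hypothetical, and it simply combines a backwards difference on sampled joint positions with the manipulator Jacobian to recover the end-effector velocity.

```python
import numpy as np

def planar_2link_jacobian(q, l1=0.5, l2=0.3):
    """Jacobian of a hypothetical 2-link planar arm with link lengths l1, l2."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

dt = 0.001                       # encoder sampling period [s] (illustrative)
q_prev = np.array([0.30, 0.60])  # joint angles at t - dt [rad]
q_curr = np.array([0.31, 0.62])  # joint angles at t [rad]

# Backwards difference on the encoder readings approximates joint velocity.
q_dot = (q_curr - q_prev) / dt

# The Jacobian maps joint velocities to the task-space end-effector velocity.
x_dot = planar_2link_jacobian(q_curr) @ q_dot
print(x_dot)
```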
For such robots, visual feedback of the end-effector's position and orientation in the task-space from a fixed camera offers a convenient alternative to this otherwise cumbersome estimation task. Moreover, any robot operating in an unstructured environment is more robustly controlled with a vision system that obtains position information for both the robot and the obstacles in its environment. Vision-based systems also have the additional advantage of allowing for non-contact measurements of the environment. Moreover, vision systems can be used for both on-line trajectory planning and feedforward/feedback control (i.e., visual servoing). An overview of the state-of-the-art in robot visual servoing can be found in [9, 13].

The results from vision-based research can broadly be classified into Image-Based Visual Servoing (IBVS) and Position-Based Visual Servoing (PBVS) techniques. As is well known, both of these approaches suffer from deficiencies. In the last few years, partitioned approaches have been developed that fuse 3D task-space information with 2D image-space information to overcome many of the shortcomings of the PBVS and IBVS approaches. Recently, Malis and Chaumette [1, 2, 11, 12] proposed various kinematic control strategies (coined 2.5D visual servo controllers) by exploiting the fact that the translation and rotation components can be decoupled through a homography. Specifically, information from the 3D task-space (obtained either through a given 3D model or, more interestingly, through a projective Euclidean reconstruction) is utilized to regulate the rotation error system, while information from the 2D image-space is utilized to control the translation error system. In [5], Deguchi proposed two algorithms to decouple the rotation and translation components using a homography and an epipolar condition. Specifically, Deguchi decomposed the translation and rotation components through a homography and noted that the 2.5D controller given in [2] can then be utilized; as an alternate method, Deguchi developed a kinematic controller that utilizes task-space information to regulate the translation error and image-space information to regulate the rotation error. More recently, Corke and Hutchinson [4] developed a new hybrid image-based visual servoing scheme that decouples the rotation and translation components about the z-axis from the remaining degrees of freedom, so as to address the problem of desirable image-space trajectories resulting in undesirable Cartesian trajectories.

One drawback of the aforementioned controllers is that they require a constant estimate of the depth information, which is then utilized in lieu of the exact value. That is, as stated in [12], an off-line learning stage is required to estimate the distance from the desired camera position to the reference plane. Motivated by the desire to compensate for this unknown depth information, an adaptive kinematic controller was developed in [3] to ensure uniformly ultimately bounded regulation.
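To make the homography-based decoupling concrete, below is a minimal numerical sketch, not the authors' implementation, using NumPy and OpenCV as assumed dependencies; the intrinsics K, the motion (R, t), the plane normal n, and the feature points are all synthetic, illustrative values. Four coplanar points are projected into the desired and current images, the projective homography is estimated from the 4 correspondences, and cv2.decomposeHomographyMat recovers candidate rotation/scaled-translation/normal triples, with the translation known only up to the distance d to the reference plane.

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics, assumed known as in the paper.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Synthetic ground-truth motion between the desired and current camera
# frames: rotation R, translation t, and a reference plane with normal n
# at distance d in the desired frame (illustrative values).
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.05, 0.02, 0.10])
n = np.array([0.0, 0.0, 1.0])
d = 1.0

# Four coplanar feature points (satisfying n . X = d) in the desired frame.
X = np.array([[-0.2, -0.15, 1.0], [0.2, -0.15, 1.0],
              [ 0.2,  0.15, 1.0], [-0.2, 0.15, 1.0]])

def project(K, X):
    """Pinhole projection of 3D points (rows of X) to pixel coordinates."""
    p = (K @ X.T).T
    return p[:, :2] / p[:, 2:3]

p_desired = project(K, X)             # pixels in the desired image
p_current = project(K, X @ R.T + t)   # pixels in the current image

# Estimate the projective homography from the 4 correspondences, then
# decompose it into up to four {R, t/d, n} candidate solutions.
H, _ = cv2.findHomography(p_desired, p_current)
num, Rs, ts, ns = cv2.decomposeHomographyMat(H, K)

print(f"{num} candidate solutions")
for R_i, t_i, n_i in zip(Rs, ts, ns):
    ang = np.degrees(np.arccos(np.clip((np.trace(R_i) - 1.0) / 2.0, -1, 1)))
    print(f"angle = {ang:5.2f} deg, t/d = {t_i.ravel()}, n = {n_i.ravel()}")
```

Among the up-to-four returned candidates, a positive-depth (visibility) test singles out the physically valid solution. In the 2.5D structure described above, the recovered rotation then drives the rotation error system, while image-space information, such as the pixel coordinates of a single feature point, drives the translation error system.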