Robot Force/Position Control with Force and Visual Feedback

Vincenzo Lippiello, Bruno Siciliano, and Luigi Villani

Abstract—In this paper, a force/position control law for a robot manipulator in contact with a partially known environment is proposed. The environment is a rigid object of known geometry but of unknown and time-varying pose. An algorithm for online estimation of the object pose is adopted, based on visual data provided by a camera as well as on forces measured during the interaction with the environment. This information is used by a hybrid force/position control scheme. Simulation results are presented for the case of an industrial robot manipulator in contact with a planar surface.

I. INTRODUCTION

Vision and force are two complementary sensing capabilities that can be exploited in a synergistic way to enhance the autonomy of a robot manipulator during interaction with the environment. A robot may acquire global information about the environment using vision; on the other hand, the perception of the force applied to the end effector allows it to adjust its motion so that the local constraints imposed by the environment are satisfied.

In recent years, several approaches in which force and vision measurements are combined in the same feedback control loop have been proposed, such as hybrid visual/force control [1], shared and traded control [2], [3], and visual impedance control [4], [5], [6]. These algorithms improve on classical interaction control schemes [7], e.g., impedance control, hybrid force/position control, and parallel force/position control, where only force and joint position measurements are used.

The approach adopted in this work is based on the classical constrained hybrid force/position control [8], which requires exact knowledge of the geometry of the environment in the form of constraints imposed on the end-effector motion.
This hypothesis is relaxed here, in the sense that the geometry of the environment is assumed to be known, but its position and orientation with respect to the robot end effector are unknown. The relative pose is estimated online from all the available sensor data, i.e., visual, force, and joint position measurements, using the Extended Kalman Filter (EKF). The estimated pose is then exploited to compute the constraint equations in the hybrid force/position control law. The pose estimation algorithm is an extension of the visual tracking scheme proposed in [9] to the case in which force and joint position measurements are also used. Remarkably, the same algorithm can be adopted both in free space and during the interaction, simply by modifying the measurement set of the EKF.

PRISMA Lab, Dipartimento di Informatica e Sistemistica, Università di Napoli Federico II, Via Claudio 21, 80125 Napoli, ITALY. {lippiell,siciliano,lvillani}@unina.it

A simulation case study on an industrial robot is presented. The results confirm the effectiveness of the proposed approach.

II. MODELLING

Consider a robot in contact with an object, with a wrist force sensor and a camera mounted on the end effector (eye-in-hand) or fixed in the workspace (eye-to-hand). In this section, some modelling assumptions concerning the object, the robot and the camera are presented.

A. Object

The position and orientation of a frame O_o x_o y_o z_o attached to a rigid object, with respect to a base coordinate frame Oxyz, can be expressed in terms of the coordinate vector of the origin o_o = [x_o  y_o  z_o]^T and of the rotation matrix R_o(ϕ_o), where ϕ_o is a (p × 1) vector corresponding to a suitable parametrization of the orientation. If a minimal representation of the orientation is adopted, e.g., Euler angles, then p = 3, while p = 4 if unit quaternions are used.
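As an aside, the two orientation parametrizations mentioned above can be sketched numerically as follows. This is illustrative code, not from the paper; the function names and the ZYZ Euler convention are assumptions. It builds R_o(ϕ_o) from a minimal representation (ZYZ Euler angles, p = 3) and from a unit quaternion (p = 4):

```python
import numpy as np

def rot_zyz(phi, theta, psi):
    """Rotation matrix from ZYZ Euler angles (minimal representation, p = 3)."""
    def Rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    def Ry(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return Rz(phi) @ Ry(theta) @ Rz(psi)

def rot_quat(q):
    """Rotation matrix from a unit quaternion q = (eta, eps1, eps2, eps3), p = 4."""
    eta, e1, e2, e3 = np.asarray(q, dtype=float) / np.linalg.norm(q)  # normalize defensively
    return np.array([
        [1 - 2*(e2**2 + e3**2), 2*(e1*e2 - eta*e3),    2*(e1*e3 + eta*e2)],
        [2*(e1*e2 + eta*e3),    1 - 2*(e1**2 + e3**2), 2*(e2*e3 - eta*e1)],
        [2*(e1*e3 - eta*e2),    2*(e2*e3 + eta*e1),    1 - 2*(e1**2 + e2**2)],
    ])
```

The quaternion form is free of the representation singularities of the Euler angles, at the cost of the extra parameter and the unit-norm constraint.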
Hence, the (m × 1) vector x_o = [o_o^T  ϕ_o^T]^T defines a representation of the object pose with respect to the base frame in terms of m = 3 + p parameters. The homogeneous coordinate vector p̃ = [p^T  1]^T of a point P of the object with respect to the base frame can be computed as

    p̃ = H_o(x_o) ᵒp̃,

where ᵒp̃ is the homogeneous coordinate vector of P with respect to the object frame and H_o is the homogeneous transformation matrix representing the pose of the object frame referred to the base frame:

    H_o(x_o) = [ R_o(ϕ_o)   o_o
                 0^T        1   ],

where 0 is the (3 × 1) null vector.

It is assumed that the geometry of the object is known and that the interaction involves a portion of the external surface which satisfies a twice-differentiable scalar equation ϕ(ᵒp) = 0. The unit vector normal to the surface at the point ᵒp and pointing outwards can be computed as

    ᵒn(ᵒp) = (∂ϕ(ᵒp)/∂ᵒp)^T / ‖∂ϕ(ᵒp)/∂ᵒp‖,                    (1)

where ᵒn is expressed in the object frame.

Notice that the object pose x_o is assumed to be unknown and may change during the task execution. As an example, a compliant contact can be modelled by assuming that x_o changes during the interaction according to an elastic law. A further assumption is that the contact between the robot and the object is of point type and frictionless. Therefore,

Proceedings of the European Control Conference 2007, Kos, Greece, July 2-5, 2007. WeD02.6. ISBN: 978-960-89028-5-5. 3790
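The quantities above can be sketched numerically as follows. This is an illustrative snippet, not part of the paper: it assembles H_o(x_o) from R_o and o_o, maps an object-frame point to the base frame, and evaluates the unit normal of eq. (1) for an assumed planar surface ϕ(ᵒp) = ᵒp_z − d = 0 (the function names and numerical values are hypothetical):

```python
import numpy as np

def homogeneous(R_o, o_o):
    """H_o(x_o) = [[R_o, o_o], [0^T, 1]]: pose of the object frame w.r.t. the base frame."""
    H = np.eye(4)
    H[:3, :3] = R_o
    H[:3, 3] = o_o
    return H

def unit_normal(grad_phi):
    """Eq. (1): n = (dphi/dp)^T / ||dphi/dp||, outward unit normal to phi(p) = 0."""
    g = np.asarray(grad_phi, dtype=float)
    return g / np.linalg.norm(g)

# Plane phi(p) = p_z - d = 0 in the object frame: the gradient [0, 0, 1]
# is constant, so the normal is z_o everywhere on the surface.
n_obj = unit_normal([0.0, 0.0, 1.0])

R_o = np.eye(3)                                # identity orientation (illustrative)
o_o = np.array([0.5, -0.2, 0.1])               # illustrative object origin
H = homogeneous(R_o, o_o)
p_base = H @ np.array([1.0, 0.0, 0.0, 1.0])    # object-frame point -> base frame
n_base = R_o @ n_obj                           # normals rotate but do not translate
```

For a sphere ϕ(ᵒp) = ‖ᵒp − c‖² − r², the gradient 2(ᵒp − c) yields the radial unit normal, again via eq. (1).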