An Application of Constraint-Based Task Specification and Estimation for Sensor-Based Robot Systems

Tinne De Laet, Wilm Decré, Johan Rutgeerts, Herman Bruyninckx and Joris De Schutter

All authors are with the Department of Mechanical Engineering, Katholieke Universiteit Leuven, Belgium. Corresponding author: Tinne De Laet (tinne.delaet@mech.kuleuven.be). All authors gratefully acknowledge the financial support by K.U.Leuven's Concerted Research Action GOA/05/10. Tinne De Laet is a Research Assistant of the Research Foundation - Flanders (FWO-Vlaanderen). Wilm Decré's research is funded by a Ph.D. grant of the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen).

Abstract— This paper shows the application of a systematic approach for constraint-based task specification for sensor-based robot systems [1] to a laser tracing example. This approach integrates both task specification and estimation of geometric uncertainty in a unified framework. The framework consists of an application-independent control and estimation scheme. An automatic derivation of controller and estimator equations is achieved, based on a geometric task model that is obtained using a systematic task modeling procedure. The paper details the systematic modeling procedure for the laser tracing task and elaborates on the task-specific choice of two types of task coordinates: feature coordinates, defined with respect to object and feature frames, which facilitate the task specification, and uncertainty coordinates to model geometric uncertainty. Furthermore, the control and estimation scheme for this specific task is studied. Simulation and real-world experimental results are presented for the laser tracing example.

Fig. 1. The object and feature frames for simultaneous laser tracing on a plane and a barrel.

I. INTRODUCTION

Robotic tasks of limited complexity, such as simple positioning tasks, trajectory following or pick-and-place applications in well-structured environments, are straightforward to program. For these kinds of tasks extensive programming support is available, as the specification primitives for these tasks are present in current commercial robot control software. While these robot capabilities already fulfill some industrial needs, research focuses on specification and execution of much more complex tasks. The goal of our recent research is to open up new robot applications in industrial as well as domestic and service environments. Examples of complex tasks include sensor-based navigation and 3D manipulation in partially or completely unknown environments, using redundant robotic systems such as mobile manipulator arms, cooperating robots, robotic hands or humanoid robots, and using multiple sensors such as vision, force, torque, tactile and distance sensors. Little programming support is available for these kinds of tasks. As a result, the task programmer has to rely on extensive knowledge in multiple fields such as spatial kinematics, 3D modeling of objects, geometric uncertainty and sensor systems, dynamics and control, estimation, as well as resolution of redundancy and of conflicting constraints. The goal of our recent research is to fill this gap. We want to develop programming support for the implementation of complex, sensor-based robotic tasks in the presence of geometric uncertainty.
The foundation for this programming support is a generic and systematic approach [1] to specify and control a task while dealing properly with geometric uncertainty. Previous work on specification of sensor-based robot tasks, such as force controlled manipulation [2]–[5] or force controlled compliant motion combined with visual servoing [6], was based on the concept of the compliance frame [7] or task frame [8]. In this frame, different control modes, such as trajectory following, force control, visual servoing or distance control, are assigned to each of the translational directions along the frame axes and to each of the rotational directions about the frame axes. The task frame concept has proved to be very useful for the specification of a variety of practical robot tasks. However, the drawback of the task frame approach is that it only applies to task geometries with limited complexity, that is, task geometries for which separate control modes can be assigned independently to three pure translational and three pure rotational directions along the axes of a single frame. A more systematic approach is to assign control modes and corresponding constraints to arbitrary directions in the six-dimensional manipulation space. This approach, known as constraint-based programming, opens up new applications involving a much more complex geometry and/or involving multiple sensors that control different directions in space
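To make the contrast concrete, the following minimal Python sketch is purely illustrative: the selection matrix, constraint directions, Jacobian and numerical values are hypothetical and are not taken from this paper or from the approach in [1]. It only shows how a task frame specification selects a control mode per axis of a single frame, whereas a constraint-based specification may assign constraints along arbitrary directions in the six-dimensional space of end-effector twists.

```python
import numpy as np

# Illustrative sketch only: all matrices and numbers below are hypothetical.

# Task-frame style: one control mode per axis of a single frame.
# A diagonal selection matrix picks which of the six axis directions
# (x, y, z translation; x, y, z rotation) are actively controlled.
# Shown here only for contrast; it is not used in the computation below.
S_task_frame = np.diag([1, 1, 0, 0, 0, 1])  # e.g. x, y translation and rotation about z

# Constraint-based style: each constraint is a row vector along an
# arbitrary direction in six-dimensional twist space, so it need not
# align with the axes of any single frame.
A = np.array([
    [0.0,    0.0,    1.0, 0.0, 0.0, 0.0],   # maintain distance along z
    [0.7071, 0.7071, 0.0, 0.0, 0.0, 0.0],   # move along a diagonal in the xy-plane
])
yd_dot = np.array([0.0, 0.02])              # desired constraint velocities

# Hypothetical 6 x n robot Jacobian mapping joint velocities to end-effector twists.
n = 7
J = np.random.default_rng(0).standard_normal((6, n))

# One possible (least-squares) resolution of the redundant joint velocities
# that satisfy the constraint equation A @ J @ qdot = yd_dot.
qdot = np.linalg.pinv(A @ J) @ yd_dot
print(np.round(qdot, 4))
```

The sketch deliberately omits constraint weighting, feature coordinates and the estimation of geometric uncertainty, which are exactly the elements the systematic approach of [1] provides; it only illustrates why constraint directions need not be tied to the axes of one frame.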