Research article

A 3D grasping system based on multimodal visual and tactile processing

Beata J. Grzyb, Eris Chinellato, Antonio Morales and Angel P. del Pobil
Robotic Intelligence Lab, Department of Computer Engineering and Science, Universitat Jaume I, Castellón, Spain

Abstract

Purpose – The purpose of this paper is to present a novel multimodal approach to the problem of planning and performing a reliable grasping action on unmodeled objects.

Design/methodology/approach – The robotic system is composed of three main components. The first is a conceptual manipulation framework based on grasping primitives. The second is a visual processing module that uses stereo images and biologically inspired algorithms to accurately estimate the pose, size, and shape of an unmodeled target object. A grasp action is planned and executed by the third component of the system, a reactive controller that uses tactile feedback to compensate for possible inaccuracies and thus complete the grasp even in difficult or unexpected conditions.

Findings – Theoretical analysis and experimental results have shown that the proposed approach to grasping, based on the concurrent use of complementary sensory modalities, is very promising and suitable even for changing, dynamic environments.

Research limitations/implications – Additional setups with more complicated shapes are being investigated, and each module is being improved in both hardware and software.

Originality/value – This paper introduces a novel, robust, and flexible grasping system based on multimodal integration.

Keywords Control technology, Robotics, Control applications

Paper type Research paper

1. Introduction

Traditional research on robot grasp planning, analysis, and control assumes that the layout of the workspace is known in advance, and that models of the objects to manipulate and of the robot hand are readily available.
In these conditions the problem of grasping becomes an analytical planning problem, and many theoretical and computational solutions have been proposed for the different stages of a reach-and-grasp action. In service robotics applications the above assumptions normally do not hold, and real-world scenarios are usually unstructured and prohibitively costly to model. In these cases, theoretical analytical solutions are not directly applicable, and more flexible and versatile approaches have to be pursued.

In unstructured environments, the main sources of uncertainty for grasping actions come from the attempt to manipulate unmodeled objects, whose pose and physical characteristics can be variable and not known in advance. The use of sensors makes it possible to acquire information about the environment and hence reduce uncertainty during action execution. Within the field of grasp planning and execution, the use of sensors focuses on three main stages: first, object model acquisition, which enables more traditional grasp planning once a model has been built; second, the approach phase; and third, the control loop of the grasp execution phase, with the purpose of obtaining a stable grasp.

Regarding the first two stages, vision is the most widely used modality. Many different strategies have been developed to estimate the shape and pose of target objects from visual input (Jang et al., 2005; Wang et al., 2005). Successful approaches, with certain limitations, are available for grasp planning on 2D planar objects (Davidson and Blake, 1998; Morales et al., 2006), but no completely satisfactory solutions have been provided for the full 3D case. Visual feedback is also often employed when approaching the target object, using various techniques of visual servoing and active vision.
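As a concrete illustration of how stereo input can yield 3D position information about a target, the following sketch triangulates a feature point matched between a calibrated left/right image pair. This is not the paper's visual processing module; the function name, camera parameters, and pixel coordinates are all assumed example values.

```python
# Minimal stereo triangulation sketch (illustrative, not the authors' system):
# recover the 3D camera-frame position of a point matched in both images of a
# rectified stereo pair, using depth = focal_length * baseline / disparity.

def triangulate(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Return (X, Y, Z) in metres for a point seen at column u_left in the
    left image and u_right in the right image, row v in both."""
    disparity = u_left - u_right           # horizontal shift between the views
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = focal_px * baseline_m / disparity  # depth from similar triangles
    X = (u_left - cx) * Z / focal_px       # back-project into the camera frame
    Y = (v - cy) * Z / focal_px
    return X, Y, Z

# Example: assumed 700 px focal length, 12 cm baseline, 35 px disparity.
X, Y, Z = triangulate(u_left=340.0, u_right=305.0, v=260.0,
                      focal_px=700.0, baseline_m=0.12, cx=320.0, cy=240.0)
print(round(Z, 3))  # depth of the matched point, in metres
```

Repeating this over many matched features gives a point cloud from which the size and rough shape of an unmodeled object can be estimated, which is the kind of information a grasp planner needs from the visual front end.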
However, when the effector comes into contact with the object, vision loses its leading role, and other sensory modalities, mostly contact-based sensors, take over control of the action. Pressure and/or force sensors are mainly employed as feedback for the grasp execution control loop (Platt et al., 2002), and in object exploration strategies (Teichmann and

The current issue and full text archive of this journal is available at www.emeraldinsight.com/0143-991X.htm

Industrial Robot: An International Journal 36/4 (2009) 365–369 © Emerald Group Publishing Limited [ISSN 0143-991X] [DOI 10.1108/01439910910957138]

This paper describes research carried out at the Robotic Intelligence Laboratory of Universitat Jaume I. Support for this laboratory is provided in part by the European Community's Seventh Framework Programme FP7/2007-2013 under grant agreements 217077 (EYESHOTS project) and 215821 (GRASP project), by Ministerio de Ciencia y Innovación (FPU grant AP2007-02565), by Fundació Caixa-Castelló (projects P1-1B2005-28 and P1-1A2006-11) and by Generalitat Valenciana (project GV-2007-109).
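The hand-over from vision to touch described above can be made concrete with a minimal sketch of a tactile-driven closing loop: each finger advances until its pressure sensor reports contact, so small errors in the visually estimated object pose are absorbed per finger. Every name, threshold, and the `read_forces` sensor interface here is a hypothetical assumption for illustration, not the paper's controller.

```python
# Illustrative reactive closing loop (assumed design, not the authors' code):
# fingers close independently and each one freezes as soon as its tactile
# sensor crosses a contact threshold, compensating small pose inaccuracies.

CONTACT_FORCE = 0.5   # N, assumed contact-detection threshold
STEP = 1.0            # degrees of finger closure per control cycle
MAX_ANGLE = 90.0      # mechanical limit of a finger

def close_fingers(read_forces, n_fingers=3):
    """Close each finger until its sensor reports contact.

    `read_forces(angles)` stands in for the real tactile interface: it
    returns one force reading per finger for the current finger angles.
    Returns the final angles and whether all fingers made contact."""
    angles = [0.0] * n_fingers
    stopped = [False] * n_fingers
    while not all(stopped) and max(angles) < MAX_ANGLE:
        forces = read_forces(angles)
        for i in range(n_fingers):
            if stopped[i]:
                continue
            if forces[i] >= CONTACT_FORCE:
                stopped[i] = True        # this finger touched the object
            else:
                angles[i] += STEP        # keep closing this finger
    return angles, all(stopped)

# Simulated object surface: each finger meets resistance at a different angle,
# as happens when the object sits slightly off its visually estimated pose.
contact_at = [30.0, 42.0, 36.0]
fake_sensor = lambda a: [1.0 if a[i] >= contact_at[i] else 0.0 for i in range(3)]
angles, grasped = close_fingers(fake_sensor)  # each finger stops at its own contact angle
```

The point of the sketch is the division of labour: vision only has to get the hand close enough, and the contact-based loop supplies the final, per-finger correction that yields a stable grasp.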