An Autonomous Assistant Robot For Book Manipulation in a Library

Ramos-Garijo R, Prats M, Sanz PJ, Del Pobil AP
Computer Science Department, Jaume-I University, Castellón, Spain
{al024080, al019137, sanzp, pobil}@uji.es

0-7803-7952-7/03/$17.00 2003 IEEE.

Abstract - This paper presents work in progress towards a complete system for assisting users in a library. To this end, the system must be capable of searching a shelf for a specific book requested by any user and, if the book is found, delivering it to the user as soon as possible. To achieve this objective, the system integrates automatic object recognition, visually guided grasping, and force feedback, among other advanced capabilities. Implementation details of the main modules developed so far are presented. Finally, encouraged by successful preliminary results obtained on our campus, we continue working in this direction towards the complete prototype.

Keywords: Autonomous Manipulation; Pattern Recognition; Robots; Visually-Guided Grasping.

1 Introduction

Service robots are currently a very active research area, as can be observed in the programs of the most important robotics conferences around the world (ICRA, IROS, etc.). Such systems are commonly built on a mobile robot, and many applications have been reported to date, focused on tour-guide robots, cleaning robots and so on. Much less common, however, are applications in real-life scenarios, such as a library, where manipulation is a must. In this latter case, a mobile robot arm is necessary to manipulate the books on a shelf. In summary, we can distinguish three main components in such systems: a mobile platform with navigation capabilities; a robot arm suitable for autonomous manipulation tasks; and a user interface allowing a very high level of interaction with the system.

1.1 State of the art

Focusing on the aforementioned systems, some recent progress has been achieved.
In particular, with the aim of reducing the many difficult tasks involved in this kind of system, a teleoperated solution was presented by Tomizawa and coworkers [4], from the University of Tsukuba, Japan. Although the final objective of our work differs from that proposed by Tomizawa et al., some of their strategies are useful to our project. Note that we are not interested in the subsequent digitization of printed materials, as proposed by Tomizawa et al., but all the preceding strategies (guiding the robot towards the bookshelf, identifying and manipulating a book, etc.) are the same for us. On the other hand, some work has been developed in an autonomous manner, such as that from the Johns Hopkins University, entitled "Comprehensive Access to Printed Materials" (CAPM). Its main objective, currently in progress, is to allow real-time browsing of printed materials through a web interface. With this aim, an autonomous mobile robotic library system has been developed to retrieve items from bookshelves and carry them to scanning stations located in an off-site shelving facility. Nevertheless, only simulation experiments have been presented so far by its authors, Suthakorn et al. [3]. It is important to clarify the main difference between [3] and [4] from a robotics point of view: although both systems use the internet for user interaction, only in [4] is the user within the system control loop, i.e., it is a teleoperated system.

1.2 Motivation and goals

In this paper, an autonomous solution for the robotic librarian is proposed. The aim is to retrieve a book requested by any user and to bring that book to the user, provided it is found on the corresponding bookshelves. Several pieces related to this problem have already been developed in our lab, such as robot navigation strategies, user interfaces based on voice commands, and visually guided grasping modules [2], among others.
In particular, this paper focuses on the computer vision and grasping modules necessary to achieve the proposed objective. The rest of the paper is organized as follows. The overall system description is presented in Section 2. The main modules developed so far, namely vision and grasping, are described in Sections 3 and 4, respectively. Section 5 reports some preliminary results, and finally, some concluding remarks are presented in Section 6.