Prediction of object manipulation using tactile sensor information
by a humanoid robot
Shigeyuki Uematsu, Yuichi Kobayashi, Akinobu Shimizu and Toru Kaneko
Abstract— This paper presents a framework for acquiring a lifting-up manipulation based on tactile sensing information by a humanoid robot. Feature extraction from sensor information, including tactile information, is performed using linear and nonlinear mappings. Information acquired from the sensors is mapped to a lower-dimensional space for predicting the success of the lifting-up task. The robot judges success or failure of the manipulation using the obtained feature space and the object orientation. The proposed method was evaluated in simulation with a humanoid robot. Sensor information obtained at the initial stage of the lifting-up task was used to predict whether the robot can accomplish the task without dropping the object. It was verified that the proposed feature extraction provides sufficient information to predict the success of the task. The prediction will be utilized to modify the posture of the robot.
I. INTRODUCTION
Nowadays, demand for robots that can perform tasks that have so far been conducted by humans is increasing [1]. One such task is assisting people who need nursing care so that they can carry out their daily activities by themselves. A meal-assistance robot [3] and a smart wheelchair [2] are examples of such applications. Another demand is to decrease the heavy loads handled by caregivers at nursing homes and hospitals. Mukai et al. have developed a nursing-care assistance robot, RIBA, that can lift a human in its arms [5]. They used soft tactile sensors mounted on its arms for lifting up a bedridden patient.
One of the difficulties of the lifting-up motion in such applications is that the object (patient) can vary in size, shape, posture, clothing, and so on. Thus, it is very important for such robots to cope with variation of the object state. RIBA, as described above, has tactile sensors on its arms and has a large potential to adaptively manipulate objects in various states, because tactile information provides rich information about the contact between the robot and the object [4]. Ohmura et al. realized a whole-body contact motion to lift up a heavy object using tactile feedback [7]. Chitta et al. proposed a method for estimating the state of objects using information obtained by tactile sensors [6].
A problem common to those approaches that effectively utilize tactile information is that most control strategies are problem-specific, designed specifically for each application. In conventional manipulation with a robot hand, stability of grasping has been formulated with analytical models, such as
force closure and form closure [8]. The performance index of manipulation was further extended to the multi-robot manipulation case [9]. In the above-mentioned applications with humanoids, however, those ideas cannot be directly applied due to the difficulty of modeling contact between objects and robots.

S. Uematsu is with the Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, 184-8588 Japan. 50011645205@st.tuat.ac.jp
One possible solution to this problem is to introduce a
data-driven approach. By directly observing many sample
motions and their results, we might be able to evaluate
a manipulation strategy from those observed experiences.
If a robot can extract features that help to evaluate (or
predict) a manipulation strategy autonomously, the method
can be applied to cases with various objects and robots. In
addition, such evaluation of a manipulation strategy could
be equivalent to stability analysis of grasps without any
analytical models.
In this paper, a feature extraction method for evaluating manipulation of an object by a humanoid robot is proposed. Evaluation is conducted by predicting whether the current manipulation posture is suitable for the final achievement of the manipulation task. Feature extraction based on mapping to a lower-dimensional space and clustering is applied to estimate the success or failure of the lifting task. The proposed method is verified in simulation with a humanoid robot that has tactile sensors on its arms.
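As an illustrative sketch only (not the authors' actual implementation, and using hypothetical toy data), the overall idea of this pipeline can be expressed as: map sensor vectors to a low-dimensional feature space (here a simple linear PCA via SVD), cluster them, label each cluster by the majority outcome of its training trials, and predict the outcome of a new trial from its nearest cluster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data (hypothetical): each row is a sensor feature vector
# recorded at the initial stage of a lifting trial, with a binary label
# (1 = lift succeeded, 0 = object was dropped).
n, d = 60, 12
X = rng.normal(size=(n, d))
X[: n // 2] += 2.0          # successful trials lie apart from failures
y = np.array([1] * (n // 2) + [0] * (n // 2))

# Linear mapping to a lower-dimensional feature space (PCA via SVD).
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 2
Z = (X - mu) @ Vt[:k].T     # n x k low-dimensional features

def kmeans(Z, n_clusters, iters=50):
    """Minimal k-means in the low-dimensional feature space."""
    centers = Z[rng.choice(len(Z), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):   # avoid collapsing an empty cluster
                centers[c] = Z[labels == c].mean(axis=0)
    return centers, labels

centers, labels = kmeans(Z, n_clusters=2)

# Label each cluster by the majority outcome of its member trials.
cluster_outcome = np.array([np.round(y[labels == c].mean()) for c in range(2)])

def predict_success(x_new):
    """Project a new sensor vector into the feature space and return
    the outcome label of the nearest cluster center."""
    z = (x_new - mu) @ Vt[:k].T
    c = np.argmin(((centers - z) ** 2).sum(-1))
    return int(cluster_outcome[c])
```

The nonlinear mappings mentioned in the abstract would replace the PCA step; the cluster-labeling and nearest-cluster prediction steps are unchanged.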
The rest of the paper is organized as follows. The problem settings for the proposed method are described in section II. The proposed method of predicting task achievement is described in section III. The proposed method is evaluated in simulation in section IV. After discussing the simulation results in section V, section VI concludes the paper.
II. PROBLEM SETTINGS
A manipulation task of lifting up an object by a humanoid robot is considered. An example in simulation is shown in Fig. 1, where the configuration of the humanoid robot NAO [13] is simulated by the Webots simulator [14]. There are two cameras on the head of the robot, but they do not provide sufficient information about the relative position of an object when it is located close to the robot. Measuring an object close to the robot from visual information is especially difficult in the case of a large object with few textures, because the field of view is filled with the object, leaving no visual cue to detect its position.
It is assumed that force sensors are attached to each arm of the robot so that it can evaluate contact with an object*.

*The original configuration of NAO does not include force sensors.
978-1-4673-2706-0/12/$31.00 ©2012 IEEE