Control of Redundant Manipulators
by Fuzzy Linguistic Commands
Koliya Pulasinghe¹, Keigo Watanabe², Kiyotaka Izumi², Kazuo Kiguchi²
¹ Dept. of Production & Control Tech., Graduate School of Sci. & Eng.,
Saga University, 1-Honjomachi, Saga, 840-8502, Japan
² Dept. of Advanced Systems Control Eng., Graduate School of Sci. & Eng.,
Saga University, 1-Honjomachi, Saga, 840-8502, Japan
koliya@ieee.org
Abstract: This paper presents a method of controlling a redundant manipulator by spoken language commands containing fuzzy linguistic information. The present system introduces the fuzzy-neuro control paradigm to contemporary speech-controlled robotic systems, which are based on an on-off control paradigm. The system responds to the action activation commands, action modification commands, and action repetition commands that arise in practical human-robot dialogues. The credibility of the proposed system is demonstrated experimentally by controlling a manipulator with seven degrees of freedom through spoken language commands enriched with fuzzy linguistic information to perform an assembly task.
Keywords: Natural language commands, Fuzzy-neuro controller, Spoken-language-based robotic control.
1. Introduction
The growing robotic presence in human environments has prompted researchers to investigate candidate communication media for building the human-robot relationship. The following list substantiates the candidacy of spoken language as the human-robot communication medium and explores possible areas in which spoken-language-based robotic systems can be implemented.
• The world’s elderly population is growing rapidly, driven by declining fertility rates and steady improvements in life expectancy. Almost all developed countries, as well as developing countries, will face the problem of accommodating elderly citizens in the near future, and governments must pay close attention to nursing and aiding services for elderly people. According to U.S. Census Bureau forecasts, by 2030 the percentage of the European population over 65 years old will be 24.3 percent, and that of North America 20.3 percent 1). Under these circumstances, many elderly people must live alone without the proper care of their children, so nursing and aiding robots have attracted much attention. However, elderly people cannot precisely operate a joystick or similar conventional control equipment as the input medium when communicating with human-friendly robots. Spoken language is therefore a better alternative for them in communicating with the human-friendly robots employed for nursing and aiding.
• People suffering from quadriplegia (paralysis of all four limbs) have no means other than spoken language to communicate with human-friendly robots. Spoken interfaces not only improve the quality of life of these severely injured people but also add their contribution to the effective workforce of a country.
• A natural-language-based interface can add an additional dimension to the work carried out by a person whose eyes and hands are occupied by the job. For example, while driving a car or performing surgery, people can control the human-friendly robots around them by spoken language.
• Tele-operation is another field where spoken language interfaces can be applied to control robots. Since well-established telephone networks already exist, spoken language can be utilized to control robots in remote environments, which will extend human reach. This is very cost-effective because speech signals consume low bandwidth compared with other signals. In addition, natural-language-based interfaces can be implemented for entertainment robots operating remotely in home environments.
• A spoken language interface is a better alternative when severe space limitations prevent implementing keypads and visual display units on human-friendly robots. It can be realized with minimal power and space requirements, which is ideal for portable or wearable robots.
In the early days of speech-controlled machines, machine functions were activated by comparing the input user utterance with a stored template, as in the voice-controlled wheelchair developed by Mazo et al. 2). Each command is restricted to one or two words and has an associated function 2, 3). To control the machine, the user must
SICE Annual Conference in Fukui, August 4-6, 2003
Fukui University, Japan
PR0001/03/0000-2819 ¥400 © 2003 SICE -2819-