Laser-Assisted Telerobotic Control for Enhancing Manipulation
Capabilities of Persons with Disabilities
Karan Khokar, Kyle B. Reed, Redwan Alqasemi and Rajiv Dubey
Abstract—In this paper, we demonstrate the use of range
information from a laser sensor mounted on the end-effector
of a remote robot manipulator to assist persons with limited
upper body strength in carrying out Activities of Daily Living
in unstructured environments. Laser range data is used to
determine goal locations and to identify targets, obstacles, and
via points, enabling autonomous execution of trajectories. The
human operator is primarily involved in higher-level decision
making and performs minimal teleoperation to identify
critical points in the workspace with the laser pointer. Tests
in which ten healthy human subjects executed a pick-and-place
task showed that laser-based assistance not only increased the
speed of task execution by an average of 26.9% while decreasing
the physical effort by an average of 85.4%, but also made the
task cognitively easier for the user to execute.
I. INTRODUCTION
According to the 2006 US Census Bureau report [1], 51.2
million Americans have some form of disability, and 10.7
million of them are unable to independently perform activities
of daily living (ADLs). They need personal assistance with
ADLs such as picking and placing an object or opening a
door. Robotic devices have been used to enable physically
disabled individuals to execute ADLs [2]. However, teleoperation
of a remote manipulator places a substantial physical and
cognitive load on the operator [2], even more so for
persons with disabilities. There have been previous attempts
to provide computer-based assistance by combining teleoperation
and autonomous modes in shared and traded control
formulations [3], [4], [5], by means of virtual fixtures [6]
and potential fields [7]. Previous work at the Rehabilitation
Robotics Laboratory at the University of South Florida
has focused on reducing operator fatigue by providing
assistance depending on the accuracy of sensor and model
information [8], augmenting the performance of motion-
impaired users in job-related tasks using scaled teleoperation
and haptics [9], and providing assistance based on real-time
environmental information and user intention [10]. In this
work we use the laser sensor to minimize the physical and
mental burden on the human user during task execution. The
laser range data is used to determine goal points and to identify
targets, obstacles, and via points in the remote unstructured
environment, which enables autonomous execution of certain
subtasks under human supervisory control, thus assisting the
human user. The human remains in the loop and teleoperates
to point the laser at critical points in the remote environment.
The authors believe that the use of laser range information
in this manner is a novel approach.
The motivating factor behind this work is to enable persons
with limited upper body strength (due to multiple sclerosis,
muscular dystrophy, stroke, or spinal cord injuries) to
execute ADLs. However, the proposed telerobotic concept
has a much broader scope in terms of providing assistance
in areas such as nuclear waste clean-up, space/undersea
telerobotics, robotic surgery, and defense applications.

Manuscript received March 10, 2010. Final manuscript received
July 15, 2010. This work was supported by NSF Grant No. IIS-0713650.
Karan Khokar is with the University of South Florida,
Tampa, FL 33620 USA (phone: 813-447-7703; e-mail:
karan.khokar@gmail.com).
Kyle B. Reed is with the University of South Florida, Tampa, FL 33620
USA (phone: 813-974-2385; e-mail: kylereed@usf.edu).
Redwan Alqasemi is with the University of South Florida, Tampa, FL 33620
USA (e-mail: alqasemi@eng.usf.edu).
Rajiv Dubey is with the University of South Florida, Tampa, FL 33620
USA (e-mail: dubey@eng.usf.edu).
II. RELATED WORK
Hasegawa et al. [11] enabled autonomous execution of
tasks by generating 3D models of objects with a laser sensor
that computed 3D coordinates of points on objects. These
models were compared to a database of CAD models to
match objects. Takahashi and Yashige [12] presented a simple,
easy-to-use laser-based robot positioning system to
assist the elderly in daily pick-and-place activities. The
robot in this case was an x-y-z linearly actuated mechanism
mounted on the ceiling. Nguyen et al. [13] made use of a
system consisting of a laser pointer, a monochrome camera,
a color filter and a stereo camera pair to estimate the 3D
coordinates of a point in the environment so their robot
could fetch objects in the environment designated with the
laser pointer. The methodology that we present for task
execution is simple and uses a single point laser sensor. The
information necessary to enable task execution is generated
quickly online. Moreover, the interface is easy to use, which
is necessary in assistive robotics for persons with disabilities.
III. LASER-ASSISTED CONTROL CONCEPT
The human user teleoperates a PUMA manipulator via a
Phantom Omni haptic device. First, by teleoperating, the user
points the laser mounted on the PUMA end-effector at critical
points in the environment. These critical points could be
goal points, objects, or planar surfaces of interest. Referring
to Fig. 1, the laser sensor is mounted on the PUMA end-
effector so the laser beam direction will always be parallel
to the z-axis of the end-effector. Thus, by teleoperating the
PUMA wrist (i.e., joints four, five, and six) the user is able
to access a major portion of the PUMA workspace with the
laser pointer.
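Because the beam is rigidly aligned with the end-effector z-axis, the world coordinates of a designated critical point follow directly from the manipulator's forward kinematics and the measured range. The sketch below illustrates this geometry; the function name, the homogeneous transform `T_ee`, and the mounting offset are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def laser_spot_world(T_ee, laser_range, laser_offset=None):
    """Estimate the world-frame position of the laser spot.

    T_ee:        4x4 homogeneous transform of the end-effector in the
                 world frame (from forward kinematics).
    laser_range: distance measured by the sensor along the beam (meters).
    laser_offset: mounting offset of the laser emitter, expressed in the
                 end-effector frame (defaults to zero).

    The beam is assumed parallel to the end-effector z-axis.
    """
    if laser_offset is None:
        laser_offset = np.zeros(3)
    R = T_ee[:3, :3]               # end-effector orientation
    p = T_ee[:3, 3]                # end-effector position
    beam_dir = R[:, 2]             # beam direction = end-effector z-axis
    origin = p + R @ laser_offset  # emitter position in the world frame
    return origin + laser_range * beam_dir

# Example: end-effector at (0.5, 0, 0.3) with identity orientation and a
# range reading of 0.8 m places the spot 0.8 m along the world z-axis.
T = np.eye(4)
T[:3, 3] = [0.5, 0.0, 0.3]
print(laser_spot_world(T, 0.8))   # spot at (0.5, 0.0, 1.1)
```

In this formulation, teleoperating only the wrist joints changes `R`, sweeping the beam direction across the workspace without translating the end-effector.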
The 2010 IEEE/RSJ International Conference on Intelligent
Robots and Systems, October 18-22, 2010, Taipei, Taiwan.
©2010 IEEE.