A Taxonomy and Comparison of Haptic Actions for Disassembly Tasks
Aaron Bloomfield, Yu Deng, Jeff Wampler, Pascale Rondot
Dina Harth, Mary McManus, Norman Badler
University of Pennsylvania, Center for Human Modeling and Simulation
Air Force Research Laboratory, Deployment and Sustainment Division (AFRL/HES)
GE Global Research
aaronb@cis.upenn.edu
Abstract
The usefulness of modern-day haptics equipment for virtual simulations of actual
maintenance actions is examined. In an effort to categorize the areas in which haptic
simulations may be useful, we have developed a taxonomy for haptic actions. This
classification has two major dimensions: the general type of action performed and the
type of force or torque required. Building upon this taxonomy, we selected three
representative tasks to evaluate in a virtual reality simulation. We conducted a
series of human subject experiments to compare user performance and preference on a
disassembly task with and without haptic feedback using CyberGlove, Phantom, and
SpaceMouse interfaces. Analysis of the simulation runs shows that Phantom users
learned to accomplish the simulated actions significantly more quickly than did users
of the CyberGlove or the SpaceMouse. Moreover, the lack of differences in the
post-experiment questionnaire suggests that haptics research should include a measure
of actual performance speed or accuracy rather than relying solely on subjective
reports of a device's ease of use.
1. Introduction
One particular application area that seems to be a natural
testbed for haptic interaction is in validating disassembly or
maintenance instructions. Since instructions are authored
for human maintainers, reducing overall difficulty, avoiding
errors, and improving safety should reduce costs and im-
prove productivity. The prospect of using haptic simulation
to aid in validating maintenance tasks led us to an analysis
of possible tasks and ways to provide credible virtual task
experiences.
Maintenance instructions provide a concrete application
with verifiable results for haptic analysis. Real physical sys-
tems must be disassembled and repaired by maintenance
technicians. Design decisions that impact maintainability
can incur major cost over the lifetime of a complex system
such as a modern aircraft. Validation means establishing
that a given maintenance task could indeed be performed.
Validating maintenance tasks in a virtual environment may
help designers to create more maintainable systems, and can
also serve to train technicians without taking an aircraft off
the flight line or subjecting its components to the extra wear
and tear of task practice.
In a technology investment agreement titled Service Manuals Generation (SMG), the
Air Force Research Laboratory, Deployment and Sustainment Division (AFRL/HES) and
GE Global Research are developing an automated technical manual development
capability with a haptics-enabled virtual validation environment. The SMG program
is revolutionizing the way technical manuals are developed and providing a unique
opportunity to validate the manuals before the system is built. Our research
directly supports the SMG effort by evaluating various input devices across
representative maintenance tasks.
The overall goal of this study is to investigate and compare virtual task
validation with and without haptic feedback. There are four general methods to
test and validate task performance:
1. Physical performance of the task in a real environment
2. Interactive computer user with visual feedback only
3. Interactive computer user with visual and haptic feedback
4. Non-interactive computational task analysis
Case 1 involves human action in a real system or full-scale mock-up. Cases 2 and 3
are investigated here using three different devices. Case 4 uses computation alone,
such as a robotics reachability analysis and digital human models.
Proceedings of the IEEE Virtual Reality 2003 (VR’03)
1087-8270/03 $17.00 © 2003 IEEE