Autonomy in Spacecraft Software Architecture
Henry Hexmoor
University of North Dakota
Department of Computer Science
Grand Forks, ND 58202
Abstract
In this paper we discuss the concept of
autonomy and its impact on building
complex systems for space applications
that require it. We have developed
several metrics for quantifying
autonomy in such systems.
1. Introduction
As more complex systems are being
developed, there is greater need for
quantifying their level of autonomy
(Brown, Santos, Banks, & Oxley, 1998;
Gat, Pollack, & Cohen, 1998; Hanks,
Pollack, & Cohen, 1993; Hexmoor,
Kortenkamp, & Horswill, 1997). This is
more evident in systems that control
spacecraft.
Let’s consider a device as a complex
machine that appears capable of tasks
commonly performed by intelligent
organisms. An automobile equipped with
cruise control, road-sensitive traction, and
self-inflating tires is such a device. Such
devices receive input from the
environment and follow an algorithm
provided by the device designer to produce
an output. In general, devices cannot tell
how well they perform. Furthermore,
devices can manipulate only a fixed
ontology of their surroundings: they
represent things in the world in a fixed
way. Under this definition, many robotic
applications qualify as devices. To the extent a
system can be aware of its performance
and can improve its pre-programmed
ability to interact with its surroundings, or
can alter its ontology, it is autonomous.
Autonomy is a system's self-governance
over its own output. A system such as a
smart chess-playing program can be
intelligent without being autonomous.
This notion of autonomy is desirable in
systems such as the autonomous
applications needed for long-duration
space missions.
Elsewhere (Hexmoor, Lafary, & Trosen,
1999), we have argued that autonomy
levels closely correspond to an agent’s
rank. We defined six ranks: fully
autonomous, boss, cooperative, underling,
instructible, and remote control. We
argued that an autonomous agent’s
decision-making changes when it
introspects about its rank with respect to
other agents. In spacecraft software where
a human user is in ultimate control of the
agent, the agent is not required to
introspect about its level of autonomy.
The human user changes the agent’s
autonomy level and gives it new
instructions.
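This scheme, in which the human user sets the agent's rank externally rather than the agent introspecting about it, can be sketched as a simple ordered type. The class names, numeric ordering, and planning threshold below are illustrative assumptions, not definitions from the paper:

```python
from enum import IntEnum

class AutonomyRank(IntEnum):
    """Hypothetical encoding of the six ranks, ordered from
    most to least autonomous."""
    FULLY_AUTONOMOUS = 6
    BOSS = 5
    COOPERATIVE = 4
    UNDERLING = 3
    INSTRUCTIBLE = 2
    REMOTE_CONTROL = 1

class SpacecraftAgent:
    """Agent whose autonomy rank is assigned by the human user,
    so the agent itself never introspects about its rank."""
    def __init__(self):
        # Start under full human control.
        self.rank = AutonomyRank.REMOTE_CONTROL
        self.instructions = []

    def set_rank(self, rank, instructions):
        # The human user changes the agent's autonomy level
        # and gives it new instructions.
        self.rank = rank
        self.instructions = list(instructions)

    def may_plan_independently(self):
        # Illustrative threshold: only the higher ranks are
        # permitted to plan on their own.
        return self.rank >= AutonomyRank.COOPERATIVE

agent = SpacecraftAgent()
agent.set_rank(AutonomyRank.COOPERATIVE, ["survey site A"])
```

With this sketch, lowering the rank back to `REMOTE_CONTROL` immediately disables independent planning, mirroring the human user's ultimate control.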
Therefore, these systems also need to be
interruptible. Although these systems
do not need to introspect about changing
their autonomy when interacting with
human users, they do need to be aware of
their resource usage. Finally, autonomous
agents need to learn and self-detect faults.
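Interruptibility and resource awareness can be illustrated with a minimal control loop; the loop structure and names here are invented for illustration and are not taken from the paper:

```python
def control_loop(steps, interrupt_flag, resource_budget):
    """Run a sequence of task steps until they finish, a human
    interrupt arrives, or the resource budget is exhausted."""
    used = 0
    for step in steps:
        # The human user may interrupt before any step.
        if interrupt_flag():
            return "interrupted", used
        # Each step reports the resources it consumed.
        used += step()
        # The agent monitors its own resource usage.
        if used > resource_budget:
            return "budget_exhausted", used
    return "done", used

# Two steps costing 1 and 2 units against a budget of 10.
status, used = control_loop([lambda: 1, lambda: 2],
                            lambda: False, 10)
```

Here `status` is `"done"` and `used` is `3`; raising the interrupt flag or shrinking the budget ends the loop early with the corresponding status.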
2. Resource management
Resources are either shared or consumed.
Shared resources are called reusable, and
we will focus on this type of resource first.
We are concerned with the design of
behaviors that account for using a shared
resource. Specifically, we will examine the
design of two behaviors that share a single
resource.
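A minimal sketch of such sharing, assuming a mutual-exclusion arbiter in which only one behavior holds the reusable resource at a time (the class, function names, and return values below are all illustrative):

```python
import threading

class SharedResource:
    """A reusable (shared) resource, e.g. a robot's vision
    system. At most one behavior may hold it at a time."""
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()
        self.holder = None

    def acquire(self, behavior):
        self._lock.acquire()
        self.holder = behavior

    def release(self):
        self.holder = None
        self._lock.release()

def follow_target(vision):
    # A target-following behavior borrows the vision system
    # briefly, then releases it for other behaviors.
    vision.acquire("follow")
    try:
        reading = "target bearing"  # placeholder for real sensing
    finally:
        vision.release()
    return reading

def avoid_obstacles(vision):
    # An obstacle-avoidance behavior reuses the same resource
    # once it is free.
    vision.acquire("avoid")
    try:
        reading = "obstacle map"  # placeholder for real sensing
    finally:
        vision.release()
    return reading

vision = SharedResource("vision")
follow_target(vision)
avoid_obstacles(vision)
```

Because the resource is reusable, each behavior must release it promptly; the `try`/`finally` pattern guarantees release even if sensing fails.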
Let's consider a behavior F for navigating
in the direction the robot is facing while
following a moving target. A second
behavior C uses the vision system to look
for closer, smaller objects for obstacle
avoidance. Let's assume that the vision
system can only process the target at a
minimum distance of 5 feet and a
minimum height of 4 feet. The vision
From: Proceedings of the Twelfth International FLAIRS Conference. Copyright © 1999, AAAI (www.aaai.org). All rights reserved.