Determinants of system transparency and its influence on trust in and
reliance on unmanned robotic systems
Scott Ososky (a), Tracy Sanders (a), Florian Jentsch (a), Peter Hancock (a) & Jessie Y. C. Chen (b)
(a) University of Central Florida, Orlando, FL; (b) U.S. Army Research Laboratory, Orlando, FL
ABSTRACT
Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous
environments. It is unlikely, however, that such systems will be able to operate with consistently perfect reliability. Even
a system that is less than 100% reliable can provide significant benefit to humans, but that benefit depends on a human
operator’s ability to understand the robot’s behaviors and states. The notion of system transparency is examined as a vital
aspect of robotic design for maintaining humans’ trust in, and reliance on, increasingly automated platforms. System
transparency is described as the degree to which a system’s action, or the intention of an action, is apparent to human
operators and/or observers. While the physical designs of robotic systems have been demonstrated to greatly influence
humans’ impressions of robots, determinants of transparency between humans and robots are not solely robot-centric. Our
approach considers transparency as an emergent property of the human–robot system. In this paper, we present insights
from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots: near-
future teams in which robot agents will autonomously collaborate with humans to achieve task goals. This
paper demonstrates how factors such as human–robot communication and human mental models regarding robots impact
a human’s ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications
of system transparency for other critical HRI factors, such as situation awareness, operator workload, and perceptions of
trust.
Keywords: human–robot interaction, autonomous systems, transparency, trust, reliability, human–robot teams, mental
models, situation awareness
1. INTRODUCTION
Human reliance on robots in complex and dangerous environments is expected to continue expanding. This is due to both
ongoing technological advancements in artificial intelligence and sensing capabilities, and the obvious benefits that robots
already provide in keeping humans out of harm’s way. This trend is, perhaps, most readily evidenced by the recent news
that the US Army is considering downsizing the human contingent of its forces by as many as 80,000 Soldiers over the
next five years [1]. To compensate for the reduction in manpower brought about by the re-scoping of defense budgets, and
to keep pace with the continuing evolution of the strategic/operational environment, the US military is looking to robotic
systems to partner with and augment the capabilities of Soldiers. This approach is further detailed in the most recent
iteration of the US Department of Defense Unmanned Systems Integrated Roadmap, which spans FY2013 to 2038 [2].
This roadmap describes near- and far-term R&D efforts to advance the capabilities of unmanned robotic assets in
ground, sea, and air applications. However, in order to realize the benefits that advanced autonomous robots may provide,
there are unique challenges that collaborative robotics must first address, including trust and transparency in robotic
teammates.
Of particular relevance to the current discussion is the advancement of autonomous capabilities in unmanned ground
systems, specifically within the military domain. Increased autonomy in ground robots provides a pathway from tools to
teammates [3], positioning robots as collaborating members of teams that consist of human and robotic members.
However, it is not expected (or perhaps even desirable) that the autonomous capabilities of robots will replicate the
capabilities of their human counterparts [4]. Rather, collaborative robotics presents an opportunity to develop systems that
complement the strengths and weaknesses of human teammates. Likewise, humans can compensate for the limitations of
robotic systems through bi-directional communication channels during task execution. Robots, therefore, do not need to
be 100% reliable to be useful, but it is critical that humans place an appropriate amount of trust in, and reliance on,
robotic systems in order to leverage their capabilities effectively [5]. Trust can be properly calibrated given an accurate
Unmanned Systems Technology XVI, edited by Robert E. Karlsen, Douglas W. Gage, Charles M. Shoemaker,
Grant R. Gerhart, Proc. of SPIE Vol. 9084, 90840E · © 2014 SPIE
CCC code: 0277-786X/14/$18 · doi: 10.1117/12.2050622