The Future of Legal and Ethical Regulations for Autonomous Robotics
Huan Xu¹ and Joseph E. Borson²

¹Huan Xu is with the Faculty of Aerospace Engineering, University of Maryland, College Park, MD 20742, USA. mumu@umd.edu
²Joseph Borson is a graduate of Harvard Law School, Washington, DC 20009, USA. jborson@jd13.law.harvard.edu
Abstract— “Autonomous robotics” promise significant improvements across a host of different complex systems, which will need to be managed within regulatory frameworks to promote, at a minimum, device safety. Contrary to how they are often portrayed, however, these systems do not necessarily require fundamentally new approaches to engineering or regulatory challenges, i.e., the development of a novel “autonomy framework” applicable to different types of devices. Rather, because autonomous systems generally represent a progressive improvement of existing complex systems, preexisting regulatory schemes offer the best guidance for considering future regulation of autonomous elements. Moreover, the regulatory landscape differs considerably based on the type of device at issue (e.g., consumer electronics vis-à-vis medical devices). This paper argues that users and regulators must consider future autonomy regulations within the specific framework those devices currently inhabit, rather than focusing on a novel set of rules divorced from the preexisting context.
I. INTRODUCTION
“Autonomous robotics,” at least as popularly construed in
the vein of self-driving cars, swarms of flying drones, or
even military UAVs capable of deciding on their own to fire
missiles, has often been thought to be a fundamentally
new type of technology, requiring fundamentally new types
of engineering principles and regulatory approaches - an
“autonomy” approach, as it were. This thought piece argues
the opposite - autonomy is merely an evolutionary outgrowth
of existing systems, and so existing systems engineering
and regulatory approaches are the best ways of governing
them. It does not make sense to speak of “autonomy” law,
regulation, or even systems design as a new-in-kind category,
or one that is generally applicable to different domains (e.g.,
aviation, automobiles, consumer products, etc.). Rather, existing frameworks, in particular existing means of regulating complex systems (with their commensurate incorporation of different values and risk tolerances), are the best means to regulate autonomous systems.
There is no universal definition of an autonomous system. At its core, however, autonomy merely describes the design or regulation of systems that make choices based not on contemporaneous, explicit human intervention, but
on pre-conceived decision rules. A self-driving car that de-
termines how best to navigate a route based on an algorithm
and sensor inputs is autonomous, but so is, in some sense, a
stoplight that changes color based on pressure sensors, or
even a medical device that delivers medicine based on a
series of biological inputs. Accordingly, “autonomy” is not
something new; rather, it is a measure of complexity that
the law and the engineering fields have been confronting for
many years.
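To make the definition concrete, the sketch below (a purely hypothetical illustration; the names, signals, and rule are illustrative only and are not drawn from this paper or from any deployed system) shows a stoplight-style controller whose behavior is determined entirely by a pre-conceived decision rule applied to sensor inputs, with no contemporaneous human intervention. The same structure, at far greater scale and complexity, describes a self-driving car's route planner or a sensor-driven drug-delivery device.

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        cars_waiting_ns: int  # vehicles detected on the north-south approach
        cars_waiting_ew: int  # vehicles detected on the east-west approach

    def next_phase(current_phase: str, reading: SensorReading) -> str:
        # The decision rule is fixed at design time; no human chooses at run time.
        if current_phase == "NS_GREEN" and reading.cars_waiting_ew > reading.cars_waiting_ns:
            return "EW_GREEN"
        if current_phase == "EW_GREEN" and reading.cars_waiting_ns > reading.cars_waiting_ew:
            return "NS_GREEN"
        return current_phase  # otherwise hold the current phase

    # The pre-conceived rule, not an operator, decides the change:
    print(next_phase("NS_GREEN", SensorReading(cars_waiting_ns=0, cars_waiting_ew=4)))  # EW_GREEN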
Importantly, different fields have addressed autonomy
challenges in different ways. Some fields, such as aviation,
may prohibit autonomous devices altogether absent specific
approval. Other fields, such as consumer electronics, may al-
low autonomous devices so long as they do not affirmatively
interfere or cause problems. These approaches are often
governed by the relative risk and benefits of the systems,
or in some cases, by regulatory inertia or path dependency.
Fields that are more heavily regulated or that require regulator pre-approval to begin with, such as certain medical devices, will apply these structural requirements to autonomous systems just as they would to any other system. The key point, however, is that there is no single approach to the regulation of autonomous systems - it is system- and regulator-specific. Accordingly, attempts to develop a unified theory of
autonomous regulation, to the extent that such attempts are
ongoing, are bound to fail, for the simple reason that they are
necessarily unbound from the specific context of the systems
they are trying to govern.
This work investigates what the legal and engineering
communities believe are the current challenges in regulating
autonomous systems, and posits that those challenges are
not the true difficulties in legal and ethical regulation. This
paper is structured as follows: Section II discusses the
current challenges to regulation, Section III describes three
application areas, and Section IV presents our conclusions.
II. CURRENT CHALLENGES
While there have been a number of articles in the news
regarding the oncoming rush of autonomous systems into
daily life, most have only posed concerns regarding what
those changes could imply [1]. As the issue of regulation in
autonomy is fairly new, the available literature in this area is somewhat limited. A few broad categories nevertheless emerge from existing work: ethics, liability, and safety. We present a brief description of each of these issues, as well as reasons why the focus on these particular issues does not specifically address actual regulatory decisions.
A. Ethics and Society
1) Previous Work: A number of works focus on the
issue of imposing morals on machines [2]. Anderson et al.
[3] discuss the moral reasons why autonomous machines
should function ethically. In the area of medical robotics,
Stahl and Coeckelbergh [4] explore the implications of health