SCHIFF et al.: AI ETHICS IN THE PUBLIC, PRIVATE, AND NGO SECTORS: A REVIEW OF A GLOBAL DOCUMENT COLLECTION
[Accepted version. Published version available at https://doi.org/10.1109/TTS.2021.3052127]
Abstract— In recent years, numerous public, private, and
non-governmental organizations (NGOs) have produced
documents addressing the ethical implications of artificial
intelligence (AI). These normative documents include
principles, frameworks, and policy strategies that articulate
the ethical concerns, priorities, and associated strategies of
leading organizations and governments around the world.
We examined 112 such documents from 25 countries that
were produced between 2016 and the middle of 2019. While
other studies have identified some degree of consensus in such
documents, our work highlights meaningful differences
across public, private, and non-governmental
organizations. We analyzed each document in terms of how
many of 25 ethical topics were covered and the depth of
discussion for those topics. As compared to documents from
private entities, NGO and public sector documents reflect
more ethical breadth in the number of topics covered, are
more engaged with law and regulation, and are generated
through processes that are more participatory. These
findings may reveal differences in underlying beliefs about
an organization’s responsibilities, the relative importance
of relying on experts versus including representatives from
the public, and the tension between prosocial and economic
goals.
Index Terms—Artificial intelligence, ethics, social implications
of technology.
Manuscript received May 27, 2020; revised October 9, 2020 and
November 25, 2020; accepted January 3, 2021. This work was supported
in part by the Science, Technology, and Innovation Policy Program,
Georgia Institute of Technology.
© 2021 IEEE. Personal use of this material is permitted. Permission
from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for
advertising or promotional purposes, creating new collective works,
for resale or redistribution to servers or lists, or reuse of any
copyrighted component of this work in other works.

I. INTRODUCTION

Artificial intelligence (AI) is beginning to revolutionize
numerous sectors of society, from research and transportation to
finance and health care. Its near-term economic impacts are
estimated to be in the trillions [1], and it is considered to be
central to the Fourth Industrial Revolution [2]. Its potential
transformative impacts have led to a significant
increase in attention to AI’s social and ethical implications. As
a result, over recent years, many organizations have produced
documents that examine AI’s ethical implications, articulate
principles and guidance, and identify strategies to develop and
implement AI responsibly. These documents – ethics codes,
principles, frameworks, guidelines, and policy strategies –
reflect the ethical viewpoints and priorities of leading
organizations around the world. These include national
governments, intergovernmental bodies, multinational
corporations, prominent NGOs, and organizations created with
a specific focus on AI.
Scholars have begun to analyze the content of these AI ethics
documents. Some have used qualitative methods to identify
themes across documents [3]–[7] or to support comparative
analyses [8]–[10]; others have employed quantitative content
analysis for similar reasons [11]. Still others have discussed
second-order themes, such as the ethical assumptions
underlying such documents [12] and the gap between ethical
principles and actual practices [3], [7], [13]–[15]. Overall, the
plurality of this work has focused on conceptually categorizing
ethics topics and reducing them to a small number, typically
5-10, of core topics [6].
Jobin, Ienca, and Vayena (2019) have, for example,
identified transparency, justice, fairness, nonmaleficence,
responsibility, and privacy as concerns that typically appear in
their set of 84 documents. Fjeld et al. (2020) identified eight
similar principles in their analysis of 36 documents. Floridi and
Cowls (2019) argued that the 47 AI ethics principles they
reviewed fall within the traditional bioethics principles of
beneficence, nonmaleficence, autonomy, and justice, along
with a novel principle of explicability. In short, the primary
thrust of the prior literature has been to describe the degree to
which a global consensus around AI ethics is emerging.
Daniel Schiff is with the Georgia Institute of Technology, School of
Public Policy, Atlanta, GA, U.S. (e-mail: schiff@gatech.edu).
Jason Borenstein is with the Georgia Institute of Technology, School
of Public Policy and Office of Graduate Studies, Atlanta, GA, U.S.
(e-mail: borenstein@gatech.edu).
Justin Biddle is with the Georgia Institute of Technology, School of
Public Policy, Atlanta, GA, U.S. (e-mail:
justin.biddle@pubpolicy.gatech.edu).
Kelly Laas is with the Illinois Institute of Technology, Center for
the Study of Ethics in the Professions, Chicago, IL, U.S. (e-mail:
laas@iit.edu).