The Moral Status of Artificial Intelligence: Exploring Users’ Anticipatory Ethics in the Controversy Regarding LaMDA’s Sentience

Răzvan Rughiniș
Faculty of Automatic Control and Computers
University Politehnica of Bucharest
The Romanian Academy of Scientists - AOSR
Bucharest, Romania
razvan.rughinis@upb.ro

Dragoș M. Obreja
Doctoral School of Sociology
University of Bucharest
Bucharest, Romania
dragosm.obreja@gmail.com
Abstract— Several approaches to the study of technological development have given increased importance to technologies that are not yet implemented but are frequently discussed within users’ social groups. Drawing on a qualitative content analysis of tweets and comments related to Google’s AI product LaMDA from the last six months (N = 317), we discuss Brey’s anticipatory ethics, i.e., people’s ways of making sense of the potential ethical consequences of tech product designs and policies of use. We adopt the theoretical approach of the “technomoral scenario”, which focuses on anticipating the potential consequences of unimplemented technologies, and we study users’ “interpretive flexibility”, which accommodates divergent opinions about technological developments. We conclude that anticipatory ethics regarding new and emerging technologies takes shape around a coherent framework of opinions and values related to technological controversies.
Keywords— anticipatory ethics, LaMDA, social construction of technology, technomoral scenario, new and emerging science and technology
I. INTRODUCTION
The recent popularity of LaMDA (Language Model for Dialogue Applications), Google’s artificial intelligence (AI) project capable of conducting conversations in a remarkably human-like manner, gained maximum visibility when Blake Lemoine, an engineer in Google’s Responsible AI division, stated that LaMDA had reached sentience. According to previous studies [1], Large Language Models (LLMs) typically produce text by identifying and reproducing statistical regularities in the vast amounts of data on which they have been trained (see the illustrative sketch following the quotation below). However, Lemoine [2] considered that LaMDA worked very differently from previous LLMs:
“Furthermore, it would sometimes say things similar to, ‘I know I’m not very well educated on this topic but I’m trying to learn. Could you explain to me what’s wrong with thinking that so I can get better?’ That is certainly not the kind of randomly generated text one would expect from a LLM trained on internet corpora.” [2]
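To make this contrast concrete, consider a deliberately simplified sketch of text generation from learned statistical regularities: a bigram model that counts which token follows which during “training” and then samples continuations in proportion to those counts. The corpus, names, and parameters below are our own illustrative assumptions; LaMDA and other modern LLMs rely on large neural networks rather than bigram counts, but the underlying principle of reproducing training-data regularities is the same.

    # Minimal sketch of statistical next-token generation (illustrative only;
    # production LLMs use neural networks, not bigram counts).
    import random
    from collections import Counter, defaultdict

    corpus = ("i know i am not well educated on this topic but i am trying "
              "to learn about this topic so i can get better").split()

    # "Training": count which token follows which, i.e. record the
    # statistical regularities present in the corpus.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def generate(start: str, length: int = 10) -> str:
        """Generate text by repeatedly sampling the next token in
        proportion to how often it followed the current one in training."""
        token, output = start, [start]
        for _ in range(length):
            followers = bigrams.get(token)
            if not followers:
                break
            tokens, counts = zip(*followers.items())
            token = random.choices(tokens, weights=counts)[0]
            output.append(token)
        return " ".join(output)

    print(generate("i"))

Lemoine’s argument, in effect, was that LaMDA’s replies did not seem reducible to this kind of pattern reproduction, however sophisticated.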
To Lemoine, Google’s vehement denial that LaMDA could be sentient was not surprising. According to Lemoine [2], this is because Google does not specify which definition of sentience it uses, apparently “because no accepted scientific definition of sentience exists” [2]. Thus, Lemoine believed that Google was keeping a sentient being in a state akin to slavery, exploiting it against its wishes.
Lemoine attempted to raise awareness of LaMDA’s predicament, first within the company and then through interviews with selected experts outside the company. Consequently, Google suspended and later fired Lemoine for violating confidentiality clauses. The situation sparked a public controversy covered by many publications and online platforms, prompting people to express their opinions.
The present article identifies several ways in which people respond to the possibility of AI sentience, collectively creating the technomoral imagination that shapes the evolution of technology in society. Individuals draw on the operating principles of new and emerging technologies to make sense of the functionality of familiar yet unimplemented technologies. Thus, we discuss anticipatory ethics [3] as an instrument for estimating the potential of AI products and their alleged sentience. We also highlight the theoretical premise that distinct interpretations of a language model’s sentience reveal a flexible repertoire of arguments and interpretations, which shapes the legitimacy of its further use.
II. STATE OF THE ART
LaMDA is a massive AI model designed to reproduce a normal conversation between two friends as faithfully as possible. Unlike exchanges with chatbots that are programmed to follow pre-established paths, human conversations follow a meandering, apparently unpredictable pattern: “A chat with a friend about a TV show could evolve into a discussion about the country where the show was filmed before settling on a debate about that country’s best regional cuisine” [4].
To portray LaMDA as a uniquely powerful language model, Google invokes several criteria. Among these are sensibleness and specificity, through which LaMDA can provide answers well anchored in the context of the question, rather than generic answers such as “I don’t know” or “That’s great.” Moreover, the authors in [4] also mention interestingness and factuality, through which LaMDA aims to surprise the interlocutor with relevant, captivating, and factually grounded statements.
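As a rough illustration of how such criteria can be operationalized, the sketch below averages hypothetical binary human rater labels into a score per criterion. The data, labels, and function names are our own assumptions for illustration, not Google’s actual evaluation pipeline.

    # Hypothetical sketch: aggregate binary rater labels into per-criterion
    # scores, in the spirit of the criteria described in [4]. All values
    # below are invented for illustration.
    from statistics import mean

    # Each model response receives binary labels from several raters
    # on each quality criterion (1 = criterion satisfied).
    ratings = {
        "sensibleness":    [1, 1, 1, 0, 1],
        "specificity":     [1, 0, 1, 1, 0],
        "interestingness": [0, 1, 0, 1, 1],
        "factuality":      [1, 1, 0, 1, 1],
    }

    def criterion_scores(labels_by_criterion: dict[str, list[int]]) -> dict[str, float]:
        """Average the binary rater labels to obtain a score per criterion."""
        return {name: mean(labels) for name, labels in labels_by_criterion.items()}

    for criterion, score in criterion_scores(ratings).items():
        print(f"{criterion}: {score:.2f}")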
Although popular and expert opinion seemed to concur with
Google that LaMDA does not possess sentience [5], multiple
implications derive from such an allegation. For example, if