AI and Ethics
https://doi.org/10.1007/s43681-022-00185-1
ORIGINAL RESEARCH
The case for virtuous robots
Martin Gibert¹

* Martin Gibert
  martin.gibert@umontreal.ca

¹ Centre de Recherche en Éthique, University of Montreal, Montreal, QC, Canada
Received: 23 November 2021 / Accepted: 1 June 2022
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2022
Abstract
Is it possible to build virtuous robots? And is it a good idea? In this paper in machine ethics, I offer a positive answer to both
questions. Although moral architectures based on deontology and utilitarianism have been most often considered, I argue
that a virtue ethics approach may ultimately be more promising for programming artificial moral agents (AMAs). The basic
idea is that a robot should behave as a virtuous person would behave (or would recommend). Now, with the help of machine learning technology,
it is conceivable to get an AMA to learn from moral exemplars. To support my claim, I sketch the steps of building such a
virtuous robot, using the thought experiment of programming an autonomous car facing a trolley-like dilemma situation. It
appears that, at least in certain contexts, the virtue ethics approach can provide its own and original solution. I then give four
reasons to favor it. Not only are virtuous robots technically feasible, but they have the advantage over their deontological
and utilitarian counterparts of fostering normative consensus between these moral schools, improving social acceptability,
and beginning to address the technical challenge of moral perception.
Keywords Machine ethics · Virtue ethics · Exemplarism · Supervised learning · Machine learning · Artificial moral agent ·
Virtuous robot · Deontologist robot · Utilitarian robot
1 Introduction
The Trolley Dilemma (or problem) has been widely used by
moral philosophers since its conception in 1967 by Philippa
Foot [1]. In its canonical version, it asks whether to let a trol-
ley kill five innocent workers or to pull a lever that diverts
it to a side road where one innocent worker will die. This
thought experiment generally elicits utilitarian intuitions,
while other versions, such as the footbridge one developed
by Judith Jarvis Thomson [2], elicit more deontological
intuitions. Philosophers and teachers greatly appreciate this
tool for questioning and illustrating moral reasoning. But
something new is happening with the development of AI
and the emergence of autonomous vehicles (transportation
robots): the thought experiment is becoming, in a sense, real.
Nowadays, engineers and philosophers need to address this
issue and think seriously about how to translate ethics into
algorithms.
Now, let’s say you have to program such an autonomous
vehicle to solve a specific trolley-like dilemma: save a child
or an elderly person (assuming your robot is able to distin-
guish between children and elderly people). What are the
options? You can program it to systematically protect either
the elderly or children. But these two are not the only options.
You can make the decision randomly, with a 50/50 chance of
saving one or the other. Finally, you can opt for more complex
programming that weights the options, for example with a
35% chance of saving the elderly person and a 65% chance
of saving the child.
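As a purely illustrative sketch (assuming the perception and classification problems are already solved, and using hypothetical function names of my own, not part of the paper's proposal), the three kinds of programming might look like this in Python:

```python
import random

def fixed_rule(protected_group: str) -> str:
    # Option 1: systematically protect one group ("child" or "elderly").
    return protected_group

def coin_flip() -> str:
    # Option 2: decide randomly, with a 50/50 chance of saving either party.
    return random.choice(["child", "elderly"])

def weighted_choice(p_child: float = 0.65) -> str:
    # Option 3: weighted programming, e.g., a 65% chance of saving the child
    # and a 35% chance of saving the elderly person.
    return "child" if random.random() < p_child else "elderly"
```

The fixed rule and the coin flip correspond to the first two options; the weighted choice generalizes them, since a probability of 1.0 or 0.5 recovers the systematic and random cases.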
Different moral reasons and moral theories can justify
these different options. For example, a utilitarian approach
aimed at maximizing well-being may recommend saving
children rather than the elderly because their life expectancy
is (generally) longer. On the other hand, a deontological
approach that values equality in the Kantian way and
condemns ageism would probably favor the 50/50 option. In this paper, I want
to show that the third classical approach in normative eth-
ics, virtue ethics, can support programming with weighted
options.
Indeed, the weighted options could reflect the decision of
virtuous people in a similar situation (35% considering the
car should save the elderly and 65% considering it should
save the child).
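As a minimal sketch of this idea (the survey figures and names below are hypothetical; the machine learning proposal is developed in the rest of the paper), the weights could be estimated from the recorded judgments of moral exemplars and then used to sample a decision:

```python
import random

# Hypothetical data: 100 moral exemplars asked whom the car should save.
exemplar_judgments = ["child"] * 65 + ["elderly"] * 35

def virtuous_decision(judgments: list[str]) -> str:
    # Sample a decision in proportion to how often virtuous people chose it.
    p_child = judgments.count("child") / len(judgments)
    return "child" if random.random() < p_child else "elderly"
```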