Journal of Responsible Technology 19 (2024) 100089
Available online 17 June 2024
2666-6596/© 2024 The Author(s). Published by Elsevier Ltd on behalf of ORBIT. This is an open access article under the CC BY license
(http://creativecommons.org/licenses/by/4.0/).
Decoding faces: Misalignments of gender identification in
automated systems
Elena Beretta a,*, Cristina Voto b, Elena Rozera c
a Vrije Universiteit Amsterdam, Faculty of Science, Department of Computer Science, User-Centric Data Science group, De Boelelaan 1105, 1081 HV Amsterdam, Netherlands
b University of Turin, ERC FACETS, Department of Philosophy and Educational Sciences, via Sant'Ottavio, 20, 10124 Torino TO, Italy
c Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, Netherlands
ARTICLE INFO
Keywords: Automatic gender recognition; Gender identity; Face recognition
ABSTRACT
Automated Facial Analysis technologies, predominantly used for facial detection and recognition, have garnered significant attention in recent years. Although these technologies have seen advancements and widespread adoption, biases embedded within them have raised ethical concerns. This research delves into the disparities of Automatic Gender Recognition systems (AGRs), particularly their oversimplification of gender identities through a binary lens. Such a reductionist perspective is known to marginalize and misgender individuals. This study set out to investigate the alignment of an individual's gender identity and its expression through the face with societal norms, and the perceived difference between misgendering experiences caused by machines versus humans. Insights were gathered through an online survey that used an AGR system to simulate misgendering experiences. The overarching goal is to shed light on the nuances of gender identity and to guide the creation of more ethically responsible and inclusive facial recognition software.
1. Introduction
In the rapidly evolving landscape of Artificial Intelligence, where the
interaction between technology and human identity is increasingly
scrutinized, Automated Facial Analysis (AFA) emerges as a critical
domain for ethical and societal reflection. Employing advanced deep neural networks, the discipline comprises two key processes: facial detection and facial recognition. Facial detection is the task of identifying the presence of a face within a digital image or a video stream. Upon successful detection, facial recognition is undertaken to distinguish specific individuals based on their unique facial attributes (Scheuerman et al., 2019). The reliability and precision
of these systems have seen remarkable advancements over time. This
rapid growth has led to their widespread adoption across a diverse range
of sectors. Facial recognition technology, like Automated Facial Analysis more broadly, was initially used primarily for security (Balla & Jadhao, 2018; Karovaliya et al., 2015) and law enforcement purposes (Bradford et al., 2020; Kaur et al., 2020). However, its applications now extend far
beyond these traditional domains. In the realm of recruitment, facial
recognition is being explored to streamline hiring processes and assess
candidate suitability (Majumder & Bhattacharya, 2021; Mujtaba &
Mahapatra, 2019). Business applications are also emerging, with companies leveraging this technology for customer engagement and
personalized marketing (Christopher Hlongwane et al., 2021; Zeng &
Chiu, 2021). In education, it is used for monitoring student engagement
and attendance (Andrejevic & Selwyn, 2020; Krithika et al., 2017),
while the healthcare sector is exploring its use in patient identification
and diagnosis (Bisogni et al., 2022). Furthermore, the analysis of facial
expressions (Mane & Shah, 2019; Tian et al., 2005) and emotions (Wolf,
2015) through facial recognition is gaining traction, providing valuable
insights in psychological and behavioral studies. However, the integration of these advanced technologies into the fabric of our society necessitates a careful and thorough consideration of the ethical implications that accompany their use. The expansive use of these systems in diverse societal contexts can lead to cultural misunderstandings
and misrepresentations. For example, the way these systems interpret
and categorize facial features can be heavily influenced by the cultural
biases inherent in their programming and data sets. This can result in a
technology that, albeit inadvertently, reinforces stereotypical or
culturally insensitive portrayals of certain groups (Buolamwini & Gebru,
2018). Hence, despite advancements in their accuracy, these systems are
not immune to biases, which can result in discriminatory practices. The
* Corresponding author.
E-mail address: elena.beretta@vu.nl (E. Beretta).
https://doi.org/10.1016/j.jrt.2024.100089