Gender Bias in AI Recruitment Systems: A
Sociological- and Data Science-based Case Study
Sheilla Njoto
Faculty of Arts
University of Melbourne
Parkville, VIC, Australia
s.njoto@unimelb.edu.au
Aidan McLoughney
School of Computing and Information
Systems, University of Melbourne
Parkville, VIC, Australia
a.mcloughney@unimelb.edu.au
Marc Cheong
School of Computing and Information
Systems & CAIDE, University of Melbourne
Parkville, VIC, Australia
marc.cheong@unimelb.edu.au
Leah Ruppanner
Faculty of Arts
University of Melbourne
Parkville, VIC, Australia
leah.ruppanner@unimelb.edu.au
Reeva Lederman
School of Computing and Information
Systems, University of Melbourne
Parkville, VIC, Australia
reeva.lederman@unimelb.edu.au
Anthony Wirth
School of Computing and Information
Systems, University of Melbourne
Parkville, VIC, Australia
awirth@unimelb.edu.au
Abstract—This paper explores the extent to which gender
bias is introduced in the deployment of automation for hiring
practices. We use an interdisciplinary methodology to test our
hypotheses: observing a human-led recruitment panel and
building an explainable algorithmic prototype from the ground
up, to quantify gender bias. The key findings of this study are
threefold: identifying potential sources of human bias from a
recruitment panel’s ranking of CVs; identifying sources of bias
from a potential algorithmic pipeline which simulates human
decision making; and recommending ways to mitigate bias from
both aspects. Our research provides an innovative research design that combines social science and data science to theorise how automation may introduce bias into hiring practices, and to pinpoint where that bias is introduced. It also furthers the current
scholarship on gender bias in hiring practices by providing key
empirical inferences on the factors contributing to bias.
Keywords— algorithmic bias, gender, recruitment, CV.
I. INTRODUCTION
Existing scholarship has long identified gender biases in
hiring practices. Human conscious and unconscious gender
bias influences decision mechanisms and has harmed the
representation of women in the labour force [1]–[5]. In the past two decades, however, computer-based automated decision-making (ADM) has become increasingly prevalent in recruitment. Given its supposed pragmatism, automation is assumed to be more impartial, scientific, and mathematical, and is thereby expected to mitigate the very problem of human bias [6]. ADM has thus emerged as an apparent solution to the growing challenges of recruitment [7], [8]. However, the literature has increasingly recognised that ADM is vulnerable to making unfair decisions [7], [9], [10], and has attempted to dissect the issue of fairness along intersecting dimensions of race, gender, ability, sexuality, and others [8], [11]–[19].
Despite these attempts, however, to our knowledge there has been little technical research testing theories about ADM that combines a focus on recruitment, sociological methodology, the implementation of a data science pipeline, and the potential repercussions for women. In this paper, we seek to identify the extent to which recruitment algorithms may be biased against women's CVs, through a set of experiments designed to answer the following research question: To what extent do algorithms introduce human bias into algorithmic predictions?
To answer this, our study utilises a multidisciplinary
research design combining social science and data science.
Briefly, the social science component involves a human panel rating synthetic job candidates against a set of job advertisements; candidates are represented by simulated, anonymised real-life CVs with controlled biographic data. The data science component involves building an algorithm from scratch (using off-the-shelf, industry-standard tools) to replicate the human panel's decision-making and preferences as closely as possible. For the purposes of this study, we start 'from first principles' and build a prototype system that allows us to keep track of its 'inner workings' and interrogate any potential sources of bias against women.
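To make this concrete, the following is a minimal, hypothetical sketch of such a transparent pipeline, using scikit-learn as the off-the-shelf tooling: a bag-of-words representation of CV text and a simple linear model fitted to panel ratings, whose per-term weights can be inspected directly. The data, column names, model choice, and library are illustrative assumptions, not the exact implementation used in this study.

```python
# A minimal, illustrative sketch of a 'first principles' pipeline:
# off-the-shelf tools (scikit-learn here, as an assumption) and a
# transparent linear model whose weights stay inspectable for bias.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
import numpy as np

# Hypothetical training data: CV texts and the panel's average ratings.
cv_texts = ["Led a team of five engineers ...",
            "Collaborative and warm team player ..."]
panel_scores = np.array([4.2, 3.1])

# Turn each CV into a bag-of-words feature vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(cv_texts)

# A deliberately simple linear model: every learned weight is
# attributable to a single term, so the 'inner workings' are visible.
model = Ridge(alpha=1.0).fit(X, panel_scores)

# Interrogate the model: which terms push predicted ratings up or down?
terms = vectorizer.get_feature_names_out()
for term, w in sorted(zip(terms, model.coef_),
                      key=lambda p: -abs(p[1]))[:10]:
    print(f"{term:>15s}  {w:+.3f}")
```

A pipeline of this shape makes any gendered terms that correlate with panel ratings directly visible in the learned weights, which is precisely what a black-box commercial system would hide.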
II. THEORETICAL BACKGROUND
A. The landmark of gender bias
Women’s position in society has shaped the way in which
women are socially perceived [4]. Traditional gender norms
frame women as homemakers, responsible for the care of
children and family [20], a phenomenon studied across various disciplines, including sociology and feminist philosophy [21], [22]. Today, this gender-role norm has
somewhat weakened [8], but its impact on recruitment persists
[23]: women are often still associated with domestic work
[24], and mistakenly presumed to be less productive at work
when compared to men, especially following the transition
into motherhood.
As a result, androcentric biases are a norm when it comes
to describing men and women [2]. Gender bias occurs when traits tied to gender stereotypes are applied to individuals, regardless of whether a given individual actually exhibits them [4]. For example, women use more communal, social, and expressive words, and use different adjectives to describe themselves and others [1], [8]. Similarly, in formal recommendation letters, men tend to be described with adjectives of 'prominence', such as 'outstanding' or 'unique' [1], [8], [25], whereas women are described with words carrying "more social and less directive connotations" [8], e.g., 'warm' and 'collaborative' [4].
B. Gender Bias and Recruitment
We now turn to how gender bias extends to job recruitment. Human recruitment panels rely on cognitive shortcuts, or heuristics, to shortlist candidates for positions