Legal responsibility for errors caused
by artificial intelligence (AI)
in the public sector
Ahmed Oudah Mohammed Al-Dulaimi
Faculty of Law, University of Anbar, Ramadi, Iraq, and Centre de Droit Public
Comparé (CDPC), Panthéon-Assas University, Paris, France, and
Mohammed Abd-Al Wahab Mohammed
Faculty of Law, University of Anbar, Ramadi, Iraq
Abstract
Purpose – This paper examines the shifting patterns of legal liability for failures that
result from the integration of artificial intelligence (AI) in the public domain. It explores
the current legal implications of AI errors, the accountability mechanisms available for
addressing them, and potential solutions to the complex issues that surround AI-related
mistakes in public administration. Toward this end, the study outlines a central problem
defined by the complex nature of errors that arise when AI is applied within the public
service.
Design/methodology/approach – AI systems have recently been implemented in the public
sector and have driven positive changes in efficiency and decision-making. However, the
growing sophistication and complexity of AI technologies have raised profound concerns
about accountability when mistakes occur in the public sector.
Findings – As governments worldwide increasingly rely on AI for critical selection and
planning processes, establishing a clear legal framework to regulate and allocate
responsibility when errors occur is paramount. The findings have the potential to guide
policymakers, criminologists and AI planners in navigating the challenges of
implementing AI in the public sector. Finally, the research assesses the potential of AI in
public administration and seeks to foster transparency, accountability and public trust.
Research limitations/implications – To provide a comprehensive response, the research employs a
multifaceted methodology that encompasses a thorough literature review, in-depth legal analysis, regulatory
assessment, exploration of various liability models, consideration of challenges and ethical considerations and
real-world case studies. This holistic approach aims to shed light on the intricate web of legal responsibility
and accountability entwined with AI in the public sector.
Practical implications – As a tool, AI is distinct from the human agents who use it.
Defining and attributing legal responsibility for its errors therefore becomes a
challenging task, owing to the ambiguous classification of AI as either software or a tool
and to questions about the accountability of its human users.
Social implications – Consequently, the primary research question emerges: “‘Employing’ AI in the public
sector: how can legal responsibility for errors be assigned and governed in ways that respond to the plural
employment-aspects of AI?”
Originality/value – The significance of this research lies in its ability to address the
emerging challenges associated with AI adoption in the public sector. As governments
worldwide increasingly rely on AI for critical selection and planning processes,
establishing a clear legal framework to regulate and allocate responsibility when errors
occur is paramount.
This research was supported by the University of Anbar/Ministry of Higher Education and Scientific
Research, Iraq. Special thanks are extended to Professor Idris Fassassi, director of the Centre de Droit
Public Comparé (CDPC) at Université Panthéon-Assas-Paris 2, for graciously hosting this research
project during my stay and for his invaluable guidance and support.
International Journal of Law and Management
Received 29 August 2024
Revised 19 December 2024
Accepted 31 January 2025
© Emerald Publishing Limited
1754-243X
DOI 10.1108/IJLMA-08-2024-0295