BRIEFING
STOA Options Brief
EPRS | European Parliamentary Research Service
Scientific Foresight Unit (STOA)
PE 624.262, April 2019, EN

A governance framework for algorithmic accountability and transparency

Algorithmic systems are increasingly used as part of decision-making processes in the public and private sectors, with potentially significant consequences for individuals, organisations and societies. However, the very properties of scale, capability to handle complex datasets, and autonomous learning that make these systems useful also make it difficult to provide clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, a significant concern for decision-making processes that can have a considerable impact on fundamental human rights. On the basis of a review of existing proposals for the governance of algorithmic systems, the study offers four sets of policy options, each addressing a different aspect of algorithmic transparency and accountability: i) awareness raising – education, journalism and whistleblowers; ii) accountability in public sector use of algorithmic systems; iii) regulatory oversight and legal liability; and iv) global coordination of algorithmic governance.

Awareness raising – education, journalism, whistleblowers

The general public struggle to understand how algorithmic systems work, what impact they have, and how to evaluate their decisions critically. The same is true of many highly skilled non-technical professionals, such as judges and lawyers. A broad understanding of algorithmic systems, however, will do little to provide accountability unless there is a public debate about the types and properties of the algorithmic systems associated with the decisions concerned. Notifications should be standardised and short, akin to nutrition labels, while the information provided should be limited to that which can inform a user's decisions or wider public understanding.

Investigative journalism and whistleblowers play an important role in uncovering questionable uses and outcomes of algorithmic decision-making (e.g. Cambridge Analytica election manipulation). Whistleblowing by (ex-)employees is an important part of activism aimed at changing unethical company projects (e.g. Google Dragonfly). Beyond their role as independent watchdogs, journalists help to present relevant aspects of algorithms in plain language with understandable narratives. Journalistic investigations have sparked broad public conversations and important normative debates, including triggering new academic studies (e.g. ProPublica's report on 'Machine Bias' in the COMPAS algorithm triggered a series of studies into the meaning of 'fair' in algorithms; a sketch of the underlying tension appears at the end of this section). To uncover cases of algorithmic 'malpractice', journalists combine traditional investigative practices with computationally intensive methods to reverse engineer algorithms (e.g. 'black box testing', sketched below) so as to tease out the consequences of algorithmic system use. Reverse engineering can, however, involve violations of trade secrets and/or copyright rules.
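To make the idea of 'black box testing' concrete, the following minimal Python sketch shows one way an outside auditor might probe an opaque scoring system. Everything here is hypothetical: opaque_score stands in for a deployed system that can only be observed through its inputs and outputs, and the paired-query strategy is one illustrative auditing technique, not a description of any specific journalistic investigation.

```python
import random

def opaque_score(age: int, postcode: str, prior_offences: int) -> float:
    """Stand-in for the system under test. A real audit would query the
    deployed service; this synthetic rule exists only so the sketch runs."""
    score = 0.3 + 0.1 * prior_offences
    if postcode.startswith("9"):  # hidden proxy variable the auditor suspects
        score += 0.25
    return min(score, 1.0)

def probe_postcode_effect(n: int = 1_000) -> float:
    """Send paired queries that differ only in postcode and measure the
    average shift in the returned score (a simple disparity probe)."""
    random.seed(0)
    total = 0.0
    for _ in range(n):
        age = random.randint(18, 70)
        priors = random.randint(0, 5)
        baseline = opaque_score(age, "10115", priors)
        contrast = opaque_score(age, "90210", priors)
        total += contrast - baseline
    return total / n

if __name__ == "__main__":
    print(f"Mean score shift attributable to postcode: {probe_postcode_effect():+.3f}")
```

Because such probes require submitting large numbers of synthetic queries to a live system, they can sit uneasily with terms of service, trade secret and copyright rules, which is the legal tension noted above.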
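The debate triggered by the 'Machine Bias' report turns on the fact that intuitively reasonable definitions of 'fair' can contradict one another. The hedged sketch below uses invented confusion-matrix counts for two groups to show that a classifier can be equally precise for both groups (one notion of fairness) while producing very different false positive rates (another notion); the numbers are illustrative only and do not reproduce the COMPAS data.

```python
from dataclasses import dataclass

@dataclass
class Counts:
    tp: int  # correctly flagged
    fp: int  # wrongly flagged
    fn: int  # wrongly cleared
    tn: int  # correctly cleared

    @property
    def precision(self) -> float:
        """Of those flagged, the share who were actual positives (predictive parity)."""
        return self.tp / (self.tp + self.fp)

    @property
    def false_positive_rate(self) -> float:
        """Of actual negatives, the share wrongly flagged (error-rate balance)."""
        return self.fp / (self.fp + self.tn)

# Invented counts for two demographic groups with different base rates.
group_a = Counts(tp=40, fp=10, fn=10, tn=40)
group_b = Counts(tp=16, fp=4, fn=4, tn=76)

for name, g in [("A", group_a), ("B", group_b)]:
    print(f"Group {name}: precision={g.precision:.2f}, "
          f"false positive rate={g.false_positive_rate:.2f}")
# Both groups see precision 0.80, yet group A's false positive rate (0.20)
# is four times group B's (0.05): 'fair' by one metric, unfair by another.
```

Subsequent academic work showed that when base rates differ between groups, no non-trivial classifier can satisfy both criteria at once, which helps explain why a journalistic report could trigger a sustained scholarly debate rather than a simple technical fix.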