Comment

An embedded ethics approach for AI development

There is a need to consider how AI developers can be practically assisted in identifying and addressing ethical issues. In this Comment, a group of AI engineers, ethicists and social scientists suggest embedding ethicists into the development team as one way of improving the consideration of ethical issues during AI development.

Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin and Alena Buyx

Ethical concerns around artificial intelligence (AI) technology have prompted a rush towards ‘AI ethics’ to consider how AI technology can be developed and implemented in an ethical manner 1–4 . A recent scoping review identified 84 documents containing ethical principles or guidelines for AI that have been issued by a wide range of public and private organizations 1 . In the absence of legally enforceable regulations, those developing AI technology are largely left to translate the existing high-level ethical principles as they see fit 5 . Furthermore, it has recently been argued that a principled approach is unlikely to be successful in AI because AI development “lacks common aims and fiduciary duties, professional history and norms, proven methods to translate principles into practice, and robust legal and professional accountability mechanisms” compared to professions like medicine 5 . Although it is clear that a growing number of technology developers are willing to consider the ethical challenges around AI 6 , most do not have the necessary competency to translate high-level ethical principles into practice. This is unsurprising, as the professional backgrounds of AI developers usually do not include systematic training in ethics. Conversely, few trained ethicists or social scientists currently work in tech companies, and there is no established culture of practical exchange between these fields.
This creates a gap when it comes to translating ethical considerations into ethical practices, and there is a need to develop more concrete approaches 7 . It is imperative that the ethical challenges of AI are addressed as early as possible during the development process to ensure the ethically, socially and legally responsible design and implementation of these applications 8 . While various suggestions have emerged, so far there is no cohesive approach to integrating ethics into the development of AI and capitalizing on the potential of including ethics upstream in the development process. We propose that an ‘embedded ethics’ approach can fill this gap and promote a more ethical development of AI applications.

What can embedded ethics offer AI development?

By engaging with the term embedded ethics in a much wider sense than prior uses 9,10 , we aim to denote the practice of integrating the consideration of social, ethical and legal issues into the entire development process in a deeply integrated, collaborative and interdisciplinary way. The overarching objective of an embedded ethics approach is to help develop AI technologies that are ethically and socially responsible, and that benefit and do not harm individuals and communities. To achieve this goal, embedded ethics is integrated into the development process from the beginning, so as to anticipate, identify and address social and ethical issues that arise during the process of developing an AI technology, including the planning, ethics approval, designing, programming, piloting, testing and implementation phases. At each step of the way, points of ethical uncertainty or disagreement are analysed regarding what course of action ought to be pursued or how an ethical concept should be understood in relation to an AI healthcare technology 11 . This includes explaining and clarifying complex ethical issues, and using the methods of ethical reasoning to justify a particular position or course of action 11 .
In addition to the well-known ethical problems that can arise in the development of AI technologies (for example, privacy, data protection, transparency and explainability, bias, responsibility, and the impact of automation on employment) 12–15 , there will be many issues that are as yet unknown or are specific to a particular domain or project. These require ongoing analysis and ad hoc suggestions on how to deal with them, developed between all those involved in the process. An embedded ethics approach thus seeks to establish transparency regarding the uncertainty or disagreement at hand and to bring a new perspective ‘to the workbench’ by offering a variety of ethically defensible strategies for how to address pertinent concerns.

How embedded ethics works in practice

Embedded ethics could be integrated into the development process in several possible ways, depending on the context and resources available. Arguably, the gold standard for embedded ethics integration would be an ethicist, or a team of ethicists, as a dedicated member of the development team. This would facilitate regular formal and informal exchanges between ethicists and the other members of the development team. This approach has been successfully employed in, for example, the field of genomics, where an ethicist has been embedded in the synthetic biology lab of George Church at Harvard Medical School 16 . In situations where having an ethicist as a dedicated member of the staff is not feasible due to resource constraints, another option would be to have ethicists working elsewhere (for example, at a university or another research facility) who regularly join development meetings, in person or virtually. However, a general requirement is that embedded ethics should involve regular exchange between ethicists and other team members from the beginning of development.
It is not sufficient for the development team to call on ethicists only when they perceive ethical or social issues; there should be regularly scheduled exchanges between the ethicist(s) and other team members to reduce the risk that ethical issues are overlooked or conflicts glossed over. Furthermore, the presence of an embedded ethicist does not absolve other team members of their responsibility for engaging in ethical consideration.

NATURE MACHINE INTELLIGENCE | www.nature.com/natmachintell