Response on the draft ethical guidelines for trustworthy AI produced by the European Commission's High-Level Expert Group on Artificial Intelligence.

The AI and Robotics group at the Tilburg Institute for Law, Technology and Society

Contributors: Merel Noorman, Esther Keymolen, Maurice Schellekens, Aviva de Groot, Silvia da Conca, Robbert Coenmans (LL&SP department), Ronald Leenes, Bo Zhao, Lorenzo Dalla Corte, Emre Bayamlioglu, Robin Pierce, Linnet Taylor

31 January 2019

The HLEG has taken on the ambitious project of developing general guidelines for ethical AI and has, as a first step in this process, published a draft document for stakeholders to review and comment upon. In this draft document, the group rightly builds upon a range of existing frameworks, principles and manifestos. It proposes to centre the guidelines on the concept of Trustworthy AI, which it elaborates in three sections that each address a different level of abstraction: ethical purpose rooted in fundamental rights, technical and non-technical methods, and an assessment list.

We would like to congratulate the HLEG on this first step in a complex and multifaceted process and compliment the group on finding a shared basis to build upon. In particular, we welcome the rights-based approach that the HLEG chose to pursue, as it roots the guidelines in shared values and principles within Europe while at the same time aligning them with many of the existing guidelines. Moreover, we were pleased to see the substantive definition of AI as outlined in the document published in parallel with the guidelines and summarized in the draft document. In particular, by distinguishing between AI as a technology and artefact designed and deployed by human beings on the one hand, and AI as a scientific discipline on the other, the authors have highlighted the extensiveness and heterogeneity of AI.
They have also signalled the human agency and work involved in making these AI systems function. The definition's focus on pre-determined goals and parameters gives regulators something to work with. The HLEG also moves the discussion on ethical AI a step forward by focusing not only on rights, principles and values, but also on the implementation and embedding of the technology. The ambition to provide concrete tools and methods for policy makers, developers, and citizens is needed to bring ethical AI into practice, and we encourage further work in this direction.

As the HLEG has explicitly asked for critical feedback, we would like to offer a few suggestions and comments for the further improvement of the document. We will first provide some general comments and then turn to more specific comments per section of the guidelines document.