De Ethica. A Journal of Philosophical, Theological and Applied Ethics Vol. 6:1 (2020)

Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI)

Alexis Fritz, Wiebke Brandt, Henner Gimpel and Sarah Bayer

Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of ‘(moral) agent’ and ‘(moral) agency’ are exclusively related to human agents. Initially, the division between symbolic and sub-symbolic AI, the black box character of (deep) machine learning, and the complex relationship network in the provision and application of machine learning are outlined. Next, the ontological and action-theoretical basic assumptions of an ‘agency’ attribution regarding both the current teleology-naturalism debate and the explanatory model of actor network theory are examined. On this basis, the technical-philosophical approaches of Luciano Floridi, Deborah G. Johnson, and Peter-Paul Verbeek will all be critically discussed. Despite their different approaches, they tend to fully integrate computational behavior into their concept of ‘(moral) agency.’ By contrast, this essay recommends distinguishing conceptually between the different entities, causalities, and relationships in a human-computer interaction, arguing that this is the only way to do justice to both human responsibility and the moral significance and causality of computational behavior.
Introduction: Exemplary harmful outcomes

Artifacts have played a substantial role in human activity since the first Paleolithic hand axes came into use. However, the emergence of an (ethical) discussion about which roles can be attributed to the people and artifacts involved in an action is only a consequence of the increasing penetration of artifacts carrying ‘artificial intelligence’ (AI) into our everyday lives. Let us consider three examples of the potentially harmful effects of sophisticated machine learning approaches: