ORIGINAL PAPER

Developing Automated Deceptions and the Impact on Trust

Frances S. Grodzinsky & Keith W. Miller & Marty J. Wolf

Received: 21 August 2013 / Accepted: 7 February 2014
© Springer Science+Business Media Dordrecht 2014

Abstract As software developers design artificial agents (AAs), they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact's development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to gain our trust? Is trust generated through "technological enchantment" warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.

Keywords Deception . Trust . Artificial agents

1 Introduction

In her book Alone Together, MIT anthropologist Turkle (2011, p. 90) writes:

In The Republic, Plato says, "Everything that deceives may be said to enchant." The sentiment also works when put the other way around: That which enchants, deceives.

Philos. Technol.
DOI 10.1007/s13347-014-0158-7

F. S. Grodzinsky (*)
Sacred Heart University, 5151 Park Avenue, Fairfield, CT 06825, USA
e-mail: grodzinskyf@sacredheart.edu

K. W. Miller
University of Missouri–St. Louis, 1 University Blvd, St. Louis, MO 63121, USA
e-mail: kmill217@gmail.com

M. J. Wolf
Bemidji State University, 1500 Birchmont Drive, #23, Bemidji, MN 56601, USA
e-mail: mjwolf@bemidjistate.edu