Penultimate version. Chapter for Robot Ethics 2.0 (OUP, 2017)

When robots should do the wrong thing

Brian Talbot, Ryan Jenkins, and Duncan Purves

In the first section, we argue that deontological evaluations do not apply to the actions of robots. For this reason, robots should act like consequentialists, even if consequentialism is false. In the second section, we argue that, even though robots should act like consequentialists, it is sometimes wrong to create robots that do. At the end of that section and in the next, we show how specific forms of uncertainty can make it permissible, and sometimes obligatory, to create robots that obey moral views that one thinks are false.

<1>Robots are not agents

All events that occur can be placed along a continuum of agency. At one end of this continuum are events whose occurrence is fundamentally explained by the intentional action of some robust moral agent, like a human being.1 The most plausible examples of these are human attempts to perform actions. At the other end of this continuum are events whose occurrence is in no way caused by the intentional action of a moral agent. Certain natural disasters, such as earthquakes, are the prime examples of this kind of event. We can call these latter events proper natural disasters. Because robot actions are closer to the proper natural disaster end of the continuum than to the agential end, they are not subject to certain kinds of moral evaluation.2