Can robots understand values?: Artificial morality and ethical symbol grounding

Minao Kukita

1 Introduction

The main purpose of this paper is to reflect on the question of whether an artificial system can be a moral agent. Recent decades have seen great advances in autonomous machines and software programs, and their increasing participation in many areas of our lives. This situation has led engineers and philosophers to work on incorporating ethical components into machines. We will call this field of research ``artificial morality (AM)''. AM, besides being of technological interest and practical significance, is also likely to have a considerable impact on philosophy and ethics. For one thing, we have to consider the extent to which we can permit robots to take part in our moral activities. For another, AM raises the question of whether we can create artificial agents which do not merely simulate moral behavior but are really moral. In this article, we will explore this possibility and argue that AM is susceptible to the same criticisms that have been addressed to AI. In particular, an equivalent of the symbol grounding problem arises in AM, which we call the ethical symbol grounding problem, and it is just as relevant there. We will then consider how this problem may be dealt with.

2 The development of AM

Engineers and philosophers have now set out to develop artificial moral agents (AMAs) --- artificial systems capable of moral decision making. In this section we review some of the attempts at, and approaches to, developing AMAs.

Simple automatic accident-prevention systems are already in use in everyday situations, from petroleum heaters to large aircraft. If the situations in which a system operates are sufficiently complex, its actions may involve ethical dimensions --- say, the emotions of the people affected. With this prospect in view, philosophers like Wallach and Allen (2009) suggest that we should design and engineer artificial moral agents. If AMAs are sufficiently autonomous, and sensitive and responsive to the interests and well-being of the humans with whom they operate, then they will have ``functional morality.'' This means that their actions need not be considered fully moral, but that they function as if they were moral agents, whatever conditions --- consciousness, emotion, will, intention, reason and so on --- are needed for something to be fully moral.

Susan and Michael Anderson have already launched a project to implement moral principles in machines. Using techniques from machine learning, they created several systems that simulate the judgments of human experts in health care. The Andersons and their colleagues have thereby shown that explicit ethical computation can be implemented in a machine.[1] However, they make it clear that they are not concerned with the question of the moral standing of machines, or of whether a machine is really moral. Their concern is exclusively with the question of whether morality is computable, and they attempt to answer it by building such systems.

Both Wallach and Allen and the Andersons put aside the question of the moral standing of moral machines. Meanwhile, Floridi and Sanders (2004) dare to go further. According to them, we should consider a certain kind of machines and computer programs to be genuine moral agents. If not, i.e. if we continue to adhere to classical moral philosophies that treat humans exclusively as moral entities and disregard all the rest, we cannot properly address the urgent problems in computer ethics.
For, since humans, machines, and software programs are inextricably involved in the world of computer networking systems, it is difficult to confine the

[1] Cf. Anderson, Anderson and Armen (2004); Anderson and Anderson (2007).
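To give a rough idea of what the Andersons' ``explicit ethical computation'' might look like in practice, the following is a minimal sketch, not their actual system: each candidate action is scored by a weighted sum of how well it satisfies several prima facie duties, and the highest-scoring action is recommended. The duty names, weights, satisfaction values, and the score and recommend functions are hypothetical illustrations only.

```python
# Hypothetical sketch of explicit ethical computation: an action is scored by a
# weighted sum of how well it satisfies several prima facie duties.
# Duty names and weights are illustrative assumptions, not the Andersons' system.

DUTY_WEIGHTS = {
    "nonmaleficence": 3.0,   # avoid harming the patient
    "beneficence": 2.0,      # promote the patient's well-being
    "autonomy": 2.5,         # respect the patient's informed choices
}

def score(action_duties):
    """Return the weighted duty-satisfaction score of one action.

    action_duties maps each duty to a value in [-2, 2]:
    -2 = severe violation, 0 = neutral, +2 = strong satisfaction.
    """
    return sum(DUTY_WEIGHTS[duty] * value for duty, value in action_duties.items())

def recommend(actions):
    """Pick the action whose weighted duty score is highest."""
    return max(actions, key=lambda name: score(actions[name]))

if __name__ == "__main__":
    # Two candidate actions, each annotated with duty-satisfaction values.
    actions = {
        "accept_refusal": {"nonmaleficence": -1, "beneficence": -1, "autonomy": 2},
        "try_again_to_persuade": {"nonmaleficence": 1, "beneficence": 1, "autonomy": -1},
    }
    print(recommend(actions))
```

Whether such a weighted-sum procedure simulates moral judgment or constitutes it is precisely the question the present paper is concerned with.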