Emotional Design: Chapter 7
7: THE FUTURE OF ROBOTS
Donald A. Norman
Last revised: March 23, 2003

Science fiction can be a useful source of ideas and information, for it is, in essence, detailed scenario development. Writers who have used robots in their stories have had to imagine in considerable detail just how they would function within everyday work and activities. Isaac Asimov was one of the earliest thinkers to explore the implications of robots as autonomous, intelligent creatures, equal (or superior) in intelligence and abilities to their human masters. Asimov wrote a sequence of novels analyzing the difficulties that would arise if autonomous robots populated the earth. He realized that a robot might inadvertently harm itself or others, both through its actions and, at times, through its lack of action. He therefore developed a set of postulates that might prevent these problems; but, as he did so, he also realized that they were often in conflict with one another. Some conflicts were simple: given a choice between preventing harm to itself or to a human, the robot should protect the human. But other conflicts were much more subtle, much more difficult.

Eventually, he postulated three laws of robotics (laws one, two, and three) and wrote a sequence of stories to illustrate the dilemmas that robots would find themselves in, and how the three laws would allow them to handle these situations. These three laws dealt with the interaction of robots and people, but as his story lines progressed into more complex situations, Asimov felt compelled to add an even more fundamental law dealing with the robots' relationship to humanity itself. This one was so fundamental that it had to come first; but, because he already had a law labeled One, this fourth law had to be labeled Zero.

Asimov's vision of people and of the workings of industry was strangely crude. It was only his robots that behaved well.
When I reread his books in preparation for this chapter, I was surprised at the discrepancy between my fond memories of the stories and my response to them now. His people are rude, sexist, and naïve. They seem unable to converse unless they are insulting each other, fighting, or jeering. The U.S. Robots and Mechanical Men Corporation doesn't fare well either. It is secretive, manipulative, and tolerates no error: make one mistake and it fires you. Asimov spent his entire life in a university: maybe that is why he had such a weird view of the real world.

Nonetheless, his analysis of the reaction of society to robots – and of robots to humans – was interesting. He thought society would turn against robots; and, indeed, he wrote that "most of the world governments banned robot use on earth for any purpose other than scientific research between 2003 and 2007." 1 (Robots, however, were allowed for space exploration and mining; and in Asimov's stories, these activities are widespread in the early 2000s, allowing the robot industry to survive and grow.) The Laws of Robotics are intended to reassure humanity that robots will not be a threat and will, moreover, always be subservient to humans.

Today, even our most powerful and functional robots are far from the stage Asimov envisioned. They do not operate for long periods without human control and assistance. Even so, the laws are an excellent tool for examining just how robots and humans should interact.

Draft for "Emotional Design." Copyright © 2003 Donald A. Norman. All rights reserved. http://www.jnd.org don@jnd.org