Deriving Humanlike Arm Hand System Poses

Minas Liarokapis,1 School of Engineering and Applied Science, Yale University, 9 Hillhouse Avenue, New Haven, CT 06511, e-mail: minas.liarokapis@yale.edu
Charalampos P. Bechlioulis, School of Mechanical Engineering, National Technical University of Athens, Athens 15780, Greece, e-mail: chmpechl@mail.ntua.gr
Panagiotis K. Artemiadis, School for Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ 85287, e-mail: panagiotis.artemiadis@asu.edu
Kostas J. Kyriakopoulos, School of Mechanical Engineering, National Technical University of Athens, Athens 15780, Greece, e-mail: kkyria@mail.ntua.gr

Robots are rapidly becoming part of our lives, coexisting, interacting, and collaborating with humans in dynamic and unstructured environments. Mapping of human to robot motion has become increasingly important, as human demonstrations are employed in order to “teach” robots how to execute tasks both efficiently and anthropomorphically. Previous mapping approaches utilized complex analytical or numerical methods for the computation of the robot inverse kinematics (IK), without considering the humanlikeness of robot motion. The scope of this work is to synthesize humanlike trajectories for robot arm-hand systems with arbitrary kinematics, formulating a constrained optimization scheme with minimal design complexity and specifications (only the robot forward kinematics (FK) are used). In so doing, we capture the actual human arm-hand kinematics, and we employ specific metrics of anthropomorphism, deriving humanlike poses and trajectories for various arm-hand systems (e.g., even for redundant or hyper-redundant robot arms and multifingered robot hands).
The proposed mapping scheme exhibits the following characteristics: (1) it achieves an efficient execution of specific human-imposed goals in task-space, and (2) it optimizes anthropomorphism of robot poses, minimizing the structural dissimilarity/distance between the human and the robot arm-hand systems. [DOI: 10.1115/1.4035505]

1 Introduction

Since the beginnings of robotics, mapping of human to robot motion has been necessary for a series of applications that range from teleoperation and telemanipulation studies to closed-loop, anthropomorphic grasp planning. In particular, the extraction of anthropomorphic robot motion is useful for robots that collaborate, interact, and coexist with humans in dynamic and/or human-centric environments. Anthropomorphism is derived from the Greek words anthropos (human) and morphe (form). A robot may be characterized as anthropomorphic or humanlike if it mimics the human form. According to Epley et al. [1], the purpose of anthropomorphism is “to imbue the imagined or real behavior of nonhuman agents with humanlike characteristics, motivations, intentions, and emotions.” Regarding the different classes of anthropomorphism, a clear distinction between functional and perceptional anthropomorphism was recently proposed in Ref. [2]. Functional anthropomorphism has as its first priority to guarantee the execution of a specific functionality in task-space, and only after accomplishing such a prerequisite does it optimize anthropomorphism of structure (minimizing a “distance” between the human and robot poses). Perceptional anthropomorphism concerns all synergistic motions, behaviors, decisions, and emotions that can be perceived by humans as humanlike.

An important question is: why has anthropomorphism become significant and necessary? Nowadays, we experience an increasing demand for human robot interaction (HRI) applications.
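The functional-anthropomorphism formulation described in the abstract (satisfy a human-imposed task-space goal as a hard constraint, and subject to it minimize a structural distance to the captured human pose, using only the robot FK) can be illustrated with a minimal sketch. The planar three-link arm, the reference human pose, and the goal below are all hypothetical stand-ins for the paper's arm-hand kinematics and anthropomorphism metrics:

```python
# Minimal sketch: reach a task-space goal (equality constraint) while
# minimizing a joint-space distance to a human-derived reference pose.
# Link lengths, reference pose, and goal are hypothetical.
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.30, 0.25, 0.20])  # hypothetical link lengths (m)

def fk(q):
    """Forward kinematics: end-effector position of a planar serial arm."""
    angles = np.cumsum(q)             # absolute orientation of each link
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

q_human = np.array([0.4, 0.6, 0.3])        # pose captured from a human arm
goal = fk(np.array([0.2, 0.9, 0.1]))       # a reachable task-space goal

res = minimize(
    lambda q: np.sum((q - q_human) ** 2),  # structural dissimilarity metric
    x0=q_human,                            # warm start at the human pose
    constraints={"type": "eq",             # task-space goal must hold exactly
                 "fun": lambda q: fk(q) - goal},
    method="SLSQP",
)
q_robot = res.x
print(np.linalg.norm(fk(q_robot) - goal))  # residual distance to the goal
```

Because the arm is redundant with respect to the two-dimensional goal, the constraint leaves a one-parameter family of solutions, and the objective selects the most humanlike member of that family.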
We believe that anthropomorphism of robot motion is important in these applications, as it increases safety in human and robot interactions and facilitates the establishment of a solid social connection between humans and robots. More precisely, regarding social connection, the more humanlike a robot is in terms of motion, appearance, expressions, and perceived intelligence, the more easily it will manage to create meaningful “relationships” with human beings, as robot likeability is increased [3]. Regarding safety in HRI scenarios, when robots move anthropomorphically, users can more easily predict their motion and comply with their activity, thus avoiding injuries. Gielniek et al. [4] support this idea, noting in their work that: “humanlike motion supports natural human–robot interaction by allowing the human user to more easily interpret movements of the robot in terms of goals. This is also called motion clarity.” In this respect, anthropomorphism increases robots’ motion expressiveness, which may be critical for scenarios in which humans and robots cooperate in order to execute specific tasks.

Beetz et al. [5] first elaborated the idea of creating legible and predictable robot motions, while the idea of the legibility of robot motion goes back to Alami et al. [6]. Dragan and Srinivasa [7] proposed a methodology based on gradient optimization techniques for autonomously generating legible robot motion (i.e., motion that communicates its intent to a human observer). More precisely, the proposed algorithm optimizes a legibility metric inspired by the psychology of action interpretation in humans, deriving robot motion trajectories that better express intent. The motivation behind this study comes from the fact that when humans are able to predict the outcome/intent of robot motions, they may comply with those motions, avoiding injuries and enhancing collaboration.
Similarly, deriving anthropomorphic robot motions can be significant not only for aesthetic but also for practical reasons.

Over the last decades, numerous schemes have been proposed that map human to robot hand motion. The best-known methodologies are: (1) the fingertips (point-to-point) mapping, (2) the joint-to-joint (angle-to-angle) mapping, (3) the functional pose mapping, and (4) the object-specific mapping. Fingertips mapping appears in Ref. [8] and is based on the computation of forward and inverse kinematics for each human and robot finger, in order to achieve the same fingertip positions in 3D space. The linear joint-to-joint mapping is a one-to-one, angle-to-angle mapping where the joint angle values of the human hand are directly assigned to the corresponding joints of the robot hand [9]. In joint-to-joint mapping, the postures replicated by the robot are identical to the human hand postures, as the human and robot finger links attain the same orientations. Functional pose mapping [10] places both the human

1Corresponding author.
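The linear joint-to-joint mapping described above reduces to a direct angle assignment, possibly clipped to the robot's joint limits when the human range exceeds them. A minimal sketch follows; the joint names and limit values are hypothetical, not taken from any specific robot hand:

```python
# Minimal sketch of linear joint-to-joint mapping: each human joint
# angle is assigned one-to-one to the same-named robot joint, clipped
# to the robot's joint limits. Names and limits are hypothetical.
import numpy as np

ROBOT_LIMITS = {            # hypothetical robot finger joint limits (rad)
    "MCP": (0.0, 1.57),
    "PIP": (0.0, 1.75),
    "DIP": (0.0, 1.40),
}

def joint_to_joint(human_angles):
    """Copy each human joint angle to the corresponding robot joint."""
    return {name: float(np.clip(human_angles[name], lo, hi))
            for name, (lo, hi) in ROBOT_LIMITS.items()}

human = {"MCP": 0.8, "PIP": 2.0, "DIP": 0.5}  # captured human finger pose
print(joint_to_joint(human))  # PIP is clipped to its robot limit of 1.75
```

The appeal of this scheme is its simplicity (no IK is required), but, as noted above, it only preserves link orientations; fingertip positions generally differ when the human and robot link lengths differ.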