The Development of Shared Meaning within Different Embodiments

Joachim de Greeff
Centre for Robotics and Neural Systems
University of Plymouth, UK
Email: joachim.degreeff@plymouth.ac.uk

Tony Belpaeme
Centre for Robotics and Neural Systems
University of Plymouth, UK
Email: tony.belpaeme@plymouth.ac.uk

Abstract—This paper discusses the effect of different embodiments on the development of shared meanings between agents, and how language can help to overcome these differences. Using color as an exemplary case, we discuss how, despite perceptual differences, agents can develop a common understanding of color categories. This phenomenon is investigated through computational modeling of agents with different perceptual capabilities that engage in linguistic interaction. Differences in perception are modeled both on human physiological differences and on data recorded from two robots.

I. INTRODUCTION

Young children typically learn new categories through interaction with their caregivers. Language is hugely important in this process, as the specific words that are used influence the categories a child will form. For example, Xu [1] showed how the use of linguistic labels helps children to form novel categories, and Plunkett et al. [2] drew similar conclusions from experiments showing that the actual label used to describe stimuli determines the nature of the categories a child will develop.

Drawing on these psychological data, models have been created to simulate the development of new categories within a population of artificial agents (e.g. Steels & Belpaeme [3]; for an overview see Nolfi & Mirolli [4]). In these models, simulated agents perceive an environment and develop new categories through linguistic interaction. Typically, the perceptual capabilities are the same for all agents in the population, i.e. the agents are homogeneous. However, in human cognition there are physiological differences between individuals [5].
This is not only the case between adults, but also between caregivers and children. A child with a developing body and neural system will perceive the world differently than an adult does [6]. Human cognition is considered to be embodied, so one implication is that variation in the human body, and specifically in the sensory modalities, will result in varying perception. When the perception of two agents is not identical, their perceptual categories and concepts diverge. One would expect this to have a negative impact on communication: if two agents hold categories and concepts that are not identical, communication should suffer, and the transfer of meaning from one agent to another should likewise be hampered by perceptual differences. Yet in normal circumstances this does not seem to be the case. Communication between people is very effective, and caregivers who teach children are apparently not hindered by the fact that the child may perceive things differently. How can this phenomenon be explained? We wish to explore the possibility that language plays a crucial role in the coordination of perceptual categories.

We present a model that shows how language can bridge the gap between differences in embodiment. We evaluate the model using two computational experiments in which artificial agents with differing perceptual capabilities engage in linguistic interactions that influence conceptualization. The first experiment looks at how differences in human color perception can be overcome by language. The second experiment demonstrates how the same principle can be used to overcome the differing perception of embodied robots. Both models are based on language games, a simple one-to-one linguistic interaction between two agents [7]. By varying the perceptual capabilities of the agents, a systematic exploration of the influence of language and differences in embodiment on the coordination of meanings is possible.

II. BACKGROUND

A. Differences in human perception

In this paper we focus on variation in color perception. The neurophysiology and psychology of color perception have been well studied [8], [9], making color an ideal test ground for cognitive models. Humans have four types of photosensitive receptors in the retina. The achromatic rod receptor contributes little to color perception and mainly serves scotopic vision. Color is perceived by three types of chromatic receptors, known as cone receptors. Each of the three cone types has a different peak sensitivity: to long wavelengths (L-cones), medium wavelengths (M-cones), and short wavelengths (S-cones). Using imaging techniques, it has been shown that people with normal color perception can have strikingly different retinal distributions of cones and varying proportions of L and M cones; see [5]. Brainard et al. [10] studied the variation in L- and M-sensitive cones in the retinas of two subjects and its effect on the perception of the color yellow, as yellow light is picked up by both L and M cones. As the ratio between L and M cones was 1.15 for one subject and 3.79 for the other, it was predicted this would have a large effect on the wavelength of light that both subjects perceive
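To make the language-game setup described in the introduction concrete, the following Python sketch plays a simple naming game between two agents whose embodiments shift their percepts in opposite directions. This is an illustrative toy, not the authors' actual model: all class names, parameter values (perceptual offset, reuse threshold, learning rate), and the one-dimensional stimulus space are hypothetical choices made for this example.

```python
import itertools
import random

_WORDS = itertools.count()  # global counter so invented words are always unique


class Agent:
    """An agent whose embodiment shifts every percept by a fixed offset."""

    def __init__(self, offset, threshold=0.15, lr=0.3):
        self.offset = offset        # embodiment difference: constant perceptual shift
        self.lexicon = {}           # word -> prototype (internal perceptual value)
        self.threshold = threshold  # max distance at which an existing word is reused
        self.lr = lr                # how far a prototype moves during alignment

    def perceive(self, stimulus):
        return stimulus + self.offset

    def speak(self, stimulus):
        """Name a stimulus, inventing a new word if no prototype is close enough."""
        p = self.perceive(stimulus)
        if self.lexicon:
            w = min(self.lexicon, key=lambda w: abs(self.lexicon[w] - p))
            if abs(self.lexicon[w] - p) <= self.threshold:
                return w
        w = f"word{next(_WORDS)}"
        self.lexicon[w] = p
        return w

    def interpret(self, stimulus):
        """Guess the word for a stimulus: the one with the nearest prototype."""
        p = self.perceive(stimulus)
        if not self.lexicon:
            return None
        return min(self.lexicon, key=lambda w: abs(self.lexicon[w] - p))

    def align(self, word, stimulus):
        """After a failed game, adopt the heard word or pull its prototype closer."""
        p = self.perceive(stimulus)
        if word in self.lexicon:
            self.lexicon[word] += self.lr * (p - self.lexicon[word])
        else:
            self.lexicon[word] = p


def play(agents, stimuli, games=2000, seed=0):
    """Run repeated one-to-one games; return success rate over the last 200."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(games):
        speaker, hearer = rng.sample(agents, 2)
        s = rng.choice(stimuli)
        word = speaker.speak(s)
        success = hearer.interpret(s) == word
        if not success:
            hearer.align(word, s)
        outcomes.append(success)
    return sum(outcomes[-200:]) / 200


# Two agents whose "retinas" shift percepts in opposite directions.
agents = [Agent(offset=-0.05), Agent(offset=+0.05)]
stimuli = [0.1, 0.5, 0.9]  # three well-separated stimulus values
rate = play(agents, stimuli)
```

Although the two agents never perceive the same internal value for a given stimulus, the repeated games drive them to a shared vocabulary: each stimulus cluster ends up with one word known to both agents, and communicative success approaches 100%. This is the basic intuition behind using language games to coordinate categories across differing embodiments.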