The Geometry of Desire

Luis Antunes
GUESS, LabMAg, University of Lisbon, Portugal
xarax@fc.ul.pt

Davide Nunes
GUESS, LabMAg, University of Lisbon, Portugal
dnunes@fc.ul.pt

Helder Coelho
GUESS, LabMAg, University of Lisbon, Portugal
hcoelho@di.fc.ul.pt

ABSTRACT
In the BDI paradigm, much attention has been devoted to beliefs, intentions, choice and commitment, whereas desire has traditionally been taken as given. However, desire is the key connection to the agents' creator, and the ultimate source of behaviour. Desires are allowed to be incoherent, irrational, or at least a-rational. Agent environments establish a motivational context for agents to act upon. Agent societies are never truly autonomous. We argue that pre-designed utility-based behaviour search strategies not only hinder the adaptability of an agent but also prevent the emergence of novel social behaviour. In this paper, we propose a new model of desire acquisition and evolution. Agents continuously adapt their desires by means of both their intrinsic motivations and a mimetic mechanism inspired by René Girard's theory. Agents acquire new goals not through fitness or novelty but through mechanisms such as envy, imitation and competition. To achieve their goals, agents sometimes have to discard them and simply overcome their neighbours.

Categories and Subject Descriptors
I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence—Intelligent Agents

Keywords
Agent theories and models::Cognitive aspects; Agent societies and societal issues::Artificial social systems; Agent-based simulation::Emergent behaviour

1. INTRODUCTION
In real and artificial societies, agents make their choices using their own criteria, often informed by complex concepts and mechanisms such as utility [19], at other times following more esoteric yet realistic rules of behaviour such as imitation [3], evolution [8] or value sharing [1].
Appears in: Alessio Lomuscio, Paul Scerri, Ana Bazzan, and Michael Huhns (eds.), Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014), May 5-9, 2014, Paris, France. Copyright © 2014, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

The Belief-Desire-Intention (BDI) agent architecture [16] emanates from and represents a philosophical stance that characterises agents in terms of mental qualities easily recognisable by other agents. Several rigorous models and techniques supplied reasoning machinery designed to deal with such qualities, allowing agents to build credible scripts for themselves and to interpret the actions of others. However, the behaviours derived from those scripts are very often unsatisfactory, especially when we hope that our model scales up to resemble what happens in real societies, from which we would like to derive some understanding and perhaps useful policies.

BDI began as an architecture designed for decision and action. For some time it included an associated modal logic, which seemed to constrain the design of agent minds into a logic-based paradigm [16]. But time corrected that tendency, and BDI is now seen as a terminological and conceptual basis that provides common ground (and grounding) for a dialogue between theorists and practitioners in the multiagent system (MAS) community, but also beyond it, engaging other scientific areas in the discussion: economics, the other social sciences, physics, psychology, philosophy, neuroscience, and so on.

Viewing BDI as a general framework for agent mentality, we notice the key role played by intentions as a link between the agents' beliefs (what the agent knows) and desires (what the agent aims for).
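This triad can be made concrete with a small sketch. The toy Python fragment below is purely illustrative (it is not the model proposed in this paper; the class, the deliberation rule, and the example goals and costs are all hypothetical): intentions emerge as a rationality-filtered subset of desires, constrained by what the agent believes is feasible.

```python
# Illustrative BDI-style sketch, not the paper's model.
# Beliefs map goals to believed costs; desires map goals to strengths;
# deliberation commits the agent to the strongest jointly affordable desires.

class Agent:
    def __init__(self, beliefs, desires):
        self.beliefs = beliefs      # what the agent knows: {goal: believed cost}
        self.desires = desires      # what the agent aims for: {goal: strength}
        self.intentions = []        # the constrained subset it commits to

    def deliberate(self, budget):
        """A toy rationality filter: greedily adopt the strongest desires
        whose believed costs fit within the available budget."""
        chosen, spent = [], 0
        for goal, strength in sorted(self.desires.items(),
                                     key=lambda kv: -kv[1]):
            cost = self.beliefs.get(goal, float("inf"))
            if spent + cost <= budget:
                chosen.append(goal)
                spent += cost
        self.intentions = chosen
        return self.intentions


agent = Agent(beliefs={"write_paper": 3, "travel": 5, "learn_logic": 2},
              desires={"write_paper": 0.9, "travel": 0.8, "learn_logic": 0.4})
print(agent.deliberate(budget=6))   # -> ['write_paper', 'learn_logic']
```

Note that "travel" is strongly desired but never becomes an intention here: deliberation, informed by beliefs, is exactly the constraint that separates the two pro-attitudes.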
Several authors (notably Castelfranchi [4]) have noted that desires and intentions really belong to the same mental category (pro-attitudes). Intentions are an especially constrained subset of desires, and represent what the agents will actually work towards achieving. Intentions come out of desires as a result of the agent's deliberation, and are managed through special mechanisms abiding by rationality principles. What cannot be derived from rational principles, whichever definition of rationality we pick, is the set of desires the agents aspire to. Given that most of what remains is left to the agent's (and its designer's) discretion, it is in the desire set that we can (and should) locate the agents' ultimate goals, which can justify (and generate) their behaviour.

The source of desires is key to determining behaviour, both individually and collectively. In most BDI approaches, desires are given as data for the problems to be addressed. But what happens when we want to confer true autonomy on agents? What happens when, as is often the case in exploratory simulation, we do not know exactly what we are after in an experiment, both as agent designers and as interrogative scientists? What happens when we are after novelty, and seek to discover, instead of facing well-defined problems that involve the search for a solution? As Ken Stanley puts it [13], sometimes we have to abandon our goals in order to achieve them.

Teleological behaviour [17] has long been the basis of multiagent system development. Simon [18] stated that "People