Considerations for fairness in multi-agent systems

Steven de Jong, Karl Tuyls, Katja Verbeeck and Nico Roos
MICC/IKAT, Maastricht University, The Netherlands
{steven.dejong,k.tuyls,k.verbeeck,roos}@micc.unimaas.nl

Abstract. Typically, multi-agent systems are designed assuming perfectly rational, self-interested agents, according to the principles of classical game theory. However, research in the field of behavioral economics shows that humans are not purely self-interested: they strongly care about whether their rewards are fair. Multi-agent systems that fail to take fairness into account may therefore not be sufficiently aligned with human expectations. Two important motivations for fairness have already been identified and modelled, namely (i) inequity aversion and (ii) reciprocity. We identify a third motivation that has not yet been captured: priority awareness. We show how priorities may be modelled and discuss their relevance for multi-agent research.

1 Introduction

Modelling agents for a multi-agent system requires a thorough understanding of the type and form of interactions with the environment and with other agents in the system, including any humans. Since many multi-agent systems are designed to interact with humans or to operate on their behalf [1, 2], agents' behavior should often be aligned with human expectations. If a multi-agent system is insufficiently aligned, humans may not understand it and may even reject it.

Usually, multi-agent systems are designed according to the principles of a standard game-theoretical model. More specifically, the agents assume complete knowledge of the environment, are perfectly rational, and optimize their individual payoff, disregarding what this means for the utility of the population as a whole. Experiments in behavioral economics have taught us that humans often do not behave in such a self-interested manner [3-5]. Instead, they take into account the effects of their actions on others; they strive for fair solutions and expect others to do the same. Multi-agent systems using only standard game-theoretical principles therefore risk being insufficiently aligned with human expectations; the sketch at the end of this section illustrates this gap.

To avoid this problem, designers of multi-agent systems should take the human conception of fairness into account. If the motivations behind human fairness are sufficiently understood and modelled, the same motivations can be applied in multi-agent systems. This line of research ties in with the descriptive agenda formulated by Shoham [6] and with the objectives of evolutionary game theory [5, 7].

In the remainder of this paper, we first discuss related work in the area of fairness models. Then, we look at problems in which priorities play a role, and we show that current models do not predict human behavior in such problems. Next, we present our model and perform experiments showing that it predicts human behavior considerably more accurately. We conclude with some directions for future work.
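As a concrete illustration of the gap between game-theoretic predictions and observed human behavior, consider the Ultimatum Game, a standard experiment in behavioral economics: a proposer offers a split of a sum of money, and a responder either accepts the split or rejects it, in which case both players receive nothing. Classical game theory predicts that a rational responder accepts any positive offer, whereas human responders routinely reject offers they consider unfair. The following minimal sketch (ours, not part of the model presented in this paper) contrasts a purely self-interested responder with one whose utility follows the well-known inequity-aversion model of Fehr and Schmidt (1999); the parameter values are illustrative assumptions, not empirical estimates.

```python
# Minimal sketch: self-interested vs. inequity-averse responders in the
# Ultimatum Game. The inequity-averse utility follows Fehr and Schmidt
# (1999); alpha and beta below are illustrative assumptions.

def selfish_utility(own: float, other: float) -> float:
    """A classical game-theoretic agent cares only about its own payoff."""
    return own

def inequity_averse_utility(own: float, other: float,
                            alpha: float = 2.0, beta: float = 0.5) -> float:
    """Fehr-Schmidt utility: own payoff, penalized by disadvantageous
    inequity (weight alpha) and advantageous inequity (weight beta)."""
    return own - alpha * max(other - own, 0.0) - beta * max(own - other, 0.0)

def responder_accepts(offer: float, pie: float, utility) -> bool:
    """Accept if the offer yields at least the utility of rejecting,
    which leaves both players with zero payoff (and zero inequity)."""
    return utility(offer, pie - offer) >= utility(0.0, 0.0)

if __name__ == "__main__":
    pie = 10.0
    for offer in [0.5, 1.0, 2.0, 3.0, 5.0]:
        print(f"offer {offer:4.1f}: "
              f"selfish accepts: {responder_accepts(offer, pie, selfish_utility)}, "
              f"inequity-averse accepts: "
              f"{responder_accepts(offer, pie, inequity_averse_utility)}")
```

With these assumed parameters, the self-interested responder accepts every non-negative offer, while the inequity-averse responder rejects any offer below alpha / (1 + 2 * alpha) = 40% of the pie, mirroring the rejection of unfair offers reported in behavioral experiments [3-5].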