Preferences with Qualitative Thresholds and Methods for Individual and Collective Decisions (Extended Abstract)

Samy Sá* (Universidade Federal do Ceará, MDCC, Campus do Pici, Bl 910, Fortaleza, Brazil, samy@ufc.br)
João Alcântara* (Universidade Federal do Ceará, MDCC, Campus do Pici, Bl 910, Fortaleza, Brazil, jnando@lia.ufc.br)

ABSTRACT
In this paper, we propose a way to model preferences so that agents base their decisions on beliefs and can reason about such preferences. This connection allows agents to build arguments about their preferences or to explain decisions, and to update preferences as they revise beliefs. We also discuss how agents can reach decisions and the role played by preferences in deliberation towards collective decision situations.

Categories and Subject Descriptors
I.2.11 [Distributed Artificial Intelligence]: Multiagent systems

General Terms
Theory

Keywords
Reasoning (single and multiagent); Preference Handling; Decision Making (single and multiagent)

1. INTRODUCTION
Autonomy is closely related to making decisions. As such, autonomous agents are frequently required to make choices, and they are expected to do so according to their beliefs, goals, and preferences. However, beliefs are rarely connected to the preferences of the agent, especially in collective settings. Dietrich and List argue in [1] that logical reasoning and the economic concept of rationality are almost entirely disconnected in the literature. We consider this disconnection indeed notorious and take a step towards integrating beliefs and preferences: we introduce preferences based on unary predicates used to compare options and yield their utility, then consider qualitative thresholds to assess the quality of such options. This concept of quality can be used by agents to build arguments and explain their decisions.

*This research is partially supported by CNPq (Universal 2012) and CNPq/CAPES (Casadinho/PROCAD 2011).
Appears in: Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013), Ito, Jonker, Gini, and Shehory (eds.), May 6–10, 2013, Saint Paul, Minnesota, USA. Copyright © 2013, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

Rossi et al. argue in [5] that much work remains to be done to achieve a single formalism that models problems with both constraints and preferences of many kinds and solves them efficiently. We believe our proposal is a good step in this direction, as we integrate reasoning with expected utility rationality in a way that they influence one another. By doing so, we allow agents to (i) work with different perspectives on preferences; (ii) build arguments to explain decisions; (iii) deal with preferences under uncertainty; and (iv) automatically update preferences when they perform belief revision.

The paper is organized as follows: Section 2 presents our approach to modeling preferences and some of its properties. Next, Section 3 shows how to reason about preferences. Section 4 discusses decision making. Section 5 concludes the paper.

2. PREFERENCES AS UTILITY + BELIEFS
We defend that the preferences of an agent should arise from beliefs, but also take part in such beliefs so the agent can reason about them. We achieve this by means of a utility function based on the truth value of certain predicate formulas that describe options (akin to what is done with weighted propositional formulas [3]). The thresholds indicate utilitarian requisites for an option to be classified as good, poor, or neutral, so the agent is clear about whether to support or avoid an option. This feature can be particularly important for participating in collective decisions.

Definition 1. (preference profile) Let Pred be the set of all unary predicates expressing attributes of objects in the language of an agent.
A preference profile is a triple Pr = 〈Ut, Up, Lw〉 with a utility function Ut : Pred → ℝ and upper and lower utility thresholds Up, Lw ∈ ℝ, Up ≥ Lw.

An agent can have as many preference profiles as desired, one for each kind of decision the agent may get involved in. Given an agent theory and a preference profile Pr = 〈Ut, Up, Lw〉, ranking the possible outcomes is straightforward: let Alt = {o1, ..., on} be the set of options available in a decision situation; each option oi, 1 ≤ i ≤ n, has an expected utility Ut_S(oi) = ∑_{P(x) ∈ Pred, P(oi) ∈ S} Ut(P(x)), where S is a model of the agent's knowledge base. In the context of a particular model S, oi is a good option if Ut_S(oi) ≥ Up, a poor option if Ut_S(oi) < Lw, and neutral (neither good nor poor) otherwise. In that sense, if Up = Lw, there are only good and poor options in the eyes of the agent.
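As a minimal sketch of Definition 1 and the classification by thresholds, the computation above can be written as follows. This assumes a model S represented as a set of ground atoms, i.e. (predicate, option) pairs; the names PreferenceProfile, expected_utility, and classify, as well as the restaurant predicates and weights, are illustrative choices of ours, not notation from the paper.

```python
from dataclasses import dataclass

@dataclass
class PreferenceProfile:
    """A profile Pr = <Ut, Up, Lw> as in Definition 1."""
    ut: dict      # utility function Ut: predicate name -> real
    up: float     # upper utility threshold Up
    lw: float     # lower utility threshold Lw (Up >= Lw is assumed)

def expected_utility(pr: PreferenceProfile, option: str, model: set) -> float:
    """Ut_S(o): sum Ut(P) over every predicate P such that P(o) is in S."""
    return sum(pr.ut[pred] for (pred, obj) in model
               if obj == option and pred in pr.ut)

def classify(pr: PreferenceProfile, option: str, model: set) -> str:
    """Classify an option as good, poor, or neutral w.r.t. a model S."""
    u = expected_utility(pr, option, model)
    if u >= pr.up:
        return "good"
    if u < pr.lw:
        return "poor"
    return "neutral"

# Hypothetical restaurant-choice profile and model S.
pr = PreferenceProfile(ut={"Cheap": 2.0, "Crowded": -3.0, "Nearby": 1.0},
                       up=2.0, lw=0.0)
S = {("Cheap", "o1"), ("Nearby", "o1"), ("Cheap", "o2"), ("Crowded", "o2")}
# o1: Cheap + Nearby = 3.0, so o1 is good; o2: Cheap + Crowded = -1.0, so o2 is poor.
```

With Up = 2.0 and Lw = 0.0, an option satisfying no predicates in Ut gets utility 0 and falls in the neutral band, matching the "neither good nor poor" case of the definition.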