Choices and their Consequences — Explaining Acceptable Sets in Abstract Argumentation Frameworks

Ringo Baumann (1) and Markus Ulbricht (1,2)
(1) Leipzig University, Department of Computer Science
(2) TU Wien, Institute of Logic and Computation
{baumann,mulbricht}@informatik.uni-leipzig.de

Abstract

We develop a notion of explanations for the acceptance of arguments in an abstract argumentation framework. To this end we show that extensions returned by Dung's standard semantics can be decomposed into i) non-deterministic choices made on even cycles of the given argumentation graph and then ii) deterministic iteration of the so-called characteristic function. Naturally, the choice made in i) can be viewed as an explanation for the corresponding extension and thus the arguments it contains. We proceed to propose desirable criteria that a reasonable notion of an explanation should satisfy, and we present an exhaustive study of the newly introduced notion w.r.t. these criteria. Finally, some interesting decision problems arise from our analysis and we examine their computational complexity, obtaining some surprising tractability results.

1 Introduction

Explainable AI is a highly relevant topic of current research. The ultimate goal is to develop intelligent systems equipped with tools to provide reasons for decisions made and actions taken. Achieving this goal is a key challenge in all areas of AI nowadays as it enables human users to understand artificially intelligent systems. This is indispensable in order to maintain the user's trust in an AI system and hence the system's raison d'être. These requirements triggered a considerable amount of research, not only for artificial neural networks, but also for several knowledge representation and reasoning formalisms. For example, in description logics (Baader, McGuinness, and Nardi 2003) the notions of a justification (Horridge et al.
2013) or pinpointing (Baader and Peñaloza 2010) the reason for an (undesired) outcome have been introduced and investigated; explanations have been studied for Answer Set Programming (Brewka, Eiter, and Truszczynski 2011) in (Dauphin and Satoh 2019) as well as for abstract argumentation frameworks (AFs) (Dung 1995) in e.g. (Saribatur, Wallner, and Woltran 2020). Some approaches also investigate explanations for (non-monotonic) logics in general (Belle 2017; Brewka and Ulbricht 2019).

The present paper is a contribution to explanations in AFs. The field of formal argumentation has become a vibrant research area in Artificial Intelligence. One of the main boosters of this development was the seminal paper by Phan Minh Dung in 1995 on abstract argumentation frameworks (AFs). His work is based on the observation that argument evaluation, i.e. the selection of reasonable sets of arguments constituting a coherent world view, can be done without taking into account the internal structure of arguments. Consequently, arguments can be treated as abstract, atomic entities, and it suffices to know only the attack relation among the arguments.

Defining and utilizing explanations in AFs has gained quite some attention recently. Several papers view AFs as tools to explain (Zeng et al. 2018; Cocarascu, Cyras, and Toni 2018; Rago et al. 2020), while others propose notions of explanations for acceptable sets of arguments within an AF — for example, based on novel semantics (Fan and Toni 2015), by delving into subframeworks of a given AF (Saribatur, Wallner, and Woltran 2020; Ulbricht and Wallner 2021), or by considering the SCCs (Alfano et al. 2020).

In this paper, we extend the investigation of the theoretical point of view. As a matter of fact, many mature Dung-style semantics are complete-based. Complete extensions can be characterized as conflict-free fixed points of the so-called characteristic function (Dung 1995).
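To make the fixed-point characterization concrete, here is a minimal sketch (not taken from the paper; the example AF, variable names, and helper functions are illustrative assumptions): the characteristic function F maps a set S to the arguments S defends, and a complete extension is a conflict-free S with F(S) = S.

```python
# Attacks are encoded as a set of (attacker, target) pairs.

def characteristic(args, attacks, S):
    """F(S): the set of arguments defended by S.

    S defends a iff every attacker of a is attacked by some member of S.
    """
    def defended(a):
        attackers = {b for (b, x) in attacks if x == a}
        return all(any((c, b) in attacks for c in S) for b in attackers)
    return {a for a in args if defended(a)}

def conflict_free(attacks, S):
    # No member of S attacks another member of S (or itself).
    return not any((a, b) in attacks for a in S for b in S)

def is_complete(args, attacks, S):
    # Complete extension: conflict-free fixed point of F (Dung 1995).
    return conflict_free(attacks, S) and characteristic(args, attacks, S) == S

# Even cycle a <-> b, plus b -> c: several complete extensions coexist.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}

print(is_complete(args, attacks, set()))        # True  (the grounded extension)
print(is_complete(args, attacks, {"a", "c"}))   # True  (choosing a on the cycle)
print(is_complete(args, attacks, {"b"}))        # True  (choosing b on the cycle)
print(is_complete(args, attacks, {"a"}))        # False (a defends c, so not a fixed point)
```

The even cycle between a and b is exactly what creates the multiple complete extensions here, matching the "non-deterministic choices on even cycles" described in the abstract.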
Obviously, such a description can hardly be used to explain a certain outcome to a user. However, there is one notable exception in the family of complete semantics, namely the uniquely defined grounded semantics. This semantics can be easily understood since its unique point of view traces back to unattacked arguments. More precisely, unattacked arguments are accepted; then, further arguments can be accepted given that they are defended by previously accepted ones, and so on. Grounded semantics reflects a very skeptical point of view, and its acceptance can also be understood in terms of a human-like dialogue (Caminada and Podlaszewski 2012).

Our approach for explaining complete extensions is based on three crucial ingredients:
1. We use the easily understandable grounded semantics as a baseline and completion.
2. We make use of the fact that different complete extensions of a given AF are due to different arguments occurring in even cycles of the corresponding graph (Dvořák 2012).
3. We utilize the so-called reduct of an AF, a simple yet powerful tool that was recently considered to characterize the behavior of AF semantics (Baumann, Brewka, and Ulbricht 2020a).

Proceedings of the 18th International Conference on Principles of Knowledge Representation and Reasoning, Main Track
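The "unattacked arguments first, then whatever they defend" process above can be sketched as iterating the characteristic function from the empty set until a fixed point is reached (this code is an illustrative assumption, not the paper's implementation; the example AF is made up):

```python
# Attacks are encoded as a set of (attacker, target) pairs.

def grounded(args, attacks):
    """Grounded extension: least fixed point of the characteristic function."""
    def F(S):
        def defended(a):
            attackers = {b for (b, x) in attacks if x == a}
            return all(any((c, b) in attacks for c in S) for b in attackers)
        return {a for a in args if defended(a)}

    S = set()                # start with nothing accepted
    while True:
        nxt = F(S)           # first round yields exactly the unattacked arguments
        if nxt == S:         # no further arguments become defended: stop
            return S
        S = nxt

# Chain d -> b -> a -> c: d is unattacked, d defends a, and a (accepted) attacks c.
print(sorted(grounded({"a", "b", "c", "d"},
                      {("d", "b"), ("b", "a"), ("a", "c")})))  # ['a', 'd']
```

Each iteration corresponds to one step of the dialogue-style reading of grounded semantics: round one accepts d (unattacked), round two accepts a (its only attacker b is attacked by d), and the process then stabilizes.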