Deterministic Multiagent Planning Techniques: Experimental Comparison (Short Paper)

Karel Durkota 1 and Antonín Komenda 2
karel.durkota@gmail.com, komenda@agents.fel.cvut.cz
1 Faculty of Electrical Engineering, Czech Technical University in Prague
2 Dept. of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague

Abstract

Deterministic domain-independent planning techniques for multiagent systems stem from the principles of classical planning. The three most recently studied approaches comprise (i) DisCSP+Planning, which uses Distributed Constraint Satisfaction Problem solving for the coordination of the agents and local search for individual planning, (ii) a multiagent adaptation of A* with local heuristics, and (iii) a distribution of the GraphPlan approach based on merging of planning graphs. In this work, we summarize the principles of these three approaches and describe a novel implementation and optimization of the multiagent GraphPlan approach. We experimentally evaluate the influence of the parametrization of the inner extraction phase of individual plans and compare the best results with the former two multiagent planning techniques.

Introduction

The problem of multiagent planning as defined in (Brafman and Domshlak 2008) is as important as classical planning, since it can provide generally usable techniques for intelligent agents that are required to cooperatively come up with distributed plans. Recently, the research community has proposed both theoretical treatments and implementations of such distributed multiagent planning (DMAP) techniques. Similarly to classical planning, the agents in DMAP cooperatively search for local sequences of actions which, after execution, transform the world from an initial state to a common goal state.
The local sequences of actions, the local plans, have to interleave appropriately: no single agent can solve the problem on its own, so each agent has to base its own actions on the results of the actions of the other agents. Furthermore, the agents are motivated to communicate as little information as possible, so as not to put unnecessary load on the other agents.

Three recently theoretically treated approaches to DMAP are (i) multiagent planning utilizing a solver for Distributed Constraint Satisfaction Problems (DisCSP) for the coordination part and classical planning for the individual plans, denoted DisCSP+Planning (Brafman and Domshlak 2008); (ii) an extension of A* to multiagent systems coined Multiagent Distributed A* (MA-A*) (Nissim and Brafman 2012a; 2012b); and (iii) Distributed Planning through Graph Merging (DPGM) (Pellier 2010), which uses principally the same factorization scheme for separating parts of the original planning problem among the agents as the previous approaches, defined originally in (Brafman and Domshlak 2008) together with the MA-STRIPS formalization.

Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

The first two approaches, DisCSP+Planning and MA-A*, have already been implemented and experimentally validated: the implementation and experiments are described in (Nissim, Brafman, and Domshlak 2010) for DisCSP+Planning and in the original papers (Nissim and Brafman 2012a; 2012b) for MA-A*. However, to the best of our knowledge, Pellier's approach has not yet been implemented and experimentally verified. Therefore, our initial focus in the context of this work was the implementation of the approach described by Pellier and its comparison with the other two approaches.
Since this comparison had not yet been done, it was not clear whether the GraphPlan (Blum and Furst 1997) approach could be viable in the multiagent setting, even though the underlying approach in classical planning had already been outperformed at (IPC 2004). This question is especially relevant because, in the multiagent setting, the communication complexity can be of much greater importance than the computational complexity.

Multiagent planning

Planning in a multiagent (MA) system is, following (Brafman and Domshlak 2008), a search for a plan for each agent, assuming that the agents have to cooperate in order to reach a global goal. Formally, the problem for a set of k agents AG = {ag_i}_{i=1}^{k} is given by a quadruple Π = ⟨P, {A_{ag_i}}_{i=1}^{k}, I, G⟩, where P is a finite set of propositions describing facts holding in the world; I ⊆ P is the set of propositions that hold in the initial state; G ⊆ P is the set of propositions that must hold in a goal state; and A_{ag_i} is the set of actions that agent ag_i can perform. Each action has the standard STRIPS syntax, i.e., a = ⟨pre(a), add(a), del(a)⟩, where pre(a), add(a), del(a) ⊆ P and add(a) ∩ del(a) = ∅. An action a can be performed only in a state s ⊆ P in which all propositions from pre(a) hold. Performing action a adds the propositions from add(a) to the state s and removes the propositions from del(a).

DisCSP+Planning-based planner

The algorithm from (Nissim, Brafman, and Domshlak 2010) can be
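The STRIPS action semantics from the formalization above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not an implementation from any of the discussed planners; the Action class and the example "move-crate" action, propositions, and states are hypothetical names introduced only for this example.

```python
# Sketch of STRIPS action semantics: states and pre/add/del lists are
# sets of propositions, as in the formal definition above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset   # pre(a): propositions that must hold to apply a
    add: frozenset   # add(a): propositions added by a
    dele: frozenset  # del(a): propositions removed by a ("del" is reserved)

def applicable(state, action):
    # a can be performed only in a state s in which pre(a) holds
    return action.pre <= state

def apply(state, action):
    # performing a removes del(a) from s and adds add(a)
    assert applicable(state, action)
    return (state - action.dele) | action.add

# Hypothetical toy problem: one agent moves a crate from loc1 to loc2.
move = Action(
    name="move-crate",
    pre=frozenset({"crate-at-loc1", "agent-at-loc1"}),
    add=frozenset({"crate-at-loc2", "agent-at-loc2"}),
    dele=frozenset({"crate-at-loc1", "agent-at-loc1"}),
)

I = frozenset({"crate-at-loc1", "agent-at-loc1"})  # initial state
G = frozenset({"crate-at-loc2"})                   # goal propositions

s = apply(I, move)
print(G <= s)  # True: the goal propositions hold in the resulting state
```

Note that the example respects the constraint add(a) ∩ del(a) = ∅; in the multiagent setting, each agent ag_i would own its own such action set A_{ag_i}.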