The DALI Agent-Oriented Logic Programming Language: References

Stefania Costantini and Alessio Paolucci

Università di L'Aquila, Coppito 67100, L'Aquila, Italy
stefania.costantini@univaq.it

1 Introduction

DALI is a logic programming agent-oriented language defined in [1,2,3,4] and fully formalized in [5,6]. DALI is fully implemented, and has been used in practice in a variety of applications [7,8,9,10]. A stable release of the DALI interpreter is publicly available at [11].

For the definition of DALI we have built in many respects upon our past work on meta-reasoning and reflection in logic programming languages [12,13,14,15,16]. In that work in particular, issues related to the meta-level representation of predicates, atoms and rules are discussed in depth. As concerns meta-rules, at the semantic level they can be accommodated by enriching a given theory by means of Reflection Principles [16], inspired by those introduced in Symbolic Logic by Feferman in 1962. At the operational level, an extended resolution can be devised [13,14] to instantiate and apply meta-axioms "on the fly".

DALI's main features are described in [1,2] and in more depth in [4]. The DALI communication architecture is explained in [17]. The declarative semantics of DALI is presented in [5], and the operational semantics in [6]. Both approaches are quite general and can in principle be adopted in other computational-logic-based agent frameworks. The DALI architecture [18] is encompassed in the general agent model described in [19].

In [20] we have proposed an extension of the well-known Linear Temporal Logic (LTL), called A-ILTL (for "Agent-Interval LTL"), which is tailored to the agent's world in view of run-time verification. Based on this new logic, we are able to enrich DALI programs by means of A-ILTL rules. These rules are defined upon a logic-programming-like set of formulas where all variables are implicitly universally quantified.
They use operators over intervals that are reminiscent of LTL operators.

In order to enlarge the set of perceptions they can recognize, elaborate on and react to, and in order to expand their range of expertise, agents need to learn. They can either perform "deep" learning or learn by "being told" by other trusted agents. We have addressed this aspect in [21,22,23]. Expressing preferences among actions in DALI has been addressed in [24]. In [22,25] we introduced meta-axioms for run-time self-checking and self-reconfiguration. In [26] and [27] we have initiated the design, for DALI (but in principle for any logic-based agent-oriented environment), of a flexible management of memories for storing, retrieving and managing past events.
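To give an intuition of such a memory of past events, the following Python sketch stores timestamped event records and "forgets" those older than a retention bound. It is only an illustration of the general idea, not DALI's actual memory mechanism; the class and method names (`PastEventMemory`, `remember`, `recall`) are invented for exposition.

```python
import time


class PastEventMemory:
    """Illustrative store for timestamped past events: keeps the most
    recent record per event and drops records older than a time bound.
    This is a hypothetical sketch, not the DALI implementation."""

    def __init__(self, max_age):
        self.max_age = max_age   # seconds for which an event is retained
        self.events = {}         # event name -> timestamp of latest occurrence

    def remember(self, event, timestamp=None):
        """Record (or update) the latest occurrence of an event."""
        self.events[event] = timestamp if timestamp is not None else time.time()

    def recall(self, event, now=None):
        """Return the event's timestamp if still remembered, else None.
        Expired events are removed as a side effect ("forgetting")."""
        now = now if now is not None else time.time()
        ts = self.events.get(event)
        if ts is None or now - ts > self.max_age:
            self.events.pop(event, None)  # forget expired events
            return None
        return ts


memory = PastEventMemory(max_age=60)
memory.remember("alarm_rang", timestamp=100)
print(memory.recall("alarm_rang", now=120))   # → 100 (still within 60 s)
print(memory.recall("alarm_rang", now=200))   # → None (expired, forgotten)
```

A richer design could keep the full history per event, or attach application-specific retention policies per event kind; the point here is only that past events carry timestamps against which retrieval is conditioned.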
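Returning to the A-ILTL interval operators mentioned earlier: their flavor can be conveyed by a minimal Python sketch of bounded "eventually" and "always" checks over a finite trace of agent observations. The function names and the semantics shown are illustrative assumptions, not A-ILTL's formal definition.

```python
# Minimal sketch of interval-bounded LTL-style checks over a finite trace,
# loosely reminiscent of A-ILTL operators; names and semantics are
# illustrative only, not the formal A-ILTL definition.

def eventually(trace, prop, m, n):
    """True if prop holds in at least one state of trace[m..n]."""
    return any(prop(state) for state in trace[m:n + 1])


def always(trace, prop, m, n):
    """True if prop holds in every state of trace[m..n]."""
    return all(prop(state) for state in trace[m:n + 1])


# Example: a trace of battery-level observations of a hypothetical agent.
trace = [90, 70, 40, 15, 60, 80]
low = lambda level: level < 20

print(eventually(trace, low, 0, 5))  # → True: a low-battery state occurs in [0,5]
print(always(trace, low, 0, 5))      # → False: not every state is low-battery
```

In run-time verification, such checks would be evaluated incrementally as the agent's trace grows, rather than over a complete trace as in this toy example.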