A generic agent architecture for norm-compliant planning assistance

Jean Oh and Felipe Meneguzzi
Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-3890
Email: {jeanoh,meneguzz}@cs.cmu.edu

Katia Sycara
Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-3890
Email: katia@cs.cmu.edu

Timothy J. Norman
Department of Computing, University of Aberdeen, Aberdeen, Scotland, UK - AB24 3UE
Email: t.j.norman@abdn.ac.uk

Abstract—An assistant is a (human or software) agent that helps a user accomplish her goals, ideally with minimal instruction from the user. In this paper we describe a software assistant agent that can proactively assist human users situated in a time-constrained, dynamic environment. We specifically aim to assist the user's normative reasoning: reasoning about prohibitions and obligations. When human users are cognitively overloaded, normative stipulations further hinder their ability to plan both to accomplish their goals and to abide by the norms. In contrast to reactive assistant agent systems, which perform certain tasks on behalf of the user in reaction to user cues, this paper presents a goal-driven approach in which the agent sets its own goals and then plans and executes a series of actions to accomplish them, steering users towards norm compliance. Our approach uses probabilistic plan recognition to predict the user's future plan from the user's current activities, identifying likely norm violations in the predicted plan and updating the assistant's goals to accommodate potential needs for resolving those violations. This paper introduces the general framework for the goal-driven autonomous assistant agent. To validate the approach, we implemented an assistant in the context of a military peacekeeping scenario that involves norm reasoning among various coalition partners. To the best of our knowledge, this approach is the first to manage norms in a proactive and autonomous manner.
I. INTRODUCTION

Human users planning for multiple objectives in highly complex environments are subjected to high levels of cognitive workload, which can severely impair the quality of the plans they create. The cognitive workload increases significantly when a user must cope not only with a complex environment but also with a set of complex rules that prescribe how the planning process must be carried out. For example, military planners during peacekeeping operations must plan to achieve their own unit's objectives while following standing operating procedures that regulate how interaction and collaboration with Non-Governmental Organizations (NGOs) must take place. These procedures generally prescribe conditions under which the military should perform escort missions, for example, to ensure that humanitarian NGO personnel are kept safe in conflict areas. We develop a prognostic assistant agent that takes a proactive stance in assisting cognitively overloaded human users by providing timely reasoning support.

In this paper, we specifically aim to assist the user's normative reasoning: reasoning about prohibitions and obligations. When human users are cognitively overloaded, normative stipulations further hinder their ability to plan both to accomplish goals and to abide by the norms. Existing work on automated norm management relies on a deterministic view of the planning model [1], where norms are specified in terms of classical logic; in this approach, violations are detected only after they have occurred, and consequently assistance can only be provided after the user has already committed the actions that caused the violation [2]. By contrast, our agent aims to predict potential future violations and proactively take actions to help prevent the user from violating the norms. More specifically, we use a probabilistic plan recognition technique to predict the user's future plan steps based on a series of observations of user behavior and changes in the environment.
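As a concrete illustration of this prediction step, the following sketch shows a Bayesian belief update over a small library of candidate user plans. This is not the paper's implementation: the plan library, action names, and the simple 0/1 consistency likelihood are all invented for exposition.

```python
# Hypothetical sketch: Bayesian plan recognition over a fixed plan library.
# Each candidate plan is a sequence of actions; after each observation, the
# agent re-weights plans by whether they are consistent with the observed
# action prefix (a 0/1 likelihood, for simplicity).

def update_beliefs(beliefs, plans, observations):
    """Return the posterior over candidate plans given an observed prefix."""
    posterior = {}
    for name, prior in beliefs.items():
        consistent = plans[name][:len(observations)] == observations
        posterior[name] = prior * (1.0 if consistent else 0.0)
    total = sum(posterior.values())
    if total == 0.0:      # no plan explains the observations;
        return beliefs    # fall back to the prior
    return {name: p / total for name, p in posterior.items()}

# Invented plan library for a peacekeeping-style domain.
plans = {
    "escort_ngo":  ["load_supplies", "drive_to_camp", "escort_convoy"],
    "patrol_area": ["load_supplies", "drive_to_border", "patrol"],
}
beliefs = {"escort_ngo": 0.5, "patrol_area": 0.5}

# One observation is ambiguous; the second disambiguates the user's plan.
beliefs = update_beliefs(beliefs, plans, ["load_supplies"])
beliefs = update_beliefs(beliefs, plans, ["load_supplies", "drive_to_camp"])
```

With the second observation, all probability mass shifts to the `escort_ngo` plan; the agent could then check the remaining steps of that plan (here, `escort_convoy`) against the norms in force before the user executes them.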
We then use prognostic normative reasoning to identify any potential violations in a predicted user plan, generating a set of new tasks for the agent such that each newly generated task (goal) specifies a resolution for a potential violation. As the user's environment changes, the agent's prediction is continuously updated, and thus the agent's plan to accomplish its goals must be frequently revised during execution. To address this issue, our agent supports a full cycle of autonomy: determining new goals (to support the user's plan), planning, execution, and replanning.

The main contributions of this paper are as follows. We introduce an agent architecture for proactive assistance based on probabilistic plan recognition. Our architecture is innovative in that the agent autonomously identifies a new set of goals to accomplish in a principled way through reasoning about norm compliance. We present a prognostic norm reasoning technique to predict the user's likely norm violations, allowing the agent to plan and take remedial actions before the violations actually occur. To the best of our knowledge, our approach is the first to manage norms in a proactive and autonomous manner. Our framework supports interleaved planning and execution so that the assistant agent can adaptively revise its plans during execution, taking time constraints into consideration to ensure timely support to prevent violations. Our approach has been fully implemented in the context of a military peacekeeping scenario that involves norm reasoning among various coalition partners.

The rest of this paper is organized as follows. We review