Planning ramifications: When ramifications are the norm, not the ‘problem’

Debora Field
Dept. of Computer Science, University of Liverpool, Liverpool L69 3BX, UK
debora@csc.liv.ac.uk

Allan Ramsay
School of Informatics, University of Manchester, PO Box 88, Manchester M60 1QD, UK
allan.ramsay@manchester.ac.uk

Abstract

This paper discusses the design of a planner whose intended application required us to solve the so-called ‘ramification problem’. The planner was designed for the purpose of planning communicative actions, whose effects are famously unknowable and unobservable by the doer/speaker, and depend on the beliefs of and inferences made by the observer/hearer. Our fully implemented model can achieve goals that do not match action effects, but that are rather entailed by them, which it does by reasoning about how to act: state-space planning is interwoven with theorem proving in such a way that a theorem prover uses the effects of actions as hypotheses.

1 Introduction

Seeing the word ‘ramification’ so often bound to the word ‘problem’, it is easy to get the impression from the literature that the ramifications of actions are viewed by the AI planning community as an annoying hindrance to their AI planning ambitions. We, however, see ramifications very differently. They are the focus of our planning ambition and the mechanism of its success. Why? Because we are interested in modelling an everyday human activity which is totally dependent upon the ramifications of actions: human-to-human communication.

As far as communication is concerned, each man (and woman) is an island. I have things I want you to believe, and to this end I do my best to make appropriate signs to you—in writing, speech, smoke signals, facial gestures, and so on. You see my signs, and you decide for yourself what they mean. There is nothing I can do to ensure that you receive the message I want you to get.
All I can do is make my signs, and put my trust in the ramifications of my actions.

Consider the human, John. Imagine John’s current goal is to get human Sally to believe the proposition John is kind. John has no direct access to the environment he wishes to affect—he cannot simply implant John is kind into Sally’s belief state. John knows that Sally has desires and opinions of her own, and that he will have to plan something that he considers might well lead Sally to infer John is kind. This means that when John is planning his action—whether to give Sally some chocolate, pay her a compliment, tell her he is kind, lend her his credit card—he has to consider the many different messages Sally might infer from the one thing John chooses to say or do.

To plan communicative acts is, then, to plan actions by taking into account their possible ramifications. How do we do this? We took a backward-chaining theorem prover, and adapted it for hypothetical reasoning about the effects of actions. Our backward-chaining reasoner essentially says, “I could prove this backwards if you allowed me to introduce these hypotheses”. The fully implemented model is thus able to plan to achieve goals that do not match action effects, but that are entailed by them. Our planner was developed by first adapting (Manthey & Bry 1988)’s first-order theorem prover, Satchmo, into a theorem prover for a highly intensional logic (Ramsay 2001), namely, a constructive version of property theory (Turner 1987). To this was added a deduction theory of knowledge and belief (Konolige 1986) so that the planner can reason with its beliefs about the world, including its beliefs about others’ beliefs.

1 Initially supported by an EPSRC grant, with recent developments partially funded under EU-grant FP6/IST No. 507019 (PIPS: Personalised Information Platform for Health and Life Services).
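The idea of a backward-chaining prover that introduces hypotheses can be illustrated with a minimal sketch (this is not the authors' implementation; the rules, facts, and predicate names below are invented for illustration). A goal that no rule or fact can close may be assumed as a hypothesis, provided it appears among the known effects of some action; the prover returns the set of action effects that must be hypothesised for the proof to go through:

```python
# Illustrative knowledge base: Sally would infer kind(john) from either of
# two observations, neither of which is currently a fact.
RULES = {
    # head -> list of alternative bodies (each body is a list of subgoals)
    "kind(john)": [["gives(john, sally, gift)"], ["compliments(john, sally)"]],
}

# Effects that John's available actions could bring about (illustrative).
ACTION_EFFECTS = {
    "gives(john, sally, gift)",
    "compliments(john, sally)",
    "says(john, sally, 'I am kind')",
}

def prove(goal, hypotheses):
    """Try to prove `goal` by backward chaining.

    Returns the set of action effects that must be hypothesised for the
    proof to succeed, or None if no proof is possible."""
    for body in RULES.get(goal, []):
        needed = set(hypotheses)
        proved_body = True
        for subgoal in body:
            result = prove(subgoal, needed)
            if result is None:
                proved_body = False
                break
            needed |= result
        if proved_body:
            return needed
    # No rule closes the goal: hypothesise it, but only if it is an
    # effect some action could actually bring about.
    if goal in ACTION_EFFECTS:
        return hypotheses | {goal}
    return None

print(prove("kind(john)", set()))
```

Here the prover reports, in effect, "I could prove kind(john) if you allowed me to hypothesise gives(john, sally, gift)", which is exactly the information a planner needs in order to select an action whose effects entail the goal.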
State-space planning (based on foundational work in classical planning (Newell, Shaw, & Simon 1957; Newell & Simon 1963; Green 1969; McCarthy & Hayes 1969; Fikes & Nilsson 1971)) was then interwoven with theorem proving in such a way as to enable planning for entailed goals.

Satchmo

The theorem prover we present was developed by extending Manthey and Bry’s (1988) first-order theorem prover, Satchmo (SATisfiability CHecking by MOdel generation). For model generation we convert the standard form to SEQUENT FORM, where a sequent is a formula of the form Γ ⇒ Δ, where Γ is ⊤, an atomic formula, or a conjunction of atomic formulae, and Δ is ⊥, an atomic formula, or a disjunction of atomic formulae. Satchmo was designed for carrying out proof by contradiction, where you show that some formula A follows from a set of assumptions α by converting α ∪ {¬A} to normal form, and showing that this set