Geometric conditioning of belief functions

Fabio Cuzzolin
Department of Computing, Oxford Brookes University, Oxford, United Kingdom
Email: fabio.cuzzolin@brookes.ac.uk

Abstract—In this paper we study the problem of conditioning a belief function b with respect to an event A by geometrically projecting such a belief function onto the simplex associated with A in the simplex of all belief functions. Two different such simplices can be defined, as each belief function can be represented either as the vector of its basic probability values or as the vector of its belief values. We show here that defining geometric conditional b.f.s by minimizing L_p distances between b and the conditioning simplex in the mass space produces simple, elegant results with straightforward semantics in terms of degrees of belief. Such results can be interpreted in the light of a generalization to belief functions of the notion of imaging introduced by Lewis.

Keywords: belief functions, conditioning, geometric approach, L_p norms.

I. INTRODUCTION

Several theories of and approaches to conditioning in the framework of belief functions (b.f.s) [1], [2] have been proposed over the years [3]–[9]. In the original model, in which belief functions are induced by multi-valued mappings of probability distributions, Dempster's conditioning can indeed be judged inappropriate from a Bayesian point of view. Spies [10] defined conditional events as sets of equivalent events under conditioning. By applying a multi-valued mapping to such events, conditional belief functions were introduced, and an updating rule generalizing the total probability theorem was derived from them. Kyburg [11] analyzed the links between Dempster conditioning of belief functions and Bayesian conditioning of closed, convex sets of probabilities, of which belief functions are a special case. He concluded that the probability intervals generated by Dempster updating are included in those generated by Bayesian updating.
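To fix intuitions about the operator under discussion, the following is a minimal numerical sketch of Dempster conditioning on a small frame; the dict-of-frozensets representation and the toy mass values are our own illustration, not taken from the paper:

```python
# Toy sketch of Dempster conditioning b(.||B): intersect every focal
# element with B, discard the mass falling on the empty set, and
# renormalize. Mass functions are dicts {focal element -> mass}.

def dempster_condition(m, B):
    """Condition the mass function m on the event B (a frozenset)."""
    cond = {}
    for X, mass in m.items():
        A = X & B                      # project the focal element onto B
        if A:                          # empty intersections carry conflict
            cond[A] = cond.get(A, 0.0) + mass
    total = sum(cond.values())         # = 1 - mass of the conflict
    return {A: mass / total for A, mass in cond.items()}

# Frame {x, y, z}; masses chosen arbitrarily for illustration.
m = {
    frozenset({'x'}): 0.4,
    frozenset({'z'}): 0.2,
    frozenset({'y', 'z'}): 0.2,
    frozenset({'x', 'y', 'z'}): 0.2,
}
B = frozenset({'x', 'y'})
res = dempster_condition(m, B)
# The 0.2 mass on {z} is conflicting and is renormalized away,
# yielding masses 0.5 on {x}, 0.25 on {y}, 0.25 on {x, y}.
```

Note how renormalization is exactly the step the Bayesian critique recalled above targets: the conflicting mass on {z} is silently discarded.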
One way of dealing with such criticism is to abandon all notions of multi-valued mapping and to define belief directly in terms of basic belief assignments, as in Smets' transferable belief model [12]. The unnormalized conditional belief function b_U(.|B) with b.b.a. m_U(.|B)^1

  m_U(A|B) = Σ_{X ⊆ B^c} m(A ∪ X)   if A ⊆ B,
  m_U(A|B) = 0                       elsewhere,

is the minimal commitment specialization of b such that pl_b(B^c|B) = 0 [13]. In [14], Xu and Smets used conditional belief functions to represent relations between variables in evidential networks, and presented a propagation algorithm for such networks. In [15], Smets pointed out the distinction between revision and focusing in the conditioning process, and how they lead to unnormalized and geometric [16] conditioning

  b_G(A|B) = b(A ∩ B) / b(B),

respectively. In these two scenarios he proposed generalizations of Jeffrey's rule of conditioning [17], [18]

  P(A|P', ℬ) = Σ_{B ∈ ℬ} [P(A ∩ B) / P(B)] P'(B)

to the framework of belief functions, where ℬ is a partition of the frame and P' the updated probability on ℬ.

Slobodova also conducted some early studies on the issue of conditioning: in particular, a multi-valued extension of conditional b.f.s was introduced [19] and its properties examined. More recently, Tang and Zheng [20] also discussed the issue of conditioning in a multi-dimensional space. Klopotek and Wierzchon [21] provided a frequency-based interpretation of conditional belief functions.

Quite recently, Lehrer [22] proposed a geometric approach to determining the conditional expectation of non-additive probabilities. Such a conditional expectation was then applied to updating, whenever new information became available, and to introducing a notion of independence. Early attempts at studying conditioning in a geometric framework appeared in [23], where the simplicial geometry of the set <b> of all belief functions obtained by Dempster combination with a given b.f. b, or conditional subspace, was described.

^1 Author's notation.

A. Contribution

Along this line, in this paper we propose to define the very notion of conditioning by geometric means. The idea is simple: as the collection of events {B ⊆ A} included in a given conditioning event A determines a simplex in the space of belief functions, conditional belief functions can be defined geometrically by minimizing a suitable distance between the original b.f. b and the conditioning simplex. Such geometric conditioning can take place in two different spaces, M and B, according to whether we represent belief functions as vectors of mass values or as vectors of belief values. We show here that defining geometric conditional b.f.s by minimizing L_p distances between b and the conditioning simplex in the mass space M produces simple, elegant results with straightforward interpretations in terms of degrees of belief.

In summary, the L_1-conditional belief functions in M form a polytope in which each vertex is the b.f. obtained by re-assigning the entire mass not contained in A to a single focal element B ⊆ A. In turn, the L_2-conditional b.f. is the barycenter of this polytope, i.e., the belief function with core in A obtained by re-assigning the mass Σ_{B ⊄ A} m(B) to each focal element B ⊆ A on an equal basis.
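The two constructions just summarized can be sketched in code. This is our own illustrative implementation of the verbal description above, under the assumption that the L_2 barycenter redistributes the mass lying outside A equally over all non-empty subsets of A; the function names and the toy masses are hypothetical:

```python
from itertools import combinations

def nonempty_subsets(A):
    """All non-empty subsets of the set A, as frozensets."""
    elems = sorted(A)
    return [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)]

def l1_vertex(m, A, B_star):
    """One vertex of the L1-conditional polytope in the mass space M:
    the entire mass not contained in A is re-assigned to the single
    focal element B_star, a subset of A."""
    out = {B: m.get(B, 0.0) for B in nonempty_subsets(A)}
    outside = sum(mass for X, mass in m.items() if not X <= A)
    out[B_star] += outside
    return out

def l2_condition(m, A):
    """L2-conditional b.f. in M, the barycenter of the L1 polytope:
    the mass not contained in A is re-assigned to every non-empty
    subset of A on an equal basis."""
    subsets = nonempty_subsets(A)
    outside = sum(mass for X, mass in m.items() if not X <= A)
    share = outside / len(subsets)
    return {B: m.get(B, 0.0) + share for B in subsets}

# Frame {x, y, z}, conditioning event A = {x, y}; toy masses.
m = {
    frozenset({'x'}): 0.4,
    frozenset({'z'}): 0.2,
    frozenset({'y', 'z'}): 0.2,
    frozenset({'x', 'y', 'z'}): 0.2,
}
A = frozenset({'x', 'y'})
bary = l2_condition(m, A)          # mass 0.6 outside A, split 3 ways
vert = l1_vertex(m, A, frozenset({'x'}))   # all 0.6 moved onto {x}
```

In the example, the mass outside A totals 0.6; the L_2 barycenter adds 0.2 to each of {x}, {y}, {x, y}, while the chosen L_1 vertex moves the whole 0.6 onto {x} alone.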