A note regarding "Loopy Belief Propagation" convergence in probabilistic and possibilistic networks

Amen Ajroud 1, Mohamed Nazih Omri 2, Salem Benferhat 3, Habib Youssef 4

1 ISET de Sousse, Cité Erriadh, B.P. 135, 4023, Sousse, Tunisia. Phone: +216 73 30 79 60 Fax: +216 73 30 79 63 - amen.ajroud@isetso.rnu.tn
2 FSM, Université de Monastir, Boulevard de l'Environnement 5019, Monastir, Tunisia. Phone: +216 73 50 02 76 Fax: +216 73 50 02 78 - nazih.omri@ipeim.rnu.tn
3 CRIL, Université d'Artois, Rue Jean Souvraz SP18, 62300, Lens, France. Phone: +33 3 21 79 17 79 Fax: +33 21 79 17 70 - benferhat@cril.univ-artois.fr
4 ISITCOM, Université de Sousse, Route principale n°1, 4011, Hammam Sousse, Tunisia. Phone: +216 73 36 44 11 Fax: +216 73 36 44 11 - habib.youssef@fsm.rnu.tn

Abstract: We present a novel inference algorithm, an adaptation of Loopy Belief Propagation to product-based possibilistic networks. Without any transformation of the initial graph, the basic idea of this adaptation is to propagate evidence through the network by passing messages between nodes until a convergence state is reached (if ever). Product-based possibilistic networks are important tools for efficiently and compactly representing possibility distributions. Inference is known to be a crucial and hard task, especially for multiply-connected networks, i.e. networks with loops.

Keywords: possibilistic networks, possibility distributions, approximate inference, multiply-connected DAG, convergence.

1. INTRODUCTION

Belief propagation (BP) is an efficient local message-passing algorithm for exact inference on singly-connected graphs, i.e. trees (Pearl, 1988). Applying the same rules to multiply-connected graphs, a scheme named Loopy Belief Propagation (LBP), has proven a successful method for approximate inference (Murphy et al., 1999).
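To make the message-passing idea concrete, the following is a minimal sketch of sum-product loopy belief propagation on a pairwise probabilistic model. It is an illustration of the generic LBP scheme described above, not the paper's possibilistic algorithm; the function name and the `unary`/`pairwise` dictionary interface are assumptions chosen for the example.

```python
import numpy as np

def loopy_bp(unary, pairwise, edges, max_iters=50, tol=1e-6):
    """Illustrative sum-product LBP (not the paper's algorithm).
    unary[i]        : potential vector for node i
    pairwise[(i,j)] : potential matrix indexed [state_i, state_j]
    edges           : list of undirected edges as (i, j) pairs
    """
    # one message per directed edge, initialised to the uniform distribution
    msgs = {}
    for i, j in edges:
        msgs[(i, j)] = np.ones(len(unary[j])) / len(unary[j])
        msgs[(j, i)] = np.ones(len(unary[i])) / len(unary[i])
    for _ in range(max_iters):
        delta = 0.0
        for i, j in list(msgs):
            # product of node i's potential and all messages into i except from j
            prod = unary[i].copy()
            for a, b in msgs:
                if b == i and a != j:
                    prod *= msgs[(a, b)]
            # sum out node i's states through the edge potential
            phi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            new = phi.T @ prod
            new /= new.sum()
            delta = max(delta, np.abs(new - msgs[(i, j)]).max())
            msgs[(i, j)] = new
        if delta < tol:  # convergence is not guaranteed once the graph has loops
            break
    # belief at each node: its potential times all incoming messages, normalised
    beliefs = {}
    for i in unary:
        b = unary[i].copy()
        for a, c in msgs:
            if c == i:
                b *= msgs[(a, c)]
        beliefs[i] = b / b.sum()
    return beliefs
```

On a tree this fixed-point iteration terminates with the exact marginals; on a multiply-connected graph the same update rule is simply iterated, and the resulting beliefs are the LBP approximation whose convergence behaviour this paper studies.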
Empirically, a number of researchers have demonstrated good performance of LBP on multiply-connected graphs, among them turbo-code decoding: error-correcting code networks (Glavieux et al., 1993), and problems in computer vision: image analysis (Freeman and Pasztor, 1998). More precisely, when the LBP algorithm converges, the posterior distribution values it provides are often a good approximation to the exact inference result. On the other hand, for other graphs with loops, LBP may give poor results or fail to converge (Murphy et al., 1999).

Graphical models are important tools to efficiently represent and analyse uncertain information. Among these graphical representations, Bayesian networks are particularly well defined and widely applied. Possibilistic networks are an alternative approach to model both uncertainty and imprecision. There are two kinds of possibilistic networks: min-based possibilistic networks and product-based possibilistic networks (Fonck, 1994). These two kinds differ only in the definition of possibilistic conditioning. In this paper, we focus on product-based possibilistic networks and propose an adaptation of LBP as a possibilistic inference algorithm, which determines an approximation of the possibility degree of any variable of interest given some evidence.

The rest of this paper is organized as follows: first, we give a background on graphical models and LBP (Section 2). After that, we review possibility theory and product-based possibilistic networks (Section 3). Then, we present our possibilistic adaptation of the "Loopy Belief Propagation" algorithm for product-based possibilistic networks (Section 4). Section 5 gives some experimental results.

2. BACKGROUND

An International Forum for Exploring e-Business and e-Government Research, Applications, and Technologies, March 12 - 15, 2008, Hammamet, Tunisia