(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 7, 2011

Characterization of Dynamic Bayesian Networks
The Dynamic Bayesian Network as a Temporal Network

Nabil Ghanmi, National School of Engineers of Sousse, Sousse, Tunisia
Mohamed Ali Mahjoub, Preparatory Institute of Engineers of Monastir, Monastir, Tunisia
Najoua Essoukri Ben Amara, National School of Engineers of Sousse, Sousse, Tunisia

Abstract— In this report, we are interested in Dynamic Bayesian Networks (DBNs) as a model that incorporates the temporal dimension together with uncertainty. We start with the basics of DBNs, focusing in particular on the concepts and algorithms of inference and learning. We then present the different levels and methods of creating DBNs, as well as approaches for incorporating the temporal dimension into a static Bayesian network.

Keywords— DBN; DAG; Inference; Learning; HMM; EM Algorithm; SEM; MLE; coupled HMMs

I. INTRODUCTION

The majority of events encountered in everyday life are not well described by their occurrence at a single point in time; rather, they are described by a set of observations that together produce a comprehensive final event. Time is therefore an important dimension to take into account in reasoning, and in artificial intelligence in general. Different approaches have been proposed to add the time dimension to Bayesian networks; the terms commonly used to describe this new dimension are "temporal" and "dynamic".

II. BASICS

A. Definition

Bayesian networks represent a set of variables as the nodes of a directed acyclic graph (DAG) that encodes the conditional independencies among these variables. They offer four main advantages as a data modeling tool [16, 17, 18]. A dynamic Bayesian network can be defined as a repetition of conventional (static) networks in which causal links are added from one time step to the next. Each time slice of the network contains a number of random variables representing the observations and the hidden states of the process.
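To make the definition concrete, the following is a minimal sketch of such a network with a single hidden variable and a single observation per time slice (i.e. an HMM viewed as the simplest DBN). All variable names and probability values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_obs = 2, 3
pi = np.array([0.6, 0.4])           # P(X_1): initial state distribution
A = np.array([[0.7, 0.3],           # A[i, j] = P(X_t = j | X_{t-1} = i):
              [0.2, 0.8]])          # inter-slice (temporal) causal link
B = np.array([[0.5, 0.4, 0.1],      # B[i, k] = P(Y_t = k | X_t = i):
              [0.1, 0.3, 0.6]])     # intra-slice link (observation model)

def sample(T):
    """Unroll the network over T time slices and sample states and observations."""
    states, obs = [], []
    x = rng.choice(n_states, p=pi)
    for _ in range(T):
        states.append(int(x))
        obs.append(int(rng.choice(n_obs, p=B[x])))
        x = rng.choice(n_states, p=A[x])
    return states, obs
```

The repetition of the same slice structure, linked by A from one step to the next, is exactly the "repetition of conventional networks" described above.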
We consider a dynamic Bayesian network composed of a sequence of T hidden state variables X_1, ..., X_T (a hidden state of a DBN is represented by a set of hidden state variables) and a sequence of T observable variables Y_1, ..., Y_T, where T is the time horizon of the studied process. For the specification of this network to be complete, we need to define the following parameters:

- The transition probability between states, P(X_t | X_{t-1})
- The conditional probability of the observation given the hidden state, P(Y_t | X_t)
- The probability of the initial state, P(X_1)

The first two parameters must be defined for each time t; they may or may not be invariant over time.

B. Inference

The general problem of inference for DBNs is to compute P(X_t | y_{t0:t1}), where X_t is the hidden variable at time t and y_{t0:t1} represents all observations between times t0 and t1. Several interesting cases of inference are illustrated below.

[Figure: in each inference case, the arrow indicates the quantity X_t that we try to estimate, and the shaded region corresponds to the observations between t0 and t1.]

Filtering: estimate the belief state at time t given all the observations up to that moment, P(X_t | y_{1:t}).

Decoding (Viterbi): determine the most likely sequence of hidden states given the observations up to time t, x*_{1:t} = argmax_{x_{1:t}} P(x_{1:t} | y_{1:t}).

Prediction: estimate a future state or observation given the observations up to the current time t, e.g. P(X_{t+h} | y_{1:t}) for some horizon h > 0.
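The three inference cases above can be sketched for the discrete single-chain case (an HMM-structured DBN). This is an illustrative implementation under assumed parameters pi, A, B, not the paper's own code:

```python
import numpy as np

# Assumed 2-state, 2-symbol parameters, for illustration only.
pi = np.array([0.5, 0.5])            # P(X_1)
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # A[i, j] = P(X_t = j | X_{t-1} = i)
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])           # B[i, k] = P(Y_t = k | X_t = i)

def filtering(obs):
    """Forward algorithm: returns the belief state P(X_t | y_1:t)."""
    belief = pi * B[:, obs[0]]
    belief /= belief.sum()
    for y in obs[1:]:
        belief = (belief @ A) * B[:, y]   # predict one step, then correct
        belief /= belief.sum()
    return belief

def viterbi(obs):
    """Decoding: the most likely state sequence argmax P(x_1:t | y_1:t)."""
    delta = np.log(pi) + np.log(B[:, obs[0]])
    backptr = []
    for y in obs[1:]:
        scores = delta[:, None] + np.log(A)       # best score ending in each state
        backptr.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + np.log(B[:, y])
    path = [int(delta.argmax())]
    for ptr in reversed(backptr):                 # backtrack through the pointers
        path.append(int(ptr[path[-1]]))
    return path[::-1]

def predict(obs, h=1):
    """Prediction: P(X_{t+h} | y_1:t), propagating the filtered belief h steps."""
    belief = filtering(obs)
    for _ in range(h):
        belief = belief @ A
    return belief
```

Note that prediction simply applies the transition model to the filtered belief with no further correction, which is why predicted distributions flatten toward the chain's stationary distribution as h grows.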