DISTRIBUTED MULTI-DIMENSIONAL HIDDEN MARKOV MODELS FOR IMAGE AND TRAJECTORY-BASED VIDEO CLASSIFICATIONS

Xiang Ma, Dan Schonfeld and Ashfaq Khokhar
Department of Electrical and Computer Engineering, University of Illinois at Chicago, 851 S Morgan St, Chicago, IL 60607, U.S.A.

ABSTRACT

In this paper, we propose a novel multi-dimensional distributed hidden Markov model (DHMM) framework. We first extend the theory of two-dimensional hidden Markov models (HMMs) to arbitrary causal multi-dimensional HMMs and provide the classification and training algorithms for this model. The proposed extension of causal multi-dimensional HMMs allows state transitions in arbitrary causal directions and neighborhoods. We subsequently generalize this framework to non-causal models by distributing each non-causal model into multiple causal multi-dimensional HMMs. The proposed training and classification process consists of the extension of three fundamental algorithms to multi-dimensional causal systems: (1) the Expectation-Maximization (EM) algorithm; (2) the General Forward-Backward (GFB) algorithm; and (3) the Viterbi algorithm. Simulation results on real-world images and videos demonstrate the superior performance, higher accuracy and promising applicability of the proposed DHMM framework.

Index Terms— Hidden Markov Models, Image Classification, Trajectory Classification.

1. INTRODUCTION

Hidden Markov Models (HMMs) have received tremendous attention in recent years due to their wide applicability in diverse areas such as speech recognition and trajectory classification. Most previous research has focused on the classical one-dimensional HMM developed in the 1960s by Baum et al. [1], in which the states of the system form a single one-dimensional Markov chain. However, the one-dimensional structure of this model limits its applicability to more complex data elements such as images and videos.
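To make the classical model concrete, the following is a minimal sketch of the one-dimensional forward recursion used to score an observation sequence under such an HMM. The two-state parameters below are illustrative assumptions, not values from this paper.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Return P(obs | model) via the 1-D HMM forward algorithm.

    pi  : (N,)   initial state distribution
    A   : (N, N) transition matrix, A[i, j] = P(s_t = j | s_{t-1} = i)
    B   : (N, M) emission matrix,   B[i, k] = P(o_t = k | s_t = i)
    obs : sequence of observation symbol indices
    """
    alpha = pi * B[:, obs[0]]          # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # induction step
    return alpha.sum()                 # termination

# Toy two-state, two-symbol model (illustrative values).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward(pi, A, B, [0, 1, 0]))  # P(obs | model) ≈ 0.10893
```

In the multi-dimensional models considered in this paper, the single chain of states above is replaced by a lattice of states, which is what motivates the generalized forward-backward treatment in Section 2.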
In this paper, we propose a novel multi-dimensional distributed hidden Markov model (DHMM) framework. We first provide a solution for non-causal, multi-dimensional HMMs by distributing the non-causal model into multiple distributed causal HMMs. We approximate the simultaneous solution of the multiple distributed HMMs on a sequential processor by an alternate updating scheme. Subsequently, we extend the training and classification algorithms presented in [2] to a general causal model. The proposed DHMM model can be applied to many problems in pattern analysis and classification.

This work is funded in part by NSF grant IIS-0534438.

2. DISTRIBUTED MULTI-DIMENSIONAL HIDDEN MARKOV MODEL: THEORY

We propose a novel solution for arbitrary non-causal multi-dimensional hidden Markov models: we distribute each model into multiple causal distributed hidden Markov models and process them simultaneously.

Consider an arbitrary non-causal two-dimensional hidden Markov model with N^2 state nodes lying on the two-dimensional state transition diagram, in which every dimension is non-causal. In principle, we could solve this model by allocating N^2 processors, one per node; if the N^2 processors could be perfectly synchronized and deadlocks arising from concurrent state dependencies could be resolved, we could estimate the parameters of the non-causal model by running all N^2 processors simultaneously in perfect synchrony. However, this is usually impractical. We therefore propose to distribute the non-causal model into N^2 distributed causal models, each focusing on the state dependencies of one node at a time while ignoring the other nodes. Similarly, for an arbitrary M-dimensional hidden Markov model, we can distribute the non-causal model into N^M distributed causal HMMs in the same manner. Fig.
1 depicts the state dependency diagrams of a non-causal two-dimensional model (Fig. 1(a)) and the two causal models into which it is decomposed (Fig. 1(b) and 1(c)). The direction of each arrow indicates a state dependency; e.g., an arrow from state node A to node B means that B depends on A. We refer to the distributed causal hidden Markov models as DHMMs.

Note that in the distribution procedure, the state dependency information is not lost but preserved. Furthermore, since each distributed sub-model preserves the correlation between neighboring state nodes, the proposed framework is not a simple collection of uncorrelated causal models but an accurate representation of the original model. To accurately estimate the state transition probabilities of the non-causal model, all of the distributed causal two-dimensional models must be processed simultaneously in perfect synchrony. However, in reality, it is impossible for the whole
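As a toy illustration of the decomposition in Fig. 1, the sketch below (an illustrative construction, not the paper's exact algorithm) shows how a non-causal four-neighbor dependency at each grid node splits into causal sub-neighborhoods, one per raster-scan direction: each causal sub-model keeps only the neighbors that precede the node under its scan order.

```python
# Non-causal neighborhood of a node (i, j): its four nearest neighbors.
NEIGHBORS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def causal_submodel(scan_di, scan_dj):
    """Neighbors of (i, j) that precede it under a raster scan
    moving in direction (scan_di, scan_dj)."""
    keep = []
    for di, dj in NEIGHBORS:
        # A neighbor precedes (i, j) exactly when it lies opposite
        # the scan direction (negative dot product).
        if di * scan_di + dj * scan_dj < 0:
            keep.append((di, dj))
    return keep

# A top-left-to-bottom-right scan keeps the 'up' and 'left' neighbors,
# matching the causal dependencies of a conventional 2-D causal HMM:
print(causal_submodel(1, 1))   # [(-1, 0), (0, -1)]
```

Each such causal sub-neighborhood retains part of the original dependency structure, which is why the collection of sub-models remains correlated rather than independent.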
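The alternate updating scheme mentioned in the introduction, which approximates this simultaneous processing on a sequential processor, can be caricatured as follows. This is a deliberately simplified stand-in: the two "sub-model" steps here are simple contractions toward illustrative targets, not the paper's actual EM re-estimation formulas.

```python
def alternate_updates(steps, x0, n_rounds=50):
    """Refine a shared estimate by applying each sub-model's
    update step in turn, round after round."""
    x = x0
    for _ in range(n_rounds):
        for step in steps:
            x = step(x)   # each step sees the other's latest estimate
    return x

# Illustrative targets for the two sub-models (assumed values).
a, b = 0.0, 3.0
step1 = lambda x: 0.5 * (x + a)   # sub-model 1 pulls the estimate toward a
step2 = lambda x: 0.5 * (x + b)   # sub-model 2 pulls the estimate toward b

x = alternate_updates([step1, step2], x0=10.0)
# Converges to the joint fixed point (a + 2*b) / 3 = 2.0
```

The point of the caricature is that sequential alternation between coupled sub-models can still converge to a joint solution, even though the sub-models are never updated at the same instant.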