Joint Source-Channel Turbo Decoding of Entropy Coded Sources

Karim Ali, Fabrice Labeau
Center for Advanced Systems and Technologies in Communications (SYTACom)
Department of Electrical Engineering, McGill University, Montreal, Quebec H3A 2A7
Email: {karim, flabeau}@tsp.ece.mcgill.ca

Abstract

A new turbo joint source-channel decoding algorithm is presented. The proposed scheme, derived from a Bayesian network representation of the coding chain, incorporates three types of information: the source memory; the residual redundancy of the source coder; and finally the redundancy introduced by the channel coder. Specifically, we modify an existing algorithm by introducing an equivalent graph that is shown to hold the same state-space while exhibiting far fewer undirected cycles. A fully consistent solution for joint turbo decoding within the Bayesian networks framework follows. The proposed algorithm is demonstrated to yield considerably better results along with a drastic reduction in computational complexity when compared to the existing one.

1 Introduction

Communication systems that employ sequential decoding, namely channel decoding followed by source decoding, ignore two types of information: the residual redundancy of the source coder and the source memory (inter-symbol correlation). If a block length 1 optimal source coder is used (a common component), as much as 1 bit of residual redundancy per symbol may remain in the data. In addition, such a compression scheme leaves all source memory intact. These two types of redundancy, present at the output of the source coder, are also necessarily present in the received data stream. One can therefore consider designing a joint decoder that would incorporate the two former sources of natural redundancy along with the artificial redundancy introduced by the channel coder; a possibility mentioned as early as Shannon's seminal paper [1].
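To make the residual-redundancy claim concrete, the sketch below builds a block length 1 (symbol-by-symbol) Huffman code for an illustrative non-dyadic source — the probabilities are invented for the example, not taken from this paper — and measures the residual redundancy as the gap between the average codeword length and the source entropy:

```python
import heapq
from math import log2

def huffman_lengths(probs):
    # Codeword lengths of a symbol-by-symbol (block length 1) Huffman code.
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)  # two least probable subtrees
        p2, s2 = heapq.heappop(heap)
        for sym in s1 + s2:           # every symbol under the merge gains one bit
            lengths[sym] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.4, 0.3, 0.2, 0.1]          # illustrative non-dyadic source
lengths = huffman_lengths(probs)
avg_len = sum(p, l) if False else sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * log2(p) for p in probs)
redundancy = avg_len - entropy        # residual redundancy, bits/symbol
print(lengths, round(avg_len, 3), round(entropy, 3), round(redundancy, 3))
```

For this source the code is about 0.05 bits/symbol away from entropy; sources with more skewed distributions can leave close to the 1 bit/symbol mentioned above, and none of the inter-symbol memory is removed.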
(This research was supported in part by a grant from FQRNT.)

Such a design strategy is further motivated by the fact that optimal source coders of the variable length code (VLC) variety have corresponding source decoders that are extremely sensitive to noise: the lack of fixed symbol boundaries results in a vulnerability to synchronization errors. Joint decoders have been shown to reduce the effects of de-synchronization and generally improve the overall decoding performance [2]-[6]. The authors in [3] developed a generic solution to the joint decoding problem by deriving the product finite state machine (FSM) model of the source, the source coder and the channel coder. Various algorithms such as Hard Viterbi, Soft Viterbi and BCJR (Kalman smoothing) are then readily applicable, yielding the optimal solution with respect to the algorithms' criteria. Unfortunately, this solution has intractable complexity: the cardinality of the state-space of the product model is equal to the product of the cardinalities of the three constituent models. This unaffordable complexity leads to the need for less complex, and therefore sub-optimal, joint decoders. In this context, the authors in [4]-[5] provided a sub-optimal joint decoding solution under the additional assumption of a memoryless source. Specifically, their proposed algorithm uses the principle of turbo decoding and alternates the use of a soft VLC decoder with a soft channel decoder. This approach was recently extended by Guyader et al. in [6] to include sources with memory. Their algorithm, which also relies on the principles of turbo decoding and was derived in the context of Bayesian networks, has the advantage of isolating the three constituent components and therefore has limited complexity. In this paper, we present an algorithm largely inspired by [6]. In particular, we consider an equivalent Bayesian network representation of the joint decoding problem.
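The multiplicative state-space blow-up of the optimal product-FSM decoder of [3] can be sketched numerically; the state counts below are purely hypothetical, chosen only to illustrate the growth:

```python
from itertools import product

# Hypothetical state counts for the three constituent models:
markov_source = range(4)   # e.g. a 4-state Markov source model
vlc_coder = range(3)       # e.g. internal nodes of a small VLC codeword tree
conv_coder = range(64)     # e.g. a convolutional coder with memory 6 (2^6 states)

# The optimal joint decoder operates on the product FSM, whose state-space
# is the Cartesian product of the constituent state-spaces.
product_fsm = list(product(markov_source, vlc_coder, conv_coder))
print(len(product_fsm))  # 4 * 3 * 64 = 768
```

Even these modest constituent models yield hundreds of product states, and realistic source and coder models push the product far beyond what a trellis-based decoder can afford, hence the appeal of sub-optimal schemes that keep the three components separate.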
The resulting graph has the same state-space, contains fewer loops (undirected cycles), and is shown to yield considerably better results with a significant reduction in computational complexity. We begin this paper with a brief review of the algorithm as presented in [6]. Next, the proposed algorithm is introduced and studied. Finally, experimental results are shown.

2 Turbo Joint Decoding via the Bayesian Networks Framework

A good introduction to Bayesian networks may be found in [7]. Essentially, Bayesian networks provide a graphical representation of statistical problems based on the factoring of their joint distribution into conditional distributions. The resulting graph may then be used to incorporate new knowledge as particular nodes (random variables) are instantiated. Belief Propagation (BP), to which we refer henceforth, essentially performs maximum-a-posteriori (MAP) estimation of each random variable in the graph. Belief Propagation may be either blind (or rather, locally triggered by each node) or organized in two passes equivalent to the BCJR algorithm.
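As a minimal illustration of organized two-pass BP on a loop-free graph, the sketch below runs a forward-backward (BCJR-style) recursion on a small two-state hidden Markov chain and recovers the posterior marginal of each hidden variable; all transition and evidence values are invented for the example:

```python
# Two-pass belief propagation on a chain: the forward (alpha) and backward
# (beta) recursions combine into the MAP-per-symbol posterior P(x_t | y).
T = [[0.9, 0.1],
     [0.2, 0.8]]                       # P(x_{t+1} | x_t), made-up transitions
prior = [0.5, 0.5]                     # P(x_0)
evid = [[0.8, 0.2], [0.7, 0.3],
        [0.1, 0.9], [0.2, 0.8]]        # local evidence P(y_t | x_t), made up

n = len(evid)
# Forward pass (alpha recursion)
alpha = [[prior[s] * evid[0][s] for s in range(2)]]
for t in range(1, n):
    alpha.append([evid[t][s] * sum(alpha[t - 1][r] * T[r][s] for r in range(2))
                  for s in range(2)])
# Backward pass (beta recursion)
beta = [[1.0, 1.0] for _ in range(n)]
for t in range(n - 2, -1, -1):
    beta[t] = [sum(T[s][r] * evid[t + 1][r] * beta[t + 1][r] for r in range(2))
               for s in range(2)]
# Combine and normalize: posterior marginal of each hidden variable
post = []
for t in range(n):
    unn = [alpha[t][s] * beta[t][s] for s in range(2)]
    z = sum(unn)
    post.append([u / z for u in unn])

for t, p in enumerate(post):
    print(t, [round(v, 3) for v in p])
```

On a chain the two passes are exact and coincide with the BCJR algorithm; on graphs with undirected cycles the same message updates become iterative and approximate, which is precisely why reducing the number of loops matters for the decoders discussed in this paper.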