Maximum a Posteriori Decoding of Turbo Codes
by Bernard Sklar

Introduction

The process of turbo-code decoding starts with the formation of a posteriori probabilities (APPs) for each data bit, followed by choosing the data-bit value that corresponds to the maximum a posteriori (MAP) probability for that bit. Upon reception of a corrupted code-bit sequence, decision-making with APPs allows the MAP algorithm to determine the most likely information bit to have been transmitted at each bit time. The metrics needed to implement a MAP decoder are presented here, along with an example that illustrates how these metrics are used.

Viterbi Versus MAP

Unlike the MAP algorithm, the Viterbi algorithm (VA) does not make the APP for each data bit available; instead, the VA finds the most likely sequence to have been transmitted. There are, however, similarities in the implementation of the two algorithms. When the decoded bit-error probability, $P_B$, is small, there is very little performance difference between the MAP and Viterbi algorithms. However, at low values of bit-energy to noise-power spectral density, $E_b/N_0$, and high values of $P_B$, the MAP algorithm can outperform decoding with a soft-output Viterbi algorithm called SOVA [1] by 0.5 dB or more [2]. For turbo codes this can be very important, since the first decoding iterations can yield poor error performance. The implementation of the MAP algorithm proceeds somewhat like performing a Viterbi algorithm in two directions over a block of code bits. Once this bidirectional computation yields state and branch metrics for the block, the APPs and the MAP can be obtained for each data bit represented within the block. We describe here a derivation of the MAP decoding algorithm for systematic convolutional codes, assuming an AWGN channel model, as presented by Pietrobon [2].
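The bidirectional computation described above can be sketched in code. The following is a minimal illustration of the two-directional (forward and backward) state-metric recursions over a block, assuming the branch metrics have already been formed; the trellis size, the known starting state, and the normalization step are illustrative assumptions, not details taken from this article:

```python
import numpy as np

def forward_backward(gamma):
    """Sketch of the bidirectional state-metric recursions for MAP decoding.

    gamma[k] is an (S x S) matrix of branch metrics gamma_k(s', s) for the
    transition from state s' to state s at time k.  Returns the forward
    (alpha) and backward (beta) state metrics for each time in the block.
    """
    K, S, _ = gamma.shape
    alpha = np.zeros((K + 1, S))
    beta = np.zeros((K + 1, S))
    alpha[0, 0] = 1.0        # illustrative assumption: encoder starts in state 0
    beta[K, :] = 1.0 / S     # illustrative assumption: unterminated trellis

    for k in range(K):                       # forward pass over the block
        alpha[k + 1] = alpha[k] @ gamma[k]   # sum over predecessor states s'
        alpha[k + 1] /= alpha[k + 1].sum()   # normalize to avoid underflow

    for k in range(K - 1, -1, -1):           # backward pass over the block
        beta[k] = gamma[k] @ beta[k + 1]     # sum over successor states s
        beta[k] /= beta[k].sum()

    return alpha, beta
```

Once alpha and beta are available for every time index, the APP of each data-bit value is obtained by summing alpha * gamma * beta over the trellis transitions associated with that bit value, which is what makes the per-bit MAP decision possible.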
We start with the ratio of the APPs, known as the likelihood ratio $\Lambda(\hat{d}_k)$, or its logarithm, called the LLR, as shown below.
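In code, the likelihood ratio of the two APPs and the MAP decision it induces can be sketched as follows; the probability values passed in are placeholders standing in for quantities the decoder would compute from its state and branch metrics:

```python
import math

def llr(p1, p0):
    """Log-likelihood ratio of the APPs for one data bit:
    LLR = ln[ P(d_k = 1 | observation) / P(d_k = 0 | observation) ]."""
    return math.log(p1 / p0)

def map_decision(p1, p0):
    """MAP rule: decide 1 when the LLR is positive, 0 otherwise."""
    return 1 if llr(p1, p0) > 0 else 0
```

For example, APPs of 0.7 for a one and 0.3 for a zero give a positive LLR, so the MAP decision is 1.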