Further Results on Survivor Error Patterns for Maximum Likelihood Decoding

Jakov Snyders
Department of Electrical Engineering – Systems, Tel Aviv University
Tel Aviv 69978, Israel
snyders@eng.tau.ac.il

Abstract— Survivor error patterns for the (63,57,3) Hamming code are presented. This reduced list of error patterns enables rather efficient maximum likelihood soft decision decoding of the Hamming code and of large turbo-like codes whose constituents are the aforementioned Hamming codes. The list of survivor error patterns for the (63,57,3) Hamming code possesses certain properties not shared by the lists of survivors for smaller Hamming codes. We also demonstrate that the weight distribution is a powerful means of specifying survivor error patterns for codes other than Hamming codes.

Index Terms— soft decision decoding, error pattern, reliability, partial ordering, survivor, least reliable positions

I. INTRODUCTION

Consider a binary linear block error-correcting code C = C(n, k, d) of length n, dimension k and minimum distance d, employed to combat the disturbance when transmitting information through an AWGN communication channel. Prior to transmission some antipodal modulation is performed. Denote by c = (c_1, c_2, ···, c_n) the codeword fed into the modulator. At the receiving end of the channel demodulation is carried out. The output of the demodulator is a sequence of real numbers, denoted r = (r_1, r_2, ···, r_n). Bit-by-bit detection yields v = (v_1, v_2, ···, v_n), where v_j = 0 if r_j ≥ 0 and v_j = 1 if r_j < 0. Denote e = c + v. If e = 0 then v is declared to be the codeword that was sent. Otherwise a search is performed. The search may be based solely on the hard-detected values v. This is called hard decision decoding (and then v, rather than r, is regarded as the received word). In contrast, maximum likelihood (ML) soft decoding, such as the Viterbi algorithm, exploits all the information contained in the values r.
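The hard-detection rule above can be sketched in a few lines of Python. This is a minimal illustration; the received values r are made up for the example and are not taken from the paper.

```python
# Hard detection from demodulator outputs r: v_j = 0 if r_j >= 0, else 1.
# The reliability of each hard decision is rho_j = |r_j|.
r = [0.9, -0.2, 1.3, -0.7, 0.1]          # illustrative demodulator outputs
v = [0 if rj >= 0 else 1 for rj in r]    # bit-by-bit hard decisions
rho = [abs(rj) for rj in r]              # reliabilities of those decisions
```

Note that v and rho together carry exactly the information in r, which is the observation the paper builds on.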
Denote ρ_j = |r_j|, which is the reliability of the decision v_j. Clearly, knowledge of both v and ρ = (ρ_1, ρ_2, ···, ρ_n) amounts to knowledge of r. Indeed, it was shown [1], [2] that ML decoding can be accomplished by finding the e which minimizes ρ(e) = ∑_{j=1}^{n} e_j ρ_j and then setting c_ML = v + e. Viewed this way, ML soft decoding differs from conventional hard decoding in that the weight employed to seek the error vector is the reliability ρ(e) rather than the Hamming weight w_H(e).

A means to facilitate ML soft-decision decoding of a block code is to impose a type of ordering on the collection of error patterns, aimed at exposing a large majority of them as inadequate to win the competition. The rest of the error patterns, called survivor error patterns, are thereafter compared by numerical evaluation of their reliability weights. The error pattern required for the decoding is the survivor with the smallest reliability. This approach yields rather efficient decoding algorithms for some codes, most notably the Hamming codes and their derivatives, such as iteratively decoded product-like codes whose constituents are Hamming codes [8], [13], [15], [16].

Let H = (h_1 h_2 ··· h_n) be a check matrix of C. Denote Γ(H) = {h_1, h_2, ···, h_n}. Assume, without essential loss of generality, that d ≥ 3. Then |Γ(H)| = n, whereby we have the convenience of identifying the locations with the columns of H. The syndrome z corresponding to v is given by z = Hv^T, where the superscript T indicates transposition. It will be assumed throughout the paper that z ≠ 0. Each error pattern e satisfies z = He^T = ∑_{j=1}^{n} e_j h_j. Hence z = ∑{h_j : j ∈ supp(e)}, where supp is short for support. In view of this it is possible, and turns out to be advantageous, to represent an error pattern by the associated subset of the column vectors of Γ(H). We shall attribute the reliability ρ_j also to h_j, i.e., ρ(h_j) = |r_j|.
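The characterization of ML decoding as minimizing ρ(e) over all error patterns with syndrome z can be checked by brute force on a small code. The sketch below uses a standard check matrix of the (7,4,3) Hamming code and illustrative received values, neither of which comes from the paper; it enumerates every e with He^T = z and keeps the one of least reliability.

```python
from itertools import product

# Check matrix of the (7,4,3) Hamming code (an illustrative choice).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(x):
    """Syndrome H x^T over GF(2)."""
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

r = [0.9, -0.2, 1.3, -0.7, 0.1, 0.8, -1.1]   # illustrative demodulator outputs
v = [0 if rj >= 0 else 1 for rj in r]        # hard decisions
rho = [abs(rj) for rj in r]                  # reliabilities
z = syndrome(v)

# ML soft decoding: among all e with syndrome z, minimize rho(e) = sum e_j rho_j.
best = min((e for e in product((0, 1), repeat=7) if syndrome(e) == z),
           key=lambda e: sum(ej * rj for ej, rj in zip(e, rho)))
c_ml = [(vj + ej) % 2 for vj, ej in zip(v, best)]
assert syndrome(c_ml) == (0, 0, 0)           # c_ML is indeed a codeword
```

With these values the winning error pattern flips two low-reliability bits (total weight 0.8) rather than the single bit a hard-decision decoder would flip (weight 0.9), which is exactly the distinction between ρ(e) and w_H(e) drawn above.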
Further, for a set Φ ⊂ Γ(H) we define ρ(Φ) to be the sum of the reliabilities of its members. With this terminology, ML soft decoding may be stated as follows: find a subset Φ of Γ(H) with minimum reliability and then invert the bits of v at the positions specified by Φ.

Obviously, a column set Φ with least reliability is necessarily a linearly independent set. A linearly independent subset of Γ(H) whose elements sum up to z will be called a pattern. Thus we rephrase the previously outlined decoding procedure as follows: find a pattern Φ with minimum reliability and then invert the bits of v at the positions specified by Φ. We have |Φ| ≤ m, where m = n − k is the codimension of C. The dimension of Ls(Γ(H)) is m, where Ls is short for linear span.

Elimination of error patterns based on ordering of the reliabilities was considered in [1] and [4]. In this paper we present the list of survivor patterns for the (63,57,3) Hamming code. The list is useful for accomplishing iterative (turbo) decoding of product codes that comprise (63,57,3) and (64,57,4) Hamming codes as constituent codes. We shall demonstrate that extensive elimination of error
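The column-subset view, together with the bound |Φ| ≤ m, can be sketched directly: it suffices to search over subsets of Γ(H) of size at most m whose GF(2) sum equals z. The code below reuses the illustrative (7,4,3) Hamming columns and reliabilities from before (m = 3 here); none of these values come from the paper. Linear independence need not be checked explicitly, since a dependent subset summing to z always contains a strictly cheaper subset that also sums to z, consistent with the paper's remark that a least-reliability column set is necessarily independent.

```python
from itertools import combinations

# Columns h_1, ..., h_7 of the (7,4,3) Hamming check matrix (illustrative).
cols = [(1,0,0), (0,1,0), (1,1,0), (0,0,1), (1,0,1), (0,1,1), (1,1,1)]
rho  = [0.9, 0.2, 1.3, 0.7, 0.1, 0.8, 1.1]   # illustrative reliabilities
z, m = (1, 0, 0), 3                          # syndrome and codimension n - k

def gf2_sum(idxs):
    """GF(2) sum of the selected columns."""
    return tuple(sum(cols[i][b] for i in idxs) % 2 for b in range(3))

# All column subsets Phi with |Phi| <= m whose members sum to z ...
candidates = [idxs for size in range(1, m + 1)
                   for idxs in combinations(range(7), size)
                   if gf2_sum(idxs) == z]
# ... and the one of minimum reliability rho(Phi).
phi = min(candidates, key=lambda idxs: sum(rho[i] for i in idxs))
```

Restricting the search to patterns, i.e., subsets of size at most m, is what the survivor lists studied in this paper compress much further.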