2072 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 6, JUNE 2007
A Partial Ordering of General Finite-State Markov
Channels Under LDPC Decoding
Andrew W. Eckford, Member, IEEE, Frank R. Kschischang, Fellow, IEEE, and Subbarayan Pasupathy, Fellow, IEEE
Abstract—A partial ordering on general finite-state Markov
channels is given, which orders the channels in terms of proba-
bility of symbol error under iterative estimation decoding of a
low-density parity-check (LDPC) code. This result is intended
to mitigate the complexity of characterizing the performance of
general finite-state Markov channels, which is difficult due to the
large parameter space of this class of channel. An analysis tool,
originally developed for the Gilbert–Elliott channel, is extended
and generalized to general finite-state Markov channels. In doing
so, an operator is introduced for combining finite-state Markov
channels to create channels with larger state alphabets, which are
then subject to the partial ordering. As a result, the probability
of symbol error performance of finite-state Markov channels
with different numbers of states and wide ranges of parameters
can be directly compared. Several examples illustrating the use
of the techniques are provided, focusing on binary finite-state
Markov channels and Gaussian finite-state Markov channels.
Furthermore, this result is used to order Gilbert–Elliott channels
with different marginal state probabilities, which was left as an
open problem by previous work.
Index Terms—Estimation-decoding, iterative decoding, low-
density parity-check (LDPC) codes, Markov channels, partial
ordering.
I. INTRODUCTION
FINITE-STATE Markov channels are binary-input channels, each with a hidden channel state sequence that is
generated by a finite-state Markov chain, where the state of
the Markov chain determines the instantaneous behavior of the
channel. (Throughout this paper, we will take the finite-state
nature of the channel to be implicit, and simply refer to Markov
channels.) These channels have many applications, such as
approximating wireless channels with slow fading, or modeling
other correlated noise effects. Capacity and coding for these
channels was discussed in [1].
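As a concrete illustration of this definition (not part of the original development), the GE channel discussed below has two states, each acting as a binary symmetric channel with its own crossover probability. The following sketch simulates such a channel; the parameter names (`p_gb`, `p_bg`, `e_good`, `e_bad`) are illustrative, not notation from this paper.

```python
import random

def simulate_ge_channel(bits, p_gb, p_bg, e_good, e_bad, seed=0):
    """Pass input bits through a two-state (Gilbert-Elliott) Markov channel.

    A hidden state alternates between 'good' and 'bad' according to a
    two-state Markov chain; the current state sets the instantaneous
    crossover probability of a binary symmetric channel.
    """
    rng = random.Random(seed)
    state = 'good'
    out = []
    for b in bits:
        # state-dependent crossover probability
        e = e_good if state == 'good' else e_bad
        out.append(b ^ (rng.random() < e))
        # Markov state transition before the next symbol
        if state == 'good':
            state = 'bad' if rng.random() < p_gb else 'good'
        else:
            state = 'good' if rng.random() < p_bg else 'bad'
    return out

tx = [0] * 10000
rx = simulate_ge_channel(tx, p_gb=0.01, p_bg=0.1, e_good=0.001, e_bad=0.3)
print(sum(rx))  # number of symbol errors
```

Because the chain tends to dwell in each state, symbol errors arrive in bursts rather than independently; this correlation is exactly the memory that estimation-decoding can exploit.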
A low-density parity-check (LDPC) code is a type of block
code with a very sparse parity-check matrix [2]. Using the
sum–product algorithm (SPA) [3] for decoding, it is well known that LDPC codes have excellent performance in memoryless channels. The SPA can be extended to obtain natural estimation-decoding strategies in channels with memory, which from recent work are known to have excellent performance in, for example, partial-response channels [4], [5], and the Gilbert–Elliott (GE) channel, the simplest type of binary-output Markov channel [6], [7]. Recent work has also focused on the applicability of LDPC codes to source compression, especially for Slepian–Wolf encoding (see, e.g., [8], [9]). Markov sources, which are analogous to Markov channels, have been proposed to model temporal correlations in data, and require many of the same approaches as Markov channels [10]–[12].

Manuscript received May 14, 2003; revised March 4, 2007. The material in this paper was presented at the 2003 IEEE International Symposium on Information Theory, Yokohama, Japan, June/July 2003.
A. W. Eckford was with The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada. He is now with the Department of Computer Science and Engineering, York University, Toronto, ON M3J 1P3, Canada (e-mail: aeckford@yorku.ca).
F. R. Kschischang and S. Pasupathy are with The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada (e-mail: frank@comm.utoronto.ca; pas@comm.utoronto.ca).
Communicated by A. Kavčić, Associate Editor for Detection and Estimation.
Digital Object Identifier 10.1109/TIT.2007.896877
Partial orderings of communication channels can be traced
back to Shannon [13], who described a partial ordering of
memoryless channels using general codes. For Markov chan-
nels, some analytical performance results exist for decoding
using classical codes, such as burst error correcting codes [14]
and convolutional codes [15]. Knowledge of the performance
of LDPC decoding may be obtained using Monte Carlo sim-
ulation, or using some analytical technique such as density
evolution [16].
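To give a flavor of the density-evolution technique cited above, the following sketch runs the classical density-evolution recursion for a (3,6)-regular LDPC ensemble on the memoryless binary erasure channel, x_{l+1} = ε(1 − (1 − x_l)^5)^2. This is a much simpler setting than the Markov channels studied in this paper, and is offered purely as an illustration of the method.

```python
def de_erasure(eps, dv=3, dc=6, iters=200):
    """Density evolution for a (dv, dc)-regular LDPC ensemble on the
    binary erasure channel with erasure probability eps: track the
    erasure probability of variable-to-check messages per iteration."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

print(de_erasure(0.40))  # below the (3,6) threshold: converges toward 0
print(de_erasure(0.45))  # above the threshold: stalls at a positive fixed point
```

For this ensemble the threshold is approximately 0.4294: below it the recursion drives the message erasure probability to zero, and above it decoding stalls at a positive fixed point.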
Markov channels have large parameter spaces: the GE channel, with only two channel states, is already characterized by four parameters, and, as we discuss in Section II, the number of parameters required to completely describe a Markov channel grows with the size of its state alphabet.
Since contemporary analysis techniques can only examine one
channel at a time, it is complicated to analyze large classes or
families of Markov channels. By contrast, many memoryless
channels, such as the binary symmetric channel (BSC) and
the additive Gaussian channel, are characterized by a single
parameter. The analysis of these single-parameter memoryless
channels is simplified because some channels are known to be
degraded with respect to others. For instance, for a memoryless
Gaussian channel, we immediately know that channels with a
given noise variance are better than any channel with greater
variance. Analogously, to simplify the analysis of Markov
channels, if the performance of an LDPC code (in terms of
probability of symbol error) on a given channel is known, that knowledge should immediately imply something about the performance over a “region” of neighboring channels. To that
end, in this paper we give a method of recursively constructing
general Markov channels, and show that this construction
results in a partial ordering in terms of probability of symbol
error under iterative LDPC decoding.
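The single-parameter degradation argument for the memoryless Gaussian channel can be checked numerically. The sketch below uses uncoded antipodal signaling, for which the symbol-error probability Q(1/σ) = (1/2) erfc(1/(σ√2)) is strictly increasing in the noise standard deviation σ; this is an illustrative memoryless computation, not the coded Markov-channel setting of this paper.

```python
import math

def bpsk_error_prob(sigma):
    """Symbol-error probability of uncoded antipodal (+/-1) signaling in
    additive Gaussian noise with standard deviation sigma:
    P_e = Q(1/sigma) = 0.5 * erfc(1 / (sigma * sqrt(2)))."""
    return 0.5 * math.erfc(1.0 / (sigma * math.sqrt(2.0)))

# Larger noise variance gives a strictly worse channel, so a performance
# guarantee at one sigma carries over to every smaller sigma.
for s in (0.5, 0.8, 1.2):
    print(s, bpsk_error_prob(s))
```

The partial ordering developed in this paper plays an analogous role for Markov channels, where no single scalar parameter orders the channels.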
The results in the present paper are largely a generalization
of previous results from [6]. In that paper, three results were
given to order GE channels in terms of their iterative decoding
0018-9448/$25.00 © 2007 IEEE