Distributed Source Coding Using Raptor Codes for Hidden Markov Sources
M. Fresia¹, L. Vandendorpe², and H. V. Poor¹
¹ Princeton University, Princeton NJ, USA; ² Université Catholique de Louvain, Louvain-la-Neuve, Belgium.
Interest in distributed source coding (DSC) has increased in recent years due to the
development of wireless networks. In this paper we propose a solution based on a new
class of rateless codes, Raptor codes [1]. In practical applications, where the data
length and the correlation between the sources may vary, rateless codes adapt naturally:
a single codeword of suitable length is generated. Raptor codes have already been
considered in [2] for the lossless compression of a single source.
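The rateless idea can be illustrated with a small sketch. The following is a toy LT-style encoder (a hypothetical illustration, not the paper's actual code, and with an arbitrary made-up degree distribution): each output symbol is the XOR of a random subset of input bits, so the encoder can keep emitting symbols until the decoder has enough.

```python
import random

def rateless_parity(bits, n_out, degree_dist, seed=0):
    """Toy LT-style encoder: each output symbol is the XOR of a random
    subset of the input bits, with the subset size drawn from a degree
    distribution {degree: probability}. Illustration only."""
    rng = random.Random(seed)
    degrees = list(degree_dist)
    weights = [degree_dist[d] for d in degrees]
    out = []
    for _ in range(n_out):
        d = rng.choices(degrees, weights=weights)[0]   # pick a degree
        subset = rng.sample(range(len(bits)), d)       # pick d source bits
        out.append(sum(bits[i] for i in subset) % 2)   # XOR them
    return out

# Emit as many symbols as needed -- here 500 parities for 1000 source bits,
# i.e. the rate is chosen at transmission time, not at design time.
rng = random.Random(1)
source = [rng.randrange(2) for _ in range(1000)]
parity = rateless_parity(source, 500, {1: 0.1, 2: 0.5, 3: 0.4})
```

A real Raptor code adds an outer precode and a carefully optimized degree distribution; the point of the sketch is only the "generate as many symbols as required" property.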
The contribution of this work is to show how Raptor codes can be adapted to the DSC
problem when the sources have memory that can be modeled by hidden Markov processes
(HMPs). The sources themselves are modeled as HMPs, while the correlation between
them is memoryless. We consider a systematic version of Raptor codes.
In this way, the solution can be designed using a “parity-like” approach. We modify the
decoding process by implementing a message-passing algorithm that runs between the
decoders at each iteration. To exploit the source redundancy, we use the trellis describing
the HMP as an additional decoder that iteratively exchanges information with the Raptor
decoder. This yields better performance in terms of bit error rate (BER). The simulations are
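The role the trellis plays can be sketched as a forward-backward (BCJR-style) pass that turns the per-bit probabilities coming from the Raptor decoder into smoothed posteriors. This is a simplified sketch under an assumed parameterization (transition matrix A, emission matrix B), not the paper's exact algorithm:

```python
def hmp_soft_decode(prior1, A, B):
    """Forward-backward pass over a hidden Markov source.
    prior1[i]: prior P(bit_i = 1), e.g. supplied by the Raptor decoder;
    A[s][t]:   hidden-state transition probabilities;
    B[s][b]:   probability that state s emits bit b.
    Returns the posterior P(bit_i = 1) for every position."""
    n, S = len(prior1), len(A)
    # likelihood of each state given the soft evidence on its emitted bit
    ev = [[sum(B[s][b] * (prior1[i] if b else 1 - prior1[i]) for b in (0, 1))
           for s in range(S)] for i in range(n)]
    # forward pass: pred[i][s] = P(state_i = s | evidence before position i)
    pred = [[1.0 / S] * S]          # uniform initial state distribution
    for i in range(n - 1):
        a = [pred[i][s] * ev[i][s] for s in range(S)]
        z = sum(a)
        a = [x / z for x in a]
        pred.append([sum(a[s] * A[s][t] for s in range(S)) for t in range(S)])
    # backward pass: beta[i][s] proportional to P(evidence after i | state_i = s)
    beta = [[1.0] * S for _ in range(n)]
    for i in range(n - 2, -1, -1):
        b_ = [sum(A[s][t] * ev[i + 1][t] * beta[i + 1][t] for t in range(S))
              for s in range(S)]
        z = sum(b_)
        beta[i] = [x / z for x in b_]
    # posterior of each bit, combining trellis information with the prior
    post = []
    for i in range(n):
        p = [sum(pred[i][s] * beta[i][s] * B[s][b] *
                 (prior1[i] if b else 1 - prior1[i]) for s in range(S))
             for b in (0, 1)]
        post.append(p[1] / (p[0] + p[1]))
    return post
```

In a full iterative scheme one would pass back extrinsic information (the posterior with the local prior divided out) rather than the posterior itself, to avoid recirculating the Raptor decoder's own messages.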
[Fig. 1: (a) BER versus conditional entropy rate H(X|Y); (b) BER versus compression ratio. Curves: HMP1, HMP2, MP1, MP2, and the Shannon limit.]
carried out considering different hidden and non-hidden (MP) 2-state Markovian models
(HMP1 with a_{0,0} = a_{1,1} = 0.35, b_{0,0} = b_{1,1} = 0.05, and HMP2 with
a_{0,0} = a_{1,1} = 0.75, b_{0,0} = b_{1,1} = 0.05; MP1 with a_{0,0} = a_{1,1} = 0.3
and MP2 with a_{0,0} = a_{1,1} = 0.65).
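Such a two-state HMP source can be sampled with a few lines. The mapping below of a_{i,j} to self-transition probabilities and b_{i,j} to emission probabilities is our assumed reading of the notation, so treat it as a sketch:

```python
import random

def sample_hmp(n, a_same, b_same, seed=0):
    """Draw n bits from a 2-state hidden Markov process. Assumed
    parameterization (a guess at the paper's notation): the chain stays in
    its current state with probability a_same, and state s emits bit s with
    probability b_same (and bit 1-s otherwise)."""
    rng = random.Random(seed)
    state = rng.randrange(2)
    bits = []
    for _ in range(n):
        bits.append(state if rng.random() < b_same else 1 - state)
        if rng.random() >= a_same:      # leave the current state
            state = 1 - state
    return bits

# HMP1 from the simulations: a00 = a11 = 0.35, b00 = b11 = 0.05
x = sample_hmp(10000, 0.35, 0.05, seed=42)
```

By symmetry of the parameters the marginal bit distribution is uniform; the memory lives entirely in the hidden state sequence.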
In Fig. 1a, we fix the compression ratio at 2 and estimate the performance for different
values of the conditional entropy rate H(X|Y), where X and Y denote the correlated
sources. To obtain H(X|Y) = 1/2 for all the different models, the correlation between
the sources must change. In Fig. 1b, we show the results obtained by using our solution
in rateless mode: we set the conditional entropy rate to H(X|Y) = 1/2 and estimate the
performance as the compression ratio decreases. It is interesting to note that the
performance of the different models considered is similar: using a fixed compression
ratio (Fig. 1a), we obtain a low BER when H(X|Y) ≈ 0.4, while fixing H(X|Y) (Fig. 1b),
we obtain a low BER when the compression ratio is ∼ 1.7. This means that the proposed
solution depends only on the actual value of the conditional entropy rate.
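To make the H(X|Y) = 1/2 operating point concrete: if the memoryless correlation is modeled as a binary symmetric channel with crossover probability p (an assumption on our part; the paper does not spell out the correlation model), then H(X|Y) equals the binary entropy h(p), and the target conditional entropy fixes p:

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def crossover_for_entropy(target, lo=1e-12, hi=0.5, iters=100):
    """Invert h2 on [0, 1/2] by bisection: find p with h2(p) = target
    (h2 is strictly increasing on this interval)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if h2(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = crossover_for_entropy(0.5)   # p ≈ 0.11 gives H(X|Y) = 1/2 bit
```

By the Slepian-Wolf bound, a compression ratio of 2 (rate 1/2 bit per source bit) is achievable precisely when H(X|Y) ≤ 1/2, which is consistent with the thresholds visible in Fig. 1.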
REFERENCES
[1] O. Etesami and A. Shokrollahi, “Raptor codes on binary memoryless symmetric channels,” IEEE Transactions on Information Theory, vol. 52, pp. 2033–2051, 2006.
[2] G. Caire, S. Shamai, A. Shokrollahi, and S. Verdú, “Fountain codes for lossless data compression,” in DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 68, pp. 1–20, December 2005.
This research was supported in part by the U.S. National Science Foundation under Grants ANI-03-38807 and CNS-06-25637.
Data Compression Conference
1068-0314/08 $25.00 © 2008 IEEE
DOI 10.1109/DCC.2008.89