IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 24, NO. 1, JANUARY 2014, p. 127

Correlation Noise-Based Unequal Error Protected Rate-Adaptive Codes for Distributed Video Coding

Jeffrey J. Micallef, Graduate Student Member, IEEE, Reuben A. Farrugia, Member, IEEE, and Carl James Debono, Senior Member, IEEE

Abstract—Distributed video coding (DVC) is a paradigm that can shift most of the computationally intensive tasks from the encoder to the decoder. This allows for the design of low-complexity encoders that can be deployed in devices equipped with limited resources. However, the compression efficiency obtained using practical DVC codecs is still distant from that of traditional predictive video coding schemes such as H.264/AVC. One limitation of existing DVC architectures is that they consider the correlation noise to be randomly distributed across the whole video frame. This paper shows that the Wyner–Ziv (WZ) values that lie closer to the endpoints of the quantization intervals have a higher probability of producing incorrect side information (SI) predictions. This knowledge can be exploited to design rate-adaptive low-density parity-check accumulate codes that provide a higher level of protection to the unreliable SI bits. Experimental results show that the proposed scheme can reduce the WZ bit-rate by up to 13% relative to the state-of-the-art DISCOVER architecture when interpolation techniques are considered, and can improve quality by up to 0.9 dB when extrapolation techniques are used.

Index Terms—Correlation noise modeling, distributed video coding (DVC), low-density parity-check codes, Wyner–Ziv (WZ) video coding.

I. Introduction

SEVERAL devices, such as wireless mobile cameras and endoscopy capsules, have limited hardware resources and require lightweight encoding capabilities. However, traditional video coding schemes such as H.264/AVC employ complex motion compensation (MC) techniques to suppress temporal redundancies at the encoder.
This makes the encoder very computationally intensive [1] and therefore inappropriate for such applications. The distributed video coding (DVC) paradigm relies on the Slepian–Wolf (SW) [2] and Wyner–Ziv (WZ) [3] theorems to minimize encoding complexity. These theorems prove that no increase in bandwidth is required when two jointly Gaussian sources are encoded separately, as long as they are decoded together. This allows the computationally expensive task of exploiting the source statistics to be shifted from the encoder to the decoder, facilitating the implementation of lightweight encoders. Furthermore, the low-cost solution offered by DVC is suitable for systems with a large number of encoders, such as video surveillance and multiview video capture.

In the DVC paradigm, the decoder predicts the WZ frames using motion-compensated temporal interpolation (MCTI) between the adjacent key frames. The dependency error between the predicted frame, known as the side information (SI), and the original WZ frame is then modeled as a virtual channel. The WZ frames can be recovered using channel coding techniques, where the encoder transmits only the parity information required to correct the SI.

Manuscript received October 11, 2012; revised April 17, 2013; accepted May 27, 2013. Date of publication July 16, 2013; date of current version January 3, 2014. This work was supported in part by the Strategic Educational Pathways Scholarship Scheme (Malta). The scholarship is part-financed by the European Union—European Social Fund (ESF 1.25). This paper was recommended by Associate Editor F. Wu. The authors are with the Department of Communications and Computer Engineering, University of Malta, Msida, Malta (e-mail: jeffrey.micallef@ieee.org; reuben.farrugia@um.edu.mt; c.debono@ieee.org). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TCSVT.2013.2273630
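The virtual channel between the SI and the original WZ frame is commonly modeled as additive Laplacian noise. The following sketch illustrates this idea only; the function names and the synthetic residual are assumptions for illustration, not the codec's actual implementation. It fits the Laplacian scale parameter to an SI residual and evaluates the conditional probability that the channel decoder would use as soft input:

```python
import numpy as np

def fit_laplacian_alpha(residual):
    # ML estimate of the Laplacian parameter alpha = 1/b for a
    # zero-mean residual, where b is the mean absolute deviation.
    b = np.mean(np.abs(residual))
    return 1.0 / b

def conditional_prob(x, y, alpha):
    # f(x | y) under the Laplacian model f(d) = (alpha/2) exp(-alpha |d|),
    # with d = x - y the difference between the WZ value and its SI.
    return 0.5 * alpha * np.exp(-alpha * np.abs(x - y))

# Synthetic residual between co-located WZ and SI values (scale b = 4).
rng = np.random.default_rng(0)
residual = rng.laplace(0.0, 4.0, size=10_000)
alpha = fit_laplacian_alpha(residual)  # close to 1/4 for this data
```

The density peaks when the SI matches the WZ value (`conditional_prob(y, y, alpha)` equals `alpha / 2`), so larger residuals translate into weaker soft evidence at the decoder.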
Compression efficiency in DVC is thus influenced by the performance of the channel codes used and the statistical modeling of the side information.

The first practical WZ coding framework was presented in [4], using syndrome decoding techniques. This was used to compress correlated images in [5] and [6] and for video compression in [7]. Later on, this concept was extended to use more sophisticated codes, where the statistics of the virtual channel could be exploited during the decoding process to improve performance [8]. The authors in [9]–[14] used Turbo-codes to approach the SW bounds, while [15] and [16] showed that low-density parity-check (LDPC) codes are also suitable for distributed source coding.

Rate-compatible punctured Turbo-codes (RCPT) were considered for DVC applications in [17], developing the first variant of the Stanford architecture. However, punctured LDPC codes [18]–[20] performed poorly due to the poor graph connections that hinder the decoding process at lower rates [21]. Better rate-adaptive LDPC codes, known as LDPC accumulate (LDPCA) codes, were presented in [22]. Here, the LDPC code in [18]–[20] serves as the base code, and the higher rate codes are obtained by recursively splitting the base code until the required rate is achieved. The structure of these codes was improved in [23] by considering the highest rate code as the base code, so that it can be optimized using graph conditioning techniques [24], [25], while the lower rate codes are obtained through check-node merging. Conversely, the authors in [26] considered optimizing the degree distribution over a specific entropy range. These codes have replaced Turbo-codes in state-of-the-art DVC codecs, such as the DISCOVER codec [27], [28], due to their higher coding efficiency.

1051-8215 © 2013 IEEE
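The accumulate structure behind LDPCA codes can be illustrated with a toy example. This is a hedged sketch under simplifying assumptions (the small random parity-check matrix and helper names are illustrative, not the construction of [22]): the encoder accumulates the base-code syndromes, and transmitting every `step`-th accumulated value is equivalent to merging `step` consecutive check equations, which is how the lower rate codes arise:

```python
import numpy as np

def ldpca_encode(H, x):
    # Syndrome of source bits x under parity-check matrix H over GF(2),
    # then accumulated: a[i] = s[0] ^ s[1] ^ ... ^ s[i].
    s = H.dot(x) % 2
    return np.cumsum(s) % 2

def merged_checks(acc, step):
    # Keeping every `step`-th accumulated syndrome and differencing
    # (mod 2, i.e. XOR) recovers XOR-sums of `step` consecutive base
    # syndromes: the check-node-merged, lower rate code.
    kept = acc[step - 1::step]
    return np.diff(np.concatenate(([0], kept))) % 2

# Toy base code: 8 checks on 16 source bits.
rng = np.random.default_rng(1)
H = rng.integers(0, 2, size=(8, 16))
x = rng.integers(0, 2, size=16)
acc = ldpca_encode(H, x)
half_rate = merged_checks(acc, 2)  # 4 merged checks instead of 8
```

Sending all accumulated values (`step = 1`) reproduces the full-rate base syndromes, while larger `step` values transmit fewer bits at the cost of sparser, merged constraints, which is the rate-adaptation mechanism the text describes.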