Perceptually Adaptive Joint Deringing-Deblocking Filtering for Scalable Video Transmission over Wireless Networks

Shuai Wan, Marta Mrak, Naeem Ramzan and Ebroul Izquierdo
Multimedia and Vision Group, Queen Mary, University of London, Mile End Road, E1 4NS, London, UK
{shuai.wan, marta.mrak, naeem.ramzan, ebroul.izquierdo}@elec.qmul.ac.uk

Abstract: Video transmission over low bit-rate channels, such as wireless networks, requires dedicated filtering during decoding to substantially enhance the final perceptual video quality. For that reason, deringing and deblocking modules are essential components of decoders in wireless video transmission systems. Aiming to improve the visual quality of decoded video, this paper proposes a new perceptually adaptive joint deringing-deblocking filtering technique for scalable video streams. The proposed approach is specially designed to deal with the artifacts inherent to transmission over very low bit-rate channels, specifically wireless networks. It considers the update step of motion-compensated temporal filtering in an in-loop filtering architecture. The proposed architecture integrates three different filtering modules to deal with low-pass, high-pass and after-update frames, respectively. Since ringing and blocking artifacts are visually annoying, important characteristics of the human visual system are considered in the bilateral filtering model used. Here, the amount of filtering is adjusted to the perceptual distortion by integrating a human visual system model based on luminance, activity and temporal masking. Furthermore, the filter strength is adaptively tuned according to the number of discarded bit-planes, which in turn depends on the channel bit-rate and the channel error conditions. As a consequence, the
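To make the core filtering operation concrete, the following is a minimal sketch of a bilateral filter of the kind the abstract refers to. It is an illustrative NumPy implementation, not the paper's actual filter: the `strength` parameter is a hypothetical stand-in for the perceptual adaptation that the proposed method derives from the HVS masking model and the number of discarded bit-planes.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=10.0, strength=1.0):
    """Bilateral filter: each output pixel is a weighted mean of its
    neighborhood, where the weight is the product of a spatial Gaussian
    (closeness) and a range Gaussian (intensity similarity), so edges
    are preserved while flat regions are smoothed.

    `strength` scales the range-kernel width; a larger value smooths
    more aggressively. In the proposed method this knob would instead be
    driven by luminance/activity/temporal masking and the number of
    discarded bit-planes (hypothetical simplification here)."""
    h, w = img.shape
    pad = np.pad(img.astype(np.float64), radius, mode="edge")

    # Precompute the spatial (domain) weights over the window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    sr = sigma_r * strength
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            center = pad[i + radius, j + radius]
            # Range (photometric) weights: penalize intensity differences.
            rng = np.exp(-(patch - center)**2 / (2.0 * sr**2))
            weights = spatial * rng
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

A per-region `strength` map, rather than the single scalar used in this sketch, is what allows the filtering to be perceptually adaptive: strongly masked regions can tolerate heavier smoothing without visible detail loss.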