Editorial

Special issue on Image and Video Quality Assessment

Quality assessment and control of images and videos, from the input (capture device) to the final output (the display and its viewing environment) presented to the human viewer, is essential for image and video applications and services. PSNR and other related measures have long been the most popular quality criteria used by engineers and researchers to evaluate and optimize the performance of digital image and video processing schemes. However, these simple-to-compute measures are not in good agreement with human visual quality judgment. There is therefore an established need for efficient Image and Video Quality Assessment (IQA/VQA) schemes.

Over the past two decades, objective image and video quality assessment methods have been studied extensively, and many criteria have been proposed. Depending on the application and the type of information accessible, IQA/VQA methods can be classified into three types: Full Reference (FR) metrics, Reduced Reference (RR) metrics, and No Reference (NR) metrics. All three types of visual quality metrics are now being considered for standardization by various groups, including the Video Quality Experts Group (VQEG) for video and JPEG Advanced Image Coding (AIC) for images. Even though improvements over PSNR have been achieved, there remains a need for more efficient and generic IQA/VQA algorithms that are more robust both to image and video content and to the different types of impairments introduced by the degrading systems.

The aim of this special issue is to provide an overview of state-of-the-art IQA/VQA methods and to present new developments in several directions. The issue opens with an invited paper that surveys an area of great current research interest, namely NR quality assessment. It is followed by another survey paper, which examines audiovisual quality assessment. The next two papers each propose a novel image quality metric, one based on gradient profiles and the other on structural similarity.
They are followed by a paper on perceptual deblocking. Video quality is the topic of the last two papers, one investigating spatio-temporal interactions in VQA and the other the impact of visual attention on VQA.

The invited paper by S.S. Hemami and A.R. Reibman is a survey on the topic of ‘‘No-reference image and video quality estimation: Applications and human-motivated design.’’ Their paper proposes a three-stage framework for no-reference quality estimators that encompasses the range of potential use scenarios and allows knowledge of the human visual system to be incorporated throughout. It also surveys the measurement stage of the framework, considering methods that rely on the bitstream, on pixels, or on both. By exploring the accuracy requirements of potential uses, as well as evaluation criteria for stress-testing a quality metric, the paper sets the stage for the IQA/VQA community to make substantial future improvements on the challenging problem of NR quality estimation.

The paper ‘‘Perception-based quality assessment for audio–visual services: A survey’’ by J. You, U. Reiter, M.M. Hannuksela, M. Gabbouj, and A. Perkis addresses the important issue of joint audio–visual quality assessment. While many studies have targeted audio and video quality assessment separately, fundamental research on multi-modal perception is required to better understand the mutual influence between auditory and visual stimuli. The paper reviews subjective quality assessment methodologies for audio–visual signals and surveys perception-based audio–visual quality metrics.

The paper ‘‘No-reference perceptual image quality metric using gradient profiles for JPEG2000’’ by L. Liang, S. Wang, J. Chen, S. Ma, D. Zhao, and W. Gao deals with one of the major difficulties in NR quality assessment: certain features of natural images can be confused with artifacts.
The authors tackle this problem using statistical information on image gradient profiles and propose a novel quality metric for JPEG2000 images. The key part of the metric is a histogram representing the sharpness distribution of the gradient profiles, from which a blur metric is derived that is insensitive to inherently blurred structures in natural images. Furthermore, a ringing metric is defined based on the ringing visibility of regions associated with the gradient profiles. The combined model is robust to various types of image content and achieves performance competitive with state-of-the-art metrics.

The contribution by C. Li and A.C. Bovik on ‘‘Content-partitioned structural similarity index for image quality

Signal Processing: Image Communication 25 (2010) 467–468. doi:10.1016/j.image.2010.07.001