Real-time, color image barrel distortion removal

Henryk Blasinski*, Wei Hai, Frantz Lohier
Logitech Inc., 6505 Kaiser Drive, Fremont, CA
*h.blasinski@stanford.edu, wei@plantronics.com, frantz@lohier.com

Abstract—This paper describes a new hardware architecture for barrel distortion correction in a color video stream. Such distortion is omnipresent in images acquired with large field of view optics: wide-angle or fish-eye lenses. The designed platform is composed of a standard image sensor, a USB Video Class ASIC, a low-cost FPGA, and an SDRAM memory chip. Image processing algorithms are implemented in the FPGA, which is inserted between the sensor and the ASIC. The FPGA is connected to an external SDRAM in which a frame buffer is implemented. Barrel distortion is modeled using a polynomial relationship between the corrected and distorted image spaces. A combinational logic circuit at the frame buffer output ensures the correct ordering of luminance and chrominance bytes in the data stream. The proposed design is capable of removing geometric distortion from 640 × 480 pixel images at a rate of 30 frames per second. Colors in the reconstructed images are within ΔE = 2 of the originals in the CIELAB color space.

I. INTRODUCTION

Digital cameras have become ubiquitous in today's world. They are present not only in devices dedicated to image acquisition, such as single-lens reflex cameras, but have also migrated into many other objects: mobile phones, computers, tablets, and even cars. Along with this popularization came novel applications, for example augmented reality or telepresence. However, the mid-range field of view (FOV) of current cameras, typically about 60°, significantly limits the user experience associated with these applications. Note that 60° of the visual field covers approximately 1/3 of the human visual system's FOV. This problem may be partially solved by using special optics, such as wide-angle or fish-eye lenses.
Unfortunately, these lenses introduce geometric distortion into the image, which seriously impacts perceptual image quality. Barrel distortion takes straight lines from the real world and renders them as curves in the image.

Past research into barrel distortion removal has focused on two aspects. The first describes mathematical models approximating the nature of this phenomenon and proposes image restoration algorithms [1]–[5]; computational complexity was rarely analyzed, though. The second addresses efficient, real-time computation on dedicated hardware platforms [6]–[9]. These implementations vary in terms of logic requirements and frame rates, often achieving good performance; however, only grayscale images were analyzed.

H.B. is currently with Stanford University, CA; W.H. is currently with Plantronics Inc., CA; F.L. is currently with Kudelski Group.

In many commercial applications, color is represented in a luminance plus subsampled chrominance format, for example YUV 4:2:2. This means that the spatial sampling frequency of the color information is two times lower, so chrominance data is shared between neighboring pixels. Such a solution significantly lowers the bandwidth requirement but complicates image manipulation. Recently, a real-time barrel distortion correction system for color images was proposed [10], but it processed very small QQVGA (160 × 120) images, which severely limits its practical use.

This paper is organized as follows. Section II presents a high-level overview of the distortion model used. Section III describes the proposed architecture; it is followed by experimental results discussed in Section IV. Conclusions are presented in Section V.

II. DISTORTION REMOVAL

In this paper we model barrel distortion as having two components: a radial one, affecting the distance R of a given point from the image center, and a tangential one, affecting the angle θ between the radius and the reference direction [11].
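To illustrate why subsampled chrominance complicates per-pixel manipulation, the following sketch packs and unpacks a row of pixels in YUYV byte order (a common YUV 4:2:2 packing). The helper functions and pixel values are illustrative only; they are not part of the paper's hardware design.

```python
# Illustrative sketch, not the paper's implementation: in YUYV packing,
# two horizontally adjacent pixels share one U and one V byte, so the
# stream for pixels (p0, p1) is [Y0, U01, Y1, V01]. Relocating a single
# pixel during distortion correction therefore cannot simply copy an
# independent byte pair; the shared chrominance must stay consistent.

def pack_yuyv(pixels):
    """Pack a list of (Y, U, V) pixels (even length) into a YUYV byte list."""
    out = []
    for i in range(0, len(pixels), 2):
        y0, u0, v0 = pixels[i]
        y1, u1, v1 = pixels[i + 1]
        # 4:2:2 halves the chroma resolution: average each pair's U and V.
        out += [y0, (u0 + u1) // 2, y1, (v0 + v1) // 2]
    return out

def unpack_yuyv(stream):
    """Recover per-pixel (Y, U, V) triples; chroma is shared pairwise."""
    pixels = []
    for i in range(0, len(stream), 4):
        y0, u, y1, v = stream[i:i + 4]
        pixels += [(y0, u, v), (y1, u, v)]
    return pixels

row = [(16, 100, 200), (32, 110, 210)]
packed = pack_yuyv(row)
# 4 bytes per 2 pixels instead of 6: a one-third bandwidth reduction vs. 4:4:4.
assert len(packed) == 2 * len(row)
```

The shared U/V bytes are what the combinational logic at the frame buffer output must keep correctly ordered when pixels are fetched along distorted coordinates.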
In general, the relationship between the corrected (R_c, θ_c) and distorted (R_d, θ_d) parameters can be expressed by m-th and n-th order polynomials:

θ_c = Σ_{k=0}^{m} a_k · θ_d^k = p(θ_d)    (1)

R_c = Σ_{k=0}^{n} b_k · R_d^k = q(R_d)    (2)

Direct implementation of this approach in hardware can be problematic due to the polar-to-rectangular conversion. For this reason, the coefficient method proposed in [10] was implemented. In brief, this method assumes that the tangential distortion is negligible (θ_c = θ_d) and relates the distorted and corrected coordinates of image pixels through a coefficient C_f. The value of this coefficient is approximated with an n-th order polynomial function of the squared pixel distance from the image center (i.e. x² + y²). We refer to the original paper for more details. In this implementation a 2nd-order polynomial was used; its coefficient values are given in Table I (numerical values taken from [10]).
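A minimal sketch of the coefficient method as summarized above: C_f is evaluated as a 2nd-order polynomial in r² = x² + y² and applied as a pure radial scaling. The function name, the VGA image center (320, 240), and the coefficient values are hypothetical placeholders, not the Table I values from [10].

```python
# Sketch of the coefficient method described in the text. The polynomial
# coefficients below are arbitrary placeholders, NOT the Table I values.
B = (1.0, -2.0e-7, 3.0e-14)   # hypothetical b0, b1, b2

def correct_pixel(xc, yc, cx=320, cy=240):
    """Map a corrected-image pixel (xc, yc) to distorted-image coordinates."""
    x, y = xc - cx, yc - cy                   # coordinates relative to center
    r2 = x * x + y * y                        # squared radius: no sqrt needed
    cf = B[0] + B[1] * r2 + B[2] * r2 * r2    # C_f = b0 + b1*r^2 + b2*r^4
    # Tangential distortion is neglected (theta_c = theta_d), so the angle
    # is preserved and both coordinates are scaled by the same factor.
    return cx + cf * x, cy + cf * y

# At the image center r^2 = 0, so C_f = b0 and the center maps to itself
# when b0 = 1; pixels far from the center are pulled inward (cf < 1 here).
assert correct_pixel(320, 240) == (320.0, 240.0)
```

Working in r² rather than r is what makes the method hardware-friendly: the square root and the polar-to-rectangular conversion of equations (1)–(2) are avoided entirely.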