Single Image Non-uniform Blur Kernel Estimation via Adaptive Basis Decomposition

Guillermo Carbajal, Universidad de la República, Uruguay
Patricia Vitoria, Universitat Pompeu Fabra, Spain
Mauricio Delbracio, Universidad de la República, Uruguay
Pablo Musé, Universidad de la República, Uruguay
José Lezama, Universidad de la República, Uruguay

Abstract

Characterizing and removing motion blur caused by camera shake or object motion remains an important task for image restoration. In recent years, removal of motion blur in photographs has seen impressive progress in the hands of deep learning-based methods, trained to map directly from blurry to sharp images. Characterization of motion blur, on the other hand, has received less attention, and progress in model-based restoration methods lags behind that of data-driven end-to-end approaches. In this paper, we propose a general, non-parametric model for dense non-uniform motion blur estimation. Given a blurry image, we estimate a set of adaptive basis kernels as well as the mixing coefficients at the pixel level, producing a per-pixel map of motion blur. This rich but efficient forward model of the degradation process allows the utilization of existing tools for solving inverse problems. We show that our method overcomes the limitations of existing non-uniform motion blur estimation methods and contributes to bridging the gap between model-based and data-driven approaches for deblurring real photographs.

1. Introduction

Motion blur results from the relative motion between the camera and the scene, which is determined by the interaction of three elements: the motion of the camera (ego-motion), the three-dimensional geometry of the scene, and the motion of objects in the scene. When the exposure time is long compared to the relative motion, the camera sensor at each point receives and accumulates light coming from different sources, producing different amounts of blur.
Psychophysical and neurological evidence shows that motion blur provides important cues for visual perception, scene understanding, and locomotion [4, 17, 40]. Besides deblurring, motion blur estimation has been successfully applied to different tasks such as scene interpretation, structure from motion, image segmentation, and uncertainty characterization of the observation [11, 19, 28].

Most non-uniform motion blur estimation methods assume a parametric model of the motion field, either by considering a global parametric form induced by camera motion [16, 18, 41, 47], or by locally modeling the motion field with linear kernels, parameterized by the length of the kernel support and its orientation [15, 23, 41, 44]. In most situations, for instance under camera shake from hand tremor, those models are not adapted to real-case scenarios [13].

To overcome these limitations, we propose a novel approach for non-parametric, dense, spatially-varying motion blur estimation based on an efficient low-rank representation of the pixel-wise motion blur kernels. More precisely, for each blurred image, a neural network estimates an image-specific set of kernel basis functions, as well as a set of pixel-wise mixing coefficients, cf. Figure 1. In this way, each pixel is assigned a unique motion blur kernel, given by the corresponding linear combination of the image-specific kernel basis functions. We show that this procedure can generate a wide range of complex motion blur kernels that are well adapted to real acquisition scenarios. To the best of our knowledge, the proposed approach is the first dense non-parametric non-uniform motion blur estimation method.

To further validate our method, we apply our estimated motion blur fields to two tasks: model-based image deblurring [6, 25, 26, 47, 49] and blur detection [14, 29, 42].
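To make the forward model concrete, the following is a minimal NumPy sketch of the low-rank blur representation described above: each pixel's kernel is a linear combination of B shared basis kernels, weighted by per-pixel mixing coefficients. Because convolution is linear, applying the per-pixel kernels is equivalent to mixing B globally-convolved copies of the sharp image. The function names, the zero-padding boundary handling, and the NumPy implementation are illustrative assumptions, not the paper's code.

```python
import numpy as np

def conv2_same(img, kernel):
    """Same-size 2-D cross-correlation with zero padding (NumPy only).
    For symmetric kernels this equals convolution; flip the kernel
    beforehand for true convolution.
    """
    K = kernel.shape[0]          # assumes an odd, square kernel
    r = K // 2
    padded = np.pad(img, r)      # zero padding; an illustrative choice
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(K):
        for j in range(K):
            out += kernel[i, j] * padded[i:i + H, j:j + W]
    return out

def blur_with_basis(sharp, basis_kernels, mixing_coeffs):
    """Synthesize a non-uniform blurry image from per-pixel kernels
    expressed as a linear combination of shared basis kernels.

    sharp:          (H, W) image
    basis_kernels:  (B, K, K) basis blur kernels
    mixing_coeffs:  (B, H, W) per-pixel coefficients (ideally summing
                    to 1 over B at every pixel)

    The kernel at pixel x is sum_b mixing_coeffs[b, x] * basis_kernels[b];
    by linearity of convolution, the blurry image equals
    sum_b mixing_coeffs[b] * conv(sharp, basis_kernels[b]),
    i.e. B image-wide convolutions instead of one kernel per pixel.
    """
    blurry = np.zeros(sharp.shape, dtype=float)
    for k, m in zip(basis_kernels, mixing_coeffs):
        blurry += m * conv2_same(sharp, k)
    return blurry
```

As a sanity check, a single delta basis kernel with unit coefficients reproduces the input image, while spatially varying coefficients blend different blur types smoothly across the image.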
We show that in both cases we achieve results comparable to those of state-of-the-art end-to-end deep learning methods on standard benchmarks of real blurred images, thereby contributing to bridging the gap between model-based and data-driven approaches.

arXiv:2102.01026v1 [cs.CV] 1 Feb 2021