On the Use of Quasi-arithmetic Means for the Generation of Edge Detection Blending Functions

C. Lopez-Molina, J. Fernandez, A. Jurio, M. Galar, M. Pagola and B. De Baets

Abstract— The edge detection process can be broken down into four basic transformations, modifying the image from the original representation to the final edges one. The adoption of this framework makes the process far more understandable, and offers a starting point for the combination and comparison of different edge detection methods. In this work we analyze the role of the third of these transformations, the blending, in which the edge features are combined to obtain the edginess values. This work studies the use of quasi-arithmetic means for the combination of the edge features. Moreover, we show results obtained with different operators on real images, in order to illustrate the importance of the blending phase in the edge detection process. The results illustrate the impact of the selected function on the final edges.

I. INTRODUCTION

The process of edge detection in an image has always lacked a formal, widely accepted structure. Indeed, it is hard to find a definition of what an edge is, the Canny constraints [7] being the most widely recognized approach. Some attempts have been made to characterize, at least, the different stages of the problem. An early mention of the problem was made by Torre and Poggio [29], who stated that "the goal cannot be reached in a single step". Bezdek et al. [5] introduced the first mathematical breakdown structure of the problem. It aimed to embrace all the previous work, as well as a wide variety of imaging data. This work studies the role of quasi-arithmetic means in the blending phase, where the edge features at every pixel are turned into edginess values. Moreover, we intend to point out the influence of this phase, sometimes ignored, on the final results. The remainder of this article is organized as follows.
Section II introduces the edge detection process after Bezdek et al. [5]. Section III analyzes in depth the role of the blending phase in the literature. Some practical results using quasi-arithmetic means for the blending are shown in Section IV. Finally, some brief conclusions are presented in Section V.

C. Lopez-Molina, J. Fernandez, A. Jurio, M. Galar and M. Pagola are with the Dpto. Automática y Computación, Public University of Navarra, Campus Arrosadia s/n, P.O. Box 31006, Pamplona, Spain (phone: +34 948 169839; email: carlos.lopez@unavarra.es). B. De Baets is with the Dept. of Applied Mathematics, Biometrics and Process Control, Universiteit Gent, Coupure links 653, 9000 Gent, Belgium (email: bernard.debaets@ugent.be).

II. CHARACTERIZATION OF THE EDGE DETECTION PROCESS

Following [5], the processing of an image in order to obtain the edges of its objects can be divided into four sequential phases, each of them represented by a function: conditioning (c), feature extraction (f), blending (b) and scaling (s). Figure 1 depicts the sequence.

[Fig. 1. Edge detection process after Bezdek et al.: Original Image → Conditioning → Enhanced Image → Feature Extraction → Features Image → Blending → Edginess Image → Scaling → Final edges image]

Considering an initial image E, the composition of all the functions produces the edges image G, so that

G = s(b(f(c(E)))) (1)

Each of the phases is characterized by the information it processes and its interpretation. Therefore, in order to compare or combine different edge detection methods, we only have to understand the meaning of the information at every step of the algorithm. The first of the phases, conditioning, consists of preparing the image for edge detection. This might involve denoising, equalizing, smoothing or any other procedure ([3], [19], [21]). This function could even modify the number of values per pixel in the image, as it could include channel merging or splitting [26].
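As a toy illustration (our own sketch, not the authors' implementation), the composition in Eq. (1) can be written directly in code. A quasi-arithmetic mean, M_φ(x1,…,xn) = φ⁻¹((1/n)·Σ φ(xi)) for a continuous, strictly monotone generator φ, plays the role of the blending function b; with φ(x) = x² it reduces to the quadratic mean. All four phase functions below are simple placeholder choices:

```python
import numpy as np

def conditioning(E):
    """c: toy denoising with a 3x3 box filter (edge-replicated borders)."""
    P = np.pad(E.astype(float), 1, mode="edge")
    return sum(P[1 + dy:1 + dy + E.shape[0], 1 + dx:1 + dx + E.shape[1]]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def feature_extraction(S):
    """f: horizontal and vertical absolute differences as edge features."""
    gx = np.zeros_like(S)
    gy = np.zeros_like(S)
    gx[:, :-1] = np.abs(S[:, 1:] - S[:, :-1])
    gy[:-1, :] = np.abs(S[1:, :] - S[:-1, :])
    return gx, gy

def blending(gx, gy, phi=np.square, phi_inv=np.sqrt):
    """b: quasi-arithmetic mean of the two features; with phi(x) = x^2
    this is the quadratic mean sqrt((gx^2 + gy^2) / 2)."""
    return phi_inv((phi(gx) + phi(gy)) / 2.0)

def scaling(B, t=0.1):
    """s: binarize the edginess image with a fixed threshold."""
    return (B > t).astype(np.uint8)

def detect_edges(E):
    """G = s(b(f(c(E)))), as in Eq. (1)."""
    return scaling(blending(*feature_extraction(conditioning(E))))
```

On an 8x8 image with a vertical step between columns 3 and 4, detect_edges marks a band of pixels around the step. Swapping the generator φ (e.g. φ(x) = x for the arithmetic mean) changes the edginess values and hence the final edges, which is exactly the influence of the blending phase studied in this paper.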
The second phase, feature extraction, is the most extensively covered in the literature. It consists of the extraction of information about the changes around each position of the image, i.e. characterizing how the image is changing at each position. Procedures used at this phase include,