IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 7, NO. 7, JULY 1998

Region Growing: A New Approach

S. A. Hojjatoleslami and J. Kittler

Manuscript received November 5, 1995; revised October 27, 1997. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Patrick A. Kelly. The authors are with the Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, Surrey GU2 5XH, U.K. (e-mail: a.hojjatoleslami@ee.surrey.ac.uk). Publisher Item Identifier S 1057-7149(98)04449-2.

Abstract—A new region growing method for finding the boundaries of blobs is presented. A unique feature of the method is that at each step, at most one pixel exhibits the required properties to join the region. The method uses two novel discontinuity measures, average contrast and peripheral contrast, to control the growing process.

I. INTRODUCTION

The segmentation of regions is an important first step for a variety of image analysis and visualization tasks. There is a wide range of image segmentation techniques in the literature, some considered general purpose and some designed for a specific class of images. Conventional segmentation techniques for monochromatic images can be categorized into two distinct approaches [3]. One is region based, relying on the homogeneity of spatially localized features, whereas the other is based on boundary finding, using discontinuity measures. The two approaches exploit two different definitions of a region, which should ideally yield identical results: homogeneity is the characteristic of a region, and nonhomogeneity, or discontinuity, is the characteristic of the boundary of a region.
Based on one or both of these properties, diverse approaches to image segmentation exhibiting different characteristics have been suggested [1], [2], [4], [8]–[10], [12], [13].

We present here a new idea for region growing by pixel aggregation, which uses new similarity and discontinuity measures. A unique feature of the proposed approach is that at each step at most one candidate pixel exhibits the required properties to join the region. This makes the direction of the growing process more predictable. The procedure offers a framework in which any suitable measurement can be applied to define a required characteristic of the segmented region. We use two discontinuity measures, called average contrast and peripheral contrast, to control the growing process. Local maxima of these two measures identify two nested regions, called the average contrast and the peripheral contrast regions. The method first finds the average contrast boundary of a region; a reverse test is then applied to produce the peripheral contrast boundary.

Like existing procedures, the method proposed in this paper is not universal, but it does appear to have fairly wide application potential, especially in medical image analysis, where the areas corresponding to a tissue of interest appear as bright/dark objects relative to the surrounding tissues.

The concept of the method is presented in the next two sections. The similarity measure used by the method is presented in Section II. Section III introduces the two discontinuity measures, peripheral contrast and average contrast, and illustrates their behavior on a Gaussian shape image. The capability of our method is then demonstrated on a set of real images in Section IV, followed by a summary and conclusions in Section V.
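The two discontinuity measures are defined precisely only later, in Section III. As a rough illustration, the following sketch assumes plausible definitions suggested by their names: the average contrast compares the mean grey level of the grown region against its current (external) boundary, and the peripheral contrast compares the region's internal boundary against that same external boundary. These assumed definitions, and the 4-connectivity choice, are not taken from this section.

```python
import numpy as np

def contrasts(image, region_mask):
    """Hedged sketch of two candidate discontinuity measures for a region.
    Assumptions (not given in this section of the paper):
      - current boundary (CB): pixels outside the region, 4-connected to it;
      - internal boundary (IB): region pixels 4-connected to the outside;
      - average contrast   = mean(region) - mean(CB);
      - peripheral contrast = mean(IB)    - mean(CB).
    """
    img = np.asarray(image, dtype=float)
    r = np.asarray(region_mask, dtype=bool)
    # Hand-rolled 4-connected dilation: any pixel with a region neighbor.
    # Note np.roll wraps at the border -- adequate for an interior region.
    near = np.zeros_like(r)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        near |= np.roll(r, (dy, dx), axis=(0, 1))
    current_boundary = near & ~r        # outside pixels touching the region
    # Hand-rolled 4-connected erosion: pixels whose 4 neighbors are all inside.
    inner = np.ones_like(r)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        inner &= np.roll(r, (dy, dx), axis=(0, 1))
    internal_boundary = r & ~inner      # region pixels touching the outside
    avg_contrast = img[r].mean() - img[current_boundary].mean()
    per_contrast = img[internal_boundary].mean() - img[current_boundary].mean()
    return avg_contrast, per_contrast
```

For a bright blob on a dark background, both measures peak when the growing front reaches the blob's edge, which is consistent with the text's claim that their local maxima identify two nested regions.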
II. GROWING PROCESS

The concept of our method, like that of other region growing methods based on pixel aggregation, is to start with a point that meets a detection criterion and to grow it in all directions to extend the region. Assume the process starts from an arbitrary pixel. The pixel is labeled as a region, which then grows according to a similarity measure. In our approach, a boundary pixel joins the current region provided it has the highest grey level among the neighbors of the region. This induces a directional growing in which pixels of high grey level are absorbed first. When all the high grey level pixels in the region are absorbed, the process continues by absorbing the boundary pixels with monotonically lower grey levels. When several pixels with the same grey level jointly become candidates to join the region, a first-come first-served strategy is used to select one of them. This makes the region more compact, particularly in situations where the grey levels of the background or the region pixels are very homogeneous.

In order to monitor the pixels joining the region, a grey-level mapping is generated. The mapping is very similar to the mapping of data points from a high-dimensional feature space onto a sequence which is used in the mode separating (MODESP) procedure for cluster analysis proposed by Kittler [7]. The mapping for a small