OPTIMIZED ALGORITHM FOR FACE DETECTION INTEGRATING DIFFERENT ILLUMINATING CONDITIONS

Sumaya Abusaleh, Varun Pande and Khaled Elleithy
Department of Computer Science and Engineering
University of Bridgeport
Bridgeport, CT 06604, USA
{abusaleh, vpande, elleithy}@bridgeport.edu

ABSTRACT

Face detection is a significant research topic because it underpins identity recognition in many automated systems. In this paper, we propose a face detection algorithm that detects a single face in an image sequence in a real-time environment by finding unique structural features. The proposed method allows the user to detect the face even when the lighting conditions, pose, and viewpoint vary. Two methods are combined in the proposed approach. First, we use the Y, Cb, and Cr components of the YCbCr color space as threshold conditions to segment the image into luminance and chrominance components. Second, we use the Roberts cross operator [1] to approximate the magnitude of the gradient of the test image and outline the edges of the face. Experimental results show that the proposed algorithm achieves a high detection rate and a low false positive rate.

Index Terms— Face detection, YCbCr color space, Roberts cross operator, Computer Vision, Illumination, Segmentation.

1. INTRODUCTION

Recently, Computer Vision has become one of the fields that has inspired a large number of researchers to develop efficient techniques for programming computers to understand the features in images. Digital image processing technology makes the challenges of automated image interpretation more attractive and interesting. This growing interest can be attributed to useful applications such as medical imaging, video surveillance, video coding, content-based image retrieval, movie post-processing, human-computer interaction (HCI), industrial inspection, and people counting [2] [3] [4]. Vision, as a source of semantic information, is one of the Computer Vision themes: algorithmic methods are used to recognize objects, people, and motions in order to understand the relationships among the different components of the real world. Face detection is the highlight of such automated applications.

It is also discussed in [5] that, after capturing an image with a camera, some processing should be performed on the image to analyze the information on the detected faces in order to extract their features. These features are important to determine the location of the face and to recognize, verify, and track its motion. However, several peculiarities make face detection challenging: faces are non-rigid and have different components, such as color, shape, size, and texture. The variations in illumination intensity and chromaticity in a real-time environment also affect the appearance of the skin color of the face.

Moreover, the face may be occluded by other objects such as glasses, a scarf, or long hair, and faces may partially or fully occlude one another. Furthermore, facial features such as beards and mustaches may affect the appearance of the face. The orientation of the face can also be affected by facial expressions such as smiles, winks, and anger. Two kinds of rotation are also among the major challenges associated with face detection. In-plane rotation is the orientation of the image, such as frontal, upside down, or profile. Out-of-plane rotation is the angular pose of the face relative to the camera's optical axis.

The paper is organized as follows. In Section 2 we overview some related works. Section 3 illustrates the motivation for our research. Section 4 presents our proposed approach. Section 5 shows experimental results and discussions. The conclusion and future work are given in Section 6.
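The two-stage approach summarized in the abstract — chrominance thresholding in the YCbCr color space followed by Roberts cross edge extraction — can be sketched as follows. This is a minimal illustration assuming NumPy; the RGB-to-YCbCr conversion follows the standard ITU-R BT.601 formulas, but the Cb/Cr threshold ranges shown are common literature values for skin segmentation, not necessarily the ones used in this paper.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H x W x 3, uint8) to Y, Cb, Cr planes (ITU-R BT.601)."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Flag skin-colored pixels by thresholding the chrominance components.

    The default ranges are widely cited skin-segmentation values,
    used here only for illustration.
    """
    _, cb, cr = rgb_to_ycbcr(img)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def roberts_cross(gray):
    """Approximate the gradient magnitude with the 2x2 Roberts cross kernels."""
    g = gray.astype(np.float64)
    gx = g[:-1, :-1] - g[1:, 1:]   # kernel [[1, 0], [0, -1]]
    gy = g[:-1, 1:] - g[1:, :-1]   # kernel [[0, 1], [-1, 0]]
    return np.sqrt(gx ** 2 + gy ** 2)
```

In a pipeline of this kind, `skin_mask` would first isolate candidate face regions, and `roberts_cross` applied to the (binarized) mask or the luminance plane would then outline the face boundary for localization.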
2. RELATED WORKS

Detecting faces is considered an indispensable first step, because it is concerned with finding out whether or not there are any faces in a given image. If a face appears, it should be localized and extracted from the background. Many authors have addressed the problem of face detection and developed many approaches to detect faces. Yang et al. [6] surveyed face detection techniques and divided single-image face detection approaches into four categories: knowledge-based methods, feature-invariant approaches, template-matching methods, and appearance-based methods. Yang and Huang [7] utilized a hierarchical knowledge-based method to locate unknown human faces in black-and-white pictures by defining certain rules. The system consists of three levels. In the first level, all the possible candidates