Gender Classification Using Facial Images and Basis Pursuit

Rahman Khorsandi and Mohamed Abdel-Mottaleb
Department of Electrical and Computer Engineering, University of Miami
R.khorsandi@umiami.edu, Mottaleb@miami.edu

Abstract. In many social interactions, it is important to correctly recognize a person's gender. Research has addressed this task using facial images, ear images, and gait. In this paper, we present an approach for gender classification from facial images based on sparse representation and basis pursuit. In sparse representation, the training data are used to build a dictionary from extracted features, and classification is achieved by representing the extracted features of the test data in terms of that dictionary. For this purpose, basis pursuit is used to find the best representation by minimizing the l1 norm. In this work, Gabor filters are used for feature extraction. Experiments are conducted on the FERET data set, and the results are compared with other work in this area; they show an improvement in gender classification over existing methods.

Keywords: Gender Classification, Basis Pursuit, Sparse Representation, Facial Images, Gabor Wavelets.

1 Introduction

Gender classification is an important task in social activities and communications. Automatically identifying gender is useful for many applications, e.g., security surveillance [4] and gathering statistics about customers in places such as movie theaters, building entrances, and restaurants [3]. Automatic gender classification is performed based on facial features [8], voice [10], or body movement and gait [23]. Most of the published work on gender classification is based on facial images. Moghaddam et al. [16] used Support Vector Machines (SVMs) for gender classification from facial images; they used low-resolution thumbnail face images (21 × 12 pixels). Wu et al. [21] presented a real-time gender classification system using a Look-Up-Table AdaBoost algorithm.
They extracted demographic information from human faces. Golomb et al. [8] developed a neural-network-based gender identification system; they used face images with a resolution of 30 × 30 pixels from 45 males and 45 females to train a fully connected two-layer neural network, SEXNET. Cottrell and Metcalfe [6] used neural networks for emotion and gender classification from facial images. Gutta and Wechsler [9] used hybrid classifiers for gender identification from facial images; their approach consists of an ensemble of RBF neural networks and inductive decision trees. Yu et al. [23] presented a study of gender classification based on human gait, using model-based gait features such as height, frequency, and the angle between the thighs. Face-based gender classification is still an attractive research area, and there is room for developing novel algorithms that are

R. Wilson et al. (Eds.): CAIP 2013, Part I, LNCS 8047, pp. 294–301, 2013.
© Springer-Verlag Berlin Heidelberg 2013
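The classification scheme summarized in the abstract (solve basis pursuit, i.e. minimize the l1 norm of the representation coefficients subject to the dictionary reproducing the test sample, then assign the class whose training atoms give the smallest reconstruction residual) can be sketched in a few lines of Python. This is an illustrative sketch on toy data, not the authors' implementation: the function names are invented, the dictionary columns stand in for the paper's Gabor features, and the linear-programming reformulation solved with scipy.optimize.linprog is one standard way to compute basis pursuit, assumed here for concreteness.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to A x = b via the standard linear-program
    reformulation x = u - v with u, v >= 0."""
    n = A.shape[1]
    c = np.ones(2 * n)                         # objective sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

def classify(A, labels, y):
    """Sparse-representation classification: solve basis pursuit for y, then
    keep each class's coefficients in turn and pick the class whose atoms
    reconstruct y with the smallest residual."""
    x = basis_pursuit(A, y)
    best_label, best_residual = None, np.inf
    for cls in set(labels):
        x_cls = np.where(np.array(labels) == cls, x, 0.0)  # class-cls coefficients only
        residual = np.linalg.norm(y - A @ x_cls)
        if residual < best_residual:
            best_label, best_residual = cls, residual
    return best_label

# Toy dictionary: columns are normalized training feature vectors
# (in the paper these would be Gabor features of FERET face images).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 8))
A /= np.linalg.norm(A, axis=0)
labels = ['male'] * 4 + ['female'] * 4
y = A[:, 1]                                    # test sample equal to a 'male' atom
print(classify(A, labels, y))                  # expected: male
```

Because the test sample coincides with a single dictionary atom, the l1 minimizer is the one-sparse coefficient vector selecting that atom, so the residual for the correct class is essentially zero; this is the behavior the paper exploits when a test face is well approximated by training faces of the same gender.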