I.J. Modern Education and Computer Science, 2012, 1, 12-18
Published Online February 2012 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijmecs.2012.01.02
Perceived Gender Classification from Face
Images
Hlaing Htake Khaung Tin
University of Computer Studies, Yangon, Myanmar
hlainghtakekhaungtin@gmail.com
Abstract—Perceiving human faces and modeling the
distinctive features of human faces that contribute most
to face recognition are among the challenges faced by
computer vision and psychophysics researchers. Many
methods have been proposed in the literature for facial
feature extraction and gender classification. However,
they still have disadvantages, such as an incomplete
reflection of face structure and face texture. The feature
set is applied to three different applications: face
recognition, facial expression recognition, and gender
classification, and produced reasonable results on all
databases. This paper describes two phases: a feature
extraction phase and a classification phase. The proposed
system produced very promising recognition rates for our
applications with the same set of features and classifiers.
The system is also real-time capable and automatic.
Index Terms—Face Recognition, Facial Expression, Gender
Classification, Feature Extraction, Eigenfaces.
I. INTRODUCTION
In the last several years, various feature extraction and
pattern classification methods have been developed for
gender classification. Emerging applications of computer
vision and pattern recognition in mobile devices and
networked computing require the development of
resource-limited algorithms. Perceived gender
classification is a research topic with a high application
potential in areas such as surveillance, face recognition,
video indexing, and dynamic marketing surveys.
Moghaddam and Yang [1] introduced the best gender
recognition algorithm in terms of reported classification
rate. They adopted an appearance-based approach with a
classifier based on a Support Vector Machine with Radial
Basis Function kernel (SVM+RBF) [1]. They reported a
96.6 percent recognition rate for classifying 1,775 images
from the FERET database using automatically aligned
and cropped images and fivefold cross-validation.
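The appearance-based SVM+RBF protocol with fivefold cross-validation can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic stand-in data; the image dimensions, kernel parameters, and random labels are assumptions, not the FERET images or the settings of [1]:

```python
# Sketch of an appearance-based SVM+RBF gender classifier evaluated with
# fivefold cross-validation, in the spirit of Moghaddam and Yang [1].
# The synthetic arrays below stand in for aligned, cropped face images.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for flattened 21x12 face thumbnails: 200 samples, 252 pixels.
X = rng.normal(size=(200, 252))
y = rng.integers(0, 2, size=200)           # 0 = female, 1 = male (arbitrary)
X[y == 1] += 0.5                           # inject a weak class signal

# Appearance-based pipeline: raw pixel vectors -> scaling -> RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))

scores = cross_val_score(clf, X, y, cv=5)  # fivefold cross-validation
print(f"mean CV accuracy: {scores.mean():.3f}")
```

With real face data the pixels would come from automatically aligned and cropped images; only the pipeline structure is the point here.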
Previous simulations by Fleming and Cottrell [2],
using masking of bottom and top face areas, with a more
computationally demanding nonlinear approach, were
not strikingly accurate for sex classification under either
masking condition, but showed better performance on the
top portion of the face. When the top of the face was
masked the model was 29% correct, and when the bottom
of the face was masked the model was 55% correct.
However, this relatively low performance could be
attributed, at least partially, to the great variation of
training stimuli, which included non-face images. One
possible reason for the difference found between top and
bottom conditions is that the Fleming and Cottrell stimuli
included the hair and thus provided more variation in the
lower portion of the image, particularly for female faces.
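The top/bottom masking manipulation used in these experiments can be sketched as a simple array operation. The half-image split, image shape, and function name below are assumptions chosen for illustration:

```python
# Sketch of the masking manipulation described above: zeroing out the
# upper or lower half of a face image before it is shown to a classifier.
import numpy as np

def mask_region(img, region):
    """Return a copy of img with the given half set to zero."""
    out = img.copy()
    h = img.shape[0] // 2
    if region == "top":
        out[:h, :] = 0
    elif region == "bottom":
        out[h:, :] = 0
    else:
        raise ValueError("region must be 'top' or 'bottom'")
    return out

face = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for a face image
top_masked = mask_region(face, "top")
bottom_masked = mask_region(face, "bottom")

# The unmasked half is left untouched.
assert (top_masked[4:] == face[4:]).all()
assert (bottom_masked[:4] == face[:4]).all()
```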
Previous studies of facial area contribution to sex
classification by human subjects from photographic
images have used several approaches: presenting features
(or a combination of features) in isolation [3,4], masking
features [5,4], and replacing features within a full image
[3,6]. In some cases, the studies have used individual
photographic images, and in other cases, male and female
prototypes have been created using various averaging
techniques. These studies have produced varying results.
Differences obtained between tests of features in isolation
and substitution of features have been attributed to the
role of configuration in facial tasks [3,4]. For example,
although the nose alone provides little information,
masking it diminishes the total amount of configural
information perceived. In general, these studies indicate
that the isolated areas contributing the most to sex
classification are: the eye region (particularly the
eyebrows), and the face outline (particularly the jaw).
Human facial image processing has been an active and
interesting research issue for years. Since human faces
provide a great deal of information, many topics have
drawn considerable attention and thus have been studied
intensively. The most prominent of these is face
recognition [7]. Other research topics include predicting
feature faces [8] and reconstructing faces from some
prescribed features [9].
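The face-recognition work mentioned here, like the eigenfaces listed in the index terms, commonly rests on PCA-based feature extraction. A minimal sketch on synthetic data follows; the image dimensions and component count are arbitrary assumptions:

```python
# Sketch of eigenface-style feature extraction: principal components of a
# matrix of flattened face images. Data here is random and illustrative.
import numpy as np

rng = np.random.default_rng(1)
faces = rng.normal(size=(50, 16 * 16))        # 50 flattened 16x16 "faces"

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces are the principal components of the centered image matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                          # keep the top 10 components

# Project a face into the low-dimensional eigenface space.
weights = eigenfaces @ (faces[0] - mean_face)
print(weights.shape)  # (10,)
```

Recognition then reduces to comparing these low-dimensional weight vectors rather than raw pixels.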
Gender classification is an important visual task for
human beings: many social interactions critically depend
on correct gender perception. As visual surveillance and
human-computer interaction technologies evolve,
computer vision systems for gender classification will
play an increasingly important role in our lives [10].
Gender classification is arguably one of the more
important visual tasks for an extremely social animal like
the human: many social interactions critically depend on
the correct gender perception of the parties involved.
Arguably, visual information from human faces provides
one of the more important sources of information for
gender classification. It is not surprising, then, that a very
large number of psychophysical studies have investigated
gender classification from face perception in humans [11].
The usual assumptions (behind inductive learning)
may not hold for many applications. For example, if the
input values of the test samples are known (given), then
an appropriate goal of learning may be to predict outputs