IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 21, NO. 6, JUNE 1999 557
Face Detection From Color Images Using a
Fuzzy Pattern Matching Method
Haiyuan Wu, Qian Chen, and Masahiko Yachida
Abstract—This paper describes a new method to detect faces in
color images based on fuzzy theory. We build two fuzzy models to
describe skin color and hair color, respectively. In these models, we
use a perceptually uniform color space to describe the color
information, which increases the accuracy and stability of the
method. We use the two models to extract the skin-color regions and
the hair-color regions, and then compare them with prebuilt
head-shape models using a fuzzy-theory-based pattern-matching
method to detect face candidates.
Index Terms—Face detection, fuzzy pattern matching, perceptually
uniform color space, skin color similarity, hair color similarity, head
shape model.
1 INTRODUCTION
FACE detection from images is a key problem in human-computer
interaction and pattern recognition research. It is also an essential
step in face recognition. Many studies on automatic face detection
have been reported recently. Most of them concentrate on
quasi-frontal-view faces [3], [4], [5], [6], [7]. This is because
prior knowledge of the geometric relations in the facial topology of
frontal-view faces helps the detection of facial features, and it
also makes face modeling with a generic pattern possible. However,
the quasi-frontal-view assumption limits the kinds of faces that can
be processed.
A representative paradigm detects faces in two steps:
1) locating the face region [4], [9], [6], or assuming that the lo-
cation of the face is known [3], [5], [6], [7], and
2) detecting the facial features in the face region based on edge
detection, image segmentation, and template matching or
active contour techniques.
One disadvantage of step 1 is that the face-location algorithms are
not powerful enough to find all possible face regions while keeping
the false positive rate low. Another disadvantage is that the
facial-feature-based approaches rely on the performance of feature
detectors. For small faces or low-quality images, the proposed
feature detectors are unlikely to perform well.
Another paradigm is the visual learning or neural network ap-
proach [8], [10], [16], [11], [14]. Although the reported performance
is quite good, and some of these methods can detect nonfrontal faces,
approaches in this paradigm are extremely computationally expen-
sive. A relatively traditional approach to face detection is template
matching and its derivations [15], [12], [13]. Some of these can de-
tect nonfrontal faces. This approach uses a small image or a simple
pattern that represents the average face as the face model. It does
not perform well for cluttered scenes. Face detection based on
deformable shape models has also been reported [17]. Although this
method is designed to cope with variation in face pose, it is not
suitable for generic face detection because of its high computational
cost.
This paper describes a new face detection algorithm that can
detect faces of different sizes and in various poses from both in-
door and outdoor scenes. The goal of this research is to detect all
regions that may contain faces while keeping the false positive
rate low. We first develop a powerful skin-color detector based
on color analysis and fuzzy theory, whose performance is much
better than that of existing skin-region detectors. We also de-
velop a hair-color detector, which makes it possible to use the
hair part as well as the skin part in face detection. We design
multiple head-shape models to cope with variation in head pose.
We propose a fuzzy-theory-based pattern-matching technique, and
use it to detect face candidates by finding patterns similar to the
prebuilt head-shape models in the extracted skin and hair regions.
2 DETECTING SKIN REGIONS AND HAIR REGIONS
2.1 Perceptually Uniform Color Space
The terms skin color and hair color are subjective human con-
cepts. Because of this, the color representation should be close
to the color sensitivity of the human eye, so that the output is
as stable as that of the human visual system. Such a color
representation is called a perceptually uniform color system, or
UCS. Many researchers have proposed conversion methods from the
Commission Internationale de l'Éclairage (CIE) XYZ color system
to a UCS. Among them, the L*u*v* and L*a*b* color representations
were proposed by G. Wyszecki. Although they are simple and easy
to use, both are only rough approximations of a UCS. The
psychologist Farnsworth proposed a better UCS through
psychophysical experiments in 1957 [2]. In this color system, the
MacAdam ellipses that describe the just-noticeable chromatic
difference become circles of approximately the same radius (see
Fig. 1). This means that pairs of colors that human viewers
perceive as equally different are projected to equal distances in
this color system, which is exactly the property we want.
We first convert the RGB color information in images to CIE's
XYZ color system:

    X = 0.619R + 0.177G + 0.204B
    Y = 0.299R + 0.586G + 0.115B
    Z = 0.000R + 0.056G + 0.944B
                                                 (1)
    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)
where Y carries the luminance information, and (x, y) describe the
chromaticity. Then we convert the chromaticity (x, y) to Farns-
worth's UCS with a nonlinear transformation.1 The result of this
conversion is represented by a tuple (u_f, v_f). The values of
(u_f, v_f) for all visible colors lie in the range:

    0 ≤ u_f ≤ 91,  0 ≤ v_f ≤ 139
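As a concrete illustration, the RGB-to-chromaticity step of Eq. (1) can be sketched as follows; the function name is ours, and the subsequent nonlinear Farnsworth transformation (for which the authors provide a C program, see the footnote below) is not reproduced here.

```python
def rgb_to_chromaticity(r, g, b):
    """Convert an RGB triple to CIE XYZ tristimulus values and then to
    (x, y) chromaticity, using the coefficients of Eq. (1)."""
    X = 0.619 * r + 0.177 * g + 0.204 * b
    Y = 0.299 * r + 0.586 * g + 0.115 * b
    Z = 0.000 * r + 0.056 * g + 0.944 * b
    total = X + Y + Z
    if total == 0.0:
        # Black pixel: chromaticity is undefined; return the origin.
        return 0.0, 0.0
    return X / total, Y / total
```

Note that each row of the matrix in Eq. (1) sums to one, so an achromatic input such as R = G = B = 1 maps to the equal-energy chromaticity point (1/3, 1/3).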
2.2 Skin Color Distribution Model
In conventional methods, all visible colors are divided into two
groups: One is the “skin color,” and the other is not. However, con-
1. A C program to perform this conversion can be found at:
http://www.sys.wakayama-u.ac.jp/~chen/ucs.html .
0162-8828/99/$10.00 © 1999 IEEE
H. Wu is with the Department of Mechanical and System Engineering,
Kyoto Institute of Technology, Matsugasaki, Sakyo-ku, Kyoto 606-8585,
Japan. E-mail: wuhy@ipc.kit.ac.jp.
Q. Chen is with the Department of Design and Information Sciences,
Faculty of Systems Engineering, Wakayama University, 930 Sakaedani,
Wakayama 640-8510, Japan. E-mail: chen@sys.wakayama-u.ac.jp.
M. Yachida is with the Department of Systems and Human Science,
Graduate School of Engineering Science, Osaka University, 1-3
Machikaneyama-cho, Osaka 560-8531, Japan.
E-mail: yachida@sys.es.osaka-u.ac.jp.
Manuscript received 16 July 1997; revised 2 Mar. 1999. Recommended for
acceptance by D. Kriegman.
For information on obtaining reprints of this article, please send
e-mail to: tpami@computer.org, and reference IEEECS Log Number 107728.