Facial Expression Classification Using RBF and Back-Propagation Neural Networks

R.Q. Feitosa 1,2, M.M.B. Vellasco 1,2, D.T. Oliveira 1, D.V. Andrade 1, S.A.R.S. Maffra 1
1 – Catholic University of Rio de Janeiro, Brazil, Department of Electrical Engineering
2 – State University of Rio de Janeiro, Brazil, Department of Computer Engineering
e-mail: [raul, marley]@ele.puc-rio.br, tuler@inf.puc-rio.br, [diogo, sam]@tecgraf.puc-rio.br

ABSTRACT

This article presents a facial expression classification experiment using neural networks. The classification system is based on attributes extracted from images of human faces using the principal component analysis (PCA) technique. Well-framed images were used in order to simplify face detection. Two different neural network models were applied as classifiers: back-propagation and RBF networks. In the performance evaluation experiments the networks achieved recognition rates of 71.8% and 73.2% for back-propagation and RBF respectively, which is consistent with the best results reported in the literature for the same database and testing paradigm. An analysis of the confusion matrix suggests that combining both networks could yield better performance.

Keywords: PCA, facial expression recognition, neural networks

1 INTRODUCTION

The interest in systems for the automatic recognition of facial expressions has recently increased. Such systems are clearly relevant in studies of human behavior, since facial expressions are a manifestation of human emotions. Facial expressions also play an important role in non-verbal communication among human beings. Studies indicate that the role of facial expressions often surpasses that of the actual words [1]. This has awakened the interest of many computer vision researchers, who are trying to develop more effective techniques for human-computer interaction.
Any automatic system for the recognition of facial expressions must deal with three basic problems: detection of the human face in a generic image, extraction of relevant attributes from the facial image, and finally the classification itself. Locating a face in a generic image is not an easy task, and it continues to challenge researchers. Once detected, the image region containing the face is extracted and geometrically normalized, usually maintaining a constant inter-ocular distance. References to detection methods using neural networks and statistical approaches can be found in [2] and [3]. This paper does not tackle the problem of face detection: all of the experiments presented in the next sections used well-framed face images as input.

The second problem concerns the selection of a set of attributes that can appropriately represent the emotions expressed in the images. Among the approaches proposed for attribute selection [4], the Principal Component Analysis (PCA) algorithm has been frequently used [5].

Regarding the third problem, neural networks have been successfully used as classifiers in face recognition systems (as in [6], [7] and [8]). In Rosenblum et al., geometrical characteristics were extracted from sequences of images and fed to an RBF (Radial Basis Function) neural network acting as a classifier.

This paper evaluates the performance of two neural network algorithms for automatic facial expression recognition: Back-Propagation and RBF neural networks [9]. Unlike [6], the system proposed here uses well-framed, static images obtained by a semi-automatic method. Instead of geometrical attributes, principal component analysis is applied to generate the vector of relevant attributes. Many experiments have been carried out in order to evaluate the performance of the proposed system. The remainder of this paper is organized as follows.
Section 2 describes the proposed system, presenting a brief description of the PCA technique and of the neural networks. Section 3 describes the experiments that were performed. The results are then shown in section 4, which is followed by the conclusions in section 5.

2 METHODOLOGY

2.1 System's General Architecture

The automatic system proposed for the recognition of facial expressions is composed of three stages: detection, extraction of attributes and classification, as shown in figure 1. The first stage is performed by a semi-automatic method. The extraction of attributes is performed using the Principal Component Analysis algorithm, as described in section 2.2. In the classification stage, two neural network models were used: Back-Propagation (BP) and Radial Basis Functions (RBF) [9].

Figure 1: General Architecture of the Facial Expression Recognition System. [Image → Detection → Extraction of Attributes → Classification (Neural Network) → Class]

2.2 Using PCA for the Extraction of Attributes

The system presented in this work explores the concept of eigenfaces, proposed originally in [10] and extended in [11], [12], [13], [14] and [15]. An image having n = N×M pixels can be seen as a point in an n-dimensional space. Principal Component Analysis (PCA) identifies the orthonormal base
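The attribute-extraction step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the function name `pca_attributes`, the synthetic input data, and the choice of SVD to obtain the principal components are all assumptions made for the example; the paper's actual image sizes and number of components are not specified here.

```python
import numpy as np

def pca_attributes(images, k):
    """Illustrative PCA attribute extraction (eigenfaces).

    `images` has shape (num_images, n), each row a flattened face
    of n = N*M pixels. Returns the k-dimensional attribute vectors,
    the k eigenfaces, and the mean face.
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data: the rows of vt form an orthonormal
    # basis of the face space, ordered by decreasing variance.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                   # shape (k, n)
    attributes = centered @ eigenfaces.T  # shape (num_images, k)
    return attributes, eigenfaces, mean_face

# Example with 20 synthetic 16x16 "faces" reduced to 5 attributes.
rng = np.random.default_rng(0)
faces = rng.random((20, 16 * 16))
attrs, basis, mean_face = pca_attributes(faces, k=5)
print(attrs.shape)  # (20, 5)
```

The resulting low-dimensional attribute vectors, rather than the raw pixel values, are what would be fed to the neural network classifiers of the next stage.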