FACE EXPRESSION RECOGNITION USING AR-BURG MODEL AND NEURAL NETWORK CLASSIFIER

M. Saaidia 1, A. Gattal 1, M. Maamri 1 and M. Ramdani 2
1 Dept. of Electrical Engineering, University of Tebessa, Algeria.
2 University of Annaba, Algeria.
1 {msaaidia, a.gattal, m.maamri}@mail.univ-tebessa.dz, 2 mes_ramdani@yahoo.com

ABSTRACT
A neural network classification method is used in this work to perform facial expression recognition. The expressions processed were the six most pertinent facial expressions plus the neutral one. The operation was implemented in three steps. First, a neural network trained on Zernike moments was applied to images from the well-known Yale and JAFFE databases to perform face detection. In the second step, autoregressive (AR) modeling using a 2D Burg filter was used for facial parameterization. In the last step, a neural network trained on a subset of the AR models was applied to the remaining image models to assess the method's performance.

KEY WORDS
Image processing, facial expression recognition, face detection, autoregressive modeling, neural networks.

1 INTRODUCTION
The initial works on the human facial expression phenomenon were carried out by psychologists, who studied its individual and social importance. They showed that it plays an essential role in coordinating human conversation [1] through the multitude of information it carries. Moreover, Mehrabian [2] found that, while the text content of a message accounts for only 7% of its overall impact and the intonation of the speaker's voice contributes 38%, facial expressions carry the largest part of the message's information, i.e. 55%. The recognition of any facial expression is linked to several semantic notions that make the problem difficult to manage, given the relativism it generates in terms of the solutions found. It is therefore important to distinguish between "expression" and "emotion".
Indeed, the latter term is only a semantic interpretation of the former, as "happy" is of "smile". A facial expression may or may not be the result of an emotion (a simulated expression, for example). Thus, a facial expression is a physiological activity of one or more parts of the face (eyes, nose, mouth, eyebrows, ...), while an emotion is our semantic interpretation of this activity. However, given the difficulties still encountered in this area, we can ignore this distinction. Significant advances in several related areas, such as image processing, pattern recognition, and face detection and recognition, have brought the study of this phenomenon out of the field of human psychology and into applied sciences such as analysis, classification, synthesis, and even expressive animation control [3]. The works conducted to date are mostly oriented toward the study and classification of the six so-called basic (universally recognized) facial expressions: smile, disgust, fear, surprise, anger and sadness. The multitude of methods developed can be classified according to either the parameterization step of the recognition process or the classification step [4]. According to the first criterion, methods are "motion-extraction based" [5], [6] or "deformation-extraction based" [7], [8]. According to the classification step, methods can be "spatial" [9], [10] or "spatiotemporal" [11], [12]. The method proposed here is a spatial, model-based, motion-extraction one. Section 2 of this manuscript introduces the face detection method and the modeling method used to perform parameterization. In Section 3 we present the neural network classifier and the procedure followed. Section 4 contains the experiments carried out and the results obtained. Conclusions are given in Section 5.
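To make the parameterization idea concrete before the detailed presentation, the sketch below implements the classical one-dimensional Burg lattice recursion for AR coefficient estimation. This is only an illustration of the underlying principle, not the paper's actual 2D Burg filter: the 2D extension applied to face images, as well as the synthetic AR(1) signal used here to exercise the routine, are assumptions of this sketch.

```python
import numpy as np

def burg_ar(x, order):
    """Estimate AR coefficients of a 1D signal with Burg's lattice recursion.

    Returns (a, e): a[0] = 1 and a[1:] are the AR polynomial coefficients,
    e is the final prediction-error power.
    """
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    e = np.dot(x, x) / len(x)            # zero-order prediction-error power
    f, b = x[1:].copy(), x[:-1].copy()   # forward / backward error sequences
    for _ in range(order):
        # reflection coefficient minimizing forward + backward error power
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        # Levinson-style update: a_m(i) = a_{m-1}(i) + k * a_{m-1}(m - i)
        ap = np.concatenate([a, [0.0]])
        a = ap + k * ap[::-1]
        e *= (1.0 - k * k)
        # update the error sequences and re-align them for the next order
        fn, bn = f + k * b, b + k * f
        f, b = fn[1:], bn[:-1]
    return a, e

# Illustrative use (hypothetical data): recover the pole of a
# synthetic AR(1) process x[t] = 0.5 * x[t-1] + white noise.
rng = np.random.default_rng(0)
n = 4000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
a, e = burg_ar(x, order=2)
```

For the paper's setting, the same minimization of forward and backward prediction errors is carried out over a 2D pixel neighborhood, and the resulting coefficient vectors serve as the features fed to the neural network classifier.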