International Journal of Computer Applications (0975 – 8887) Volume 119 – No. 21, June 2015

Radially Defined Local Binary Patterns for Facial Expression Recognition

Megha V. Jonnalagedda
Associate Professor, Department of Information Technology, SGGS Institute of Engineering and Technology, Vishnupuri, Nanded, M.S., India

Dharmpal D. Doye
Professor, Department of Electronics & Telecommunication Engineering, SGGS Institute of Engineering and Technology, Vishnupuri, Nanded, M.S., India

ABSTRACT
Automatic Facial Expression Recognition (FER) has attracted the attention of many researchers due to its potential applications. Extraction of proper and sufficient features from the facial image is the most important step for effective FER. Facial images differ from other textural images in that they carry expression-specific information around certain face regions (such as the areas surrounding the eyes, nose and mouth), so efforts need to be made to identify this expression-specific information. Two different approaches are proposed in this paper, taking into consideration the pixel-value variations exhibited in different directions or regions under different expressions. The first technique finds Local Binary Pattern (LBP)-like features, but along radial lines taken at specific angles. The second approach computes similar radial LBPs only over expression-specific areas such as the eyes, nose and mouth. The overall recognition efficiency obtained is comparable to that of the popular LBP technique, while the proposed techniques require less time for feature extraction and recognition and consider smaller regions for feature extraction.

General Terms
Feature Extraction, Local Binary Patterns, Support Vector Machine

Keywords
Radial Local Binary Pattern (RLBP)

1. INTRODUCTION
A human face carries a lot of information about the emotional and mental state of a person. One of the most powerful and natural means for human beings to communicate their emotions, feelings and intentions is facial expression. Facial expressions and other gestures help convey non-verbal communication cues in face-to-face interactions. These cues can also complement speech by helping the listener to elicit the meaning of spoken words [1]. Automatic Facial Expression Recognition (FER) has attracted the attention of many researchers due to its potential applications in areas like Human Computer Interaction (HCI), Sign Language Recognition (SLR), Virtual Reality (VR) systems, etc. Automatic FER could bring facial expressions into man-machine interaction as a new modality and make the integration tighter and more efficient [2]. Research on automatic FER addresses the problem of representing and categorizing the static or dynamic characteristics of deformations of facial components and their spatial relations, or changes in the pigmentation of the face [1].

Extraction of proper and sufficient features from the facial image is the most important step for effective FER. Facial feature extractors should be selected so that the derived set of features minimizes the intra-class differences and maximizes the inter-class variations. Two main types of approaches have been used by researchers for extracting facial features in FER: one is based on face geometry, while the other uses textural information of the facial image. In geometry-based feature extraction techniques, face shape and the locations of facial components are used for defining feature vectors. Various 2D and 3D models and Facial Action Coding Systems (FACS) are used to describe the face structure [3] [4] [5]. The geometry-based models require reliable and accurate feature detection/tracking [6].
They exhibit high recognition efficiency but are time- and memory-demanding. Appearance-based techniques extract and use either holistic or local features for FER. Holistic features are extracted using various techniques such as Gabor filters [7] [8], Principal Component Analysis (PCA) [1] [4], Independent Component Analysis (ICA) [9], Linear Discriminant Analysis (LDA) [10], etc. Local feature extraction approaches predominantly use Local Binary Patterns (LBP) [11] or its variants [12] [13] [14] [15] to describe the texture of the face. Higher-order autocorrelation-like features [16], local PCA [1], local LDA [1], etc. have also been used by researchers besides LBP. Amongst all the feature extraction techniques, the LBP method has become quite popular due to its simplicity, impressive computational efficiency and good texture-discriminative property [17]. As facial images can be differentiated from other textural images in the sense that they exhibit expression-specific information around certain face regions (such as the areas surrounding the eyes, nose, mouth, forehead and chin), efforts need to be made to augment the LBP with such expression-specific information. This paper reports one such effort made to strengthen the LBP so that its suitability for facial expression recognition can be enhanced.

2. REVIEW OF LBP
The original LBP operator was introduced by Ojala et al. [18] for texture description. The LBP operator labels the pixels of an image by thresholding the 3×3 neighbourhood f_p (p = 0, 1, ..., 7) of each pixel against the value of the centre pixel f_c and reading the thresholded results S(f_p − f_c) as a binary number, as shown in figure 1, where:

S(f_p − f_c) = 1 if f_p ≥ f_c, and 0 otherwise.
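The basic operator described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the clockwise neighbour ordering (and hence which neighbour contributes which bit) is an assumption, since the convention only matters in that it must be applied consistently across all pixels.

```python
import numpy as np

def lbp_3x3(image):
    """Basic 3x3 LBP operator in the spirit of Ojala et al.

    Each interior pixel is labelled by thresholding its 8 neighbours
    against the centre value, S(f_p - f_c) = 1 if f_p >= f_c else 0,
    and reading the 8 results as a binary number.
    """
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    # Offsets of the 8 neighbours; clockwise from the top-left pixel
    # (an assumed convention -- any fixed ordering works).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                # Threshold neighbour against centre: S(f_p - f_c)
                if img[i + di, j + dj] >= centre:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes
```

In practice the resulting labels are usually accumulated into a histogram over the image (or over sub-regions), and that histogram serves as the texture feature vector.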