Emotion Recognition through Voting on Expressions in Multiple Facial Regions

Ekanshi Agrawal 1, Jabez Christopher 1 a and Vasan Arunachalam 2
1 Department of Computer Science and Information Systems, BITS Pilani, Hyderabad Campus, Telangana, India
2 Department of Civil Engineering, BITS Pilani, Hyderabad Campus, Telangana, India

Keywords: Facial Expression Recognition, Emotion Classification, Periocular Region, Machine Learning.

Abstract: Facial expressions are a key part of human behaviour, and a way to express oneself and communicate with others. Multiple groups of muscles, belonging to different parts of the face, work together to form an expression. The emotion expressed by the region around the eyes and that expressed by the region around the mouth may not agree with each other, yet may agree with the overall expression when the entire face is considered. In such a case, it would be unwise to focus on only one region of the face. This study evaluates expressions in three regions of the face (eyes, mouth, and the entire face) and records the expression reported by the majority. The data consists of images labelled with intensities of Action Units in the three regions for eight expressions. Six classifiers are used to determine the expression in the images. Each classifier is trained on all three regions separately, and is then used to predict an emotion label for each of the three regions of a test image. The image is finally labelled with the emotion present in at least two (a majority) of the three regions. Performance is averaged over five stratified train-test splits. The Gradient Boost Classifier performs best, with an average accuracy of 94%, followed closely by the Random Forest Classifier at 92%.
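The region-wise majority vote described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; in particular, the fallback to the whole-face label when all three regions disagree is our assumption, since the abstract specifies only the two-of-three majority case.

```python
from collections import Counter

def vote_emotion(eyes_label, mouth_label, face_label):
    """Majority vote over the three region-level predictions.

    If at least two regions agree, that emotion labels the image.
    When all three regions disagree, fall back to the whole-face
    label (an assumed tie-break; the paper specifies only the
    majority case).
    """
    votes = Counter([eyes_label, mouth_label, face_label])
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else face_label

# Two of the three regions report "happy", so the image is labelled "happy".
print(vote_emotion("happy", "neutral", "happy"))
```

In practice each of the three labels would come from a classifier trained on that region's Action Unit intensities; only the final voting step is shown here.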
The results and findings of this study will prove helpful in situations where faces are only partially visible and/or certain parts of the face are not captured clearly.

a https://orcid.org/0000-0001-6744-9329

1 INTRODUCTION

Emotion analysis is a technique used by researchers to develop systems that attempt to quantify the emotion conveyed by an audience. It is also used to judge emotional engagement in various situations. Judging audience feedback in seminars and lectures generally relies on facial expressions and only seldom on gestural forms of communication. Emotion recognition is also used in systems that judge an observer's stance on some target topic or event (Küçük and Can, 2020). This contributes significantly to advertising campaigns, political manifestos, product testing, and political alignment testing, among other uses. These systems take emotional feedback through surveillance videos, through written or video feedback from the observers, or by analysing public social media posts concerning the event for which the feedback is being collected. Facial emotion recognition systems are used by many products to attend to the user's innermost feelings and to use that information to improve the interaction between the user and the product.

Any human facial expression is formed using multiple parts of the face. Since the facial muscles lie in close proximity to each other, and are often connected, groups of muscles move together to form even the slightest expression. Humans are inherently wired to recognize expressions by analysing the various regions of the face and the possible expressions being displayed. These expressions need not be linked to just one emotion; a mix of emotions is often expressed. In most cases, however, one highlighted emotion stands out as the major one. People are thus able to recognize not only the highlighted emotion but also hints of other emotions.
However, emotions are a subjective topic in such a scenario, since every human perceives expressions differently. Thus, one

Agrawal, E., Christopher, J. and Arunachalam, V.
Emotion Recognition through Voting on Expressions in Multiple Facial Regions.
DOI: 10.5220/0010306810381045
In Proceedings of the 13th International Conference on Agents and Artificial Intelligence (ICAART 2021) - Volume 2, pages 1038-1045
ISBN: 978-989-758-484-8
Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved