A CNN based facial expression recognizer

B. Sai Mani Teja, C.S. Anita*, D. Rajalakshmi, M.A. Berlin
Department of Computer Science and Engineering, R.M.D. Engineering College, Kavaraipettai, Tamil Nadu 601206, India

Article history: Received 11 August 2020; Accepted 19 August 2020; Available online xxxx

Keywords: Facial expression recognition; Artificial intelligence; Deep learning; Convolutional neural networks; Algorithms

Abstract: Facial expression recognition (FER) has gained traction among many researchers in the field of artificial intelligence. Existing models for facial expression recognition are built with conventional machine learning, and the accuracy and efficiency they achieve are still the subject of extensive research. The proposed work uses Convolutional Neural Network (CNN) deep learning models with sufficient computational power to run the algorithms. This model is able to achieve good accuracy even on new datasets. Our experimental results achieved an accuracy of 57% on a five-class classification task.

© 2020 Elsevier Ltd. All rights reserved. Selection and peer-review under responsibility of the scientific committee of the International Conference on Newer Trends and Innovation in Mechanical Engineering: Materials Science.

1. Introduction

Human beings are good at identifying the emotions of other humans by looking at their facial expressions. Our goal is to answer the question, "What if a computer could mimic the human brain and identify those emotions?" To do so, it must be able to perform expression classification, which deep learning models make possible. In 1971, in a research paper titled "Constants Across Cultures in the Face and Emotion", Ekman et al. identified six facial expressions that are universal across all cultures: anger, disgust, fear, happiness, sadness, and surprise [3].
Many researchers have used computer vision techniques to identify emotions on the face. The Kaggle competition [8] on FER added a new expression, Neutral, representing no emotion, to its dataset. Deep learning techniques have gained popularity in recent times, and as computers have gained more computational power, algorithms based on CNN models have an upper hand in facial expression classification. This gives an overview of our proposed work. On the well-known EmotiW dataset, an accuracy of 60% has been achieved with the help of deep learning techniques.

The objective of our proposed work is to identify the emotions in the human face. The native FER models currently in use are unable to perform facial expression recognition on new datasets, but with the growth of deep learning models the possibility of performing FER on new datasets has increased tremendously. Our aim is to use those deep learning techniques to achieve better accuracy.

2. Related works

Yu and Zhang achieved state-of-the-art results on the EmotiW dataset in 2015 using CNNs to perform FER [5]. They used a group of CNNs with five convolutional layers each. Among the insights from their paper was that randomly shuffling the input images yielded a 2–3% increase in accuracy. Specifically, Yu and Zhang applied transformations to the input images during training, while during testing their model generated predictions for multiple perturbations of each test example and voted on the class label to produce a final answer. Also interesting is that they used stochastic pooling rather than max pooling because of its better performance when training data is limited. Kim et al. achieved a test accuracy of 61 percent in the EmotiW 2015 competition by using an ensemble-based method with varying network architectures and parameters [1].
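The test-time strategy attributed to Yu and Zhang above, predicting on several perturbed copies of a test image and voting on the class label, can be illustrated with a minimal sketch. The `model_predict` interface, the flip/jitter perturbations, and the dummy model below are all assumptions for illustration; the authors' actual transformations are not reproduced here.

```python
import numpy as np

def predict_with_voting(model_predict, image, n_perturbations=10, rng=None):
    """Predict via majority vote over random perturbations of the input.
    `model_predict` is a hypothetical function mapping an HxW image in
    [0, 1] to a vector of class probabilities."""
    rng = np.random.default_rng(rng)
    votes = []
    for _ in range(n_perturbations):
        perturbed = image.copy()
        # Illustrative perturbations only: random horizontal flip and
        # small pixel-intensity jitter.
        if rng.random() < 0.5:
            perturbed = perturbed[:, ::-1]
        perturbed = np.clip(perturbed + rng.normal(0, 0.02, perturbed.shape), 0, 1)
        votes.append(int(np.argmax(model_predict(perturbed))))
    # Majority vote over the predicted class labels.
    return int(np.bincount(votes).argmax())

# Usage with a dummy 5-class "model" standing in for a trained CNN:
dummy = lambda img: np.eye(5)[int(img.mean() * 5) % 5]
label = predict_with_voting(dummy, np.full((48, 48), 0.5))  # -> 2
```

At test time this trades extra forward passes for robustness to small input variations, which is the rationale the paper gives for voting over perturbations.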
They used a hierarchical decision tree and an exponential rule to combine the decisions of different networks rather than a simple weighted average, which boosted their performance. They initialized weights by training neural networks on other FER datasets and used these weights to fine-tune their model in order to achieve better accuracy. Mollahosseini et al. also obtained good results in facial expression recognition. Their network consisted of two convolutional layers, max-pooling, and four Inception layers as introduced by GoogLeNet.

https://doi.org/10.1016/j.matpr.2020.08.501
2214-7853/© 2020 Elsevier Ltd. All rights reserved.
* Corresponding author. E-mail address: csa.cse@rmd.ac.in (C.S. Anita).
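The exponential combination rule of Kim et al. is only named above, not specified. A minimal sketch, assuming (hypothetically) that each network's output is weighted in proportion to the exponential of its validation accuracy, might look like this; the scaling constant `k` and the example numbers are illustrative, not taken from the paper.

```python
import numpy as np

def exponential_ensemble(probs, val_accs, k=10.0):
    """Combine per-network class probabilities with weights that grow
    exponentially with each network's validation accuracy.

    probs:    (n_networks, n_classes) array of softmax outputs
    val_accs: (n_networks,) validation accuracies in [0, 1]
    """
    w = np.exp(k * np.asarray(val_accs))
    w /= w.sum()                        # normalize the weights
    combined = w @ np.asarray(probs)    # weighted average of probabilities
    return int(np.argmax(combined))

# Three hypothetical networks voting on a 5-class example; the most
# accurate networks dominate the combined prediction.
probs = [[0.1, 0.6, 0.1, 0.1, 0.1],
         [0.5, 0.2, 0.1, 0.1, 0.1],
         [0.1, 0.5, 0.2, 0.1, 0.1]]
label = exponential_ensemble(probs, val_accs=[0.61, 0.45, 0.58])  # -> 1
```

Compared with a uniform average, an exponential weighting sharply down-weights weaker networks, which is consistent with the performance boost the authors report over a simple weighted average.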