(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 13, No. 6, 2022

Multi-layer Stacking-based Emotion Recognition using Data Fusion Strategy

Saba Tahseen 1, PhD Scholar, Dept. of Computer Science and Engineering, Christ University, Bangalore, India
Ajit Danti 2, Professor, Dept. of Computer Science and Engineering, Christ University, Bangalore, India

Abstract—Electroencephalography (EEG), the recording of brain waves, is a widely used bio-signal in emotion detection, because data recorded from the brain has been found to reflect a connection between emotions and physiological effects. This paper presents a feature selection strategy that applies a data fusion technique to the EEG Brainwave Dataset for Classification. A multi-layer Stacking Classifier with two layers of machine learning techniques is introduced to simultaneously learn features and distinguish the emotional states of pure EEG signals as positive, neutral, or negative. The first stacking layer comprises a support vector classifier and a Random Forest; the second layer comprises a multilayer perceptron and a Nu-support vector classifier. Features are selected according to a Linear Regression based correlation coefficient (LR-CC) score computed over different feature ranges n1, n2, n3, and n4: dataset d1 is built from ranges n1 and n2, dataset d2 from the combination of ranges n3 and n4, and a new dataset d3 is formed by fusing d1 and d2 through the feature selection strategy. This strategy retains 997 of the 2548 features of the EEG Brainwave dataset and yields an emotion recognition accuracy of 98.75%, which is comparable to many state-of-the-art techniques. The results establish scientific groundwork for using a data fusion strategy in emotion recognition.
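The abstract's LR-CC scoring can be illustrated with a minimal sketch. The paper does not spell out the exact formula, so the snippet below makes a hedged assumption: it scores each feature by the absolute Pearson correlation between that feature and the class label, then keeps features above a hypothetical threshold. The toy data, the 50-feature size, and the 0.1 cut-off are all illustrative stand-ins, not the authors' settings (their dataset has 2548 features, of which 997 are kept).

```python
import numpy as np
from sklearn.datasets import make_classification

# Toy stand-in for the EEG Brainwave feature matrix
# (50 features here for speed; the real dataset has 2548).
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=10, random_state=0)

def correlation_scores(X, y):
    """Absolute Pearson correlation of each feature with the label.

    One plausible reading of the paper's LR-CC score; the exact
    definition is not given in the text, so treat this as a sketch.
    """
    Xc = X - X.mean(axis=0)          # center each feature column
    yc = y - y.mean()                # center the label vector
    num = Xc.T @ yc                  # per-feature covariance numerator
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.abs(num / den)

scores = correlation_scores(X, y)
threshold = 0.1                      # hypothetical cut-off
selected = np.where(scores >= threshold)[0]
print(f"kept {selected.size} of {X.shape[1]} features")
```

Fusing two feature subsets (the paper's d1 and d2 into d3) would then amount to taking the union of the selected index sets from each source before training.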
Keywords—Electroencephalograph (EEG); linear regression based correlation coefficient; feature selection; multi-layer stacking model; machine learning techniques; emotion recognition

I. INTRODUCTION

With the rapid advancement of computing and human-interaction technology, there is significant demand for a more intelligent and humanized human-machine interface (HMI). A brain-computer interface (BCI) provides a way of translating the brain processes of a living organism, such as a human or an animal, into usable signals. The BCI thus serves as a communication bridge between the human brain and external systems, and as a tool for applications such as emotion identification [1]. Human communication, daily life, and work all rely heavily on emotional expressiveness. Emotion can be characterized as positive, neutral, or negative experience arising from a variety of physiological activities, and it spans a wide range of states such as sadness, happiness, surprise, anger, and disgust [2][3][4]. Emotion recognition research has become more common as sensor-based technologies and processes have improved in capability and accessibility. Emotion recognition has important applications, whether professional or personal [5], in fields such as medicine [6], education, psychology, computer games, driving, security, entertainment [7], workload evaluation, and many others [8]. Emotions can be detected in a variety of ways, including from brain waves and facial expressions. Brain waves can be acquired through both invasive and non-invasive means. In the invasive approach, the electrodes of a brain-computer interface (BCI) are surgically implanted on the exposed brain surface.
The non-invasive approach, also realized through BCI, provides a simple, fast, and practical way of collecting brainwaves, and includes functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG). The signals used for emotion recognition are commonly classified as non-physiological or physiological. Non-physiological signals, such as gesture, text, movement, speech, voice intonation, and facial expression, were the focus of much early work in practical emotion recognition. More recent studies use physiological signals such as the electrocardiogram (ECG) and the electroencephalogram (EEG) [9]. This study proposes an electroencephalography (EEG) signal analysis technique for recognizing and classifying emotional states, together with a data reduction strategy based on a correlation coefficient score computed between features over different ranges, from which a new dataset is developed. Machine learning models are grouped into three types, supervised, unsupervised, and reinforcement learning, along with a specific form termed ensemble learning [10]. Stacking is an ensemble machine learning technique: it trains a machine learning model to aggregate the predictions of the participating ensemble components, thereby reducing variance. In this paper, an extension of the Stacking Classifier with two layers of learning models, i.e., a Multi-layer Stacking Classifier, is developed. This research article makes four contributions: 1) Calculated the correlation coefficient score of features