Research Article
Human-Computer Interaction for Recognizing Speech
Emotions Using Multilayer Perceptron Classifier
Abeer Ali Alnuaim,1 Mohammed Zakariah,2 Prashant Kumar Shukla,3 Aseel Alhadlaq,1 Wesam Atef Hatamleh,4 Hussam Tarazi,5 R. Sureshbabu,6 and Rajnish Ratna7
1 Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, P.O. Box 22459, Riyadh 11495, Saudi Arabia
2 College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
3 Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur 522502, Andhra Pradesh, India
4 Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
5 Department of Computer Science and Informatics, School of Engineering and Computer Science, Oakland University, 318 Meadow Brook Rd, Rochester, MI 48309, USA
6 Department of ECE, Kamaraj College of Engineering and Technology, Virudhunagar, Tamil Nadu, India
7 Gedu College of Business Studies, Royal University of Bhutan, Thimphu, Bhutan
Correspondence should be addressed to Rajnish Ratna; rajnish.gcbs@rub.edu.bt
Received 17 February 2022; Revised 28 February 2022; Accepted 4 March 2022; Published 28 March 2022
Academic Editor: M.A. Bhagyaveni
Copyright © 2022 Abeer Ali Alnuaim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Human-computer interaction (HCI) has seen a paradigm shift from textual or display-based control toward more intuitive control modalities such as voice, gesture, and mimicry. Speech, in particular, carries a great deal of information about the speaker's inner state, intentions, and desires. While word analysis enables the speaker's request to be understood, other speech features disclose the speaker's mood, purpose, and motive. As a result, emotion recognition from speech has become critical in current human-computer interaction systems. Moreover, the findings of the several disciplines involved in emotion recognition are difficult to combine. Many sound analysis methods have been developed in the past; however, they could not provide emotional analysis of people in live speech. Today, advances in artificial intelligence and the high performance of deep learning methods have brought studies on live data to the fore. This study aims to detect emotions in the human voice using artificial intelligence methods. Data are one of the most important requirements of artificial intelligence work. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), an open-source dataset, was used in this study. The RAVDESS dataset contains more than 2000 recordings of speech and song performed by 24 actors, covering eight different moods. The aim was to detect eight emotion classes: neutral, calm, happy, sad, angry, fearful, disgusted, and surprised. The multilayer perceptron (MLP) classifier, a widely used supervised learning algorithm, was chosen for classification. The proposed model's performance was compared with that of similar studies, and the results were evaluated. An overall accuracy of 81% was obtained for classifying the eight emotions with the proposed model on the RAVDESS dataset.
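To make the described pipeline concrete, the following is a minimal sketch and not the authors' actual implementation: it assumes MFCC features extracted with librosa and scikit-learn's MLPClassifier, and the directory layout, feature dimensionality, and hyperparameters shown here are illustrative only, since the study's exact settings are not given in this excerpt.

```python
# Minimal sketch of an MLP-based emotion classifier on RAVDESS.
# Assumptions: mean MFCC features via librosa; scikit-learn MLPClassifier;
# audio files placed under a local "RAVDESS/" directory.
import glob
import os
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Third field of a RAVDESS filename (e.g. 03-01-05-01-02-01-12.wav) encodes the emotion.
EMOTIONS = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
            "05": "angry", "06": "fearful", "07": "disgusted", "08": "surprised"}

def extract_features(path, n_mfcc=40):
    """Load a WAV file and return its mean MFCC vector."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.mean(mfcc.T, axis=0)

X, y = [], []
for path in glob.glob("RAVDESS/**/*.wav", recursive=True):
    emotion_code = os.path.basename(path).split("-")[2]
    X.append(extract_features(path))
    y.append(EMOTIONS[emotion_code])

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=42)

# Illustrative hyperparameters; the paper's settings may differ.
clf = MLPClassifier(hidden_layer_sizes=(300,), alpha=0.01, max_iter=500)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```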
1. Introduction
Emotions are critical in human-computer interaction [1]. Recent years have seen increased interest in speech emotion recognition (SER), which uses speech cues to analyze emotional states. Nonetheless, SER remains a challenging endeavor because of the difficulty of extracting practically useful emotional features. SER is valuable for human-computer interaction: the system must comprehend the user's feelings to define its activities appropriately. Numerous activities,