© 2014, IJARCSMS. All Rights Reserved.
ISSN: 2321-7782 (Online)
Volume 2, Issue 4, April 2014
International Journal of Advance Research in
Computer Science and Management Studies
Research Article / Paper / Case Study
Available online at: www.ijarcsms.com
Emotion Recognition Based on MFCC Features using SVM
E. Vijayavani ¹
Department of Information Technology
E. G. S. Pillay Engineering College
Nagapattinam, Tamil Nadu – India
S. Lavanya ²
Department of Information Technology
E. G. S. Pillay Engineering College
Nagapattinam, Tamil Nadu – India
P. Suganya ³
Department of Information Technology
E. G. S. Pillay Engineering College
Nagapattinam, Tamil Nadu – India
E. Elakiya ⁴
Department of Computer Science and Engineering
E. G. S. Pillay Engineering College
Nagapattinam, Tamil Nadu – India
Abstract: Music is often referred to as the language of emotion, and hence music emotion recognition can be useful in music understanding, music retrieval, and other music-related applications. This paper discusses a method to extract features from music samples and, using those features, to detect the emotion. We focus on the challenging issue of recognizing music emotions such as happy, sad, anger, fear, and neutral. Musical data are collected from various sources. Mel frequency cepstral coefficients (MFCCs) are extracted as features from the collected data. The resulting MFCC coefficients are input to a support vector machine (SVM), which compares them against the stored database and recognizes the emotion. Data are collected from various websites and supplemented with recorded data.
Keywords: mel frequency cepstral coefficient (MFCC), support vector machine (SVM), Thayer’s model.
I. INTRODUCTION
Many issues in music emotion recognition have been addressed by different disciplines such as physiology, psychology, and musicology. This paper focuses on the challenging issue of recognizing music emotions based on subjective human emotions and acoustic music signal features, and presents an intelligent music emotion recognition system.
Hence, one of the most important prerequisites for realizing such an advanced user interface is a reliable emotion recognition system that guarantees acceptable recognition accuracy, robustness, and adaptability in practical applications. Developing such a system requires the following stages: modelling, analysing, processing, training, and classifying emotional features measured from the implicit emotion channels of human communication, such as speech, facial expressions, and physiological responses.
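The feature-extraction stage of such a pipeline can be illustrated with a minimal MFCC computation. The paper does not provide an implementation, so the sketch below follows the standard textbook recipe (pre-emphasis, framing, power spectrum, mel filterbank, log, DCT-II); parameter values such as the 512-point FFT, 26 mel bands, and 13 coefficients are common defaults, not the authors' settings.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: pre-emphasis, framing, power spectrum,
    mel filterbank, log compression, and DCT-II decorrelation."""
    # Pre-emphasis boosts high frequencies before analysis
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Frame the signal and apply a Hamming window
    n_frames = 1 + (len(sig) - n_fft) // hop
    frames = np.stack([sig[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames *= np.hamming(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank spanning 0 .. sr/2
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then DCT-II gives the cepstral coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_mels)))
    return log_mel @ dct.T  # shape: (n_frames, n_ceps)
```

In a full system, each song would yield one such matrix of coefficients, which is then summarized (for example, by per-coefficient means) into a fixed-length vector and passed to the SVM classifier.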
A. Music and Emotion
Automatic emotion detection and recognition in speech and music has grown rapidly with technological advances in digital signal processing and various effective feature extraction methods. Emotion recognition can play an important role in many potential applications such as music entertainment and human-computer interaction systems. Many researchers have explored models of emotion and the factors that give rise to the perception of emotion in music, while others have investigated the problem of automatically recognizing emotion in music. Traditional mood and emotion research in music has focused on finding psychological and physiological factors that influence emotion recognition and classification. During the 1980s, several emotion models were proposed, largely based on the dimensional approach to emotion rating.
The dimensional approach focuses on identifying emotions along dimensions such as valence and activity. Thayer suggested a two-dimensional emotion model that is simple but powerful in organizing different emotional responses. The dimension of stress