Please cite this article in press as: Al Zoubi O, et al. Anytime multipurpose emotion recognition from EEG data using a Liquid State
Machine based framework. Artif Intell Med (2018), https://doi.org/10.1016/j.artmed.2018.01.001
Anytime multipurpose emotion recognition from EEG data using a
Liquid State Machine based framework
Obada Al Zoubi a,b,c, Mariette Awad a,∗, Nikola K. Kasabov d
a Department of Electrical and Computer Engineering, American University of Beirut, Lebanon
b School of Electrical and Computer Engineering, University of Oklahoma, USA
c Laureate Institute for Brain Research, OK, USA
d Auckland University of Technology, New Zealand
Article info
Article history:
Received 30 October 2017
Received in revised form 29 December 2017
Accepted 3 January 2018
Keywords:
Emotion recognition
EEG
Liquid State Machine
Machine learning
Pattern recognition
Feature extraction
Abstract
Recent technological advances in machine learning offer the possibility of decoding complex datasets and discerning latent patterns. In this study, we adopt Liquid State Machines (LSM) to recognize the emotional state of an individual from EEG data. LSM were applied to a previously validated EEG dataset in which subjects viewed a battery of emotional film clips and then rated the degree of emotion they felt during each film in terms of valence, arousal, and liking. We introduce LSM as a model for automatic feature extraction and prediction from raw EEG, with potential extension to a wider range of applications. We also elaborate on how to exploit the separation property of LSM to build a multipurpose, anytime recognition framework, in which a single trained model predicts valence, arousal, and liking levels at different durations of the input. Our simulations show that the LSM-based framework achieves outstanding results in comparison with other works across different emotion prediction scenarios with cross-validation.
© 2018 Elsevier B.V. All rights reserved.
1. Introduction
Affective states are psycho-physiological components that can be measured along two principal dimensions: valence and arousal. Valence varies from negative to positive and measures an emotion's consequences, the circumstances eliciting it, or the subjective feelings and attitudes involved. Arousal measures the activation of the sympathetic nervous system and ranges in intensity from not-at-all to extreme. Several studies have proposed models to explain the affective state, such as the six basic emotions model [1], the dimensional scale of emotions model [2], the tree structure of emotions model [3], and the valence-arousal scale model [4]. In this work we rely on the valence-arousal scale model because of its simplicity. The model describes emotion variation in a 2D plane, where each emotion is located according to its valence and arousal levels. Fig. 1 shows the valence-arousal scale proposed by Russell, in which emotions are described in a 2D plane; the horizontal axis represents valence while the vertical axis represents arousal.
More specifically, Russell's model is divided into four regions: Low Valence–Low Arousal (LVLA), Low Valence–High Arousal (LVHA), High Valence–Low Arousal (HVLA), and High Valence–High Arousal (HVHA). Thus, the problem of identifying the emotional state is, in most cases, reduced to determining valence and arousal levels.

∗ Corresponding author.
E-mail addresses: obada.alzoubi@ou.edu (O. Al Zoubi), ma162@aub.edu.lb (M. Awad), nkasabov@aut.ac.nz (N.K. Kasabov).
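As a minimal illustration of the quadrant scheme described above, the following sketch maps a (valence, arousal) rating pair to one of Russell's four quadrant labels. The 1–9 rating scale and the midpoint of 5 are assumptions for illustration only; they are not taken from this paper.

```python
def quadrant(valence, arousal, midpoint=5.0):
    """Map a (valence, arousal) rating pair to a quadrant of Russell's
    valence-arousal plane. A 1-9 self-report scale with midpoint 5 is
    assumed here; the actual scale is dataset-dependent."""
    v = "HV" if valence >= midpoint else "LV"   # high vs. low valence
    a = "HA" if arousal >= midpoint else "LA"   # high vs. low arousal
    return v + a

print(quadrant(7, 2))  # HVLA: pleasant but calm, e.g. contentment
print(quadrant(2, 8))  # LVHA: unpleasant and activated, e.g. fear
```

This two-threshold mapping is what allows emotion recognition to be cast as two (or four) classification problems rather than a regression over the full plane.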
Several sources can be used to infer the emotional state in humans, such as facial expressions, speech, and physiological signals like skin temperature, galvanic skin resistance, ECG, fMRI, and EEG. This work uses EEG signals for emotion recognition. EEG signals are brainwaves produced by the population-level action potentials of the brain's neurons during activity. Hence, they may be one of the most reliable sources of emotion information owing to their high temporal resolution. Moreover, EEG signals are relatively easy to acquire thanks to recent advances in wireless and wearable EEG sensors [5,6]. To identify and study the emotional state from EEG, several machine learning (ML) techniques have been applied, such as deep learning (DL) [7–9], support vector machines (SVM) [10], k-nearest neighbors (KNN) [11], and artificial neural networks (ANN) [10].
This work applies a novel framework based on the Liquid State Machine (LSM) [12–14] approach to emotion recognition. LSM is a temporal pattern recognition paradigm and hence is apt to handle the temporal nature of EEG signals. LSM has been applied successfully to many problems involving spatio/spectro-temporal properties, such as speech recognition [15–17] and facial expression