From Semantic to Emotional Space in Probabilistic Sense Sentiment Analysis
Mitra Mohtarami¹, Man Lan², and Chew Lim Tan¹
¹Department of Computer Science, National University of Singapore
²Institute for Infocomm Research
mitra@comp.nus.edu.sg; mlan@i2r.a-star.edu.sg; tancl@comp.nus.edu.sg
Abstract
This paper proposes an effective approach to modeling the emotional space of words in order to infer their Sense Sentiment Similarity (SSS). SSS reflects the distance between words with respect to their senses and underlying sentiments. We propose a probabilistic approach built on a hidden emotional model in which the basic human emotions are treated as hidden variables. This model predicts a vector of emotions for each sense of a word, from which the sense sentiment similarity is then inferred. The effectiveness of the proposed approach is investigated in two Natural Language Processing tasks: Indirect yes/no Question Answer Pairs Inference and Sentiment Orientation Prediction.
Introduction
Sentiment analysis or opinion mining aims to enable computers to derive sentiment from human language. In this paper, we address sense sentiment similarity: inferring the similarity between word pairs with respect to their senses and underlying sentiments.
Previous works employed semantic similarity measures to estimate the sentiment similarity of word pairs (Kim and Hovy 2004; Turney and Littman 2003). However, it has been shown that although semantic similarity measures are good at relating semantically related words like "car" and "automobile" (Islam et al., 2008), they are less effective at capturing sentiment similarity (Mohtarami et al., 2012). For example, using Latent Semantic Analysis (Landauer et al., 1998), the semantic similarity of "excellent" and "superior" is greater than the similarity between "excellent" and "good". However, the intensity of sentiment in "excellent" is more similar to that in "superior" than in "good". That is, the sentiment similarity of "excellent" and "superior" should be greater than that of "excellent" and "good".
Copyright © 2013, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
This paper shows that not only are semantic similarity measures less effective, but considering just the overall sentiment of words (as positive or negative) is also insufficient to accurately infer sentiment similarity between word senses. The reason is that, although opinion words can be categorized into positive and negative sentiments with different sentiment intensity values, they carry different human emotions. For instance, consider a
fixed set of emotions e = [anger, disgust, sadness, fear,
guilt, interest, joy, shame, surprise] where each dimension
ranges from 0 to 1. Given the above emotions, the emotion
vectors and the sentiment orientation (SO) of the words
"doleful", "rude" and "smashed" will be as follows
(Neviarouskaya et al., 2007; Neviarouskaya et al., 2009):
e(rude) = [0.2, 0.4, 0, 0, 0, 0, 0, 0, 0], SO(rude) = -0.2 - 0.4 = -0.6
e(doleful) = [0, 0, 0.4, 0, 0, 0, 0, 0, 0], SO(doleful) = -0.4
e(smashed) = [0, 0, 0.8, 0.6, 0, 0, 0, 0, 0], SO(smashed) = -0.8 - 0.6 = -1.4
All three words have negative sentiment, and the SO of "doleful" is closer to that of "rude" than to that of "smashed". However, the emotion vectors indicate that "rude" carries only the emotions "anger" and "disgust", while "doleful" and "smashed" both carry the emotion "sadness". As such, considering the emotional space of words, the word "doleful" should be closer to "smashed" than to "rude".
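The contrast above can be reproduced with a minimal sketch (not the authors' implementation): SO is computed by summing emotion intensities signed by valence, and closeness in the emotional space is measured here with cosine similarity. The assignment of "interest", "joy", and "surprise" to positive valence, and the use of cosine as the distance measure, are our illustrative assumptions.

```python
# Minimal sketch: why emotion vectors separate words that a single
# sentiment-orientation (SO) score conflates. Emotion order and the
# example vectors follow the paper; valence signs and the cosine
# measure are illustrative assumptions.
import math

EMOTIONS = ["anger", "disgust", "sadness", "fear", "guilt",
            "interest", "joy", "shame", "surprise"]
# Assumption: these three emotions carry positive valence.
POSITIVE = {"interest", "joy", "surprise"}

def sentiment_orientation(vec):
    """Sum emotion intensities, signed by the valence of each emotion."""
    return sum(v if e in POSITIVE else -v for e, v in zip(EMOTIONS, vec))

def cosine(a, b):
    """Cosine similarity between two emotion vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

e = {
    "rude":    [0.2, 0.4, 0.0, 0.0, 0, 0, 0, 0, 0],
    "doleful": [0.0, 0.0, 0.4, 0.0, 0, 0, 0, 0, 0],
    "smashed": [0.0, 0.0, 0.8, 0.6, 0, 0, 0, 0, 0],
}

# SO alone puts "doleful" (-0.4) nearer "rude" (-0.6) than "smashed" (-1.4).
print(round(sentiment_orientation(e["rude"]), 2))     # -0.6
print(round(sentiment_orientation(e["smashed"]), 2))  # -1.4

# The emotion vectors reverse that ranking: "doleful" shares "sadness"
# with "smashed" but shares no emotion with "rude".
print(round(cosine(e["doleful"], e["rude"]), 2))      # 0.0
print(round(cosine(e["doleful"], e["smashed"]), 2))   # 0.8
```

Under these assumptions, the emotional space correctly places "doleful" nearer "smashed" than "rude", matching the intuition the example is meant to convey.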
This paper shows that using the emotion vectors of words to infer sense sentiment similarity is more effective than using semantic similarity measures or the overall sentiment of words (as positive or negative). To achieve this aim, we propose a probabilistic approach that combines semantic and emotional spaces. Furthermore, we show the utility of sentiment similarity in the Indirect yes/no Question Answer Pairs (IQAPs) Inference and Sentiment Orientation (SO) prediction tasks, explained as follows:
Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence