Advances in Computing 2014, 4(2): 31-40
DOI: 10.5923/j.ac.20140402.01

Emotion-Based System for Social Media Content Processing and Event Monitoring

Mohamed Abdur Rahman, Mohamed A. Ahmed*

Advanced Media Laboratory, Computer Science Department, College of Computer and Information Systems, Umm Al-Qura University, Makkah Al Mukarramah, Kingdom of Saudi Arabia

Abstract  In this paper, we propose an open-source web-based emotion retrieval service for textual social media content, called MediaTagger, which can extract emotional value from online content coming from diverse Internet-based services. MediaTagger mashes up state-of-the-art emotion services and allocates the right emotion retrieval service, using several emotion visualization metaphors, based on the content of each service. MediaTagger also incorporates a flexible and adaptable emotion authoring service based on the Naïve Bayes machine learning algorithm. The authoring service within the system also helps create new domains of emotion extraction services. We share quantitative and qualitative results that show the viability of our system as well as the experience and satisfaction of regular end-users. We also describe a case study that leverages the features provided by MediaTagger. This study shows the use of emotion-bearing textual posts for monitoring and decision-support systems during major events such as Hajj.

Keywords  Social Media, Emotion Mashup, Multimedia

1. Introduction

We are surrounded by a vast amount of user-generated content, thanks to the widespread adoption of social networks. Examples include blogs, RSS news feeds, social networks such as Twitter, email messages, and image/video sharing services. Content from each source carries emotional value for the information producer [1]. People leave their emotional footprint mostly across diverse social network services, in the form of reviews, comments, and answers, through a variety of media.
For example, people upload videos from numerous domains such as weather, technology, news, sports, and different products and services. They also provide comments about those entities while expressing their emotions. However, text containing user emotion about the weather, for example, does not have the same emotional value as user comments reviewing the Microsoft Kinect as an XBOX sensor. Consequently, extracting user emotion requires separate extraction knowledge for every domain of knowledge. In [3], the author analyzes emotion classifications and states from a cognition perspective. The work provides a 3D circumplex model that describes the relationships between different emotion possibilities. The authors in [4] present a platform called SenticNet for mining online opinions and discovering human emotions using common-sense reasoning, polarity concepts, and their own characterization model. In [5], the authors discuss the topic of tagging in general, especially the handling of image and photo tags in the Flickr/ZoneTag online services. With the ever-growing use of online Web 2.0 tools, and especially customer reviews, the authors of [6, 7, 8] analyzed the effect of online customer reviews and emotions on new customer purchases and on the branding image, perception, and marketing strategies of companies and vendors. In [9], the authors classify the sentiment of Twitter posts and trends using machine learning techniques. However, their algorithm needs to refine noisy knowledge and to provide a feedback and control mechanism for updating and inserting new knowledge.

* Corresponding author: mamahmed@uqu.edu.sa (Mohamed A. Ahmed)
Published online at http://journal.sapub.org/ac
Copyright © 2014 Scientific & Academic Publishing. All Rights Reserved
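The observation above, that each knowledge domain needs its own emotion extraction knowledge, together with the Naïve Bayes authoring service mentioned in the abstract, can be illustrated with a minimal sketch: one small Naïve Bayes classifier trained per domain. The class, the toy training posts, and the emotion labels below are our own illustrative assumptions, not MediaTagger's actual implementation.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesEmotionClassifier:
    """Minimal multinomial Naive Bayes with add-one (Laplace) smoothing.
    One instance is trained per content domain (illustrative sketch)."""

    def __init__(self):
        self.class_counts = Counter()             # documents seen per emotion label
        self.word_counts = defaultdict(Counter)   # word frequencies per label
        self.vocab = set()

    def train(self, samples):
        """samples: iterable of (text, emotion_label) pairs."""
        for text, label in samples:
            self.class_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def classify(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label, doc_count in self.class_counts.items():
            # log P(label) + sum of log P(word | label), with add-one smoothing
            score = math.log(doc_count / total_docs)
            label_total = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (label_total + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical per-domain classifiers with toy training data.
weather = NaiveBayesEmotionClassifier()
weather.train([("what a sunny beautiful day", "joy"),
               ("heavy storm flooded the streets", "fear")])

reviews = NaiveBayesEmotionClassifier()
reviews.train([("the kinect sensor works flawlessly", "joy"),
               ("the sensor failed after one week", "anger")])

print(weather.classify("sunny day ahead"))      # -> joy
print(reviews.classify("sensor failed again"))  # -> anger
```

Keeping a separate model per domain lets the same word carry different emotional weight in weather posts than in product reviews, which is exactly the domain-dependence noted above.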
Meanwhile, the authors in [10] analyze keywords within microblog feeds such as Twitter, Plurk, and Jaiku to learn the sentiments of those keywords, visualizing the results through an audiovisual interface of music tones (driven by dynamic arousal and valence values) that represent the sentiment of each microblog post. They make use of several factors, such as response, context, and friendship, in deciding the sentiment labels. Some research has also been done on evaluating the mood sentiment of video content, such as the work in [11], where the authors utilize low-level video features such as color and sound, mapped to their corresponding valence-arousal values, to determine the emotion within standalone video content. However, the main issues with that work are that it targets only standalone video files and that the reported accuracy is relatively low (about 60%).
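The valence-arousal representation used in [10] and [11] can be sketched in a few lines: a post's valence (pleasantness) and arousal (intensity) scores select a coarse emotion quadrant in the circumplex model of [3]. The zero thresholds and quadrant labels below are illustrative simplifications of our own, not the cited authors' exact mappings.

```python
def circumplex_label(valence, arousal):
    """Map a (valence, arousal) pair in [-1, 1] x [-1, 1] to a coarse
    emotion quadrant of the circumplex model. Labels are illustrative."""
    if valence >= 0 and arousal >= 0:
        return "excited/happy"   # pleasant and intense
    if valence >= 0:
        return "calm/content"    # pleasant and subdued
    if arousal >= 0:
        return "angry/afraid"    # unpleasant and intense
    return "sad/bored"           # unpleasant and subdued

print(circumplex_label(0.7, 0.6))    # -> excited/happy
print(circumplex_label(-0.5, -0.3))  # -> sad/bored
```

The cited systems use much finer-grained mappings (e.g., continuous music tones in [10]); the point of the sketch is only that two continuous dimensions already separate broad emotion families.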