EXPLANATIONS IN RECOMMENDER SYSTEMS: OVERVIEW AND RESEARCH APPROACHES

MOHAMMED Z. AL-TAIE
Computer Science Department, Al-Salam University College, Iraq
mza004@live.aul.edu.lb

Abstract

Recommender systems are software tools that supply users with suggestions for items to buy. However, many recommender systems have been found to function as black boxes, providing no transparency and no information about how their internal parts work. Explanations are therefore used to show why a specific recommendation was provided. The importance of explanations has been recognized in a number of fields, such as expert systems, decision support systems, intelligent tutoring systems and data explanation systems. Failing to generate a suitable explanation can degrade a recommender system's performance, its applicability and, eventually, its value for monetization. Our goal in this paper is to provide a comprehensive review of the main research areas of explanations in recommender systems, along with suitable examples from the literature. Open challenges in the field are also discussed. The results show that most work in the field focuses on the set of characteristics that can be associated with explanations: transparency, validity, scrutability, trust, relevance, persuasiveness, comprehensibility, effectiveness, efficiency, satisfaction and education. All of these characteristics can increase the system's trustworthiness. Other research areas include explanation interfaces, over- and underestimation, and decision making.

Keywords: Recommender Systems; Explanations; Explanation Styles; Explanation Attributes; Decision Making; Research Approaches

Received July 31, 2013; accepted September 19, 2013

1. INTRODUCTION

In recommender systems (RSs), an explanation can be defined as a piece of information that serves different goals, such as showing why a specific recommendation was given or how to maintain better communication in commercial transactions [15].
Another definition is that an explanation is a description that helps users better judge whether a recommended item is relevant to their needs [17]. It can also be defined as an important piece of information used by both selling and buying agents during their communication to improve their performance [7].

Explanations in recommender systems have gained increasing importance in the last few years. Although they cannot completely compensate for poor recommendations, they have been found to increase user acceptance of Collaborative Filtering (CF) recommender systems, help users make decisions more quickly, convince them to buy, and develop users' trust in the system as a whole [4]. It was also found that providing explanations along with recommendations helps users better understand the recommender system and establishes a "sense of forgiveness" when users do not like newly recommended items [1]. Explanations can benefit from advances in a number of disciplines, such as intelligent systems, human-computer interaction and information systems.

The purpose of this paper is to give an overview of the area of explanations in recommender systems and to survey existing research approaches. Although previous studies such as [14] and [12] reviewed the field (with a special focus on explanation attributes), they did not cover all important research approaches. The main contribution of this work is that it surveys the main strands of research in the field and brings them together in one comprehensive work.

The remainder of this paper is organized as follows. The next subsections review the relevant literature on explanations in RSs: their history, their types and their styles. Section 2 addresses the main research approaches in explanations. Section 3 discusses open challenges, and Section 4 presents conclusions and future work.
1.1 RELATED WORK

A number of studies, from different perspectives, have examined the effect of explanations on the performance of recommender systems or explored different variants of explanations. For example, Tintarev and Masthoff [14], [16] discussed the influence of a number of explanation characteristics on the behavior of the system: transparency, scrutability, trust, persuasiveness, effectiveness, efficiency and satisfaction. All of these characteristics can help increase the system's trustworthiness. Herlocker et al. [6] compared the performance of 21 explanation interfaces and sought the best techniques for supporting explanations in a collaborative-filtering system.

The International Arab Conference on Information Technology (ACIT2013)
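Several of the interfaces compared in studies such as Herlocker et al.'s summarize how the active user's nearest neighbors rated the recommended item. The following is a minimal, hypothetical sketch of that neighbor-rating style of explanation; the function name, the 1-5 rating scale and the wording are illustrative assumptions, not taken from [6].

```python
from collections import Counter

def histogram_explanation(neighbor_ratings, thresholds=(4, 3)):
    """Generate a textual neighbor-rating explanation (illustrative sketch).

    neighbor_ratings: list of integer ratings (assumed 1-5 scale) given to
    the recommended item by the active user's nearest neighbors.
    thresholds: (high, middle) rating cut-offs used to bucket the ratings.
    """
    counts = Counter(neighbor_ratings)
    high, mid = thresholds
    good = sum(c for r, c in counts.items() if r >= high)   # strong ratings
    ok = sum(c for r, c in counts.items() if r == mid)      # lukewarm ratings
    bad = sum(c for r, c in counts.items() if r < mid)      # poor ratings
    total = len(neighbor_ratings)
    return (f"{good} of your {total} neighbors rated this item "
            f"{high} stars or higher, {ok} rated it {mid} stars, "
            f"and {bad} rated it lower.")

print(histogram_explanation([5, 4, 4, 3, 2, 5, 4]))
# → 5 of your 7 neighbors rated this item 4 stars or higher,
#   1 rated it 3 stars, and 1 rated it lower.
```

Such summaries make the collaborative-filtering rationale visible to the user without exposing the underlying similarity computation.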