Explaining User Models with Different Levels of Detail for
Transparent Recommendation: A User Study
Mouadh Guesmi
University of Duisburg-Essen
Duisburg, Germany
mouadh.guesmi@stud.uni.de
Mohamed Amine Chatti
University of Duisburg-Essen
Duisburg, Germany
mohamed.chatti@uni-due.de
Laura Vorgerd
University of Duisburg-Essen
Duisburg, Germany
laura.vorgerd@stud.uni-due.de
Thao Ngo
University of Duisburg-Essen
Duisburg, Germany
thao.ngo@uni-due.de
Shoeb Joarder
University of Duisburg-Essen
Duisburg, Germany
shoeb.joarder@uni-due.de
Qurat Ul Ain
University of Duisburg-Essen
Duisburg, Germany
qurat.ain@stud.uni.de
Arham Muslim
National University of Sciences and
Technology
Islamabad, Pakistan
arham.muslim@seecs.edu.pk
ABSTRACT
In this paper, we shed light on explaining user models for trans-
parent recommendation while considering user personal character-
istics. To this end, we developed a transparent Recommendation
and Interest Modeling Application (RIMA) that provides interactive,
layered explanations of the user model with three levels of detail
(basic, intermediate, advanced) to meet the demands of different
types of end-users. We conducted a within-subject study (N=31) to
investigate the relationship between personal characteristics and
the explanation level of detail, and the effects of these two variables
on the perception of the explainable recommender system with
regard to different explanation goals. Based on the study results,
we provide suggestions to support the effective design of
user model explanations for transparent recommendation.
CCS CONCEPTS
· Human-centered computing → Interactive systems and tools;
· Computing methodologies → Artificial intelligence.
KEYWORDS
intelligent explanation interfaces; recommender systems; explainable recommendation; explainable user modeling; personal characteristics
ACM Reference Format:
Mouadh Guesmi, Mohamed Amine Chatti, Laura Vorgerd, Thao Ngo, Shoeb
Joarder, Qurat Ul Ain, and Arham Muslim. 2022. Explaining User Models
with Different Levels of Detail for Transparent Recommendation: A User
Study. In Adjunct Proceedings of the 30th ACM Conference on User Modeling,
Adaptation and Personalization (UMAP '22 Adjunct), July 4–7, 2022, Barcelona,
Spain. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3511047.3537685

Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
UMAP '22 Adjunct, July 4–7, 2022, Barcelona, Spain
© 2022 Association for Computing Machinery.
ACM ISBN 978-1-4503-9232-7/22/07. . . $15.00
https://doi.org/10.1145/3511047.3537685
1 INTRODUCTION
Recommender systems (RS) are one of many adaptive systems that
leverage user models to deliver relevant content to their end-users.
User models have been enriched with various features such as openness,
scrutability, and explainability. These features are among the most
investigated by researchers, given their significant impact
on users' perception of adaptive systems and their outcomes
[7, 21]. Opening the user model means allowing users to see, in a
human-understandable form, how the system perceives them,
which yields several benefits such as improved model accuracy
[12]. Scrutinizing the user model builds on openness and is
related to user control: in addition to inspecting their models,
users can interact with them (e.g., edit the content, provide more
information) [12].
Explaining the user model consists of providing explanations of
how the model was generated [19]. Recently, research on explainable
recommendation has started to focus on explaining user
models (i.e., explaining the recommendation input) as an alternative
to revealing the inner workings of the system (i.e., explaining
the recommendation process) or justifying the recommended items
(i.e., explaining the recommendation output) [8, 21].
In addition to the explanation scope (i.e., input, process, output),
another crucial design choice in explainable recommendation re-
lates to the level of explanation detail that should be provided to the
end-user [2]. Users may not be interested in all the information that
the explanation can produce [38]. Different users have different
explanation needs, and explanations may cause negative effects
(e.g., high cognitive load, confusion, lack of trust) if they are difficult
to understand [18, 27, 30, 52, 53]. The majority of current designs