Expectation as a Mediator of User Satisfaction

Anthony Cox
Faculty of Computer Science, Dalhousie University
Halifax, Nova Scotia, Canada

Maryanne Fisher
Department of Psychology, York University
Toronto, Ontario, Canada

Abstract

An important issue when evaluating an information retrieval tool is the satisfaction of the tool's users. While many factors affect satisfaction, we advocate that a user's expectation with respect to a query's response is a key mediator of their satisfaction. We validate this perspective by measuring individuals' expectation, judgment of response quality and satisfaction level for a set of four queries. Our experiment indicates that the difference between response quality and expectation correlates significantly with user satisfaction for all four queries. The experiment also demonstrates the effectiveness of a within-subjects measure in accounting for a variety of extraneous factors arising from the characteristics and preferences of individual tool users.

1 Introduction

Information retrieval is a complex process with many elements, such as query formation, query refinement, relevance judgment and solution identification. While each element of the retrieval process can be evaluated independently, such evaluation does not capture the interactions among the elements. Furthermore, numerous external user and contextual factors confound the evaluation of any specific element of the retrieval process. Consequently, accurate evaluation of retrieval tasks must encompass the entire retrieval process without being sensitive to extraneous factors such as the user's experience level and personal search strategies. Experimental design methodology, as practiced in the social sciences, provides statistically sound and scientifically robust techniques for managing the diverse characteristics displayed by experimental subjects.
A within-subjects experimental design controls for extraneous factors by examining the effect of an experiment on each subject individually. Subjects are not compared against each other, so differences introduced by each subject's individual characteristics are avoided. Each subject therefore serves as their own 'control', validating comparisons within a single subject rather than among subjects. We believe that accurate evaluation of the information retrieval process must use measurements made on a within-subjects basis.

Perhaps the single most important issue when examining an information retrieval tool is the satisfaction exhibited by the tool's users [3]. A user who is highly satisfied is likely to use the tool for future retrieval tasks, to use it more frequently and to adopt it as their primary tool. Consequently, it is necessary to examine the factors that affect a user's satisfaction level so that composite measures can be developed.

When searching for a lost object, casual observation suggests that we are most satisfied when we find the object when we least expect to do so. That is, our satisfaction is mediated by our expectation of success. Information retrieval is no different: a tool user's satisfaction is also significantly affected by their expectations. Using this observation as a basis, it is possible to define a user's satisfaction as follows: given a value for the quality of the response and a value for the expectation of receiving an appropriate response, the user's satisfaction is the difference of these values.

Moving to St. Mary's University, Halifax, Nova Scotia, Canada as of 1 July 2004.