IRn in the CLEF Robust WSD Task 2008

Sergio Navarro, Fernando Llopis, Rafael Muñoz
Natural Language Processing and Information Systems Group
University of Alicante, Spain
snavarro,llopis,rafael@dlsi.ua.es

Abstract

This paper describes our participation in the Robust WSD task at CLEF 2008. The aim of this pilot task is to explore methods that can take advantage of WSD information in order to improve IR systems. In our approach we used a passage-based system together with a WordNet-based expansion method for the collection documents and the queries, using the two WSD system runs provided by the organization. Furthermore, we experimented with two well-known relevance feedback methods - LCA and PRF - in order to determine which is more suitable for exploiting the WordNet-based WSD query expansion. Our best run obtained 4th place in the competition with a MAP of 0.4008. We conclude that LCA fits this task better than PRF, and that our WSD expansion is useful for some query subsets. In future work we will study the features of the query subsets for which the performance of our system decreases.

Categories and Subject Descriptors

H.3 [Information Storage and Retrieval]: H.3.1 Content Analysis and Indexing; H.3.2 Information Storage; H.3.3 Information Search and Retrieval; H.3.4 Systems and Software; H.3.7 Digital Libraries

General Terms

Measurement, Performance, Experimentation

Keywords

Information Retrieval, PRF, LCA, WordNet, Automatic Query Expansion, Relevance Feedback, WSD

1 Introduction

The aim of the CLEF Robust WSD task is to explore the contribution of Word Sense Disambiguation (WSD) to monolingual and multilingual Information Retrieval, in order to find successful methods of exploiting WSD information that help systems increase their level of robustness.
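The core idea - expanding a query only with synonyms of the sense chosen by a WSD system - can be sketched as follows. This is a minimal illustration, not our actual implementation: the `SYNSETS` dictionary is a hypothetical stand-in for a real WordNet interface, and the sense identifiers are invented for the example.

```python
# Hypothetical sense inventory: (term, sense id) -> synonyms of that sense.
# A real system would query WordNet synsets instead of this toy table.
SYNSETS = {
    ("bank", "bank.n.01"): ["depository", "financial_institution"],
    ("bank", "bank.n.02"): ["riverbank", "slope"],
}

def expand_query(disambiguated_terms):
    """Expand a query given (term, sense_id) pairs produced by a WSD run.

    Only synonyms of the assigned sense are added, so the money sense of
    "bank" does not pull in river-related terms.
    """
    expanded = []
    for term, sense in disambiguated_terms:
        expanded.append(term)
        expanded.extend(SYNSETS.get((term, sense), []))
    return expanded

print(expand_query([("bank", "bank.n.01")]))
# -> ['bank', 'depository', 'financial_institution']
```

The same sense-restricted lookup can be applied to document terms at indexing time, which is the intuition behind expanding both the collection and the queries.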
In our IR research it is common to find - especially in collections of image annotations - documents with very short texts, which has a direct impact on textual retrieval. Indeed, the vocabulary mismatch problem - a concept in a query expressed with terms different from those found in the collection - is aggravated in this type of collection with small-sized documents. Despite the fact that relevance feedback is a good tool for improving results, it often shows unpredictable behaviour, which makes us look for
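To make the relevance feedback baseline concrete, the PRF scheme mentioned above can be sketched as follows. This is a deliberately simplified illustration under assumed conditions - a toy corpus and plain term-frequency scoring - whereas a real passage-based system such as IR-n uses a much richer ranking function.

```python
# Minimal pseudo-relevance feedback (PRF) sketch: retrieve, assume the
# top-k documents are relevant, and add their most frequent unseen terms
# to the query. Toy corpus and tf scoring are illustrative assumptions.
from collections import Counter

DOCS = [
    "passage retrieval splits documents into passages",
    "query expansion adds related terms to the query",
    "relevance feedback reweights terms from top documents",
]

def score(query_terms, doc):
    """Naive relevance score: summed term frequencies of query terms."""
    words = doc.split()
    return sum(words.count(t) for t in query_terms)

def prf_expand(query, k=2, n_terms=2):
    """Expand `query` with the n_terms most frequent new terms of the top-k docs."""
    q = query.split()
    ranked = sorted(DOCS, key=lambda d: score(q, d), reverse=True)
    counts = Counter(w for d in ranked[:k] for w in d.split())
    feedback = [w for w, _ in counts.most_common() if w not in q][:n_terms]
    return q + feedback

print(prf_expand("query expansion"))
# -> ['query', 'expansion', 'adds', 'related']
```

The unpredictability noted above comes from the blind assumption in the middle step: when the top-k documents are not actually relevant, the added terms drift the query away from the user's intent.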