UNED-UV at Medical Retrieval Task of ImageCLEF 2011

A. Castellanos 1, X. Benavent 2, J. Benavent 2, Ana García-Serrano 1
1 Universidad Nacional de Educación a Distancia, UNED
2 Universitat de València
xaro.benavent@uv.es, agarcia@lsi.uned.es

Abstract. The main goal of this paper is to present our experiments in the ImageCLEF 2011 campaign (Medical Retrieval Task). In this edition we used textual and visual information, based on the assumption that the textual module better captures the meaning of a topic. Accordingly, the TBIR module runs first and acts as a filter, and the CBIR system reorders the textual result list. We also investigate whether query expansion with image terms or with modality classification can improve the base queries. This paper builds on the work done in previous years of ImageCLEF (Wikipedia Retrieval Task). In this edition we submitted a total of ten runs (4 textual and 6 mixed). The textual runs achieved better results (two of them ranking 2nd and 6th within their category). The mixed runs fall around the middle of the results, although they demonstrated that they can improve on the textual-only results. Our results show that query expansion with terms concerning the image type of the query is a promising direction for further research.

Keywords: Query Expansion, Textual-based Retrieval, Content-based Retrieval, Merging

1 Introduction

The main goal of this paper is to present our experiments in the ImageCLEF 2011 campaign (Medical Image Retrieval Task) [1]. In these working notes, we focus on our participation in two sub-tasks of the Medical Retrieval Task (Image Modality Classification and Ad-hoc Image Retrieval). In this ImageCLEF edition our group presents a way of working that combines the information from the Content-Based Image Retrieval (CBIR) system with the information from the Textual-Based Image Retrieval (TBIR) system.
We build on the work done in this regard in previous editions of ImageCLEF [2], based on the assumption that the conceptual meaning of a topic is initially better captured by the text module than by the visual module, so our merging method gives greater weight to the textual results.
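The merging strategy described above (TBIR as a filter, CBIR re-ranking with a smaller weight) can be sketched as a simple weighted late fusion. This is a minimal illustrative sketch, not the authors' actual implementation: the function name, score normalization, and the weight value are assumptions.

```python
def merge_results(textual, visual, alpha=0.7):
    """Re-rank textually retrieved images using visual scores.

    textual: dict mapping image id -> textual relevance score
    visual:  dict mapping image id -> visual similarity score
    alpha:   weight of the textual score; alpha > 0.5 reflects the
             assumption that text better captures the topic meaning.
    Only images returned by the TBIR module are kept (textual filter).
    """
    merged = {}
    for img_id, t_score in textual.items():
        v_score = visual.get(img_id, 0.0)  # 0 if CBIR did not score it
        merged[img_id] = alpha * t_score + (1 - alpha) * v_score
    # Highest combined score first
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: img4 is visually similar but was not retrieved textually,
# so it is filtered out; img2 moves up thanks to its visual score.
textual = {"img1": 0.9, "img2": 0.6, "img3": 0.55}
visual = {"img2": 0.95, "img3": 0.1, "img4": 0.8}
ranking = merge_results(textual, visual)
```

Note that images absent from the textual list are discarded entirely, which matches the description of the TBIR module acting as a filter rather than the two lists being merged symmetrically.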