Collaborative environment of the PROMISE infrastructure: an "ELEGantt" approach

Marco Angelini, Sapienza University of Rome, Italy, angelini@dis.uniroma1.it
Claudio Bartolini, HP Labs, USA, claudio.bartolini@hp.com
Gregorio Convertino, Xerox Research Centre Europe, convertino@xrce.xerox.com
Guido Granato, Sapienza University of Rome, Italy, granato@dis.uniroma1.it
Preben Hansen, SICS, Sweden, preben@sics.se
Giuseppe Santucci, Sapienza University of Rome, Italy, santucci@dis.uniroma1.it

Presented at EuroHCIR2012. Copyright © 2012 for the individual papers by the papers' authors. Copying permitted only for private and academic purposes. This volume is published and copyrighted by its editors.

ABSTRACT
This paper focuses on developing lightweight tools for knowledge sharing and collaboration by communities of practice operating in the field of information retrieval. The paper contributes a motivating scenario, a characterization of these communities, a list of requirements for collaboration, and a system design proposed as a proof-of-concept implementation that is currently being evaluated.

1. INTRODUCTION
This paper addresses the problem of supporting knowledge sharing and collaboration in communities of practice that operate in the field of information retrieval (IR). These communities include developers, researchers, and stakeholders who periodically collect and use scientific data produced by the experimental evaluation of IR systems. Specifically, we consider communities involved in three IR domains: Patent, Cultural Heritage, and Radiology.

The research context of the work reported in this paper is the PROMISE NoE. This project aims at advancing the current tools with which IR communities perform experimental evaluation of complex multimedia and multilingual information systems. The ultimate goal of the project is to develop a unified infrastructure through which the community can efficiently collect and reuse data, knowledge, tools, methodologies, and communities of end users. In this context, providing adequate support for collaboration is crucial. Hence the specific goal of the work reported in this paper: designing and evaluating lightweight support for knowledge sharing and collaboration.

Currently, the lack of suitable collaboration tools causes the following problems:

1) Individual members, who contribute as volunteers, must spend greater effort to share knowledge and collaborate. In the long term, this discourages broader participation.

2) Content and process information is poorly reused across the multiple instantiations of similar experimental evaluation processes. Over time, this leads to inefficient processes: content is always recreated from scratch, successful processes (best practices) cannot be reused, and novices cannot easily be trained on the basis of shared experience.

3) The overall community cannot easily reflect on (and thus re-engineer) its own workflow around specific TRECs.

2. MOTIVATING SCENARIO
The starting point of our analysis is a typical IR evaluation campaign (lab). In a typical scenario, Adam (a lab organizer) is preparing an IR experiment and evaluation task, and spends time and resources on coordinating, communicating, and assembling people and resources in order to carry out the overall evaluation task, e.g., recruiting the people who will be responsible for the different evaluation sub-tasks. Communication and sharing of information may differ within and across sub-tasks.
Furthermore, they may differ between labs, without any awareness among the actors of the similarities and differences in the evaluation task processes. It is therefore important to identify the stages of the evaluation task process, as well as how collaboration and information sharing activities manifest themselves.

3. CHARACTERIZING IR COMMUNITIES
The CLEF experimental platform involves a series of CLEF Labs, each with one or more tracks. Each lab, as well as each track, involves a certain set of tasks that can be regarded as a task process or workflow. To define and describe these tasks, we investigated how the lab and track organizers of a CLEF experiment performed their work and which steps they went through. From this, we extracted requirements specifically for collaborative information handling and information sharing activities [3, 5].

An evaluation campaign is an activity intended to support IR researchers by providing a large test collection and uniform scoring procedures. An evaluation campaign is organized within an evaluation framework such as TREC or CLEF and can involve different domains (cultural heritage, patent, radiology, and so on). An evaluation campaign comprises many tracks, such as multimedia, multilingual, text, music, or images. A track can be organized differently depending on the domain and includes, in turn, several tasks. A task defines the structure of the experiment by specifying a set of documents, a set of topics, and a relevance assessment. For each task, the documents can be structured by defining, for example, a title, keywords, images, and so on. A topic represents an information need. Documents can be assessed as relevant or not (or as more or less relevant) for a given information need (topic); this campaign/track/task hierarchy is summarized in the sketch below. Some of the most common tasks that we observed as part of a
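To make the data model described above concrete, the following is a minimal sketch of how the entities of an evaluation campaign could be represented. It is an illustration under our own assumptions: the class and field names (Campaign, Track, Task, Topic, Document, Relevance) and the graded relevance scale are hypothetical, not part of the PROMISE or CLEF infrastructure.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Dict, List, Tuple

    class Relevance(Enum):
        """Assessment of a document for a topic (binary or graded)."""
        NOT_RELEVANT = 0
        PARTIALLY_RELEVANT = 1
        RELEVANT = 2

    @dataclass
    class Topic:
        """A topic represents an information need."""
        topic_id: str
        description: str

    @dataclass
    class Document:
        """A structured document; fields such as title, keywords, or
        images are optional and task-dependent."""
        doc_id: str
        fields: Dict[str, str] = field(default_factory=dict)

    @dataclass
    class Task:
        """A task defines the structure of an experiment: a set of
        documents, a set of topics, and relevance assessments keyed
        by (topic_id, doc_id)."""
        name: str
        documents: List[Document] = field(default_factory=list)
        topics: List[Topic] = field(default_factory=list)
        assessments: Dict[Tuple[str, str], Relevance] = field(default_factory=dict)

    @dataclass
    class Track:
        """A track (e.g. multilingual, images) groups several tasks."""
        name: str
        tasks: List[Task] = field(default_factory=list)

    @dataclass
    class Campaign:
        """An evaluation campaign (e.g. a CLEF lab) groups several tracks."""
        name: str
        tracks: List[Track] = field(default_factory=list)

    # Toy usage: one campaign with one track and one assessed document.
    task = Task("ad-hoc retrieval")
    task.topics.append(Topic("T1", "history of patent law"))
    task.documents.append(Document("D42", {"title": "Patent law in Europe"}))
    task.assessments[("T1", "D42")] = Relevance.RELEVANT
    campaign = Campaign("example lab", tracks=[Track("multilingual", tasks=[task])])

Representing assessments as a mapping from (topic, document) pairs to a graded value is one plausible choice; a binary relevant/not-relevant flag, as also mentioned above, would be the degenerate two-level case.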