Resource Space Management Systems

Marcos Baez
Dipartimento di Ingegneria e Scienza dell'Informazione, University of Trento, Trento, Italy
baez@disi.unitn.it

Fabio Casati
Dipartimento di Ingegneria e Scienza dell'Informazione, University of Trento, Trento, Italy
casati@disi.unitn.it

LiquidPub (http://project.liquidpub.org) is an EU project within the "Future and Emerging Technologies" category. Its goal is to capture the lessons learned and the opportunities provided by the Web and by open-source, agile software development in order to develop concepts, models, metrics, and science support services for an efficient (for people), effective (for science), and sustainable (for publishers and the community) way of creating, disseminating, evaluating, and consuming scientific knowledge [1].

Novel services for science are a hot topic these days. From social bookmarking sites to online rankings of scientists, these services try to assist scientists in sharing content and in assessing people and their scientific contributions. These services, however, are still very much anchored to a traditional notion of publication and are only scratching the surface of what can be done to help scientists collaborate for the greater good.

Examples of Scientific Services. An example of the services that LiquidPub intends to deliver is that of Liquid Journals (LJs), which redefine the traditional notion of a journal. That notion was born at a time when the paper was the only possible form of non-verbal knowledge dissemination and printing was the scarce resource, so peer review and pre-publication filtering were necessary. Liquid Journals are based on the following notions: i) separation of publication from inclusion in a journal: contributions are posted online (without any review) or published in traditional journals following a traditional process, and can then be included in an arbitrarily high number of LJs. Each LJ decides the policies and rules that determine whether a contribution is included.
Essentially, LJs are ways to aggregate all sorts of available content based on what is interesting and relevant for their readers. This can be done via review, via collaborative filtering, by looking at the journals of people we hold in high regard, and so on; ii) everybody (even individuals) can create and run LJs; iii) papers are not the only source of knowledge: blogs, experiments, datasets, slides, comments/feedback, and the like are valid and useful forms of dissemination, some of them having the additional benefit of allowing early dissemination and therefore better collaboration. Including feedback as a form of contribution means that feedback is considered part of what is evaluated about a scientist; this encourages giving feedback, which is fundamental to the scientific creation process. Everything is driven towards what the purpose of a journal should be: providing people with interesting content to read, minimizing the dissemination overhead, and maximizing collaboration. Current journals are a particular case of LJs.

In terms of web services, Liquid Journals require an infrastructure that allows defining LJs and fetching/filtering content from the web based on profiles, preferences, recommendations, policies, and so on. The effort in developing Liquid Journals lies in the definition of a query language capable of capturing the notions of "interestingness" and "relevance", and in the development of the underlying query engine on top of scientific resources on the web, capable of merging results from various resource managers (e.g., search engines, social bookmarking services), filtering and grouping the results according to the query definition, and ranking them according to their relevance.

Another service LiquidPub provides is research evaluation (partly, but not only, based on LJs). Evaluation is a necessary aspect of research, not only to filter contributions but also to help select people for hiring or promotion.
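To make the query-engine idea concrete, here is a minimal sketch, in Python, of how results from several resource managers could be merged, filtered by an LJ's inclusion policy, and ranked by relevance. All names (`Contribution`, `evaluate_query`, the toy sources, the normalized `relevance` score) are hypothetical illustrations, not part of any existing LiquidPub API.

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    title: str
    kind: str          # "paper", "dataset", "blog", ... (papers are not the only source)
    relevance: float   # score assigned by the source, assumed normalized to [0, 1]
    source: str

def merge_results(*result_lists):
    """Merge results from several resource managers, de-duplicating by title."""
    seen, merged = set(), []
    for results in result_lists:
        for c in results:
            if c.title not in seen:
                seen.add(c.title)
                merged.append(c)
    return merged

def evaluate_query(sources, include=lambda c: True):
    """Fetch from every source, apply the LJ's inclusion policy, rank by relevance."""
    merged = merge_results(*(fetch() for fetch in sources))
    selected = [c for c in merged if include(c)]
    return sorted(selected, key=lambda c: c.relevance, reverse=True)

# Two toy resource managers standing in for real services
# (a search engine and a social bookmarking service).
search_engine = lambda: [Contribution("Crowdsourcing reviews", "paper", 0.9, "search"),
                         Contribution("Review dataset", "dataset", 0.6, "search")]
bookmarks = lambda: [Contribution("Crowdsourcing reviews", "paper", 0.7, "bookmarks"),
                     Contribution("Notes on peer review", "blog", 0.8, "bookmarks")]

# An LJ whose inclusion policy keeps contributions with relevance above 0.65.
ranked = evaluate_query([search_engine, bookmarks], include=lambda c: c.relevance > 0.65)
print([c.title for c in ranked])
# ['Crowdsourcing reviews', 'Notes on peer review']
```

The point of the sketch is the separation it mirrors from the text: sources produce content independently of any journal, and each LJ is just a policy (`include`) plus a ranking applied over the merged resource space.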
In this respect, the LiquidPub project aims at developing scientific metrics that i) take into account the different aspects of research activity: creating content, filtering content, proposing good ideas, setting up good experiments; and ii) encourage "good" behaviors (sharing content early, providing feedback, etc.) and not only look at what people have done but also try to assess interest in what scientists will produce. Besides defining metrics, we want to make it easy for scientists and evaluation agencies to define their own metrics. To this end, we need to provide services that allow programmatic access to scientific data and metadata -- both traditional sources (Google Scholar, CiteSeer, CiteULike, SpringerLink, ...) and more novel ones (blogs, Liquid Journals, ...) -- that support sophisticated features such as author disambiguation or comparing people from different communities, which therefore have different scientific metrics (this is hard because it is hard to define what a community is), and that allow people to easily define and plug in their own metrics using data from their favorite sources.

Implications for Resource Space Management Systems. Given the above, we need a common platform for accessing the various kinds of scientific resources available on the web, in a way that makes it easy (or at least easier) to develop services for scientists on top of it. Such a platform should provide programmatic access to scientific resources, hiding the tedious problem of accessing heterogeneous platforms, which very often do not even offer programmatic access but are designed only for Web browser access (e.g., Google Scholar). The large (and growing) number of scientific web applications providing access to these resources makes it practically impossible to design a monolithic infrastructure
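One way to picture such a platform is sketched below, under stated assumptions: every adapter class, method name, and data value here is hypothetical. Each heterogeneous service (API-based or browser-only) would be wrapped by an adapter exposing one uniform interface, and user-defined metrics would be plugged in on top without knowing where the data comes from.

```python
class ResourceAdapter:
    """Uniform interface hiding how each service is actually accessed
    (web APIs, or scraping of sites designed only for browser access)."""
    def publications(self, author):
        raise NotImplementedError

class FakeScholarAdapter(ResourceAdapter):
    # Stand-in for a traditional source; a real adapter would fetch remotely.
    def publications(self, author):
        return [{"title": "A", "citations": 10}, {"title": "B", "citations": 4}]

class FakeLiquidJournalAdapter(ResourceAdapter):
    # Stand-in for a novel source, where feedback also counts as a contribution.
    def publications(self, author):
        return [{"title": "C (feedback)", "citations": 1}]

def h_index(pubs):
    """A classic metric, written as a plain function so any metric can be swapped in."""
    counts = sorted((p["citations"] for p in pubs), reverse=True)
    return sum(1 for i, c in enumerate(counts, start=1) if c >= i)

def evaluate(author, adapters, metric):
    """Pool data from all adapters and apply whatever metric the user plugged in."""
    pubs = [p for a in adapters for p in a.publications(author)]
    return metric(pubs)

adapters = [FakeScholarAdapter(), FakeLiquidJournalAdapter()]
print(evaluate("some author", adapters, h_index))   # 2
print(evaluate("some author", adapters, len))       # 3: feedback counts as output too
```

The design choice the sketch illustrates is the one the text argues for: since no monolithic infrastructure can cover every scientific web application, new sources are added by writing small adapters, and evaluation agencies change only the `metric` function, not the data-access layer.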