SPORTAL: Profiling the Content of Public SPARQL Endpoints

Ali Hasnain†, Qaiser Mehmood†, Syeda Sana e Zainab†, Aidan Hogan‡

†INSIGHT Centre for Data Analytics, National University of Ireland, Galway
‡Center for Semantic Web Research, Department of Computer Science, University of Chile

Abstract

Access to hundreds of knowledge-bases has been made available on the Web through public SPARQL endpoints. Unfortunately, few endpoints publish descriptions of their content (e.g., using VoID). It is thus unclear how agents can learn about the content of a given SPARQL endpoint or, relatedly, find SPARQL endpoints with content relevant to their needs. In this paper, we investigate the feasibility of a system that gathers information about public SPARQL endpoints by querying them directly about their own content. With the advent of SPARQL 1.1 and features such as aggregates, it is now possible to specify queries whose results would form a detailed profile of the content of the endpoint, comparable with a large subset of VoID. In theory it would thus be feasible to build a rich centralised catalogue describing the content indexed by individual endpoints by issuing them SPARQL (1.1) queries; this catalogue could then be searched and queried by agents looking for endpoints with content they are interested in. In practice, however, the coverage of the catalogue is bounded by the limitations of public endpoints themselves: some may not support SPARQL 1.1, some may return partial responses, some may throw exceptions for expensive aggregate queries, etc. Our goal in this paper is thus twofold: (i) using VoID as a bar, to empirically investigate the extent to which public endpoints can describe their own content, and (ii) to build and analyse the capabilities of a best-effort online catalogue of current endpoints based on the (partial) results collected.
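As a concrete illustration of the profiling idea, the following sketch computes the kinds of VoID-style dataset statistics that a SPARQL 1.1 aggregate query can return from an endpoint (e.g., `SELECT (COUNT(DISTINCT ?c) AS ?n) WHERE { ?s a ?c }` for the number of classes). The triples and IRIs below are made-up toy data for illustration only; this is not the SPORTAL implementation, which issues such queries remotely against live endpoints.

```python
# Toy sketch: VoID-style statistics over a small in-memory set of triples,
# mirroring what SPARQL 1.1 aggregate queries compute on an endpoint.
# All data below is hypothetical example data.
RDF_TYPE = "rdf:type"

triples = [
    ("ex:mj", RDF_TYPE, "ex:Person"),
    ("ex:p1", RDF_TYPE, "ex:Protein"),
    ("ex:p2", RDF_TYPE, "ex:Protein"),
    ("ex:mj", "ex:name", '"Michael Jackson"'),
]

def profile(triples):
    """Compute VoID-style counts; each entry corresponds to a SPARQL 1.1
    aggregate query, e.g. classes <-> COUNT(DISTINCT ?c) WHERE { ?s a ?c }."""
    return {
        "triples": len(triples),                                      # void:triples
        "classes": len({o for _, p, o in triples if p == RDF_TYPE}),  # void:classes
        "properties": len({p for _, p, _ in triples}),                # void:properties
        "subjects": len({s for s, _, _ in triples}),                  # void:distinctSubjects
    }

print(profile(triples))
# {'triples': 4, 'classes': 2, 'properties': 2, 'subjects': 3}
```

Each of these counts maps to one self-describing query that a catalogue builder could send to an endpoint; in practice, as the abstract notes, some endpoints will fail or truncate the more expensive aggregate queries, so the collected profile is best-effort.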
1 Introduction

Linked Data aims at making data available on the Web in an interoperable format so that agents can discover, access, combine and consume content from different sources with higher levels of automation than would otherwise be possible [22]. The envisaged result is a “Web of Data”: a Web of structured data with rich semantic links where agents can query in a unified manner – across sources – using standard languages and protocols. Over the past few years, hundreds of knowledge-bases with billions of facts have been published according to the Semantic Web standards (using RDF as a data model and RDFS and OWL to provide explicit semantics) following the Linked Data principles.

As a convenience for consumer agents, Linked Data publishers often provide a SPARQL endpoint for querying their local content [25]. SPARQL is a declarative query language for RDF in which graph pattern matching, disjunctive unions, optional clauses, dataset construction, solution modifiers, etc., can be used to query RDF knowledge-bases; the recent SPARQL 1.1 release adds features such as aggregates, property paths, sub-queries, federation, and so on [19]. Hundreds of public endpoints have been published in the past few years for knowledge-bases of various sizes and topics [25, 10]. Using these endpoints, clients can receive direct answers to complex queries using a single request to the server.

However, it is still unclear how clients (be they human users or software agents) should find endpoints relevant for their needs in the first place [36, 10]. A client may have a variety of needs when looking for an endpoint, where they may, for example, seek endpoints with data:

1. about a given resource, e.g., michael jackson;
2. about instances of a particular type of class, e.g., proteins;
3. about a certain type of relationship between resources, e.g., directs-movie;