R. Meersman et al. (Eds.): OTM 2010 Workshops, LNCS 6428, pp. 45–46, 2010.
© Springer-Verlag Berlin Heidelberg 2010
Performance Testing of
Semantic Publish/Subscribe Systems
Martin Murth¹, Dietmar Winkler², Stefan Biffl², Eva Kühn¹, and Thomas Moser²
¹ Institute of Computer Languages, Vienna University of Technology
² Christian Doppler Laboratory “Software Engineering Integration for
Flexible Automation Systems”, Vienna University of Technology
{mm,eva}@complang.tuwien.ac.at,
{dietmar.winkler,stefan.biffl,thomas.moser}@tuwien.ac.at
Abstract. Publish/subscribe mechanisms support clients in observing knowledge represented in semantic repositories and in responding to knowledge changes. Currently available implementations of semantic publish/subscribe systems differ significantly with respect to performance and functionality. In this paper we present a framework for systematically evaluating publish/subscribe systems and its application to identifying performance bottlenecks and optimization approaches.
1 Introduction and Motivation
Semantic repositories enable the management of highly dynamic knowledge bases [4]. Semantic publish/subscribe mechanisms systematically support the notification of changes [3]: clients register queries (e.g., in SPARQL) as individual subscriptions, and knowledge base updates trigger notifications to the matching subscribers. Several publish/subscribe mechanisms, e.g., the Semantic Event Notification System (SENS) [3], have been developed in the past to address various application requirements (e.g., a focus on functional behavior or on performance measures). Nevertheless, a key question is how to evaluate publish/subscribe systems efficiently with a focus on performance measures. Several benchmark frameworks, e.g., LUBM [1][4], focus on assessing the load, reasoning, and query performance of semantic repositories. However, a standardized approach for evaluating semantic publish/subscribe mechanisms is not yet available. We developed the SEP-BM (Semantic Event Processing Benchmark) framework, which focuses on two common performance metrics, notification time and publication throughput, and implemented a framework for measuring these metrics for semantic publish/subscribe systems [4].
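To make the two metrics concrete, the following is a minimal sketch of how notification time and publication throughput can be measured against a publish/subscribe store. All names here (`SemanticPubSub`, `measure`, the predicate-based subscription) are hypothetical stand-ins for illustration; SEP-BM itself targets real semantic repositories with SPARQL-based subscriptions.

```python
import time

class SemanticPubSub:
    """Hypothetical in-memory publish/subscribe store: subscribers register
    a predicate over (subject, predicate, object) triples and are notified
    synchronously when a published triple matches."""

    def __init__(self):
        self.subscriptions = []  # list of (query, callback) pairs

    def subscribe(self, query, callback):
        self.subscriptions.append((query, callback))

    def publish(self, triple):
        for query, callback in self.subscriptions:
            if query(triple):
                callback(triple)

def measure(store, triples):
    """Measure per-publication notification time (for matching triples)
    and overall publication throughput for a sequence of publish operations."""
    latencies = []
    start = 0.0

    def on_notify(triple):
        # Notification time: delay between publishing and being notified.
        latencies.append(time.perf_counter() - start)

    # Example subscription: notify on all "rdf:type" statements.
    store.subscribe(lambda t: t[1] == "rdf:type", on_notify)

    begin = time.perf_counter()
    for triple in triples:
        start = time.perf_counter()
        store.publish(triple)
    elapsed = time.perf_counter() - begin

    # Publication throughput: publish operations per second.
    throughput = len(triples) / elapsed
    return latencies, throughput

# Example: one of the two published triples matches the subscription,
# yielding one notification-time sample plus a throughput figure.
latencies, throughput = measure(SemanticPubSub(), [
    ("ex:a", "rdf:type", "ex:Sensor"),
    ("ex:a", "ex:hasValue", "42"),
])
```

In a real benchmark run the publish and notify sides would be separate processes, so clock handling and asynchrony add complexity that this synchronous sketch deliberately omits.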
2 SEP-BM Benchmark Framework
Figure 1 presents the concept of the novel benchmark framework, which consists of a benchmark base configuration and a benchmark runner. The benchmark base configuration comprises data sets based on an ontology and 20 query definitions for subscriptions to test performance measures: the configuration generator provides sequences of publication operations (i.e., scenarios); the reference data generator provides traceability information regarding publication/notification relationships for