A QoI-aware Framework for Adaptive Monitoring

Bao Le Duc*, Philippe Collet‡, Jacques Malenfant† and Nicolas Rivierre*
*Orange Labs, Issy les Moulineaux, France
Email: {bao.leduc, nicolas.rivierre}@orange-ftgroup.com
‡Université de Nice Sophia Antipolis, CNRS, UMR 6070 I3S, Sophia Antipolis, France
Email: Philippe.Collet@unice.fr
†Université Pierre et Marie Curie-Paris 6, CNRS, UMR 7606 LIP6, Paris, France
Email: Jacques.Malenfant@lip6.fr

Abstract—Monitoring application services is increasingly a transverse key activity in information systems. Beyond traditional system administration and load control, new activities such as autonomic management and decision-making systems raise the stakes over monitoring requirements. In this paper, we present ADAMO, an adaptive monitoring framework that tackles different quality of information (QoI)-aware data queries over dynamic data streams and transforms them into probe configuration settings under resource constraints. The framework relies on a constraint-solving approach as well as on a component-based approach in order to provide static and dynamic mechanisms with flexible data access for multiple clients with different QoI needs, as well as generation and configuration of QoS and QoI handling components. The monitoring framework also adapts to resource constraints.

Keywords—Monitoring, Adaptive systems, Quality of information, Component framework

I. INTRODUCTION

As distributed and pervasive systems are now deployed everywhere with 24/7 availability constraints, monitoring is increasingly becoming a transverse key activity in enterprise computing. Beyond traditional system administration and load control, new activities increasingly require automated management of the systems, raising the stakes over monitoring requirements.
Specific tasks such as scheduling, resource allocation and problem diagnosis make their decisions upon the online and continuous monitoring of services, systems and infrastructures. Besides, autonomic management and decision-making systems are now organized around Service Level Agreements referring to Quality of Service (QoS) criteria. As large QoS variations are easily observable by clients when calling distant applications and services, there is also a large variation in monitoring requirements, in terms of the types of monitoring data to be acquired, their lifespan, precision and granularity. This is generally referred to as Quality of Information (QoI), i.e., an expression of the properties required from the monitored QoS data [1].

Moreover, deployment contexts have evolved in size and complexity, from intra-enterprise Service-Oriented Architecture (SOA) principles with low-latency networks to large-scale inter-enterprise infrastructures with high latency, and finally to pervasive systems with dynamic contexts. Monitoring a distributed system involves extracting information from the deployed processes and their interactions, collecting it efficiently and making it available to interested users in an appropriate format. The distributed context makes the monitoring activity inherently more complex than the traditional centralized one, as it forces the monitoring system to handle several control flows, communication delays between nodes, nondeterministic event ordering and an extensive behavioral alteration of the observed system [2].

These challenges are hardly addressed by current monitoring systems. In a SOA context, prior works show that behavioral and basic QoS constraints can be expressed and monitored at runtime [3], [4], but with no QoI or only some implicit ones like statistics on QoS [5].
A monitoring system must currently provide several information flows to multiple clients, with different QoI requests, everything being dynamically reconfigurable. Finally, the monitoring system, being constantly operational, is itself subject to constraints on the resources it consumes to provide its services. Consequently, designing and deploying monitoring systems that are well adapted to such requirements has become a complex and tedious activity for software architects and system administrators. Automation of this process is clearly needed. Recent works focus on QoI and adaptive monitoring for context-aware computing, data stream processing or transactional systems [6], [7], [8], but no monitoring system is currently adapted to all (changing) requirements together.

In this paper, we present ADAMO, an adaptive monitoring framework that tackles different QoI-aware data queries over dynamic data streams and transforms them into probe configuration settings under resource constraints. This process relies on a constraint-solving approach. The framework also factors out the common structure and behavior of monitoring systems so that they can be reused and extended. To do so, it leverages component-based techniques so that a common base architecture is provided as an assembly of interacting components. Different parts of the architecture are then configurable, or can be partly generated from high-level descriptions of the monitoring requirements. ADAMO thus aims at providing solutions for i) flexible access to dynamic

ADAPTIVE 2010 : The Second International Conference on Adaptive and Self-Adaptive Systems and Applications
Copyright (c) IARIA, 2010    ISBN: 978-1-61208-109-0
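To make the constraint-solving idea concrete, the toy sketch below is our own illustration, not ADAMO's actual solver: all names (`solve_probe_period`, the freshness and budget parameters) are hypothetical. It shows, under simple assumptions, how per-client QoI requirements (here, tolerated data staleness) can be reduced to a probe configuration setting (a sampling period) while respecting a resource budget.

```python
# Hypothetical illustration of QoI-to-probe-configuration mapping
# (not ADAMO's actual algorithm): pick one probe sampling period that
# satisfies every client's freshness requirement within a resource budget.

def solve_probe_period(freshness_reqs_s, cost_per_sample, budget_per_s):
    """Return a sampling period (seconds) meeting all freshness
    requirements, or None if the resource budget makes that impossible.

    freshness_reqs_s: max tolerated data age per client, in seconds.
    cost_per_sample:  resource units consumed by one probe invocation.
    budget_per_s:     resource units the probe may consume per second.
    """
    # The strictest client dictates the largest admissible period.
    period = min(freshness_reqs_s)
    # Sampling every `period` seconds costs cost_per_sample / period
    # resource units per second on average.
    if cost_per_sample / period > budget_per_s:
        return None  # no single period satisfies both the QoI and the budget
    return period

# Two clients tolerate 5 s and 2 s of staleness; each sample costs 1 unit
# and the probe may spend at most 1 unit per second.
print(solve_probe_period([5.0, 2.0], cost_per_sample=1.0, budget_per_s=1.0))
```

A real solver would juggle many probes, several QoI dimensions (precision, granularity, lifespan) and shared resource pools, which is precisely where a general constraint-solving approach pays off over such ad hoc arithmetic.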