From Assessment to Standardised Benchmarking: Will it Happen? What Could We Do About It?

Henrique Madeira 1 and Istvan Majzik 2
1 CISUC, University of Coimbra, Portugal
2 Budapest University of Technology and Economics, Hungary
henrique@dei.uc.pt, majzik@mit.bme.hu

Abstract

This summary gives a brief overview of the panel on dependability and security assessment and benchmarking, organized in the frame of the 39th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), Performance and Dependability Symposium (PDS). The panel aims to present and exchange ideas, solicit input from the audience, and encourage debate in order to answer the question: can we really expect to grow from dependability and security assessment to standardized benchmarking? The panelists are asked to address key problems related to (1) needs, drivers, and challenges, (2) recent developments, (3) future research directions, and (4) the need for training and standardization.

1. Introduction

Cost pressure, short time to market, and increased complexity are responsible for an increase in the failure rate of computing systems, while the cost of failures is growing rapidly as a result of an unprecedented degree of dependence of our society on computing systems. The combination of these factors has created a dependability and security gap that is often perceived by users as a lack of trustworthiness in computer applications, and that is in fact undermining the network and service infrastructures that constitute the very core of the knowledge-based society. Having effective and accurate methods and tools to assess dependability and security is essential to understand the current risks of network and service infrastructures and to contribute to improving the current situation. However, although considerable efforts have been made, assessing dependability and security is still a difficult problem.
The quality of measurements, the assessment of dependability in component-based, dynamic, and adaptive systems and networks, and the integration with the development process are among the evident challenges. The problem is even harder when dependability has to be assessed in a standard and comparable way, and when all major classes of threats are considered, namely accidental faults (component failures, software bugs, human mistakes, interaction mistakes) as well as malicious attacks.

Dependability benchmarking aims at providing generic, repeatable, and widely accepted methods for characterizing and quantifying the behavior of a system (or component) in the presence of faults, and for comparing the dependability of alternative solutions. Benchmarking approaches seem promising not only because they could have a catalyzing effect on improving the dependability and security of computer systems (in the same way performance benchmarks have contributed to a dramatic boost in computer performance), but also because the capacity to measure resilience in a consistent and comparable way is very much needed to give credibility to the whole dependability assessment discipline.

There are, however, several problems and challenges related to benchmarking, for example subdividing the benchmark application domains, constructing proper benchmarks (without unintended negative effects) that are robust yet easy to use, and creating acceptance and avoiding misuse through training and standardization. Thus it is time to ask the questions: can we really expect to grow from dependability and security assessment to standardized benchmarking? Will it happen? What could we do about it?

The panel is meant to foster lively debate on the main reasons why dependability and security assessment is still so difficult to attain in practice, even for relatively simple systems. Why does dependability benchmarking still seem a distant promise?
Is there stagnation in this research area? Or are there exciting recent developments? What future research directions