Towards Software Architecture-based Regression Testing

Henry Muccini
Dipartimento di Informatica, University of L'Aquila, Italy
muccini@di.univaq.it

Marcio S. Dias
Department of Computer Science, University of Durham, UK
marcio.dias@dur.ac.uk

Debra J. Richardson
Department of Informatics, Donald Bren School of Information and Computer Sciences, University of California, Irvine, USA
djr@ics.uci.edu

ABSTRACT
When architecting dependable systems, in addition to improving system dependability by means of construction (fault-tolerant and redundant mechanisms, for instance), it is also important to evaluate, and thereby confirm, system dependability. There are many different approaches for evaluating system dependability, and testing has always been an important one. Previous work on software architecture testing has shown it is possible to apply conformance-testing techniques to yield confidence that the behavior of an implemented system conforms to the expected behavior of the software architecture, specified with Architecture Description Languages. In this work, we explore how regression testing can be systematically applied at the software architecture level in order to reduce the cost of retesting modified systems, and also to assess the regression testability of the evolved system. We consider assessing both "top-down" and "bottom-up" evolution, i.e., whether a slightly modified implementation conforms to the initial architecture, and whether the (modified) implementation conforms to an evolved architecture. A better understanding of how regression testing can be applied at the software architecture level will help us to assess and identify architectures with higher dependability.

Categories and Subject Descriptors
D.2.5 [Software Engineering]: Testing and Debugging - Testing tools, Monitors, Tracing.
D.2.11 [Software Engineering]: Software Architectures - Languages, Data abstraction.
Keywords
Software Architecture (SA), Dependable Systems, Regression Testing (RT), Architecture-based Testing and Analysis.

1. INTRODUCTION
A Software Architecture (SA) [7] specification captures system structure (i.e., the architectural topology), by identifying architectural components and connectors, and required system behavior, designed to meet the system requirements, by specifying how components and connectors are intended to interact. Software architectures can serve as useful high-level "blueprints" to guide the production of lower-level system designs and implementations, and later on for guidance in maintenance and reuse activities. Moreover, SA-based analysis methods provide several value-added benefits, such as system deadlock detection, performance analysis, component validation, and much more [6]. Additionally, SA-based testing methods are available to check conformance of the implementation's behavior with SA-level specifications of expected behavior [5] and to guide integration testing [6, 17].

Reaping these architectural benefits, however, does not come for free. To the contrary, experience indicates that dealing with software architectures is often expensive, perhaps even too expensive in some cases to justify the benefits obtained. For example, consider the phenomenon of "architectural drift" [20]. It is not uncommon during evolution that only the low-level design and implementation are changed to meet tight deadlines, while the architecture is not updated to track the changes being made to the implementation. Once the architecture "drifts" out of conformance with the implementation, many of the aforementioned benefits are lost: previous analysis results cannot be extended or reused, and the effort spent on the previous architecture is wasted.
Moreover, even when implementation and architecture are kept aligned, SA-based analysis methods often need to be re-run completely from the beginning, at considerable cost, whenever the system architecture or its implementation change.

SARTE (Software Architecture-based Regression TEsting) is a collaborative project among three universities focused on providing a framework and approach for SA-based testing in the context of evolution, when both architecture and implementation are subject to change. The topic of architecture-based testing has been extensively analyzed by one of the authors in [17], where a general framework for software architecture-based conformance testing has been proposed. A software architecture-based testing and analysis toolset (Argus-I) was developed by two of the authors, as described in [5]. SARTE builds upon the research and development in both previous projects.

In this context, this paper shows how SA-based regression testing provides a key solution to the problem of retesting an SA after its evolution. In particular, after identifying SA-level behavioral test cases and testing conformance of the code with respect to the expected architectural behaviors [17], we show what should be tested when the code and/or architecture is modified, and how testing information previously collected may be reused to test the conformance of the revised implementation with respect to either the initial or revised architecture. We describe, in general terms, i) how implementation-level test cases may be reused to test the conformance of modified code with respect to the architectural specification, and ii) how to reuse architecture-level test cases when the architecture evolves. Our approach relies on reusing and modifying existing code-level regression testing (RT) techniques.
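To make the idea of reusing previously collected testing information concrete, the following is a minimal sketch, not taken from the paper, of the kind of selective regression testing that code-level RT techniques perform and that the approach lifts to the architecture level: each test case records which architectural components its execution traverses, and after an evolution step only the tests whose coverage intersects the set of modified components are rerun. The component names and test identifiers are hypothetical.

```python
# Illustrative sketch (assumed, not the paper's algorithm): selective
# regression test reuse driven by architecture-level change information.

def select_retests(test_coverage, modified_components):
    """Return the test cases whose recorded component coverage
    intersects the set of architectural components that changed."""
    return sorted(
        name
        for name, covered in test_coverage.items()
        if covered & modified_components  # set intersection is non-empty
    )

# Hypothetical architecture with three components and three test cases.
coverage = {
    "t1_login":   {"Client", "Server"},
    "t2_audit":   {"Server", "Logger"},
    "t3_startup": {"Client"},
}

# Suppose only the Logger component was modified during evolution:
# only the tests that exercise Logger need to be rerun.
print(select_retests(coverage, {"Logger"}))   # ['t2_audit']
print(select_retests(coverage, {"Client"}))   # ['t1_login', 't3_startup']
```

In a real SA-based setting the coverage sets would be derived from architectural traces rather than hand-written, but the selection principle is the same: unchanged parts of the architecture need not trigger retesting.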
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. WADS'05, May 17, 2005, St. Louis, MO, USA. Copyright 2005 ACM 1-59593-124-4/05/0005...$5.00.