Prioritizing Requirements-Based Regression Test Cases: A Goal-Driven Practice

Mazeiar Salehie, Sen Li, Ladan Tahvildari
Software Technologies Applied Research (STAR) Lab
University of Waterloo, Waterloo, Canada
{msalehie,s35li,ltahvild}@uwaterloo.ca

Rozita Dara, Shimin Li, Mark Moore
BIS-E Software Verification & Validation Department
Research In Motion (RIM), Waterloo, Canada
{rdara,shili,mamoore2}@rim.com

Abstract—Changes made for maintenance or evolution purposes may break existing working features, or may violate the requirements established in previous software releases. Regression testing is essential to avoid these problems, but it may end up requiring the execution of many time-consuming test cases. This paper addresses the prioritization of requirements-based regression test cases. To this end, system-level testing is examined with respect to two practical issues in industrial environments: i) addressing multiple goals regarding quality, cost, and effort in a project, and ii) using non-code metrics when detailed code metrics are unavailable. This paper reports a goal-driven practice at Research In Motion (RIM) towards prioritizing requirements-based test cases with respect to these issues. The Goal-Question-Metric (GQM) approach is adopted to identify metrics for prioritization. Two sample goals are discussed to demonstrate the approach: detecting bugs earlier and maintaining testing effort. We use two releases of a prototype Web-based email client to conduct a set of experiments based on these two goals. Finally, we discuss lessons learned from applying the goal-driven approach and the experiments, and we propose a few directions for future research.

Keywords—Test case prioritization; Software regression testing; Requirements-based test cases; Goal-driven approaches

I. INTRODUCTION

Regression testing is performed to avoid unwanted side-effects of changes in a new release. Onoma et al.
list several scenarios where regression testing is essential [1], such as developing a product family, maintaining large programs over a long period of time, and evolving a rapidly-changing product. This paper focuses on system-level requirements-based testing and tries to address some challenges related to regression testing in this phase. This research work is the result of a collaboration between the University of Waterloo STAR Lab and the RIM Software V&V group for BlackBerry Internet services.

Two practical issues in test case prioritization (TCP) for regression testing motivated this research. First, TCP for regression features and requirements can be performed to achieve different goals, such as reducing effort and maintaining the current product quality. Although some researchers have noted the key role of goals (e.g., [2]), goal-driven TCP has not been systematically addressed yet. We use GQM to identify non-code metrics that can help us achieve regression testing goals. Second, in some practical situations, code-based metrics (e.g., the code coverage of each individual test case) are not available or are costly to collect. One reason is that functional test cases are not always fully automated, due to time constraints or other management issues. Also, test cases may be run on shared distributed environments by a number of testers simultaneously, and collecting code coverage data for each test case may not be straightforward.

We target two research questions: RQ1: How can goals direct us to identify appropriate metrics for prioritization in requirements-based regression testing? RQ2: How can non-code information help us prioritize test cases? We conduct a set of experiments on an email client to practice GQM in identifying non-code metrics for TCP, and to investigate the usefulness of these metrics.

II. RELATED WORK

Several test suite optimization approaches have been discussed for improving regression testing.
These approaches generally include [3]: i) selection, ii) minimization, and iii) prioritization. Existing research efforts indicate that prioritization is the safest approach in terms of the number of bugs that escape the testing process. If a test adequacy criterion can be defined and evaluated during the testing process, testing can be stopped after executing a subset of the test cases, converting prioritization into selection.

Goals in regression testing can be of different kinds. For example, Rothermel et al. noted revealing bugs, or high-risk bugs, earlier as potential goals in testing [2]. Other goals related to cost and effort can also be articulated in a project. Although regression testing has drawn a considerable amount of attention from researchers and practitioners, to the best of our knowledge, only a few works discuss selection techniques for regression testing based on non-code metrics. For example, Chen et al. discuss a selection technique based on modifications to a system's activity diagram, a notation of the UML [4]. The ideas proposed in these papers are interesting and innovative, but all of them are selection techniques and are not requirements-based. To the best of our knowledge, prioritization techniques for requirements-based regression test cases have not been extensively addressed yet.
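To make the two ideas above concrete, the following is a minimal sketch (not from the paper) of prioritizing requirements-based test cases by non-code metrics, and of converting that prioritization into selection by stopping at a test adequacy criterion. The metric names, weights, and the coverage-based criterion are hypothetical illustrations, not the actual GQM metrics used in this work.

```python
# Illustrative sketch only: metric names, weights, and the adequacy
# criterion are hypothetical examples, not the paper's GQM metrics.

def prioritize(test_cases, weights):
    """Order test cases by a weighted sum of non-code metric values,
    highest score first (test case prioritization)."""
    score = lambda tc: sum(w * tc["metrics"][m] for m, w in weights.items())
    return sorted(test_cases, key=score, reverse=True)

def select_until_adequate(ordered, all_requirements, target=1.0):
    """Walk the prioritized order and stop once the fraction of
    requirements covered reaches the adequacy target, turning
    prioritization into selection."""
    covered, chosen = set(), []
    for tc in ordered:
        chosen.append(tc["id"])
        covered |= set(tc["requirements"])
        if len(covered) / len(all_requirements) >= target:
            break
    return chosen

# Hypothetical test cases: each maps to requirements and carries
# normalized non-code metrics (requirement priority, fault history).
suite = [
    {"id": "TC1", "requirements": ["R1", "R2"],
     "metrics": {"req_priority": 0.9, "fault_history": 0.2}},
    {"id": "TC2", "requirements": ["R2", "R3"],
     "metrics": {"req_priority": 0.4, "fault_history": 0.9}},
    {"id": "TC3", "requirements": ["R4"],
     "metrics": {"req_priority": 0.7, "fault_history": 0.4}},
    {"id": "TC4", "requirements": ["R1"],
     "metrics": {"req_priority": 0.3, "fault_history": 0.1}},
]
weights = {"req_priority": 0.6, "fault_history": 0.4}

ordered = prioritize(suite, weights)
print([tc["id"] for tc in ordered])            # priority order
print(select_until_adequate(ordered, ["R1", "R2", "R3", "R4"]))
```

In this toy run, the lowest-scoring test case adds no new requirement coverage and is dropped once full coverage is reached, illustrating why a well-chosen adequacy criterion lets a prioritized suite double as a selected one.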