International Journal of Software Engineering & Applications (IJSEA), Vol.14, No.4, July 2023
DOI: 10.5121/ijsea.2023.14401

TRANSFORMING SOFTWARE REQUIREMENTS INTO TEST CASES VIA MODEL TRANSFORMATION

Nader Kesserwan 1, Jameela Al-Jaroodi 1, Nader Mohamed 2 and Imad Jawhar 3

1 Department of Engineering, Robert Morris University, Pittsburgh, USA
2 Department of Computing and Engineering Technology, Pennsylvania Western University, California, Pennsylvania, USA
3 Faculty of Engineering, AlMaaref University, Beirut, Lebanon

ABSTRACT

Executable test cases originate at the onset of testing as abstract requirements that represent system behavior. Their manual development is time-consuming, error-prone, and expensive. Translating system requirements into behavioral models and then transforming those models into a scripting language can automate their conversion into executable tests. Ideally, an effective testing process should start as early as possible, refine the use cases with ample detail, and facilitate the creation of test cases. We propose a methodology that automates the conversion of functional requirements into executable test cases via model transformation. The proposed testing process starts by capturing system behavior in the form of visual use cases using a domain-specific language, then defines transformation rules, and ultimately transforms the use cases into executable tests.

KEYWORDS

Model-Driven Testing, Transformation Rules, Model Transformation, TDL, UCM & TTCN-3

1. INTRODUCTION

The complexity of software development continues to rise, driving a surge in the need for software verification. An inappropriate testing methodology can undermine system safety. This is especially true in the avionics industry, which has seen a significant increase in safety-critical software for both military and civilian use.
One of the key challenges faced by testing engineers is time constraints, which often limit the opportunity for detailed analysis. During the software development phase, the manual generation of test artifacts remains a significant cost driver, accounting for over 50% of the total development effort [1]. Automating the testing process can enhance software quality and ensure the reliability of test results, thereby reducing liability costs and human effort.

In software engineering, there is a growing trend of using scenarios to gather, document, and validate requirements [2]. Scenarios, which depict a sequence of behavior-related actions, can tame an application's complexity, facilitating better comprehension and prioritization of the required behavior. System requirements, both functional and operational, are encapsulated in these scenarios and leveraged to define test cases (TCs). A scenario offers insight into how a system's behavior is realized in practice. Since the collective scenarios of a system embody its behavior and functional domain, segmenting the system into test scenarios supports statement and decision coverage. When use cases are employed to model requirements, a scenario follows a specific route through the model to instantiate a use case [3], [4]. As a result, when a use case is chosen for testing, all of its