International Journal of Computer Applications (0975 – 8887)
Volume 183 – No. 29, October 2021

Application of T-SEC to Measure the Performance of Static Analyzers and Penetration Testing Approaches

Akwasi Amponsah, Mampong Technical College of Education, Asante Mampong, Ghana
Richard Amankwah, Accra Institute of Technology, Accra North, Ghana
Daniel Paa Korsah, Komenda College of Education, Komenda, Ghana

ABSTRACT
Software vulnerability analysis is central to investigating the existence of bugs (referred to as vulnerabilities) in software applications. Several empirical approaches, such as static code analyzers (SCA) and penetration testing techniques such as web vulnerability scanners (WVS), have been proposed to aid the analysis of vulnerabilities in web applications. Although numerous SCA and penetration testing tools (both open-source and commercial) have been proposed in the literature, their performance varies, making vendors skeptical as to which tool is best suited to detecting a particular type of vulnerability with high precision and recall, a low false positive rate, and a high detection rate. In this study, we apply the standard evaluation criteria (T-SEC), namely precision and recall, the Youden index, the OWASP Web Benchmark Evaluation (WBE) and the Web Application Security Scanner Evaluation Criteria (WASSEC), to measure the performance of the aforementioned approaches using the Damn Vulnerable Web Application (DVWA) and reports extracted from the Juliet Test Suite.

General Terms
Software Privacy, Information Security, Software Analysis

Keywords
Open-source scanner, Vulnerability detection, Vulnerability scanner, Damn Vulnerable Web Application

1. INTRODUCTION
Security vulnerabilities are uncovered on a regular basis in modern systems such as networks, application software and, most importantly, web applications.
Currently, web applications have become the main attack target for hackers because of the enormous benefits they offer. The National Vulnerability Database (NVD) [1], which is managed by the National Institute of Standards and Technology (NIST), shows that vulnerabilities such as SQL Injection, File Inclusion and Cross-Site Scripting (XSS) continue to increase at an astronomical rate every year in web applications [2]. This is because most deployed web applications are not entirely devoid of vulnerabilities. When exploited by attackers, these vulnerabilities typically cause data breaches and have serious security implications. To address this challenge, vulnerability analysis techniques such as manual code inspection, static code analyzers (SCA) and penetration testing approaches have been proposed as better alternatives for improving the quality and efficiency of the manual procedures used in previous studies for bug detection. Unfortunately, the traditional method, which involves manual examination of numerous lines of code, is often difficult, unproductive and produces a high rate of false positives. Current techniques based on automated SCA and WVS also show varied efficiency and detection capabilities, as reported by Antunes and Vieira [3] and Makino and Kleve [4], making it difficult to select the appropriate tool for vulnerability detection.
Consequently, this study presents an application of the standard evaluation criteria (T-SEC), namely precision and recall, the Youden index, the OWASP Web Benchmark Evaluation (WBE) and the Web Application Security Scanner Evaluation Criteria (WASSEC), to measure the performance of static code analyzers and penetration testing approaches using the Damn Vulnerable Web Application (DVWA) and vulnerability reports from the Juliet Test Suite. The key idea of this study is to apply the standard evaluation criteria (T-SEC):

- To evaluate the performance of eight WVS, namely Acunetix, HP WebInspect, IBM AppScan, OWASP ZAP, Skipfish, Arachni, Vega and IronWASP, in identifying security vulnerabilities in a web service environment using the DVWA.
- To evaluate the effectiveness of seven widely used SCA, namely FindBugs, PMD, LAPSE+, JLint, Bandera, ESC/Java and YASCA, using Juliet Test Suite v1.2 test cases.
- To suggest possible measures that can be used to improve SCA and WVS.

The remainder of the paper is organized as follows: Section 2 presents the standard evaluation criteria used to measure the performance of the tools. Section 3 discusses the methodology and experimental setup for the study. Section 4 presents the evaluation of the SCA and WVS tools. Section 5 presents the conclusion and future directions in this domain of study.

2. THE STANDARD EVALUATION CRITERIA (T-SEC)
We evaluated the performance of the tools using the standard evaluation metrics: precision and recall, the Youden index, the OWASP Web Benchmark Evaluation (WBE) and the Web Application Security Scanner Evaluation Criteria (WASSEC), following a procedure similar to that in [1].

2.1 Precision and Recall
Precision [5], also known as positive predictive value, is the percentage of correctly detected bugs out of all detected bugs (i.e., the proportion of bugs reported by the tool that are actually real bugs). Eq. 1 shows how it is calculated.
A precision value of 100% represents perfect detection accuracy: every bug the tool reports is a real bug.

Precision = TP / (TP + FP)   (Eq. 1)

Recall [6] is the percentage of correctly detected bugs out of the number of known bugs (i.e., the proportion of bugs the tool was supposed to detect that it actually did detect; the bugs it missed are false negatives). Eq. 2 shows the formula for recall.

Recall = TP / (TP + FN)   (Eq. 2)
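Eq. 1 and Eq. 2 can be sketched as two short helper functions, assuming raw true-positive (TP), false-positive (FP) and false-negative (FN) counts taken from a scanner's report; the function names and the example counts below are illustrative, not figures from this study:

```python
def precision(tp: int, fp: int) -> float:
    # Eq. 1: fraction of reported bugs that are real bugs
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Eq. 2: fraction of known bugs the tool actually detected
    return tp / (tp + fn)

# Illustrative counts for one hypothetical scanner run:
# 40 real bugs found, 10 false alarms, 20 known bugs missed
tp, fp, fn = 40, 10, 20
print(f"precision = {precision(tp, fp):.2f}")  # 40 / 50 = 0.80
print(f"recall    = {recall(tp, fn):.2f}")     # 40 / 60 ≈ 0.67
```

A tool can score highly on one metric and poorly on the other, which is why the study reports both rather than detection counts alone.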