Automatic Testing of Program Security Vulnerabilities
Hossain Shahriar and Mohammad Zulkernine
School of Computing
Queen’s University, Kingston, Canada
{shahriar, mzulker}@cs.queensu.ca
Abstract— Vulnerabilities in applications and their widespread
exploitation through successful attacks are common these days.
Testing applications for preventing vulnerabilities is an important
step to address this issue. In recent years, a number of security
testing approaches have been proposed. However, there is no
comparative study of these works that might help security
practitioners select an appropriate approach for their needs.
Moreover, there is no comparison with respect to the automation
capabilities of these approaches. In this work, we identify seven
criteria to analyze program security testing work. These are
vulnerability coverage, source of test cases, test generation
method, level of testing, granularity of test cases, testing
automation, and target applications. We compare and contrast
prominent security testing approaches available in the literature
based on these criteria. In particular, we focus on works that
address the four most common and dangerous vulnerabilities, namely
buffer overflow, SQL injection, format string bug, and cross site
scripting. Moreover, we investigate the automation features available
in these works across the security testing process. We believe that our
findings will provide practical information for security
practitioners in choosing the most appropriate tools.
Keywords: Security testing, Vulnerabilities, Buffer overflow, SQL
injection, Format string bug, Cross site scripting.
I. INTRODUCTION
Today’s applications (or programs) are complex in nature
and accessible to almost everyone. These programs are
developed using implementation languages (e.g., ANSI C),
library functions (e.g., the ANSI C library, Java API), and
processors (e.g., SQL query engines, HTML parsers, and
JavaScript engines) that often suffer from inherent vulnerabilities such as
buffer overflow [4], SQL Injection [5], format string bug [6],
and cross site scripting (XSS) [7]. Moreover, these
applications are not always used by legitimate users in a
legitimate manner. As a result, exploitations of these known
vulnerabilities through successful attacks are very common.
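To make the vulnerability classes concrete, the following sketch (not drawn from any of the surveyed tools; the table schema and function names are hypothetical) shows how a classic tautology-based SQL injection test case changes the meaning of an unsanitized query, and how a parameterized query neutralizes the same input:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated directly into the SQL text,
    # so a crafted input can alter the query's structure.
    query = "SELECT name FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s1"), ("bob", "s2")])

payload = "' OR '1'='1"                  # tautology-based SQLI test input
print(find_user_unsafe(conn, payload))   # matches every row
print(find_user_safe(conn, payload))     # matches no row
```

Security testing tools for SQLI essentially automate the generation and execution of inputs like `payload` above and check whether the application's response reveals that the query structure was altered.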
The practice of developing secure applications has been
established for more than a decade. Several complementary
techniques have been developed
to detect and prevent vulnerabilities. These include static
analysis tools [26-30] to identify vulnerable code, combined
static analysis and runtime monitoring approaches [31, 32],
and automatic fixing of vulnerable code [33, 34]. Despite the
use of such techniques, we still find numerous exploitation
reports in different publicly available databases such as
Common Vulnerabilities and Exposures (CVE) [1] and the Open
Source Vulnerability Database (OSVDB) [2].
A practical approach to deal with this situation is to apply
appropriate application security testing techniques to prevent
vulnerabilities and attacks before deployment. In recent
years, many program security testing methods have been
proposed and applied in practice [10-25, 35-38]. Each work
is valuable from a certain perspective, such as automatic test
case generation, test case execution, or coverage of particular
vulnerabilities. However, there is no extensive comparative
study of these works that might guide testing practitioners in
choosing tools to perform the task of security testing.
Moreover, there is no comparative analysis in the current
literature with respect to test automation in security testing.
As a result, it is difficult to identify the costs incurred by
manual steps in the security testing process.
In this work, we identify seven criteria to analyze
program security testing techniques. These are vulnerability
coverage, source of test cases, test generation method, level
of testing, granularity of test cases, tool automation, and
target applications. We compare and contrast 20 program
security testing techniques based on these criteria. We
choose these works as they claim to be superior to other
contemporary tools in terms of both detecting vulnerabilities
effectively and identifying previously unknown
vulnerabilities. We focus on security testing works that
address four widely known vulnerabilities, which are buffer
overflow (BOF) [4], SQL injection (SQLI) [5], format string
bug (FSB) [6], and cross site scripting (XSS) [7]. These are
the worst vulnerabilities found in today's applications [3].
Moreover, we perform a comparative analysis of testing
automation supported by these works with respect to three
identified criteria: test case generation, oracle generation,
and test case execution. Our initial findings indicate that
most of the available tools are geared towards web-based
vulnerabilities such as SQLI and XSS [13-16, 18-24, 36, 38].
While some tools provide testing of BOF, their automation
support is poor [10, 11, 12, 17, 35]. Moreover, very few
works test FSB vulnerabilities [25, 37].
The paper is organized as follows: Section II provides
background information on the four major vulnerabilities
and the security testing process. Section III discusses the seven
criteria for classifying the existing security testing
works and categorizes the works based on these criteria. In Section
IV, we compare the automation aspects of the different testing works.
Finally, Section V draws conclusions and discusses
current open issues.
2009 33rd Annual IEEE International Computer Software and Applications Conference
0730-3157/09 $25.00 © 2009 IEEE
DOI 10.1109/COMPSAC.2009.191