Complexity Estimation Approach for Debugging in Parallel
Maneesha Srivastav
Department of Computer Science and Engineering and
Information Technology
Jaypee Institute of Information Technology University
Noida, India
ek.maneesha@gmail.com
Yogesh Singh
University School of Information Technology
Guru Gobind Singh Indraprastha University
Kashmere Gate, Delhi, India
ys66@rediffmail.com
Chetna Gupta
Department of Computer Science and Engineering and
Information Technology
Jaypee Institute of Information Technology University
Noida, India
chetnagupta04@gmail.com
Durg Singh Chauhan
Uttarakhand Technical University
Dehradun, India
pdschauhan@gmail.com
Abstract— Multiple faults in software often prevent debuggers from efficiently localizing a fault. This is mainly because the exact number of faults in a failing program is not known, as some of the faults become obfuscated. Many techniques have been proposed to isolate different faults in a program, thereby creating separate sets of failing program statements. To divide these statements evenly among debuggers, we must know the level of work required to debug each slice. In this paper we propose a new technique to calculate the complexity of faulty program slices so that work can be distributed efficiently among debuggers for simultaneous debugging. The technique calculates the complexity of an entire slice by taking into account the suspiciousness of every faulty statement. To establish confidence in the effectiveness and efficiency of the proposed technique, we illustrate the whole idea with the help of an example. Results of the analysis indicate that the technique will (a) help distribute work efficiently among debuggers, (b) allow simultaneous debugging of different faulty program slices, and (c) help minimize time and manual labor.
Keywords-debugging; fault localization; software testing
I. INTRODUCTION
Debugging is the most expensive and time-consuming process for software developers. The cost of debugging is measured mostly by two parameters: (a) the manual labor and (b) the time required to discover and correct bugs to produce a failure-free program. Among all debugging activities, fault localization is one of the most expensive [1].
When software fails, it is usually due to more than one cause, and at the time of failure debuggers are not aware of how many causes the failure might have. Thus, a one-bug-at-a-time debugging approach is usually carried out sequentially: a fault is located and then fixed. In this approach the debugger might utilize data from failed test cases and apply a fault localization technique that targets one bug at a time. After localizing and fixing the fault, the program is retested, which might lead to another failure, and the cycle is repeated until the program becomes failure free.
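The sequential cycle described above can be sketched as follows. This is a hypothetical illustration only: `run_tests`, `localize`, and `fix` are placeholder callables standing in for a test harness, a fault localization technique, and a manual repair step; they are not part of any technique cited in this paper.

```python
# Hypothetical sketch of one-bug-at-a-time sequential debugging.
# run_tests, localize, and fix are placeholders, not a real API.

def debug_sequentially(program, run_tests, localize, fix):
    """Repeat the localize-fix-retest cycle until no test fails."""
    failing = run_tests(program)
    while failing:
        suspect = localize(program, failing)  # target one bug at a time
        program = fix(program, suspect)       # the debugger repairs the fault
        failing = run_tests(program)          # retest; the cycle repeats
    return program
```

Because each iteration waits for the previous fix and retest to complete, the total time grows with the number of faults, which is the inefficiency that parallel debugging aims to remove.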
However, the presence of more than one bug creates the possibility of distributing the debugging task among more than one debugger. By distributing the debugging task we can save time and hence make the debugging activity less expensive. The authors of [2] presented a new mode of debugging that provides a way for multiple developers to simultaneously debug a program for multiple faults by automatically producing specialized test suites targeting individual faults. This technique has been termed parallel debugging.
This research aims to make the distribution of faulty program slices more optimal and efficient in terms of labor and time, which in turn makes the simultaneous debugging of different faulty program slices more efficient. We propose a simple and efficient technique to estimate the complexity of every faulty program slice, helping to minimize the time and manual labor and thereby reducing the cost of fault localization.
II. RELATED WORK
Much of the recent work in debugging has focused on fault localization, as it is one of the most expensive parts of debugging practice. There are various coverage-based fault localization techniques that aim to identify the executed program elements. Among them, some use coverage information provided by test suites to locate faults. Such techniques [3, 4, 5, 6] typically instrument and execute the program with the test suite to gather runtime information. Other fault localization techniques include χSlice [7], which collects coverage from a failed test run and a passed test run and reports the set of statements executed only in the failed run as the likely faulty statements, and Nearest Neighborhood (NN) [8], an extension of [7] that features an extra step of passed-test-run selection. Tarantula [9] defines a color scheme to measure correlation, i.e., it searches for those statements whose coverage has a relatively
Second International Conference on Computer Research and Development
978-0-7695-4043-6/10 $26.00 © 2010 IEEE
DOI 10.1109/ICCRD.2010.14