The Journal of Systems and Software 92 (2014) 157–169
Failure factors of small software projects at a global outsourcing
marketplace
Magne Jørgensen∗
Simula Research Laboratory and University of Oslo, P.O. Box 134, NO-1325 Lysaker, Norway
∗ Tel.: +47 924 333 55. E-mail address: magnej@simula.no
Article info
Article history:
Received 2 July 2013
Received in revised form 22 January 2014
Accepted 23 January 2014
Available online 3 February 2014
Keywords:
Outsourcing
Project failures
Risk management
Abstract
The presented study aims at a better understanding of when and why small-scale software projects
at a global outsourcing marketplace fail. The analysis is based on a data set of 785,325 projects/tasks
completed at vWorker.com. A binary logistic regression model relying solely on information known at
the time of a project’s start-up correctly predicted 74% of the project failures and 67% of the non-failures.
The model-predicted failure probability corresponded well with the actual frequencies of failures for
most levels of failure risk. The model suggests that the factors connected to the strongest reduction in
the risk of failure are related to previous collaboration between the client and the provider and a low
failure rate of previous projects completed by the provider. We found the characteristics of the client to
be almost as important as those of the provider in explaining project failures and that the risk of project
failure increased with greater client emphasis on low price and with larger project size. The
identified relationships appear reasonably stable across the studied project size categories.
© 2014 Elsevier Inc. All rights reserved.
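The prediction approach summarized in the abstract, a binary logistic regression over information known at project start-up, can be sketched as follows. The coefficients, intercept, and feature encoding below are hypothetical placeholders chosen only to mirror the reported directions of effect (prior collaboration lowers risk; a provider's poor track record and a client emphasis on low price raise it); they are not the study's fitted model.

```python
# Illustrative sketch of a binary logistic regression for project-failure
# prediction. All coefficients and feature encodings are hypothetical.
import math


def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))


def predict_failure_probability(coeffs, intercept, features):
    """P(failure) = sigmoid(b0 + sum(b_i * x_i))."""
    z = intercept + sum(b * x for b, x in zip(coeffs, features))
    return sigmoid(z)


# Hypothetical coefficients for three start-up features:
#   [prior client-provider collaboration (0/1),
#    provider's historical failure rate (0..1),
#    client emphasis on low price (0/1)]
coeffs = [-1.2, 2.5, 0.6]  # signs mirror the reported directions of effect
intercept = -1.0

# A low-risk project: prior collaboration, strong provider track record.
p = predict_failure_probability(coeffs, intercept, [1, 0.05, 0])
print(round(p, 3))  # → 0.112
```

A fitted model of this form yields a failure probability per project, which can then be compared against observed failure frequencies per risk level, as the abstract describes.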
1. Introduction
Considerable resources are spent on software projects that
fail to deliver useful functionality. For example, the proportion of
started and then cancelled projects, sometimes termed “aborted”
or “abandoned” projects, is reported to be 9% (Sauer et al., 2007),
11% (Tichy and Bascom, 2008), and 11.5% (El Emam and Koru, 2008).
Several non-peer reviewed reports claim a much higher proportion
of cancelled software projects, but may be less reliable or less rep-
resentative of the population of software projects. The frequently
cited Standish Group Chaos Report (1995), for example, claims that
as many as 31% of all software projects get cancelled. The low
reliability of that report is discussed in (Jørgensen and Moløkken-
Østvold, 2006; Eveleens and Verhoef, 2010). While the cancellation
rates described in the Standish Group Chaos Reports and similar
non-peer reviewed surveys are likely to be exaggerated, there is no
doubt that the proportion of cancelled projects is substantial.
The definition of a failed project in software surveys typically
includes both cancelled projects and projects completed with a very
poor product or process quality. Consequently, the reported failure
rates are higher than the corresponding cancellation rates.
Exactly how much higher depends on the failure criteria used. For
example, El Emam and Koru (2008) categorized a project as having
failed if it received a score of “poor” or “fair” in four out of five of
the following performance criteria: user satisfaction, ability to meet
budget targets, ability to meet schedule targets, product quality and
staff productivity. This definition led to a failure rate of more than
twice the cancellation rate for the same set of projects, i.e., a failure
rate of 26% for the data set reporting a cancellation rate of 11.5%.
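El Emam and Koru's categorization rule, as described above, can be sketched as a small predicate: a project is counted as failed if it scores "poor" or "fair" on at least four of the five performance criteria. The criterion names and score values below are paraphrases for illustration, not the survey's exact instrument.

```python
# Sketch of El Emam and Koru's (2008) failure criterion: failed if a
# project scores "poor" or "fair" on four (or more) of five criteria.
# Criterion names paraphrase those listed in the text.
CRITERIA = [
    "user_satisfaction",
    "budget_targets",
    "schedule_targets",
    "product_quality",
    "staff_productivity",
]


def is_failed(scores):
    """scores: dict mapping each criterion to e.g. 'poor'/'fair'/'good'."""
    low = sum(1 for c in CRITERIA if scores.get(c) in ("poor", "fair"))
    return low >= 4


project = {
    "user_satisfaction": "fair",
    "budget_targets": "poor",
    "schedule_targets": "fair",
    "product_quality": "poor",
    "staff_productivity": "good",
}
print(is_failed(project))  # → True (four of five criteria scored low)
```

Because this rule counts completed-but-poor projects as well as cancellations, it illustrates why the resulting failure rate (26%) exceeds the cancellation rate (11.5%) on the same data.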
Defining every project that does not deliver the specified product,
is over budget, or is not on time as a failure, as is the case in several
reports, typically amounts to 50–80% of all software projects being
failures. For an overview of software failure surveys see (Hashmi
and Stevrin, 2009).
The challenge of defining project failures meaningfully is fur-
ther illustrated in (Boehm, 2000), where Barry Boehm makes the
reasonable claim that not all cancellations should be considered to
be failures. There may, for example, be good reasons for cancelling
a well-managed project if the project’s original assumptions of use-
fulness are no longer valid. In that case, the failure would clearly be
to continue a project that is no longer needed instead of cancelling
it. A similar problem may occur when a project is interpreted as
a failure because it delivers something other than what was origi-
nally specified or expected. There are development processes, e.g.,
agile methods, in which requirements are meant to evolve as part of
the learning process and, clearly, it would be meaningless to define
the learning process leading to change in requirements as indicat-
ing a failure. It may also be important to separate a project failure
from a product failure, see for example (Baccarini, 1999). Finally,
there may be differences in the failure perspectives of different
project stakeholders, which also lead to different interpretations
of whether a project has failed or not (Agarwal and Rathod, 2006).
http://dx.doi.org/10.1016/j.jss.2014.01.034