F. McCaffery, R.V. O'Connor, and R. Messnarz (Eds.): EuroSPI 2013, CCIS 364, pp. 1–12, 2013.
© Springer-Verlag Berlin Heidelberg 2013
Making Software Safety Assessable and Transparent
Risto Nevalainen¹, Alejandra Ruiz², and Timo Varkoi³

¹ Spinet Oy, Finland
² Tecnalia, Spain
³ Finnish Software Measurement Association – FiSMA ry, Finland
risto.nevalainen@spinet.fi, alejandra.ruiz@tecnalia.com, timo.varkoi@fisma.fi
Abstract. Most formal assessment and evaluation techniques and standards assume that software can be analysed like any physical item. In safety-critical systems, software is an important component providing functionality, and it is often also the most difficult component to assess. A balanced use of process assessment and product evaluation methods is needed, because the lack of transparency in software must be compensated for by a more formal development process. A safety case is an effective approach to demonstrating safety, and both process and product evidence types are then necessary. Safety is also a likely candidate to be approached as a process quality characteristic. Here we present a tentative set of process quality attributes that support the achievement of the safety requirements of a software product.
Keywords: software process, process assessment, software safety.
1 Introduction
Critical systems are defined as those whose incidents or misbehaviour can lead to an accident that puts people or the environment in danger, resulting in injuries or casualties. Safety is considered a general property of the whole system, so its planning, development and implementation must follow strict rules in order to prevent failures of the system and their consequences and risks.
Software-based systems are increasingly important in safety. They replace old hard-wired and analog systems, and they also bring new technologies to safety applications. They are more standardized and functionality-rich than earlier generations, and we can even say that they are more reliable. At the very least, we can use diversity and redundancy more effectively, because digital systems are typically cheaper than the old analog ones.
But software also brings problems. The behaviour of software is deterministic (i.e. exactly predictable in principle) rather than probabilistic (i.e. characterized by likelihood), so its failures are systematic and cannot easily be quantified statistically. We have to compensate for these deficiencies somehow, for example by a formal and visible process and by extensive documentation. Still, some uncertainty remains, and the ultimate “zero defect” or “high reliability” goal is very difficult to achieve.
To some extent we can even challenge the current definitions of safety. For in-
stance, Leveson [1] states that: “Highly reliable software is not necessarily safe. In-
creasing software reliability will have only minimal impact on safety.” With control