ORIGINAL ARTICLE

Zeshan Kurd · Tim Kelly · Jim Austin

Developing artificial neural networks for safety critical systems

Received: 15 July 2003 / Accepted: 15 January 2005 / Published online: 30 March 2006
© Springer-Verlag London Limited 2006

Abstract There are many performance-based techniques that aim to improve the safety of neural networks for safety critical applications. However, many of these approaches provide inadequate forms of the safety assurance required for certification. As a result, neural networks are typically restricted to advisory roles in safety-related applications. Neural networks have the ability to operate in unpredictable and changing environments, and it is therefore desirable to certify them for highly dependable roles in safety critical systems. This paper outlines the safety criteria, which are safety requirements for the behaviour of neural networks. If enforced, the criteria can contribute to justifying the safety of ANN functional properties. Characteristics of potential neural network models are also outlined, based upon representing knowledge in interpretable and understandable forms. The paper also presents a safety lifecycle for artificial neural networks. This lifecycle focuses on managing the behaviour represented by neural networks and contributes to providing acceptable forms of safety assurance.

Keywords Safety critical · Neural network · Criteria · Lifecycle · Faults · Hazards · Symbolic knowledge

1 Introduction

Artificial neural networks (ANNs) are used in many safety-related applications within industry. Typical applications within the aerospace industry may involve the utilisation of ANNs in flight control systems [1]. Other applications within medicine may involve ANNs for the diagnosis of certain diseases [2]. A wide-ranging review of many applications of ANNs in safety-related industries can be found in a UK HSE report [3]. There are many reasons why industries find ANNs appealing.
Most of these reasons relate to the functional benefits offered by ANNs. These benefits may include:

- The ability to learn: This is useful for problems whose complete algorithmic specification cannot be determined at the initial stages of development, or where there is little understanding of the relationship between inputs and outputs. The neural network uses learning algorithms and training sets to learn the features associated with the desired function.
- Dealing with novel inputs: Providing generalisation to novel inputs using pre-learned samples for comparison.
- Operational performance: By exploiting its generalisation ability, the neural network can outperform other methods, particularly in areas of pattern recognition and function approximation.
- Computational efficiency: Neural networks are often faster and more memory-efficient than other methods.

Although neural networks are used in many safety-related applications, they share a common problem: ANNs are typically restricted to advisory roles. In other words, the ANN does not have the final decision in situations where there is risk of severe consequences. Current safety standards have extremely limited recommendations for using artificial intelligence in safety critical systems. One example is IEC 61508-7 [4], where neural networks may be used as 'safety bags'. The 'safety bag' is an external monitor that ensures the system does not enter an unsafe state. This may protect against residual specification and implementation faults which may adversely affect safety. However, the application of ANNs in safety critical systems is allowed only at the lowest level of safety integrity (SIL1).

Z. Kurd (✉) · T. Kelly · J. Austin
Department of Computer Science, University of York, York YO10 5DD, UK
E-mail: zeshan.kurd@cs.york.ac.uk
E-mail: tim.kelly@cs.york.ac.uk
E-mail: jim.austin@cs.york.ac.uk
Tel.: +44-1904-433388
Fax: +44-1904-432767

Neural Comput & Applic (2007) 16: 11–19
DOI 10.1007/s00521-006-0039-9
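The 'safety bag' pattern mentioned above can be sketched in a few lines. The following is a minimal, hypothetical Python illustration only: the envelope bounds, the fallback command, and the `ann_controller` function are invented for this example and are not taken from IEC 61508 or from the paper.

```python
# Hypothetical sketch of the 'safety bag' pattern: an external monitor
# checks every command proposed by an ANN controller against a
# conservatively defined safe envelope, and substitutes a known-safe
# fallback action whenever the proposal would leave that envelope.

SAFE_MIN, SAFE_MAX = 0.0, 100.0   # assumed safe envelope for the actuator command
FALLBACK_COMMAND = 0.0            # assumed known-safe default action

def ann_controller(sensor_value: float) -> float:
    """Stand-in for a trained ANN; its output is not trusted on its own."""
    return 2.5 * sensor_value - 10.0  # placeholder for the learned mapping

def safety_bag(sensor_value: float) -> float:
    """Independent monitor: vetoes any command outside the safe envelope."""
    proposed = ann_controller(sensor_value)
    if SAFE_MIN <= proposed <= SAFE_MAX:
        return proposed           # ANN command accepted
    return FALLBACK_COMMAND       # unsafe proposal vetoed

if __name__ == "__main__":
    print(safety_bag(10.0))   # in-envelope: ANN command passes through
    print(safety_bag(100.0))  # out-of-envelope: fallback is substituted
```

Note that the monitor itself must be simple enough to verify by conventional means; the ANN's complexity is kept out of the safety argument, which is precisely why the standard admits this arrangement only at the lowest integrity level.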