Appeared in: Innovation and Consolidation in Aviation, G. Edkins & P. Pfister (Eds.), Aldershot, UK: Ashgate, 2003, pp. 255-262. ISBN 0 7546 1999 0

Development of hazard analysis techniques for human-computer systems

Andrew Neal, Michael Humphreys, David Leadbetter and Peter Lindsay
The University of Queensland, Australia

Introduction

Human error is known to be responsible for approximately 80% of all system failures within industries such as aviation, power generation, and mining (Hollnagel, 1993). Many of these errors can be traced back to the design of the human-computer or human-machine system. For example, the London Ambulance Service installed a new computerised dispatch system in 1992, resulting in lengthy delays in the dispatch of ambulances to emergencies (Finkelstein & Dowell, 1996). A number of these errors were caused by a slow human-computer interface in which exception messages were not prioritised, queues scrolled off the screen with no means of retrieval, and duplicated calls were not identified.

To overcome these types of design problem, a range of techniques has been developed to analyse the potential for human error within safety-critical systems, and to examine the consequences of such errors for the system as a whole. It is instructive to compare the techniques used for analysing human error with the hazard analysis techniques used for the design and evaluation of hardware and software. International system safety standards – such as those in the defence, railway, and process industries – mandate or highly recommend formal (mathematical) modelling of safety-critical aspects of hardware and software functionality (Commonwealth of Australia, 1998; European Committee for Electrotechnical Standardization, 1995; International Electrotechnical Commission, 1997). Formal models are used for safety assurance of software and hardware systems because they are precise, systematic, reproducible and auditable.
By contrast, the techniques currently used for modelling and analysing the safety of human-computer interface (HCI) designs, and for estimating operator error rates, are informal. One of the most commonly used methods of safety analysis is Failure Modes and Effects Analysis (FMEA); two examples are SHERPA (Systematic Human Error Reduction and Prediction Approach) and THERP (Technique for Human Error Rate Prediction; Kirwan, 1994). In such an FMEA, the designer inspects components of the system and identifies possible human failure modes and their potential effects, using a “checklist” of common human failure modes (Hussey, 1998). These approaches require a subject matter expert to estimate the likelihood of different types of error occurring. Such judgements are frequently difficult to make, and are inherently subjective. Empirical data regarding the frequency of different types of error are often not available, or are difficult to collect, particularly for systems that are under development.

There are a number of reasons why formal models are not currently used for modelling the performance of human operators within safety-critical systems. These include:
• difficulty formally modelling the interaction between operators and the computer;
• lack of understanding of the psychological processes responsible for operator error;
• inability to formally specify the antecedent conditions that trigger those processes, and to estimate the resulting likelihood of errors; and
• lack of precise methods for determining system risk due to operator errors.
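The checklist-driven FMEA procedure described above can be sketched in code. The sketch below is illustrative only: the task names, checklist entries, and the `assess` judgement function are assumptions introduced for the example, not taken from SHERPA, THERP, or the cited sources, and the checklist is far smaller than a real human failure-mode taxonomy.

```python
from dataclasses import dataclass

# Illustrative checklist of generic human failure modes (an assumption for
# this sketch; real taxonomies such as SHERPA's are much richer).
CHECKLIST = [
    "action omitted",
    "action too late",
    "wrong action selected",
    "right action on wrong object",
    "information not checked",
]

@dataclass
class FailureMode:
    task: str
    mode: str
    effect: str       # analyst-supplied consequence for the system
    likelihood: str   # subjective expert rating, e.g. "low"/"medium"/"high"

def fmea(tasks, assess):
    """Inspect each task against every checklist entry.

    `assess(task, mode)` stands in for the subject-matter expert's
    judgement: it returns (effect, likelihood) when the failure mode is
    credible for that task, or None otherwise.
    """
    records = []
    for task in tasks:
        for mode in CHECKLIST:
            judgement = assess(task, mode)
            if judgement is not None:
                effect, likelihood = judgement
                records.append(FailureMode(task, mode, effect, likelihood))
    return records

# Hypothetical dispatch-system tasks, loosely in the spirit of the London
# Ambulance Service example; the judgements below are invented.
def assess(task, mode):
    if task == "acknowledge exception message" and mode == "action too late":
        return ("dispatch delayed", "high")
    return None

rows = fmea(["acknowledge exception message", "enter call details"], assess)
```

Note that the subjective element the chapter criticises is concentrated in `assess`: the enumeration of tasks and modes is mechanical, but the effect and likelihood entries still rest entirely on expert judgement.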