Means-ends and whole-part traceability analysis of safety requirements

Jang-Soo Lee a,*, Vikash Katta b, Eun-Kyoung Jee c, Christian Raspotnig b

a Korea Atomic Energy Research Institute, Daejeon, Republic of Korea
b Institute for Energy Technology, Halden, Norway
c Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

Article history: Received 22 January 2009; Received in revised form 19 July 2009; Accepted 19 August 2009; Available online 23 August 2009

Keywords: Cognitive safety engineering; Means-ends and whole-part; Traceability; Safety

Abstract

Safety is a system property; hence, high-level safety requirements are incorporated into the implementation of system components. In this paper, we propose an optimized traceability analysis method, based on the means-ends and whole-part concept of the cognitive systems engineering approach, to trace these safety requirements. A system consists of hardware, software, and humans according to a whole-part decomposition. The safety requirements of a system and its components are enforced or implemented through a means-ends lifecycle. To provide evidence of the safety of a system, the means-ends and whole-part traceability analysis method optimizes the creation of safety evidence from the safety requirements, safety analysis results, and other system artifacts produced through a lifecycle. These sources of safety evidence have causal (cause-consequence) relationships with each other. Failure mode and effect analysis (FMEA), hazard and operability analysis (HAZOP), and fault tree analysis (FTA) are generally used for the safety analysis of systems and their components. These techniques cover the causal relations in a safety analysis. The causal relationships in the proposed method make it possible to trace the safety requirements through the safety analysis results and system artifacts.
We present the proposed approach with an example, and describe the usage of the TRACE and NuSRS tools to apply the approach.

© 2009 Elsevier Inc. All rights reserved.

1. Introduction

The usage of digitalized systems to control safety-critical operations is ever-increasing. Examples of such safety control systems include digital control systems embedded in nuclear power plants, satellites, and missiles. A typical control system consists of the following components, as presented in Fig. 1: plant, controller, actuators, and sensors. The primary concern in developing a safety control system is that the plant (P) must behave in a safe and acceptable way. The correctness of the controller (C) is the only means for ensuring correct and safe plant behavior. The requirements of safety control systems should be correct in their functional, timing, and safety aspects. When specifying the requirements (Sp) and proving their safety properties, one must consider the behaviors of both the plant (P) and the controller (C) (Ostroff, 1989). That is, the truth of the following proposition must be demonstrated:

P ∧ C → Sp.

The control structure can be a three-layered structure. First, an automated controller without software controls the plant using only electro-mechanical control theory. Second, an automated controller with software controls the plant by using software (e.g., control software, operating system, and device driver), a computer, and an electro-mechanical hardware controller. Usually, the software has a supervisory function above the hardware controller. Third, there can be a human supervisory controller above the software and hardware controllers. Most safety-critical systems, in nuclear power plants and airplanes for example, have this type of control architecture. Therefore, a human-centered control design for safety systems is important to maintain the safety of the human–machine interaction.
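The verification obligation above, that the plant P composed with the controller C satisfies the specification Sp, can be illustrated with a toy closed loop. The following is a minimal sketch, not from the paper: the tank plant, the valve-threshold controller, and all function names are invented for illustration, and the check simulates a single deterministic trajectory rather than performing full formal verification.

```python
def plant_step(level, valve_open):
    """Toy plant P: the tank level rises while the inlet valve is open, drains otherwise."""
    return level + 2 if valve_open else max(level - 1, 0)

def controller(level):
    """Toy controller C: command the inlet valve closed before the level gets high."""
    return level < 8  # True = valve open

def spec(level):
    """Toy specification Sp: the tank must never overflow (level <= 10)."""
    return level <= 10

def check_p_and_c_implies_sp(control, initial_level=0, steps=100):
    """Simulate the closed loop P composed with a controller and check Sp in every visited state."""
    level = initial_level
    for _ in range(steps):
        if not spec(level):
            return False
        level = plant_step(level, control(level))
    return spec(level)
```

With the safe controller the check passes, while an always-open valve (`lambda level: True`) drives the toy plant past the limit and the check fails, mirroring the point that the controller's correctness is what ensures safe plant behavior.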
The interaction among the components should also be considered to verify that the behavior of both the plant and the controller meets the functional requirements and achieves safe behavior through the enforcement of the safety constraints. Safety control systems must be designed (and operated) in a way that not only achieves the intended (goal-oriented) behavior but also avoids unintended (risk-oriented) behavior. To be safe, the original design must not only enforce the appropriate safety requirements and constraints on behavior to ensure safe operation, but it must also continue to operate safely as changes and adaptations occur over time (Woods, 2000). A prerequisite for achieving this is to perform a safety analysis as an integrated part of a system development process, with a well-supported change management process.

doi:10.1016/j.jss.2009.08.022

* Corresponding author. Address: Korea Atomic Energy Research Institute, 1045 Daedeok-daero, Youseong-gu, Daejeon 305-353, Republic of Korea. Tel.: +82 42 868 8235; fax: +82 42 868 8916. E-mail address: jslee@kaeri.re.kr (J.-S. Lee).

The Journal of Systems and Software 83 (2010) 1612–1621
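The causal (cause-consequence) links between safety requirements, safety-analysis results, and system artifacts described in the abstract can be pictured as a small data structure. The sketch below is purely illustrative and not the paper's method or tooling: the `Artifact` class, its `kind` labels, and the traversal function are all assumptions made up for this example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Artifact:
    """A node in the causal traceability graph: a requirement, analysis result, or design artifact."""
    name: str
    kind: str  # e.g. "requirement", "FTA result", "design"
    causes: List["Artifact"] = field(default_factory=list)  # upstream (cause) artifacts

def trace_to_requirements(artifact):
    """Walk the cause-consequence links upstream to find the originating safety requirements."""
    if artifact.kind == "requirement":
        return [artifact.name]
    reqs = []
    for cause in artifact.causes:
        reqs.extend(trace_to_requirements(cause))
    return reqs
```

For example, a design artifact linked to an FTA result that was in turn derived from requirement "SR-1" traces back to `["SR-1"]`, which is the kind of evidence chain a change-management process would inspect when a requirement or artifact is modified.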