The Effect of Trust Assumptions on the Elaboration of Security Requirements

Charles B. Haley 1, Robin C. Laney 1, Jonathan D. Moffett 2, Bashar Nuseibeh 1

1 Department of Computing, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK
  {C.B.Haley, R.C.Laney, B.Nuseibeh} [at] open.ac.uk
2 Department of Computer Science, University of York, Heslington, York, YO10 5DD, UK
  jdm [at] cs.york.ac.uk

Abstract

Assumptions are frequently made during requirements analysis of a system-to-be about the trustworthiness of its various components (including human components). These trust assumptions can affect the scope of the analysis, the derivation of security requirements, and in some cases how functionality is realized. This paper presents trust assumptions in the context of the analysis of security requirements. A running example shows how trust assumptions can be used by a requirements engineer to help define and limit the scope of analysis and to document the decisions made during the process. The paper concludes with a case study examining the impact of trust assumptions on software that uses the Secure Electronic Transaction (SET) specification.

1. Introduction

Requirements engineering is concerned with determining the characteristics of a system-to-be. The system-to-be comprises not only software, but also all the diverse components needed for it to achieve its purpose. For example, a computing system clearly includes the computers, but it also incorporates the people who will use, maintain, and depend on the system; the environment within which the system will exist; and any systems already in place.

An important element of a system’s requirements is its security requirements. Security requirements arise because stakeholders assert that some objects, be they tangible (e.g. cash) or intangible (e.g. information and state), have direct or indirect value. Objects valued in this way are called assets, and the stakeholders naturally wish to protect these assets from harm.
For example, tangible assets might be destroyed, stolen, or modified; information assets might be destroyed, revealed, or modified; and state might be modified, revealed, or disputed (this list is not exhaustive). An asset can also be used to cause indirect harm, such as damage to reputation. The requirements engineer uses security requirements to restrict the number of cases in which these undesirable outcomes can take place.

This paper presents how the engineer’s derivation, elaboration, and analysis of security requirements can be aided through the use of trust assumptions, problem frames, and threat descriptions.

Although not required, the derivation of security requirements can be facilitated by postulating the existence of an attacker. The attacker’s goal is to cause harm. Ignoring the possibility of harm caused by accident or error, if one can show that no attackers exist, then security is irrelevant. An attacker causes harm by exploiting an asset in some way. The possibility of such an exploitation is called a threat. More precisely, a threat is the potential for abuse of an asset that, in the context of the system, will cause harm. An attack exploits a vulnerability in the system to carry out a threat.

One can reason about the attacker as if he or she were a type of stakeholder. Recent work has taken this approach, looking at the requirements and goals of the attacker (e.g. [1, 2, 15, 17, 18, 24]). From this point of view, an attacker wants the system to have characteristics that create vulnerabilities. The requirements engineer wants to ensure that the attacker’s requirements are not met. One way to do this is to specify sufficient constraints on the behavior of the system to ensure that the number of vulnerabilities is kept to an acceptable minimum [19]. Security requirements provide these constraints.

One school of thought holds that a requirements engineer should reason about a system’s characteristics in the absence of a particular implementation of the system (e.g.
[14]). Under this view, requirements engineering is concerned with enumerating goals for a system under consideration and producing a description of the system’s desired behavior. Another view, exemplified by problem frames [12], is that a system is intended to solve a given problem in a given context, where the context includes design decisions. One uses problem frames to analyze the problem in terms of the context and the design decisions