Computer Security is Not a Science (but it should be)

Michael Greenwald, Carl A. Gunter, Björn Knutsson, Andre Scedrov, Jonathan M. Smith, Steve Zdancewic

University of Pennsylvania

1 Introduction

Security research is sometimes referred to as the "Humanities of Computer Science" because, too frequently, "secure" systems are built using equal measures of folklore and black arts. Despite the humorous intention, there is a kernel of truth in this jest: computer security, at least "security in the large", is not currently a science.

This claim may seem unfair, given the progress made in security over the past decades. However, our present tools and methodologies are at most adequate for understanding systems security on a small scale. Cryptography, for example, is perhaps the most thoroughly studied and most rigorously modeled aspect of security. Despite its tremendous importance, cryptography alone is not sufficient for building secure systems. Indeed, the vast majority of all security flaws arise because of faulty software (e.g., the ubiquitous buffer overflow problem). Such security holes cannot be avoided by cryptographic techniques, and despite widely known and accepted solutions to these kinds of software flaws, buggy code persists.

Why is security not a science? Some would argue that, by nature, security is fundamentally unscientific: security is hopelessly intertwined with social and economic forces beyond the purview of science. Yet, economists and psychologists have developed testable, scientific theories. What sets science apart from other disciplines is that it produces hypotheses that can be experimentally verified (or falsified). But, despite the large amounts of security-relevant data collected by organizations like CERT and despite our decades of experience building systems, computer security research has produced little in the way of predictive models or experimentally verifiable hypotheses.
How can we establish security in the large on a more scientific footing? Over the last millennium, one way that disciplines have evolved into "sciences" is through a period of quantification. For example, Galileo, among others, transformed physics from an Aristotelian philosophy to a Baconian science by describing distance, speed, and time quantitatively, rather than explaining why objects fell, rolled, or flew.

Our belief is that the current pre-scientific state of security research is fundamentally due to a lack of reasonable metrics. Furthermore, although there exist a few experimental methods for assessing security (e.g., tiger-teaming [5]), these methods are not yet particularly meaningful in the context of science, where quantitative evaluation (for comparison, modeling, and measurement of achievement) is central. The main questions we are interested in addressing are:

Question 1: How could one measure security quantitatively?

Question 2: What experiments ought one perform to assess security?

Question 3: How can we improve our models using these metrics?

We believe that we can eventually achieve a workfactor-like formulation to address the first question. Such a formulation will likely be a composite of a variety of measurements, with imprecise but meaningful weights. As in physics, an approximation that can gradually be refined with experience is very useful.