Temporal and Spatial Locality: A Time and a Place for Everything

Rick Bunt [1] (University of Saskatchewan) and Carey Williamson [2] (University of Calgary)

[1] Bunt’s current address: Dept. of Computer Science, University of Saskatchewan, Saskatoon, Canada S7N 5A9 (rick.bunt@usask.ca)
[2] Williamson’s current address: Dept. of Computer Science, University of Calgary, Calgary, Canada T2N 1N4 (carey@cpsc.ucalgary.ca)

Abstract

Since its earliest articulations close to 40 years ago, locality has been a force shaping computer science research. It has been the guiding principle behind the design of techniques to manage many resources, including main memory, caches of various sorts, secondary storage, and web servers. The roots of locality go much deeper, however, and its impact is even more pervasive. In this paper, we track the locality phenomenon and its application, across time and across applications, looking for common themes.

1. Introduction

The notion of locality is fundamental to the performance of computing systems. Because of locality, for example, acceptable page fault rates can be achieved even when the memory allocated to a program is much less than that required to store all of its pages, Internet routers can make high-speed routing decisions with very modest forwarding caches, and mobile users can work with remotely stored files even though they are located half a world away from the file server. The implications of locality pervade many aspects of computing, and we rely on locality properties to improve performance across a wide spectrum of applications [PBC 1982].

A great deal of computer science research has addressed the recognition, modelling, understanding, and application of locality, but the underlying principles extend far beyond computing. Many naturally occurring phenomena exhibit properties similar to locality, and the concept has been used over the years to understand population distribution, the distribution of biological species, the distribution of articles among journals, the distribution of wealth, and word usage; to plan the location of libraries and other facilities; to order search keys in hash tables and estimate program length; and to model the popularity of television programs.

The underlying general principle has been described under various names, including “The Law of Diminishing Returns”, “The Principle of Least Effort”, “Zipf’s Law”, “Bradford’s Law”, and “The 80-20 Rule”. The basic idea is that there is a strong core of items relevant to a particular task, and the incremental importance of additional items in the sample diminishes rapidly as more items are considered. The practical implication is that there are usually many items of low importance, and large cost savings are realized when they are ignored.

In the 1930s and 1940s this phenomenon was articulated by Zipf [Z 1949], Bradford [B 1934], and others as “concentration of productivity” and “the law of scattering”; in the 1960s and 1970s it was operationalized by Denning and others [D 1968, D 1970b, DSS 1972] as the “principle of locality”, with a specific focus on memory management; in
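To make the cost-savings argument behind the 80-20 rule concrete, the short Python sketch below (an illustration added here, not an example from the paper itself; the catalog size of 10,000 items and the cache fractions are arbitrary choices) computes how many references are captured when only the most popular items are retained, assuming a pure Zipf popularity distribution in which the i-th most popular item is requested with probability proportional to 1/i.

    # Illustrative sketch (not from the paper): under a Zipf-like popularity
    # distribution, where the i-th most popular item is referenced with
    # probability proportional to 1/i, a small core of items accounts for
    # most references -- the quantitative intuition behind the 80-20 rule.
    # The catalog size N and the cache fractions below are arbitrary choices.

    N = 10_000  # hypothetical number of distinct items (pages, files, URLs, ...)

    # Unnormalized Zipf weights; their sum is the harmonic number H_N.
    weights = [1.0 / rank for rank in range(1, N + 1)]
    total = sum(weights)

    for fraction in (0.01, 0.05, 0.10, 0.20, 0.50):
        top_k = int(fraction * N)
        covered = sum(weights[:top_k]) / total
        print(f"top {fraction:4.0%} of items ({top_k:5d}) "
              f"capture {covered:5.1%} of references")

Under these assumed numbers, the most popular 20% of items capture a little over 80% of all references, while the least popular half of the catalog together accounts for under a tenth, so ignoring the long tail costs very little.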