Significance of Replacement Queue in Sequential Prefetching

Elizabeth Varki, University of New Hampshire, varki@cs.unh.edu
Allen Hubbe*, EMC, Allen.Hubbe@emc.com
Arif Merchant†, Google, aamerchant@google.com

Abstract

The performance of a prefetch cache depends on both the prefetch technique and the cache replacement policy. These two algorithms execute independently of each other, but they share a data structure - the cache replacement queue. The replacement queue captures the combined impact of prefetching and caching. This paper shows that even with a simple sequential prefetch technique, the hit rate increases and the response time decreases when the LRU replacement queue is split into two equal-sized queues. A more significant performance improvement is possible with sophisticated prefetch techniques and by splitting the queue unequally.

1 Introduction

Prefetching and caching refer to a cache system that loads data blocks not yet requested by the cache workload. The goal is to ensure that the cache contains blocks that will be requested in the near future. Prefetching and caching software essentially consists of two algorithms, namely, the prefetch technique and the replacement policy. The software is responsible for leveraging the spatial and temporal locality of reference in the cache workload. A prefetch technique is the software responsible for identifying access patterns in the cache workload and loading data blocks from these patterns into the cache before the blocks are requested. The task of a prefetch technique is to determine what data blocks to prefetch and when to prefetch them. For example, if a file is being read sequentially, the sequential prefetch technique associated with the file system cache may prefetch several blocks contiguous to the requested file block. The replacement policy is the cache software that determines which block is evicted from the cache when a new block is to be loaded into a full cache.
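The interplay of the two algorithms can be illustrated with a minimal sketch: a cache that combines a sequential prefetch technique (on a miss for block b, also load the next few contiguous blocks) with a single LRU replacement queue. All names and the prefetch degree here are illustrative assumptions, not details from the paper.

```python
from collections import OrderedDict

class PrefetchLRUCache:
    """Sketch: sequential prefetching over one LRU replacement queue.

    On a miss for block b, the next `degree` contiguous blocks are also
    loaded (the prefetch technique); eviction is plain LRU over a single
    queue (the replacement policy). The two decisions are independent,
    but both act on the same queue.
    """

    def __init__(self, capacity, degree=2):
        self.capacity = capacity
        self.degree = degree
        self.queue = OrderedDict()   # block -> None; front = LRU victim
        self.hits = 0
        self.requests = 0

    def _load(self, block):
        if block in self.queue:
            return
        if len(self.queue) >= self.capacity:
            self.queue.popitem(last=False)       # evict the LRU block
        self.queue[block] = None

    def access(self, block):
        self.requests += 1
        if block in self.queue:
            self.hits += 1
            self.queue.move_to_end(block)        # refresh recency
        else:
            self._load(block)                    # demand load
            for b in range(block + 1, block + 1 + self.degree):
                self._load(b)                    # sequential prefetch

    def hit_rate(self):
        return self.hits / self.requests if self.requests else 0.0
```

For a purely sequential workload, every miss prefetches the next `degree` blocks, so roughly `degree` out of every `degree + 1` requests hit.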
* This work was done while Allen was at UNH.
† This work was done before Arif joined Google.

Thus, the replacement policy is responsible for keeping a prefetched block in the cache until it is requested. The prefetch technique and the replacement policy are standalone, capable of executing independently of each other: the prefetch technique does not decide when a prefetched block is evicted, and the replacement policy does not decide when a block is prefetched. Together, the two algorithms determine the contents of the cache. Consequently, even though the algorithms are independent, the performance of a prefetch technique cannot be studied in isolation from the replacement policy, and vice versa. The performance of a prefetch cache - cache hit ratio, response time - depends on the combined impact of both algorithms.

Prefetching and caching are fundamental to computer and network systems, so this topic has been extensively researched. Nevertheless, it remains poorly understood because it is complex. Several issues must be considered - for prefetch techniques: how much to prefetch, when to prefetch, and what to prefetch; for their interaction: which replacement policy to use with a particular prefetch technique, and which prefetch technique to use with a particular replacement policy. A prefetch/replacement policy that works well with one workload could perform poorly with another. Relative performances vary in a seemingly arbitrary manner [6], so there are no optimal prefetch techniques nor optimal replacement policies for prefetch caches. Some papers have developed integrated prefetch and replacement algorithms, but these algorithms require a priori knowledge of the workload for optimum performance. Most modern file system and storage caches implement separate prefetch techniques and replacement policies, not an integrated algorithm combining prefetch and cache replacement.
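The abstract's split replacement queue can be sketched as follows. The policy details below (equal halves, prefetched blocks wait in their own LRU queue until first reference and are then promoted to the demand queue, with eviction handled independently within each half) are illustrative assumptions, not the paper's exact design.

```python
from collections import OrderedDict

class SplitLRUCache:
    """Sketch: LRU replacement queue split into two equal-sized queues.

    One queue holds demand-loaded (and promoted) blocks; the other holds
    prefetched blocks awaiting their first reference. Each half evicts
    its own LRU block, so unreferenced prefetched blocks cannot push
    demand blocks out of the cache.
    """

    def __init__(self, capacity):
        half = capacity // 2
        self.cap_demand = half
        self.cap_prefetch = capacity - half
        self.demand = OrderedDict()    # block -> None; front = LRU
        self.prefetch = OrderedDict()

    def _insert(self, queue, cap, block):
        if len(queue) >= cap:
            queue.popitem(last=False)            # evict LRU of this half
        queue[block] = None

    def load_prefetched(self, block):
        """Called by the prefetch technique to stage a block."""
        if block not in self.demand and block not in self.prefetch:
            self._insert(self.prefetch, self.cap_prefetch, block)

    def access(self, block):
        """Demand request; returns True on a cache hit."""
        if block in self.demand:
            self.demand.move_to_end(block)       # refresh recency
            return True
        if block in self.prefetch:
            del self.prefetch[block]             # promote on first hit
            self._insert(self.demand, self.cap_demand, block)
            return True
        self._insert(self.demand, self.cap_demand, block)
        return False                             # demand miss
```

Under this sketch, the split changes only the replacement side: the prefetch technique stages blocks with `load_prefetched` exactly as before, while eviction pressure from unreferenced prefetched blocks is confined to their own half of the queue.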
What this paper does: This paper looks at prefetching and caching from a different perspective than that of prior papers. We view prefetching and caching as a synchronization problem, similar to problems such as producer-consumer, reader-writer, and dining philoso-