International Journal of Computer Applications (0975 – 8887) Volume 12 – No. 12, January 2011

A Modified Algorithm for Buffer Cache Management

ABSTRACT
A fundamental challenge in improving file system performance is the design of effective block replacement algorithms that minimize buffer cache misses. In this paper an algorithm is proposed for buffer cache management with prefetching. The buffer cache consists of two units, a main cache unit and a prefetch unit. The sizes of both units are fixed, and their total size is constant. Blocks are prefetched according to the one block look ahead (OBL) principle. Block placement and replacement policies are defined. The replacement strategy depends on the most recently accessed block and on the defined miss count percentage (or, equivalently, hit count percentage) of the blocks. A FIFO policy is used for the prefetch unit.

KEYWORDS
Buffer cache management, prefetching, data access, file systems management, operating systems performance.

1. INTRODUCTION
Buffer cache management is a widely studied topic, and many algorithms have been proposed to improve it: LFU, LRU-k, 2Q, FBR, LRFU, C-LRU (Cooperative LRU), D-LRU (Distributed LRU), N-Chance and RobinHood, among others. In LFU, the least frequently used block is replaced. LRU-k keeps track of the last k references to a page; the page with the shortest inter-arrival time is retained in the cache. In 2Q, two queues are maintained to classify pages as either hot or cold; on a re-reference, a page in a queue is treated as more likely to be referenced again. In FBR, blocks are maintained in LRU order but replaced in least frequently used order. In LRFU, a function of the access times of the blocks determines the block to be replaced. Prefetching has also been proven effective, as discussed in [3, 4, 6, 7]. An alternative to LRU replacement was suggested in [9]. C-LRU and RobinHood are cooperative algorithms.
C-LRU is based on D-LRU [8]: when a client requests a chunk from another client, a new copy of the chunk is created, so the importance of the chunk at both clients should be reduced. RobinHood is based on the N-Chance algorithm; in N-Chance a singlet is evicted, whereas in RobinHood a singlet (a "poor" chunk) is forwarded to a peer holding a chunk that is cached at many clients (a "rich" chunk). File system speed also has an impact on buffer cache management [1]. In [3] an algorithm called W2R is proposed that prefetches in an aggressive manner. The authors divide the cache into two units, the Weighing room and the Waiting room, whose sizes can be changed dynamically. The Waiting room holds blocks that have been prefetched; the Weighing room holds blocks that have been accessed. The algorithm follows one block look ahead (OBL) prefetching, and the sizes of the two rooms are adjusted based on the time elapsed from when a block is brought into the Waiting room until it is accessed.

This paper proposes an algorithm to place and replace blocks in the buffer cache based on the OBL principle. The buffer cache consists of two parts, a prefetch unit and a main cache. The sizes of the two units are fixed; a model with variable sizes is a topic of future research. On a miss, the block is fetched into the main cache, and the next sequential block is fetched into the prefetch unit. On a hit in the prefetch unit, the block is moved into the main cache. The algorithm of [1] replaces the block with the maximum number of misses that is not the most recently accessed. In the proposed algorithm, the block with the maximum miss percentage, computed as miss count / (miss count + hit count) × 100, that is not the most recently accessed is replaced by the new block; equivalently, the block with the maximum hit percentage is retained. In other words, a block with a higher hit percentage has a better chance of remaining in the main cache.
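The placement and replacement policy described above can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: the class and method names (`BufferCache`, `access`) are illustrative, blocks are assumed to be identified by integer block numbers so that "next sequential block" means `block + 1`, and the treatment of a freshly fetched block with zero counts is an assumption, since the paper does not specify that corner case.

```python
from collections import OrderedDict

class BufferCache:
    """Sketch: fixed-size main cache plus a fixed-size FIFO prefetch
    unit using one block look ahead (OBL)."""

    def __init__(self, main_size, prefetch_size):
        self.main_size = main_size
        self.prefetch_size = prefetch_size
        self.main = {}                 # block number -> [hit_count, miss_count]
        self.prefetch = OrderedDict()  # insertion order gives FIFO eviction
        self.mru = None                # most recently accessed block

    def miss_pct(self, block):
        hits, misses = self.main[block]
        total = hits + misses
        # A freshly fetched block (0, 0) is treated as 0% (assumption).
        return 100.0 * misses / total if total else 0.0

    def _evict_victim(self):
        # Replace the block with the maximum miss percentage that is
        # not the most recently accessed one.
        candidates = [b for b in self.main if b != self.mru] or list(self.main)
        del self.main[max(candidates, key=self.miss_pct)]

    def _prefetch_next(self, block):
        # OBL: bring the next sequential block into the prefetch unit.
        nxt = block + 1
        if nxt in self.main or nxt in self.prefetch:
            return
        if len(self.prefetch) >= self.prefetch_size:
            self.prefetch.popitem(last=False)      # FIFO replacement
        self.prefetch[nxt] = True

    def access(self, block):
        """Return True on a main-cache hit, False otherwise."""
        if block in self.main:
            self.main[block][0] += 1               # this block's hit count
            for b, counts in self.main.items():    # all other blocks' miss
                if b != block:                     # counts are incremented
                    counts[1] += 1
            self.mru = block
            return True
        if block in self.prefetch:                 # hit in the prefetch unit:
            del self.prefetch[block]               # move block to main cache
        if len(self.main) >= self.main_size:
            self._evict_victim()
        self.main[block] = [0, 0]
        self._prefetch_next(block)
        self.mru = block
        return False
```

The `OrderedDict` with `popitem(last=False)` gives the FIFO behaviour of the prefetch unit directly; the main cache needs no ordering structure because the victim is chosen by recomputing miss percentages at eviction time.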
On every hit, the corresponding block's hit count is incremented. Simultaneously, the miss counts of all other blocks in the main cache are incremented, reflecting that those blocks are not useful at that point in time. Two blocks cannot have identical hit and miss counts, since they are fetched at different points in time. The algorithm thus keeps track of the most recently accessed block and the relative usage of the blocks over a period of time.

The rest of the paper is organized as follows. Section 2 gives a motivating example, section 3 gives the algorithm, section 4 the conclusion, and section 5 the references.

2. MOTIVATING EXAMPLE
Consider a list of references. In the W2R algorithm, LRU is used for replacing blocks in the Weighing room. Consider the following scenario. A block b1 is brought in at time t1; it has 20 misses and 40 hits and has been in the cache for 60 units of time. Let block b2 have 30 misses and 70 hits, and block b3 have 7 misses and 20 hits. Let b1 be the LRU block. If the cache holds three blocks and block b4 is needed, LRU evicts b1; if a future reference is to b1, it results in a miss. The algorithm of [1] finds the block with the maximum number of misses that is not the most recently accessed, which is b2, and replaces it; hence a request for b1 is a hit. Under the proposed algorithm, b1 has the maximum miss percentage (33.3%, versus 30% for b2), or equivalently the minimum hit percentage (66.7%, versus 70% for b2), and is not the most recently accessed block, so b1 is replaced. The algorithm is based on the idea that blocks with the largest proportion of non-access over their lifetime in the cache up to the current point are good candidates for replacement. This is the motivation.

Mukesh Kumar Chaudhary, Manoj Kumar, Mayank Rai and Rajendra Kumar Dwivedi
Department of Computer Science and Engineering, M. M. M. Engineering College, Gorakhpur, India
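The percentages in the motivating example can be checked directly from the definition miss count / (miss count + hit count) × 100; the (hits, misses) pairs below are the ones given for b1, b2 and b3.

```python
def miss_pct(hits, misses):
    """Miss percentage as defined in the text:
    miss count / (miss count + hit count) * 100."""
    return misses / (hits + misses) * 100

blocks = {"b1": (40, 20), "b2": (70, 30), "b3": (20, 7)}
for name, (hits, misses) in blocks.items():
    print(name, round(miss_pct(hits, misses), 1))
# prints: b1 33.3, b2 30.0, b3 25.9 -- b1 has the maximum miss
# percentage and, being non-MRU, is the replacement victim.
```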