Block Pattern Based Buffer Cache Management
Urmila Shrawankar
Ph.D. Scholar, IEEE Student Member
G.H. Raisoni College of Engineering
Nagpur, India
urmila@ieee.org
Reetu Gupta
M.Tech CSE Student
G.H. Raisoni College of Engineering
Nagpur, India
guptareetu.rs@gmail.com
Abstract— Efficient caching of data blocks in the buffer cache can overcome the costly delays associated with accesses made to secondary storage devices. A pattern-based buffer cache management methodology is proposed for enhancing system performance. The block to be replaced is determined by identifying access patterns, such as sequential and looping, exhibited in the I/O requests issued to the block, through the use of the program counter. The blocks are stored in separate, pattern-based partitions of the cache, each of which applies the replacement policy that best utilizes the cache under that reference pattern. Marginal gain functions are used for managing the partitions. As the suggested methodology aims at improving the buffer cache hit ratio, it is capable of enhancing the performance of multimedia applications and heterogeneous storage environments. It can also be used for power saving in database servers, as an increased cache hit ratio reduces memory traffic and thus saves energy.
Keywords— buffer cache, replacement policies, access patterns, program counter, cache partition
I. INTRODUCTION
Modern operating systems employ a buffer cache for enhancing system performance, which is otherwise limited by slower secondary storage accesses. The effectiveness of the block replacement algorithm plays a key role in determining system performance, and the design of effective replacement algorithms is an active research area in the field of operating systems. One of the oldest and most widely used replacement algorithms in real-time systems is LRU (Least Recently Used) [1-3], due to its effective utilization of the principle of locality.
Owing to its minimal management overhead, low complexity, and simple implementation, the current GNU/Linux kernel still implements the LRU replacement policy. However, this policy is dependent on the application working set, and it suffers degraded performance when weak locality of reference is exhibited by an application working set that is large compared to the cache size [4].
In today's world, where everything is available at one click through the Internet or handheld devices, information is buffered in memory everywhere. The technique proposed in [1] showed that through correct prefetching and caching of the requested data, network traffic and bandwidth can be saved to a large extent.
For better performance gain and an improved cache hit ratio, recent replacement algorithms [1, 6-8] use the information carried in the accesses made to data blocks through I/O requests. These techniques exploit the patterns exhibited in the I/O workload, such as mixed, thrashing, and periodic, and apply specific replacement policies to best utilize the cache under each reference pattern.
A. Buffer Cache Organization
Inside the computer system, storage devices such as registers, caches, and main memory are ordered at various levels, as depicted in “Fig. 1”. At the top level are the registers, where data is accessed at processor speed, usually in one clock cycle. At the next level is the programmer-transparent cache memory, implementable at one or more levels and handled by the memory management unit. Primary memory is present at the next level, managed by the operating system.
Fig. 1. Storage Hierarchies
A noticeable fact is that, moving down the hierarchy, the space available for storing data increases, but the access time grows and the transfer bandwidth, i.e. the rate at which information is transferred between adjacent levels, decreases. The buffer cache is introduced as an interface, in the user area, between main memory and the disk drives [9]. Thus the buffer cache aims to reduce the frequency of accesses made to secondary storage devices and to enhance system throughput.
Direct access to even a single byte or character on the secondary storage is prohibited. Whenever a request for data or an instruction arises, the operating system kernel first searches the buffer cache for the block containing the requested byte. If the request is not satisfied, the block containing
[Fig. 1 contents: Level 0 – Registers in CPU; Level 1 – Cache; Level 2 – Main Memory; Level 3 – Disk Storage; Level 4 – Tape Drive; with the Buffer Cache between main memory and disk storage]
978-1-4673-4463-0/13/$31.00 ©2013 IEEE
The 8th International Conference on
Computer Science & Education (ICCSE 2013)
April 26-28, 2013. Colombo, Sri Lanka