MC²: Multiple Clients on a Multilevel Cache
Gala Yadgar¹*, Michael Factor², Kai Li³ and Assaf Schuster¹

¹ Computer Science Department
Technion - Israel Institute of Technology
Haifa 32000, Israel
{gala,assaf}@cs.technion.ac.il

² IBM Haifa Labs
Haifa University Campus
Haifa 31905, Israel
factor@il.ibm.com

³ Department of Computer Science
Princeton University
Princeton, NJ 08540, USA
li@cs.princeton.edu
Abstract
In today’s networked storage environment, it is common
to have a hierarchy of caches where the lower levels of the
hierarchy are accessed by multiple clients. This sharing can
have both positive and negative effects. While data fetched
by one client can be used by another client without incurring
additional delays, clients competing for cache buffers
can evict each other's blocks and interfere with exclusive
caching schemes.
Our algorithm, MC², combines local, per-client management
with a global, system-wide scheme, to emphasize
the positive effects of sharing and reduce the negative ones.
The local scheme uses readily available information about
the client’s future access profile to save the most valuable
blocks, and to choose the best replacement policy for them.
The global scheme uses the same information to divide the
shared cache space between clients, and to manage this
space. Exclusive caching is maintained for non-shared data
and is disabled when sharing is identified. Our simulation
results show that the combined algorithm significantly reduces
the overall I/O response times of the system.
1. Introduction
Caching is used in storage systems to provide fast access
to recently or frequently accessed data, reducing I/O delays
and improving system performance. In many storage system
configurations, client and server caches form a two- or
more layer hierarchy. With a single client, the effectiveness
of such hierarchies is optimal when caches are kept exclusive;
a data block should be cached in at most one cache
level at a time, avoiding data redundancy [29].

* Gala Yadgar's work is supported in part by the Levi Eshkol scholarship
from the Israel Ministry of Science.
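As a rough illustration of the exclusivity property (a sketch in the spirit of demotion-based schemes such as [29], not the paper's implementation; all names here are our own), a client can promote a block out of the second-level cache on a miss, and demote its own eviction victim down, so that no block ever resides in both levels:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # block_id -> contents, in LRU order

    def get(self, block):
        if block in self.data:
            self.data.move_to_end(block)  # mark as most recently used
            return self.data[block]
        return None

    def pop(self, block):
        # Remove a block entirely; used when promoting it to the client,
        # so that exclusivity is preserved.
        return self.data.pop(block, None)

    def put(self, block, value):
        # Insert a block, returning the evicted LRU victim (if any).
        self.data[block] = value
        self.data.move_to_end(block)
        if len(self.data) > self.capacity:
            return self.data.popitem(last=False)  # (victim_id, contents)
        return None

class ExclusiveClient:
    def __init__(self, l1_size, l2):
        self.l1 = LRUCache(l1_size)
        self.l2 = l2  # second-level (e.g. server-side) cache

    def read(self, block):
        value = self.l1.get(block)
        if value is not None:
            return value                 # first-level hit
        value = self.l2.pop(block)       # promote: L2 discards its copy
        if value is None:
            value = f"disk:{block}"      # stand-in for a disk read
        victim = self.l1.put(block, value)
        if victim is not None:
            self.l2.put(*victim)         # demote the evicted block to L2
        return value

l2 = LRUCache(4)
client = ExclusiveClient(2, l2)
for b in [1, 2, 3, 1]:
    client.read(b)
# Exclusivity holds: no block is cached in both levels at once.
assert not (set(client.l1.data) & set(l2.data))
```

Note that every transfer between levels removes the block from its source, which is exactly what makes the aggregate cache behave like a single larger cache for one client.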
When the lower levels of the hierarchy are accessed by
several clients, data sharing may occur. Sharing introduces
both positive and negative effects on the performance of a
multilevel cache. On the positive side, blocks fetched by
one client may later be requested by another. The second
client then experiences an effect similar to prefetching: the
blocks are already present in the shared cache and need not
be fetched synchronously.
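This prefetch-like benefit can be seen in a minimal sketch (our own illustrative names, not the paper's code): a block fetched by one client stays in the shared second-level cache, so a later request by a second client avoids the disk entirely:

```python
# Sketch of the positive side of sharing a second-level cache.
shared_cache = {}   # shared second-level cache: block_id -> contents
disk_reads = 0      # counts synchronous disk fetches

def read(client, block):
    global disk_reads
    if block not in shared_cache:
        disk_reads += 1                      # synchronous disk fetch
        shared_cache[block] = f"data:{block}"
    return shared_cache[block]               # later clients hit here

read("A", 7)   # client A misses and fetches block 7 from disk
read("B", 7)   # client B finds block 7 already cached: no disk read
assert disk_reads == 1
```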
On the negative side, sharing introduces two major challenges.
The first is maintaining exclusivity: a block may
be cached in the first-level cache by one client and in the
second level by another. Furthermore, exclusivity may deprive
clients of the benefit of sharing described above, since
blocks fetched by one client are not available for use by
others. Several previous studies acknowledge this problem
[29, 31], while others assume that clients access disjoint
data sets [26, 28].
The second challenge is meeting the demands of competing
clients for space in the shared cache, while minimizing
their interference with each other. A common approach
for allocation in shared caches is partitioning, where each
client is allocated a portion of the cache buffers according
to a global allocation scheme [14, 28]. This approach is
problematic when blocks are accessed by several clients and
might belong to more than one partition.
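To make the partitioning approach concrete (a hedged sketch under our own assumptions; the weights and rounding policy are illustrative, not from [14, 28]), a global scheme might divide the cache buffers among clients in proportion to a per-client weight:

```python
# Sketch of global cache partitioning: buffers are divided among
# clients proportionally to an assumed per-client weight.

def partition_buffers(total_buffers, client_weights):
    """Split total_buffers among clients in proportion to their weights."""
    total_weight = sum(client_weights.values())
    shares = {c: (total_buffers * w) // total_weight
              for c, w in client_weights.items()}
    # Hand leftover buffers (from integer rounding) to the
    # highest-weight clients first.
    leftover = total_buffers - sum(shares.values())
    for c in sorted(client_weights, key=client_weights.get, reverse=True):
        if leftover == 0:
            break
        shares[c] += 1
        leftover -= 1
    return shares

shares = partition_buffers(1000, {"A": 3, "B": 2, "C": 1})
assert sum(shares.values()) == 1000
```

The ambiguity noted in the text is visible here: a block accessed by both clients A and B has no obvious home partition, so a strict per-client split either duplicates it or charges it to the wrong client.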
We present MC², an algorithm for managing multilevel
shared caches. We show that it enhances the positive effects
of sharing and reduces the negative ones, by choosing the
right replacement policy. MC² relies on information provided
by the application running on each client. The clients
are expected to provide a division of accessed blocks into
The 28th International Conference on Distributed Computing Systems
1063-6927/08 $25.00 © 2008 IEEE
DOI 10.1109/ICDCS.2008.29