Social Caching and Content Retrieval in Disruption Tolerant Networks (DTNs)

Tuan Le, You Lu, Mario Gerla
Department of Computer Science
University of California, Los Angeles
Los Angeles, USA
{tuanle, youlu, gerla}@cs.ucla.edu

Abstract—In this paper, we extend our previous work on content retrieval in Disruption Tolerant Networks (DTNs) to support cooperative caching. Our caching scheme adapts to the unstable network topology of DTNs, and enables the sharing and coordination of cached data among multiple nodes to reduce data access latency. Our key idea is to cache data at cluster head nodes, which have the highest social levels and thus receive many content requests. Furthermore, to reduce the caching overhead at central nodes and to bring data closer to the content requesters, we use multiple caching nodes along the content request forwarding path. Lastly, we propose a new cache replacement policy that considers both the frequency and recency of data access. Simulations in the NS-3 environment show that our caching scheme significantly improves the performance of content retrieval in DTNs.

Keywords—Disruption Tolerant Networks; Cooperative Caching; Social Network Routing

I. INTRODUCTION

In Disruption Tolerant Networks (DTNs) [1], mobile nodes contact each other opportunistically. Due to unpredictable node mobility, it is difficult to maintain a persistent end-to-end connection between nodes. Thus, store-carry-and-forward methods are used for data transfers from source to destination. Node mobility is exploited to relay packets opportunistically upon contacting other nodes. A key challenge in DTN routing is to determine the appropriate relay selection strategy in order to minimize the number of packet replicas in the network and to expedite the data delivery process.

Cooperative caching has been extensively studied in both wireline and wireless networks [2], [3], [4].
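As an illustration of the store-carry-and-forward idea with social-level-based relay selection, the following sketch shows a node buffering bundles and handing them off only upon contacting a peer with a higher social level. This is a simplified, hypothetical example (class and attribute names are invented), not the exact scheme evaluated in this paper:

```python
# Illustrative sketch (names are hypothetical): a node carries bundles and
# forwards them on contact only if the peer has a higher social level
# (e.g., a precomputed centrality score), pushing data toward hub nodes.

class DTNNode:
    def __init__(self, node_id, social_level):
        self.node_id = node_id
        self.social_level = social_level  # assumed precomputed centrality
        self.buffer = []                  # bundles carried by this node

    def on_contact(self, peer):
        """Hand carried bundles to the peer if it is a better relay."""
        if peer.social_level > self.social_level:
            peer.buffer.extend(self.buffer)
            self.buffer = []

low = DTNNode("a", social_level=1)
hub = DTNNode("b", social_level=5)
low.buffer.append("bundle-1")
low.on_contact(hub)  # the bundle climbs toward the more central node
```

In a real scheme, the relay metric and buffer limits interact, which is one motivation for coordinating where content is cached.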
However, due to the lack of persistent network connectivity, conventional cooperative caching techniques are not applicable to DTNs. Caching in DTNs faces two challenges. First, because of the unstable network topology, it is difficult to determine appropriate caching locations for reducing data access delay. Second, to enhance data accessibility and to reduce the caching overhead of any single node, multiple nodes can be involved in caching; yet it is challenging to coordinate caching among multiple nodes.

Regarding content search and retrieval services, Information-Centric Networking (ICN) has been drawing increased attention in both academia and industry. In ICN, users focus on the content they are interested in; they do not need to know where the content is stored. Each content packet is identified by a unique name, generally drawn from a hierarchical naming scheme. Content retrieval follows a query-reply mode: a content consumer spreads Interest packets through the network, and when matching content is found, either at the content provider or at an intermediate content cache server, the content data traces its way back to the consumer along the reverse route of the incoming Interest.

Previously, we proposed a novel disruption-tolerant mobile ICN [5], which leverages social network routing to query and retrieve content in DTNs. However, we did not consider caching; we assumed that each request can be satisfied by the original content provider. The main motivation for avoiding caches in our previous work was to exercise tight control over the copies delivered and the recipients of such copies, which made revocation possible. In this paper, we relax the control on copies and recipients and allow intermediate nodes to cache copies. Namely, we extend our previous design by addressing caching-related issues. We propose a cooperative, socially inspired caching scheme in which popular content data are cached at cluster head nodes.
These are the popular nodes with the highest social level (i.e., highest centrality) in the network, and thus they store and forward most content requests. However, due to the limited caching buffers of mobile nodes, we also consider distributing cached data along content query paths. Neighbors of downstream nodes may also be involved in caching when there are heavy data accesses at downstream nodes; that is, downstream nodes move some of their existing cached data to neighboring nodes to make room for new data. Finally, we also consider a dynamic cache replacement policy based on both the frequency and recency of data access.

The rest of this paper is organized as follows. Section II reviews the related work. Section III summarizes our previously proposed content retrieval method. Section IV describes the design of the caching scheme in detail. Section V presents the experimental results. Section VI concludes the paper.

II. RELATED WORK

ICN has attracted much attention from the research community. Recent studies have focused on high-level architectures of ICN, and provide sketches of the required components. Content-Centric Networking (CCN) [6] and Named Data Networking (NDN) [7] are two implemented proposals for the ICN concept
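The query-reply retrieval mode common to these ICN designs can be sketched as follows. This is a minimal, illustrative simplification (node and content names are invented): an Interest travels hop by hop, each traversed node is recorded, and on a cache or provider hit the Data retraces the reverse route back to the consumer:

```python
# Simplified sketch of ICN query-reply forwarding (illustrative names only).
# Each node the Interest traverses is recorded, so that when matching content
# is found, the Data packet can follow the reverse route to the consumer.

def forward_interest(path, name, content_stores):
    """Return (node holding the content, reverse route to the consumer)."""
    visited = []
    for node in path:
        visited.append(node)
        if name in content_stores.get(node, set()):  # cache or provider hit
            return node, list(reversed(visited))
    return None, []  # the Interest went unanswered along this path

stores = {"provider": {"/videos/clip1"}}
hit, reverse_route = forward_interest(
    ["consumer", "relay", "provider"], "/videos/clip1", stores)
# Data then flows provider -> relay -> consumer along reverse_route
```

A real ICN forwarder keeps this reverse-path state per face in a pending-interest table rather than carrying it inside the Interest; the sketch only conveys the query-reply symmetry.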