Research Article
A Novel Cooperative Cache Policy for Wireless Networks
Lincan Li,1 Chiew Foong Kwong,1 Qianyu Liu,2 Pushpendu Kar,3 and Saeid Pourroostaei Ardakani3
1Department of Electrical and Electronic Engineering, University of Nottingham Ningbo China, 315100 Ningbo, China
2International Doctoral Innovation Centre, University of Nottingham Ningbo China, 315100 Ningbo, China
3School of Computer Science, University of Nottingham Ningbo China, 315100 Ningbo, China
Correspondence should be addressed to Chiew Foong Kwong; chiew-foong.kwong@nottingham.edu.cn
Received 11 February 2021; Revised 16 July 2021; Accepted 27 July 2021; Published 10 August 2021
Academic Editor: Vishal Sharma
Copyright © 2021 Lincan Li et al. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Mobile edge caching is an emerging approach to managing high mobile data traffic in fifth-generation wireless networks, reducing content access latency and offloading data traffic from backhaul links. This paper proposes a novel cooperative caching policy based on long short-term memory (LSTM) neural networks that accounts for the characteristics of the heterogeneous network layers and the moving speeds of users. Specifically, LSTM is applied to predict content popularity, and a size-weighted content popularity metric is used to balance the impact of predicted popularity against content size. We also introduce a two-level caching architecture consisting of several small base stations (SBSs) and macro base stations (MBSs). Because fast-moving users frequently hand over among SBSs, their requests would distort the content popularity distribution of any single SBS; therefore, fast-moving users are served by MBSs regardless of which SBS they are in, while SBSs serve low-speed users, and SBSs in the same cluster can communicate with one another. Simulation results show that, compared with common cache methods such as least frequently used and least recently used, the proposed policy achieves at least 8.9% lower average content access latency and at least a 6.8% higher offloading ratio.
1. Introduction
Wireless networks are undergoing exponential growth in
mobile data traffic due to the massive utilisation of mobile
devices and the tendency for high data rates and low-
latency applications [1, 2]. Based on [3], the global mobile
data traffic has increased sevenfold from 2016 to 2021, rendering current network capacity insufficient. Moreover, massive low-latency applications, such as real-time monitoring and intelligent driving, aggravate the network overload [4]. The most common approach to this problem is to deploy ultra-dense BSs to increase network capacity. However, upgrading these infrastructures comes at a high cost for mobile operators [5]. Furthermore, even if capacity is expanded by upgrading the infrastructure, the latency requirements of low-latency applications cannot be met due to the long distance between users and the remote core network [6].
To meet the increasing demand for data traffic and pro-
vide low-latency services, mobile edge caching is introduced
in wireless networks [7]. Mobile edge caching deploys popular content at edge nodes close to users, in base stations (BSs) and/or user terminals (UTs). Users can directly access
their requested content from the edge server rather than the
remote core network via backhaul links [8], reducing content
access latency due to decreased content transmission dis-
tances. In addition, data congestion in the backhaul links
can be efficiently alleviated as many repeated content
requests are avoided in the backhaul links [9, 10].
However, cache capacity is limited, so only a part of the contents, rather than all of them, can be cached; requests for uncached contents must still be satisfied at the core network [11]. Designing an efficient cache policy under limited cache capacity is therefore a challenge. To this end, predictive cache schemes have been proposed, in which content is precached before it is requested [12]. Once the cached content
Hindawi, Wireless Communications and Mobile Computing, Volume 2021, Article ID 5568935, 18 pages. https://doi.org/10.1155/2021/5568935