International Journal of Engineering and Advanced Technology (IJEAT)
ISSN: 2249 – 8958, Volume-9 Issue-2, December, 2019
Published By:
Blue Eyes Intelligence Engineering
& Sciences Publication
Retrieval Number: A5089119119/2019©BEIESP
DOI: 10.35940/ijeat.A5089.129219
Abstract: An important issue that limits users' use of the Internet is long web access delays. One of the most effective ways to address this problem is prefetching. This paper is an attempt to dynamically monitor the network bandwidth, for which a neural network-based model has been developed. Prefetching is an effective and efficient technique for reducing user-perceived latency: it predicts the web pages corresponding to a client's requests that will be accessed in the future and fetches them in advance. Generally, this prediction is based on the historical information that the server maintains, in chronological order, for each web page it serves. Prefetching is a speculative technique; if predictions are incorrect, it adds extra traffic to the network, seriously degrading network performance. There is therefore a critical need for a mechanism that analyzes the network bandwidth of the system before prefetching is done. Based on network conditions, the proposed model not only decides whether prefetching should be done but also determines the number of pages to prefetch in advance so that network bandwidth can be utilized effectively. The proposed control mechanism has been validated using the NS-2 simulator, and the adverse effects of prefetching in terms of response time and bandwidth utilization have thereby been reduced.
Keywords: Network Bandwidth, Neural Network, Prediction,
Prefetching
I. INTRODUCTION
Due to the enormous amount of information present on the World Wide Web, users experience long delays while accessing files from it. Prefetching is a solution that hides these delays. The intent behind prefetching is to take advantage of the idle time between two network accesses, i.e., while users are viewing the web pages they have just downloaded. During this idle period, prefetching predicts and fetches the additional web pages that will be accessed in the near future, based on intelligence added to the applications, so that users' waiting time is reduced and the experience of using the Internet is improved. If the prefetched web pages are indeed requested, they can be accessed with negligible delay. If the system could exactly predict the web pages a user will request next, only those pages would be fetched in advance and the user would enjoy zero latency. Unfortunately, some prefetched web pages may never be used, which wastes network bandwidth
and adds to the principal cost of prefetching. The literature offers many prefetching techniques, discussing their prediction algorithms, accuracy, precision, hit ratio, etc., which are mainly host-side aspects. The second aspect is the networking aspect of prefetching, i.e., how to determine the number of web pages to prefetch so as to reduce its adverse effects on the network. Although prefetching takes advantage of users' idle time, it is also necessary to consider whether the network itself is idle at prefetching time.
Revised Manuscript Received on December 15, 2019.
Sonia Setia, Computer Science, YMCA, Faridabad, India. Email: setiasonia53@gmail.com
Jyoti, Computer Science, YMCA, Faridabad, India. Email: justjyoti.verma@gmail.com
Neelam Duhan, Computer Science, YMCA, Faridabad, India. Email: neelam.duhan@gmail.com
Based on these two aspects, a prefetching scheme basically consists of two modules:
A. Prediction Module
After a user's current request has been satisfied, the prediction module immediately starts working: it predicts the user's future requests by computing the probability with which each web page will be accessed in the near future. Different types of prediction algorithms have been used in the literature for this module.
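The paper does not commit to a particular prediction algorithm here. As an illustrative sketch only (all function names are assumptions, not the authors' method), a simple first-order Markov predictor built from the server's chronological access log could compute such access probabilities:

```python
from collections import defaultdict

def build_transition_counts(access_log):
    """Count how often each page follows another in the chronological log."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(access_log, access_log[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, current_page, top_k=3):
    """Return up to top_k candidate pages with estimated access probabilities."""
    followers = counts.get(current_page)
    if not followers:
        return []
    total = sum(followers.values())
    ranked = sorted(followers.items(), key=lambda kv: kv[1], reverse=True)
    return [(page, n / total) for page, n in ranked[:top_k]]
```

More sophisticated predictors (higher-order Markov models, PPM, association rules) follow the same interface: history in, ranked (page, probability) pairs out.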
B. Threshold Module
Based on network conditions, this module decides whether to prefetch. If it allows prefetching, it computes the value of the prefetch threshold, i.e., how many documents should be prefetched to achieve optimum performance. This module is independent of the prediction module, i.e., the same threshold algorithm can be applied with different prediction algorithms.
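The independence of the two modules can be seen in their interface: the threshold module only filters the predictor's ranked output. A minimal sketch (the names and cutoff semantics below are illustrative assumptions, not the paper's algorithm):

```python
def select_pages_to_prefetch(predictions, prob_cutoff, max_pages):
    """Filter a predictor's ranked output by network-derived limits.

    predictions : list of (page, probability), sorted by probability descending
    prob_cutoff : minimum probability a candidate must reach
    max_pages   : page cap derived from current network conditions
                  (0 means the network is too busy: prefetching is disabled)
    """
    if max_pages <= 0:
        return []
    return [page for page, p in predictions if p >= prob_cutoff][:max_pages]
```

Any prediction algorithm producing (page, probability) pairs can be plugged in front of this filter unchanged, which is exactly the independence property described above.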
This paper focuses on the second aspect of prefetching, i.e., the threshold module, which determines the prefetch threshold based on network conditions in real time. To this end, a control mechanism has been proposed that uses ping's ICMP (Internet Control Message Protocol) messages to compute the RTT (round-trip time); the network bandwidth is also measured so that the prefetch threshold can be controlled and network performance optimized. The mechanism employs a neural network model over the RTT and network bandwidth, on the basis of which it tells whether the system is ready for prefetching and, if so, how many web pages should be prefetched to optimize network usage.
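As a minimal sketch of this idea, the following hand-coded two-input feed-forward network maps measured RTT and bandwidth to a prefetch count. The weights here are hand-set for illustration only; in the paper's setting they would be learned from traces, and the actual RTT measurement via ping ICMP echo messages is omitted:

```python
import math

def mlp_prefetch_decision(rtt_ms, bandwidth_mbps,
                          w_hidden, b_hidden, w_out, b_out,
                          max_prefetch=10):
    """Tiny feed-forward network: 2 inputs -> tanh hidden layer -> sigmoid output.

    The output in (0, 1) is scaled to a prefetch count;
    a result of 0 means 'do not prefetch under current network conditions'.
    """
    # Normalize inputs to roughly [0, 1] (assumed operating ranges).
    x = [min(rtt_ms / 1000.0, 1.0), min(bandwidth_mbps / 100.0, 1.0)]
    hidden = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    z = sum(w * h for w, h in zip(w_out, hidden)) + b_out
    out = 1.0 / (1.0 + math.exp(-z))
    return round(out * max_prefetch)

# Hand-set weights for illustration: low RTT and high bandwidth push the
# output up (more pages prefetched); congested conditions push it to zero.
w_hidden = [[-2.0, 3.0], [-1.0, 2.0]]
b_hidden = [0.0, 0.0]
w_out = [2.0, 1.0]
b_out = -1.0
```

With these weights, a fast link (low RTT, high bandwidth) yields a high prefetch count, while a slow, congested link yields zero, mirroring the go/no-go plus page-count decision the mechanism is meant to make.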
The remainder of the paper is organized as follows. A brief review of related literature is given in Section 2. Section 3 presents the proposed work, in which an algorithm is developed to determine the prefetch threshold based on network conditions. Section 4 summarizes the results of evaluating the proposed work through trace-driven simulations. Finally, the conclusion is presented in Section 5.
Neural Network Based Prefetching Control
Mechanism
Sonia Setia, Jyoti, Neelam Duhan