End-to-End Congestion Control Techniques for Routers

B. Mahesh¹, M. Venkateswarlu², M. Raghavendra³
¹Department of CSE, Intell Engineering College, Anantapur, India. mahesh.bhasutkar@gmail.com
²Department of CSE, Dr.K.V.S.R College of Engg. for Women, Kurnool, India. venkateswarlu.maninti@gmail.com
³Junior Engineer, McLaren R&D, Hyderabad, India. raghu.knl@gmail.com
Abstract— End-to-end packet delay is one of the canonical
metrics in Internet Protocol (IP) networks and is important
both from the network operator and application performance
points of view. The motivation for the present work is a
detailed knowledge and understanding of such “through-router”
delays. A thorough examination of delay leads inevitably to
deeper questions about congestion and router queuing dynamics
in general. Although many studies have examined delay
statistics and congestion measured at the edges of the network,
very few have been able to report with any degree of authority
on what actually occurs at switching elements. In existing
systems, single-hop packet delay has been measured and
analyzed through operational routers in a backbone IP
network; however, since the router had only one input and one
output link, both of the same speed, the internal queuing was
extremely limited. In this paper we work with a data set
recording all IP packets traversing a Tier-1 access router. All
input and output links were monitored, allowing a complete
picture of congestion, and in particular of router delays, to be
obtained. This paper provides a comprehensive examination of
these issues, from the understanding of their origins to their
measurement.
Keywords- congestion, delay, busy period, IP router, packet delay
analysis, input-queueing, scheduling
I. INTRODUCTION
End-to-End packet delay is an important metric to
measure in networks, both from the network operator and
application performance points of view. An important
component of this delay is the time for packets to traverse
the different forwarding elements along the path. This is
particularly important for network providers, who may have
Service Level Agreements (SLAs) specifying allowable
values of delay statistics across the domains they control. A
fundamental building block of the path delay experienced by
packets in Internet Protocol (IP) networks is the delay
incurred when passing through a single IP router. Examining
such ‘through-router’ delays is the main topic of this paper.
The first aim of this paper is a simple one: to exploit
this unique data set by reporting in detail on the magnitudes,
and also the temporal structure, of delays on high-capacity
links with nontrivial congestion. Our second aim is to use the
completeness of the data as a tool to investigate how packet
delays occur inside the router. Packet delays and congestion
are fundamentally linked, as the former occur precisely
because periods of temporary resource starvation, or
microcongestion episodes, are dealt with via buffering. Our
third contribution is an investigation of the origins of such
episodes, driven by the question, “What is the dominant
mechanism responsible for delays?” We use a powerful
methodology of virtual or “semi-” experiments, which exploits
both the availability of the detailed packet data and the
fidelity of the router model.
The paper is organized as follows. The
router measurements are presented in Section II, and
analyzed in Section III, where the methodology and sources
of error are described in detail. In Section IV we analyze the
origins of microcongestion episodes.
II. FULL ROUTER MONITORING
In this section, we describe the hardware involved
in the passive measurements, present the router monitoring
set-up, and detail how packets from different traces are
matched.
A. Router Architecture:
Our router is of the store & forward type and implements
Virtual Output Queues (VOQ). The router is composed of a
switching fabric controlled by a centralized scheduler, and
interfaces or linecards. Each linecard controls two links: one
input and one output. When a packet arrives at the input link of
a linecard, its destination address is looked up in the
forwarding table. This does not occur, however, until the
packet completely leaves the input link and fully arrives in
the linecard’s memory: the ‘store’ part of store & forward.
The packet is placed in the appropriate virtual output queue of
the input interface, where it is decomposed into fixed-length
cells. When the packet reaches the head of the line, it is
transmitted through the switching fabric cell by cell to its
output interface and reassembled before being handed to the
output link scheduler, i.e. the ‘forward’ part of store &
forward. The packet might then experience queuing before being
serialized without interruption onto the output link. In
queuing terminology it is ‘served’ at a rate equal to the
bandwidth of the output link.
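The store-and-forward path just described can be sketched as a small simulation. The code below is a minimal illustration, not the authors' router model: the cell size, fabric rate, and output link rate are all assumed values, and a single FIFO virtual output queue feeding one output link is modeled.

```python
import math

# Illustrative assumptions (not the measured router's parameters):
CELL_BYTES = 64          # assumed fixed cell size for the fabric
FABRIC_RATE = 2.5e9      # assumed switching-fabric rate, bits/s
OUT_RATE = 155.52e6      # assumed OC-3 output link rate, bits/s

def through_router_delay(arrivals, sizes):
    """Per-packet through-router delays (seconds) for one VOQ/output.

    arrivals: times at which each packet has *fully* arrived in the
              linecard memory (i.e. the 'store' step is complete).
    sizes:    packet sizes in bytes. Packets are served FIFO.
    """
    fabric_free = 0.0    # when the fabric finishes the previous packet
    link_free = 0.0      # when the output link finishes serializing
    delays = []
    for t, size in zip(arrivals, sizes):
        # cell-by-cell transfer through the switching fabric
        n_cells = math.ceil(size / CELL_BYTES)
        fabric_start = max(t, fabric_free)
        fabric_free = fabric_start + n_cells * CELL_BYTES * 8 / FABRIC_RATE
        # reassembled packet may queue for the output link, where it
        # is 'served' at the bandwidth of the output link
        start_tx = max(fabric_free, link_free)
        link_free = start_tx + size * 8 / OUT_RATE
        delays.append(link_free - t)   # full departure minus full arrival
    return delays

# Two back-to-back 1500-byte packets: the second additionally waits
# behind the first in the output queue.
d = through_router_delay([0.0, 0.0], [1500, 1500])
```

Under these assumptions the second packet's extra delay is exactly one serialization time of the first packet, illustrating how output queuing, rather than the fabric, dominates through-router delay on a slow output link.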
2011 International Conference on Communication Systems and Network Technologies
978-0-7695-4437-3/11 $26.00 © 2011 IEEE
DOI 10.1109/CSNT.2011.40