On the performance of TCP pacing with DCCP
Shahrudin Awang Nor, Suhaidi Hassan, Osman Ghazali, A.Suki M. Arif
InterNetWorks Research Group
UUM College of Arts and Sciences
Universiti Utara Malaysia
Sintok, Kedah, Malaysia
{shah, suhaidi, osman, suki1207}@uum.edu.my
Abstract-Packet pacing in TCP has been introduced as one
of the solutions to alleviate bursty TCP traffic. In this paper,
we investigate the performance of paced and standard
(unpaced) TCP when they coexist with DCCP over networks
with short and long delay links. We found that TCP paced for
the entire connection performs better over long delay links,
with smoother throughput and lower jitter, whereas over short
delay links pacing has little positive effect on TCP. The
presence of DCCP flows alongside TCP flows does not much
affect the performance of paced TCP. However, paced TCP
performs slightly better than standard TCP when coexisting
with DCCP TCP-like and DCCP TFRC. These results can
serve as a foundation for implementing packet pacing in
DCCP TCP-like.
Keywords-TCP Pacing, DCCP TCP-like, DCCP TFRC
I. INTRODUCTION
The Transmission Control Protocol (TCP) [1] is known to be
a reliable transport protocol with congestion control for
delivering data traffic. TCP provides best-effort service for
error-intolerant, delay-tolerant data such as web, email, and
file transfer. These features make TCP suitable for delivering
important, mission-critical data that requires a reliable,
error-free connection.
In a normal network scenario, the sending rate is
determined by the sender: a new packet is sent into the
network only when the acknowledgement of a previously sent
data packet is received. Instead of using this concept, an
ancestor of pacing, explicit rate control, governs the sending
rate by transmitting packets at a predetermined rate.
Unfortunately, rate control has its own problems, such as
being less responsive to rapid increases in congestion.
Pacing is a hybrid between pure rate control and TCP's
use of acknowledgements to trigger the sending of new data
into the network. Unlike explicit rate control, TCP is very
responsive to network congestion. Governed by its congestion
window, the sender generally stops sending new packets into
the network when congestion is detected. TCP detects
congestion when a retransmission timeout occurs or when the
sender receives three duplicate acknowledgements.
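The idea behind pacing over an entire connection can be illustrated with the classic formulation: instead of releasing a window's worth of packets back-to-back as ACKs arrive, the sender spreads the congestion window evenly across one round-trip time. The sketch below is illustrative only (it is not the authors' implementation; the function name and units are our own):

```python
def pacing_interval(cwnd_packets: int, rtt_seconds: float) -> float:
    """Inter-packet gap so that cwnd packets are evenly spaced over one RTT.

    Illustrative sketch of the usual pacing rule: rate = cwnd / RTT,
    hence gap between consecutive packets = RTT / cwnd.
    """
    if cwnd_packets <= 0:
        raise ValueError("congestion window must be positive")
    return rtt_seconds / cwnd_packets

# Example: cwnd = 10 packets, RTT = 200 ms -> one packet every 20 ms.
gap = pacing_interval(10, 0.200)
print(round(gap * 1000))  # prints 20 (milliseconds)
```

On a long delay link the gap is large, so pacing visibly smooths the traffic; on a short delay link the gap is already tiny, which is consistent with the paper's observation that pacing helps little there.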
In this paper, we investigate the behaviour of a paced
TCP flow when it coexists with standard TCP and Datagram
Congestion Control Protocol (DCCP) [2] flows. DCCP is a
connection-oriented, unreliable transport protocol suited to
error-tolerant, delay-intolerant applications, and it is
friendly to other flows such as TCP. In DCCP, Congestion
Control Identifiers (CCIDs) select the congestion control
mechanism: CCID-2 implements TCP-like Congestion
Control [3] for bursty traffic with abrupt rate changes, and
CCID-3 implements TCP-Friendly Rate Control (TFRC) [4]
for smoother traffic.
This paper is organized as follows: this section
introduces the research; Section 2 reviews related work by
other researchers; Section 3 describes TCP pacing and the
pacing algorithms; Section 4 describes the experimental
setup and performance metrics; Section 5 presents the
results and analysis; and Section 6 concludes the findings.
II. RELATED WORKS
In general, two types of TCP pacing have been
researched: the first is pacing applied during slow-start
restart, known as Rate-Based Pacing (RBP) [5], [6], [7],
[8], and the second is pacing over the entire TCP
connection [9], [10], [11].
RBP for TCP was introduced by Visweswaraiah and
Heidemann [5] to address HTTP's slow-start restart problem
in TCP. Because TCP's congestion avoidance mechanisms
are not tuned for request-response traffic such as HTTP,
some TCP implementations force slow-start in the middle of
a connection that has been idle for a certain amount of time,
even when no packet loss has occurred. Other TCP
implementations do not treat idle time as a special case
and use the prior value of the congestion window to send
data. Both cases lead to poor performance of enhancements
to HTTP over TCP. A simulation of RBP was subsequently
implemented for TCP Vegas and Reno and bundled with the
current ns-2 distribution.
A transmission timer framework for RBP in TCP is
proposed by Kobayashi [6] to mitigate burstiness during
TCP slow-start. In this approach, host software specifies the
time at which each packet should be sent, giving a precise
inter-frame gap for the data stream.
A simulation study of paced TCP by Kulik et al. [8]
proposes a modified leaky-bucket scheme for admitting
packets into the network, limiting the size of bursts
entering the network, especially during the slow-start phase.
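A leaky-bucket admission scheme of the general kind referenced above can be sketched as follows. This is a toy illustration of the standard leaky-bucket idea, not Kulik et al.'s modified scheme; the class and method names are our own:

```python
from collections import deque


class LeakyBucketPacer:
    """Toy leaky-bucket pacer: packets queue on arrival and drain at a
    fixed rate, which caps the size of bursts entering the network."""

    def __init__(self, rate_pkts_per_sec: float):
        self.interval = 1.0 / rate_pkts_per_sec  # drain gap in seconds
        self.queue = deque()
        self.next_send = 0.0  # earliest time the next packet may leave

    def enqueue(self, packet):
        """Admit a packet into the bucket (no loss modelled here)."""
        self.queue.append(packet)

    def dequeue(self, now: float):
        """Return the next packet if the drain rate allows, else None."""
        if self.queue and now >= self.next_send:
            self.next_send = max(self.next_send, now) + self.interval
            return self.queue.popleft()
        return None


# Usage: at 100 packets/s, a burst of arrivals leaves one packet per 10 ms.
pacer = LeakyBucketPacer(100.0)
for seq in range(3):
    pacer.enqueue(seq)
print(pacer.dequeue(0.00))   # prints 0
print(pacer.dequeue(0.00))   # prints None (burst held back)
print(pacer.dequeue(0.01))   # prints 1
```

The drain rate, rather than the arrival pattern, dictates the departure spacing, which is what suppresses the slow-start bursts the study targets.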
The performance of pacing in TCP is discussed
thoroughly by Aggarwal, Savage and Anderson [9]. As a
remedy for bursty traffic flows on modern high-speed
networks, they proposed evenly pacing, or spacing, the data
sent into the network over an entire round-trip time. Their
results showed that pacing offers better fairness, throughput,
2010 Second International Conference on Network Applications, Protocols and Services
978-0-7695-4177-8/10 $26.00 © 2010 IEEE
DOI 10.1109/NETAPPS.2010.14