2742 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 57, NO. 12, DECEMBER 2008
One-Way Delay Measurement: State of the Art
Luca De Vito, Sergio Rapuano, Member, IEEE, and Laura Tomaciello
Abstract—Nowadays, performance measurement in computer networks is an important issue. One of the most important network performance parameters for ensuring the quality of service of network communication is the one-way delay (OWD). For accurate OWD estimation, it is essential to consider the parameters that can influence the measurement, such as the operating system and, in particular, the threads running concurrently with the measurement application. Moreover, OWD estimation is not an easy task, because it can be affected by synchronization uncertainties. This paper reviews the solutions proposed in the scientific literature for OWD measurement. These solutions adopt different methods to guarantee reasonable clock synchronization, based on the Network Time Protocol, the Global Positioning System, and the IEEE 1588 Standard. The different approaches are critically reviewed, showing their advantages and disadvantages.
Index Terms—Global Positioning System (GPS), IEEE 1588,
network, Network Time Protocol (NTP), one-way delay (OWD),
synchronization.
I. INTRODUCTION

AS COMPUTER networks become more complex and larger, measurement infrastructures and methodologies become essential in characterizing network performance [1].
The metrics of the greatest relevance for network performance
can be divided into four main groups: 1) availability; 2) loss and
error; 3) delay; and 4) bandwidth.
Availability metrics assess how robust the network is, i.e.,
the percentage of time the network is running without any
problems that impact the availability of services. Loss and error
metrics indicate network congestion conditions, transmission
errors, and/or equipment malfunctioning. They usually measure
the fraction of packets lost in a network due to buffer overflows
or the fraction of errored bits or packets. Bandwidth metrics
assess the amount of data that a user can transfer through the network per unit of time, measured both independently of and depending on the existing network traffic. Finally, delay metrics also assess
network congestion conditions or the effect of routing changes.
They measure the delay [one-way delay (OWD) and round-trip delay (RTD)] and the Internet Protocol delay variation (IPDV, or "jitter") of the packets transferred by a network [2].

Manuscript received October 12, 2007; revised April 23, 2008. First published June 13, 2008; current version published November 12, 2008. L. De Vito and L. Tomaciello are with the Laboratory of Signal Processing and Measurement Information, Department of Engineering, University of Sannio, 82100 Benevento, Italy, and also with the Benevento Research Laboratory, Telsey Telecommunications S.p.A., 82018 San Giorgio del Sannio, Italy (e-mail: devito@unisannio.it; laura.tomaciello@unisannio.it). S. Rapuano is with the Department of Engineering, University of Sannio, 82100 Benevento, Italy (e-mail: rapuano@unisannio.it). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIM.2008.926052

In
particular, the OWD is the time between the occurrence of
the first bit of a packet on the first observation point, e.g., the
transmitting monitor interface, and the occurrence of the last
bit of a packet on the second observation point (RFC 2679 [3]).
The RTD is considered to be the time interval between the time
instant a request packet is sent by a source node and the time
instant a response packet is received from the destination node
(see RFC 2681 [4]). Finally, the IPDV is the difference in the
OWD of a selected pair of packets in a test stream (see RFC
3393 [5]).
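The IPDV definition above (RFC 3393) can be illustrated with a short sketch: given the OWDs of the packets in a test stream, the IPDV of each selected pair is simply the difference of their OWDs. The sketch below selects consecutive packets as pairs; the OWD sample values are hypothetical, not measurements from the paper.

```python
# Sketch of the RFC 3393 IPDV: the difference in the OWD of selected
# packet pairs in a test stream (here, consecutive packets).
# The OWD samples below are hypothetical values, in milliseconds.

owd_samples_ms = [10.2, 10.8, 10.5, 12.1, 10.9]

# IPDV for each consecutive pair: D(i) = OWD(i+1) - OWD(i),
# rounded to the resolution of the input samples.
ipdv_ms = [round(b - a, 1) for a, b in zip(owd_samples_ms, owd_samples_ms[1:])]

print(ipdv_ms)  # [0.6, -0.3, 1.6, -1.2]
```

Note that the IPDV needs no absolute clock agreement between the two endpoints: a constant offset between their clocks cancels in the difference, which is one reason jitter is easier to measure than the OWD itself.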
It is worth noting that each metric deals with time (a time percentage, a time delay, or a time unit); for this reason, network delays are the main indicators in evaluating network performance. Network delays are composed of three components: 1) equipment delay; 2) transmission delay; and 3) propagation delay [6].
The first is the delay introduced by the network equipment before the packet is emitted; it consists of the processing, packet-switching, and queueing delays and depends on the network load and congestion.
The second is the time taken to transmit all the bits of the
frame containing a packet. It depends on the data rate, media,
and distance and can only be controlled in a limited way by the
network planners [7].
The third is the time between the emission of the first bit (or
the last bit) of a packet by the transmitting equipment and the
reception of this bit by the receiving equipment [8].
Therefore, the equipment, transmission, and propagation
delay indicators allow the determination of the time that the
packet spends to travel from source to destination. This time is
called OWD.
The OWD, as shown in Fig. 1, is constituted by three time contributions. Fig. 1(a) represents the equipment delay, which is the time interval between instant t0, when the packet is scheduled for sending, and instant t1, when the packet reaches the interface. Fig. 1(b) represents the transmission delay, which is the time interval between instants t1 and t2, when the packet is completely transmitted onto the medium. Finally, Fig. 1(c) indicates the propagation delay, which is the time interval from t2 up to t3, when the packet reaches the destination interface [9].
The OWD can be obtained by the sum of the transmission
delay and the propagation delay, which could be defined as the
time between the emission of the first bit of a packet by the
source and the reception of the last bit of this packet by
the receiver [3].
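The decomposition above can be illustrated with a short numerical sketch: the transmission delay is the packet length divided by the link data rate, and the propagation delay is the distance divided by the propagation speed of the medium. The link parameters below (packet size, data rate, distance, propagation speed) are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: the wire-time part of the OWD as the sum of the
# transmission delay and the propagation delay.
# All link parameters are assumed example values, not from the paper.

PACKET_SIZE_BITS = 1500 * 8        # a 1500-byte frame, in bits
DATA_RATE_BPS = 100e6              # 100-Mb/s link
DISTANCE_M = 200e3                 # 200 km of fiber
PROPAGATION_SPEED_MPS = 2e8        # roughly 2/3 of c in optical fiber

# Time to clock all the bits of the frame onto the medium.
transmission_delay = PACKET_SIZE_BITS / DATA_RATE_BPS
# Flight time of a single bit from source to destination.
propagation_delay = DISTANCE_M / PROPAGATION_SPEED_MPS

owd = transmission_delay + propagation_delay
print(f"transmission = {transmission_delay * 1e6:.1f} us")  # 120.0 us
print(f"propagation  = {propagation_delay * 1e6:.1f} us")   # 1000.0 us
print(f"OWD          = {owd * 1e6:.1f} us")                 # 1120.0 us
```

On this assumed link, the propagation delay dominates; on short links or at low data rates, the transmission delay can dominate instead.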
Several papers [10]–[24] deal with reliable and accurate ways to measure the OWD. To measure the OWD, a sequence of probe packets must be sent from one end of the monitored network to the other end. Each probe packet is marked with