A. Ageev, D. Macii, D. Petri
DISI – University of Trento, Via Sommarive 14, 38100, Trento
Ph. +39 0461881571, Fax: +39 0461882093
E-mails: ageev@disi.unitn.it, macii@disi.unitn.it, petri@disi.unitn.it

The communication latency between Wireless Sensor Network (WSN) nodes can significantly affect the performance of real-time monitoring applications, especially when multi-hop network topologies are used. In fact, end-to-end network delays may hinder the collaboration between different devices, thus preventing the applicability of data fusion algorithms or limiting the accuracy of inter-node time synchronization. Unfortunately, performing a thorough characterization of such latencies is not an easy task, because they may depend considerably on the overall network data traffic as well as on the typical vagaries of RF links. In order to gain a deeper insight into the communication latencies of today's WSNs, this paper describes a suitable measurement procedure and reports some experimental results for different packet sizes, traffic conditions and numbers of hops.

Estimating the communication latency in wired or wireless networks is a well-known measurement issue that has been studied extensively in recent years [1]-[3]. In wireless networks this problem is even more critical, due to the poor robustness that usually affects RF connections. In wireless sensor networks (WSNs) the contributions to the communication latency have been estimated mostly for synchronization purposes [4]-[6]. Nonetheless, only little experimental data are available in the literature, although quite a deep analysis of the main uncertainty sources affecting time synchronization has been performed in [7]-[8].

Typically, a WSN node consists of a sensor board, a battery, a memory chip, a radio transceiver and a microcontroller (MCU) clocked either by an internal or an external oscillator. Although the MCU timers are commonly used to estimate the communication latencies (e.g.
by measuring the round-trip time [9]), the uncertainty associated with such measurements is quite large, due to the limited resolution of the timers, to the moderate stability of the local on-board oscillators, and to the additional delays of the software routines reading the timers' content at both the transmitting and the receiving ends. Alternatively, the inter-node communication latency can be estimated at a low level by measuring the time interval between the rising edge of a pulse generated by the transmitting node on a specific MCU pin just before the transmission of a packet starts, and the rising edge of the corresponding pulse generated by the receiving node on another MCU pin immediately after the packet is successfully received. In fact, since the MCU usually controls the whole communication process, it is possible to map that process onto the executed program and to introduce a few additional lines of code responsible for the pulse generation.

The increasing use of Medium Access Control (MAC) layers based on Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocols (such as those described in the well-known IEEE 802.15.4 standard [10]) makes the communication latency very sensitive to the traffic conditions of the network. As a consequence, when the channel is sensed busy, or in the case of packet collisions, it is very difficult to estimate the actual end-to-end latency using the MCU timers alone.

In this paper, the problem of measuring the communication latency is addressed by means of an automated procedure based on a LabVIEW™ application. The collected results focus on the dependence of the communication latency on the payload size, the network traffic conditions and the number of hops.
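To give an intuition of why CSMA/CA makes the latency traffic-dependent, the following Python sketch simulates the random backoff of the unslotted IEEE 802.15.4 CSMA/CA algorithm with the default MAC parameters of the 2.4 GHz PHY. The channel-busy probability `p_busy` is a simplifying modeling assumption introduced here for illustration, not a quantity measured in the paper.

```python
import random

# IEEE 802.15.4 default unslotted CSMA/CA parameters (2.4 GHz O-QPSK PHY).
UNIT_BACKOFF_US = 320          # aUnitBackoffPeriod = 20 symbols x 16 us/symbol
MAC_MIN_BE = 3                 # macMinBE
MAC_MAX_BE = 5                 # macMaxBE
MAC_MAX_CSMA_BACKOFFS = 4      # macMaxCSMABackoffs

def csma_backoff_delay(p_busy, rng=random):
    """Simulate the random backoff delay (in microseconds) accumulated
    before one transmission attempt, assuming each clear channel
    assessment (CCA) independently finds the channel busy with
    probability p_busy. Returns (delay_us, success)."""
    nb, be = 0, MAC_MIN_BE
    delay = 0
    while True:
        # Wait a random number of unit backoff periods in [0, 2^BE - 1].
        delay += rng.randrange(2 ** be) * UNIT_BACKOFF_US
        if rng.random() >= p_busy:      # CCA finds the channel idle
            return delay, True
        nb += 1
        be = min(be + 1, MAC_MAX_BE)
        if nb > MAC_MAX_CSMA_BACKOFFS:  # give up: channel access failure
            return delay, False

if __name__ == "__main__":
    for p in (0.0, 0.5, 0.9):
        rng = random.Random(0)
        delays = [csma_backoff_delay(p, rng)[0] for _ in range(10000)]
        print(f"p_busy={p}: mean backoff = {sum(delays)/len(delays):.0f} us")
```

Even this simplified model shows that the channel-access delay grows by roughly an order of magnitude as the busy probability increases, and that it is random on a per-packet basis, which is precisely why end-to-end latency cannot be inferred from the MCU timers alone under load.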
The basic components of the proposed experimental setup are shown in Figure 1. They include not only a sender-receiver pair of nodes, but also a random traffic generator, namely a node that is explicitly used to emulate a known amount of traffic within the network, as will be explained better