Published in IET Communications
Received on 4th October 2007, revised on 1st July 2008
doi: 10.1049/iet-com:20070495
Special Issue on Optical Burst Switching
ISSN 1751-8628

Dimensioning for in-band and out-of-band signalling protocols in OBS networks

A. Pantaleo 1, M. Tornatore 1, A. Pattavina 1, C. Raffaelli 2, F. Callegati 2
1 Politecnico di Milano, Milano, Italy
2 University of Bologna, Bologna, Italy
E-mail: tornator@elet.polimi.it

Abstract: Most of the previous works on optical burst switching (OBS) assume in their analysis that signalling does not affect network performance. Here we analyse under which conditions the effect of signalling is actually negligible, by taking signalling into account in the evaluation of the burst discard probability. First, analytical models for two different signalling approaches in an OBS network are presented: the 'out-of-band' and the 'in-band' techniques. The impact of these two signalling strategies on the burst discard probability is evaluated, identifying the component of bursts discarded as a consequence of control message losses or of excessive signalling delay. A new method, based on these models, is also discussed to assign the correct amount of resources to the control plane. To verify the accuracy of the analytical models, their results are compared with discrete-event simulations and are found to be in highly satisfactory agreement.

1 Introduction

Optical burst switching (OBS) [1] is a paradigm for optical transport networks that has been widely studied [2] and that has been proposed as a compromise between optical circuit switching (OCS) and optical packet switching (OPS). The OBS transport architecture is based on a bufferless wavelength division multiplexing (WDM) network, where data bursts consisting of multiple packets are created at border or ingress nodes and switched all-optically by intermediate or core nodes along the network [3]. This is possible thanks to a control message, or header, which is transmitted ahead of the burst and whose goal is to configure the switches along the path before the arrival of the corresponding data burst. Header and burst are separated at the source node by a fixed time interval, called offset time, which allows the core nodes to be configured before the burst arrives. Headers undergo an O/E conversion in all nodes and are processed in the electronic domain, where decisions about reservation and routing of the incoming burst are taken. Bursts, instead, cut through the whole network optically without O/E/O conversion, and can be lost when no bandwidth is available, that is, when no wavelength channel can be reserved upon their arrival (a simple numerical illustration of this loss mechanism is sketched below).

Current literature has investigated numerous aspects of this technique (e.g. assembly, scheduling, contention resolution, reservation) and their impact on burst loss [4–6]. A common assumption has been that the impact of the control plane (signalling) on overall performance is negligible. Some papers have evaluated different signalling schemes and found no differences for common systems and only minimal differences for dense systems [4]. In [7], the authors assess for the first time the possibility that headers expire because of queueing delay, and they study by simulation the impact of the processing time on control plane performance. In [8], it is proposed to mitigate the effect of header losses indirectly, by choosing an appropriate minimum mean burst length.
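As a rough numerical illustration of the data-plane loss mechanism recalled above, the sketch below (not taken from this paper) estimates the burst discard probability on a single bufferless output link with W wavelength channels using the standard Erlang B formula; it assumes Poisson burst arrivals and full wavelength conversion, which are our illustrative assumptions rather than the paper's model.

```python
import math


def erlang_b(offered_load: float, channels: int) -> float:
    """Erlang B blocking probability, computed with the numerically stable recursion."""
    b = 1.0
    for m in range(1, channels + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b


# Example: a link with 32 wavelengths offered 24 Erlangs of burst traffic.
# The result is the fraction of bursts discarded because no wavelength is free.
print(erlang_b(offered_load=24.0, channels=32))
```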
In our paper, we propose suitable analytical models to determine under which conditions the control plane is really negligible, and we illustrate new procedures to assign the correct amount of resources to the control plane.
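As a purely illustrative complement to such models, the following Monte Carlo sketch estimates the probability that the cumulative header delay along the path exceeds the offset-time margin, in which case the burst reaches a node before its header has been processed and is discarded. The assumptions here (a single M/M/1-like control unit per node, a fixed per-hop processing time, the specific parameter values) are ours and are not the models developed in the paper.

```python
import random


def p_header_expiry(offset_margin: float, hops: int, proc_time: float,
                    rho: float, trials: int = 100_000) -> float:
    """Estimate the probability that the total header delay (fixed processing
    plus M/M/1-like queueing at each hop) exceeds the offset-time margin,
    so the burst overtakes its header and is discarded.
    All parameters are illustrative assumptions, not the paper's model."""
    expired = 0
    for _ in range(trials):
        delay = 0.0
        for _ in range(hops):
            delay += proc_time                        # header processing at this node
            if random.random() < rho:                 # header finds the control unit busy
                # M/M/1 waiting time conditioned on waiting: Exp with rate (1 - rho)/proc_time
                delay += random.expovariate((1.0 - rho) / proc_time)
        if delay > offset_margin:
            expired += 1
    return expired / trials


# Example: offset sized for 5 hops of 10 us processing plus a 30 us margin,
# with the control channel loaded at 60%.
print(p_header_expiry(offset_margin=80e-6, hops=5, proc_time=10e-6, rho=0.6))
```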