An Experimental Validation of a Wavelength-Striped,
Packet Switched, Optical Interconnection Network
Assaf Shacham, Member, IEEE, and Keren Bergman, Fellow, IEEE, Fellow, OSA
Abstract—We experimentally validate a complete optical packet
switched interconnection network, implementing the SPINet
architecture. The scalable photonic integrated network (SPINet)
architecture capitalizes on wavelength division multiplexing
(WDM) to provide very large transmission bandwidths, simplify
network design, and reduce the network’s power dissipation. Con-
tention resolution is performed in the optical domain, and a novel
physical layer acknowledgement protocol is employed to mitigate
the associated latency and performance penalties. Moreover, the
SPINet architecture is specifically designed to enable on-chip
integration by avoiding optical delay lines altogether. Exper-
iments presented include a complete functionality verification,
error-free routing of 80 Gb/s wavelength-striped optical packets
(8 wavelengths each modulated at 10 Gb/s) with a bit-error rate
(BER) better than 10⁻¹², and novel performance-enhancement
techniques such as path adjustments and load balancing.
Index Terms—Multistage interconnection networks, multi-
processor interconnection, optical interconnections, photonic
switching systems.
I. INTRODUCTION
OVER THE LAST four decades, progress in high-perfor-
mance computing (HPC) systems has been dominated by
remarkable advances in semiconductor technologies, namely
improved fabrication technologies, circuit design techniques,
and processor microarchitectures. These advances, manifested
in Moore’s law [1], have led to the extremely high performance
presented by today’s CMOS-based microprocessors. Large-
scale distributed systems (e.g., HPC), while benefiting from
this progress, face a severe problem, exacerbated with every
generation: the interconnection network, whose performance
is crucial to the overall system performance, does not keep up
with Moore’s law. The physical laws governing the propagation
of signals in electrical transmission lines are beginning to limit
the performance of communication systems at high data rates
(several Gb/s and higher). Dielectric losses and losses caused
by the skin effect limit the transmission distance, requiring
significant power and dedicated circuits to overcome inter-symbol
interference (ISI) [2]. The resulting systems are expensive,
power hungry, and suffer from a large latency that impacts the
overall system performance [3].
Three critical parameters determine the performance of an
HPC interconnection network: latency, bandwidth, and power
dissipation. The latency is dominated by the propagation delay
across the transmission lines (time of flight) and by the
queueing latency, which is inevitable for systems with large port
counts. As systems scale in port counts and clock speeds, the
latency grows in absolute terms and even more so when measured
in processor clock cycles [3]. Many HPC systems are currently
using latency hiding and masking techniques to overcome this
problem [4], but these techniques incur performance and power
costs and cannot be used in every application.
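To make this concrete, the following back-of-the-envelope Python sketch (ours, not the paper's; the link length, propagation factor, and clock rates are illustrative assumptions) expresses the time of flight of a fixed electrical link in processor clock cycles:

# Back-of-the-envelope sketch: time-of-flight latency in processor cycles.
# All parameter values are illustrative assumptions, not measurements.

C_VACUUM = 3.0e8          # speed of light in vacuum (m/s)
PROPAGATION_FACTOR = 0.6  # assumed signal velocity fraction in copper traces
LINK_LENGTH_M = 10.0      # assumed rack-scale link length (m)

def flight_latency_cycles(clock_hz: float) -> float:
    """Time of flight of one link traversal, expressed in processor cycles."""
    time_of_flight_s = LINK_LENGTH_M / (C_VACUUM * PROPAGATION_FACTOR)
    return time_of_flight_s * clock_hz

# The same physical delay costs proportionally more cycles as clocks speed up:
for f_ghz in (1, 2, 4):
    print(f"{f_ghz} GHz clock: {flight_latency_cycles(f_ghz * 1e9):.0f} cycles")

The physical delay is fixed by the medium, so each increase in clock frequency inflates the same link traversal into proportionally more stall cycles, which is why the latency-hiding techniques cited above become attractive despite their costs.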
Modern processors further challenge network designers by
demanding ever-growing off-chip bandwidth
(e.g., 512 Gb/s in the IBM Cell Broadband Engine processor
[5]). Providing this bandwidth for remote memory accesses
and for interprocessor communication becomes extremely chal-
lenging and power-consuming.
Finally, the power consumed by electronic interconnects in
HPC systems to meet the bandwidth and latency requirements
is becoming the most critical design constraint. The power ex-
pended in HPC systems on computation and communications
grows very quickly with the data rates [6], and the associated
cost and heat dissipation problems have become limiting fac-
tors in the design and deployment of many such systems [7].
The power dissipated by the interconnection network is a large
fraction of the total system power. As systems grow spatially
larger, more power is required to overcome the losses in
transmission lines. This is done either by periodic
regeneration or by sophisticated signal processing techniques
such as pre-emphasis and equalization [8]. Additional concerns
such as cabling density, bending radii, and cooling airflow also
present important and growing challenges to HPC interconnec-
tion network designers.
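As a concrete illustration of the signal-processing techniques mentioned above, the following minimal Python sketch shows generic two-tap transmit pre-emphasis; the tap weight and the test pattern are our own illustrative assumptions, not parameters taken from [8]:

# Minimal sketch of two-tap transmit pre-emphasis, one of the generic
# signal-processing techniques cited above. The tap weight is an assumption;
# real transceivers tune it to the measured channel response.

def pre_emphasize(symbols, alpha=0.25):
    """Boost symbol transitions: y[n] = x[n] - alpha * x[n-1].

    Emphasizing high-frequency content at the transmitter pre-compensates
    for the low-pass behavior of a lossy electrical line, at the cost of
    extra transmit power -- the trade-off discussed in the text.
    """
    out, prev = [], 0.0
    for x in symbols:
        out.append(x - alpha * prev)
        prev = x
    return out

# A bit pattern mapped to +/-1 levels; transitions come out with larger
# amplitude (1.25) while steady runs are de-emphasized (0.75).
print(pre_emphasize([1, -1, -1, 1, 1, 1, -1]))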
Optical transmission technologies have the potential to miti-
gate or even eliminate most of these problems. The bandwidth of
optical fibers, nearly 32 THz [9], can be utilized through wave-
length division multiplexing (WDM) to carry very high data
rates, exceeding 10 Tb/s [10]. The low loss in optical fibers al-
leviates the need for regeneration and sophisticated signal pro-
cessing techniques. Bending radius and spatial volume issues are
also alleviated when optical fibers are used [11]. These reasons
have led to an increasing trend towards using multimode fibers
as point-to-point links in local area networks, HPC, and server
systems [11].
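The aggregate-capacity arithmetic behind these figures is simple, as the Python sketch below shows; the 8 × 10 Gb/s configuration mirrors the experiments in this paper, while the dense-WDM figures are purely hypothetical:

# WDM capacity arithmetic: aggregate rate of a wavelength-striped link.

def aggregate_rate_gbps(num_wavelengths: int, rate_per_channel_gbps: float) -> float:
    """Aggregate rate = number of wavelength channels x per-channel rate."""
    return num_wavelengths * rate_per_channel_gbps

print(aggregate_rate_gbps(8, 10.0))    # 80 Gb/s: the packet format validated here
print(aggregate_rate_gbps(256, 40.0))  # 10.24 Tb/s: hypothetical dense-WDM link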
Point-to-point optical links, however, provide only partial
relief from the power consumption and bandwidth problems. In
order to properly address the power and latency challenges,
optical switching must be employed, taking advantage of bit-rate
transparency by offering end-to-end photonic paths across the
network.