C. Kaklamanis et al. (Eds.): Euro-Par 2012, LNCS 7484, pp. 701–715, 2012.
© Springer-Verlag Berlin Heidelberg 2012
Topology Configuration in Hybrid EPS/OCS Interconnects

Konstantinos Christodoulopoulos¹, Marco Ruffini¹, Donal O'Mahony¹, and Kostas Katrinis²

¹ School of Computer Science and Statistics, Trinity College Dublin, Ireland
² IBM Research, Ireland
christok@tcd.ie
Abstract. We consider a hybrid Electronic Packet Switched (EPS) and Optical
Circuit Switched (OCS) interconnection network (IN) for future HPC and DC
systems. Given the point-to-point communication graph of an application, we
present a heuristic algorithm that partitions logical parallel tasks onto compute
resources and configures the (re-configurable) optical part of the hybrid IN to
efficiently serve point-to-point communication. We measure the performance of a
hybrid IN employing the proposed algorithm using real workloads, as well as
extrapolated traffic, and compare it against application mapping on conventional
fixed, electronic-only INs based on toroidal topologies.
Keywords: Reconfigurable interconnection networks, optical circuit switching,
communication graph, application mapping, partitioning, topology configuration.
1 Introduction
High Performance Computing (HPC) systems and datacenters (DCs) are being built
with ever-increasing numbers of processors. Systems with tens of thousands of
servers are already reported to be in operation, and their scale is expected to
grow to the order of millions of cores towards Exascale [1]. To obtain high system
efficiency, computation and communication performance need to be balanced. Given
the aggressive increase in compute density – thanks to the increasing number of
cores per node and the growing deployment of accelerators – it is of paramount
importance to avoid having the interconnection network (IN) become a bottleneck [1-2];
instead, IN technologies and system software need to evolve hand in hand with the
growth in compute density to enable next-generation HPC and DC systems.
Flagship supercomputers typically employ regular topologies of electronic
switches, such as hypercubes and toroidal structures. For instance, the Cray XT5 [3]
utilizes a 3D torus topology, while the K supercomputer [4] employs a 6D torus
(although not all dimensions are complete). Such low-degree regular topologies are
adopted due to their inherent ability to scale linearly with the number of compute
nodes. Still, these sparse topologies tend – for specific applications – to
complicate the mapping of application communication onto the underlying IN [5-10].
At the far end, many HPC clusters and DCs adopt indirect-routing INs, such as
fat-trees [11], that provide for