Comput Sci Res Dev (2011) 26: 317–324
DOI 10.1007/s00450-011-0172-2
SPECIAL ISSUE PAPER
A system architecture supporting high-performance and cloud
computing in an academic consortium environment
Michael Oberg · Matthew Woitaszek · Theron Voran ·
Henry M. Tufo
Published online: 6 May 2011
© Springer-Verlag 2011
Abstract The University of Colorado (CU) and the Na-
tional Center for Atmospheric Research (NCAR) have been
deploying complementary and federated resources support-
ing computational science in the Western United States since
2004. This activity has expanded to include other partners
in the area, forming the basis for a broader Front Range
Computing Consortium (FRCC). This paper describes the
development of the Consortium’s current architecture for
federated high-performance resources, including a new 184
teraflop/s (TF) computational system at CU and prototype
data-centric computing resources at NCAR. CU’s new Dell-
based computational plant is housed in a co-designed pre-
fabricated data center facility that allowed the university to
install a top-tier academic resource without major capital
facility investments or renovations. We describe the integration
of features such as virtualization, dynamic configuration of
high-throughput networks, and Grid and cloud technologies
into an architecture that supports collaboration among
regional computational science participants.
M. Oberg (✉) · M. Woitaszek · H.M. Tufo
National Center for Atmospheric Research, 1850 Table Mesa
Drive, Boulder, CO 80305, USA
e-mail: oberg@ucar.edu
M. Woitaszek
e-mail: mattheww@ucar.edu
H.M. Tufo
e-mail: tufo@cs.colorado.edu
T. Voran · H.M. Tufo
University of Colorado, Boulder, UCB 430, Boulder, CO 80309,
USA
T. Voran
e-mail: theron.voran@colorado.edu
Keywords High-performance computing · Data-centric
computing · Regional and federated supercomputing
initiatives
1 Introduction
Access to state-of-the-art computational facilities is essen-
tial for a wide range of computation-driven science disci-
plines and computational science research and education
programs. Often, the demands for high-performance com-
puting (HPC) resources quickly outstrip the ability of a sin-
gle project, group, or even organization to satisfy indepen-
dently. Moreover, as the resources, software applications,
and collaborative projects increase in size and complexity,
the ability for batch scheduling and manual data manage-
ment techniques to meet the diverse requirements dimin-
ishes, and advanced workflow technologies are needed to
appropriately map computational requirements to the avail-
able systems and infrastructure.
The development of computing consortia among peer
institutions allows each institution to better support its
researchers by increasing the diversity of available resources
and the technical capabilities those resources offer. By
dynamically coupling distinct resources, and then support-
ing data-centric and multi-resource workflows, the consor-
tium provides the foundation for large-scale computational
science and collaborative research. Consortium participants
can augment each other’s resources and technical expertise
while still retaining control over their individual resources,
thus establishing a continuum of resource availability and
infrastructure development and growth. The consortium en-
vironment also lays a common substrate for addressing the
technical hurdles common in running large computer sys-
tems in cross-organization collaborations. Additionally, a