Digital Manufacturing
Although there’s a widespread belief that the effective application of high-performance
computing will dramatically increase industrial innovation, progress in this area has been
slow and limited because of a combination of technical and economic impediments. Here,
such impediments are outlined, along with efforts to address them.
Bringing HPC to Engineering Innovation

Mark S. Shephard, Cameron Smith, and John E. Kolb
Rensselaer Polytechnic Institute
It's well recognized that US industry must
focus on innovation. A review of the current
Council on Competitiveness publication list
(see www.compete.org/publications) clearly
indicates that the application of simulation using
high-performance computing (HPC) is critical to
industrial innovation. Case studies demonstrate
the importance of HPC across all industrial sec-
tors. It’s also well recognized that taking advan-
tage of advances in nanotechnology is at the core
of many of the innovations possible in product
development and healthcare. However, the ability
to translate those advances into new products and
industries requires the transformation of exist-
ing modeling, analysis, and design methodologies
into ones that explicitly account for the interac-
tions of phenomena across the atomic, molecular,
microscopic, and macroscopic scales. The compu-
tational needs of such simulations are dramatically
higher than those of single-scale analyses, and the
software infrastructure needed is also much more
complex.
Some companies make extensive use of mas-
sively parallel simulation. What isn’t as obvious
is that in areas where computer-aided engineer-
ing (CAE) has been used for many years, the
level of computation being used for the majority
of simulations is far from what’s needed, and it’s
far below what current HPC systems can provide.
Closer examination of the engineering problems
being addressed indicates that, in most cases, the
resolution of the models and discretizations ap-
plied isn’t high enough for engineers to ensure the
simulation results’ reliability, and the simulations
being applied are at a single scale, ignoring the in-
novations made possible by performing multiscale
simulations. For example, in an April 2009 case
study,1 a 168-processor system was applied to sup-
port a major manufacturer’s HPC needs. Although
this case study does demonstrate impressive gains,
168 cores is less than 1/1,000th of the 294,912 pro-
cessors used for a single simulation with tools2 that
we're applying to industrial problems. Addition-
ally, these massively parallel machines can support
concurrent execution of multiple simulations. This
high-throughput capability, when applied to design
optimization and parameter studies, can result
in a dramatic reduction in time to completion, as the
sketch below illustrates.
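As a minimal sketch of that high-throughput mode, the following Python script launches several independent solver runs concurrently for a small parameter study. The solver invocation (mpirun with a placeholder my_solver executable), the parameter names, and the core count are hypothetical stand-ins for whatever CAE tool and design variables a particular study would use.

# Minimal sketch of a high-throughput parameter study: each design point is an
# independent solver run, so many runs can execute concurrently on a large machine.
# The solver command ("mpirun ... my_solver") and the parameter names below are
# hypothetical placeholders, not the specific tools cited in this article.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_case(mesh_size, load_factor):
    """Launch one simulation for a single point in the design space."""
    cmd = [
        "mpirun", "-np", "64", "my_solver",      # hypothetical MPI solver
        f"--mesh-size={mesh_size}",
        f"--load-factor={load_factor}",
        f"--output=case_{mesh_size}_{load_factor}.out",
    ]
    return subprocess.run(cmd, capture_output=True).returncode

# Sweep a small design space; the executor keeps several solver jobs in flight
# at once, which is where the reduction in time to completion comes from.
cases = [(m, lf) for m in (0.01, 0.005, 0.0025) for lf in (0.8, 1.0, 1.2)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda c: run_case(*c), cases))

print("failed cases:", sum(1 for r in results if r != 0))

The same pattern scales up naturally: on a machine with tens of thousands of cores, the worker count and per-run core count simply grow, while the study itself remains an embarrassingly parallel collection of independent runs.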
You could argue that machines with hundreds of
thousands of processing cores are well beyond what
industry would obtain. However, industry will
easily be able to justify next-generation massively
parallel machines with more than 10,000 cores, given
the continued dramatic decreases in machine
costs and power requirements relative to those of cur-
rent systems. In addition, through opportunities
such as the US Department of Energy’s (DOE’s)
Innovative and Novel Computational Impact
on Theory and Experiment (INCITE) and the
National Science Foundation’s (NSF’s) Extreme