Large-scale Modeling of Cardiac Electrophysiology
JB Pormann, JA Board, DJ Rose, CS Henriquez
Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
Department of Computer Science, Duke University, Durham, NC, USA
Department of Biomedical Engineering, Duke University, Durham, NC, USA
Abstract
Simulation of wavefront propagation in the whole heart
requires significant computational resources. The growth
of cluster computing has made it possible to simulate
very large scale problems in a lab environment. In this
work, we present computational results from simulating a
reaction-diffusion system of equations at various problem sizes on
a Beowulf cluster. To facilitate comparisons at different
spatial resolutions, an idealized ventricular geometry
was used. The model incorporates anisotropy, fiber
rotation, and realistic membrane dynamics to determine the
computational constraints for the most detailed situations
of interest. Three meshes with mesh spacings of ,
, and , corresponding to roughly ,
, and nodes in the computational domain,
were considered. The results show that good parallel
performance is possible on clusters of up to 32 processors.
1. Introduction
Sophisticated computer simulations are being used to
investigate the factors that generate and sustain life-
threatening heart rhythms such as ventricular fibrillation.
For models with domain sizes that approach the size of
the human heart, the computational resources required
to perform the simulation can exceed that found on a
typical workstation. A domain comprising only 16M nodes
with a membrane model of 5-8 state variables can use
over 8GB of memory. To overcome these computational
constraints, investigators have made use of commercial-
class supercomputers such as the Cray YMP/T90 series
and the IBM SP. While these supercomputers still provide
outstanding performance, they are expensive and not always
available to the average investigator.
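The multi-gigabyte figure above can be checked with a rough back-of-envelope calculation. The sketch below uses illustrative per-node storage counts (state vectors, a 7-point-stencil sparse matrix, Krylov work vectors); these counts are assumptions for illustration, not CardioWave's actual memory layout.

```python
# Back-of-envelope memory estimate for a cardiac reaction-diffusion grid.
# All per-node counts are illustrative assumptions, not measured values.

BYTES_PER_DOUBLE = 8

def state_memory_gb(nodes, state_vars):
    """Memory (GB) for the membrane state vectors alone, in double precision."""
    return nodes * state_vars * BYTES_PER_DOUBLE / 2**30

def total_memory_gb(nodes, state_vars, matrix_nnz_per_row=7, solver_vectors=6):
    """Add a sparse diffusion matrix (~7-point stencil in 3-D, value + index
    per entry) and iterative-solver work vectors to the state storage."""
    state = nodes * state_vars * BYTES_PER_DOUBLE
    matrix = nodes * matrix_nnz_per_row * (BYTES_PER_DOUBLE + 4)
    work = nodes * solver_vectors * BYTES_PER_DOUBLE
    return (state + matrix + work) / 2**30

nodes = 16_000_000
print(f"state only (8 vars): {state_memory_gb(nodes, 8):.2f} GB")
print(f"with matrix + solver workspace: {total_memory_gb(nodes, 8):.2f} GB")
```

Even this partial accounting runs to several gigabytes; additional geometry data, output buffers, and second (extracellular) fields in a full bidomain run push the total toward the figure cited in the text.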
An alternative to using commercial supercomputers is
a Beowulf cluster that involves several closely networked
workstations. Such clusters have become increasingly
popular, but have been viewed as being too experimental
to handle the algorithmic and communication demands
involved in solving the reaction-diffusion equations
used to model electrical dynamics in cardiac muscle.
Recent advances in hardware and software, however,
have made cluster computing attractive for large scale
simulations. In this paper, we describe the computational
performance of a model of wavefront propagation in a
domain whose size approximates that of the human heart. To
investigate the computational needs and test the cluster
environment, an idealized ventricular geometry, with
non-uniform, rotational anisotropy and various models
of cardiac membrane ionic fluxes was used. The
idealized geometry permitted different grids with the
same shape but different element sizes to be studied.
The simulations were performed using CardioWave, a
modular simulation system for the Bidomain Equations
[1], developed in our laboratory. The results show
that when using a distributed memory parallel approach,
the computational and memory resources of multiple but
otherwise independent workstations can be used efficiently
up to 32 processors for a domain size of 16 million
computational nodes.
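The efficiency claim can be framed with the standard speedup and parallel-efficiency definitions. The sketch below uses hypothetical timings for illustration, not measurements from this paper.

```python
# Standard strong-scaling metrics: speedup S(p) = T(1)/T(p),
# efficiency E(p) = S(p)/p. Timings below are hypothetical.

def speedup(t1, tp):
    """Ratio of single-processor time to p-processor time."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Fraction of ideal linear speedup actually achieved."""
    return speedup(t1, tp) / p

t1 = 3200.0                       # hypothetical 1-processor wall time (s)
timings = {1: 3200.0, 8: 430.0, 32: 120.0}   # hypothetical p -> T(p)
for p, tp in timings.items():
    print(f"p={p:2d}  speedup={speedup(t1, tp):6.2f}  "
          f"efficiency={efficiency(t1, tp, p):.2f}")
```

"Good parallel performance up to 32 processors" means the efficiency curve stays close to 1.0 as p grows, rather than decaying as communication begins to dominate.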
2. Methods
CardioWave was developed to solve the system of
bidomain equations on a parallel computer. The specifics
have been described previously [1]. Unlike other simulators
of wavefront dynamics in the heart, CardioWave is not a
single, monolithic program, but rather a set of program
modules related to time integration, output, membrane
kinetics, matrix solvers, etc., from which the user selects to
create a custom executable for the problem of interest. In
the typical simulation described in this work, the explicit-
monodomain time-integrator module, the LR-1 membrane
dynamics module, and the simple stimulus and output modules
were selected to create the simulator. An additional module
was selected to instruct the simulator that the execution
would be performed in parallel. As such, the same set of
modules can be selected and compiled on any distributed
parallel computer without modification.
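The core computation that such a module combination performs is an explicit (forward-Euler) reaction-diffusion update. The sketch below illustrates that update on a 1-D cable; a cubic FitzHugh-Nagumo-style ionic current stands in for the far more detailed LR-1 membrane model, and all constants are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal sketch of one explicit monodomain time step on a 1-D cable.
# A cubic FitzHugh-Nagumo-style current replaces the LR-1 model for
# brevity; D, dt, dx, eps, a are illustrative values only.

def step(v, w, D=1.0, dt=0.01, dx=0.5, eps=0.01, a=0.1):
    # Diffusion: second difference, with no-flux (mirrored) boundaries.
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
    lap[0] = 2 * (v[1] - v[0]) / dx**2
    lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
    i_ion = v * (v - a) * (v - 1.0) + w   # reaction (membrane) current
    v_new = v + dt * (D * lap - i_ion)    # forward-Euler voltage update
    w_new = w + dt * eps * (v - w)        # slow recovery variable
    return v_new, w_new

n = 200
v = np.zeros(n)
v[:10] = 1.0                # suprathreshold stimulus at the left end
w = np.zeros(n)
reached = False
for _ in range(8000):
    v, w = step(v, w)
    reached = reached or v[40] > 0.5   # did the wavefront pass node 40?
print("wavefront reached node 40:", reached)
```

In a distributed-memory run, each processor would own a contiguous block of nodes and exchange only the boundary values needed by the diffusion stencil at every step, which is what keeps the communication cost low relative to the membrane-model arithmetic.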
To test the parallel performance, an idealized geometry
with anisotropic properties was used such that different
grid spacings could be considered while minimizing the
0276-6547/02 $17.00 © 2002 IEEE 259 Computers in Cardiology 2002;29:259-262.