Large-scale application of some modern CSM methodologies by parallel computation

K.T. Danielson a,b,*, R.A. Uras c, M.D. Adley b, S. Li a

a Mechanical Engineering and Army High Performance Computing Research Center, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208-3111, USA
b US Army Engineer Research and Development Center, Waterways Experiment Station, 3909 Halls Ferry Road, CEERD-SD-R, Vicksburg, MS 39180-6199, USA
c Reactor Engineering Division, Argonne National Laboratory, Argonne, IL 60439, USA

Abstract

In this paper, the authors demonstrate the significant benefits that High Performance Computing has provided for several large-scale applications of some modern Computational Structural Mechanics (CSM) methodologies. Large, complex dynamic analyses involving large strain/deformation and inelasticity were performed at reasonable cost by parallel processing with recent constitutive models and modern computational techniques. The predictions were made with finite element and mesh-free method software developed by the authors, using the Message Passing Interface on CRAY T3E and IBM SP platforms. Excellent scalability on hundreds of processors was attained, which demonstrated the large-scale viability of the methodologies and greatly improved the authors' research and development productivity. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords: Parallel computing; Finite elements; Meshfree methods; Reproducing kernel particle methods; Large deformation; Inelasticity; Explicit dynamics

1. Introduction

The advent of effective and reliable parallel computing platforms and the creation of extensive communication software standards (e.g. the Message Passing Interface, MPI) have increased the use of High Performance Computing (HPC). Several popular Computational Structural Mechanics (CSM) codes (e.g.
ParaDyn [1], PRONTO 3D [2], LS-DYNA) have incorporated coarse-grain strategies for efficient computation on massively parallel processing systems, making large-scale analysis (e.g. millions of degrees of freedom) by traditional finite element methods feasible. In this paper, the authors demonstrate the significant benefits that HPC has provided for large-scale analyses with some modern CSM methodologies. Large, complex analyses with advanced constitutive models and modern computational techniques can be performed at reasonable cost by parallel processing, which can greatly improve the productivity of analysts and researchers. Applications of three recent developments to large-scale dynamic problems involving large strain/deformation and inelasticity are presented.

The predictions are made with finite element and mesh-free method software [3–5] developed by the authors using coarse-grain parallelism and MPI calls. All analyses were conducted on CRAY T3E-1200 and IBM SP platforms at the US Army Engineer Research and Development Center (ERDC) and the Army High Performance Computing Research Center (AHPCRC). The basic parallel implementations of the methods are similar, but have some important differences. Explicit time integration was used in all analyses. Separate preprocessing partitioning software was created, using METIS [6], to distribute computations and minimize communications. Elements and integration points are uniquely assigned to processors for the finite element and meshfree methods, respectively. Shared nodes/particles are duplicated on processors for data locality. Additional software was written to gather the individual processor output files into a single database for postprocessing. To minimize communication costs, transmission of model partition boundary contributions to the nodal/particle equations is overlapped with partition interior computations by nonblocking MPI sends and receives.
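The partitioning scheme above — elements uniquely assigned to processors, boundary nodes duplicated on each partition, and duplicated contributions summed afterwards — can be illustrated with a small serial sketch. This is plain Python with a toy 1D mesh, not the authors' code: the contiguous-block partitioner stands in for METIS, and the final summation loop stands in for the nonblocking MPI exchange.

```python
# Serial illustration (assumed/simplified) of element partitioning with
# duplicated shared nodes. Each element adds 1.0 to each of its nodes,
# a stand-in for a lumped-mass or internal-force contribution.

def assemble_serial(elements, n_nodes):
    """Reference single-processor assembly."""
    f = [0.0] * n_nodes
    for (a, b) in elements:
        f[a] += 1.0
        f[b] += 1.0
    return f

def partition(elements, n_ranks):
    """Uniquely assign contiguous element blocks to ranks (a stand-in
    for a METIS-style partition that balances load and minimizes the
    number of shared boundary nodes)."""
    chunk = (len(elements) + n_ranks - 1) // n_ranks
    return [elements[r * chunk:(r + 1) * chunk] for r in range(n_ranks)]

def assemble_partitioned(elements, n_nodes, n_ranks):
    parts = partition(elements, n_ranks)
    # Each "processor" assembles only its own elements; nodes shared
    # between partitions are duplicated, so each rank holds a partial
    # value at those nodes.
    partial = [[0.0] * n_nodes for _ in range(n_ranks)]
    for r, elems in enumerate(parts):
        for (a, b) in elems:
            partial[r][a] += 1.0
            partial[r][b] += 1.0
    # "Communication" step: sum the duplicated boundary contributions.
    # In the actual codes this exchange uses nonblocking MPI sends and
    # receives, overlapped with the interior-element computation above.
    f = [0.0] * n_nodes
    for r in range(n_ranks):
        for i in range(n_nodes):
            f[i] += partial[r][i]
    return f

elements = [(i, i + 1) for i in range(8)]   # tiny 1D mesh: 8 elements, 9 nodes
serial = assemble_serial(elements, 9)
parallel = assemble_partitioned(elements, 9, 2)
assert parallel == serial   # partitioned assembly reproduces the serial result
```

The point of the sketch is the invariant it asserts: because every element is owned by exactly one partition, summing the duplicated boundary entries recovers exactly the serial assembly, independent of how the mesh is partitioned.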
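Explicit time integration, used in all of the analyses, advances the solution one small step at a time from the current forces alone, with no global linear solve — which is why it parallelizes so naturally. A minimal central-difference update for a single-degree-of-freedom oscillator (illustrative only; not the authors' implementation) looks like:

```python
import math

def central_difference(m, k, u0, v0, dt, n_steps):
    """Explicit central-difference update for m*u'' + k*u = 0.
    Each step uses only the current internal force divided by the
    (lumped) mass -- no global system is solved, which is what makes
    the scheme attractive for large parallel explicit-dynamics codes.
    Illustrative sketch only."""
    u, v = u0, v0
    a = -k * u / m                  # initial acceleration from the equation of motion
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * a   # half-step velocity
        u = u + dt * v_half         # displacement update
        a = -k * u / m              # new acceleration (internal force / mass)
        v = v_half + 0.5 * dt * a   # complete the velocity step
    return u, v

# Undamped oscillator with omega = 1, so the exact solution is u(t) = cos(t).
u, v = central_difference(m=1.0, k=1.0, u0=1.0, v0=0.0, dt=1e-3, n_steps=1000)
assert abs(u - math.cos(1.0)) < 1e-3
```

The trade-off, hinted at by the tiny `dt`, is conditional stability: the step size must stay below a limit set by the highest frequency (smallest element) in the model, so large meshes take many cheap steps rather than few expensive ones.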
2. Explosive detonation in a reinforced concrete wall

The development of a microplane concrete constitutive

Advances in Engineering Software 31 (2000) 501–509

* Corresponding author. US Army Engineer Research and Development Center, Waterways Experiment Station, 3909 Halls Ferry Road, CEERD-SD-R, Vicksburg, MS 39180-6199. Fax: +1-601-634-2211. E-mail address: danielk@wes.army.mil (K.T. Danielson).