Struct Multidisc Optim 23, 24–33, Springer-Verlag 2001

On the use of energy minimization for CA based analysis in elasticity

P. Hajela and B. Kim

Abstract  There has been recent interest in exploring alternative computational models for structural analysis that are better suited for a design environment requiring repetitive analysis. The need for such models is brought about by significant increases in computer processing speeds, realized primarily through parallel processing. To take full advantage of such parallel machines, however, the computational approach itself must be revisited from a totally different perspective; parallelization of inherently serial paradigms is subject to limitations introduced by a requirement of information coordination. The cellular automata (CA) model of decentralized computations provides one such approach which is ideally tailored for parallel computers. The present paper examines the applicability of the cellular automata model in problems of 2-D elasticity. The focus of the paper is on the use of a genetic algorithm based optimization process to derive the rules for local interaction required in evolving the cellular automata.

Key words  cellular automata, structural analysis, evolutionary methods

Received August 28, 2000

P. Hajela and B. Kim
Mechanical Engineering, Aeronautical Engineering and Mechanics, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
e-mail: hajela@rpi.edu

1 Introduction

Improved efficiency of numerical simulations is a core issue in the development of the next generation of computer-based design systems. Not only will such systems be used for design of very complex artifacts, but they will have to be tailored for a more effective interface with the human designer. There is some agreement that for the human designer to be effective in their role, answers to "what-if" questions typically posed during a design cycle must be available in an expeditious manner. Towards this end, there has been considerable activity in the MDO community in developing methods for function approximations (Guinta 1997) which provide surrogate models for use in design in lieu of more expensive "exact" calculations. In many instances, however, these surrogate models themselves require substantial numerical data to establish, and such data comes from either physical or numerical experiments. As design optimization moves into addressing practical scale problems, the computational costs associated with numerical experiments have gone up sharply. This bottleneck has continued to develop despite a rapid increase in computer processing speeds over the past decade. While such speeds continue to double every 18 months, there is a growing realization that the silicon technology based processor is rapidly approaching physical constraints that would preclude further dramatic increases in processing speeds. The obvious answer lies in using massive arrays of processors, acting in parallel, to perform lengthy computations.

The use of parallel processing, however, is not without its own challenges. Two distinct paradigms have received considerable attention in the literature. The first deals with optimization of existing code to make it more amenable to improving processing speeds on a parallel computer. This strategy was extensively pursued in computational fluid dynamics, and to a lesser extent in computational solid mechanics.
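The cellular automata model referred to in the abstract is attractive in this setting precisely because every cell updates from its immediate neighbours only, which is the property that makes it well suited to parallel machines. The following minimal Python sketch illustrates that decentralized, synchronous update pattern on a 2-D lattice; the simple neighbour-averaging rule, the field name u, and the lattice size are purely illustrative assumptions and are not the GA-derived interaction rules developed later in the paper.

import numpy as np

def ca_step(u):
    # One synchronous (Jacobi-style) sweep: each interior cell is replaced by
    # the average of its four von Neumann neighbours. This averaging rule is a
    # placeholder for the local interaction rules the paper derives via a GA.
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return new

# Illustrative use: evolve a 32 x 32 lattice with a prescribed value on one edge;
# boundary cells are never updated, mimicking fixed (Dirichlet-like) conditions.
u = np.zeros((32, 32))
u[0, :] = 1.0
for _ in range(500):
    u = ca_step(u)

Because each cell in a sweep depends only on its four neighbours, every update within the sweep is independent of the others and can, in principle, be assigned to a separate processor without the global coordination that burdens inherently serial codes.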
A second approach is based on decomposition of the analysis domain into subdomains, with analysis of different subdomains being performed on different processors. The substructuring concept in finite element analysis (Zienkiewicz and Taylor 1989) is an example of such an approach, and has been reported with varying degrees of success. Almost all applications of the approach, however, report the common problem of a decreasing rate of increase in processing speeds with additional processors. With an increase in processors, the overhead associated with coordination of