Information Processing Letters 70 (1999) 197–204

Two design patterns for data-parallel computation based on the master-slave model

Kuo-Chan Huang a,1, Feng-Jian Wang b,*, Jyun-Hwei Tsai a,2

a National Center for High-Performance Computing, Taiwan, ROC
b Department of Computer Science and Information Engineering, National Chiao Tung University, Hsinchu, Taiwan, ROC

Received 18 March 1998; received in revised form 5 January 1999
Communicated by F.Y.L. Chin

Abstract

This paper presents two design patterns useful for parallel computations based on the master-slave model. The patterns address task management and parallel and distributed data structures. They help address the issues of data partitioning and mapping, and of dynamic task allocation and management, in parallel programming, with the benefit of less programming effort and better program structure. The patterns are described in object-oriented notation, accompanied by illustrative examples in C++. We also report our experience in applying these patterns to two scientific simulation programs, simulating the Ising model and plasma physics respectively. Since the master-slave model is a widely used parallel programming paradigm, the design patterns presented in this paper have broad potential application in parallel computation. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Software design and implementation; Parallel processing; Design pattern

1. Introduction

Parallel programming is a necessary but complicated task for exploiting the computational power of parallel computers. A classical, but largely unrealized, goal in the parallel programming community is that users write purely serial programs and the compiler automatically synthesizes good parallel programs from an analysis of those serial programs. Although very interesting results have been obtained in this direction, none of them is close to practical use yet.
On the other hand, several existing parallel languages, such as FORTRAN D, CM FORTRAN, C*, and High Performance FORTRAN (HPF) [10], provide functional abstractions for specifying partitioning and distribution strategies for static and regular data structures such as arrays. However, functional abstractions are insufficient for expressing partitioning and distribution strategies for irregular or dynamic data structures [13]. Currently, for these irregular and dynamic applications, the basic parallel programming tools are message-passing libraries, among which PVM [6] and MPI [7] are the most popular.

Message-passing based parallel programming is a kind of programming at the assembler level. Although it is possible to write highly efficient parallel programs with message-passing libraries, most users find such programming incredibly tedious [11]. A more serious problem is that current parallel programming

* Corresponding author. Email: fjwang@csie.nctu.edu.tw.
1 Email: c00kch00@nchc.gov.tw.
2 Email: c00jht00@nchc.gov.tw.

0020-0190/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved.
PII: S0020-0190(99)00057-5