Rapid Development of Application-Specific Network Performance Tests

Scott Pakin
Los Alamos National Laboratory, Los Alamos, NM 87545, USA
pakin@lanl.gov
http://www.c3.lanl.gov/~pakin

Abstract. Analyzing the performance of networks and messaging layers is important for diagnosing anomalous performance in parallel applications. However, general-purpose benchmarks rarely provide sufficient insight into any particular application's behavior. What is needed is a facility for rapidly developing customized network performance tests that mimic an application's use of the network but allow for easier experimentation to help determine performance bottlenecks. In this paper, we contrast four approaches to developing customized network performance tests: straight C, C with a helper library, Python with a helper library, and a domain-specific language. We show that while a special-purpose library can result in significant improvements in functionality without sacrificing language familiarity, the key to facilitating rapid development of network performance tests is to use a domain-specific language designed expressly for that purpose.

1 Introduction

Parallel applications utilize the interconnection network in a variety of ways, including nearest-neighbor communication on a 2-D or 3-D mesh/torus (e.g., in ocean-modeling codes [1]); hierarchical communication (e.g., in molecular-dynamics codes [2]); and master/slave communication (e.g., in Monte Carlo codes [3]). However, general-purpose network performance tests such as NetPIPE [4], Mpptest [5], and those that appear in the Pallas MPI Benchmarks [6] and SKaMPI [7] suites measure performance independently of any particular application's usage of the network. For example, it is common to measure network bandwidth as the peak data rate achieved when sending a large number of messages back-to-back between two otherwise idle endpoints, even though few applications utilize such a communication pattern.
General-purpose tests are nevertheless important to application developers because they indicate – in a standard format – upper bounds on network performance that developers can use to determine whether application performance is being limited by the network.

Special-purpose benchmarks targeted to a particular inquiry are an important complement to general-purpose benchmarks. For example, if an application runs significantly slower than a general-purpose test would indicate, it may be

V.S. Sunderam et al. (Eds.): ICCS 2005, LNCS 3515, pp. 149–157, 2005.
Springer-Verlag Berlin Heidelberg 2005