JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 18, 981-997 (2002)

Neko: A Single Environment to Simulate and Prototype Distributed Algorithms *

PÉTER URBÁN, XAVIER DÉFAGO+ AND ANDRÉ SCHIPER

School of Computer and Communication Sciences
Swiss Federal Institute of Technology in Lausanne
EPFL, CH-1015 Lausanne, Switzerland
E-mail: peter.urban@epfl.ch, andre.schiper@epfl.ch

+Graduate School of Knowledge Science
Japan Advanced Institute of Science and Technology
Tatsunokuchi, Ishikawa 923-1292, Japan
E-mail: defago@jaist.ac.jp

Designing, tuning, and analyzing the performance of distributed algorithms and protocols are complex tasks. A major factor contributing to this complexity is that no single environment supports all phases of the development of a distributed algorithm. This paper presents Neko, an easy-to-use Java platform that provides a uniform and extensible environment for the various phases of algorithm design and performance evaluation: prototyping, tuning, simulation, deployment, etc.

Keywords: simulation, prototyping, distributed algorithms, message passing, middleware, Java, protocol layers

1. INTRODUCTION

Designing, tuning, and analyzing the performance of distributed algorithms and protocols are complex tasks. Because of the performance requirements and the timing constraints of modern systems, performance engineering is an important activity in the construction of complex systems. Distributed systems are no exception, and the constant need for better performance is a strong incentive for proper performance engineering and algorithm tuning.
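The central idea of a single environment for both simulation and deployment can be sketched as follows. The code below is a hypothetical illustration, not Neko's actual API: the interface name `Network`, the classes `SimulatedNetwork` and `PingAlgorithm`, and all method signatures are invented for this sketch. The point is that the algorithm is written once against an abstract messaging interface, so a simulated transport and a real one (e.g., TCP-backed) can be swapped without touching the algorithm code.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Abstract messaging interface; the algorithm sees only this.
interface Network {
    void send(int dest, String msg);
    String receive();
}

// Simulated transport: a single-process loopback queue; `dest` is
// ignored here because there is only one simulated process.
class SimulatedNetwork implements Network {
    private final Queue<String> inbox = new ArrayDeque<>();
    public void send(int dest, String msg) { inbox.add(msg); }
    public String receive() { return inbox.poll(); }
}

// A trivial "algorithm" written once against Network; a socket-backed
// implementation of Network could be substituted for real deployment
// without changing this class.
class PingAlgorithm {
    static String run(Network net) {
        net.send(1, "ping");
        return net.receive();
    }
}

public class Demo {
    public static void main(String[] args) {
        System.out.println(PingAlgorithm.run(new SimulatedNetwork())); // prints "ping"
    }
}
```

Under this design, simulation runs and measurement runs exercise the very same algorithm code, which is the property the paper's uniform environment aims to provide.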
Performance engineering is based on a combination of three basic approaches to evaluating the performance of algorithms: (1) the analytical approach computes the performance of an algorithm based on a parameterized model of the execution environment; (2) simulation runs the algorithm in a simulated execution environment, usually based on a stochastic model; and (3) measurement runs the algorithm in a real environment. These three approaches have their respective advantages and limitations. Therefore, to increase the credibility and accuracy of a performance analysis, it is considered good practice to compare results obtained using at least two different approaches.

Received August 27, 2001; accepted April 15, 2002.
Communicated by Jang-Ping Sheu, Makoto Takizawa and Myong-Soon Park.
* Research supported by a grant from the CSEM Swiss Center for Electronics and Microtechnology, Inc., Neuchâtel. A preliminary version of this paper appeared in [1].