Future Generation Computer Systems 22 (2006) 849–851
www.elsevier.com/locate/fgcs

Editorial

Special section: iGrid 2005: The Global Lambda Integrated Facility

Larry Smarr a, Thomas A. DeFanti a,b, Maxine D. Brown b,∗, Cees de Laat c

a California Institute for Telecommunications and Information Technology (Calit2), University of California, San Diego, CA, USA
b Electronic Visualization Laboratory, University of Illinois at Chicago, IL, USA
c Advanced Internet Research group, Universiteit van Amsterdam, Amsterdam, Netherlands

Available online 19 May 2006

iGrid 2005 was the fourth community-driven biennial International Grid event, held on 26–30 September 2005 at the California Institute for Telecommunications and Information Technology (Calit2) building on the campus of the University of California, San Diego. iGrid events are coordinated efforts to accelerate the use of multi-gigabit international and national networks, to advance scientific research, and to educate decision makers, academicians and industry researchers on the resulting benefits.

Attracting 450 participants, iGrid 2005 featured 49 real-time application demonstrations developed by multidisciplinary teams from 20 countries, as well as a symposium of 25 lectures, panels and master classes on the applications, middleware and underlying cyberinfrastructure that were used. At its core, this cyberinfrastructure uses supernetworks, constructed from multiple wavelengths of light (lambdas) carried on single optical fibers, rather than supercomputers as its central architectural element. New middleware technologies are enabling applications to dynamically manage these lambda resources just as they do any grid resource, creating a LambdaGrid of interconnected, distributed, high-performance computers, data storage devices, visualization displays and instrumentation.
A world-scale LambdaGrid laboratory, driven by the demands of application scientists, engineered by leading network engineers, and enabled by grid middleware developers, is being created by the international virtual organization GLIF, the Global Lambda Integrated Facility. GLIF provided the persistent high-performance infrastructure that iGrid participants used, shown in Fig. 1, and iGrid provided the forum for global teams to demonstrate the advancements in scientific collaboration and discovery that this infrastructure is enabling. GLIF held its annual meeting on the last day of iGrid 2005.

Previous iGrids in 1998, 2000 and 2002, and the GLIF organization, which began with a Lambda Workshop held in Amsterdam in 2001, have rapidly led to the worldwide establishment of dozens of interconnected 10-Gigabit lambdas. Since the last iGrid, there has been a global movement to support a wide range of e-science projects by adopting Service-Oriented Architectures for the middleware that rides on top of the physical infrastructure.

iGrid 2005 demonstrated global "grass-roots" application experiments combined with collegial "best-of-breed" processes to develop a new generation of shared open-source LambdaGrid Services. These Services, most of which are documented in this Journal, supported scientific instruments, high-definition-video and digital-cinema streaming, visualization and virtual reality, high-performance computing, data analysis, and the control of the underlying lambdas themselves. They did so in support of very-large-scale e-science applications – in astronomy, bioinformatics, ecology, geoscience, and high-energy physics, among other fields – that study very complex micro- to macro-scale problems over time and space.

∗ Corresponding author. Tel.: +1 312 996 3002; fax: +1 312 413 7585.
E-mail address: maxine@uic.edu (M.D. Brown).
Participating teams represented Australia, Brazil, Canada, China, Czech Republic, Germany, Hungary, Italy, Japan, Korea, Mexico, Netherlands, Poland, Russia, Spain, Sweden, Taiwan, the United Kingdom, the United States and the international laboratory CERN (the European Organization for Nuclear Research).

The process of building the LambdaGrid is reminiscent of the effort to build up a networked supercomputing infrastructure in the United States 20 years ago. At first there was hardly any real science being done. Rather, a few pioneering scientists restructured their codes to understand how best to take advantage of the high-performance hardware (e.g., vector processors), to create scientific visualizations, and to remotely control the supercomputer in real time. Gradually, as the supercomputing hardware and software matured, a second generation of homesteaders showed up and started using the infrastructure to do science. We are still in the pioneering phase of LambdaGrids, but by 2007, as new large-scale instruments come online, research should advance sufficiently for the focus to shift to homesteading: science that can be done with the global LambdaGrid.

0167-739X/$ – see front matter © 2006 Elsevier B.V. All rights reserved.
doi:10.1016/j.future.2006.04.002