Voices of Artificial Life: On Making Music with Computer Models of Nature

Eduardo Reck Miranda
SONY Computer Science Laboratory Paris
6 rue Amyot, 75005 Paris, France
miranda@csl.sony.fr

Abstract

We are investigating the potential of artificial life (Alife) models for composition and evolutionary musicology. This paper begins with a brief introduction to our research scenario and then revisits two systems of our own design that use cellular automata to control a sound synthesiser and to generate musical passages: Chaosynth and CAMUS. Next, we introduce a discussion on the potential and limitations of these two systems. Then, we present a new paradigm for Alife in music inspired by the notion of adaptive distributed-agent systems. We demonstrate how a small community of agents furnished with a voice synthesiser, a hearing system and a memory mechanism can evolve a shared repertoire of melodic patterns from scratch, after a period of spontaneous creation, adjustment and memory reinforcement.

Keywords: Artificial life, cellular automata, evolutionary musicology, algorithmic composition, generative music, adaptive behaviour, collective machine learning.

1. INTRODUCTION

Perhaps one of the greatest achievements of Artificial Intelligence (AI) to date lies in the construction of machines that can compose music of remarkably high quality; e.g. (Cope, 1991). These AI systems (Miranda, 2000) are good, however, only at mimicking well-known musical styles. They are either hard-wired to compose in a certain style or able to learn how to imitate a style by looking at patterns in a batch of training examples. By contrast, the question of whether computers can create new kinds of music is much harder to study, because in such cases the computer should neither be embedded with particular models at the outset, nor learn from carefully selected examples.
One plausible approach to this problem is to program the computer with abstract models that embody our understanding of the dynamics of some compositional processes. Indeed, many composers have tried out mathematical models thought to embody musical composition processes, such as combinatorial systems, stochastic models and fractals (Dodge, 1985; Worral, 1996; Xenakis, 1971). Some of these trials produced interesting music, and much has been learned about using mathematical formalisms and computer models in composition. We believe that Alife modelling techniques are a natural next step towards pushing this understanding even further. Also, as we shall see below, such models can be very useful for evolutionary musicology (Wallin et al., 2000). By 'artificial life models' we mean those computational models that display some form of emergent behaviour resembling natural phenomena; for example, cellular automata, genetic algorithms and adaptive games, to cite but a few (Kelemen and Sosik, 2001). We have been investigating the potential of a class of Alife modelling techniques called cellular automata (CA) for music composition for almost a decade, but we have recently shifted our attention towards models inspired by the notion of adaptive distributed-agent systems.

This paper begins with a brief review of two representative systems resulting from our research into cellular automata: a software synthesiser called Chaosynth and a generator of musical passages called CAMUS. Then, we discuss the potential and limitations of these two systems based upon the experience that we have gained from using them in professional composition. In the light of this short discussion, we introduce one of the new models that we have implemented to study the evolution of melody in an artificial society of distributed agents.

2. CELLULAR AUTOMATA INVESTIGATION: FROM SOUND SYNTHESIS TO MUSICAL FORM

Cellular automata (CA) are discrete dynamical systems, often described as a counterpart to partial differential equations, which describe continuous dynamical systems. Discrete here means that space, time and the properties of the automaton can take on only a finite, countable number of states. The basic idea is not to describe a complex system using difficult equations, but rather to simulate the system through the interaction of its components following simple rules.

CA are implemented as an array of identically programmed automata, or cells, which interact with one another. The arrays usually form either a one-dimensional string of cells, a two-dimensional grid or a three-dimensional solid. Most often the cells are arranged as a simple regular grid or matrix, but other arrangements, such as a honeycomb, are sometimes used. The essential features of a CA are the states of its cells and their neighbourhoods. The state of a cell can be either a number or a property. For instance, if each cell represents part of a landscape, then the state might represent the number of animals at each location or the type of forest cover growing there. The neighbourhood of a cell is the set of cells with which it interacts; in a grid, these are normally the cells physically closest to the cell in question.

Chaosynth is essentially a granular synthesiser (Miranda, 1998). Granular synthesis works by generating a rapid succession of very short sound bursts called grains (e.g. 35 milliseconds long) that together form larger sound events.
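The CA mechanics just described (cell states, neighbourhoods and a simple synchronous update rule) can be made concrete with a minimal sketch. The one-dimensional binary automaton below, using Wolfram's elementary rule 90, is purely illustrative; it is not the particular CA employed by Chaosynth or CAMUS.

```python
# Illustrative one-dimensional binary cellular automaton (elementary
# rule 90), NOT the specific CA used by Chaosynth or CAMUS.
# Each cell's neighbourhood is itself plus its two immediate neighbours.

def step(cells, rule=90):
    """Apply one synchronous update to a 1-D CA with wrap-around edges."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        # The three neighbour states form a 3-bit index into the rule table.
        index = (left << 2) | (centre << 1) | right
        nxt.append((rule >> index) & 1)
    return nxt

# Start from a single live cell and iterate a few generations.
cells = [0] * 11
cells[5] = 1
for _ in range(3):
    cells = step(cells)
```

Although every cell follows the same trivial local rule, the global pattern that unfolds over successive generations (here, a Sierpinski-like triangle) is not stated anywhere in the rule itself; it is this kind of emergent global behaviour that makes CA attractive for mapping onto musical parameters.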
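The grain-summing idea behind granular synthesis can also be sketched briefly. In the toy example below, short Hann-enveloped sine grains are scattered at random onsets and summed into one output buffer; the grain shape, frequency range and random scattering are illustrative assumptions of this sketch, not Chaosynth's actual design, in which a CA controls the grain parameters.

```python
# Hedged sketch of granular synthesis: many short enveloped sine grains
# summed into one buffer. Parameters are illustrative, not Chaosynth's.
import math
import random

SR = 44100  # sample rate in Hz

def make_grain(freq, dur_ms=35, sr=SR):
    """One grain: a sine tone shaped by a Hann envelope (fades in and out)."""
    n = int(sr * dur_ms / 1000)
    return [math.sin(2 * math.pi * freq * i / sr) *
            0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
            for i in range(n)]

def granulate(n_grains=50, total_ms=1000, sr=SR, seed=0):
    """Scatter grains at random onsets to build a larger sound event."""
    rng = random.Random(seed)
    out = [0.0] * int(sr * total_ms / 1000)
    for _ in range(n_grains):
        grain = make_grain(freq=rng.uniform(200, 2000))
        onset = rng.randrange(len(out) - len(grain))
        for i, sample in enumerate(grain):
            out[onset + i] += sample
    return out

sound = granulate()  # one second of granular texture
```

Because each grain is only a few tens of milliseconds long, the ear fuses the stream of grains into a single evolving sound event; varying the grains' frequencies, durations and densities over time is what shapes the resulting timbre.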