Creative Experiments Using a System for Learning High-Level Performance Structure in Ableton Live

Aengus Martin, Craig T. Jin
Computing & Audio Research Lab, Sydney University, NSW 2006, Australia
{aengus.martin,craig.jin}@sydney.edu.au

Ben Carey
Faculty of Arts and Social Sciences, University of Technology, Sydney, NSW 2007, Australia
benjamin.carey@uts.edu.au

Oliver Bown
Design Lab, Sydney University, NSW 2006, Australia
oliver.bown@sydney.edu.au

ABSTRACT

The Agent Design Toolkit is a software suite that we have developed for designing the behaviour of musical agents: software elements that automate some aspect of musical composition or performance. It is intended to be accessible to musicians who have no expertise in computer programming or algorithms. However, the machine learning algorithms that we use require the musician to engage with technical aspects of the agent design, and our research goal is to find ways to enable this process through understandable and intuitive concepts and interfaces, while at the same time developing effective agent algorithms.

Central to enabling musicians to use the software is making available a set of clear instructional examples showing how the technical aspects of agent design can be used effectively to achieve particular musical results. In this paper, we present a pilot study of the Agent Design Toolkit in which we conducted two contrasting musical agent design experiments with the aim of establishing a set of such examples. From the results, we compiled a set of four clear examples of effective use of the learning parameters, which will be used to teach new users about the software. In addition, we identified a range of improvements that can be made to the software itself.

1. INTRODUCTION

One focus in the field of interactive computer music is on computational systems capable of autonomous musical performance that is responsive to external musical factors.
Such systems can engage in performance-time interactions in a wide variety of ways, among which are emulations of roles traditionally filled by human performers, as well as new ways made possible by the computational medium [1]. A number of authors have conceptualised the internal structure of these systems as a listening module, which parses the incoming musical data; a decision-making module, which makes musical decisions influenced by the input; and an output module, which generates sound according to the decisions made [2–4]. In this work, we are concerned with the decision-making module, and we will refer to it as a musical agent.

Copyright: © 2012 Aengus Martin, Craig T. Jin et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Musical agents can be designed and implemented using a variety of programmable, real-time interactive music environments, such as Max, Pure Data and SuperCollider, in addition to lower-level, general-purpose programming languages such as C++, Java and Python. However, no platforms exist which support the design of musical agents by musicians who are not proficient in algorithms and computer programming. To address this, we have developed a software tool for designing musical agents. It is called the Agent Design Toolkit and it was first introduced in [5].

The Agent Design Toolkit (ADTK) is a software tool intended for the design of musical agents that perform with a collection of musical objects: software instruments, audio effects and low-level algorithmic processes. Our agents make relatively high-level musical decisions (see next section). The software supports the following user stages in an iterative design process:

1. Record a set of example performances, in which the human performer controls the parameters of a software music system;
2. Configure a set of machine learning algorithms and run them to produce an agent;
3. Audition the agent;
4. Return to either Step 1 or Step 2, if the user seeks improvements or variations.

The paradigm in which a designer iteratively improves the output of machine learning algorithms by adding and editing training data is known as interactive machine learning (IML) [6]. The design paradigm supported by the ADTK incorporates IML, in that it allows a musician to iteratively improve an agent by editing and supplementing the set of example performances. The IML paradigm was proposed as a way to avoid requiring the designer to perform feature selection, a phase of the traditional machine learning workflow that, in general, requires considerable technical expertise in the problem domain (i.e. in the area in which the machine learning algorithms are being applied). However, we view feature selection as an essential way for a musician—as the individual most familiar with the specific musical context in which they are working—to incorporate his/her musical knowledge into an agent. In machine learning terms, this