Augmented Drums: Digital Enhancement of Rhythmic Improvisation

Matteo Amadio 1, Alberto Novello 2

1 Conservatory of Music “C. Pollini”, Padova, Italy
matteoamadio@outlook.com
2 Conservatory of Music “C. Pollini”, Padova, Italy
albynovello@gmail.com

Abstract. This paper presents a set of real-time modules that digitally enhance the performance of a drummer. The modules extract rhythmic information from multichannel audio acquired with simple microphones mounted on the different parts of the drum set. Based on the predicted tempo, the modules generate complex patterns that can be controlled manually through high-level parameters or left to adapt automatically to the drummer's specific style. Such an interactive system is intended mainly for an improvised solo performance confronting a human drummer with a computer; however, it could also be employed effectively in improvisations with larger ensembles or in installations.

Keywords: Augmented instruments, drum set, music, interaction, improvisation, live electronics.

1 Historical Background and Related Work

In the early years of electronic music, when the new technologies began to be considered valid and innovative means of musical expression, percussion instruments played a fundamental role in the transition between the traditional, “keyboard-influenced” musical language and the “all-sound music of the future” (Cage 1937). Several composers considered percussion a great match for electronics because of their common characteristics. Both instrument families can generate a broad spectrum of unpitched sounds, which encouraged composers to focus on timbre and rhythm. Another shared aspect is modularity, which characterises both the percussionist's and the electronic musician's setup: the possibility of swapping parts of an instrument chain has an impact on the creative process and opens new possibilities in the performative approach (Cage 1937; Varèse 1966; Stockhausen 1996).
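The tempo-extraction step outlined in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: onset detection on the microphone channels is assumed to happen upstream, and `estimate_bpm` is a hypothetical helper that derives a tempo estimate from the resulting onset timestamps via inter-onset intervals.

```python
# Minimal sketch (under stated assumptions): estimate tempo in BPM from a
# list of onset times (seconds) detected on a drum microphone channel.
from statistics import median

def estimate_bpm(onset_times):
    """Hypothetical helper: tempo estimate from onset timestamps."""
    # Inter-onset intervals between consecutive hits.
    iois = [b - a for a, b in zip(onset_times, onset_times[1:])]
    if not iois:
        return None  # need at least two onsets to estimate a period
    # The median interval is robust to occasional missed or spurious onsets.
    beat_period = median(iois)
    return 60.0 / beat_period

# Hypothetical snare hits roughly 0.5 s apart (about 120 BPM), slightly noisy.
hits = [0.00, 0.50, 1.01, 1.49, 2.00, 2.50]
print(round(estimate_bpm(hits)))  # → 120
```

A real system would of course track tempo continuously and weight recent onsets more heavily; the median-of-intervals idea shown here is only the simplest robust starting point.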
In the following years, the spread of digital technologies, computers and real-time digital audio processing allowed new strategies for composition and live electronics to emerge. In the late '70s Chadabe started experimenting with digital “interactive composing systems”: algorithms capable of responding in complex ways to the actions of the performer (Chadabe 1984). In his pieces Solo and Rhythms, for example, the performer provides high-level input data to a system that processes them to produce the whole musical output. The performer influences the final result but is unable to control every single event, and is thus placed in a position where they need to listen and react to the gestures of the computer.

In 1983 George Lewis was working at IRCAM on interactive software that could automatically generate instrumental music and also analyse the performance of human musicians in order to play along with them (IRCAM 1997; Lewis 2018): this work established the basis of his Voyager system. The idea of the computer's independence from the human performer led to the abolition of the “human leader/computer follower” hierarchy, creating the possibility of communicating through the musical language alone (Lewis 2000). As a result, some intentions and emotions expressed by the human performer could also be found in the electronic performance, confirming the achievement of an authentic human-machine musical interaction.

This last concept was also pointed out by Robert Rowe in the description of one of his early works in the field of human-machine musical interaction, Hall of Mirrors (Baisnee 1986). He describes the feedback loop generated by mutual imitation as two (or more) mirrors facing each other. Rowe later developed his own interactive system, named Cypher, in which the user decides how the listener component will interact with the player component, allowing high-level control over the actions of the software (Rowe 1990).
In the ‘90s both Cypher and Voyager were modified to include the use of MIDI data as