Microinteraction in Music/Dance Performance

Alexander Refsum Jensenius
University of Oslo, Department of Musicology, fourMs lab
PB 1017 Blindern, 0315 Oslo, Norway
a.r.jensenius@imv.uio.no

ABSTRACT
This paper presents the scientific-artistic project Sverm, which has focused on the use of micromotion and microsound in artistic practice. Starting from standing still in silence, the artists involved have developed conceptual and experiential knowledge of microactions, microsounds and the possibilities of microinteracting with light and sound.

Author Keywords
motion capture, microinteraction, artistic practice

ACM Classification
H.5.5 [Information Interfaces and Presentation] Sound and Music Computing, J.5 [Arts and Humanities] Arts, Fine and Performing

1. INTRODUCTION
Music-related motion unfolds at many different spatial and temporal levels: from the tiniest and shortest actions found in, for example, the vibrato of a finger on a violin string, to the full-body actions of some percussionists [11]. This paper refers to three spatial levels when describing music-related motion:

1. Micro: the smallest controllable and perceivable actions, happening at a millimetre scale (or smaller)
2. Meso: most sound-producing and sound-modifying actions on musical instruments, such as moving the fingers on a keyboard or MIDI controller, happening at a centimetre scale
3. Macro: larger actions, such as moving the hands, arms and full body, happening at a decimetre to metre scale

In the world of acoustic instruments, there are many examples of micro-level interaction, or what will be referred to as microinteraction, such as the minute actions found in the mouths of wind performers, or in the fingering of string players. There are also some, but arguably fewer, examples of what Wessel and Wright called “intimate” control of digital musical instruments (DMIs) [19].
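As a rough illustration of the three spatial levels listed above, a motion displacement can be classified by scale. The thresholds below are illustrative assumptions drawn only from the millimetre/centimetre/metre scales named in the list, not measured values from this work:

```python
def classify_motion_level(displacement_m: float) -> str:
    """Classify a motion displacement (in metres) into the three
    spatial levels: micro (millimetre scale or smaller), meso
    (centimetre scale), macro (decimetre scale and up).
    Thresholds are illustrative, not empirically derived."""
    if displacement_m < 0.01:    # below ~1 cm: millimetre scale
        return "micro"
    elif displacement_m < 0.1:   # ~1-10 cm: centimetre scale
        return "meso"
    else:                        # ~10 cm and up
        return "macro"

# Examples at the scales mentioned in the text:
print(classify_motion_level(0.002))  # vibrato-scale motion -> "micro"
print(classify_motion_level(0.03))   # finger on a keyboard -> "meso"
print(classify_motion_level(0.5))    # arm/full-body action -> "macro"
```

In practice the boundaries between the levels are fuzzy and context-dependent; the sketch only makes the order-of-magnitude distinction explicit.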
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
NIME’15, May 31-June 3, 2015, Louisiana State Univ., Baton Rouge, LA.
Copyright remains with the author(s).

Figure 1: Picture from a standstill session during a Sverm workshop. Reflective motion capture markers can be seen on the heads of the performers.

There are probably several reasons why we (still) see relatively few examples of microinteraction in the NIME community. It is, of course, possible to blame the MIDI protocol and its limitations [16], but we should remember that alternatives, for example Open Sound Control (OSC), have been with us for almost two decades [20]. Still, most commercial controllers, and many of the devices presented in the NIME community, are built around a meso-level button/knob/slider paradigm, even though it is technically possible to build things smaller and faster. An explanation for this may be that many developers and users perceive meso-level interaction to work (sufficiently) well for many applications.

It appears that the focus on “gestural” controllers 1 has led to an increased focus on macro interaction. Examples of such large-scale, and comparably slow, interaction are full-body motion capture performances bridging over to interactive dance [2, 17]. This trend may be explained by the availability of new technologies, for example the Wii and Kinect. Such motion tracking devices typically afford fairly large-scale and slow interaction, partly due to technical constraints in temporal speed and spatial resolution. However, the more expensive inertial and optical motion tracking systems are certainly capable of tracking human motion at both spatial and temporal micro-levels [10]. So the main reason for the seeming lack of focus on microinteraction may be conceptual rather than technical.

The challenge, then, is to figure out how micro-level motion could be used meaningfully in a DMI context. This paper explores how full-body motion at the micro-level can be used in the contexts of interactive music and dance. The case study to be presented is the scientific-artistic research project Sverm, 2 which explored micromotion from the starting point of standing still (Figure 1).

1 See [7] for a problematisation of gesture in a NIME context.
2 http://www.fourms.uio.no/projects/sverm/
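As a minimal sketch of how micro-level motion could be quantified from optical motion capture data of a standstill session, the following computes per-frame displacement of a single marker. The marker positions and the 1 mm threshold are invented for illustration; they are assumptions, not data or parameters from the Sverm project:

```python
import math

def frame_displacements(positions):
    """Per-frame Euclidean displacement (in metres) of one marker,
    given a list of (x, y, z) positions in metres."""
    return [math.dist(p, q) for p, q in zip(positions, positions[1:])]

# Hypothetical head-marker samples (metres) of a person standing
# still, showing only millimetre-scale micromotion between frames:
standstill = [(0.0000, 0.0000, 1.7000),
              (0.0004, 0.0002, 1.7001),
              (0.0007, 0.0001, 1.6999),
              (0.0005, 0.0004, 1.7002)]

disp = frame_displacements(standstill)
print(all(d < 0.001 for d in disp))  # True: all motion below 1 mm
```

A real system would also need to consider the temporal resolution of the tracker and measurement noise, which at this spatial scale can be of the same order of magnitude as the motion itself.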