The 21st International Conference on Auditory Display (ICAD 2015), July 8-10, 2015, Graz, Austria

INVESTIGATIONS IN COARTICULATED PERFORMANCE GESTURES USING INTERACTIVE PARAMETER-MAPPING 3D SONIFICATION

Natasha Barrett
University of Oslo, Department of Musicology
natasha.barrett@imv.uio.no

Kristian Nymoen
University of Oslo, Department of Musicology
kristian.nymoen@imv.uio.no

This work is licensed under Creative Commons Attribution – Non Commercial 4.0 International License. The full terms of the License are available at http://creativecommons.org/licenses/by-nc/4.0/

ABSTRACT

Spatial imagery is one focus of electroacoustic music, more recently advanced by 3D audio, which furnishes new avenues for exploring spatio-musical structures and for addressing what can be called a tangible acousmatic experience. In this paper we present new insights into spatial, temporal and sounding coarticulated (contextually smeared) gestures by applying interactive parameter-mapping sonification in three-dimensional high-order ambisonics, numerical analysis and spatial composition. 3D motion gestures and audio performance data are captured and then explored in sonification. Spatial motion combined with spatial sound is then numerically analyzed to isolate gestural objects and smaller coarticulated atoms in time, space and sound. The results are then used to explore the acousmatic coarticulated image and as building blocks for a composed dataset embodying the original gestural performance. This new data is then interactively sonified in 3D to create acousmatic compositions embodying tangible gestural imagery.

1. INTRODUCTION

In electroacoustic composition, composers record a wealth of sounds and use these as sources in their work, dissecting and transforming spectra, time and space to create the building blocks of composition. Rather than being concerned with refined instrumental techniques, recording and its creative use are guided by physicality, acoustics and kinetic behavior. In this way, spatial imagery has developed hand in hand with electroacoustic composition and, more recently, with composers' interest in 3D sound.

To gain greater insight into the potential of gesture in the formation of spatial-temporal images, we propose a new approach relevant to a wide variety of performed sounds. 3D motion gestures and audio performance data are captured and first explored with interactive parameter-mapping sonification in three-dimensional high-order ambisonics as a way to identify significant features. Spatial motion and spatial sound are then numerically analyzed to isolate gestural objects, smaller coarticulated gestural atoms and their connectivity rules. The results are sonified for verification and then used in the composition of a new fictional dataset embodying the original spatial-gestural performance. This dataset is then explored in sonification as a performance and compositional tool. The work is sonified using 'Cheddar' [1], which has been developed over a number of projects in conjunction with both scientific and artistic sonification needs. The method, results and further work described in this paper apply an analytical and rigorous approach to some ad hoc assumptions suggested in [2].

2. METHODS

2.1. Source sounds and data recording

Instrumental performers acquire a refined control over motion that is less obvious in a non-musician's action-perception cycle. For this reason, our work focuses on 'non-instruments' more familiar in electroacoustic music, the performance of which also stimulates investigation and yields surprise. We chose a balloon as our non-instrument, where the action-sound language consists of a variety of spatial and spectral dynamics.

Audio was recorded with five DPA 4015 cardioid response microphones, four arranged in a rectangle with a diagonal of 80 cm and one elevated above the centre. The balloon motion occurred mainly inside this microphone array. Motion data was captured using the Qualisys optical motion-capture system with eight Oqus 300 cameras at a rate of 200 Hz. Six markers were placed on the balloon and 27 on various points of the fingers, hands and upper body. Two contrasting recordings were chosen for developing our analysis method and providing the first results presented in this paper: (a) 'Bouncing', involving large-scale motorics of the balloon and body; (b) 'Slip-Grip', involving micro-movements of the balloon and fingers.
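For concreteness, the Python sketch below shows one way such a capture could be loaded offline and reduced to per-marker speed for later analysis. It is a minimal illustration only, assuming the Qualisys take has been exported as a tab-separated file of x/y/z columns per marker with a single header row; the file name, column layout and helper names are hypothetical and not part of our pipeline.

```python
# Minimal sketch (not the authors' pipeline): read 200 Hz marker data
# assumed to be exported as tab-separated x/y/z columns per marker,
# and derive per-frame marker speed. File name and layout are hypothetical.
import numpy as np

RATE = 200.0  # motion-capture frame rate in Hz, as in the recording setup

def load_markers(path, n_markers):
    """Return an (n_frames, n_markers, 3) array of marker positions."""
    raw = np.loadtxt(path, delimiter="\t", skiprows=1)  # skip one header row
    return raw.reshape(raw.shape[0], n_markers, 3)

def marker_speed(positions):
    """Per-frame speed of each marker from finite differences."""
    d = np.diff(positions, axis=0)           # frame-to-frame displacement
    return np.linalg.norm(d, axis=2) * RATE  # distance per frame -> per second

balloon = load_markers("bouncing_take1.tsv", n_markers=6)  # hypothetical file
speed = marker_speed(balloon)                              # (n_frames-1, 6)
```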
2.2. Sonification

Cheddar is an interactive parameter-mapping 3D spatial sonification program built in MaxMSP and described in [1]. Cheddar sonifies multiple 3D spatial datasets in high-order ambisonics (HOA), where the virtual listening position can be freely moved to explore the spatial world in real time. Sound is transformed by the data through a flexible, user-defined mapping. Parameter-mapping sonification is important in our work: the data acts as a layer of detachment from the original sounding event, avoiding multi-modal inferences that may mislead the investigation, and allowing modulations in time and space that can clarify qualities hidden at the original tempo. In all sonification examples, velocity is mapped to volume and vertical motion to pitch shift. Accompanying examples are rendered binaurally for headphones; the originals are in 5th-order 3D HOA (www.natashabarrett.org/ICAD2015/).

2.3. Data analysis

In our study we consider gestural-spatial images consisting of sound, the excitation action and other performance motions that precede and follow the sound. From this combined motion and audio image we are interested in isolating phrases (a number of small sound-spatial objects linked together), sub-phrases (different phases within the phrase that may be separated in some way) and coarticulated elements (elements that contextually smear into the sub-phrase). Coarticulation temporal frameworks are discussed in [3] and analysis options in [4]. While using these as a guide, our framework focuses on the temporal-spatial characteristics of our sound source. Our phrases were selected aurally, by evaluating the data sonification mixed with the original audio.
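The mappings of Section 2.2 and the segmentation idea of Section 2.3 can be prototyped offline along the following lines. This sketch is illustrative only: Cheddar itself is a real-time MaxMSP program, and the gain range, pitch range, speed threshold and minimum duration below are assumed values, not those used in our study.

```python
# Illustrative offline sketch of the two mappings used in the sonification
# examples (velocity -> volume, vertical motion -> pitch shift) and of a
# simple speed-threshold segmentation into candidate gestural objects.
# All constants are assumptions, not the authors' values.
import numpy as np

def velocity_to_gain_db(speed, max_speed, floor_db=-40.0):
    """Map speed in [0, max_speed] linearly onto gain in [floor_db, 0] dB."""
    s = np.clip(speed / max_speed, 0.0, 1.0)
    return floor_db * (1.0 - s)

def height_to_pitch_ratio(z, z_min, z_max, semitone_range=12.0):
    """Map vertical position onto a pitch-shift ratio spanning +/- half the range."""
    t = np.clip((z - z_min) / (z_max - z_min), 0.0, 1.0)
    semitones = (t - 0.5) * semitone_range
    return 2.0 ** (semitones / 12.0)

def segment_gestures(speed, threshold, min_frames=20):
    """Return (start, end) frame pairs where speed stays above threshold."""
    active = speed > threshold
    edges = np.flatnonzero(np.diff(active.astype(int)))
    bounds = np.concatenate(([0], edges + 1, [len(active)]))
    return [(a, b) for a, b in zip(bounds[:-1], bounds[1:])
            if active[a] and (b - a) >= min_frames]
```

A call such as segment_gestures(speed.mean(axis=1), threshold=100.0) would yield candidate phrase boundaries that can then be checked aurally against the original audio, mirroring the aural phrase selection described above.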