PAPERS

Digital Fabrication of Acoustic Sonifications*

STEPHEN BARRASS (stephen.barrass@canberra.edu.au)
University of Canberra, Canberra, Australia

Sonification is the design of sounds to provide useful information. Dataforms are physical objects constructed from digital datasets. Can we combine these ideas to create dataforms with acoustic properties that provide useful information? We will refer to this combination as an acoustic sonification. This paper explores and develops the idea of acoustic sonification through a series of experiments that map a head-related transfer function (HRTF) dataset, measured on a KEMAR dummy head, onto the shape of a bell constructed in three-dimensional CAD software and then digitally fabricated in stainless steel. The tones produced by the left and right HRTF bells are compared against each other and against a null bell. The pitch and timbre of the left and right bells are perceptibly different from each other and from the null. The spectra of these bells have a double harmonic series that distinguishes them from the null. These results suggest that the HRTF bells could be used to compare and classify HRTF datasets, and they support the hypothesis that acoustic sonifications could provide useful information about a general range of datasets.

0 INTRODUCTION

Recent advances in digital fabrication have enabled a new paradigm in data visualization in which datasets are rendered as physical dataforms. Visualizing digital data in a three-dimensional (3D) physical form may allow new insights into, and understanding of, relations in the data. Examples of dataforms are described and analyzed in the theoretical terms of metaphorical distance and embodiment by Zhao and Vande Moere [1]. Can the idea of dataforms be extended into the auditory realm? To do so, we need to design the acoustics of a dataform so that they provide useful information about the dataset from which it is constructed.
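To make the shape-from-data idea concrete, the general approach can be sketched as modulating the radial profile of a surface of revolution with a data vector. This is only an illustrative sketch: the function name, the base bell profile, and the displacement scaling below are hypothetical assumptions, not the geometry used in the paper.

```python
import numpy as np

def bell_profile(data, n_points=200, base_radius=30.0, depth=5.0):
    """Return (height, radius) samples for a bell-like surface of revolution.

    data        : 1-D array of data values (e.g., HRTF magnitudes in dB)
    base_radius : radius of the unmodulated "null" bell at the rim (mm)
    depth       : maximum radial displacement contributed by the data (mm)
    """
    data = np.asarray(data, dtype=float)
    # Normalize the data to [0, 1] so the displacement stays bounded.
    span = data.max() - data.min()
    norm = (data - data.min()) / span if span > 0 else np.zeros_like(data)
    # Resample the data along the normalized height of the bell.
    z = np.linspace(0.0, 1.0, n_points)
    disp = np.interp(z, np.linspace(0.0, 1.0, len(norm)), norm)
    # A simple flared profile: widest at the rim (z = 0), narrowing upward.
    base = base_radius * (1.0 - 0.5 * z)
    # The data perturbs the profile; a CAD tool would revolve this curve.
    return z, base + depth * disp

z, r = bell_profile(np.sin(np.linspace(0, 6, 64)))
```

Revolving the returned (z, r) curve about the vertical axis in a CAD package yields a printable solid whose wall geometry, and hence modal acoustics, is shaped by the data.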
This idea may not be so far-fetched, given that whenever we interact with an object it produces acoustic vibrations that carry information about its shape and material, as well as the mode and energy of the interaction. Altering the shape of an object should therefore alter its acoustics. An acoustic sonification could be small, mobile, and produce interactive sounds in real time, without electrical power or loudspeakers.

This paper investigates the idea through initial experiments that modulate the shape of a bell-like dataform in response to a dataset. The background section explains the initial choice of a head-related transfer function (HRTF) [2, 3] dataset for these experiments and describes the dataset in detail. The next section describes the construction and fabrication of the HRTF bell from the dataset. Experiment 1 investigates the effect of the data on the acoustics of the bell. Experiment 2 then seeks to disentangle and quantify the effect of the fabrication process on the acoustics of the bell. The results are then discussed and conclusions drawn, followed by suggestions for further work to address questions raised by this study.

* This paper is part of the special issue on Auditory Display that began in the 2012 July/August issue.

1 BACKGROUND

The choice of the HRTF dataset for these experiments was motivated by the sonification challenge at the International Conference on Auditory Display (ICAD) in Budapest in 2011. The sonification challenge has become a regular feature of ICAD conferences since the Listening to the Mind Listening challenge with EEG data in 2004 [4], followed by the Global Data by Ear challenge with socioeconomic data at ICAD in London in 2006 [5] and the Expression through Sounds concert in Washington, DC, in 2010 [6]. The 2011 challenge was announced as follows:

The Sonification contest is an opportunity to express yourself and your creativity in the field of Sonification. A data set will be provided with a detailed description that has to be sonified.
This year the data set is some two-channel (left ear–right ear) recording of a dummy-head containing the HRTFs. These transfer functions describe the transmission from the free field to the eardrums, and are the direction-dependent filters for the outer ears. The data set and instructions can be downloaded below. It contains horizontal and median plane HRTFs. Your task is to sonify these "raw numbers." You are welcome to use any software and idea, artistic, or musical performance [7].

J. Audio Eng. Soc., Vol. 60, No. 9, 2012 September 709
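The direction-dependent filtering that the challenge text describes can be illustrated by convolving a dry mono signal with a left/right pair of head-related impulse responses (the time-domain counterparts of the HRTFs). The impulse responses below are synthetic placeholders standing in for measured KEMAR responses; a crude delay-and-attenuation pair is used only to mimic an interaural time and level difference, not to reproduce the challenge dataset.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right HRIR pair to make a binaural signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

fs = 44100
t = np.arange(fs // 10) / fs                      # 100 ms of samples
tone = np.sin(2 * np.pi * 440 * t)                # dry 440 Hz test tone

# Placeholder HRIRs: the right ear receives a delayed, attenuated copy,
# crudely suggesting a source located toward the listener's left.
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[30] = 0.6

stereo = spatialize(tone, hrir_l, hrir_r)
```

With measured HRIRs in place of the placeholders, the same two convolutions render a source at the measured direction for headphone listening, which is exactly the role of the filters in the challenge dataset.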