The 21st International Conference on Auditory Display (ICAD-2015), July 8-10, 2015, Graz, Austria

VIRTUAL REALITY PLATFORM FOR SONIFICATION EVALUATION

Thimmaiah Kuppanda 1, Norberto Degara 1, David Worrall 1, Balaji Thoshkahna 1, Meinard Müller 2

1 Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany
2 International Audio Laboratories, Erlangen, Germany
thimmaiah.kuppanda@iis.fraunhofer.de

This work is licensed under a Creative Commons Attribution Non Commercial 4.0 International License. The full terms of the License are available at http://creativecommons.org/licenses/by-nc/4.0

ABSTRACT

In this paper we propose a game-based virtual reality platform for the evaluation of sonification techniques. We study the task of localizing stationary objects in virtual reality using auditory cues. We further explore sonification techniques and compare their performance in this task using the proposed platform. The virtual reality environment is developed using the Unity3D game engine and an Oculus Rift, a head-mounted virtual reality display. Parameter mapping sonification techniques are employed to map the position of an object in virtual space to sound. Hence, the framework defined here constitutes an auditory virtual reality environment. This auditory display interface is subjectively evaluated in a stationary object localization task. A statistical analysis of the subjective and objective measures of the listening test is performed, resulting in a robust and scientific evaluation of the sonification methods.

1. INTRODUCTION

Sonification is an increasingly common approach to tasks such as source localization, obstacle avoidance and navigation, hence its significance in the field of auditory display research. Sonification methods have potential applications in navigation systems for vehicles and smartphones, assistive technology for the visually impaired, and other eyes-free applications. The aim of these technologies is to deliver location-based information that supports navigation through sound. This is a challenging task: the main difficulty is to design a meaningful auditory display that is able to communicate the relevant aspects of complex visual scenes, where psychoacoustics and aesthetics are important design factors [1]. The resulting sound must be accurate in terms of the location-based information it communicates, intuitive, and acoustically attractive to the user. A number of different sonification methods for assisted navigation can be found in the literature [1]. In general, these methods scan the space for obstacles and synthesize the properties of the scene using various sound rendering techniques [2, 3, 4].

Hermann's definition of sonification [5] implies that sonification is a data-dependent generation of sound using a systematic, objective and reproducible transform. According to this definition, sonification can be considered a well-defined scientific method. Both subjective and objective evaluation are important steps in the design and implementation of auditory displays and the encompassing sonification technique [6]. Nevertheless, a robust evaluation and scientific comparison of sonification methods is often neglected by auditory display researchers [7]. To address these limitations, we present a game-based virtual reality (VR) framework for a formal comparison of sonification methods for target localization.
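As background, parameter mapping sonification maps data dimensions, here the position of a target relative to the listener, onto sound synthesis parameters. The short Python sketch below is only a minimal illustration of this idea; the function names, frequency range and gain laws are assumptions chosen for the example and are not the mappings used by the platform (the auditory component of the platform is described in Section 3.3).

import numpy as np

def pan_gains(azimuth_deg):
    # Constant-power stereo panning: -90 deg = hard left, +90 deg = hard right.
    # The pan position is mapped to [0, 1] and sine/cosine gains keep the
    # overall loudness roughly constant across the pan range.
    pos = (np.clip(azimuth_deg, -90.0, 90.0) + 90.0) / 180.0
    return np.cos(pos * np.pi / 2), np.sin(pos * np.pi / 2)  # (left, right)

def sonify_target(azimuth_deg, distance_m, duration=0.3, fs=44100):
    # Illustrative parameter mapping: azimuth -> panning, distance -> level and
    # pitch (closer targets sound louder and higher). The ranges below are
    # arbitrary choices for this sketch, not those of the evaluated platform.
    t = np.arange(int(duration * fs)) / fs
    freq = 200.0 + 800.0 * max(0.0, 1.0 - min(distance_m, 10.0) / 10.0)
    amp = 1.0 / max(distance_m, 1.0)          # simple inverse-distance attenuation
    tone = amp * np.sin(2 * np.pi * freq * t)
    gl, gr = pan_gains(azimuth_deg)
    return np.column_stack([gl * tone, gr * tone])  # stereo buffer, shape (samples, 2)

# Example: render a cue for a target 30 degrees to the right, 4 m away.
cue = sonify_target(30.0, 4.0)
print(cue.shape)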
VR is a computer-based technology that provides visual, aural and tactile stimuli of a virtual world generated in real time [8]. VR has developed from a research topic into a tool for both entertainment and training, and as part of wearable technology it achieved a major breakthrough with the availability of the Oculus Rift, a head-mounted display device for VR gaming.

This paper reports on research aimed at demonstrating the use of a VR platform for the evaluation of certain simple sonification techniques. We focused on the task of localizing stationary objects in a VR environment using auditory cues. We further explored sonification techniques and compared their performance in the localization task using the platform.

The remainder of the paper is organized as follows: Section 2 describes the task that forms the basis for the proposed model. We describe the proposed model in Section 3. Section 4 explains the experimental setup facilitated by the proposed model. Section 5 presents the results of the evaluation. Finally, Section 6 draws some conclusions and considers possible future work.

2. TASK DESCRIPTION

Our aim was to evaluate the performance of different sonification methods in the task of object localization. The test subject was required to find a stationary object placed in virtual space using an auditory cue. Figure 1 depicts this task. The subject was placed at a fixed position in the virtual space and was able to turn 360 degrees at this position. The sound conveyed information about the position of the object. More precisely, the subject was required to bring the object within the field of view (FOV). Once the object was localized, the subject was required to respond using a mechanical clicker, and the response time was recorded as an objective measure.

3. PROPOSED MODEL

This section describes the game-based VR framework. Section 3.1 gives an overview of the model, followed by details of the visual and auditory components in Section 3.2 and Section 3.3, respectively.

3.1. System Description

Figure 2 shows the block diagram of the proposed model. The model comprises the following components: