Proceedings of the 2nd International Workshop on Interactive Sonification, York, UK, February 3, 2007

CoRSAIRe – Combination of Sensori-motor Rendering for the Immersive Analysis of Results

Brian FG Katz (1), Olivier Warusfel (2), Patrick Bourdot (1), and Jean-Marc Vezien (1)

(1) LIMSI-CNRS, BP 133, 91403 Orsay, France
{first}.{lastname}@limsi.fr

(2) IRCAM, 1, place Igor-Stravinsky, 75004 Paris, France
Olivier.Warusfel@ircam.fr

ABSTRACT

The CoRSAIRe project (« Combinaisons de Rendus Sensori-moteurs pour l'Analyse Immersive de Résultats », or Combination of Sensori-motor Rendering for the Immersive Analysis of Results) aims to significantly enhance existing interfaces in scientific applications by introducing multiple sensori-motor channels, so that the user can see, hear, and touch the data itself. The project focuses on two well-defined application areas: Fluid Mechanics and Bioinformatics. Starting from an in-depth comparison of current observation methods in these fields with existing Virtual Environment interaction techniques, new interaction concepts, paradigms, and concrete solutions are being designed and tested on real-world cases. Such an effort is inherently interdisciplinary, gathering Virtual Reality experts with knowledge of distributed computing, graphics, audio, and haptics, together with application specialists and experts in ergonomics.

1. INTRODUCTION

The goal of the CoRSAIRe project is to develop new ways of interacting with large or complex digital worlds. The project aims to significantly enhance existing interfaces by introducing multiple sensori-motor channels, so that the user can see, hear, and touch the data itself (or objects derived from the data), thus redefining conventional interaction mechanisms. Such a research effort involves a paradigm shift: many well-established visualization-oriented software packages already exist for analyzing the large spectrum of available data types, so creating a completely innovative sensori-motor interface may seem a daunting task. The project therefore focuses on two well-defined application areas, Fluid Mechanics and Bioinformatics, for which collaborations with end-user partners have been put into place.

A major facet of the project concerns how the scientist is able to explore, analyze, and understand large, complex datasets. The complexity of the representation in the two disciplines lies, on the one hand, in the number of correlated variables to be analyzed simultaneously and, on the other hand, in the many parameters the user must control to successfully drive the analysis. One identified goal of the CoRSAIRe project is to allow the user to interact with the virtual data in real time, acting on the evolution of the studied phenomena (to correct, target, modify, annotate, etc.) according to the information that he or she perceives. The multimodal management of the interaction is based on the exploitation of stereoscopic visualization, three-dimensional audio, and haptic feedback. When considering multimodality, an important and non-trivial issue is the definition of rules and principles for deciding how information should be distributed across the different modality channels, and how user commands are made available on the different interaction modalities.

In contrast to the visual or haptic modalities, the sonification of data offers a more global comprehension of the information through full 3D reproduction, while also providing an improved representation of temporal dependencies. This method of information presentation is particularly well suited to the human capacity for auditory analysis, which can extract periodicities and sophisticated structures over long durations.
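The paper leaves the concrete mapping of data to sound to the application domains discussed below. As a minimal sketch of the general parameter-mapping approach (all function names and parameter values here are illustrative, not taken from the project), the following Python fragment maps a one-dimensional data series to pitch and renders it offline to a WAV file using only the standard library; an interactive implementation would instead stream the synthesis in real time.

    import math, struct, wave

    def sonify(data, filename="sonification.wav",
               fmin=220.0, fmax=880.0, note_dur=0.15, rate=44100):
        """Map each data value to a pitch between fmin and fmax (Hz)
        and render the sequence as a mono 16-bit WAV file."""
        lo, hi = min(data), max(data)
        span = (hi - lo) or 1.0                  # avoid division by zero
        samples = []
        for value in data:
            freq = fmin + (value - lo) / span * (fmax - fmin)
            n = int(note_dur * rate)
            for i in range(n):
                env = 1.0 - i / n                # linear decay envelope
                samples.append(env * math.sin(2 * math.pi * freq * i / rate))
        with wave.open(filename, "w") as wav:
            wav.setnchannels(1)
            wav.setsampwidth(2)                  # 16-bit samples
            wav.setframerate(rate)
            wav.writeframes(b"".join(
                struct.pack("<h", int(32767 * s)) for s in samples))

    # Example: sonify a slowly oscillating, pressure-like signal.
    sonify([math.sin(0.2 * t) for t in range(60)])

The decay envelope makes successive values audible as discrete events; a real system would expose the mapping (pitch, loudness, timbre, spatial position) as a design choice per variable.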
Development of the sonic representation of large data structures involves several key points of research and investigation. The current methodologies for interactive sonification in the two application domains are very different; the basis for each is discussed in the following sections.

2. MULTI-MODAL DISTRIBUTION

The implementation of multimodal rendering is considered in two parts: the analysis of end-user needs and the conception of appropriate multimodal interactions, and the design of the hardware and software technologies enabling these interactions. Rather than attempting to design a fully independent multimodal distribution engine [3], the technological approach is centered on a “multimodal supervisor.” Interactions are designed by the developer of the application, who chooses the paradigms for all tasks. When the user wants to perform a specific task, the application proposes an interaction to the supervisor, which is then in charge of validating, modifying, or further specifying it. This choice depends on a set of distribution rules, the context of the task, the state of the current rendering, and the command of the user. The criteria that could influence the rendering must be known at all times. The dynamic context is observed by a module that translates various state variables (tracking data, etc.) into meaningful parameters: where the user is and what he or she is doing, how the scene is organized, and so on. Another module is in charge of observing the static and dynamic rendering capacities of the application (the modalities available in the application and the load on each of them). Regarding user commands, once the user has decided to perform a task with a specific paradigm, the supervisor must take this high-level command into account: it should validate the interaction, even if it is not the best choice, unless it is impossible or dangerous.
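The supervisor is described here only at the architectural level. As a rough sketch of the validation step (class names, method names, and the load threshold are all hypothetical, not from the project), the following Python fragment accepts an interaction proposed by the application and validates, modifies, or rejects it based on distribution rules, the task context, and the current rendering load.

    from dataclasses import dataclass, field

    @dataclass
    class Proposal:
        task: str                      # e.g. "annotate", "probe_flow"
        modality: str                  # modality requested by the application
        alternatives: list = field(default_factory=list)

    class MultimodalSupervisor:
        """Hypothetical sketch of the supervisor's decision step."""

        def __init__(self, rules, context, renderer):
            self.rules = rules         # set of distribution rules
            self.context = context     # dynamic-context observer module
            self.renderer = renderer   # rendering-capacity observer module

        def decide(self, proposal):
            # "Impossible or dangerous" combinations are rejected or
            # redirected; otherwise the user's explicit choice is
            # respected even when it is not the optimal one.
            if self.rules.forbidden(proposal.task, proposal.modality,
                                    self.context):
                for alt in proposal.alternatives:
                    if not self.rules.forbidden(proposal.task, alt,
                                                self.context):
                        return Proposal(proposal.task, alt)   # modified
                return None                                   # rejected
            # If the requested modality is overloaded, part of the
            # rendering may be redistributed, but the interaction
            # itself is still validated (threshold is illustrative).
            if self.renderer.load(proposal.modality) > 0.9:
                self.renderer.rebalance(proposal.modality)
            return proposal                                   # validated

In this sketch, the rules, context observer, and capacity observer stand in for the dedicated modules described above; the fragment illustrates only the control flow of a single decision, not the actual rule set or rendering management of the project.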