ScoreGraph: dynamically activated connectivity among parallel processes for interactive computer music performance

Insook Choi
Human-Computer Intelligent Interaction Laboratory
Beckman Institute, University of Illinois at Urbana-Champaign
405 N Mathews, Urbana, IL 61801
ichoi@ncsa.uiuc.edu

Alex Betts, Beckman Institute, UIUC
Robin Bargar, NCSA and Beckman Institute, UIUC

Abstract

The structural specification and modeling of time-critical real-time systems has become a major area of recent research. This is particularly relevant for computer music when sound computation involves multiple synthesis algorithms, simulations, input devices, and display systems. Such sound computation requires parallel processing in real time 1) to execute its own algorithm, 2) to receive state-change instructions, and 3) to display the changes of its state. In our system the synthesis algorithms reside as open systems in a connectivity configured to support multi-modal performance. Performers generate performance events by interacting with simulations through various input devices; in turn, changes of state in the simulations are reflected in changes of state in the sound and graphic synthesis algorithms. We note the deliberate placement of indirection between performers and synthesis algorithms in order to enhance performability. ScoreGraph incorporates recent advances in graph-based architectures, enabling us to manage multiple parallel tasks with computational efficiency. Dynamic activation of nodes and edges is achieved through a structural definition of connectivity. Efficiency is managed by local activation of graph-organized processes, where the depth of a locality is redefined interactively over time. In this paper we present details of the implementation and case studies of interactive computer music and Virtual Reality compositions realized in ScoreGraph.
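The abstract's notion of local activation bounded by an interactively redefined depth can be sketched as a depth-limited breadth-first traversal over a graph of process nodes. The following Python is an illustrative sketch only, not the paper's implementation; the names `ScoreNode` and `activate_locality` are our own, and `update()` stands in for whatever synthesis or simulation step a node would actually run.

```python
from collections import deque

class ScoreNode:
    """A process node; update() stands in for a synthesis or simulation step."""
    def __init__(self, name):
        self.name = name
        self.edges = []      # outgoing connections to other process nodes
        self.active = False

    def update(self):
        # Placeholder for the node's own algorithm.
        return self.name

def activate_locality(root, depth):
    """Activate root and every node reachable within `depth` edges;
    deactivate all other nodes reachable from root."""
    # First gather the full reachable set so stale activations can be cleared.
    seen, stack = set(), [root]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(n.edges)
    for n in seen:
        n.active = False
    # Breadth-first activation bounded by depth; depth can be changed
    # on the next call, modeling an interactively redefined locality.
    frontier, visited = deque([(root, 0)]), set()
    while frontier:
        node, d = frontier.popleft()
        if node in visited or d > depth:
            continue
        visited.add(node)
        node.active = True
        for nxt in node.edges:
            frontier.append((nxt, d + 1))
    return [n for n in seen if n.active]
```

Only active nodes would then have `update()` scheduled each cycle, which is one plausible reading of how local activation yields computational efficiency.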
1: Introduction

Computer music and performance practice have sustained a marginal relationship while changing their presentational appearance on stage. Stage presentation falls broadly into two categories: 1) tape and instruments, and 2) live processes and instruments. The context for “live process” depends on the configuration of technology, by which the term changes its meaning and scope. The main conceptual challenge is to establish a relation between human and machine, involving sensory and intelligent interpretation, such that the interpretations are conceivable not only to the machine and the performers on stage but also to third observers, the audience. The current system provides a multi-modal performance environment where divisions of labor and communication protocols can be efficiently configured. We refer to the system as ScoreGraph. ScoreGraph evolved within a project for research in human-machine performance, which we describe as an active observation task performed by a human observer in an environment where 1) divisions of labor between human and machine are well defined, 2) interaction is assisted by various means, and 3) the preceding configurations aim at enhancing comprehension of the behavior of the mechanisms under exploration [Choi and Bargar 1995, 1998]. Since we are concerned with the performances of both human and machine, the system maintains and configures two different communication frameworks: multidimensional and multi-modal. By a multidimensional framework we refer to a consistent framework for mapping the affordances of human movement capacity to n control parameters in a computation. In a multi-modal framework, on the other hand, selected groups of control parameters are identified with modalities of interaction. Multi-modality in our usage is specific to the delivery of sensory information to an observer, mediated by various input devices and display functions.
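The distinction between the two frameworks can be made concrete with a small sketch: a multidimensional mapping takes an m-dimensional gesture vector to n control parameters through one consistent (here, linear) map, while the multi-modal framework selects named subsets of those parameters for each modality. This is a hypothetical illustration under our own naming (`map_affordance`, `controls_for`, and the `modalities` table are assumptions, not the paper's API); the linear form of the map is likewise only one possible choice.

```python
def map_affordance(gesture, weights, offsets):
    """Multidimensional framework (sketch): map an m-dimensional gesture
    vector to n control parameters via one consistent affine map."""
    return [offsets[j] + sum(w * g for w, g in zip(weights[j], gesture))
            for j in range(len(offsets))]

# Multi-modal framework (sketch): named modalities identify groups of
# control-parameter indices delivered to a given display function.
modalities = {"audio": [0, 1], "graphics": [2]}

def controls_for(modality, params):
    """Select the control-parameter subset identified with a modality."""
    return [params[i] for i in modalities[modality]]
```

For example, a 2-dimensional gesture mapped to 3 control parameters could route the first two parameters to sound synthesis and the third to graphics, keeping the gesture-to-parameter map independent of how modalities partition the parameters.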