Human-Machine Performance Configuration for Multidimensional and Multi-modal Interaction in Virtual Environments

Insook Choi and Robin Bargar*
Human-Computer Intelligent Interaction, Beckman Institute, and NCSA
* National Center for Supercomputing Applications
Beckman Institute, 405 N. Mathews, Urbana, IL 61801, USA
email: ichoi@ncsa.uiuc.edu

ABSTRACT

This paper presents a human-machine performance configuration for multidimensional and multi-modal interaction in virtual environments. A performer is described as an observer interacting with a virtual environment to extract information in a time-critical manner. In the present research a performer's multi-modal capacity is supported by time scheduling techniques for parallel processing of sensors and displays to provide synchronous perceptual feedback. This modality is coupled to multidimensional numerical simulations. A software architecture facilitates a temporal framework for interactivity, complementary to the static spatial organization of geometry for graphical display. Mutually applicable design criteria include the management of computing resources, the configuration of an observation space, and VR authoring. Accordingly, the following system designs are introduced: 1) criteria for bounded synchronization among parallel processes; 2) an interface paradigm implemented with these criteria to facilitate spatio-temporal articulation of multidimensional control signals extracted from continuous real-time gestures of observers; 3) a run-time configuration protocol implemented to support an efficient VR authoring capacity. Parallel processes are represented as nodes in a directed graph, with intelligent edges that determine how services between nodes are to be managed.

1: Introduction

Human-machine performance is an active observation task instantiated by a human observer in an environment where 1) divisions of labor between human and machine are well defined, and 2) interaction is assisted with various means to enhance comprehension of the behavior of the mechanisms under exploration. Following this definition we differentiate two aspects of interaction in human-machine performance: those that are multidimensional and those that are multi-modal.
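The abstract's architecture, in which parallel processes are nodes in a directed graph whose edges decide how services between nodes are managed, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class names, the example process rates, and the rate-ratio heuristic are all assumptions introduced for the example.

```python
# Sketch: parallel processes as nodes in a directed graph, with
# "intelligent" edges choosing how services between nodes are managed.
# All names and rates here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProcessNode:
    name: str        # e.g. "tracker", "simulation", "renderer"
    rate_hz: float   # nominal update rate of this parallel process

@dataclass
class ServiceEdge:
    src: ProcessNode
    dst: ProcessNode
    # Policy decides how the destination consumes the source's output.
    policy: Callable[[float, float], str]

def bounded_sync_policy(src_rate: float, dst_rate: float) -> str:
    """Pick a service mode from the producer/consumer rate ratio
    (an illustrative heuristic, not the paper's criteria)."""
    if src_rate >= dst_rate:
        return "sample-latest"   # fast producer: consumer takes newest value
    return "interpolate"         # slow producer: consumer interpolates

tracker = ProcessNode("tracker", 120.0)
sim = ProcessNode("simulation", 30.0)
renderer = ProcessNode("renderer", 60.0)

edges: List[ServiceEdge] = [
    ServiceEdge(tracker, sim, bounded_sync_policy),
    ServiceEdge(sim, renderer, bounded_sync_policy),
]

for e in edges:
    mode = e.policy(e.src.rate_hz, e.dst.rate_hz)
    print(f"{e.src.name} -> {e.dst.name}: {mode}")
```

The point of placing the policy on the edge rather than in either node is that producer and consumer can run asynchronously at their own rates, with the edge absorbing the rate mismatch.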
By multidimensional interaction we refer to interaction between human and machine involving a consistent framework for mapping the affordances of human modal capacity to n control parameters in a computation. Since interaction implies observations in time, multidimensional interaction assumes a temporal organization of instructions, where the instructions are applied concurrently to multiple control parameters of a computation. We thus note that our usage extends the term multidimensional, which is frequently applied to measurements taken by arrays of sensors to depict the spatio-temporal properties of a signal. In multi-modal interaction, on the other hand, selected groups of control parameters are identified with modalities of interaction. Multi-modality in our usage is specific to the delivery of sensory information to an observer, mediated by various input devices and display functions. In our application input devices are typically linked to the n-dimensional systems under exploration. We will call such n-dimensional systems simulations. The further distinction between simulations and numerical models is discussed in Section 3. The display functions involve graphics and auditory display engines to represent the state changes of the simulations consequent to interactive exploration. We consider two varieties of parameters: those attached to modality and those defined by simulations. The composition of groupings and mappings between them plays an important part in constructing an effective interactive environment.

The term human-machine performance has a precedent in the term human-machine intelligent interaction, with the following two emphases in our project. The first emphasis is on the multi-modal capacity of a performer.
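The two parameter varieties described above, and the mapping between them, can be sketched concretely. This is a hypothetical illustration, assuming invented names throughout: the gesture dimensions (`hand_x`, `hand_y`, `grip`) stand for modal parameters measured by input devices, and the keys of `mapping` stand for a simulation's control parameters.

```python
# Sketch: composing a mapping from modal parameters (continuous gesture
# measurements) to n simulation control parameters, applied concurrently
# each frame. All parameter names are illustrative assumptions.
from typing import Callable, Dict

# Modal parameters: one frame of gesture measurements from input devices.
gesture: Dict[str, float] = {"hand_x": 0.4, "hand_y": -0.2, "grip": 0.8}

# Each entry couples one modality dimension to one simulation control.
mapping: Dict[str, Callable[[Dict[str, float]], float]] = {
    "chaos_coefficient": lambda g: 2.0 + g["hand_x"],  # spatial position
    "damping":           lambda g: 0.5 * g["grip"],    # grip pressure
    "excitation":        lambda g: abs(g["hand_y"]),   # vertical motion
}

def apply_mapping(g: Dict[str, float]) -> Dict[str, float]:
    """Apply all gesture-to-simulation mappings for one frame."""
    return {param: f(g) for param, f in mapping.items()}

controls = apply_mapping(gesture)
print(controls)
```

Regrouping which gesture dimensions feed which simulation controls is then a matter of editing `mapping`, which is one way to read the paper's claim that the composition of groupings and mappings shapes the interactive environment.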
This capacity is supported by parallel processing computing power, various input devices, time scheduling techniques for gesture input, and the configuration of graphics and sound engines to provide perceptual feedback to a performer. The second is on the method for coupling a performer's modality to a multidimensional simulated environment. The environments under development comprise hardware sensors, audio-visual displays and asynchronous parallel computation processes. The capacity to manage multiple tasks in parallel opens many possibilities for integrating alternative resources. The most desirable implication is that, when managed in a well-architected environment, the simulation of complex dynamics no longer purchases computational accuracy at the cost of real-time processing in an interactive mode. We propose