ISMAR 2003

Mixed Fantasy: An Integrated System for Delivering MR Experiences

Charles E. Hughes (Computer Science Dept. and Digital Media Program), Christopher B. Stapleton (Inst. for Simulation & Training and Digital Media Program), J. Michael Moshell (Digital Media Program and Computer Science Dept.), Paulius Micikevicius (Computer Science Dept.), Darin E. Hughes (Computer Science Dept.), Peter Stepniewicz (Digital Media Program)

University of Central Florida, Orlando, FL 32816
{ceh@cs.ucf.edu, cstaplet@ist.ucf.edu, moshell@cs.ucf.edu, pmicikev@cs.ucf.edu, darin@cs.ucf.edu, peter@dm.ucf.edu}

Abstract

This paper describes the underlying science and technologies employed in a system we have developed for creating and delivering multimodal Mixed Realities. We also present several aspects of a number of experiences that we have developed and delivered, where those aspects aid understanding of algorithmic requirements and design decisions. The technical contributions include a unique application of retro-reflective material that results in nearly perfect chroma-key, even in the presence of changing illumination, and a component-based approach to scenario delivery. The user interface contribution is the integration of 3-D audio, hypersonic sound and special effects with the visuals typical of an MR experience.

1. Introduction

Mixed Reality, the landscape between the real and the virtual, is an area of research with challenges in science, technology, systems integration and the application of artistic convention. The latter area is the topic of a companion paper [4]. Here we focus on our scientific, technological and system integration contributions.

The scientific results presented here include object selection techniques based on a laser beam emitted from a user-controlled device, and a unique application of retro-reflective material to object registration. The technologies include 3-D and hypersonic sound, as well as the show control software systems and interfaces we have developed.
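The chroma-key idea referenced above can be illustrated with a minimal per-pixel compositing sketch. This is not the authors' actual pipeline; the function name, key color, and tolerance value are illustrative assumptions. The sketch only shows the core operation: wherever a camera pixel is close to the key color (here, the region covered by the lit retro-reflective material), the rendered virtual frame shows through; everywhere else the real camera image is kept.

```python
import numpy as np

def chroma_key_composite(camera, virtual, key_rgb=(0, 0, 255), tol=60):
    """Composite a rendered virtual frame over a camera frame.

    camera, virtual : HxWx3 uint8 arrays (same shape).
    key_rgb, tol    : illustrative key color and per-pixel L1 tolerance.
    """
    cam = camera.astype(np.int16)
    # L1 color distance of each camera pixel from the key color.
    dist = np.abs(cam - np.array(key_rgb, dtype=np.int16)).sum(axis=-1)
    matte = dist < tol                  # True where the key color is seen
    out = camera.copy()
    out[matte] = virtual[matte]         # virtual content replaces keyed pixels
    return out
```

Because the matte is computed per pixel from color distance alone, the approach degrades under changing illumination with ordinary painted screens; the retro-reflective material described in this paper addresses exactly that weakness by returning a consistently saturated key color to the camera.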
The integration contribution is a component-based approach to scenario delivery, the centerpiece of which is a scenario engine that integrates the story components provided by compelling 3-D audio, engaging special effects and the realistic visuals required of a multimodal MR entertainment experience [5].

Section 2 presents the interesting aspects of our MR Graphics Engine, focusing on our contributions to the areas of object selection and registration. Section 3 describes the audio systems we have developed, looking at our 3-D audio design and delivery system and our experiments with hypersonic sound. Section 4 describes the technologies of show control devices (called macrostimulators here) and the creative uses we have made of them in MR experiences. Section 5 describes our scenario authoring and delivery system. Section 6 ends the paper with a brief discussion of directions that our current research efforts are taking.

2. Rendering and Registration

While most of our work transcends the specific technology used to capture the real scene and deliver the mixed reality, some of our work in rendering and registration is highly dependent upon our use of a video see-through head-mounted display (HMD). The specific display/capture device used in our work is the Canon Coastar video see-through HMD [6].

In our MR Graphics Engine, the images displayed in the HMD are rendered in three stages. Registration is performed during the first and the third stages, and includes properly sorting the virtual and real objects in the images displayed to the user, as well as detecting when a user selects a real or virtual object.

2.1. Rendering

Rendering is done in three stages. In the first stage, images are captured by the HMD cameras, scaled, and