Analyses of Human Sensitivity to Redirected Walking

Frank Steinicke*, Gerd Bruder†
Visualization and Computer Graphics Group, Department of Computer Science, WWU Münster, Germany

Jason Jerald‡
Effective Virtual Environments Group, Department of Computer Science, UNC at Chapel Hill, USA

Harald Frenz§, Markus Lappe¶
Psychology Department II, WWU Münster, Germany

Abstract

Redirected walking allows users to walk through large-scale immersive virtual environments (IVEs) while physically remaining in a reasonably small workspace, by intentionally injecting scene motion into the IVE. In a constant-stimuli experiment with a two-alternative forced-choice task we have quantified how much humans can unknowingly be redirected onto virtual paths which differ from the paths they actually walk. 18 subjects were tested in four different experiments: (E1a) discrimination between virtual and physical rotation, (E1b) discrimination between two successive rotations, (E2) discrimination between virtual and physical translation, and discrimination of walking direction (E3a) without and (E3b) with start-up. In experiment E1a subjects performed rotations to which different gains had been applied, and then had to choose whether or not the visually perceived rotation was greater than the physical rotation. In experiment E1b subjects discriminated between two successive rotations to which different gains had been applied. In experiment E2 subjects chose whether they thought that the physical walk was longer than the visually perceived scaled travel distance. In experiments E3a and E3b subjects walked a straight path in the IVE which was physically bent to the left or to the right, and they estimated the direction of the curvature. In experiment E3a the gain was applied immediately, whereas in experiment E3b the gain was applied after a start-up of two meters.
Our results show that users can be turned physically about 68% more or 10% less than the perceived virtual rotation, that distances can be up- or down-scaled by 22%, and that users can be redirected on a circular arc with a radius greater than 24 meters while they believe they are walking straight.

CR Categories: H.5.1 [INFORMATION INTERFACES AND PRESENTATION]: Multimedia Information Systems—Artificial, augmented, and virtual realities

*e-mail: fsteini@math.uni-muenster.de
†e-mail: g_brud01@math.uni-muenster.de
‡e-mail: jjerald@email.unc.edu
§e-mail: frenzh@uni-muenster.de
¶e-mail: mlappe@uni-muenster.de

1 Introduction

Walking is the most basic and intuitive way of moving within the real world. Keeping such an active and dynamic ability to navigate through large-scale immersive virtual environments (IVEs) is of great interest for many 3D applications demanding locomotion, such as urban planning, tourism, or 3D entertainment. Many domains are inherently three-dimensional, and advanced visual simulations often provide a good sense of locomotion, but exclusively visual stimuli cannot address the vestibular-proprioceptive system.

Real walking through IVEs is often not possible [Whitton et al. 2005]. An obvious approach is to transfer the user's tracked head movements to changes of the virtual camera in the virtual world by means of a one-to-one mapping. This technique has the drawback that the users' movements are restricted by the limited range of the tracking sensors and a rather small workspace in the real world. Therefore, concepts for virtual locomotion methods are needed that enable walking over large distances in the virtual world while remaining within a relatively small space in the real world.

Various prototypes of interface devices have been developed to prevent a displacement in the real world. These devices include torus-shaped omni-directional treadmills [Bouguila and Sato 2002; Bouguila et al. 2002], motion foot pads, robot tiles [Iwata et al.
2006; Iwata et al. 2005], and motion carpets [Schwaiger et al. 2007]. Although these hardware systems represent enormous technological achievements, they are still very expensive and will not be generally accessible in the foreseeable future. Hence, there is a tremendous demand for more practical approaches. As a solution to this challenge, traveling by exploiting walk-like gestures has been proposed in many different variants, giving the user the impression of walking. For example, the walking-in-place approach exploits walk-like gestures to travel through an IVE while the user remains physically at nearly the same position [Usoh et al. 1999; Schwaiger et al. 2007; Su 2007; Williams et al. 2006; Feasel et al. 2008]. However, real walking has been shown to be a more presence-enhancing locomotion technique than other navigation metaphors [Usoh et al. 1999].

Cognition and perception research suggests that cost-efficient as well as natural alternatives exist. It is known from perceptual psychology that vision often dominates proprioceptive and vestibular sensation when they disagree [Dichgans and Brandt 1978; Berthoz 2000]. When, in perceptual experiments, human participants can use only vision to judge their motion through a virtual scene, they can successfully estimate their momentary direction of self-motion but are much less accurate in perceiving their paths of travel [Lappe et al. 1999; Bertin et al. 2000]. Therefore, since users tend to unwittingly compensate for small inconsistencies during walking, it is possible to guide them along paths in the real world which differ from the paths perceived in the virtual world. This redirected walking enables users to explore a virtual world that is considerably larger than the tracked working space [Razzaque 2005] (see Figure 1).

In this paper we present a series of experiments in which we have quantified how much humans can be redirected without observing inconsistencies between real and virtual motions.
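The gain-based redirection described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the function name, signature, and default gain values are hypothetical, chosen to lie near the detection thresholds reported in this paper (translation gains up to 1.22, curvature radius of at least 24 m).

```python
import math

def redirect(dx, dy, dyaw, g_t=1.1, g_r=1.2, curve_radius=24.0):
    """Map one frame of real (tracked) motion to virtual camera motion.

    dx, dy       : real translation this frame (meters, walking plane)
    dyaw         : real head rotation this frame (radians)
    g_t          : translation gain (virtual distance = g_t * real distance)
    g_r          : rotation gain (virtual rotation = g_r * real rotation)
    curve_radius : radius (m) of the real-world arc onto which a virtually
                   straight walk is bent; radii of 24 m or more stayed below
                   the detection threshold reported in this paper.
    """
    dist = math.hypot(dx, dy)
    # Translation gain: scale the distance traveled in the virtual world.
    v_dist = g_t * dist
    # Rotation gain: scale the user's physical head rotation.
    v_yaw = g_r * dyaw
    # Curvature gain: inject extra virtual yaw proportional to the distance
    # walked; the user compensates for it unknowingly and thereby walks on
    # a real-world arc of the given radius while moving straight in the IVE.
    v_yaw += dist / curve_radius
    return v_dist, v_yaw
```

For example, with a translation gain of 1.22 a 10 m real walk maps to 12.2 m of virtual travel, while the curvature term adds 10/24 rad of scene rotation over that walk.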
The remainder of this paper is structured as follows. Section 2 summarizes previous work related to locomotion and perception in virtual reality (VR) environments. In Section 3 we present a taxonomy of the redirected walking techniques used in the experiments described in Section 4. Section 5 summarizes the results and discusses implications for the design of virtual locomotion user interfaces. Finally, we give an overview of future work.