Air Transport and Operations Symposium 2015

Gaze-coupled Perspective for Enhanced Human-Machine Interfaces in Aeronautics

N. Masotti 1 and F. Persiani 2
University of Bologna, Bologna, Italy

1 Ph.D. Student, Department of Industrial Engineering, nicola.masotti@unibo.it
2 Full Professor, Department of Industrial Engineering, franco.persiani@unibo.it

In aeronautics, many Virtual/Augmented Reality (V/AR) facilities, such as flight simulators, control-tower simulators, remote towers and flight reconstruction software, rely on the assumption that the viewer will most likely stay still in a pre-defined position. For this reason, they can be dubbed Desktop Virtual/Augmented Reality (D-V/AR) interfaces, in contrast with ‘gaze-coupled’ V/AR interfaces, which take the viewpoint position into account within the rendering pipeline. Surprisingly, in spite of the rough perspective model being used, D-V/AR is often well accepted by both designers and end users. Indeed, in some cases, it yields credible results. However, when the viewer’s eyes move far away from their ‘default’ position, the rendered image is affected by significant error, resulting in a poorly immersive and/or unrealistic experience. This paper discusses gaze-dependent visual interfaces as a means to enhance Human-Machine Interaction (HMI) and visual perception in V/AR-based aeronautical facilities. Within the paper, a classification of the leading V/AR display techniques is given, including D-V/AR, Off-axis V/AR (O-V/AR), Generalized V/AR (G-V/AR), Stereoscopic V/AR (S-V/AR), Head-coupled V/AR (H-V/AR) and Fish-Tank V/AR (F-V/AR). For each technique, benefits, downsides and constraints are discussed. A set of suitable applications for gaze-dependent HMI is also identified, including, but not limited to, flight simulation, flight reconstruction, air navigation service provision and unmanned aerial system governance.

I. Introduction

Virtual Environments (VE) may be defined as computer-based facilities capable of recreating sensory experiences, including taste, sight, smell, sound and touch. Nevertheless, most applications primarily focus on the visual component. Often, synthetic information is shown directly – i.e. the VE uses Virtual Reality (VR) as a medium – but, in some cases, information may be superimposed on the physical world as well. In the latter case it is more appropriate to speak of Augmented Environments (AE) rather than VE. Nowadays, both VE and AE are profitably used in many fields, including entertainment, product design, the automotive and naval industries, and aeronautics.

Historically, several display techniques have been developed for Virtual/Augmented Reality (V/AR) facilities. Many of these have been used in aeronautical tools and equipment, such as flight simulators, control tower simulators, synthetic vision systems, head-up displays, flight-reconstruction software and manned/unmanned aircraft avionics. As a compromise between correctness and viability, most of these techniques merely approximate the viewer’s perspective model, when, in fact, a much more complex algorithm should be used.2,3 Furthermore, VEs often rely on dedicated equipment. Therefore, V/AR developers are not only concerned with computer graphics content creation – namely modelling, texturing and animation – but also with software programming, artificial intelligence design, and I/O peripheral management. Moreover, for these systems to perform effectively, applied sciences, such as interface design, need to draw upon basic sciences such as computer vision, anthropometry, physiology and cognitive ergonomics.1 The intertwining of these disciplines is sometimes referred to as Human Factors and Ergonomics (HF&E).
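To give a concrete flavour of what “taking the viewpoint position into account within the rendering pipeline” involves, the sketch below builds an off-axis (asymmetric-frustum) projection matrix from a tracked eye position and the physical corners of the display, in the spirit of the generalized perspective projection commonly attributed to Kooima. It is a minimal Python/NumPy illustration added by the editor, not code from the paper; the function name, coordinate conventions and example dimensions are assumptions.

```python
import numpy as np

def off_axis_projection(eye, pa, pb, pc, near, far):
    """Illustrative gaze-coupled (off-axis) projection matrix.

    eye        -- tracked eye position, world coordinates (assumed metres)
    pa, pb, pc -- screen corners: lower-left, lower-right, upper-left
    near, far  -- clipping-plane distances
    """
    eye, pa, pb, pc = map(np.asarray, (eye, pa, pb, pc))

    # Orthonormal basis of the screen plane.
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen right
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen up
    vn = np.cross(vr, vu)                      # screen normal, towards the eye
    vn /= np.linalg.norm(vn)

    # Vectors from the eye to the screen corners.
    va, vb, vc = pa - eye, pb - eye, pc - eye

    # Distance from the eye to the screen plane.
    d = -np.dot(va, vn)

    # Frustum extents on the near plane (asymmetric in general).
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # Standard asymmetric-frustum (glFrustum-style) projection matrix.
    P = np.array([
        [2 * near / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * near / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

    # Rotate world coordinates into the screen-aligned frame ...
    M = np.identity(4)
    M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn

    # ... and translate the tracked eye position to the origin.
    T = np.identity(4)
    T[:3, 3] = -eye

    return P @ M @ T

# Example (hypothetical values): a 0.52 m x 0.32 m screen in the z = 0 plane,
# eye tracked 10 cm left of centre and 60 cm away.
corners = ([-0.26, -0.16, 0.0], [0.26, -0.16, 0.0], [-0.26, 0.16, 0.0])
proj = off_axis_projection([-0.10, 0.0, 0.60], *corners, near=0.05, far=100.0)
```

Recomputing this matrix every frame from the tracked eye position is what distinguishes a gaze-coupled interface from a D-V/AR one, where the eye is assumed fixed at a pre-defined position and the frustum never changes.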