Insect Inspired Visual Control of Translatory Flight

Titus R. Neumann and Heinrich H. Bülthoff

Max Planck Institute for Biological Cybernetics, Spemannstraße 38, 72076 Tübingen, Germany
titus.neumann@tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de

Abstract. Flying insects use highly efficient visual strategies to control their self-motion in three-dimensional space. We present a biologically inspired, minimalistic model for visual flight control in an autonomous agent. Large, specialized receptive fields exploit the distribution of local intensities and local motion in an omnidirectional field of view, extracting the information required for attitude control, course stabilization, obstacle avoidance, and altitude control. In open-loop simulations, recordings from each control mechanism robustly indicate the sign of attitude angles, self-rotation, obstacle direction, and altitude deviation, respectively. Closed-loop experiments show that these signals are sufficient for three-dimensional flight stabilization with six degrees of freedom.

In: Advances in Artificial Life – Proceedings of the 6th European Conference on Artificial Life ECAL 2001, Eds. J. Kelemen and P. Sosik, LNCS/LNAI 2159, pp. 627–636, © Springer-Verlag, Berlin (2001)

1 Introduction

Experimental results from insect biology suggest that flying insects use a variety of highly efficient visual strategies for flight control and navigation (e.g. [3], [10]). Considering their extremely small size – less than one cubic millimeter in many insects – as well as their low weight and energy consumption, insect brains outperform any existing technical system. It is assumed that the highly specialized, parallel feed-forward information processing in the insect visual system is essential for the speed and robustness of these behaviors. Modeling these strategies on artificial agents can improve performance compared to traditional approaches while reducing the computational effort.
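The receptive-field readout described in the abstract can be illustrated with a minimal sketch: a control signal is obtained as the inner product between the measured local motion vectors and a fixed weight template tuned to one self-motion component, so that its sign indicates the direction of the corresponding deviation. The function and variable names below are hypothetical, chosen only for illustration; the toy yaw template assumes purely horizontal weights.

```python
import numpy as np

def receptive_field_output(flow, weights):
    """Weighted sum of local motion estimates over all viewing directions.

    flow    : (N, 2) array of local motion vectors, one per viewing direction
    weights : (N, 2) array of template vectors for the same directions
    """
    return float(np.sum(flow * weights))

# Toy yaw-rotation template: sensitive to horizontal image motion only.
n = 8
weights = np.stack([np.ones(n), np.zeros(n)], axis=1)

# Uniform leftward image motion, as induced by a rightward yaw rotation...
flow_left = np.stack([-np.ones(n), np.zeros(n)], axis=1)
# ...and the flow induced by the opposite rotation.
flow_right = -flow_left

# The sign of the readout distinguishes the two rotation directions.
assert receptive_field_output(flow_left, weights) < 0
assert receptive_field_output(flow_right, weights) > 0
```

Such a sign-only readout is all that a bang-bang or proportional controller needs to counteract the deviation, which is consistent with the open-loop results summarized in the abstract.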
Previous studies of biologically motivated visual control of self-motion and obstacle avoidance in artificial systems were limited to motion in a horizontal or vertical plane with one or two degrees of freedom. Mura and Franceschini (1994) simulated vertical obstacle avoidance and altitude control behavior assuming pure forward motion in the vertical plane with fixed attitude angles [7]. Huber and Bülthoff (1997) demonstrated the simulated evolution of two-dimensional obstacle avoidance and tracking behavior in an artificial agent inspired by the visual system of the fly [5]. Srinivasan et al. (1999) applied several principles of insect vision such as rangefinding by "peering", centering behavior, obstacle avoidance, and visual odometry to robot navigation on the ground plane [9]. However, motion in three-dimensional space has six degrees of freedom which cannot be controlled independently from each other due to the anisotropy of the environment determined by gravity [4]. Body rotations about the vertical axis and altitude