Water Detection From Downwash-Induced Optical Flow for a Multirotor UAV

Ricardo Pombeiro, Ricardo Mendonça, Paulo Rodrigues, Francisco Marques, André Lourenço, Eduardo Pinto, Pedro Santana and José Barata

Abstract—Terrains of different nature exhibit distinct structural dynamics when exposed to wind, owing to their intrinsic material composition. This translates into characteristic optical flow patterns that can be used to identify the terrain type. In this sense, this paper proposes an active vision-based water detection model that exploits the predictable optical flow patterns induced by the downwash effect of vertical take-off and landing Unmanned Aerial Vehicles (UAV). To determine whether a water surface is below the UAV, the system tracks the optical flow in the video feed captured by a downward-looking camera. A histogram of optical flow orientations is then built and compared against a model histogram encoding the expected downwash-induced flow; the similarity between the two histograms is used as the likelihood that the terrain is covered by water. The resulting classification can be used to guide the landing of the UAV or to produce cost maps supporting ground vehicles' safe navigation. The model was successfully validated on 20 videos acquired with a hexacopter while hovering above sandy, grassy, and water-covered terrains.

I. INTRODUCTION

Water detection is a crucial capability for a wide range of autonomous vehicles: ground vehicles must avoid water bodies, aerial vehicles must determine suitable landing areas, and surface vehicles must detect safe passageways. Laser scanners have proven useful for detecting water, as no-return situations usually hint at the presence of a water body [1], [2], [3]. However, down to a certain depth, as in small puddles or shallow waters, some light may be reflected off the underwater ground surface.
This creates a false indication that a dry and solid surface may be present where a potentially hazardous water body may be, rendering the approach, by itself, insufficient.

Several vision-based solutions have been proposed as well. Stereo vision methods for water detection rely to a large extent on the fact that object reflections are reconstructed as 3D points below the surface level [4], [5], [6]. However, this technique is limited to the near field and suffers from common stereo-vision pitfalls, such as sensitivity to lighting and weather conditions.

Monocular vision cues have also been proposed for water detection. For instance, symmetry operators can be used to detect reflections on the water surface [7], [8]. Textureless and high-brightness image patches can also be indicative of water presence [5]. Detecting sky reflections [9] and colour variations as the vehicle moves [10] are also interesting cues. These cues rely on the assumption that the camera is roughly pointing towards the horizon line, which is considerably different from the bird's-eye view usually available to aerial vehicles.

R. Pombeiro, R. Mendonça, P. Rodrigues, F. Marques, A. Lourenço, E. Pinto, and J. Barata are with CTS-UNINOVA, Universidade Nova de Lisboa (UNL), Portugal. E-mails: {rjp, rmm, pmr, fam, afl, emp, jab}@uninova.pt
P. Santana is with ISCTE - Instituto Universitário de Lisboa (ISCTE-IUL), Portugal, and Instituto de Telecomunicações, Portugal. E-mail: pedro.santana@iscte.pt

Fig. 1. The UAV's propulsion system used as an active perception mechanism to identify water bodies from their susceptibility to the vehicle's propeller whirlwind. (a) Downwash effect caused by the UAV hovering above a water body. (b) Video frame captured by the UAV's downward-looking camera. (c) Optical flow induced by the UAV's downwash effect (depicted by the blue lines on the captured image).
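The orientation-histogram comparison at the core of the proposed model can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the per-feature displacement vectors have already been obtained (e.g., by a sparse feature tracker between consecutive frames of the downward-looking camera), it uses histogram intersection as the similarity measure, and it leaves the model histogram as an input (for radially expanding ripples centred below the hovering UAV, a roughly uniform orientation histogram is one plausible choice).

```python
import numpy as np

def flow_orientation_histogram(flow, bins=36):
    """Normalized histogram of optical-flow orientations.

    `flow` is an (N, 2) array of per-feature displacement vectors
    (dx, dy). Near-zero vectors are discarded as tracking noise."""
    dx, dy = flow[:, 0], flow[:, 1]
    magnitude = np.hypot(dx, dy)
    keep = magnitude > 1e-3
    angles = np.arctan2(dy[keep], dx[keep])  # orientation in [-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] between two normalized histograms."""
    return float(np.minimum(h1, h2).sum())

def water_likelihood(flow, model_hist):
    """Likelihood that the observed downwash-induced flow matches the
    model of flow over a water surface."""
    observed = flow_orientation_histogram(flow, bins=len(model_hist))
    return histogram_intersection(observed, model_hist)
```

As a toy usage example, flow vectors whose orientations are spread uniformly (as with ripples expanding radially from below the vehicle) score highly against a uniform model histogram, whereas unidirectional flow (e.g., the whole scene drifting one way) scores poorly.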
Rather than relying on a set of known cues, complex sets of perceptual features can also be employed by learning image classifiers from offline datasets [1], [8], [11]. However, reflections, foliage, or other objects will impair the classifiers' accuracy. To mitigate this problem, we have previously proposed a method for a surface vehicle to supervise, based on its own sensors, the online learning of water/land aerial image classifiers to be used by an Unmanned Aerial Vehicle (UAV) teammate [12]. However, the aerial vehicle's dependency on a surface robotic teammate renders this approach mostly limited to marsupial robotic systems [13].

Motionless water surfaces do not exhibit a detectable texture if there is no natural source of disturbance. However, when the UAV approaches the water body, its rotor blades' whirl