Applying Daytime Colors to Multiband Nightvision Imagery

Alexander Toet
TNO Human Factors, Soesterberg, The Netherlands
toet@tm.tno.nl

Abstract - We present a method to give (fused) multiband night-time imagery a natural day-time color appearance. For input, the method requires a false color RGB image that is produced by mapping three individual bands (or the first three principal components) of a multiband nightvision system to the respective channels of an RGB image. The false color RGB nightvision image is transformed into a perceptually decorrelated color space. In this color space the first order statistics of a natural color image (the target scene) are transferred to the multiband nightvision image (the source scene). To obtain a natural color representation of the multiband night-time imagery, the compositions of the source and target scenes should be similar to some degree. The inverse transformation to RGB space yields a nightvision image with a day-time color appearance.

Keywords: Image fusion, infrared, false color, nightvision, intensified imagery, pyramid.

1 Introduction

Modern night-time cameras are designed to expand the conditions under which human observers can operate. A functional piece of equipment must therefore provide an image that leads to good perceptual awareness in most environmental and operational conditions (to "Own the weather" or "Own the night"). The two most common night-time imaging systems display either emitted infrared (IR) radiation or reflected light, and thus provide complementary information about the inspected scene. A suitably combined or fused representation of IR and (intensified) visual imagery may enable an observer to construct a more complete mental representation of the perceived scene, resulting in a greater degree of situational awareness [17]. A false color representation of fused night-time imagery that closely resembles a natural daylight color image will help the observer by making scene interpretation more intuitive.
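The band-to-channel mapping that produces the required false color input can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function name and the min-max display normalization are my own assumptions. For a sensor with three bands the bands map directly to R, G, and B; for more bands, the first three principal components are used, as the abstract describes.

```python
import numpy as np

def multiband_to_false_color(bands):
    """Map a (H, W, N) multiband stack to a false color RGB image.

    For N == 3 the bands map directly onto the R, G, B channels; for
    N > 3 the first three principal components are used instead.
    (Illustrative sketch; normalization choice is an assumption.)
    """
    h, w, n = bands.shape
    if n == 3:
        rgb = bands.astype(float)
    else:
        flat = bands.reshape(-1, n).astype(float)
        flat -= flat.mean(axis=0)          # center before PCA
        # PCA via SVD; the first three components become R, G, B
        _, _, vt = np.linalg.svd(flat, full_matrices=False)
        rgb = (flat @ vt[:3].T).reshape(h, w, 3)
    # stretch each channel to [0, 1] for display
    lo = rgb.reshape(-1, 3).min(axis=0)
    hi = rgb.reshape(-1, 3).max(axis=0)
    return (rgb - lo) / np.where(hi > lo, hi - lo, 1.0)
```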
The rapid development of multi-band infrared and visual nightvision systems has led to an increased interest in color fused ergonomic representations of multiple sensor signals [1-3, 5, 7, 12, 13, 18-21]. Simply mapping multiple spectral bands of imagery into a three dimensional color space already generates an immediate benefit, since the human eye can discern several thousand colors, whereas it can only distinguish about 100 shades of grey at any instant. Combining bands in color space therefore provides a method to increase the dynamic range of a sensor system [4]. Experiments have convincingly demonstrated that appropriately designed false color rendering of night-time imagery can significantly improve observer performance and reaction times in tasks that involve scene segmentation and classification [5, 14, 16, 18, 24]. However, inappropriate color mappings may hinder situational awareness [8, 16, 18]. One of the main reasons appears to be the counterintuitive appearance of scenes rendered in artificial color schemes, together with a lack of color constancy [18]. Hence, an ergonomic color scheme should produce night vision imagery with a natural appearance and with colors that are to some extent invariant to changes in the environmental conditions (i.e. the image should always have more or less the same appearance).

Reinhard et al. [10] recently introduced a method to transfer one image's color characteristics to another. The method was designed to give synthetic images a natural appearance. Here we show that this method can be applied to transfer the natural color characteristics of daylight color imagery to fused multiband nightvision images. The method employs a transformation to a principal component space that has recently been derived from a large ensemble of hyperspectral images of natural scenes [11].
In this decorrelated color space the first order statistics of natural color images (target scenes) are transferred to the multiband nightvision images (source scenes). The only requirement of the method is that the composition of the source and target scenes is similar to some extent. Hence, the depicted scenes need not be identical; they merely have to resemble each other. For surveillance systems, which usually register a fixed scene, a daylight color image of the same scene that is being
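The statistics-transfer step described above can be sketched as follows, using the lαβ transform of Reinhard et al. [10] (RGB → LMS → log → lαβ). The conversion matrices are those published by Reinhard et al.; the function names and the small clipping constants are my own, and this is a sketch of the general technique rather than the paper's exact implementation (which uses a principal component space derived from hyperspectral data [11]).

```python
import numpy as np

# RGB -> LMS matrix from Reinhard et al. (2001)
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
# log-LMS -> decorrelated l-alpha-beta matrix
LMS2LAB = (np.diag([1/np.sqrt(3), 1/np.sqrt(6), 1/np.sqrt(2)]) @
           np.array([[1.0, 1.0,  1.0],
                     [1.0, 1.0, -2.0],
                     [1.0, -1.0, 0.0]]))

def rgb_to_lab(rgb):
    """RGB image (H, W, 3) -> decorrelated l-alpha-beta space."""
    lms = np.clip(rgb @ RGB2LMS.T, 1e-6, None)  # avoid log(0)
    return np.log10(lms) @ LMS2LAB.T

def lab_to_rgb(lab):
    """Inverse transform: l-alpha-beta -> RGB."""
    lms = 10.0 ** (lab @ np.linalg.inv(LMS2LAB).T)
    return lms @ np.linalg.inv(RGB2LMS).T

def transfer_statistics(source_rgb, target_rgb):
    """Give the false color source image the per-channel mean and
    standard deviation (first order statistics) of the target."""
    src = rgb_to_lab(source_rgb)
    tgt = rgb_to_lab(target_rgb)
    for c in range(3):
        scale = tgt[..., c].std() / (src[..., c].std() + 1e-12)
        src[..., c] = (src[..., c] - src[..., c].mean()) * scale \
                      + tgt[..., c].mean()
    return np.clip(lab_to_rgb(src), 0.0, 1.0)
```

Because the channels of this space are approximately decorrelated, the mean and standard deviation can be matched independently per channel, which is what makes this simple first-order transfer effective.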