Illumination Invariant Texture Retrieval

Michal Haindl and Pavel Vácha
Institute of Information Theory and Automation, Academy of Sciences CR, 182 08 Prague, Czech Republic
{haindl,vacha}@utia.cas.cz

Abstract

Two fast illumination invariant image retrieval methods for scenes comprising textured objects under variable illumination are introduced. Both methods are based on a texture gradient modelled by an efficient set of random field models. We developed illumination-insensitive measures for the representation of textured images and compared them favourably with steerable pyramid and Gabor features in illumination invariant BTF texture recognition.

1. Introduction

Content-based image retrieval systems typically query image databases based on colour and textural features. Optimal robust features should be geometrically and illumination invariant. Although image retrieval has been an active research area for many years, this difficult problem is still far from being solved. Simpler methods based only on colour features achieve illumination invariance by normalizing colour bands or by using colour ratio histograms. However, colour-based methods rarely perform well in natural visual scenes because they cannot detect similar objects under different locations, illuminations, or backgrounds.

Textures are important clues for identifying the objects present in a visual scene. Unfortunately, the appearance of natural rough textures is highly illumination dependent. As a consequence, most recent rough-texture-based classification or segmentation methods require multiple training images captured under a full variety of possible illumination conditions for each class. Such learning is obviously clumsy and very often even impossible if the required measurements are not available. The authors of [4] allow a single training image per class, but they require surfaces of uniform albedo, smooth and shallow relief, illumination sufficiently far from the texture macro-normal and, most seriously, knowledge of the illumination direction for all involved (trained as well as tested) textures.

Although it was demonstrated in [7], [3] that for an object with Lambertian reflectance there are no discriminative functions that are invariant to illumination, the article [3] empirically verified that the direction of the image gradient is reasonably insensitive to changes in illumination direction. Our proposed methods build on these results by introducing a simple parametric measure robust to illumination changes. We present two methods that require neither mutual texture registration nor knowledge of the illumination direction. They can be applied to textured object retrieval when only a single training texture image per class, captured under a single illumination, is available.

2. Texture Representation

We use a gray-scale approximation of coloured bidirectional texture function (BTF) textures. Although we neglect spectral information, this representation is still sufficient for the retrieval application while simultaneously speeding up the proposed methods. Both our methods, using either a causal simultaneous autoregressive (CAR) or a Gaussian Markov random field (GMRF) texture representation, utilize a multiscale decomposition based on the Gaussian pyramid. The Gaussian pyramid is a sequence of images $Y^{(k)}$ in which each image is a low-pass, down-sampled version of its predecessor. The weighting function (FIR generating kernel) is chosen subject to the separability, normalization, symmetry and equal contribution constraints (for details see [2]).
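As a concrete illustration (ours, not part of the original formulation), the following minimal Python sketch builds such a pyramid. The 5-tap binomial kernel [1, 4, 6, 4, 1]/16 is one standard generating kernel that satisfies all four constraints; the kernel choice and the boundary handling are our assumptions, since the paper only states the constraints.

    import numpy as np
    from scipy.ndimage import correlate1d

    # Separable FIR generating kernel: normalized (sums to 1), symmetric,
    # and satisfying the equal-contribution constraint. One standard
    # choice; the paper does not fix the kernel itself.
    KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

    def pyramid_reduce(image):
        """Low-pass filter with the separable kernel, then downsample by 2."""
        smoothed = correlate1d(image, KERNEL, axis=0, mode='reflect')
        smoothed = correlate1d(smoothed, KERNEL, axis=1, mode='reflect')
        return smoothed[::2, ::2]

    def gaussian_pyramid(image, levels):
        """Return the sequence Y^(0), ..., Y^(levels-1)."""
        pyramid = [np.asarray(image, dtype=np.float64)]
        for _ in range(levels - 1):
            pyramid.append(pyramid_reduce(pyramid[-1]))
        return pyramid

Because the kernel is separable, the filtering is applied once per image axis, which keeps each reduction step cheap relative to a full 2-D convolution.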
This multiscale approach allows both models to use a smaller contextual neighbourhood and consequently smaller and more robust parameter sets. We assume the single-scale factor texture gradient

$\nabla Y_r^{(k)} = \left[ \frac{\partial Y_r^{(k)}}{\partial r_1}, \frac{\partial Y_r^{(k)}}{\partial r_2} \right]^T$

of the scene to be locally modelled using either the CAR or the GMRF model. Both proposed methods use the corresponding random field model to estimate the factor gradient model parameters.

2.1. CAR Factor Model

We assume the texture gradient components of the single-resolution factor to be locally modelled by an adaptive CAR model.
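To make the modelled quantity concrete, here is a brief sketch (our illustration; the paper does not prescribe a particular discrete operator) that computes the two gradient components at every pyramid level using central differences:

    import numpy as np

    def gradient_factors(pyramid):
        """Per-level texture gradient [dY/dr1, dY/dr2], stacked along the
        last axis. Central differences via np.gradient are one possible
        discrete estimate; the paper does not fix the operator."""
        factors = []
        for level in pyramid:
            d_r1, d_r2 = np.gradient(level)  # row and column derivatives
            factors.append(np.stack([d_r1, d_r2], axis=-1))
        return factors

Each resulting two-component field is the quantity whose parameters the CAR (or GMRF) factor model then estimates.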