Gaining Colour Stability in Live Image Capturing

Guy K. Kloss, Napoleon H. Reyes and Ken A. Hawick
Computer Science, Institute of Information & Mathematical Sciences
Massey University at Albany, Auckland, New Zealand
{G.Kloss | N.H.Reyes | K.A.Hawick}@massey.ac.nz

ABSTRACT
Digital colour cameras are dramatically falling in price, making them affordable for ubiquitous appliances in many applications. Any attempt to use colour information, however, reveals a significant problem that usually escapes our awareness. Due to the adaptive nature of the human visual system, we mostly do not notice changes in illumination characteristics; a camera, by contrast, will measure the same scene differently under changing illumination. Attempts to deduce object colour from images therefore need to cope with the influence of both the illumination and the camera's characteristics. Furthermore, a large variety of colour spaces is available to describe colour; the differences between them and their fitness for quantifying colour are discussed. This paper tries to establish a basic understanding of the intricacies behind the processes involved in capturing images and recognising colour, from light as a stimulus to the colour values sensed by cameras. The goal is to outline a novel approach fusing common industrial best practices with the dynamic adaptation capabilities needed for robustly measuring colour using cameras in real time. First positive results towards improving colour-based reasoning on adaptable colour spaces are stated as an outlook for further development directions.
Categories and Subject Descriptors
I.4 [Computing Methodologies]: Image Processing and Computer Vision; I.4.8 [Image Processing and Computer Vision]: Scene Analysis—Colour; I.2.10 [Computing Methodologies]: Artificial Intelligence—Vision and Scene Understanding; G.1.2 [Mathematics of Computing]: Numerical Analysis—Approximation

Keywords
Chromatic adaptation; colour management; colour vision; colour constancy; colour spaces.

1. INTRODUCTION
In most cases, pictures of scenes do not exactly match the original: the pictured scene differs from the real one both in colour measurements and from an appearance standpoint. Beyond the common usage of photography and filming, digital cameras are, with increasing computing capabilities, increasingly used for analytical image capturing. Images are processed to extract information, which may be based on shapes, (3D) objects (through multiple cameras or frames), colour, and so on. Analysis of the artistic or aesthetic content of an image is of lesser interest here; rather, precision is wanted.

Colour is an important factor in organising daily life. Beyond grey-scale imaging, colour image analysis provides extended channels of information for many tasks. A technical system accounting for colour could, just as humans do, benefit largely from this additional value. People at airports waiting for their suitcases, for example, will discriminate the approaching bags first by colour, and only on a good match expand into feature comparison. Technical systems can likewise often gain significant benefits from fast colour processing.

The following scenario gives an impression of what an image processing system may need to cope with: a scene during the day is observed by a camera in a room with a window (no artificial light) under an overcast sky (quite "neutral" daylight illumination). The weather clears up to a spotless blue sky, and the light changes to a more bluish shade and becomes brighter.
In the evening, sun rays fall directly through the window onto the scene, increasing the light intensity further and "painting the scene" in yellow/orange shades while adding hard shadows. During dusk, as the sun sets, a person turns on fluorescent light in the room, with yet another shade and light intensity; additionally, the spectrum of the fluorescent light is composed differently. Ideally, a scene's colour composition is detected properly (after digital image processing) regardless of the camera images' brightness, dynamic range, and colour shifts caused by the different lights' colour compositions. Our eyes tend to present a "filtered" representation by adapting to the conditions quite well; the raw camera images do not. We perceive a white piece of paper within the scene as white, regardless of whether it is viewed in direct sunlight or candle light.

Currently, most video capturing systems used for colour-based reasoning work fairly well under defined and constant conditions. Depending on the required precision, any change in illumination conditions, as well as a change in hardware (from one camera to another), will quickly render the results less usable. By increasing the quantitative colour perception ro-

This paper was published in the proceedings of the New Zealand Computer Science Research Student Conference 2008. Copyright is held by the author/owner(s). NZCSRSC 2008, April 2008, Christchurch, New Zealand.
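The colour constancy described above (a white paper perceived as white under any illuminant) is what a camera pipeline must approximate computationally. As a minimal illustrative sketch only, and not the approach developed in this paper, a grey-world white balance assumes that the average reflectance of a scene is achromatic and rescales each colour channel so that its mean matches the overall grey level; the function and example scene below are hypothetical:

```python
import numpy as np

def grey_world_balance(image):
    """Grey-world white balance (illustrative sketch).

    Assumes the average scene colour is achromatic ("grey world")
    and scales each RGB channel so its mean matches the overall
    mean, partially discounting the illuminant's colour cast.
    """
    image = image.astype(np.float64)
    channel_means = image.reshape(-1, 3).mean(axis=0)  # mean per channel
    grey = channel_means.mean()                        # target grey level
    gains = grey / channel_means                       # per-channel gains
    return np.clip(image * gains, 0.0, 255.0)

# A hypothetical white patch under yellowish evening light:
# red and green are boosted, blue is reduced.
scene = np.full((4, 4, 3), [230.0, 220.0, 150.0])
balanced = grey_world_balance(scene)
# After balancing, all three channels share the same (grey) value.
```

Such a global correction fails as soon as the grey-world assumption is violated (e.g. a scene dominated by one colour), which is one reason more elaborate chromatic adaptation schemes are needed.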