VRST 2014, November 11 – 13, 2014, Edinburgh, Scotland, UK.
Copyright © ACM 978-1-4503-3253-8/14/11 $15.00
http://dx.doi.org/10.1145/2671015.2671119
Illumination Independent Marker Tracking using Cross-Ratio Invariance
Vincent Agnus*, Stéphane Nicolau, Luc Soler
IRCAD
1 place de l'Hôpital
67091 Strasbourg, France
Abstract
Marker tracking is used in numerous applications. Depending on
the context and its constraints, tracking accuracy can be a crucial
component of the application. In this paper, we first highlight
that tracking accuracy depends on the illumination, which is
not controlled in most applications. In particular, we show
how corner detection can shift by several pixels when the light power
or the background context changes, even if the camera and the marker
are static in the scene. We then propose a method, based on
cross-ratio invariance, that re-estimates the extracted corners
so that the cross-ratio of the marker model matches the
one computed from the corners extracted in the image. Finally, we
show on real data that our approach improves the tracking accuracy,
particularly along the camera depth axis, by up to several millimeters
depending on the marker depth.
CR Categories: I.4.7 [Image Processing and Computer Vision]:
Feature Measurement—Invariants; I.4.8 [Image Processing
and Computer Vision]: Scene Analysis—Tracking;
Keywords: Marker tracking, Illumination conditions, Cross-ratio,
Augmented reality
1 Introduction
Marker (or tag) tracking based on the extraction of 4 corners is extremely
common in augmented reality applications. Although for
some of them absolute accuracy is not important (only relative accuracy
has to be acceptable), many applications
need millimeter accuracy for increased realism, eye comfort or for
guidance [Navab 2004; Nicolau et al. 2011].
Several works have aimed at increasing the tracking accuracy
[Yoon et al. 2006; Uematsu and Saito 2007; Fiala 2005; Nicolau
et al. 2005; Eggert et al. 2014], using particle filtering, more points on
the tag, subpixel corner detection or an additional camera.
As far as we know, none of these works mentions the supplementary tracking
error that light conditions (changing or not) can induce. Indeed,
the illumination of the tracked marker influences the marker pose
estimation. More precisely, if the marker is under-exposed or over-exposed,
the corner extraction result differs, by up to a few
pixels (cf. Fig. 1). This shift is mostly due to the non-linear mapping
between scene radiance [Mitsunaga and Nayar 1999] and measured
brightness, and to its rasterisation (spatial and intensity) on the CCD
sensor of the camera. Moreover, if the illumination conditions are extreme,
or if the aperture size and/or exposure duration are badly set, the signal
can be clipped, leading to a supplementary shift. Corner
localization using standard detection is therefore biased by the light
conditions. Note that circular markers are also affected by illumination
changes: the detected circles shrink or expand anisotropically
and their centers of gravity shift.

*e-mail: vincent.agnus@ircad.fr

Figure 1: First two rows: evolution of the corner extraction
when the light power is increased. The top left image shows several markers
in a scene. The top right and bottom zoomed images correspond
to the green rectangle of the top left image, under different light
conditions. Each zoomed image displays the corner extraction (intersection
of the red lines), performed with the subpixel corner detector
of OpenCV (cf. Section 4). Blue dots superimpose
the positions of the same corner extracted under the other illumination
conditions. When the marker is under-exposed (resp. over-exposed), its
shape expands (resp. shrinks) in all directions (the corner shift
exceeds 1 pixel). Bottom row: illustration of the effect of over-exposure
on the localization of a black & white edge: the non-linear mapping between
scene radiance, measured brightness and camera rasterisation causes the
corner to shift.
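The argument above can be reproduced numerically: applying a non-linear response to a blurred edge moves the level crossing that an edge detector would localize. A minimal 1-D sketch, assuming a gamma-type response curve as an illustrative stand-in for a real radiometric response:

```python
import numpy as np

# 1-D profile of a blurred black-to-white edge; the true edge is at x = 0.
x = np.linspace(-3, 3, 601)
scene = 1.0 / (1.0 + np.exp(-4.0 * x))  # idealized scene radiance profile

# Hypothetical non-linear radiometric response (gamma-like curve): the
# measured brightness is no longer proportional to scene radiance.
measured = scene ** 0.5

def edge_position(profile, level=0.5):
    """Locate the 50%-level crossing of a monotonic profile (linear interp.)."""
    i = int(np.searchsorted(profile, level))
    t = (level - profile[i - 1]) / (profile[i] - profile[i - 1])
    return x[i - 1] + t * (x[i] - x[i - 1])

print(edge_position(scene))     # ~0.00  : true edge location
print(edge_position(measured))  # ~-0.27 : detected edge has shifted
```

Here the brightening response pushes the detected edge toward the dark side, so the white region appears larger; a darkening response shifts it the other way, matching the expansion/shrinking behaviour described in the caption of Fig. 1.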
Several authors have designed detectors that are robust to illumination
changes [Triggs 2004; Gevrekci and Gunturk 2009], but
this robustness concerns the detection, not the localization. Indeed,
to prove the efficiency of their detectors, they compute the detection
rate of the same corner under different illumination configurations,
and consider two detected corners to be the same if they are close
(typically less than 1 pixel apart), which means they did not focus their
work on localization accuracy.
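As background for the cross-ratio invariance on which the proposed correction is based, a minimal numerical check that the cross-ratio of four collinear points survives a projective map (the map below is an arbitrary illustrative choice, not taken from any calibration):

```python
def cross_ratio(a, b, c, d):
    # Cross-ratio (A, B; C, D) of four collinear points, parameterized by
    # their signed positions along the line: (AC * BD) / (BC * AD).
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# Four collinear points in the scene (positions along a marker edge, say).
scene_pts = [0.0, 1.0, 2.0, 4.0]

# A 1-D projective (homography) map x -> (2x + 1) / (x + 3), standing in
# for the effect of a perspective camera on the points of a line.
project = lambda t: (2.0 * t + 1.0) / (t + 3.0)
image_pts = [project(t) for t in scene_pts]

print(cross_ratio(*scene_pts))  # 1.5
print(cross_ratio(*image_pts))  # 1.5 as well: the cross-ratio is preserved
```

Since the camera applies a projective transform to each line of the scene, the cross-ratio measured along a marker edge in the image should equal that of the marker model; a discrepancy between the two reveals biased corner localization.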
Since the extractions of the 4 corners are consistently impaired, either to-