Managing Visual Clutter: A Generalized Technique for Label Segregation
using Stereoscopic Disparity
Stephen Peterson∗
Department of Science and Technology
Linköping University

Magnus Axholt†
Department of Science and Technology
Linköping University

Stephen R. Ellis‡
Human Systems Integration Division
NASA Ames Research Center
ABSTRACT
We present a new technique for managing visual clutter caused by
overlapping labels in complex information displays. This technique,
"label layering", utilizes stereoscopic disparity as a means to
segregate labels in depth for increased legibility and clarity. By
distributing overlapping labels in depth, we have found that selection
time during a visual search task in situations with high levels of
overlap is reduced by four seconds, or 24%. Our data show that the
depth order of the labels must be correlated with the distance order
of their corresponding objects: a random distribution of stereoscopic
disparity, in contrast, impairs performance, so the benefit is not
solely due to disparity-based image segregation. An algorithm using
our label layering technique could accordingly be an alternative to
traditional label placement algorithms, which avoid label overlap at
the cost of distracting motion, symbology dimming or label size
reduction.
Keywords: Label placement, user interfaces, stereoscopic displays,
augmented reality, air traffic control.
Index Terms: H.5.2 [Information Systems]: User Interfaces; I.3
[Computing Methodologies]: Computer Graphics
1 INTRODUCTION
As information systems convey more and more data in confined spaces
such as computer screens, care must be taken in the user interface to
manage the resulting visual clutter. In cluttered displays,
information may be obscured, fragmented or ambiguous, negatively
affecting system usability.
Labels, textual annotations containing object data, are one important
source of visual clutter, as they overlay background layers containing
their associated objects. Since legible labels need to occupy a
certain minimum screen space, they may occlude or obscure other
information, including other labels.
Because labels are generally associated with objects or features in
the background, their placement is linked to the spatial projection of
their corresponding objects on the display plane. In certain cases,
such as some information visualization applications, the underlying
data can be spatially or temporally rearranged to simplify labeling
and data interpretation. However, in applications like see-through
Augmented Reality (AR), the background normally consists of real
objects directly observed by the system user; accordingly, not all
underlying display elements can be adjusted freely to simplify the
labeling task.
∗e-mail: stepe@itn.liu.se
†e-mail: magax@itn.liu.se
‡e-mail: sellis@mail.arc.nasa.gov

The application domain explored below is an AR display for Air
Traffic Control (ATC) towers, in which tower and apron controllers
operate to maintain safe aircraft separation at the airport. In our
environment a Head-Up Display (HUD) system could use AR techniques to
process position data and overlay controlled aircraft with labels,
"data tags", presenting vital flight information such as callsigns.
This type of display could minimize controllers' head-down time and
the attention shifts required to scan traditional radar displays.
Despite the elevation of the control tower cab, typically about 50
meters above ground level, the lines of sight to controlled aircraft
towards the local horizon are greatly compressed due to their
relatively large distance from the tower, which could surpass 3 km.
Therefore, the associated overlaid aircraft labels will frequently be
subject to visual clutter in a HUD, as they would likely overlap other
aircraft and labels, especially at busy airports with distant taxiways
and runways.
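To make this compression concrete, the depression angle at which a
surface aircraft is seen from the cab follows from simple
trigonometry. The sketch below is only a hypothetical illustration
using the tower height and distances mentioned above, not a
computation from this work:

```python
import math

def depression_deg(height_m, ground_dist_m):
    """Angle below the horizontal at which a ground object is seen
    from a tower cab `height_m` meters above ground level."""
    return math.degrees(math.atan2(height_m, ground_dist_m))

# 50 m cab height; aircraft between 0.5 and 3 km away.
for d in (500, 1000, 2000, 3000):
    print(f"{d} m -> {depression_deg(50, d):.2f} deg below horizon")
```

Aircraft beyond roughly 1 km all fall within about three degrees of
the local horizon, so their labels compete for the same narrow
vertical band of the HUD.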
Traditional label placement algorithms evaluate available 2D screen
space to find optimal label locations without overlap, e.g. in
cartography [7, 24], scientific illustration [10] and ATC radar
interfaces [6, 9, 16]. This approach to label placement is not limited
to a 2D presentation medium: it has also been applied in AR and
virtual environment interfaces [2, 3, 21, 25]. While these techniques
generally avoid visual overlap, they introduce another interface
design issue: which label belongs to which object? Even if a label is
connected to its background object with a line, confusion may arise as
labels move according to the motion of their corresponding objects.
Such confusion occurs especially if label lines intersect or are
forced to overlap due to imperfect performance of label placement
algorithms. Moreover, motion from automatic rearrangement of label
positions can disturb or distract the user [1].
Other approaches aim at reducing visual clutter without spatial
rearrangement, e.g. information filtering [15] or symbology dimming
[12, 13] of data unimportant to the current task. However, automated
importance classification and subsequent display suppression can
entail a safety risk. Furthermore, declutter algorithms generally do
not totally avoid the confusing overlap; they merely reduce it.
We propose an alternative approach to reduce the visual clutter
associated with label overlap: label layering. This approach does not
rearrange labels in 2D screen space, nor does it filter or dim any
information. Instead it extends the design space by utilizing the
depth dimension, available in e.g. stereoscopic AR displays. More
specifically, our technique entails placing labels in a certain number
of predetermined depth layers located between the observer and the
observed objects, with droplines connecting each label to its
corresponding object in depth. While the general technique of reducing
visual clutter using stereoscopic disparity is not novel in itself, as
discussed later on, this is to our knowledge the first application and
rigorous evaluation of the technique for the specific problem of label
placement. In this work the label layering technique is instantiated
in a HUD for control towers; however, it could potentially be applied
to any user interface equipped with a stereoscopic display device.
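The layering step described above can be sketched as follows. The
layer depths, interocular distance, and naming here are our own
hypothetical choices for illustration, not the authors'
implementation; the parallax formula is the standard off-axis stereo
geometry with zero parallax at the screen plane:

```python
# Sketch of label layering: labels are binned into a few predetermined
# depth layers, with layer order matching the distance order of the
# labeled objects (nearest object -> nearest layer).

LAYER_DEPTHS_M = [5.0, 7.0, 9.0, 11.0]   # hypothetical layer distances
EYE_SEPARATION_M = 0.065                  # typical interocular distance

def assign_layers(objects):
    """Map object id -> layer index, preserving distance order."""
    ordered = sorted(objects, key=lambda o: o["distance"])
    n = len(LAYER_DEPTHS_M)
    return {o["id"]: min(i * n // len(ordered), n - 1)
            for i, o in enumerate(ordered)}

def screen_parallax(layer_depth, screen_dist=1.0):
    """Horizontal screen parallax (meters) for a label rendered at
    layer_depth on a stereoscopic display at screen_dist: zero at the
    screen plane, approaching EYE_SEPARATION_M at infinity."""
    return EYE_SEPARATION_M * (layer_depth - screen_dist) / layer_depth

# Hypothetical aircraft with ground distances from the tower.
aircraft = [{"id": "SAS123", "distance": 800.0},
            {"id": "BAW789", "distance": 1200.0},
            {"id": "DLH456", "distance": 2500.0},
            {"id": "AFR321", "distance": 3000.0}]

layers = assign_layers(aircraft)
print(layers)  # nearest aircraft is assigned the nearest label layer
for aircraft_id, k in layers.items():
    print(aircraft_id, round(screen_parallax(LAYER_DEPTHS_M[k]), 4))
```

Keeping the layer order tied to object distance order reflects the
finding reported above: segregating labels in depth helps only when
label depth is correlated with object distance.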
The human visual system interprets depth through a series of depth
cues, which combine to give the observer both relative and absolute
object depth information [5]. With one of these cues, retinal (or
binocular) disparity, the difference (disparity) between the images
projected onto the two retinas is interpreted to trigger depth
perception. The sensation
IEEE Virtual Reality 2008
8-12 March, Reno, Nevada, USA
978-1-4244-1971-5/08/$25.00 ©2008 IEEE