Depth Energy Image for Gait Symmetry Quantification
Caroline Rougier, Edouard Auvinet, Jean Meunier, Max Mignotte and Jacques A. de Guise
Abstract— This paper introduces a new quantification
method for gait symmetry based on depth information acquired
from a structured light system. First, the new concept of the
Depth Energy Image is introduced to better visualize gait
asymmetry. Then a simple index is computed from this map
to quantify motion symmetry. Results are presented for six
subjects with and without gait problems. Since the method is
markerless and inexpensive, it could be a very promising solution
for future gait clinics.
I. INTRODUCTION
Gait analysis systems are important aids for diagnosing
abnormal gait patterns. For simplicity, gait symmetry has
often been used to characterize gait problems [5]. Indeed,
the lower limbs are expected to move symmetrically in
a normal walker. Some researchers dispute this assumption,
since gait can be influenced, for example, by
limb dominance [6]. However, a quantification tool for gait
symmetry could be useful for clinicians to evaluate walking
dysfunctions, for example for stroke and amputee patients,
or to analyze the recovery after a knee surgery.
One commonly used method for gait analysis is motion
capture (MOCAP) [8], [10], which consists of tracking
infrared (IR) reflective markers with multiple IR cameras.
Such systems have been used to analyze gait symmetry [8],
[10], as well as acceleration signals [9], with walkway
systems [3] or laterally placed cameras [5]. In this paper,
a new gait analysis system is proposed, based on a treadmill
with an inexpensive depth sensor placed behind it. Compared
with MOCAP systems, our system requires no markers and
has a low cost, which makes it well suited for clinical
use.
For our experiments, six young male adults were asked
to walk on a treadmill (Life Fitness F3). After a 5-min
habituation period, their normal walking speeds were
determined and used for subsequent testing. Three tests were
performed:
• Normal walk which served as a reference.
• Right leg problem which was simulated with a heel
cup (height of 2.5cm) placed inside the right shoe.
This work was supported by the Fonds Québécois de la Recherche sur
la Nature et les Technologies (FQRNT).
C. Rougier, E. Auvinet, J. Meunier and M. Mignotte
are with the Département d'Informatique et de Recherche
Opérationnelle (DIRO), Université de Montréal, Montréal, Canada
rougierc,auvinet,meunier,mignotte@iro.umontreal.ca
J.A. de Guise is with the Laboratoire de Recherche en
Imagerie et Orthopédie, Centre de recherche du Centre Hospitalier
de l'Université de Montréal (CRCHUM), Montréal, Canada
jacques.deguise@etsmtl.ca
• Left leg problem which was simulated with a heel cup
(height of 2.5cm) placed inside the left shoe.
The heel cup is used to generate a limping walk, producing
an unbalanced gait with asymmetric characteristics. For
each test, after another habituation period on
the treadmill (2-3 min), a three-minute video was recorded
with the depth camera (see Section II) placed at the back of
the treadmill (back view of the person). Ethical approval
for this project was obtained from the research ethics board
(REB) of our university.
II. DEPTH SENSORS
Depth maps, which show the different depths of a scene,
can be obtained in several ways:
• Stereo vision [13] The 3D view of a scene can be
reconstructed with a calibrated binocular system. However,
obtaining precise depth maps requires careful calibration
and a textured scene.
Moreover, stereo reconstruction algorithms are often
computationally expensive.
• Time-of-Flight (TOF) camera [14] Accurate depth
images can be obtained with a TOF camera, but this
technology is very expensive and currently limited to
low image resolution (e.g. image size of 176x144 pixels
in [7], [14]).
• Structured light With a known artificial texture pro-
jected onto the scene, a depth map can be obtained from a
monocular system. The Kinect sensor [11] is based on
this method with an infrared structured light (IR dots)
projected in the scene and observed with an infrared
camera. Such systems can acquire larger images than
a TOF camera at a lower price (e.g., an image size of
640x480 pixels at 30 fps for the Kinect sensor, which is
currently fifty times cheaper than a TOF camera).
For clinical gait analysis, a low-cost and easy-to-install
system is more suitable, which encouraged us to choose the
Kinect sensor [11] to acquire depth images. The resulting
images are disparity maps where far objects are represented
with higher Kinect disparity values (within the depth range
used in our study). The disparity values can be converted
into depth values after a calibration step, which consists of
moving a plane along a rail at known depths and acquiring
the corresponding disparity values. The resulting set of
disparity-depth pairs is then used to compute the relation
between disparity and depth:
Depth = 1 / (−0.0032936 · Disparity + 3.5463)    (1)
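As an illustrative sketch (not the authors' implementation), the least-squares fit behind a relation such as Eq. (1), and the resulting disparity-to-depth conversion, could be coded as follows. The calibration pairs below are hypothetical values generated from the fitted relation itself; a real calibration would use depths measured along the rail.

```python
def fit_disparity_depth(disparities, depths):
    """Least-squares fit of the linear model 1/depth = a*disparity + b."""
    n = len(disparities)
    mean_d = sum(disparities) / n
    mean_inv = sum(1.0 / z for z in depths) / n
    num = sum((d - mean_d) * (1.0 / z - mean_inv)
              for d, z in zip(disparities, depths))
    den = sum((d - mean_d) ** 2 for d in disparities)
    a = num / den
    b = mean_inv - a * mean_d
    return a, b

def disparity_to_depth(disparity, a=-0.0032936, b=3.5463):
    """Convert a Kinect disparity value to depth using Eq. (1)."""
    return 1.0 / (a * disparity + b)

# Hypothetical disparity-depth pairs, generated from Eq. (1) itself
# (in practice these would come from the plane-on-rail measurements).
disparities = [600.0, 700.0, 800.0]
depths = [disparity_to_depth(d) for d in disparities]

# Fitting these pairs recovers the coefficients of Eq. (1).
a, b = fit_disparity_depth(disparities, depths)
```

Note that with the negative slope a, a larger disparity value yields a larger depth, consistent with far objects having higher Kinect disparity values in the depth range used here.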
Depth images have previously been used for gait analysis
with a TOF camera [7]. However,
33rd Annual International Conference of the IEEE EMBS
Boston, Massachusetts USA, August 30 - September 3, 2011