Research Article
Optical Enhancement of Exoskeleton-Based Estimation of
Glenohumeral Angles
Camilo Cortés,1,2 Luis Unzueta,1 Ana de los Reyes-Guzmán,3 Oscar E. Ruiz,2 and Julián Flórez1

1eHealth and Biomedical Applications, Vicomtech-IK4, Mikeletegi Pasealekua 57, 20009 San Sebastián, Spain
2Laboratorio de CAD CAM CAE, Universidad EAFIT, Carrera 49 No. 7 Sur-50, 050022 Medellín, Colombia
3Biomechanics and Technical Aids Department, National Hospital for Spinal Cord Injury, SESCAM, Finca la Peraleda s/n, 45071 Toledo, Spain
Correspondence should be addressed to Julián Flórez; jflorez@vicomtech.org
Received 15 January 2016; Revised 1 April 2016; Accepted 26 April 2016
Academic Editor: Qining Wang
Copyright © 2016 Camilo Cortés et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, method accuracy is adequate for RAR.
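Once the orientation of the humerus relative to the thorax has been recovered from the combined marker-pose and Forward Kinematics data, the three GH angles can be read off as an Euler decomposition of that rotation. The sketch below illustrates the idea using the Y-X'-Y'' sequence (plane of elevation, elevation, axial rotation); note that the specific sequence and all function names here are illustrative assumptions, not taken from the paper:

```python
import math

def mat_mul(A, B):
    """Multiply two square row-major matrices (nested lists)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rot_x(a):
    """Rotation about the x-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    """Rotation about the y-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def gh_angles_from_rotation(R):
    """Decompose a 3x3 humerus-to-thorax rotation matrix into Y-X'-Y''
    Euler angles: (plane of elevation, elevation, axial rotation)."""
    beta = math.acos(max(-1.0, min(1.0, R[1][1])))  # elevation
    gamma1 = math.atan2(R[0][1], R[2][1])           # plane of elevation
    gamma2 = math.atan2(R[1][0], -R[1][2])          # axial rotation
    return gamma1, beta, gamma2
```

Composing `rot_y(g1)`, `rot_x(b)`, `rot_y(g2)` and decomposing the product recovers the original three angles, which is a convenient self-check when implementing the decomposition.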
1. Introduction
The application of robotics and Virtual Reality (VR) to motor neurorehabilitation (Figure 1) has been beneficial for patients, as they receive intensive, repetitive, task-specific, and interactive treatment [1–4].
The assessment of (a) patient movement compliance with the prescribed exercises and (b) patient long-term improvement is critical when planning and evaluating the efficacy of RAR therapies. In order to obtain the patient motion data to conduct the said assessments, one has to estimate patient posture (i.e., the joint angles of the limbs). Patient posture estimation methods need to be practical and easy to set up for the physician, so that the said assessments can indeed be an integral part of the therapy.
Current methods for estimating patient posture are either cumbersome or not accurate enough in exoskeleton-based therapies. In order to overcome such limitations, we propose a method where low-cost RGB-D cameras (which render color and depth images) are directly installed in the exoskeleton and colored planar markers are attached to the patient’s limb to estimate the angles of the GH joint, thereby overcoming the individual limitations of each of these systems.
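The combination just described, exoskeleton Forward Kinematics composed with the marker pose reported by an exoskeleton-mounted camera, amounts to a chain of homogeneous transforms: base-to-camera from the robot joint encoders, then camera-to-marker from the optical detection. A minimal sketch, assuming 4x4 row-major matrices and a purely illustrative link parameterization (none of these names come from the paper):

```python
import math

def mat_mul(A, B):
    """Multiply two 4x4 homogeneous transforms (row-major nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def link_transform(theta, t):
    """Illustrative link: rotation about z by theta, then translation t."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, t[0]],
            [s,  c, 0, t[1]],
            [0,  0, 1, t[2]],
            [0,  0, 0, 1]]

def marker_pose_in_base(link_transforms, T_cam_marker):
    """Chain the exoskeleton FK (base -> camera mount) with the marker
    pose the camera reports, giving the marker pose in the base frame."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for L in link_transforms:
        T = mat_mul(T, L)          # accumulate FK along the chain
    return mat_mul(T, T_cam_marker)  # append the camera's measurement
```

Because the camera rides on the exoskeleton, the FK chain is always known from the joint encoders, so the marker pose lands in the base frame without any external, occlusion-prone tracking volume.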
2. Literature Review
Optical, electromagnetic, and inertial MOCAPs have been
used in many rehabilitation scenarios for accurate posture
estimation [5]. However, the use of the said MOCAPs in
exoskeleton-based rehabilitation is limited by the factors
discussed below:
(1) Optical marker-based systems (e.g., Optotrak,
CODA, Vicon) are considered the most accurate for
Hindawi Publishing Corporation, Applied Bionics and Biomechanics, Volume 2016, Article ID 5058171, 20 pages. http://dx.doi.org/10.1155/2016/5058171