Robot Guidance utilizing 3D Sensor Data
P. Gsellmann, M. Melik-Merkumians and G. Schitter
Automation and Control Institute
TU Wien
Vienna, Austria
gsellmann@acin.tuwien.ac.at
I. INTRODUCTION
In recent years, the use of robots in several industrial sectors has increased. In addition to classic manufacturing processes, fields such as building construction have considered the utilization of robot manipulators in order to relieve human employees of physically demanding tasks. Within these diverse and often changing environments, 3D scanners, such as time-of-flight cameras or lidars [1], offer a robust solution for gaining important information on the overall work space and the robot manipulator itself.
Thus, three robotic applications demonstrating the advantages of utilizing 3D scanner data are presented.
II. VISUAL SERVOING OF INDUSTRIAL ROBOTS
The presented visual servoing approach uses depth images for marker-less robot-pose estimation [2]. By matching a predefined model of each robot link to a captured depth image via the Iterative Closest Point (ICP) algorithm, the robot's joint pose can be estimated. A-priori knowledge of the robot configuration, its alignment, and its environment enables joint-pose manipulation by a visually servoed system, with potential for collision detection and avoidance. The modeled links are coupled into a kinematic chain by the Denavit-Hartenberg convention. The required joint orientation of the robot is calculated by the ICP algorithm to perform a pose correction until its point cloud aligns with the associated robot model again. The implemented method yields accurate results for static pose detection of an ABB IRB 120, with an RMS joint-angle deviation of 6° corresponding to a TCP deviation within a spherical volume of radius r = 5 mm.
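The link-wise matching step can be illustrated with a minimal point-to-point ICP sketch in Python (NumPy only, with brute-force nearest-neighbor correspondences and a Kabsch least-squares alignment). This is an illustrative toy under simplifying assumptions, not the implementation from [2], and all function names are hypothetical:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Align point cloud src to dst by alternating matching and alignment."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbor correspondences (O(n^2), toy scale only)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

In a real system, the model point set would be rendered from the link's CAD geometry at the current joint estimate, and a k-d tree would replace the brute-force matching.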
III. PATH PLANNING OF INSULATION MATERIAL DISTRIBUTION BASED ON 3D CAMERA DATA
For the robotic distribution of granular-fill insulation material, a path-planning strategy is required. The initial coarse manual distribution of the material leads to an uneven surface with areas of excessive or insufficient material. To distribute the bulk material uniformly, the worked area is first captured as a point cloud with a 3D camera; afterwards, these irregularities are located via agglomerative hierarchical clustering. Subsequently, their volumes are estimated, providing weights for the path calculation. A path-planning method, inspired by the usual working method of human construction workers, is developed and applied [3]. The proposed method is then examined in a test scenario, where the total path length and processing sequence are analyzed, showing that the presented path-planning algorithm is well suited for the described application, with the best results obtained for a larger blade size and a quadratic distance-to-goal behavior.
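The sequencing idea can be sketched as a greedy ordering over the detected clusters, trading off travel distance, a quadratic distance-to-goal term, and the estimated volumes as weights. The cost function, its weighting, and the parameter alpha below are assumptions made for illustration and do not reproduce the heuristic of [3]:

```python
import math

def plan_sequence(clusters, start, goal, alpha=1.0):
    """Greedy processing order over irregularity clusters.

    clusters: list of (x, y, volume) tuples. The volume acts as a weight
    favoring large irregularities, while a quadratic distance-to-goal
    term pulls the processing sequence toward the goal position.
    """
    remaining = list(clusters)
    pos, order = start, []
    while remaining:
        def cost(c):
            d_travel = math.dist(pos, c[:2])      # travel from current position
            d_goal = math.dist(c[:2], goal)       # quadratic distance-to-goal
            return d_travel + alpha * d_goal ** 2 - c[2]
        nxt = min(remaining, key=cost)
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt[:2]
    return order
```

With alpha = 0 and zero volumes, this degenerates to plain greedy nearest-neighbor sequencing, which makes the effect of the two extra terms easy to study in isolation.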
IV. TOWARDS VISION-BASED ROBOT WORK SPACE SURVEILLANCE
In working spaces where human operators and industrial robots cooperate, safety is of central importance. Regarding this matter, two approaches are commonly used: vision-based surveillance with 3D scanners via motion, color, and texture analysis, and inertial-sensor approaches via motion-capture suits. However, the latter method could be problematic for industrial tasks and may not be feasible for certain areas of application.
Therefore, the proposed concept suggests the use of 3D cameras for work space surveillance. This method builds on the know-how already acquired in Sections II and III.
Although external eye-to-hand configurations offer a better overview of the considered scene, this method focuses on 3D cameras mounted on the robot system itself. To enable the best view of the surroundings of the robot manipulator, the 3D cameras are placed in the robot's base. After filtering the points generated by the robot manipulator itself from the point cloud via its CAD model, external objects, humans, or other robots entering the work space are detected. Subsequently, actions such as stopping the manipulator or avoiding the obstacle can be taken.
V. CONCLUSION
The application of 3D vision in robotic tasks constitutes a versatile solution: from the pose detection of the manipulator and the perception of the present work space to a safe environment in which cooperation between human operators and industrial robots is enabled.
REFERENCES
[1] H. W. Yoo, N. Druml, D. Brunner, C. Schwärzl, T. Thurner, M. Hennecke, and G. Schitter, "MEMS-based lidar for autonomous driving," E&I Elektrotechnik und Informationstechnik, vol. 135, 2018.
[2] T. Varhegyi, M. Melik-Merkumians, M. Steinegger, G. Halmetschlager-Funek, and G. Schitter, "A visual servoing approach for a six degrees-of-freedom industrial robot by RGB-D sensing," Oct. 2017.
[3] P. Gsellmann, M. Melik-Merkumians, M. Hurban, and G. Schitter, "Heuristic path planning approach for a granular-fill insulation distributing robot," 21st IFAC World Congress, 2020.
2021 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM)
978-1-6654-4139-1/21/$31.00 ©2021 IEEE