Human-Robot Collision Avoidance Scheme for Industrial
Settings Based on Injury Classification
Mustafa Mohammed
University of Illinois at Chicago
Chicago, IL, United States
mmoham70@uic.edu
Heejin Jeong
University of Illinois at Chicago
Chicago, IL, United States
heejinj@uic.edu
Jae Yeol Lee
Chonnam National University
Gwangju, Republic of Korea
jaeyeol@chonnam.ac.kr
ABSTRACT
The objective of this paper is to develop a real-time, depth-
sensing surveillance method to be used in factories that require
human operators to complete tasks alongside collaborative
robots. Traditionally, collision detection and analysis have been
achieved with extra sensors that are attached to the robot to
detect torque or current. In this study, a novel method using 3D
object detection and raw 3D point cloud data is proposed to
ensure safety by deriving the change in distance between
humans and robots from depth maps. By avoiding the potential
delay associated with extra sensor-based data, both the likelihood
and severity of collaborative robot-induced injuries are expected
to decrease.
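The pipeline is not specified at code level here, but the core quantity monitored, the minimum human-robot separation derived from depth data, can be sketched as follows. This is a minimal illustration assuming the point cloud has already been segmented into human and robot sets; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def min_separation(human_pts, robot_pts):
    """Minimum Euclidean distance between two 3D point clouds.

    human_pts, robot_pts: (N, 3) and (M, 3) arrays of XYZ points,
    e.g. back-projected from a depth map.
    """
    # Pairwise differences via broadcasting: (N, 1, 3) - (1, M, 3) -> (N, M, 3)
    diff = human_pts[:, None, :] - robot_pts[None, :, :]
    # Euclidean norm per pair, then the global minimum
    return float(np.sqrt((diff ** 2).sum(axis=-1)).min())

def separation_change(prev_sep, curr_sep):
    """Frame-to-frame change in separation; a negative value means
    the human and the robot are closing in on each other."""
    return curr_sep - prev_sep
```

In a deployment, the two clouds would be refreshed from the depth sensor each frame, and a shrinking separation would trigger a slowdown or stop.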
CCS CONCEPTS
• Human-centered computing → Human-computer interaction
(HCI) → Interaction paradigms → Collaborative interaction;
KEYWORDS
Collaborative robot; point cloud; computer vision; collision
detection; safety; injury prevention
ACM Reference format:
Mustafa Mohammed, Heejin Jeong and Jae Yeol Lee. 2021. Human-Robot
Collision Avoidance Scheme for Industrial Settings Based on Injury
Classification. In Companion of 2021 ACM/IEEE International Conference
on Human-Robot Interaction (HRI’21 Companion), March 8-11, 2021,
Boulder, CO, USA. ACM, New York, NY, USA, 3 pages.
https://doi.org/10.1145/3434074.3447232
1 Introduction
The introduction of collaborative robots into human workspaces
has prompted a complete reevaluation of the way service tasks are
handled. Robots that operate with precision and speed free human
ingenuity from its physical limitations. Combining the two in
industrial settings has allowed for not only a surge in efficiency
but also a drop in the occurrence of adverse effects from physical
labor. However, to take full advantage of this initiative, it is
critical that no added harm be inflicted upon factory workers
during manufacturing and assembly. To date, an abundance of
injury prevention methods relying on marker-based sensors has
been tested [1]. Because these sensors are installed in the robot,
the approach can introduce delay into the rate at which the robot
is stopped, which can pose an immediate threat to workers [1]. A
non-marker-based approach is therefore imperative.
There have been numerous proposed algorithms that utilize
active image-based technologies (both 2D and 3D) to solve the
issue of collaborative robot safety. Saveriano and Lee [2]
suggested a real-time path planning algorithm for reactive
avoidance of multiple moving obstacles with computer vision.
From their simulations, they were able to achieve a significantly
shorter and collision-free travel path. In regard to environment
mapping, Rusu et al. [3] proposed a voxel-based sensing method
that created obstacle maps with the application of point clouds.
Himmelsbach et al. [4] presented a safer approach to speed and
separation monitoring with 3D Time-of-Flight (TOF) cameras
that were directly mounted on a robot arm. Moreover, an
efficient method to determine the distance between obstacles in
the path of a robot was developed by Flacco et al. [5]. With the
use of multiple depth cameras, they generated repulsive vectors
that guided the robot controller through a task. Lešo et al. [6]
focused on developing 2D safety zones from projected optical
barriers that, if crossed, would lead to the stoppage of the robot.
Though not as common, Cherubini et al. [7] opted for a
multimodal approach, in which both traditional and depth
cameras were simultaneously utilized to trigger safety stops in a
manufacturing cell. Most importantly, experiments done by
Stetco et al. [8] clearly demonstrate the advantage of using 3D
techniques over all others.
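The repulsive-vector idea attributed to Flacco et al. [5] can be illustrated with a short sketch: the controller receives a velocity whose magnitude grows as the closest obstacle point approaches and whose direction pushes the robot's control point away. The shaping function and parameters below are illustrative assumptions, not the exact formulation in [5]:

```python
import numpy as np

def repulsive_vector(control_pt, obstacle_pt, v_max=1.0, alpha=6.0):
    """Repulsive velocity for a robot control point.

    The magnitude decays with distance to the nearest obstacle point
    (sigmoid-like shaping, capped at v_max); the direction points
    away from the obstacle. v_max and alpha are illustrative gains.
    """
    d = obstacle_pt - control_pt              # vector toward the obstacle
    dist = np.linalg.norm(d)
    if dist == 0.0:                           # degenerate case: points coincide
        return np.zeros(3)
    magnitude = v_max / (1.0 + np.exp(alpha * dist))
    return -magnitude * d / dist              # unit direction away, scaled
```

Summing such vectors over the closest obstacle points yields a motion command that steers the arm away from a human while it continues its task.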
Furthermore, many have examined methods for quantifying
injury risk, especially with collaborative robots. Matthias et al.
[9] worked on understanding low-level injury risk assessment by
matching the area of human-robot contact with injury type.
Robla-Gómez et al. [10] reviewed a framework based on impact
energy density; in the event of a collision, an altered trajectory
for the robot is generated. Marvel and Norcross [11] tested an
algorithm that takes the angle of impact and contextualizes it
with the separation distance between the human operator and
the robot. In addition to this, Haddadin et al. [12] explored injury
© 2021 Association for Computing Machinery.
ACM ISBN 978-1-4503-8290-8/21/03…$15.00