Assessing the Suitability and Effectiveness of Mixed Reality
Interfaces for Accurate Robot Teleoperation
Francesco De Pace
Politecnico di Torino
Torino, Italy
francesco.depace@polito.it
Gal Gorjup
The University of Auckland
Auckland, New Zealand
ggor290@aucklanduni.ac.nz
Huidong Bai
The University of Auckland
Auckland, New Zealand
huidong.bai@auckland.ac.nz
Andrea Sanna
Politecnico di Torino
Torino, Italy
andrea.sanna@polito.it
Minas Liarokapis
The University of Auckland
Auckland, New Zealand
minas.liarokapis@auckland.ac.nz
Mark Billinghurst
The University of Auckland
Auckland, New Zealand
mark.billinghurst@auckland.ac.nz
Figure 1: Comparison of mixed and virtual reality interfaces: (a) the "pure" virtual interface (VR_S), (b) the "pure" point cloud
interface (MR_S), (c) the point cloud and the virtual robot interface (MRR_S), and (d) the real robot in the laboratory space.
ABSTRACT
In this work, a Mixed Reality (MR) system is evaluated to assess
whether it can be efficiently used in teleoperation tasks that require
accurate control of the robot end-effector. The robot and its
local environment are captured using multiple RGB-D cameras,
and a remote user controls the robot arm motion through Virtual
Reality (VR) controllers. The captured data is streamed through
the network and reconstructed in 3D, allowing the remote user to
monitor the state of execution in real time through a VR headset.
We compared our method with two other interfaces: i) teleoperation
in pure VR, with the robot model rendered with the real joint states,
and ii) teleoperation in MR, with the rendered model of the robot
superimposed on the actual point cloud data. Preliminary results
indicate that the virtual robot visualization outperforms the pure
point cloud representation for accurate teleoperation of a robot arm.
CCS CONCEPTS
• Human-centered computing → Mixed / augmented reality.
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
VRST ’20, November 1–4, 2020, Virtual Event, Canada
© 2020 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-7619-8/20/11.
https://doi.org/10.1145/3385956.3422092
KEYWORDS
Mixed Reality, Virtual Reality, Robot Teleoperation
ACM Reference Format:
Francesco De Pace, Gal Gorjup, Huidong Bai, Andrea Sanna, Minas Liarokapis,
and Mark Billinghurst. 2020. Assessing the Suitability and Effectiveness of
Mixed Reality Interfaces for Accurate Robot Teleoperation. In 26th ACM
Symposium on Virtual Reality Software and Technology (VRST ’20), Novem-
ber 1–4, 2020, Virtual Event, Canada. ACM, New York, NY, USA, 3 pages.
https://doi.org/10.1145/3385956.3422092
1 INTRODUCTION
There has been increasing research interest in developing methods
that allow operators to use Virtual Reality (VR) and Mixed Reality
(MR) technologies to remotely control [5, 7] and/or collaborate
[6, 9] with robotic platforms. For example, Sun et al. [8] developed
two types of control modes to tune the position, orientation, and
force of an industrial manipulator in MR. Similarly, Whitney et
al. [10, 11] described a remote teleoperation system for controlling a
robotic arm in MR in a pick-and-place task. Their results showed that
direct manipulation outperforms MR teleoperation in terms of
completion time and workload. To the best of our knowledge, no
studies have been conducted to thoroughly analyze MR interfaces'
effectiveness and accuracy in more complex path-following tasks.
In this work, we evaluate our MR robot teleoperation system for
tasks that require highly accurate control of the end-effector position
and velocity, such as remote surgery [6, 9] or welding [5, 7].
This is facilitated by the RGB-D sensors that allow for real-time 3D
reconstruction of the physical surroundings.
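The core control step of such a system is mapping the VR controller's motion onto the robot end-effector. The sketch below illustrates one common approach (not necessarily the authors' implementation): a clutch-based scheme in which the controller's displacement from a reference pose, taken when the operator engages the clutch, is scaled and applied to the end-effector's reference position. The function name and the scaling scheme are illustrative assumptions.

```python
# Hypothetical sketch of clutch-based VR teleoperation pose mapping.
# This is an illustrative assumption, not the system described in the paper.

def map_controller_to_effector(controller_pos, clutch_ref, effector_ref, scale=0.5):
    """Return a new end-effector target position (x, y, z), in metres.

    controller_pos: current VR controller position
    clutch_ref:     controller position recorded when the clutch was engaged
    effector_ref:   end-effector position recorded when the clutch was engaged
    scale:          motion-scaling factor; values < 1 trade speed for accuracy
    """
    return tuple(
        e + scale * (c - r)
        for c, r, e in zip(controller_pos, clutch_ref, effector_ref)
    )

# Moving the controller 10 cm along x with scale 0.5 shifts the
# end-effector target 5 cm along x from its reference position.
target = map_controller_to_effector(
    controller_pos=(0.10, 0.0, 0.0),
    clutch_ref=(0.0, 0.0, 0.0),
    effector_ref=(0.4, 0.0, 0.3),
)
```

Motion scaling of this kind is one reason accuracy-critical tasks such as path following are sensitive to the visualization: a scaled-down mapping only helps if the operator can clearly perceive the end-effector's true position, which is what the compared interfaces (VR_S, MR_S, MRR_S) vary.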