Robot Trajectories When Approaching a User with a Visual
Impairment
Jirachaya "Fern" Limprayoon
jlimpray@andrew.cmu.edu
Carnegie Mellon University
Pittsburgh, Pennsylvania, USA
Xiang Zhi Tan
zhi.tan@ri.cmu.edu
Carnegie Mellon University
Pittsburgh, Pennsylvania, USA
Prithu Pareek
ppareek@andrew.cmu.edu
Carnegie Mellon University
Pittsburgh, Pennsylvania, USA
Aaron Steinfeld
steinfeld@cmu.edu
Carnegie Mellon University
Pittsburgh, Pennsylvania, USA
ABSTRACT
Mobile robots have been shown to be helpful in guiding users in
complex indoor spaces. While these robots can assist all types of
users, current implementations often rely on users visually ren-
dezvousing with the robot, which may be a challenge for people
with visual impairments. This paper describes a proof of concept for
a robotic system that addresses this kind of short-range rendezvous
for users with visual impairments. We propose to use a lattice graph-
based Anytime Repairing A* (ARA*) planner as a global planner
to discourage the robot from turning in place at its goal position,
making its path more human-like and safer. We also interviewed an
Orientation & Mobility (O&M) Specialist for their thoughts on our
planner. They observed that our planner produces trajectories that are less obtrusive to the user than those of the ROS default global planner and recommended that our system allow the robot to approach the person from the side rather than from the front, as it currently does.
In the future, we plan to test our system with users in person to
better validate our assumptions and find additional pain points.
CCS CONCEPTS
• Human-centered computing → Accessibility technologies.
KEYWORDS
rendezvous, robot navigation, people with visual impairments, ap-
proach trajectory
ACM Reference Format:
Jirachaya "Fern" Limprayoon, Prithu Pareek, Xiang Zhi Tan, and Aaron
Steinfeld. 2021. Robot Trajectories When Approaching a User with a Visual
Impairment. In The 23rd International ACM SIGACCESS Conference on Com-
puters and Accessibility (ASSETS ’21), October 18–22, 2021, Virtual Event, USA.
ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3441852.3476538
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
ASSETS ’21, October 18–22, 2021, Virtual Event, USA
© 2021 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-8306-6/21/10.
https://doi.org/10.1145/3441852.3476538
1 INTRODUCTION
Navigating alone in unfamiliar, complex, indoor spaces like air-
ports, shopping malls, and university buildings can be challenging
for people with visual impairments. Since a sighted guide might
not always be available in public indoor spaces when requested,
many researchers have investigated the use of mobile service robots
for providing navigational assistance [1, 3, 6, 7]. One of the chal-
lenges is how the user will rendezvous with these robots. Many
approaches rely on the users spontaneously adjusting their trajecto-
ries based on their visual knowledge of the robot’s trajectory when
rendezvousing with it [2, 9], but this method might be unsuitable
for users who are blind or have low vision. In order to help a user
with a visual impairment rendezvous with the robot in a seamless
way without relying on the user’s visual knowledge of the robot’s
orientation and position, the robot should perform most of the
localization to pinpoint exactly where the user is in the space and
drive up close to them. Furthermore, the robot needs to approach
the user in a way that allows the user to easily rendezvous with it
without being startled or facing additional difficulty.
Recent participatory design research suggests that one effective
way to initiate an interaction with a mobile navigation robot is to let
a user with a visual impairment summon it through a smartphone
application when they need escort assistance upon arrival at
a new indoor space [1]. Then, the robot should respect the user’s
autonomy and independence by allowing them to have ultimate
control of the overall interaction [1].
Our paper investigates the interaction when a mobile robot ap-
proaches a user with a visual impairment. While prior work has
explored how robots can detect target users [10], one of the most
important design questions that has yet to be answered is how the
robot should travel to the person when they are within a few meters
of the robot.
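As an illustrative sketch of one sub-step of this question (not the paper's actual implementation), once the person's desired handoff pose has been estimated in the map frame, the robot's navigation goal can be placed a short standoff in front of that pose, oriented to face the person. The names `Pose2D` and `goal_from_marker` and the 0.5 m standoff distance are assumptions chosen for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float    # meters, in the map frame
    y: float    # meters, in the map frame
    yaw: float  # radians, heading in the map frame

def goal_from_marker(marker: Pose2D, standoff: float = 0.5) -> Pose2D:
    """Place the robot's goal `standoff` meters in front of the marker,
    along the marker's outward-facing normal, oriented back toward it."""
    gx = marker.x + standoff * math.cos(marker.yaw)
    gy = marker.y + standoff * math.sin(marker.yaw)
    # Face the marker: heading is the direction from the goal to the marker.
    gyaw = math.atan2(marker.y - gy, marker.x - gx)
    return Pose2D(gx, gy, gyaw)

# Example: marker at the origin facing +x; the robot stops 0.5 m away, facing it.
goal = goal_from_marker(Pose2D(0.0, 0.0, 0.0))
```

A global planner (such as the lattice-based ARA* planner mentioned in the abstract) would then plan a path to this goal pose rather than to the person's own position, avoiding a final in-place turn at the handoff point.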
2 APPROACH
We propose a system that allows the user to summon the robot via
their mobile phone when they arrive at a public indoor space. Our
approach is to ask the user to position their smartphone, with its
screen displaying a fiducial marker facing outward, at their desired
robot handle position when the robot is within a few meters of them. The
marker is then used to detect the position and orientation of the