V.S. Sunderam et al. (Eds.): ICCS 2005, LNCS 3515, pp. 339 – 342, 2005.
© Springer-Verlag Berlin Heidelberg 2005
SACARI: An Immersive Remote Driving Interface for
Autonomous Vehicles
Antoine Tarault, Patrick Bourdot, and Jean-Marc Vézien
LIMSI-CNRS, Bâtiments 508 et 502bis, Université de Paris-Sud,
91403 Orsay, France
{tarault,bourdot,vezien}@limsi.fr
http://www.limsi.fr/venise/
Abstract. Designing a remote driving interface is a complex problem: many steps must be validated before the interface can be robust, efficient, and easy to use. We have designed the different parts of such an interface: the architecture of the remote driving system, the mixed reality rendering, and a simulator to test the interface. The remote driving interface is called SACARI (Supervision of an Autonomous Car by an Augmented Reality Interface) and works mainly with an autonomous car developed by the IEF lab.
1 Introduction
The aim of this project is to develop a Mixed Reality system for the driving assistance of an autonomous car. The main applications of such a system are teleoperation and the management of vehicle fleets. To build it, we use an immersive device at LIMSI-CNRS called MUSE (Multi-User Stereoscopic Environment) (see Fig. 1), and an autonomous car, PiCar [1], developed by the IEF lab.
Two major concepts underlie this project: telerobotics and telepresence. Telerobotics is a form of teleoperation in which a human interacts intermittently with the robot [2]: the operator communicates information (on goals, plans…) and receives information in return (on accomplishments, difficulties, sensor data…). The aim of telepresence is to capture enough information about the robot and its environment, and to communicate it to the human operator in such a way that the operator feels physically present on the site [3].
We took two existing interfaces as a starting point. In [4], Fong et al. define a collaborative control between the vehicle and the user. Queries are sent to the robot, which executes them or not depending on the situation. The robot can also send queries to the user, who may take them into account. This system can be adapted to the level of expertise of the user. The depth parameter of the scene is given by a multisensor fusion of a ladar, a monochrome camera, a stereovision system, an ultrasonic sonar, and an odometer. The authors developed two interesting driving interfaces: “gesture driver”, which lets the user control the vehicle with a series of gestures (unfortunately, this driving method is too tiring for long distances), and “PDAdriver”, which enables the user to drive a robot with a PDA. In [5], McGreevy describes a virtual reality interface for efficient remote driving. His goal was to create an explorer-environment interface instead of a classical computer-user interface: all operations, objects, and contexts must be comparable to those met in a natural environment.
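The collaborative control exchange described above (operator queries that the robot may execute or refuse, and robot queries back to the operator) can be sketched as follows. This is a minimal illustration of the concept only, not the implementation of Fong et al.; all class, method, and message names are hypothetical:

```python
# Sketch of collaborative control: the operator sends queries (e.g. motion
# commands) that the robot may accept or reject depending on its situation;
# when it rejects a command, the robot queries the operator in return.
# All identifiers here are illustrative assumptions, not from the real system.

class CollaborativeRobot:
    def __init__(self):
        self.obstacle_ahead = False   # simplified "situation" state
        self.pending_questions = []   # queries the robot asks the operator

    def handle_query(self, command):
        """Execute the operator's command, or refuse and ask for guidance."""
        if command == "move_forward" and self.obstacle_ahead:
            # The robot rejects the command and sends its own query instead.
            self.pending_questions.append("Obstacle detected: drive around it?")
            return "rejected"
        return "executed"

robot = CollaborativeRobot()
print(robot.handle_query("move_forward"))   # prints "executed"
robot.obstacle_ahead = True
print(robot.handle_query("move_forward"))   # prints "rejected"
print(robot.pending_questions)
```

In this scheme, adapting to the user's level of expertise amounts to tuning how readily the robot overrides or questions the operator's commands.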