Projection-based Localization and Navigation Method for Multiple Mobile Robots with Pixel-level Visible Light Communication

Takefumi Hiraki, Shogo Fukushima, and Takeshi Naemura

Abstract—We propose a novel method for the localization and navigation of multiple mobile robots. Our method uses coded light superimposed onto a visual image and projected onto the robots. The robots localize themselves by receiving and decoding the projected light, and can follow a target using the coded velocity vector field. Localization and navigation information can be conveyed independently in each pixel, and this information can be changed over time. The entire system requires only a projector to navigate the robot swarm; thus, it can be used on any projection surface. To be navigated, the robots only need to be placed within the projection area. We experimentally assess the localization accuracy of our system for both stationary and moving robots. To further illustrate the utility of the proposed system, we demonstrate the navigation of multiple mobile robots in vector fields that vary both spatially and temporally.

I. INTRODUCTION

Multi-robot applications that exploit the physical properties of mobile robots have attracted increasing attention in various areas. In human-computer interaction, robots are used as tangible interfaces: they work cooperatively with computer-generated visual images by changing their state (e.g., position and rotation), either under human control or autonomously. Because robots are tangible and physically manipulable, they are more intuitive than a conventional graphical user interface [1], [2]. In robotic pattern formation, methods for creating artistic visual expressions using multiple robots have been proposed [3], [4], [5]. These methods can create various dynamic images by using a large number of small robots with colored lights as mobile pixels, and they can be applied in many areas, such as entertainment.
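As a rough illustration of the idea outlined in the abstract — each projected pixel carrying machine-readable coordinates and a velocity vector alongside the visible image — one could imagine packing the per-pixel data into a small bit pattern. The following sketch is purely hypothetical: the field widths, the 32-bit layout, and the function names are our illustrative assumptions, not the actual PVLC data format.

```python
# Hypothetical per-pixel data word (an assumed layout, NOT the paper's format):
# [ x: 10 bits | y: 10 bits | vx: 6 bits | vy: 6 bits ] = 32 bits

def encode_pixel(x: int, y: int, vx: int, vy: int) -> int:
    """Pack pixel coordinates (0..1023) and a signed velocity
    vector component range (-32..31) into one 32-bit word."""
    assert 0 <= x < 1024 and 0 <= y < 1024
    assert -32 <= vx < 32 and -32 <= vy < 32
    return (x << 22) | (y << 12) | ((vx & 0x3F) << 6) | (vy & 0x3F)

def decode_pixel(word: int):
    """Inverse of encode_pixel: recover (x, y, vx, vy) on the robot side."""
    x = (word >> 22) & 0x3FF
    y = (word >> 12) & 0x3FF
    vx = (word >> 6) & 0x3F
    vy = word & 0x3F
    # Sign-extend the 6-bit velocity components.
    if vx >= 32:
        vx -= 64
    if vy >= 32:
        vy -= 64
    return x, y, vx, vy
```

In such a scheme a robot never needs a camera or a radio link: sampling the flicker of the single pixel beneath its photodetector and decoding the word yields both its own position and the local navigation command.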
These methods need to determine the position and state of the robots accurately. They also need to navigate the robots easily and instantly on various physical surfaces to express visual information. Two challenges remain in the localization and navigation of mobile robots. First, many existing methods employ external measurement systems based on computer vision for localization. They recognize markers with infrared light-emitting diodes (LEDs) [1], [3], [4], [6], characteristic patterns [5], or retro-reflective materials [7]. However, it is necessary to fix the position of the cameras, calibrate them, and compute the spatial location of the robots from the camera images. Localization methods that do not use computer vision may rely on lasers [8], [9], sonar [10], [11], or visible light communication [12]; however, these approaches have limited accuracy owing to the resolution of each sensor. Second, because conventional methods often require independent control signals via wireless or wired communication [6], the system load increases in proportion to the number of robots; thus, robot navigation has a scalability problem. Other approaches, such as simple directional navigation using multiple light sources [5], can navigate robots but cannot guide them to an exact position. Thus, achieving a responsive navigation system for a large number of mobile robots without camera calibration or a heavy communication load is not a trivial task.

1 T. Hiraki, S. Fukushima, and T. Naemura are with the Department of Information and Communication Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan hiraki@nae-lab.org

Fig. 1. Each pixel of the projected image contains various information, such as the velocity vector field. The localization and navigation of the robots is performed by projection alone, so that, in principle, no positional deviation between the images and the robots occurs.
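To make the vector-field navigation mentioned above concrete, the sketch below simulates a robot that repeatedly samples the velocity vector encoded at its current position and integrates it. This is a minimal, assumed model (the example field, step size, and function names are ours, not the paper's); its point is that each robot can be guided using only locally decoded data, with no per-robot communication channel.

```python
# Minimal sketch of navigation by a coded velocity vector field
# (simplified, illustrative model; not the paper's implementation).

def field(x: float, y: float):
    """Example time-invariant field: unit vectors pointing toward (50, 50)."""
    dx, dy = 50.0 - x, 50.0 - y
    norm = max((dx * dx + dy * dy) ** 0.5, 1e-9)
    return dx / norm, dy / norm

def step(pos, dt=1.0):
    """Sample the ("decoded") vector at the robot's pixel and integrate it."""
    vx, vy = field(*pos)
    return pos[0] + vx * dt, pos[1] + vy * dt

pos = (0.0, 0.0)
for _ in range(100):
    pos = step(pos)
# After enough steps the simulated robot ends up near the target (50, 50).
```

Because every pixel can encode a different vector, and the projected pattern can change over time, fields that vary both spatially and temporally can steer many robots simultaneously at no extra communication cost per robot.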
In this paper, we propose a method that allows multiple mobile robots to be localized and navigated by projecting light with embedded information. The principle of pixel-level visible light communication (PVLC) [13] is utilized to embed the information. PVLC is a data communication method that uses human-imperceptible high-speed flicker from a digital light processing (DLP) projector. Using PVLC, we can project two types of information at the same location: visible images for humans and invisible data patterns for mobile robots. The hidden data patterns can contain information such as coordinates, a velocity vector field for navigation, and other types of data. Thus, the system requires neither measurement devices such as cameras nor a high communication load, because both the localization and the navigation of the robots are implemented through projection. Further, spatial deviation between the images and the robots does not occur in principle. Fig. 1 shows the concept of the proposed method. The technical contribution of this paper is two-fold: First, we propose the structure of the projection pattern for embedding data. This structure contains coordinates, control instructions, and a switching mode. Second, we suggest a light-receiving circuit that operates at high speed and consumes