On-body multi-input indoor localization for dynamic emergency scenarios: fusion of magnetic tracking and optical character recognition with mixed-reality display

Jason Orlosky, Takumi Toyama, Daniel Sonntag
German Research Center for Artificial Intelligence
Kaiserslautern, Germany
takumi.toyama@dfki.de, sonntag@dfki.de, orlosky@lab.ime.cmc.osaka-u.ac.jp

Andras Sarkany, Andras Lorincz
Eötvös Loránd University
Dept. of Software Technology and Methodology
Budapest, Hungary
andras.sarkany@ik.elte.hu, lorincz@inf.elte.hu

Abstract—Indoor navigation in emergency scenarios poses a challenge to evacuation and emergency support, especially for injured or physically encumbered individuals. Navigation systems must be lightweight, easy to use, and provide robust localization and accurate navigation instructions in adverse conditions. To address this challenge, we combine magnetic location tracking with an optical character recognition (OCR) and eye-gaze-based method that recognizes door plates and position-related text to provide more robust localization. In contrast to typical wireless or sensor-based tracking, our fused system can be used in low lighting, smoke, and areas without power or wireless connectivity. Eye gaze tracking is also used to improve the time to localization and the accuracy of the OCR algorithm. Once localized, navigation instructions are transmitted directly into the user's immediate field of view via a head-mounted display (HMD). Additionally, setting up the system is simple and requires minimal calibration: only a walk-through of the environment and numerical annotation of a 2D area map. We evaluate the magnetic and OCR systems individually to assess their feasibility for use in the fused framework.

Keywords—indoor localization; navigation; tracking; emergency; information presentation; head mounted display

I.
INTRODUCTION

In an emergency, evacuees and rescue teams face a number of challenges when navigating a building or indoor environment. Unfamiliar building layouts, smoke, the absence of lighting, disorientation, or a combination of these factors can prevent an individual from completing navigation tasks in a timely manner, resulting in the need for additional rescue operations or increased risk to the individual.

With the recent proliferation of smartphones and other sensing systems, researchers have begun to build new ad hoc solutions for localization and navigation in indoor environments. Outdoor localization is typically achieved with a combination of sensors such as GPS and compass, but these have limited functionality or usefulness in enclosed areas, making other means necessary indoors. As such, other methods such as sonar and network localization have been implemented with some success [2][9][16][17][22]. However, many of these methods depend on consistent network access or detailed 3D models of the intended environment for navigation.

With the limitations of these methods in mind, we set out to develop a lightweight system that can achieve indoor localization despite loss of power or impaired vision due to smoke or dim lighting. After considering numerous possibilities, we chose a combination of optical character recognition (OCR) and magnetic tracking to implement our localization and navigation algorithms. Simply speaking, we use OCR to recognize position-relevant text in the user's environment when visual data is available, such as room numbers or door plate information, and determine a relative position on a 2D floor map of the building. This allows us to determine location without a complex model of the environment and despite sudden changes to the scene. Magnetic tracking via tablet is used when lighting conditions such as darkness or smoke do not allow for computer-vision-based localization.
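To make the OCR-based step concrete, the following is a minimal sketch, not the paper's implementation: once OCR returns a door-plate string, localization reduces to a lookup in the numerically annotated 2D floor map. The room numbers and coordinates below are hypothetical placeholders.

```python
# Illustrative sketch of OCR-based localization against an annotated 2D map.
# FLOOR_MAP is a hypothetical annotation: room number -> (x, y) in map units.
from typing import Optional, Tuple

FLOOR_MAP = {
    "101": (2.0, 4.5),
    "102": (2.0, 9.0),
    "103": (7.5, 4.5),
}

def localize_from_ocr(recognized_text: str) -> Optional[Tuple[float, float]]:
    """Return the map position for a recognized door plate, if annotated."""
    token = recognized_text.strip()  # tolerate whitespace from the OCR output
    return FLOOR_MAP.get(token)      # None when the plate is not on the map

print(localize_from_ocr(" 102 "))  # -> (2.0, 9.0)
```

Because the lookup needs only a 2D map annotated with room numbers, no 3D model of the environment is required, matching the setup procedure described above.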
Additionally, OCR localization and magnetic tracking can be used interchangeably to compensate for changing environmental conditions, or simultaneously by choosing whichever system reports higher confidence at a given moment. We also conduct and present the results of two pilot experiments to determine the accuracy of the magnetic and OCR systems. The magnetic system is tested on a variety of data, including localization estimates for an individual in a wheelchair. The OCR system is then tested in different lighting conditions, including nighttime, daytime, and simulated smoke. From these data, we estimate how the fused system would improve tracking in an emergency.

Lastly, navigation information is presented to the user through an HMD, which allows for hands-free operation. An image through the HMD viewing screen showing a user's position localized from a door plate is shown in Figure 1. An injured person, a firefighter who must use rescue tools, or a physically handicapped individual can navigate with our system without the use of his or her hands, which is not true for most localization methods that rely on a hand-held device. This intelligent fusion of methods and hardware gives us a number of advantages over other systems in terms of usability, robustness, and simplicity of implementation.

Figure 1. View through the HMD screen showing localization (A) on a 2D floor map. The OCR algorithm recognizes door numbers to determine position.
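The confidence-based selection between the two subsystems described above can be sketched as follows. This is a simplified illustration rather than the authors' code; the `Estimate` structure and the 0-to-1 confidence scale are assumptions for the example.

```python
# Hedged sketch of confidence-based fusion: when both subsystems produce an
# estimate, the one with the higher confidence score is used; when only one
# is available (e.g. OCR fails in smoke), the system falls back to the other.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Estimate:
    position: Tuple[float, float]  # (x, y) on the 2D floor map
    confidence: float              # assumed subsystem-specific score in [0, 1]

def fuse(ocr: Optional[Estimate],
         magnetic: Optional[Estimate]) -> Optional[Estimate]:
    """Pick the higher-confidence estimate; fall back to whichever exists."""
    candidates = [e for e in (ocr, magnetic) if e is not None]
    if not candidates:
        return None  # neither subsystem could localize
    return max(candidates, key=lambda e: e.confidence)
```

Under this scheme, a drop in OCR confidence in darkness or smoke automatically shifts the localization output to the magnetic tracker, and vice versa, without any explicit mode switch by the user.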