Multi-touch 3D Navigation for a Building Energy Management System

Martin Naef, Ettore Ferranti
ABB Corporate Research, Switzerland

ABSTRACT

This poster presents a multi-touch navigation interface for a building energy management system with a three-dimensional data model. It extends well-established "rubber-band" 2D interaction gestures to work with a 3D world-in-hand paradigm, using a navigation widget to select the active manipulation axis. A nested, semi-transparent display of the data hierarchy requires careful selection of the manipulation pivot; a hit-testing scheme is introduced to select the most likely object within the hierarchy.

KEYWORDS: Multi-touch, 3D navigation.

INDEX TERMS: H.5.2 [Information Interfaces and Presentation]: User Interfaces – Input Devices and Strategies; I.3.6 [Computer Graphics]: Methodology and Techniques – Interaction Techniques.

1 INTRODUCTION AND RELATED WORK

This publication presents the 3D navigation concept for a system to display and manage energy consumption and production for a building complex. The display system is designed for public installations without mouse or keyboard, supporting technology presentations and demonstrations to a diverse, non-specialist audience. It has to fulfill the design goals of being easy to use, intuitive, and visually appealing, and it runs on display systems of different sizes, ranging from an 11" tablet computer to 42" wall-mounted displays with touch input capability. A single touch point was not considered sufficient to handle the complexity of the navigation task without cluttering the interface with a large number of widgets; hence the choice to implement a multi-touch navigation system.

Figure 1. Application overview showing 2D and 3D data structures.

The central element of the GUI (Figure 1) is an abstract 3D visualization of the building site.
It presents a simplified model including building wings, floors, and offices, down to the individual devices and measurement points. Because the structure is nested, a semi-transparent rendering mode was chosen to visualize all hierarchy levels at the same time. This poses some challenges for hit-testing and for the selection of the navigation pivot.

Multi-touch interaction has become a widely deployed means of 2D navigation thanks to the broad adoption of mobile devices such as phones and tablets with touch-sensitive displays. Affordable desktop computing systems have also gained touch capabilities, with mainstream operating systems (e.g. Windows 7) offering built-in support for interaction gestures similar to those of mobile devices. Instead of coming up with completely new paradigms, this project fuses proven research results into a single, consistent, and visually appealing interface. Although created from scratch, the interface borrows concepts from previous work, such as the bi-manual interaction gestures in [1]. Instead of relying on gestures alone, the concept uses widgets to explicitly select navigation options, similar to the tracking menus approach used in [2]. The work was also inspired by the Navidget [3] in terms of the navigation pivot selection and some visual presentation aspects. The devices used here report only two touch points; more complex gestures such as those in [4] were therefore not applicable.

2 THE NAVIGATION WIDGET

Figure 2. Control widget zones. A) Pivot and pan. B) Up/down pan. C) Scale. D) Pitch and E) heading. F) Sphere rotation.

The common 2D multi-touch interaction paradigm, as found on recent mobile phones and tablets or as provided by Windows 7, relies entirely on two-finger touch gestures. The interpretation of the gesture is based only on the (relative) motion of the fingers. For 3D interaction, these gestures must be enriched with some form of axis selection. The solution chosen is based on a control widget with active zones to select the desired operation.
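The zone-based selection could be sketched as follows. The paper does not specify the widget's geometry, so the radii and angular sectors below are purely illustrative assumptions; only the zone labels (A–F, Figure 2) come from the text. The idea is that the second touch point is classified by its position relative to the widget centre placed at the first touch:

```python
import math

def classify_touch(widget_x, widget_y, touch_x, touch_y):
    """Map the second touch point to a widget zone (Figure 2).

    The first touch anchors the widget; the second touch selects the
    operation by where it lands relative to the widget centre.
    Radii (40/80/120 px) and sector angles are assumed values, not
    taken from the paper.
    """
    dx, dy = touch_x - widget_x, touch_y - widget_y
    r = math.hypot(dx, dy)
    if r <= 40.0:
        return "pivot_pan"          # A: pivot selection and panning
    if r <= 80.0:
        # Middle ring: pick the operation by sector angle (assumed layout).
        angle = math.degrees(math.atan2(dy, dx)) % 360.0
        if 45.0 <= angle < 135.0:
            return "pan_up_down"    # B: vertical pan
        if 135.0 <= angle < 225.0:
            return "scale"          # C: zoom
        if 225.0 <= angle < 315.0:
            return "pitch"          # D: pitch rotation
        return "heading"            # E: heading rotation
    if r <= 120.0:
        return "sphere_rotate"      # F: free sphere rotation on outer ring
    return None                     # outside the widget: ignore
```

Once a zone is selected, the subsequent motion of the second finger would drive only that operation's axis, which is what distinguishes this scheme from the unconstrained two-finger 2D gestures.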
The first touch point activates the navigation widget if it rests on the surface, and the second touch point selects the operation and its magnitude. The delay before showing the widget is introduced to enable tap selection and double-tap zoom-in gestures without visually flashing the widget. This explicit action selection approach arguably lacks the elegance of purely gesture-based solutions for 2D interaction. However, we found it easy for casual users to understand thanks to the explicit visual guidance. The widget also offers extra opportunities to provide valuable feedback, in particular confirmation that the gesture was recognized as expected.

E-mail: [martin.naef ¦ ettore.ferranti]@ch.abb.com
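The delayed widget activation described above can be sketched as a small dispatcher. The concrete timings (`SHOW_DELAY`, `DOUBLE_TAP_WINDOW`) and all names are illustrative assumptions; the paper only states that the widget appears after a delay so that taps and double-tap zoom gestures do not flash it:

```python
SHOW_DELAY = 0.25        # seconds a finger must rest before the widget appears (assumed)
DOUBLE_TAP_WINDOW = 0.3  # max gap between taps to count as a double tap (assumed)

class TouchDispatcher:
    """Distinguishes tap, double tap, and widget activation for one finger."""

    def __init__(self):
        self.down_time = None
        self.last_tap_time = None
        self.widget_visible = False

    def on_touch_down(self, t):
        self.down_time = t

    def on_update(self, t):
        """Called every frame: show the widget once the touch has rested."""
        if self.down_time is not None and not self.widget_visible:
            if t - self.down_time >= SHOW_DELAY:
                self.widget_visible = True   # first finger now anchors the widget

    def on_touch_up(self, t):
        event = None
        if self.widget_visible:
            self.widget_visible = False      # gesture over, hide the widget
        elif t - self.down_time < SHOW_DELAY:
            # Released before the delay: a tap, possibly a double tap.
            if (self.last_tap_time is not None
                    and t - self.last_tap_time <= DOUBLE_TAP_WINDOW):
                event = "double_tap_zoom"
                self.last_tap_time = None
            else:
                event = "tap_select"
                self.last_tap_time = t
        self.down_time = None
        return event
```

With this structure a quick release is reported as a selection or zoom gesture, while a resting finger silently brings up the widget, matching the behaviour described in the text.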