Augmented Exhibitions Using Natural Features

Quan Wang, Jonathan Mooser, Suya You and Ulrich Neumann
CGIT Lab, University of Southern California
{quanwang, mooser, suyay, uneumann}@usc.edu

ABSTRACT
In this paper, we propose an augmented reality application for museum exhibitions that uses natural features instead of calibrated fiducials to recognize paintings and recover their pose. The proposed system utilizes an adapted Multiple View Kernel Projection (MVKP) method, which combines a multiple view training stage for geometric invariance with feature description based on Walsh-Hadamard kernel projection. We demonstrate that its real-time performance and robustness to lighting and viewpoint changes make it well suited to AR applications such as exhibition systems. After identifying a painting, the system retrieves related information from a remote server and displays it as virtual content overlaid on the painting image. Experimental results on a real-world painting exhibition demonstrate the effectiveness of the proposed approach.

Keywords
Augmented Reality, virtual exhibition, multiple view training, Walsh-Hadamard kernel projection, object recognition.

1. INTRODUCTION
For augmented reality systems, it is essential to establish a link from objects in the physical world to the desired displays in the augmented world. In our case, this entails recognizing an object so that we can retrieve its associated data, and recovering its 3D pose so that virtual objects can be accurately rendered. Most previous systems rely on tagged IDs or markers [1, 2, 3, 4]. While marker-based methods have demonstrated excellent speed and reliability, it is often difficult, if not impossible, to display a marker alongside every exhibit in an entire museum. Moreover, markers generally do not work well in the presence of occlusion. Another line of research uses vision-based methods to determine an object's physical location and 3D pose.
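For a planar exhibit such as a painting, recovering the 3D pose amounts to estimating the homography that maps the stored reference image into the camera frame, after which any virtual annotation can be warped into place. The following is a minimal sketch of that step, assuming four feature correspondences have already been established by the matching stage; the point coordinates here are hypothetical, and the direct linear transform (DLT) used is a generic textbook method, not necessarily the one used in the paper's system.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    via the direct linear transform (needs >= 4 correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to an array of 2D points."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Reference painting corners (pixels in the stored image) and their
# matched positions in a camera frame (hypothetical values).
ref = [(0, 0), (400, 0), (400, 300), (0, 300)]
cam = [(120, 80), (460, 95), (450, 350), (110, 330)]
H = estimate_homography(ref, cam)

# A virtual label anchored at the painting's centre, warped into the frame.
label_pos = project(H, [(200, 150)])
```

With four exact correspondences the DLT solution is exact; a deployed system would instead use many noisy feature matches with a robust estimator such as RANSAC.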
While some traditional single-view recognition techniques are robust and accurate enough for AR requirements, most are too slow for real-time applications. In recent years, multiple view image matching approaches [15, 16] have received growing interest due to their real-time performance. To achieve comparable robustness and accuracy, however, they generally need a large number of training views and thus demand powerful, expensive hardware.

This paper proposes the use of natural features generated by Multiple View Kernel Projection (MVKP) [5]. Using Walsh-Hadamard kernel projection [6], real-time MVKP has demonstrated both effectiveness and robustness for planar objects such as art paintings using only a small number of training views. Additionally, as an image matching method based on local features, it naturally handles complex conditions such as object occlusion and cluttered foregrounds or backgrounds, both typical challenges in an art museum with a large number of visitors.

The MVKP approach first builds a feature database for each painting in a multiple view training stage. Given one input image per painting, MVKP generates a number of synthesized, affine-transformed training views, detects and selects interest points, and describes the local image patches around those interest points with Walsh-Hadamard kernel projection. After the training stage, Faster Filtering Vector Approximation [7] is used to establish feature correspondences between a query image and the painting feature database. Based on the recognition result, complementary information is retrieved from a remote server and displayed accordingly. We also introduce several important adjustments to the original MVKP method so that it works better for the augmented exhibition system.

The remainder of this paper is organized as follows: Section 2 briefly summarizes related work. Section 3 provides an overview of our virtual exhibition system.
Section 4 describes the adapted MVKP method as well as client/server information retrieval. Section 5 presents a real-world painting exhibition experiment, followed by the conclusions.

2. RELATED WORKS
Augmented Reality is a natural platform on which to build an interactive museum guide. Rather than relying solely on printed tags or prerecorded audio content to aid the visitor, an AR system can overlay text and graphics on top of an image of an exhibit and thus provide interactive, immersive annotations in real time. Graffe et al., for example, designed an AR exhibit to demonstrate how a computer works [8]. Their system relies on a movable camera that the user can aim at various parts of a real computer. A nearby screen then displays the camera image annotated with relevant part names and graphical diagrams. Schmalstieg and Wagner presented a similar system using a handheld device [1]. As the user walks from place to place, AR content not only provides information about the current exhibit, but also acts as a navigational tool for the entire museum.
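To make the patch description used by MVKP (Section 1) concrete, the core operation can be sketched as projecting each local patch onto 2D Walsh-Hadamard kernels and keeping only the first few coefficients as a compact descriptor. This is a simplified illustration: the patch here is synthetic, and real Walsh-Hadamard schemes typically order the kernels by sequency and compute the projections incrementally with Gray-code filter kernels for speed, whereas this sketch uses the plain Sylvester ordering and a direct matrix product.

```python
import numpy as np

def hadamard(n):
    """Sylvester-ordered Hadamard matrix of size n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wh_descriptor(patch, k=20):
    """Describe a square patch by its first k 2D Walsh-Hadamard
    coefficients; the 2D transform is separable: H @ patch @ H.T."""
    n = patch.shape[0]
    H = hadamard(n)
    coeffs = H @ patch @ H.T
    return coeffs.flatten()[:k] / n   # truncate and scale-normalise

# A deterministic 8x8 patch standing in for the neighbourhood
# of one detected interest point.
patch = np.arange(64, dtype=float).reshape(8, 8) / 64.0
desc = wh_descriptor(patch)
```

Because the Hadamard basis is orthogonal and its entries are all +/-1, the projection needs no multiplications beyond additions and subtractions, which is what makes kernel-projection descriptors attractive for real-time matching.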