WEB-BASED EXPLORATION OF MULTI-VIEW CONTENT

Wolfgang Weiss, Werner Bailer, Christian Schober, Georg Thallinger
DIGITAL – Institute for ICT, JOANNEUM RESEARCH, Graz, Austria
{firstname.lastname}@joanneum.at

Abstract

We present a light-weight Web client for exploring multi-view content sets in post-production.

1 Introduction

With the increasing amount of multimedia data to be handled in production and post-production, there is a growing demand for more efficient ways of supporting the exploration of and navigation through multimedia data. Multi-view content, as used in 3D productions, exacerbates the problem, since even more (redundant) content needs to be handled. Furthermore, newly shot material is typically only sparsely annotated, so we can rely only on automatically extracted features. At the same time, media production workflows are becoming increasingly flexible and distributed, involving many contributors located at different sites. This requires content management tools capable of dealing with remote and distributed content collections.

A survey [1] of advanced content management tools for post-production analyses different approaches to browsing and interactive search and investigates the user interfaces of commercial digital asset management systems. Most commercial systems rely on key frames displayed in result lists, as well as on low-resolution proxies for preview in a simple video player. Browsing is mostly restricted to using directory structures or links between different versions of content. Only a few systems provide storyboard or light table views of entire content sets in their user interfaces, and support for multi-view content is largely missing.

2 Architecture and Workflow

We developed a Web-based video browsing tool using the back-end services of an existing desktop application (see also [2]). We adapted the back end to run as a server and access it via SOAP web services. Videos are streamed to the client via the Real Time Messaging Protocol (RTMP).
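To illustrate how a client talks to such a SOAP back end, the following minimal sketch builds a SOAP 1.1 request envelope with Python's standard library. The service namespace, operation name (findSimilarSegments) and parameter names are hypothetical placeholders, not the tool's actual API, which is defined by the back end's WSDL.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical service namespace; the real back end's WSDL defines its own.
SVC_NS = "http://example.org/browsing-backend"

def build_envelope(operation, params):
    """Build a SOAP 1.1 request envelope for the given operation."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}{operation}")
    for name, value in params.items():
        child = ET.SubElement(op, f"{{{SVC_NS}}}{name}")
        child.text = str(value)
    return ET.tostring(env, encoding="unicode")

# Example: request segments similar to a given one by a chosen feature
# (segment identifier and feature name are illustrative).
xml = build_envelope("findSimilarSegments",
                     {"segmentId": "clip42/shot7", "feature": "colorLayout"})
print(xml)
```

In practice a SOAP toolkit would generate such requests from the WSDL and POST them to the server endpoint; the sketch only shows the envelope structure exchanged between client and back end.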
When content is ingested into the tool, automatic content analysis is performed. Currently, camera motion estimation, visual activity estimation, extraction of global color features and estimation of object trajectories are implemented.

The central component of the browsing tool's user interface is a light table that shows the current content set and cluster structure, using a number of representative frames for each cluster; the clusters are visualized by colored areas around the images. To visualize the temporal context as well as corresponding content from other views, a pop-up can be shown on demand, containing separate time lines of temporally adjacent key frames. The history automatically records all clustering and selection actions performed by the user. The user can jump back by selecting an entry in the history and then continue exploring from that point using alternative cluster features. A result list is available for memorizing video segments and for extracting segments of videos for further video editing, e.g. as an edit decision list (EDL). Furthermore, the application allows executing a similarity search based on the following features: camera motion, motion activity, color layout, multi-view media item and multi-view camera.¹

Figure 1: Screenshot of the Web-based video browsing tool.

Acknowledgements

The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under contract FP7-215475, "2020 3D Media – Spatial Sound and Vision" (http://www.20203dmedia.eu/).

References

[1] W. Bailer, F. Hopfgartner, and K. Schöffmann. Tools for Content Management in TV Post-Production. In Y. Kompatsiaris, B. Mérialdo, and S. Lian (eds.): TV Content Analysis (to appear), 2011.

[2] W. Bailer, W. Weiss, G. Kienast, G. Thallinger, and W. Haas. A video browsing tool for content management in postproduction. International Journal of Digital Multimedia Broadcasting, 2010.
¹ Visit http://20203dmedia.joanneum.at/ for an online demonstration.