Automatic and User-Centric Approaches to Video Summary Evaluation

Cuneyt M. Taskiran and Frank Bentley
Motorola Labs, Applications Research Center
1295 Algonquin Road, Schaumburg, IL 60196
{cuneyt.taskiran, f.bentley}@motorola.com

ABSTRACT

Automatic video summarization has become an active research topic in content-based video processing. However, little emphasis has been placed on developing rigorous summary evaluation methods or on designing summarization systems based on a clear understanding of user needs, obtained through user-centered design. In this paper we address these two topics and propose an automatic video summary evaluation algorithm adapted from the text summarization domain.

Keywords: automatic video summary evaluation, user-centered design

1. INTRODUCTION

Thanks to the proliferation of personal video recording devices, such as hand-held cameras, webcams, and video-enabled cell phones, users today can easily generate video content. This has led to the creation of large Internet video repositories that are growing at a rapid rate; for example, 65,000 video clips were being added daily to the popular Internet video site YouTube in August 2006. At the same time, developments in the delivery of television programming, such as digital video recorders, bundled video and data services, and the advent of Internet Protocol television (IPTV), have begun to radically alter traditional television viewing patterns. Deriving compact representations of video sequences that are intuitive for average users and let them efficiently browse large collections of video data has therefore become more important than ever. Due to these factors, automatic video summarization has become an active research topic in content-based video processing, and various summarization algorithms and summary visualization schemes have been proposed.
However, the majority of work on video summarization suffers from two important shortcomings: first, there is no commonly used video data collection for training and testing proposed summarization algorithms; second, little attention is generally given to developing rigorous summary evaluation methodologies. Without these two components, not only is it hard to compare the performance of different algorithms, but it also becomes difficult to make statements about the quality of the produced summaries with respect to human judgment. Recently, more researchers have started to address these problems in video summary evaluation.

We believe there are two important issues that need to be addressed in the evaluation of automatically generated summaries:

Automatic summary evaluation. Without well-defined methods to automatically evaluate summary quality, improving summarization algorithms becomes a cumbersome and bias-prone process, since the output must be evaluated by human judges each time.

User-centric design. Generated video summaries are meant to be consumed by users. Therefore, summarization systems designed without a clear understanding of what users need and how they interact with summaries have a low probability of adoption in practical applications.

These two issues attack the problem of summary evaluation from opposite ends of the spectrum. The development of practical video summarization systems depends strongly both on the rapid development and comparison of algorithms based on automatic evaluation methods and on a clear understanding of what users want from such a system.