Event-based Content Management by Spontaneous Metadata Generation and Diffusion

Khandaker Tabin Hasan**, Md. Saddam Hossain Mukta* and Mir Tafseer Nayeem*
** American International University of Bangladesh (AIUB), Dhaka, Bangladesh
* Islamic University of Technology (IUT), Dhaka, Bangladesh
tabin@aiub.edu, mukta944@gmail.com, mtnayeem@yahoo.com

Abstract
Our memories are best preserved with the evidence of events. In today's world we have both physical and digital artifacts as evidence, where the digital artifacts are machine-processable contents. Generating and obtaining event-related content in many forms (e.g., photos, videos, audio, texts) has become pervasive and affordable, but generating metadata and maintaining interoperability among contents has not yet become a seamless user activity. This paper proposes a single-platform interaction model that coherently helps users annotate their contents event-wise with maximum effectiveness and minimum effort: the event serves as a container, all related contents are placed inside it, and they interact among themselves for metadata diffusion and enrichment. Our simple interface and interaction technique showed its potential for spontaneous metadata generation with every user input as the user keeps playing with the system.

Index Terms: Event-based content organization, metadata annotation and diffusion, user interaction.

I. INTRODUCTION
Since photos are among the most profuse event-related contents and are intrinsically complex to annotate, much of the related work is described here using photo annotation as the example. Work by A. Chakravarthy et al. implicitly focused on event identification by cross-media document annotations [7]. The core problem lies in involving individuals in the metadata generation activity, and our solution addresses it with the key notion of "event".
Since most contents are generated in or around an event, this approach considers the "event" a logical artifact that helps organize the contents within it and allows spontaneous metadata generation with little user effort. Later, this enriched metadata serves as the key for content retrieval. Stepping slightly away from its philosophical sense, we take the notion of events as they take place in human life: important occurrences to be remembered, which hold evidence in the form of digital artifacts. This perspective is strongly associated with our memories, which essentially carry notions of periods/intervals (a significant span of time), location and other entities (e.g., photos, videos, people and objects) and their properties. Events, therefore, are those happenings considered (before, during or afterwards) to be significant incidents in human life. The recollection of an event persists both in our memory (volatile, personalized, illusive and evasive) and in other artifacts (persistent; we term them evidence). These artifacts come in both physical and digital form. The memory of an event survives with the survival of its evidence. In our research, we consider only the digital evidence, collectively put together to describe the event. In this sense, the entity "event" acts as a container: contents are abundantly produced before, during and after the event and put into the container. The contents are generally texts (emails, documents and notes), people, photos, videos, audio and more. While the simplest of them could be a structured document describing the event, other content, such as a poem or a painting, may equally serve as evidence. It is already understood that involving individual users in metadata annotation for any given content type is challenging, as it is a very labor-intensive task [8].
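The container model above, and the metadata diffusion it enables, can be sketched as a minimal Python example. The `Event` and `Content` classes, the key-value tag model, and the `setdefault`-based merge policy are our illustrative assumptions, not the system's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Content:
    """A digital artifact (photo, email, note, ...) with its own tags."""
    kind: str                                  # e.g. "photo", "email"
    tags: dict = field(default_factory=dict)

@dataclass
class Event:
    """An event acts as a container: adding content diffuses metadata."""
    name: str
    metadata: dict = field(default_factory=dict)
    contents: list = field(default_factory=list)

    def add(self, content: Content) -> None:
        self.contents.append(content)
        # Diffusion upward: tags on the new content enrich the event,
        # without overwriting metadata the event already holds.
        for key, value in content.tags.items():
            self.metadata.setdefault(key, value)
        # Diffusion downward: event-level metadata back-fills the content.
        for key, value in self.metadata.items():
            content.tags.setdefault(key, value)

# A photo carrying a location tag annotates the whole event, and a
# later note inherits that location without any extra user effort.
trip = Event("Summer trip")
photo = Content("photo", {"location": "Dhaka"})
note = Content("note", {"author": "Tabin"})
trip.add(photo)
trip.add(note)
print(trip.metadata)   # {'location': 'Dhaka', 'author': 'Tabin'}
print(note.tags)       # {'author': 'Tabin', 'location': 'Dhaka'}
```

The non-overwriting merge (`setdefault`) is one possible policy for the risks of automatic diffusion discussed later: an existing annotation is never silently replaced by a conflicting one from newly added content.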
Even with supporting technologies, intuitive UIs and entertaining interaction keep users generating metadata for a while, but this too fails unless short-term and long-term benefits are envisaged [9]. With this goal in mind, several popular applications have been developed for annotating specific content types, with considerable success (see the next section). Our goal, therefore, is to design and develop a method and tool that allows annotating contents related to an event and helps propagate, share and integrate the annotations when new contents are added to the event. For instance, persons identified in mails or extracted from message bodies [5] could be tagged in photos, which in turn makes both content types coherently connected for the event. Similarly, a photo with a location tag added to an event consequently annotates the event's location without any extra effort. There are, however, some risks in the automatic diffusion of annotations, which we discuss in the problem and approach section. Section 2 describes the related works that inspired our drive toward an integrated solution for event-related content annotation, organization and retrieval.

II. RELATED WORKS
In this section, we mention a few of the many interaction techniques for generating metadata for different content types. In [1], Crandall et al. proposed a method for automatically finding the location where a photo was taken, based on analysis of the visual features, temporal information and textual tags of images. The novelty of this system lies in propagating annotations to similar contents.