Goldman-Segall, R. (1993). Interpreting video data. Journal for Educational Multimedia and Hypermedia, 2(3), 261–282.

Interpreting Video Data: Introducing a "Significance Measure" to Layer Descriptions

Ricki Goldman-Segall
Faculty of Education
University of British Columbia

This paper proposes a new approach for researchers who analyze video data, recommending that data be layered in as many ways as possible as they are selected, coded, and annotated. Although video has become an important source of data over the past decade, researchers face the problem that interpreting video is fundamentally different from interpreting text. Given its multi-grained nature, how do we put together and then make sense of the chunked clusters of video? Furthermore, how do we share our views about our video data with colleagues who may be using part or all of the same set of data? To address these questions, I will: 1) examine some of the theoretical issues underlying the inherent complexity of working with video data; 2) describe the video ethnography of a particular graduate student who is working with video data; and 3) explain the use of a tool called a "Significance Measure," which allows users to layer or weigh the relative importance of topics. In other words, as researchers rate the significance of their data within a particular video data analysis environment called Learning Constellations 2.0, they add layers or thickness to raw data. The conclusion of this paper is that layered structures will enable us "to see" which data are most significant to the total body of data, and for whom. Using a Significance Measure will support the work of colleagues who share the same video (and text) by building layered collaborative interpretations.

The Problem: Video Data Seem Slippery

Over the past eight years, I have been actively pursuing a theoretical approach for making sense of video data.[1]
Most recently, I have turned my attention to how we researchers are able to rely upon small slices or moments of what we select within individual chunks to be representative of the entire body of collected video or text. In other words, does a wink or a shrug inform us about something meaningful, or is it merely "noise," the term quantitative researchers use for moments to be disregarded? Until we find a way to agree

[1] Goldman-Segall (1989b, 1990) first recommended a theoretical model of building layers of video data to thickly describe the meaning of videotaped events. These works describe the video analysis tool Learning Constellations 1.0 (LC 1), which I originally designed in 1989 with David Greschler and Vivian Orni Mester at the MIT Media Lab to analyze video data of children in a Boston elementary school. Learning Constellations 2.0 (LC 2), a generic tool for video analysis, is under development in MERLin, the Multimedia Ethnographic Research Lab in the Faculty of Education at UBC. The focus of MERLin is to develop tools and methods for an emerging community of video ethnographers.