CCNews
Newsletter of the California Council on Teacher Education
Volume 24, Number 4, Winter Issue, December 2013, Section 4
Jo Birdsell & Judy Mantle, Co-Editors (National University)
Prepared for CCTE by Caddo Gap Press, 3145 Geary Boulevard, PMB 275, San Francisco, CA 94118

Remedying Educators’ Data Analysis Errors with Over-the-Counter Data

By Jenny Rankin

Training programs for teachers have generally not addressed data skills and data-informed decision-making (USDEOPEPD, 2011). Many teachers and administrators do not know fundamental analysis concepts, and 70% have never taken a college or postgraduate course in educational measurement (Zwick et al., 2008). Few teacher preparation programs cover topics like state data literacy (Halpin & Cauthen, 2011). In fact, most people responsible for analyzing data have received no training to do so (DQC, 2009; Few, 2008).

The Food and Drug Administration (FDA) requires over-the-counter medication to be accompanied by textual guidance proven to improve its use, deeming it negligent to do otherwise (DeWalt, 2010). With such guidance, patients may take over-the-counter medication with the goal of improving wellbeing while a doctor is not present to explain how to use the medication. Missing or poor medication labels have resulted in many errors and tragedies, as people are left with no way to know how to use the contents wisely (Brown-Brumfield & DeLeon, 2010). Labeling conventions can translate to improved understanding of non-medication products as well (Hampton, 2007; Qin et al., 2011). Thus, in the way over-the-counter medicine’s proper use is communicated with a thorough label and added documentation, a data system used to analyze student performance can include components to help users better comprehend the data it contains.
Yet data systems display data for educators without sufficient support to use their contents wisely (Coburn, Honig, & Stein, 2009; Data Quality Campaign [DQC], 2009, 2011; Goodman & Hambleton, 2004; National Forum on Education Statistics [NFES], 2011). Labeling and tools within data systems to assist analyses are uncommon, even though most educators analyze data alone (U.S. Department of Education Office of Planning, Evaluation and Policy Development [USDEOPEPD], 2009). Essentially, data systems and reports do not commonly present data in an “over-the-counter” format for educators, whose primary option for using data to treat students is thus akin to ingesting medicine from an unmarked or marginally marked container.

Unfortunately, the resultant data analyses are flawed. Educators often do not use data correctly, and there is clear evidence that many users of data system reports have trouble understanding the data (Hattie, 2010; Wayman, Snodgrass Rangel, Jimerson, & Cho, 2010; Zwick et al., 2008). For example, in two national studies of districts known for strong data use, teachers achieved only 48% accuracy when making data inferences involving basic statistical concepts (USDEOPEPD, 2009, 2011).

Methodology

The purpose of this experimental, quantitative study was to facilitate causal inferences concerning the degree to which including different forms of data usage guidance within a data system reporting environment can improve educators’ understanding of the data contents, much as including different forms of usage guidance with over-the-counter medication is needed to improve use of its contents. The study’s primary independent variables included brief, cautionary verbiage in (a) report footers, (b) report-specific abstracts, and (c) report-specific interpretation guides. These three data analysis supports, which can be generated within a data system, were each framed in two different formats.
The dependent variable was accuracy of data analysis-based responses, measured by a survey with data analysis questions. A total of 211 elementary and secondary educators in California answered these questions while viewing one of seven report sets of student data (see Figures 1-7). The study was pilot-tested first, subscribed to all Institutional Review Board (IRB) and ethical guidelines, and reflected precautions to avoid or overcome threats to external and internal validity.

Sample

An a priori power analysis for a two-tailed t-test (effect size d = 0.5, α error probability = 0.05, power = 0.95) rendered a recommended sample size of at least 210 participants. An a priori power analysis for an F-test linear multiple regression analysis (effect size f² = 0.15, α error probability = 0.05, power = 0.95, predictors based on independent variables = 7) rendered a recommended sample size of at least 153 participants. The study employed a random,

—continued on next page—

Jenny Rankin is a former teacher and administrator who received her Ph.D. in education at Northcentral University. This article offers information she initially presented at the poster session at the CCTE Fall 2013 Conference.
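The a priori sample sizes reported in the Sample section can be reproduced with the standard noncentral-distribution approach used by common power-analysis software such as G*Power. The Python sketch below (an illustration written for this summary, not part of the original study) searches for the smallest sample size that reaches the target power under the stated assumptions: two equal groups for the t-test, and the F test of R² = 0 in a fixed-effects multiple regression with noncentrality λ = f²·N.

```python
import math
from scipy.stats import f as f_dist, ncf, nct, t as t_dist

def ttest_total_n(d, alpha, power):
    """Smallest total N (two equal groups) for a two-tailed
    independent-samples t test, via the noncentral t distribution."""
    n = 2  # per-group size
    while True:
        df = 2 * n - 2
        ncp = d * math.sqrt(n / 2)           # noncentrality for equal groups
        crit = t_dist.ppf(1 - alpha / 2, df)
        if 1 - nct.cdf(crit, df, ncp) >= power:
            return 2 * n                     # total sample size
        n += 1

def regression_n(f2, alpha, power, predictors):
    """Smallest N for the F test of R^2 = 0 in fixed-model multiple
    regression, via the noncentral F distribution (lambda = f^2 * N)."""
    n = predictors + 2
    while True:
        df1, df2 = predictors, n - predictors - 1
        crit = f_dist.ppf(1 - alpha, df1, df2)
        if 1 - ncf.cdf(crit, df1, df2, f2 * n) >= power:
            return n
        n += 1

print(ttest_total_n(0.5, 0.05, 0.95))     # → 210, matching the reported minimum
print(regression_n(0.15, 0.05, 0.95, 7))  # N for f^2 = 0.15, 7 predictors
```

With d = 0.5 this yields 105 participants per group (210 total), in line with the article’s figure; the regression search uses the same model G*Power applies to “linear multiple regression, R² deviation from zero” designs.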