Rubric Marking “Out of the Box”: Saving Time & Adding Value to Teaching & Learning

Lesley Gardner, Donald Sheridan and Nina Andreas
Department of Information Systems, University of Auckland Business School, University of Auckland, Auckland, New Zealand
Email: l.gardner@auckland.ac.nz, d.sheridan@auckland.ac.nz, nina.andreas@googlemail.com

Abstract: This paper describes how Google Docs can be used to create a rubric marking system with ‘out-of-the-box’ features: a simple form with radio buttons or a Likert scale records the student’s marks for each assessment item and the total marks; comments are optional. The data can be exported as a spreadsheet suitable for uploading to an LMS or sending to the students by email. In this study our hypothesis that marking takes less time using rubrics was supported. The data recorded within the Google Docs application was a ‘bonus’ and produced many surprises when analysed. The paper discusses visualisation of the rubrics using Google Analytics ‘off-the-shelf’ and the possibility of further analysis such as inter-rater reliability and item response theory.

Introduction

The use of rubrics for marking has been debated for some time, and issues regarding the granularity and effectiveness of such rubrics are pertinent to this discussion. This paper describes some initial experiments using standard “out of the box” software, and some “out of the square” thinking, to facilitate rapid turnaround in marking and, as a bonus, to gain insight into our teaching and learning processes. Using Google Docs and embedded timing mechanisms, a simple investigative experiment was conducted to examine the performance and accuracy indicators that were possible ‘off-the-shelf’. Staff members were on one level amazed at the performance indicators and on the other shocked at the time taken to mark, both normally and using the rubric. The use of Google Docs allowed a reasonably secure, time-stamped rubric.
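As a concrete illustration of the export step described above (a sketch, not the authors’ implementation), per-criterion marks exported from such a form as CSV could be totalled per student before uploading to an LMS. The column names and sample rows below are assumptions for illustration only:

```python
import csv
import io

# Hypothetical CSV export from the rubric form; the "Student ID" column
# and the criterion column names are illustrative assumptions.
SAMPLE_EXPORT = """Student ID,Paragraph structure,Spelling,Argument
s001,3,4,5
s002,2,3,4
"""

def total_marks(csv_text):
    """Sum the per-criterion marks in each row, keyed by student ID."""
    reader = csv.DictReader(io.StringIO(csv_text))
    totals = {}
    for row in reader:
        student = row.pop("Student ID")
        totals[student] = sum(int(mark) for mark in row.values())
    return totals

print(total_marks(SAMPLE_EXPORT))  # {'s001': 12, 's002': 9}
```

The resulting totals dictionary could then be written back out as a two-column spreadsheet for the LMS upload or mail-merge the paper mentions.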
The Google environment provides data analysis and analytics as an additional bonus. This article describes the process, the outcomes and the findings of the experiment, with recommendations for further research.

Rubrics in General: A Literature Review

Grading and assessment are influenced by a range of theories, legislation, funding guidelines and policies. Public universities acknowledged these forces and started using criteria-based grading and reporting, not only for the sake of effective grading and assessment but also to provide evidence of effective teaching and learning processes. The most important arguments are, first, that students have the right to be graded solely on the quality of their work, no matter how other students performed and without consideration of their own previous performance, and, secondly, that students have the right to know the criteria by which they will be judged (Sadler, 2005). In addition, emerging national standards support the trend towards criteria-based grading (Cooper & Gargan, 2009), often assured through accreditation by agencies (Anglin et al., 2008).

Rubrics address these changes. They use a set of categories, also called assessment criteria, for evaluating students’ performance and providing feedback. Each criterion is displayed on a scale with a quality description, so that the outcomes can be interpreted easily. Such criteria could be, for instance, ‘paragraph structure’ or ‘spelling’ (Cooper & Gargan, 2009; Easton, 2007; Reddy & Andrade, 2009). Rubrics are useful for many disciplines and purposes in higher education, for instance in information literacy, nursing and management, and for grading literature reviews, oral presentations and exams (Reddy & Andrade, 2009).