Peer Assessment for Action Learning of Data Structures and Algorithms

Philip Machanick
School of ITEE, University of Queensland, St Lucia Qld 4072, Australia
philip@itee.uq.edu.au

Abstract

This paper describes an experience with the use of peer assessment in tutorials as a tool to promote deep learning from the early stages of a course on Data Structures and Algorithms. The goal was to improve the utility of tutorials in encouraging more efficient learning habits. Since assessment forms a key part of the actual curriculum, tutorial exercises were for credit, but the emphasis was on formative assessment. The novelty in this approach is that peer assessment has not been extensively studied in Computer Science Education for content of the kind covered in this course. Evaluation is limited by the fact that other details of the course were also changed. Two surveys were conducted, one soon after the first assignment, the other soon after the second assignment. Of the various aspects of the course surveyed, the tutorial quizzes were the least popular, but they improved in popularity between the two surveys. The overall effect, based on general observation of the class, appeared to be positive. Results were closer to a normal distribution than in the previous two years. Performance in the first assignment, which required understanding of how the theory is applied in a practical situation, suggested that deep learning had taken place.

Copyright © 2005, Australian Computer Society, Inc. This paper appeared at the Australasian Computing Education Conference 2005, Newcastle, Australia. Conferences in Research and Practice in Information Technology, Vol. 42. Alison Young and Denise Tolhurst, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.

1 Introduction

The irony does not escape him: that the one who comes to teach learns the keenest of lessons, while those who come to learn learn nothing.
— JM Coetzee, Disgrace (Random House, London, 1999, p. 5)

There is a discontinuity between techniques in undergraduate teaching and knowledge formation in research. In research, peer review is common, with the assumption that all participants are equals – even if reviewers are more equal than authors. Peer review is not an uncommon approach in the workplace, though perhaps it is more common for review to be by an immediate superior. An unstated assumption is that the employee will "graduate" to the level of their superior through a process of learning on the job and occasional review.

Given that both the ordinary workplace and academia see a role for peer review, or at least review by the community within which one works, there seems to be a case for a similar process in formal education.

This paper reports on an experiment in introducing "peer review", in the form of peer assessment, into the teaching of a classical area of Computer Science: data structures and algorithms. Peer review fits the action learning paradigm (Bunning 2001) well, as it introduces a strong element of reflection. Through understanding the assessment process, students should build a clearer notion of their educational goals, and be better able to plan for future assessment. Further, the process of learning should be closer to practice in the real world, where design reviews, for example, are common.
While this general notion is reasonably well accepted in education, there is relatively little literature on peer assessment in Computer Science, and what exists is generally focused on a narrow range of areas – software projects (Ruehr & Orr 2002) and design of Internet services (Brookes & Indulska 1996) are the two most common. What little work there has been on peer assessment in algorithms (Hübscher-Younger & Narayanan 2003) is in a broader context, and is not specifically focused on peer assessment as a tool.

The aim of this paper therefore is to present an investigation into the value of peer assessment specifically in data structures and algorithms, and even more specifically as an aid to making tutorials more effective. Peer assessment is potentially useful in tutorials for several reasons. Tutorials without assessment are seldom taken seriously, because students can be expected to focus their energies on assessable activities (Biggs 1999). Since tutorials are the most regular interaction with the class where assessment could take place, using them for formative assessment (Brown 1999) is useful. Tutorials are also a natural place for peer assessment because of the formative aspects inherent in evaluating the work of others.

The approach adopted in this research was to introduce peer assessment into tutorials, with the specific goal of introducing formative assessment. Given that the course was run on a different basis, with different lecturers, compared with its immediate predecessors, comparison with past versions of the course is difficult. However, evaluation based on general levels of student comprehension of concepts known to be difficult, together with surveys of student attitudes to the approach, provides some measurement of outcomes. In addition, general observation of the behaviour of the class gives some indication of the success of the intervention, though it lacks rigour. Finally, evidence of deep learning (Biggs 1999, Ramsden 1988) could be found through assignment or examination questions that could only be answered if students had formed their own model of the key concepts in the course.

Most specifically, the alignment of intended learning outcomes with assessment (Biggs 1999) was measured through performance in the first of two assignments, in which students were presented with a problem from a section of the course they had not yet seen, and had to understand the theory in order to solve it.

The remainder of this paper is structured as follows. Section 2 contains a brief review of background and related work. Section 3 supplies more detail of the approach to the problem, with results in Section 4. The paper ends with a concluding section containing reflections on the approach taken and the results.