Evaluating Student Understanding of Core Concepts in Computer Architecture

Leo Porter, Department of Mathematics and Computer Science, Skidmore College, Saratoga Springs, NY, USA
Saturnino Garcia, Department of Mathematics and Computer Science, University of San Diego, San Diego, CA, USA
Hung-Wei Tseng, Computer Science and Engineering Department, UC San Diego, La Jolla, CA, USA
Daniel Zingaro, Department of Computer Science, University of Toronto, Toronto, ON, Canada

ABSTRACT
Many studies have demonstrated that students tend to learn less than instructors expect in CS1. In light of these studies, a natural question is: to what extent do these results hold for subsequent, upper-division computer science courses? In this paper we describe our work in creating high-level concept questions for an upper-division computer architecture course. The questions were designed and agreed upon by subject-matter and teaching experts to measure desired minimum proficiency of students post-course. These questions were administered in four separate computer architecture courses at two different institutions: a large public university and a small liberal arts college. Our results show that students in these courses were indeed not learning as much as the instructors expected, performing poorly overall: the per-question average was only 56%, with many questions showing no statistically significant improvement from pre-course to post-course. While these results follow the trend from CS1 courses, they are still somewhat surprising given that the courses studied were taught using research-based pedagogy that is known to be effective across the CS curriculum. We discuss implications of our findings and offer possible future directions for this work.

Categories and Subject Descriptors
K.3.2 [Computer Science Education]: Computer and Information Science Education

Keywords
curriculum, assessment, computer architecture

1. INTRODUCTION
The CS1 research community continues to demonstrate that students do not learn what is expected in typical CS1 courses [5, 13]. Post-CS1, students continue to struggle with writing or explaining code that researchers and teachers feel should be well within reach of these students. Because CS1 is the first CS course taken by many students, the reasons for this poor performance are unclear. Is it our inability to properly teach CS1? Is it related to fixed characteristics of successful and unsuccessful students? To what extent does this pattern continue through upper-level courses, in which the “strong” students are likely enrolled?

In this paper, we begin an investigation into what students learn in an upper-division introduction to computer architecture course. Through an examination of final exam questions, we developed questions designed to test students’ high-level understanding of core course concepts. The questions were also designed with expert architects in mind: what questions would an architect view as trivial and something “every student should get correct”? For example, from the view of an architecture instructor, every student should know that pipelining improves overall instruction throughput, not individual instruction latency.
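To make that pipelining example concrete, the sketch below (ours, not from the study; the five-stage pipeline, 1 ns cycle time, and hazard-free execution are illustrative assumptions) computes per-instruction latency and overall throughput with and without pipelining:

```python
# Illustrative sketch only: stage count, cycle time, and instruction
# count are assumptions, not values taken from the study.

STAGES = 5            # classic 5-stage pipeline (IF, ID, EX, MEM, WB)
CYCLE_TIME_NS = 1.0   # assumed clock period
N_INSTRUCTIONS = 1000

# Latency of a single instruction is unchanged by pipelining:
# it still passes through every stage.
latency_ns = STAGES * CYCLE_TIME_NS

# Unpipelined: each instruction occupies the datapath for all stages
# before the next one can start.
unpipelined_total_ns = N_INSTRUCTIONS * STAGES * CYCLE_TIME_NS

# Pipelined (ideal, no hazards): once the pipeline fills, one
# instruction completes every cycle.
pipelined_total_ns = (STAGES + (N_INSTRUCTIONS - 1)) * CYCLE_TIME_NS

print(f"per-instruction latency: {latency_ns:.1f} ns (same in both cases)")
print(f"unpipelined throughput: {N_INSTRUCTIONS / unpipelined_total_ns:.2f} instr/ns")
print(f"pipelined throughput:   {N_INSTRUCTIONS / pipelined_total_ns:.2f} instr/ns")
```

Under these assumptions, per-instruction latency is identical in both cases, while throughput improves by nearly a factor of the stage count — precisely the distinction the question targets.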
Questions were designed for pre/post-course use. In one course, they were administered as pre/post-test questions in an online quiz. For many questions, students were too unfamiliar with the material to hazard a guess on the pre-test (most students selected “Don’t Know”). On a number of questions, students clearly learned from the course. Unfortunately, on some questions no statistically significant improvement was found between pre- and post-test results.

Following these perplexing results, we suspected students were not taking the test seriously, so we asked the same questions as a post-test in an in-class final exam study session. Again, students did quite poorly on a number of concepts. We then asked the post-questions,