In-Game Assessments Increase Novice Programmers’
Engagement and Level Completion Speed
ABSTRACT
Assessments have been shown to have positive effects on learning
in compulsory educational settings. However, much less is known
about their effects in discretionary learning settings, especially in
computing education and educational games. We hypothesized
that adding assessments to an educational computing game would
provide extra opportunities for players to practice and correct
misconceptions, thereby affecting their performance on
subsequent levels and their motivation to continue playing. To test
this, we designed a game called Gidget, which follows a mastery
learning paradigm and in which players help a robot find and fix
defects in programs. Across two studies, we manipulated the
inclusion of multiple choice and self-explanation assessment
levels in the game, measuring their impact on engagement and
level completion speed. In our first study, we found that including
assessments caused learners to voluntarily play longer and
complete more levels, suggesting increased engagement; in our
second study, we found that including assessments caused learners
to complete levels faster, suggesting increased understanding.
These findings suggest that including assessments in a
discretionary computing education game may be a key design
strategy for improving informal learning of computing concepts.
Categories and Subject Descriptors
K.3.2 Computer Science Education: Introductory Programming,
D.2.5 Testing and Debugging.
Keywords
Programming, assessment, engagement, speed, debugging, serious
game, educational game.
1. INTRODUCTION
Recent press about code.org and other efforts to increase computing
literacy have begun to attract millions of people to learn computer
programming. Many of these individuals are turning to
discretionary online resources such as Codecademy, Khan
Academy, Coursera, and CodeHS, and research environments such
as Alice and Scratch, to learn. Although research on these learning
materials is still sparse, learners report that they enjoy these
informal resources more than traditional classes because they allow
for flexibility in how they learn, they give learners a stronger sense
that they are retaining the material [5], and they are more motivating, engaging,
and interesting than traditional classroom courses [10]. Some of
these attitudes can be attributed to these resources’ use of game
mechanics such as scaffolded materials, structured mastery learning,
concrete goals, and extrinsic incentives such as badges [39].
Unfortunately, many of these resources struggle to keep learners
engaged [12] and few of them involve explicit evaluations of
learning, making it unclear how much learners actually learn or
retain. Therefore, as these resources increase in popularity, a
significant design challenge will be improving engagement, while
also demonstrably improving understanding.
One way to potentially improve both understanding and
engagement is to use assessments [29]. Assessments, which directly
tests learners’ knowledge by asking them to explicitly answer
questions about the material, are widely used in compulsory settings
not only to measure learners’ progress and what they know [6], but
also to improve students’ learning itself [4]. Assessments improve
learning and understanding partly by helping students practice
course material and by clearing up misconceptions [8,20].
Unfortunately, there is a lack of research about how including
assessments might affect learners’ use of discretionary learning
resources [5]. Moreover, there is reason to believe that assessments
could actually harm engagement, even if they improve learning. For
example, assessments can lead to test anxiety, negatively affecting
engagement [34], especially when learners answer incorrectly or
receive little feedback [6]. Including assessments in educational games
or resources that use game mechanics may be even more harmful, as
they may interfere with a player’s enjoyment of the game, creating a
“testing” mode that is poorly integrated with the rest of the game,
leading the learner to disengage or even quit the activity.
To begin exploring the role of assessments in discretionary
computing education games, we investigated the effect of integrated
learning assessments on both engagement and speed across two
online controlled experiments where learners played Gidget [22,23],
a debugging game in which learners play through a series of levels,
ICER’13, August 12–14, 2013, San Diego, California, USA.
Copyright © 2013 ACM 978-1-4503-2243-0/13/08…$15.00.
http://dx.doi.org/10.1145/2493394.2493410
Figure 1. Does providing in-game assessment questions help
discretionary learners playing an educational programming
game increase engagement and level completion speed? This
figure shows a multiple choice assessment in such a game.
Michael J. Lee¹, Andrew J. Ko¹, and Irwin Kwan²
¹University of Washington, Information School
{mjslee, ajko}@uw.edu
²Oregon State University, School of EECS
kwan@eecs.oregonstate.edu