262 IEEE TRANSACTIONS ON EDUCATION, VOL. 45, NO. 3, AUGUST 2002
Patterns in Student–Student Commenting
Roy Rada and Ke Hu
Abstract—The virtual classroom under consideration supports students in submitting exercise answers online and in commenting on one another's answers with numerical scores. Automated quality control procedures track the student scores, and the patterns of scores vary as a function of the pragmatic import of those scores. Clear consequences of commenting must be enforced in the classroom before students will engage in fruitful commenting. The managerial problems that arise in courses relying on extensive student–student commenting can be partially solved with automated tools that guide students to comment in fair and flexible ways.
Index Terms—Bar charts, classrooms, commenting, computer-supported collaborative learning, peer–peer assessment, quality control.
I. INTRODUCTION
Some organizations are utilizing the information highway to augment and deliver education [1]. They want increased access to students, a better return on the investment dollar, and improved quality of education. To achieve these goals, the educational organization should implement quality control procedures [2].
Learning in small steps with timely feedback acts as reinforcement when answers are good and as a corrective measure when answers are poor [3]. However, it may not be practical for an instructor to give fast, personal feedback on every small step to each student in the class. One possible solution is to utilize peer–peer evaluations.
Students are not authorities in the subject matter being critiqued. If students comment on other students' work, are they placed inappropriately in an authoritative position? Experience suggests that peer–peer interaction can be valuable and economical. Pedagogic reasons are presented next.
In the 19th century, the monitorial method of teaching in England was the dominant innovation in public education [1]. The monitorial method had students in a higher grade monitor or teach students in a lower grade, because not enough adult teachers existed. However, as wealth increased, budgets to train and employ professional teachers grew, and the monitorial method disappeared. Yet the demise of the monitorial method was not a full reflection of the merits or demerits of the approach.
When students write in teams, they learn to write better than when they write alone [4]. Numerous studies have shown that student–student learning can help students [5]. The theoretical explanation is that collaborators provide insight about the audience and help the writer develop a model of the audience. Students learn by developing a model of some domain; to improve their current model, they benefit from incremental improvements suggested by peers whose models are similar to, but not the same as, their own. Given that peer–peer interaction is demonstrably advantageous for various learning objectives, the challenge becomes how to manage such interaction without consuming too much teacher time. Computer-supported collaborative learning tools can be the solution [6].

Manuscript received December 19, 1997; revised February 11, 2002.
The authors are with the Department of Information Systems, University of Maryland, Baltimore County, Baltimore, MD 21250 USA (e-mail: rada@umbc.edu).
Publisher Item Identifier S 0018-9359(02)05046-X.
Finally, peer–peer feedback supports learning how to work together. Of two metaphors for the conceptualization of learning (acquisition and participation), the first considers learning as the process of gaining possession of knowledge, and the second as a process of becoming a participant in a particular community [7]. Through peer–peer interaction, students' collaborative work skills may be improved [8].
Separate from peer–peer interaction, computerized methods have been introduced to take advantage of datasets about student performance. For instance, a fuzzy grading system has been tested that compares a student's grades with the grades of others and adjusts the final assigned letter grade according to rules encoded in fuzzy logic [9]. Another teacher has used statistical regression techniques to analyze patterns of student grades and to predict when a student deserves special honors [10]. Although courseware exists that can automatically grade certain types of homework submissions [11], peer–peer interaction has broad applicability.
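The rule base of the cited fuzzy grading system [9] is not reproduced here, but the general idea — comparing a student's numeric score with the class distribution and nudging the letter grade by graded rules — can be sketched as follows. The thresholds, membership functions, and function name are illustrative assumptions, not the cited system's design:

```python
import statistics

def adjust_letter_grade(score, class_scores, letter):
    """Nudge a letter grade using simple fuzzy-style rules that compare
    a student's score with the class distribution.  Illustrative only;
    not the rule base of the system described in [9]."""
    mean = statistics.mean(class_scores)
    stdev = statistics.pstdev(class_scores) or 1.0
    z = (score - mean) / stdev
    # Degree (0..1) to which the score is "well above"/"well below" the class.
    well_above = min(max(z - 0.5, 0.0), 1.0)
    well_below = min(max(-z - 0.5, 0.0), 1.0)
    order = ["F", "D", "C", "B", "A"]
    i = order.index(letter)
    if well_above > 0.5 and i < len(order) - 1:
        i += 1  # rule: clearly above the class average -> raise one step
    elif well_below > 0.5 and i > 0:
        i -= 1  # rule: clearly below the class average -> lower one step
    return order[i]
```

A real fuzzy system would define overlapping membership functions and a defuzzification step; the sketch keeps only the comparative flavor of such rules.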
Research has been done on courseware that tracks how students use a system [12]. Goldberg [13] developed a method that tracks such data as first and last access date, how many times course comments were accessed, the history of pages visited, and several other measures. Other assessment techniques have analyzed log files on hypermedia systems to evaluate the process of learning. Lawless and Kulikowich [14] used cluster analysis to interpret individual log files and identified three patterns of navigation: knowledge seekers, feature explorers, and apathetic hypertext users. Barab et al. [15] represented and compared different students' navigation paths through a hypermedia system. Such information helps identify students who are not using the courses to the best advantage or who are lagging in use.
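The specific cluster analysis of [14] is not detailed here, but grouping log-file feature vectors (for example, pages visited and mean time per page) into navigation patterns can be sketched with a minimal k-means routine. The feature choice and fixed initial centroids are assumptions for illustration:

```python
def kmeans(points, centroids, iters=10):
    """Minimal k-means for grouping log-file feature vectors such as
    [pages visited, mean minutes per page].  Initial centroids are
    supplied explicitly so the run is deterministic."""
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean).
        groups = [[] for _ in centroids]
        for p in points:
            nearest = min(
                range(len(centroids)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            groups[nearest].append(p)
        # Recompute each centroid as the mean of its group.
        centroids = [
            [sum(dim) / len(g) for dim in zip(*g)] if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups
```

With three seed centroids, the resulting groups would play the role of the three navigation patterns; interpreting a group as "knowledge seekers" versus "apathetic users" remains a human judgment about the centroid values.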
Peer assessment can be used for either groups or individuals, and it can be qualitative as well as quantitative [16]. The Many Using and Creating Hypermedia system, which supports peer–peer assessment, was used in classrooms; results with that system show that structured assessments from one student to another can substitute for teacher feedback [17].
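The quality control described in the abstract rests on aggregating the numerical scores that peers attach to submissions. A minimal sketch of such an aggregation — flagging submissions with low average scores or too few raters for teacher attention — might look as follows. The function name, thresholds, and report shape are illustrative assumptions, not the system described in this paper:

```python
from statistics import mean

def summarize_peer_scores(scores_by_submission, flag_below=2.5, min_raters=2):
    """Aggregate peers' numerical scores per submission and flag items
    that need teacher attention: a low average or too few raters.
    Illustrative sketch only."""
    report = {}
    for submission, scores in scores_by_submission.items():
        avg = mean(scores) if scores else None
        flagged = len(scores) < min_raters or (avg is not None and avg < flag_below)
        report[submission] = {"mean": avg, "raters": len(scores), "flagged": flagged}
    return report
```

Routing only the flagged items to the instructor is one way such tooling could reduce teacher load while leaving routine feedback to peers.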
Our high-level hypothesis is that quality control methods in the virtual classroom can reduce teacher load and improve the quality of learning. More specifically, the hypothesis is that some methods of pattern analysis of student–student