Teacher’s Corner
Six Online Statistics Courses: Examination and Review
Jorge LARREAMENDY-JOERNS, Gaea LEINHARDT, and Javier CORREDOR
We extend George W. Cobb's evaluative framework for statistics textbooks to six online instructional materials that exemplify the diversity of introductory statistics materials available on the Internet. Materials range from course Web sites with limited interactive capabilities to courseware and electronic textbooks that make extensive use of interactive learning objects and environments. Instructional materials are examined in light of recent cognitive research that underscores the robustness of learning from examples, the importance of authentic problem solving in promoting knowledge in use and skill acquisition, and the use of feedback to maximize learning opportunities. Selected units that focus on statistical tools (measures of central tendency, simple linear regression, and one-way analysis of variance) are analyzed in terms of the authenticity and diversity of examples, the authenticity and cognitive complexity of exercises, and the use of interactive learning objects and feedback. General conclusions and suggestions for future directions for online statistics instruction are presented.
KEY WORDS: Distance learning; Evaluation; Web-based instruction.
1. INTRODUCTION
In 1987, George W. Cobb examined 16 introductory statistics textbooks in the Journal of the American Statistical Association. Cobb laid out an evaluative framework that considered technical level and quality of exposition, topics covered, and quality of exercises. He selected four standard topics: sample mean, sample standard deviation, normal distribution, and sampling distribution of the mean. He characterized explanations by identifying the extent to which the expositions relied on formulas and derivations. Cobb estimated the breadth and depth of explanations by comparing the content covered and the level of detail within an additional set of topics (regression, analysis of variance, exploratory data analysis, and computers). Finally, he addressed the quality of exercises by estimating the authenticity of datasets, the meaningfulness of the studies described in the problem statements, and the ratio of thinking to number crunching.

Jorge Larreamendy-Joerns is Associate Professor, Department of Psychology, Universidad de los Andes, Bogota, Colombia. Gaea Leinhardt is Professor, and Javier Corredor is Doctoral Student, University of Pittsburgh, PA (E-mail: gaea@pitt.edu). Support for this article was provided by a grant to the second author from Carnegie Mellon University via The William and Flora Hewlett Foundation. The opinions expressed do not necessarily reflect the position or the policy of the foundation, and no official endorsement should be inferred.
In the 17 years since Cobb's evaluative framework was published, two major changes have occurred in the landscape of statistics education. First, there has been growing recognition of statistical knowledge as a crucial component of core scientific literacy (Utts 2003). As a result, we now see the teaching and learning of statistics in elementary, secondary, and higher education (NCTM 2000). Second, there has been a flowering of online technologies and courses that both support and teach statistics. The use of online technologies is often predicated on the assumptions that the Internet can help make statistical knowledge accessible to vast audiences and that online multimedia environments can make learning more interactive and meaningful. Today, an impressive variety of instructional materials in statistics education is accessible on the Internet, from full stand-alone introductory courses and electronic textbooks to digital repositories of learning objects and datasets.
This article extends Cobb's evaluative framework to a set of online instructional materials. Like Cobb, we focus on the quality of explanations and exercises. Yet, in identifying their critical features, we draw on recent cognitive research that underscores the prevalence of learning from examples, the importance of genuine problem solving in promoting knowledge in use, and the use of feedback to maximize learning opportunities. Our goals are to provide a sense of the quality of some instructional materials available on the Internet and to suggest criteria for inspecting such materials: criteria that we imagine will be expanded and developed over time as more materials and resources emerge.
2. EVALUATIVE GOALS AND CRITERIA FOR
ONLINE COURSEWARE
An important goal for the evaluation of online multimedia courseware is the assessment of instructional explanations and learning opportunities. To assess explanations and learning opportunities, three analyses need to be conducted. First, the conceptions of learning and teaching that underlie the design of instructional materials must be explicated. Second, the extent to which instructional materials comply with well-established principles of learning and teaching should be assessed. Third, the learning affordances and constraints linked to the technical implementation of courseware need to be specified. [We use the term "affordances" following Gibson's (1977) sense of the term as it has been adopted by psychologists and others (Greeno 1994). An affordance is an opportunity in an environment to make use
of a physical or mental resource to accomplish a goal. But the op-
240 The American Statistician, August 2005, Vol. 59, No. 3 © 2005 American Statistical Association DOI: 10.1198/000313005X54162