Copyright © 2011 Pearson Education, Inc. or its affiliate(s). All rights reserved.
The nation is showing an unprecedented focus on increasing
the rigor in education and preparing students for college.
The college readiness trend is driving changes in the ways in
which the nation uses student test data. Educational data are
no longer limited to static snapshots showing the status
of student performance at a single point in time. Instead, data
are linked grade-to-grade and course-to-course to create
a longitudinal measure of student performance. Inferences
about student progress are now made using status as well
as growth models. As an illustration of the national focus on
longitudinal data, the United States Department of Education
(2010) publication, A Blueprint for Reform: The Reauthorization
of the Elementary and Secondary Education Act, noted,
“Instead of a single snapshot, we will recognize progress
and growth” (p. 2).
States have previously focused on snapshots of student
performance and have drawn inferences about progress
from those snapshots, assuming that passing in one grade/course
meant that students were on track to pass in the
next grade/course. The problem is that data supporting those
assumptions were not typically provided. In some instances
when states did analyze longitudinal data from a system built
for static interpretations, the results proved surprising. For
example, states transitioning from a horizontal to a vertical
scale have found that, when passing standards are placed on a
vertical scale and compared across grades, the
passing standard for a grade level can be lower than the
passing standard for the prior grade level. The new national
trend is to enhance our ability to draw inferences about
student growth by collecting more direct evidence from
longitudinal student data.
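As a purely hypothetical illustration of the vertical-scale finding described above (the linking constants and cut scores below are invented for the sketch, not drawn from any state's data), placing each grade's passing cut on a shared vertical scale makes cross-grade comparison possible and can expose a higher grade's cut sitting below the prior grade's:

```python
# Hypothetical sketch: passing cuts reported on each grade's own
# (horizontal) scale are not comparable, but once mapped onto a shared
# vertical scale the cross-grade ordering becomes visible.

# Invented linear linking constants (slope, intercept) per grade --
# real vertical scales come from linking studies, not assumptions.
LINKING = {
    3: (1.0, 200),  # vertical = slope * raw + intercept
    4: (1.0, 240),
    5: (1.0, 260),
}

# Invented passing cut scores on each grade's own scale.
PASSING_CUT_RAW = {3: 50, 4: 48, 5: 25}

def to_vertical(grade, raw_score):
    """Map a raw (horizontal-scale) score onto the shared vertical scale."""
    slope, intercept = LINKING[grade]
    return slope * raw_score + intercept

def cuts_on_vertical_scale():
    """Passing cuts for every grade, expressed on the vertical scale."""
    return {g: to_vertical(g, cut) for g, cut in PASSING_CUT_RAW.items()}

def inversions(cuts):
    """Grades whose passing cut falls below the prior grade's cut."""
    grades = sorted(cuts)
    return [g for prev, g in zip(grades, grades[1:]) if cuts[g] < cuts[prev]]

if __name__ == "__main__":
    cuts = cuts_on_vertical_scale()
    print(cuts)             # {3: 250.0, 4: 288.0, 5: 285.0}
    print(inversions(cuts)) # [5] -- grade 5's cut is below grade 4's
```

With these invented numbers, the grade 5 cut (285) lands below the grade 4 cut (288) on the common scale, the kind of surprise the text describes when static, grade-by-grade standards are first compared longitudinally.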
Making Sense of the Metrics: Student Growth, Value-added Models, and Teacher Effectiveness
Kimberly O’Malley, Ph.D., Katie McClarty, Ph.D., Tracey Magda, Ph.D., and Kelly Burling, Ph.D.

The use of longitudinal data expands beyond informing about
student progress to evaluating teachers and educational
leaders. President Obama has repeatedly highlighted the need
for teacher effectiveness measures and offered incentives for
those who are willing to implement them. The Department
of Education awarded billions of dollars from the Race to
the Top fund to 11 states and the District of Columbia in
2010. In granting the awards, the Department evaluated state
applications for which 28% of the points were dedicated to a
section entitled “Great Teachers and Leaders.” As part of the
application requirements, states had to develop and describe
a system for assessing teacher effectiveness that included
student achievement data and provided annual effectiveness
ratings for all teachers. States awarded the Race to the Top
funds are currently working to implement their plans for
teacher effectiveness systems, with most relying on student
growth as an essential component of their systems.
The use of student score changes in different applications
has led to confusion in the use of terms and concepts. Terms
such as student growth, value-added models, and teacher
effectiveness are often used interchangeably. The differences in
these three measures are significant. Using one when another
is intended has impeded the nation’s ability to develop these
measures well and to use the information in optimal ways.
The goal of this paper is to define student growth, value-added
models, and teacher effectiveness, the three terms that
are often confused. Furthermore, the paper will compare and
contrast features of these three measures and identify next
steps needed for advancing the use of these measures for
educational reform.
Student growth measures focus on performance of individual
students, addressing questions about how much a student
has progressed and whether the student is on track, where on track
Bulletin
April 2011 | Issue 19
www.pearsonassessments.com
1