Effectiveness of an Adaptive Quizzing System as
an Institutional-Wide Strategy to Improve Student
Learning and Retention
E’Loria Simon-Campbell, PhD, RN & Julia Phelan, PhD
Exploring ways to help students achieve success in nursing programs is critical to increasing retention and the number of nurse graduates. This study examined the impact of an adaptive quizzing system implemented as a strategy to support student persistence and performance, measured by use, grades, and graduation. Results indicated that use of the system increased course content mastery and predicted final course grades. Retention and program completion rates were also positively influenced.
Keywords: Adaptive Quizzing; NCLEX; Nursing Education; Retention; Remediation; Technology
The first-time National Council Licensure Examination (NCLEX) pass rate has emerged as one marker of a successful, high-quality nursing program and as an indicator of quality for state boards of nursing and the nursing school's community of interest.1-3 The focus on first-time pass rates is further supported by data showing higher failure rates for repeat NCLEX takers compared with first-time test takers.4
The National Council of State Boards of Nursing attributes the difference in failure rates to the extended time between graduation and retaking the examination for repeat test-takers. Nursing schools also risk the stigma associated with students' repeated failures, which can erode the support of their community as well as continuing school accreditation. Moreover, students who cannot pass the NCLEX are unable to pursue their chosen career.
The likelihood of a nursing student graduating and passing the NCLEX the first time is difficult to predict given the myriad interacting variables that influence success or failure.5 Despite this difficulty, many nursing programs use standardized assessment programs to try to predict student success on the NCLEX, with some schools implementing graduation policies based on standardized test scores.2
In Texas, for example, a survey of 74% of the state-approved nursing programs found that 98.8% of schools used standardized tests, and 47.9% used scores on standardized tests to make progression or graduation decisions.6 Nationally, 20% of nursing programs have progression policies that mandate students meet a benchmark on a standardized test to qualify for graduation.1,7
But as Spurlock8 has indicated, little or no guidance or validity evidence is available to faculty who wish to set cut, or decision, scores for their progression or graduation policies.
The process of using standardized tests (and achievement of a benchmark on those tests) to determine progression and graduation eligibility is also known as "high-stakes" testing.1
Two of the most commonly used products in high-stakes testing come from Health Educational Systems, Inc. (HESI) and Assessment Technologies Institute. Although both companies offer a suite of examinations, many schools use the HESI Exit Exam (HESI E2) as an NCLEX predictor test. Research indicates that although predictor tests may identify high-performing students who are likely to pass the NCLEX, they are much less precise in identifying the likelihood of failure.8,9
This distinction in describing the accuracy of a test is especially important when progression policies are in place and data may be used to preclude graduation.
There is, of course, another possible use for data from standardized tests: rather than informing only progression decisions, the information can also help students shape their remediation and study efforts before
taking the NCLEX. Despite successful completion of nursing
programs, some students may have difficulty achieving the
benchmark on a standardized measure. If this is the case,
standardized test results can provide students and faculty
Nurse Educator, Vol. 00, No. 0, pp. 00-00
Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved.
Author Affiliations: Assistant Professor (Dr Simon-Campbell), School
of Nursing, Sam Houston State University, Texas; and Senior Researcher
(Dr Phelan), National Center for Research on Evaluation, Standards, and
Student Testing, University of California at Los Angeles.
Disclosure: Dr Simon-Campbell was a consultant for Lippincott providing faculty support related to the adaptive quizzing software.
Dr Phelan is an assessment expert and research consultant for Wolters
Kluwer Health.
Correspondence: Dr Simon-Campbell, Citizen’s National Bank Building,
1 Financial Plaza, Suite 215, Huntsville, TX 77340 (exs063@shsu.edu).
Supplemental digital content is available for this article. Direct URL citations
appear in the printed text and are provided in the HTML and PDF versions
of this article on the journal’s Web site (www.nurseeducatoronline.com).
Accepted for publication: January 30, 2016
Published ahead of print: March 10, 2016
DOI: 10.1097/NNE.0000000000000258