A general factor of intelligence fails to account for changes in tests’ scores after cognitive practice: A longitudinal multi-group latent-variable study
Eduardo Estrada a, Emilio Ferrer b, Francisco J. Abad a, Francisco J. Román a, Roberto Colom a,⁎
a Facultad de Psicología, Universidad Autónoma de Madrid, Spain
b University of California at Davis, USA
Article history:
Received 23 December 2014
Received in revised form 4 February 2015
Accepted 20 February 2015
Abstract
As a general rule, the repeated administration of tests measuring a given cognitive ability to the same participants yields increased scores. This is the well-known practice effect, and it must be taken into account in research aimed at properly assessing changes after the completion of cognitive training programs. Here we focus on one specific research question: Are changes in test scores accounted for by the underlying cognitive construct/factor the tests tap? Answering this question requires assessing the factor of interest with several measures. A total of 477 university students completed a battery of four heterogeneous standardized intelligence tests twice, with a lapse of four weeks between sessions. Between the pre-test and post-test sessions, one group of participants completed eighteen practice sessions based on memory span tasks, a second group completed eighteen practice sessions based on processing speed tasks, and a third group did nothing. All three groups showed remarkable gains in test scores from the pre-test to the post-test session. However, multi-group longitudinal latent-variable analyses revealed that the latent factor tapped by the intelligence measures failed to account for the observed changes.
© 2015 Elsevier Inc. All rights reserved.
Keywords:
General cognitive ability
Practice effect
Working memory span
Processing speed
1. Introduction
Practice effects are broadly acknowledged in the cognitive
abilities literature (Anastasi, 1934; Colom et al., 2010; Hunt,
2011; Jensen, 1980; Reeve & Lam, 2005). When the same
individuals complete the same (or parallel) standardized tests,
their scores show remarkable improvements. However, as
discussed by Jensen (1998) among others (Colom, Abad,
García, & Juan-Espinosa, 2002; Colom, Jung, & Haier, 2006; te
Nijenhuis, van Vianen, & van der Flier, 2007), specific measures
tap cognitive abilities at three levels: general ability (such as the
general factor of intelligence, or g), group abilities (such as verbal
or spatial ability), and concrete skills required by the measure
(such as vocabulary or mental rotation of 2D objects).
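The distinction between gains in observed test scores and change in the underlying general factor can be illustrated with a toy simulation (our own sketch, not the authors' analysis; all loadings, practice shifts, and noise levels are hypothetical). If practice adds test-specific gains that are unrelated to g, every observed score improves from pre-test to post-test, yet the gains are essentially uncorrelated with g:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 477  # sample size matching the study

# Latent general ability g, fixed across sessions (no true latent change)
g = rng.normal(0, 1, n)

# Four tests load on g plus test-specific noise (pre-test session)
loadings = np.array([0.8, 0.7, 0.75, 0.65])   # hypothetical factor loadings
pre = g[:, None] * loadings + rng.normal(0, 0.5, (n, 4))

# Post-test session: same g, plus a test-specific practice shift
practice = np.array([0.5, 0.4, 0.6, 0.3])     # hypothetical practice effects
post = g[:, None] * loadings + practice + rng.normal(0, 0.5, (n, 4))

gain = post - pre
mean_gain = gain.mean(axis=0)  # positive for every test: scores improved
corr_with_g = np.array([np.corrcoef(g, gain[:, j])[0, 1] for j in range(4)])

print("mean gains:", mean_gain.round(2))
print("corr(gain, g):", corr_with_g.round(2))  # near zero: g does not drive the gains
```

The scores rise on all four tests, but because the gains were generated at the level of the specific measures rather than through g, they carry no information about the general factor. This is the pattern a multi-group longitudinal latent-variable model can detect and a simple comparison of observed means cannot.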
Within this general framework, recent research aimed at
testing changes after the completion of cognitive training
programs has produced heated discussions regarding the nature
of the changes observed in the measures administered before
and after the training regime (Buschkuehl & Jaeggi, 2010;
Conway & Getz, 2010; Haier, 2014; Moody, 2009; Shipstead,
Redick, & Engle, 2010, 2012; Tidwell, Dougherty, Chrabaszcz,
Thomas, & Mendoza, 2013). Such changes may or may not be accounted for by the underlying construct of interest. For instance, in pioneering work, Jaeggi, Buschkuehl, Jonides, and Perrig (2008) reported gains on fluid intelligence measures after completion of a challenging cognitive training program based on the dual n-back task. This report stimulated a number
of investigations aimed at replicating the finding (Buschkuehl,
Hernandez-Garcia, Jaeggi, Bernard, & Jonides, 2014; Colom et al.,
Intelligence 50 (2015) 93–99
⁎ Corresponding author at: Facultad de Psicología, Universidad Autónoma de
Madrid, 28049 Madrid, Spain. Tel.: +34 91 497 41 14 (Voice).
E-mail address: roberto.colom@uam.es (R. Colom).
http://dx.doi.org/10.1016/j.intell.2015.02.004