A NIT-picking analysis: Abstractness dependence of subtests correlated
to their Flynn effect magnitudes
Elijah L. Armstrong a,⁎, Jan te Nijenhuis b, Michael A. Woodley of Menie c,d, Heitor B.F. Fernandes e, Olev Must f, Aasa Must g

a Washington University in St. Louis, United States
b National Research Center for Dementia, Chosun University, Gwangju, Republic of Korea
c Department of Psychology, Technische Universität Chemnitz, Germany
d Center Leo Apostel for Interdisciplinary Research, Vrije Universiteit Brussel, Belgium
e Department of Psychology, Federal University of Rio Grande do Sul, Brazil
f Department of Psychology, University of Tartu, Estonia
g Estonian National Defense College, Estonia
Article history:
Received 21 December 2015
Received in revised form 24 February 2016
Accepted 24 February 2016

Abstract
We examine the association between the strength of the Flynn effect in Estonia and highly convergent panel ratings of the 'abstractness' of nine subtests of the National Intelligence Test, in order to test the theory that the Flynn effect results in part from an increase in the use of abstract reference frames in solving cognitive problems. The vectors of abstractness ratings and Flynn effect gains (controlled for guessing) exhibit a near-zero correlation (r = −.02); however, abstractness correlates positively with (and is therefore confounded by) g loadings (r = .61). A General Linear Model is used to determine the degree to which the abstractness vector predicts the Flynn effect vector independently of subtest g loadings and of the portion of the secular IQ gain due to guessing (the Brand effect). Consistent with the abstract reasoning model of the Flynn effect, abstractness positively predicts Flynn effect magnitudes once these confounds are controlled (sr = .44), which indicates an increasing tendency to utilize factors external to the items in order to abstract their solutions.
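The semipartial correlation reported above can be sketched in code: residualize the abstractness vector on the g-loading and guessing vectors, then correlate the residuals with the Flynn effect vector. This is an illustrative sketch only; the vectors below are randomly generated placeholders, not the study's data, and all variable names are ours.

```python
import numpy as np

def semipartial_r(y, x, controls):
    """Semipartial correlation of x with y, residualizing x on the controls."""
    # Design matrix: intercept plus the control vectors
    X = np.column_stack([np.ones(len(x))] + controls)
    # Ordinary least squares fit of x on the controls
    beta, *_ = np.linalg.lstsq(X, x, rcond=None)
    resid = x - X @ beta
    # Correlate the residualized predictor with the outcome
    return float(np.corrcoef(resid, y)[0, 1])

# Placeholder subtest-level vectors (nine subtests), randomly generated
rng = np.random.default_rng(0)
abstractness = rng.normal(size=9)
g_loadings = rng.normal(size=9)
guessing = rng.normal(size=9)
flynn_gain = 0.5 * abstractness + 0.3 * g_loadings + rng.normal(scale=0.5, size=9)

sr = semipartial_r(flynn_gain, abstractness, [g_loadings, guessing])
```

The returned `sr` is the correlation of the Flynn effect vector with the part of the abstractness vector that is orthogonal to g loadings and guessing, which is the quantity the study's General Linear Model isolates.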
© 2016 Elsevier Inc. All rights reserved.
Keywords:
Abstract thinking
Flynn effect
Intelligence
National Intelligence Test
Estonia
g loading
1. Introduction
The Flynn effect describes the tendency for IQ scores to rise across
tests at a rate of approximately three points per decade (Flynn, 2009;
Pietschnig & Voracek, 2015). The causes of this effect are unknown,
although many factors have been postulated, including reduced
inbreeding, better education, improved nutrition, lower parasite prevalence, and slower life history speed (see Williams, 2013, and Pietschnig
& Voracek, 2015, for reviews of possible causes). To better understand
the effect's etiology, it is helpful to understand the profile of tests on
which it is most pronounced (e.g., Lynn, 1990; Rushton, 1999;
Pietschnig & Voracek, 2015; see also Rushton & Jensen, 2005). Previous
research has documented that the Flynn effect is more prominent on
tests with lower g loadings, i.e., that correlate less strongly with the
set of other tests (te Nijenhuis & van der Flier, 2013); stronger on
fluid, as opposed to crystallized, tests (e.g., Pietschnig & Voracek,
2015); and stronger on tests of mathematical achievement, as opposed
to verbal achievement (e.g., Herrnstein & Murray, 1994; Rindermann &
Thompson, 2013; Wai & Putallaz, 2011). The present study investigates
one further proposed determinant of Flynn effect strength — namely abstract thinking ability: the capacity to infer general properties when solving problems, and to ignore irrelevant concrete facts (e.g., Flynn, 2009;
Jensen, 1998; Pinker, 2011; Terman, 1921, 1922; see Flynn, 1998 for
criticism).1 Some tests, which rely heavily on this ability, such as the …
Intelligence 57 (2016) xxx–xxx
⁎ Corresponding author.
E-mail address: armstrong357@wustl.edu (E.L. Armstrong).
1 Jensen (1998), for instance, defines a similar concept: "In almost every subject in the school curriculum, pupils learn to discover the general rule that applies to a highly specific situation and to apply a general rule in a wide variety of different contexts. The use of symbols to stand for things in reading (and musical notation); basic arithmetic operations; consistencies in spelling, grammar, and punctuation; regularities and generalizations in history; categorizing, serializing, enumerating, and inferring in science, and so on. Learning to do these things, which are all part of the school curriculum, instills cognitive habits that can be called decontextualization of cognitive skills" (p. 325). This definition, however, does not exhaust abstract thinking as we define it: we include taking false or unknown hypotheticals seriously, and having absorbed, and being able to apply, scientific concepts, in our definition (ref. Terman, 1956). Our definition includes one analogical or "decontextualization"-based item, one "scientific spectacles" item that requires answering based on abstract rather than concrete similarities, and one syllogism with a bizarre premise that requires taking false hypotheticals seriously. The second and third are from Flynn (2009) and Luria (1976), respectively; the first is from Flynn (2012) summarizing Fox and Mitchum (2013). These different aspects of abstract reasoning are theoretically separable, but the high correlation between ratings suggests they were related in this dataset.
http://dx.doi.org/10.1016/j.intell.2016.02.009