Dynamic and Evolutionary Updates of Classificatory Schemes in Scientific Journal Structures
Loet Leydesdorff
Science & Technology Dynamics, Amsterdam School of Communications Research (ASCoR),
Kloveniersburgwal 48, 1012 CX Amsterdam. E-mail: loet@leydesdorff.net; www.leydesdorff.net
Can the inclusion of new journals in the Science Citation Index be used to indicate structural change in the database, and how can this change be compared with reorganizations of relations among previously included journals? Change in the number of journals (n) is distinguished from change in the number of journal categories (m). Although the number of journals can be considered a given at each moment in time, the number of journal categories is based on a reconstruction that is time-stamped ex post. This reflexive reconstruction needs to be updated when new information becomes available in a subsequent year. Implications of this shift towards an evolutionary perspective are specified.
Introduction
Because the sciences develop dynamically, one expects to find change in the trend lines of scientometric indicators. For example, scientific productivity changes over time, and it is also expected to differ among research groups. The variation among research groups at each moment in time may interact with the processes of change over time. A policy analyst, therefore, may wish to ask "what do the results teach us?" Should policies nurture the "weak" units or rather "pick the winners" (Irvine & Martin, 1984)? Does a high score on an indicator predict further growth, or rather relative stability or even decline? In other words: what is the strategic value of measurement results based on scientometric indicators? How do the indicated developments relate to a baseline for the comparison?
The question of how to construct a baseline for the comparison (Studer & Chubin, 1980) prevailed in scientometric studies during the 1980s and 1990s without hitherto being solved. Two important methodological proposals were made right at the beginning of the scientometric research program, notably (a) to make comparisons at each moment only in terms of "like with like" (Martin & Irvine, 1983), and (b) to make comparisons over time only in terms of journal sets which are kept fixed ex ante during the period under study (Narin, 1976).
The heuristics of comparing "like with like" can be considered a definition of research groups in terms of institutional parameters (Collins, 1985), while the definition in terms of journal sets is expected to indicate the intellectual exchange among scholars in a field or specialty (Whitley, 1984). For example, an index of activity can be constructed for the comparison among research groups or other units of analysis (Schubert, Glänzel, & Braun, 1989). The units of analysis of knowledge production can be defined with reference to a relevant environment that one can measure independently, for example, in terms of the journal sets used for the communication (Doreian & Fararo, 1985; Moed, Burger, Frankfort, & Van Raan, 1985; Leydesdorff, 1987).
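One common form of such an activity index relates a unit's share of publications in a field to the world's share of publications in that field. The following sketch illustrates this general idea only; it is not the specific construction of Schubert, Glänzel, and Braun (1989), and the function and variable names are mine:

```python
def activity_index(unit_pubs_in_field, unit_pubs_total,
                   world_pubs_in_field, world_pubs_total):
    """Illustrative activity index: the unit's share of publications
    in a field, relative to the world's share of publications in that
    field.  Values above 1 indicate above-average activity."""
    unit_share = unit_pubs_in_field / unit_pubs_total
    world_share = world_pubs_in_field / world_pubs_total
    return unit_share / world_share

# A research group with 30 of its 100 papers in a field that makes up
# 10% of world output is three times as active in that field as the
# world average:
ai = activity_index(30, 100, 10_000, 100_000)  # 0.30 / 0.10 = 3.0
```

The index is dimensionless, so it allows comparison among units of very different sizes; it presupposes, however, that the field delineation (e.g., a journal set) is itself stable.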
Can the changing positions of institutional units of knowledge production in changing intellectual environments also be measured? Moed et al. (1985) proposed to normalize output performance measurement results in relation to the impact factors of the journals used by the groups themselves. In a similar vein, Schubert et al. (1989) developed the instrument of "expected" versus "observed" citation rates. Further questions can be raised here, both methodologically and theoretically. For example, the skewness of the citation distributions considerably complicates the issue of an appropriate normalization (Bonitz, 1997; Leydesdorff, 1995a).
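A minimal sketch of such an "observed versus expected" ratio, shown as an illustration of the general idea rather than as Schubert et al.'s exact procedure, takes each paper's expected citation count to be the mean citation rate of the journal in which it appeared:

```python
def relative_citation_rate(observed_citations, journal_mean_rates):
    """Ratio of total observed citations to total 'expected' citations,
    where each paper's expectation is the mean citation rate of the
    journal in which it appeared.  Values above 1 indicate citation
    impact above the baseline set by the unit's own journals."""
    return sum(observed_citations) / sum(journal_mean_rates)

# Three papers cited 4, 0, and 8 times, published in journals whose
# mean citation rates are 2.0, 3.0, and 5.0:
rcr = relative_citation_rate([4, 0, 8], [2.0, 3.0, 5.0])  # 12 / 10 = 1.2
```

Because citation distributions are highly skewed, the journal mean is a fragile baseline for this expectation, which is precisely the normalization problem noted above.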
With other colleagues (e.g., Cozzens & Leydesdorff, 1993; Leydesdorff, Cozzens, & Van den Besselaar, 1994), I have been particularly interested in the measurement of structural change at the network level and in how such change potentially redefines the universe (or, in other words, the paradigm) in which practicing scientists assess the relevance and quality of the contributions of their colleagues. In my opinion, the innovative dimension of the development of science and technology cannot be measured using ex ante fixed journal sets or institutional units; the institutions can be expected to aggregate both standardized routines and innovative activities.
Received June 4, 2001; revised January 3, 2002; accepted May 1, 2002
© 2002 Wiley Periodicals, Inc. Published online 7 August 2002 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/asi.10144
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, 53(12):987–994, 2002