T.N. van Leeuwen, H.F. Moed and J. Reedijk
University of Leiden, The Netherlands

Received 18 June 1998
Revised 26 July 1999

Abstract. In this paper, empirical data are analysed to show some of the problems involved in the use of the Institute for Scientific Information’s (ISI) impact factors (IFs). Building on earlier work by the authors and elaborating on some new topics, the paper shows that IFs as defined by ISI have shortcomings which make them inappropriate for the purposes for which people use them: researchers for their publication strategy, policy makers (at different levels) to evaluate research performance, and librarians to evaluate their journal collections. Whereas earlier papers focused on problems involved with the definitions of the constituent elements of the classical IF and the resulting errors, this paper focuses on problems related to other characteristics of scientific journals: in particular, the influence of the distribution of papers among document types in a journal, the effects of splitting journals or changing their names, the measurement of (un)citedness of papers in a journal, and the chosen length of the citation window within the definition of the classical IF. This raises the fundamental question of whether an indicator based on only a one- to two-year citation window is sufficiently valid to be of any use in analyses of journal and research performance.

1. Introduction

Impact factors (IFs) of scientific journals are frequently used in evaluations of journals and scientists [1]. These measures are calculated by the Institute for Scientific Information (ISI) and printed in the Journal Citation Reports (JCR). The IF of a journal gives the average number of citations received by papers in that journal one or two years after their publication date.
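The ratio underlying the classical IF can be made concrete with a small numerical sketch. All journal counts below are invented for illustration (they are not real JCR data), and the ‘corrected’ variant shown here simply matches the numerator’s scope to the denominator’s, which is only one of several possible corrections:

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Classical IF: citations received in year Y by items published in
    years Y-1 and Y-2, divided by the number of 'citable' items
    (articles, reviews) published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal publishing articles plus letters/editorials.
cites_to_articles = 3000  # citations to items ISI counts as 'citable'
cites_to_letters = 1200   # citations to letters and editorials
articles = 500            # 'citable' items in the two-year window
letters = 700             # items ISI does not count as citable

# JCR-style IF: the numerator counts ALL citations to the journal,
# while the denominator counts only the 'citable' items.
jcr_if = impact_factor(cites_to_articles + cites_to_letters, articles)

# One consistent alternative: restrict the numerator to citations
# received by the 'citable' items themselves.
corrected_if = impact_factor(cites_to_articles, articles)

print(f"JCR-style IF: {jcr_if:.2f}")      # 8.40
print(f"Corrected IF: {corrected_if:.2f}")  # 6.00
```

Because the heavily cited but ‘uncitable’ letters and editorials inflate only the numerator, the JCR-style figure overstates the consistent ratio, which is the mechanism behind the Lancet example discussed below.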
Therefore, the IF represents a ratio: in the numerator, the number of citations to a journal; in the denominator, the number of ‘citable’ items published in that journal [2]. An earlier paper [3] focused on errors in the IFs as printed in the JCR caused by an inappropriate definition of ‘citable’ items. It was found that document types not defined by ISI as citable (particularly letters and editorials) are actually cited. These uncitable items are not included in the denominator of a journal’s IF; on the other hand, the citations to these uncitable items do contribute to the IF’s numerator. For instance, evidence was obtained that the ‘correct’ IF of the journal Lancet in 1992 would be 43% lower than the IF listed in the JCR.

In two articles dealing with the journals Angewandte Chemie and the Journal of the American Chemical Society [4, 5], it was shown that the IFs printed in the JCR may also be affected by errors due to problems with identifying a specific journal in the cited reference lists of papers included in the Science Citation Index (SCI). In fact, in agreement with findings by Braun and Glänzel [6], evidence was obtained that the IF of Angewandte Chemie listed in the JCR for 1994 is approximately 50% too high. This error is caused by ‘double counting’ of citations, due to the existence of different language versions of that journal.

This paper focuses on the following five issues: (1) the accuracy of the IF of a selected set of journals in the field of inorganic molecular chemistry; (2) the influence of the distribution of papers among document types in a journal; (3) the effects of splitting journals or changing their names; (4) the measurement of (un)citedness of papers in a journal; and (5) the chosen length of the citation window within the definition of the classical IF.

Critical comments on Institute for Scientific Information impact factors: a sample of inorganic molecular chemistry journals. Journal of Information Science, 25 (6) 1999, pp. 489–498.

Correspondence to: T.N. van Leeuwen, Centre for Science and Technology Studies (CWTS), University of Leiden, Wassenaarsweg 52, Postbus 9555, 2300 RB Leiden, The Netherlands. Tel: +31 71 527 3909/3928. Fax: +31 71 527 3911. E-mail: leeuwen@CWTS.LeidenUniv.nl