I studied the distribution of changes in journal impact factors (JIF) between 1998 and 2007 according to an empirical beta
law with two exponents. Changes in JIFs (CJIF) were calculated as the quotient obtained by dividing the JIF for a given year
by the JIF for the preceding year. The CJIFs showed good fit to a beta function with two exponents. In addition, I studied
the distribution of the changes in segments of the CJIF rank order. The distributions, which were similar from year to year,
could be fitted to a Lorentzian function. The methods used here can be useful for understanding changes in JIFs.
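The CJIF calculation and rank-order fit described above can be sketched as follows. This is a minimal illustration with synthetic data; the exact functional form of the two-exponent beta law is an assumption (one common form, `f(r) = K(N+1-r)^b / r^a`), since the abstract does not spell it out.

```python
import numpy as np
from scipy.optimize import curve_fit

def cjif(jif_current, jif_previous):
    """Change in JIF: quotient of one year's JIF over the preceding year's."""
    return jif_current / jif_previous

def beta_two_exponents(r, K, a, b, N):
    """One common rank-order beta law with two exponents (assumed form):
    f(r) = K * (N + 1 - r)**b / r**a, for ranks r = 1..N."""
    return K * (N + 1 - r) ** b / r ** a

# Synthetic illustration only: generate ranked values from the assumed law
# with mild multiplicative noise, then recover the exponents by fitting.
rng = np.random.default_rng(0)
N = 200
ranks = np.arange(1, N + 1, dtype=float)
true_values = beta_two_exponents(ranks, 1.5, 0.12, 0.35, N)
values = true_values * (1 + 0.02 * rng.standard_normal(N))

popt, _ = curve_fit(lambda r, K, a, b: beta_two_exponents(r, K, a, b, N),
                    ranks, values, p0=(1.0, 0.1, 0.3))
K_hat, a_hat, b_hat = popt
```

With low noise and a reasonable initial guess, the fitted exponents land close to the values used to generate the data.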
I review and discuss instances in which 19 future Nobel Laureates encountered resistance on the part of the scientific community
towards their discoveries, and instances in which 24 future Nobel Laureates encountered resistance on the part of scientific
journal editors or referees to manuscripts that dealt with discoveries that later would earn them the Nobel Prize.
Vanclay's interesting and provocative paper on journal impact factors (in press) raises some points worth further reflection. In this short commentary I focus on those that I consider most relevant, because they suggest ideas that could be addressed by researchers interested in this topic.
We studied the influence of the number of citations, the number of citable items and the number of journal self-citations
on increases in the impact factor (IF) in 123 journals from the Journal Citation Reports database in which this scientometric
indicator had decreased during the previous four years. In general, we did not find evidence that abuse of journal self-citations
contributed to the increase in the impact factor after several years of decreases.
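The quantities examined above follow from the standard definition of the 2-year impact factor: citations received in a given year to items published in the two preceding years, divided by the number of citable items in those years. A minimal sketch, with purely hypothetical numbers:

```python
def impact_factor(cites_to_prev2, citable_items_prev2):
    """Classical 2-year JIF: citations received in year Y to items published
    in years Y-1 and Y-2, divided by the citable items from those years."""
    return cites_to_prev2 / citable_items_prev2

def self_citation_share(self_cites, total_cites):
    """Fraction of the IF numerator contributed by journal self-citations."""
    return self_cites / total_cites if total_cites else 0.0

# Hypothetical figures for illustration only.
jif = impact_factor(cites_to_prev2=450, citable_items_prev2=180)   # 2.5
share = self_citation_share(self_cites=45, total_cites=450)        # 0.1
```

Comparing the self-citation share across the years before and after an IF increase is one simple way to check whether self-citations drive the change.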
In this article I study characteristics of the journal impact factor (JIF) computed using a 5-year citation window as compared with the classical JIF computed using a 2-year citation window. Since 2007 ISI-Thomson Reuters has published the new 5-year impact factor in the JCR database. I studied changes in the distribution of JIFs when the citation window was enlarged. The distributions of journals according to their 5-year JIFs were very similar in all years studied, and were also similar to the distribution according to the 2-year JIFs. In about 72% of journals, the JIF increased when the longer citation window was used. Plots of 5-year JIFs against rank closely followed a beta function with two exponents. Thus, the 5-year JIF seems to behave very similarly to the 2-year JIF. The results also suggest that gains in JIF with the longer citation window tend to be distributed similarly in all years. Changes in these gains also tend to be distributed similarly from one year to the next.
We investigated the distribution of citations included in documents labeled by the ISI as “editorial material” and how they
contribute to the impact factor of journals in which the citing items were published. We studied all documents classified
by the ISI as “editorial material” in the Science Citation Index between 1999 and 2004 (277,231 records corresponding to editorial
material published in 6141 journals). The results show that most journals published only a few documents that included 1 or
2 citations that contributed to the impact factor, although a few journals published many such documents. The data suggest
that manipulation of the impact factor by publishing large amounts of editorial material with many citations to the journal
itself is not a widely used strategy to increase the impact factor.
Authors: Juan Miguel Campanario and María Angeles Coslado
First (leading) digits in data sets of natural and social data often follow a distribution called Benford's law. We studied the number of articles published, the citations received and the impact factors of all journals indexed in the Science Citation Index from 1998 to 2007, and tested their compliance with Benford's law. Citation data followed Benford's law remarkably well in all years studied. However, for the data on the numbers of articles, the differences between the values predicted by Benford's law and the observed values were always statistically significant. This was also the case for most impact factor data.
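A compliance test like the one described can be sketched with a chi-square statistic over first-digit frequencies. The sample below is synthetic (a geometric growth series, which is known to obey Benford's law closely); the abstract does not specify the exact test used.

```python
import math
from collections import Counter

def first_digit(x):
    """Leading (most significant) digit of a positive number."""
    s = f"{abs(x):.15e}"   # scientific notation puts the leading digit first
    return int(s[0])

def benford_chi_square(values):
    """Chi-square statistic comparing observed first-digit counts with
    Benford's law P(d) = log10(1 + 1/d), d = 1..9."""
    digits = [first_digit(v) for v in values if v > 0]
    n = len(digits)
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

# Synthetic illustration: a geometric series follows Benford's law closely.
sample = [1.05 ** k for k in range(1, 1000)]
stat = benford_chi_square(sample)
```

Under Benford's law roughly 30.1% of first digits are 1; the statistic has 8 degrees of freedom, so large values indicate significant departure from the law.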
Authors: Antonia Andrade, Raúl González-Jonte and Juan Campanario
The aim of this study was to ascertain the possible effect of journal self-citations on the increase in the impact factors
of journals in which this scientometric indicator rose by a factor of at least four in only a few years. Forty-three journals
were selected from the Thomson Reuters (formerly ISI) Journal Citation Reports as meeting the above criterion. Eight journals
in which the absolute number of citations was lower than 20 in at least two years were excluded, so the final sample consisted
of 35 journals. We found no proof of widespread manipulation of the impact factor through the massive use of journal self-citations.
Authors: Juan Miguel Campanario, Lidia González and Cristina Rodríguez
We present a new approach to study the structure of the impact factor of academic journals. This new method is based on calculation
of the fraction of citations that contribute to the impact factor of a given journal that come from citing documents in which
at least one of the authors is a member of the cited journal's editorial board. We studied the structure of three annual impact
factors of 54 journals included in the groups “Education and Educational Research” and “Psychology, Educational” of the Social
Sciences Citation Index. The percentage of citations from papers authored by editorial board members ranged from 0% to 61%.
In 12 journals, for at least one of the years analysed, 50% or more of the citations that contributed to the impact factor
were from documents published in the journal itself. Given that editorial board members are considered to be among the most
prestigious scientists, we suggest that citations from papers authored by editorial board members should be given particular attention.
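The fraction described above (IF-contributing citations coming from documents with at least one editorial board member among the authors) can be sketched as follows. The data layout and names are hypothetical; the abstract does not describe its data structures.

```python
def board_citation_fraction(citing_docs, board_members):
    """Fraction of IF-contributing citations that come from citing documents
    with at least one author on the cited journal's editorial board.

    `citing_docs`: list of (author_set, n_citations_to_journal) pairs.
    `board_members`: iterable of editorial board member names.
    """
    board = set(board_members)
    total = sum(n for _, n in citing_docs)
    from_board = sum(n for authors, n in citing_docs if authors & board)
    return from_board / total if total else 0.0

# Hypothetical example: three citing documents, one authored by a board member.
docs = [({"A. Smith", "B. Jones"}, 4),
        ({"C. Lee"}, 2),
        ({"D. Kim", "E. Board"}, 3)]
frac = board_citation_fraction(docs, ["E. Board"])   # 3 of 9 citations
```

In practice, author name disambiguation against the board list is the hard part; the sketch assumes names match exactly.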
Authors: Juan Miguel Campanario, Jesús Carretero, Vera Marangon, Antonio Molina and Germán Ros
We studied the effect on journal impact factors (JIF) of citations from documents labeled as articles and reviews (usually peer reviewed) versus citations coming from other documents. In addition, we studied the effect on JIF of the number of citing records, which is usually different from the number of citations. We selected a set of 700 journals indexed in the SCI section of the JCR that receive a low number of citations. The reason for this choice is that in such journals, individual citations may have a greater impact on the JIF than in more highly cited journals. After excluding some journals for various reasons, our sample consisted of 674 journals. We obtained data on citations that contributed to the JIF for the years 1998–2006. In general, we found that most journals obtained the citations that contribute to the impact factor from documents labeled as articles and reviews. In addition, in most journals the ratio between citations that contributed to the impact factor and citing records was greater than 80% in all years. Thus, in general, we did not find evidence that citations contributing to the impact factor depended on non-peer-reviewed documents or on only a few citing records.