Authors: Michel Zitt, Suzy Ramanana-Rahary, and Elise Bassecoulard
Summary: As citation practices strongly depend on fields, field normalisation is recognised as necessary for the fair comparison of figures in bibliometrics and evaluation studies. However, fields may be defined at various levels, from small research areas to broad academic disciplines, and normalisation values are therefore expected to vary. The aim of this project was to test the stability of citation ratings of articles as the level of observation - and hence the basis of normalisation - changes. A conventional classification of science based on ISI subject categories and their aggregates at various scales was used, at five levels: all science, large academic discipline, sub-discipline, speciality and journal. Among various normalisation methods, we selected a simple ranking method (quantiles), based on the citation score of the article within each particular aggregate (journal, speciality, etc.) it belonged to at each level. The study was conducted on articles in the full SCI range, for publication year 1998, with a four-year citation window. Stability is measured in three ways: overall comparison of article rankings; individual trajectories of articles; and survival of the top-cited class across levels. Overall rank correlations on the observed empirical structure are benchmarked against two fictitious sets that keep the same embedded structure of articles but reassign citation scores either in a totally ordered or in a totally random distribution. These sets act respectively as a 'worst case' and a 'best case' for the stability of citation ratings.
The results show that: (a) the average citation rankings of articles change substantially with the level of observation; (b) observation at the journal level is very particular, and its results differ greatly from all other levels of observation in all test circumstances; (c) the lack of cross-scale stability is confirmed when looking at the distribution of individual trajectories of articles across the levels; (d) when considering the top-cited fractions, a standard measure of excellence, the content of the 'top-cited' set is found to be completely dependent on the level of observation. The instability of impact measures should not be interpreted as a lack of robustness but rather as the co-existence of various perspectives, each with its own form of legitimacy. A follow-up study will focus on the micro levels of observation and will be based on a structure built around bibliometric groupings rather than conventional groupings based on ISI subject categories.
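The quantile method described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the function computes the fraction of a cohort cited strictly less than a given article, and the toy citation counts for the journal-level and discipline-level cohorts are invented for the example, to show how the same article can receive different quantile ranks depending on the aggregate used as the basis of normalisation.

```python
from bisect import bisect_left

def quantile_rank(citations, cohort):
    """Quantile rank of an article within a cohort: the fraction of
    cohort articles with a citation count strictly below `citations`."""
    ranked = sorted(cohort)
    return bisect_left(ranked, citations) / len(ranked)

# Hypothetical data: one article with 10 citations, and the citation
# counts of its cohorts at two different levels of observation.
article = 10
journal_cohort = [2, 3, 10, 10, 25]                      # same journal
discipline_cohort = [0, 1, 2, 4, 5, 8, 10, 15, 30, 60]   # same discipline

print(quantile_rank(article, journal_cohort))     # 0.4
print(quantile_rank(article, discipline_cohort))  # 0.6
```

The same article lands in different quantiles at the two levels (0.4 within its journal, 0.6 within its discipline), which is the cross-scale instability the study measures.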
Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2005). Relativity of citation performance and excellence measures: From cross-field to cross-scale effects of field-normalisation. Scientometrics, 63(2), 373-401. doi:10.1007/s11192-005-0218-y