At CWTS, we always use multiple indicators in our performance evaluation studies. Some indicators focus on the productivity dimension of research performance, while others focus on the impact dimension. Also, some indicators are normalized (either at the level of fields or at the level of journals), while others are not. We use the term ‘crown indicator’ to refer to what we generally consider to be our most informative indicator. However, we emphasize that this ‘crown indicator’ is not intended to be used in isolation. The indicator should always be used in combination with other indicators.
The difference between the normalized mean citation rate indicator and the CPP/FCSm indicator is that the former normalizes only for the field and the year in which a publication was published, while the latter also normalizes for the publication's document type. In this paper, we do not consider normalization for a publication's document type, so for our present purpose the difference between the two indicators is not important.
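Both indicators just mentioned divide an average citation rate by an average expected citation rate. Their contrast with the MNCS indicator central to this paper can be made concrete with a small computation. The sketch below (in Python, with hypothetical citation counts c_i and precomputed expected values e_i) shows the two calculation rules side by side.

```python
from statistics import mean

def mncs(citations, expected):
    """Mean normalized citation score: the average of per-publication
    ratios c_i / e_i (an 'average of ratios')."""
    return mean(c / e for c, e in zip(citations, expected))

def cpp_fcsm(citations, expected):
    """CPP/FCSm-style score: total citations divided by total expected
    citations (a 'ratio of averages')."""
    return sum(citations) / sum(expected)

# Hypothetical example: three publications.
citations = [10, 0, 2]
expected = [5.0, 1.0, 0.1]
print(mncs(citations, expected))      # (2.0 + 0.0 + 20.0) / 3 = 7.33
print(cpp_fcsm(citations, expected))  # 12 / 6.1 = 1.97
```

The example shows why the two indicators can diverge sharply: in the average of ratios, a publication with a very low expected value contributes a very large term, while in the ratio of averages its contribution is bounded by its citation count.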
In the case of normalization at the level of journals, e_i in (2) equals the average number of citations of all publications published in the same journal and in the same year as publication i. We do not recommend using (2) for normalization at the journal level: publications in journals with a very low average number of citations may then receive excessive weight in the calculation of the indicator and may cause the indicator to become unstable.
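To illustrate the instability, consider a hypothetical journal whose publications receive on average only 0.05 citations; a single modestly cited publication in that journal then dominates the indicator. The numbers below are invented for illustration.

```python
from statistics import mean

# Journal-level normalization: e_i is the average number of citations of
# publications in the same journal and year. With a hypothetical journal
# average of 0.05, a publication with only 2 citations gets a normalized
# score of 40 and dominates the mean.
ratios = [10 / 5.0, 3 / 2.5, 2 / 0.05]  # c_i / e_i for three publications
print(mean(ratios))  # 14.4, driven almost entirely by the last publication
```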
However, as we will see later on in this paper, there are exceptional publications that receive large numbers of citations already in the year in which they were published.
We did not retrieve publications of the document type letter. Like recent publications, letters typically have no or almost no citations. In the calculation of the MNCS indicator, letters therefore cause the same difficulties as recent publications (see Sect. 3). A solution could be to modify the MNCS indicator in such a way that letters have a lower weight than other publications. (This is essentially what happens in the CPP/FCSm indicator.) In our analysis, however, we do not want to make any modifications to the MNCS indicator, and we therefore leave out letters. The document type note was used in the Web of Science database until 1996. From then on, most documents that would formerly have been classified as notes were classified as ordinary articles. In our analysis, we only have notes in the research groups data set.
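Returning to the possible modification mentioned above, in which letters receive a lower weight than other publications: one form it could take is a weighted variant of the MNCS indicator. The sketch below is purely hypothetical (the weight values are invented) and is not an indicator used in this paper, which instead leaves letters out.

```python
def weighted_mncs(citations, expected, weights):
    """MNCS variant in which each publication carries a weight w_i,
    e.g. w_i < 1 for letters. A hypothetical modification; the paper
    itself excludes letters instead of downweighting them."""
    numerator = sum(w * c / e for w, c, e in zip(weights, citations, expected))
    return numerator / sum(weights)

# Hypothetical example: the third publication is a letter with weight 0.25.
print(weighted_mncs([10, 4, 2], [5.0, 2.0, 0.2], [1.0, 1.0, 0.25]))
# 2.89; the unweighted MNCS of the same publications would be 4.67
```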
In the case of the research groups data set, this for example means that we count citations until the end of 2000. Of course, we could also count all citations until today. However, we want to replicate as closely as possible the original study in which the data set was used (VSNU 2002). In this study, citations were counted until the end of 2000. More recent citation data was not available at the time of the study. In bibliometric performance evaluation studies, one almost always has to work with relatively short citation windows.
Recall from Sect. 2 that the expected number of citations of a publication equals the average number of citations of all publications published in the same field and in the same year as the publication of interest. In our calculations, fields were defined by Web of Science subject categories. When a publication belongs to multiple subject categories, the expected number of citations of the publication was calculated using the approach discussed by Waltman et al. (2011, Sect. 6).
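A minimal sketch of one way to combine the category averages, assuming a fractional approach in which a publication belonging to m categories is assigned with weight 1/m to each of them; under that assumption the effective expected value is the harmonic mean of the per-category averages. This is only one plausible reading; for the exact procedure, see Waltman et al. (2011, Sect. 6).

```python
def multi_category_expected(category_means):
    """Effective expected number of citations for a publication in several
    subject categories, assuming fractional assignment to each category;
    under that assumption it equals the harmonic mean of the averages."""
    m = len(category_means)
    return m / sum(1.0 / e for e in category_means)

# Hypothetical publication in two categories with averages 4.0 and 1.0:
print(multi_category_expected([4.0, 1.0]))  # 1.6
```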
Notice in Table 5 that the publication with the highest normalized citation score has just five citations. The high normalized citation score of this publication is due to its low expected number of citations. This illustrates that in the calculation of the MNCS2 indicator a recent publication with a relatively low number of citations can already have quite a large effect.
The extremely high number of citations of this recently published article was also discussed by Dimitrov et al. (2010), who pointed out the enormous effect of this single article on the impact factor of Acta Crystallographica Section A, the journal in which the article was published.
In the case of journals, the CPP/FCSm indicator is also referred to as the JFIS indicator (e.g., Van Leeuwen and Moed 2002).
Comparing the two scatter plots in Fig. 4, it can be seen that the journal with the highest CPP/FCSm score (17.68) has extremely different MNCS1 and MNCS2 scores (32.28 and 2.14, respectively). The MNCS1 score of the journal is much higher than the CPP/FCSm score, while the MNCS2 score is much lower. It turns out that in 2008 this journal, Acta Crystallographica Section A, published an article that by the end of 2008 had already been cited 3489 times. This is the same article mentioned earlier for the University of Göttingen. The article has much more weight in the MNCS1 indicator than in the CPP/FCSm indicator, and in the MNCS2 indicator it is not taken into consideration at all. This explains the extremely different CPP/FCSm, MNCS1, and MNCS2 scores of the journal.
Bornmann, L. (2010). Towards an ideal method of measuring research performance: Some comments to the Opthof and Leydesdorff (2010) paper. Journal of Informetrics, 4(3), 441–443.
Bornmann, L., & Mutz, R. (2011). Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages in field-normalization. Journal of Informetrics, 5(1), 228–230.
Campbell, D., Archambault, E., & Côté, G. (2008). Benchmarking of Canadian Genomics—1996-2007. Retrieved Nov 5, 2010, from http://www.science-metrix.com/pdf/SM_Benchmarking_Genomics_Canada.pdf.
Colliander, C., & Ahlgren, P. (2011). The effects and their stability of field normalization baseline on relative performance with respect to citation impact: A case study of 20 natural science departments. Journal of Informetrics, 5(1), 101–113.
De Bruin, R. E., Kint, A., Luwel, M., & Moed, H. F. (1993). A study of research evaluation and planning: The University of Ghent. Research Evaluation, 3(1), 25–41.
Dimitrov, J. D., Kaveri, S. V., & Bayry, J. (2010). Metrics: Journal's impact factor skewed by a single paper. Nature, 466(7303), 179.
Egghe, L., & Rousseau, R. (1996). Averaging and globalising quotients of informetric and scientometric data. Journal of Information Science, 22(3), 165–170.
Gingras, Y., & Larivière, V. (2011). There are neither “king” nor “crown” in scientometrics: Comments on a supposed “alternative” method of normalization. Journal of Informetrics, 5(1), 226–227.
Glänzel, W., Thijs, B., Schubert, A., & Debackere, K. (2009). Subfield-specific normalized relative indicators and a new generation of relational charts: Methodological foundations illustrated on the assessment of institutional research performance. Scientometrics, 78(1), 165–188.
Leydesdorff, L., & Opthof, T. (2010). Normalization at the field level: Fractional counting of citations. Journal of Informetrics, 4(4), 644–646.
Leydesdorff, L., & Opthof, T. (2011). Remaining problems with the “new crown indicator” (MNCS) of the CWTS. Journal of Informetrics, 5(1), 224–225.
Moed, H. F. (2010). CWTS crown indicator measures citation impact of a research group's publication oeuvre. Journal of Informetrics, 4(3), 436–438.
Moed, H. F., De Bruin, R. E., & Van Leeuwen, T. N. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics, 33(3), 381–422.
Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance. Journal of Informetrics, 4(3), 423–430.
Rehn, C., & Kronman, U. (2008). Bibliometric handbook for Karolinska Institutet. Retrieved Nov 5, 2010, from http://ki.se/content/1/c6/01/79/31/bibliometric_handbook_karolinska_institutet_v_1.05.pdf.
Sandström, U. (2009). Bibliometric evaluation of research programs: A study of scientific quality. Retrieved Nov 5, 2010, from http://www.forskningspolitik.se/DataFile.asp?FileID=182.
Schubert, A., & Braun, T. (1986). Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics, 9(5–6), 281–291.
SCImago Research Group. (2009). SCImago Institutions Rankings (SIR): 2009 world report. Retrieved Nov 5, 2010, from http://www.scimagoir.com/pdf/sir_2009_world_report.pdf.
Van Leeuwen, T. N., & Moed, H. F. (2002). Development and application of journal impact measures in the Dutch science system. Scientometrics, 53(2), 249–266.
Van Raan, A. F. J. (2005). Measuring science: Capita selecta of current main issues. In H. F. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research (pp. 19–50). New York: Springer.
Van Raan, A. F. J., Van Leeuwen, T. N., Visser, M. S., Van Eck, N. J., & Waltman, L. (2010). Rivals for the crown: Reply to Opthof and Leydesdorff. Journal of Informetrics, 4(3), 431–435.
Van Veller, M. G. P., Gerritsma, W., Van der Togt, P. L., Leon, C. D., & Van Zeist, C. M. (2009). Bibliometric analyses on repository contents for the evaluation of research at Wageningen UR. In A. Katsirikou & C. H. Skiadas (Eds.), Qualitative and quantitative methods in libraries: Theory and applications (pp. 19–26). Singapore: World Scientific.
Vinkler, P. (1986). Evaluation of some methods for the relative assessment of scientific publications. Scientometrics, 10(3–4), 157–177.
Vinkler, P. (1996). Model for quantitative selection of relative scientometric impact indicators. Scientometrics, 36(2), 223–236.
Waltman, L., Van Eck, N. J., Van Leeuwen, T. N., Visser, M. S., & Van Raan, A. F. J. (2011). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5(1), 37–47.