We compared three different bibliometric evaluation approaches: two citation-based approaches and one based on manual classification of publishing channels into quality levels. Publication data for two universities were used, and we worked with two levels of analysis: article and department. At the article level, we investigated the predictive power of field normalized citation rates and field normalized journal impact with respect to journal level. The results for the article level show that evaluation of journals based on citation impact correlates rather well with manual classification of journals into quality levels. However, the prediction from field normalized citation rates to journal level was only marginally better than random guessing. At the department level, we studied three different indicators in the context of research fund allocation within universities and the extent to which the three indicators produce different distributions of research funds. It turned out that the three distributions of relative indicator values were very similar, which in turn implies that the corresponding distributions of hypothetical research funds would be very similar.
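As a rough illustration of the field normalization mentioned above, the sketch below computes an item-oriented field-normalized citation score: each article's citation count divided by the mean citation count of the articles in its field. This is a hedged, minimal example of the general technique, not the paper's exact indicator; the function name, the tuple layout, and the toy data are assumptions for illustration.

```python
# Hedged sketch: a simple field-normalized citation score.
# An article's raw citation count is divided by the mean citation count
# of all articles in its field, so scores are comparable across fields
# with different citation cultures. Not the authors' exact method.
from collections import defaultdict

def field_normalized_scores(articles):
    """articles: iterable of (article_id, field, citations) tuples.
    Returns a dict mapping article_id -> normalized citation score."""
    totals = defaultdict(lambda: [0, 0])  # field -> [citation sum, article count]
    for _, field, cites in articles:
        totals[field][0] += cites
        totals[field][1] += 1
    means = {f: s / n for f, (s, n) in totals.items()}
    return {aid: (cites / means[field] if means[field] else 0.0)
            for aid, field, cites in articles}

# Toy example: two fields with different average citation rates.
data = [("a1", "medicine", 20), ("a2", "medicine", 10),
        ("a3", "math", 2), ("a4", "math", 4)]
scores = field_normalized_scores(data)
```

A score above 1.0 means the article is cited more than the average article in its field; in the toy data, "a1" scores 20/15 even though "a4" would need only 4 citations in the lower-cited math field to exceed its field average.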