Since bibliometric indicators have gained general acceptance in science policy and attained applied relevance in research evaluation, feedback effects on scientists’ behaviour resulting from the use of these indicators in funding decisions have been reported. These adaptation strategies could be called mimicry in science: scientists apply strategies intended to let them comply with bibliometric accountability and to secure funding for their own research.
Purpose—this paper aims to examine the Hawthorne effect in editorial peer review. Design/methodology/approach—discusses the quality evaluation of refereed scholarly journals. Findings—a key finding of this research was that, in the peer review process of one and the same manuscript, reviewers and editors arrive at different judgements. This phenomenon is termed the “Hawthorne effect” because the differing judgements depend on the specific conditions under which the peer review process at the individual journal takes place. Originality/value—provides a discussion on the quality evaluation of scholarly journals.
The Leiden Ranking 2011/2012 provides the proportion of top-10% publications (PPtop-10%) as a new indicator. This indicator allows performance differences between two universities to be tested for statistical significance.
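One common way to test such a difference is a two-proportion z-test. The sketch below is illustrative only: it is not the Leiden Ranking's actual procedure, and the publication counts are invented for the example.

```python
# Hedged sketch: a two-proportion z-test, one possible way to test whether
# two universities' PPtop-10% values differ significantly.
# The counts below are hypothetical, not Leiden Ranking data.
from math import sqrt, erf

def two_proportion_z_test(k1, n1, k2, n2):
    """Return (z, two-sided p-value) for H0: p1 == p2.

    k1, k2: number of top-10% publications; n1, n2: total publications.
    """
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)                       # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error
    z = (p1 - p2) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: university A has 150 of 1000 papers in the top 10%,
# university B has 110 of 1000.
z, p = two_proportion_z_test(150, 1000, 110, 1000)
print(round(z, 2), round(p, 4))
```

With these invented counts the difference (15% vs. 11%) comes out significant at the 5% level; with smaller publication counts the same gap would not.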
In the discussion paper on this issue, Vanclay () uncovers several weaknesses of the JIF on the basis of a thorough literature review and detailed empirical analyses. In this short comment we would like to add the results of two studies to the discussion of the JIF. In these studies we investigated the effect that several versions of one and the same manuscript published by a journal have on its JIF.
Up to the 1960s the prevalent view of science was that it was a step-by-step undertaking in slow, piecemeal progression towards
truth. Thomas Kuhn argued against this view and claimed that science always follows this pattern: after a phase of “normal”
science, a scientific “revolution” occurs. Taking as a case study the transition from the static view of the universe to the
Big Bang theory in cosmology, we appraised Kuhn’s theoretical approach by conducting a historical reconstruction and a citation
analysis. As the results show, the transition in cosmology can be linked to many different persons, publications, and points
in time. The findings indicate that there was not one (short-term) scientific revolution in cosmology but instead a paradigm
shift that progressed as a slow, piecemeal process.
This paper investigates the extent to which staff editors’ evaluations of submitted manuscripts—that is, internal evaluations
carried out before external peer reviewing—are valid. To answer this question we utilized data on the manuscript reviewing
process at the journal Angewandte Chemie International Edition. The results of this study indicate that the initial internal
evaluations are broadly valid, but that external review is nonetheless indispensable for the decision on the publication
worthiness of manuscripts: (1) For the majority of submitted manuscripts,
staff editors are uncertain about publication worthiness; (2) there is a statistically significant proportional difference
in “Rejection” between the editors' initial evaluation and the final editorial decision (after peer review); (3) three-quarters
of the manuscripts that were rated negatively at the initial internal evaluation but accepted for publication after the peer
review had far above-average citation counts.
We investigated committee peer review for awarding long-term fellowships to post-doctoral researchers as practiced by the
Boehringer Ingelheim Fonds (B.I.F.) - a foundation for the promotion of basic research in biomedicine. Assessing the validity
of selection decisions requires a generally accepted criterion for research impact. A widely used approach is to use citation
counts as a proxy for the impact of scientific research. Therefore, a citation analysis for articles published prior to
the applicants' approval or rejection for a B.I.F. fellowship was conducted. Based on our model estimation (negative binomial
regression model), journal articles that had been published by applicants approved for a fellowship award (n = 64) prior to applying for the B.I.F. fellowship award can be expected to have 37% (straight counts of citations) and 49%
(complete counts of citations) more citations than articles that had been published by rejected applicants (n = 333). Furthermore, comparison with international scientific reference values revealed (a) that articles published by successful
and non-successful applicants are cited considerably more often than the “average” publication and (b) that excellent research
performance can be expected more from successful than from non-successful applicants. The findings confirm that the foundation is
not only achieving its goal of selecting the best junior scientists for fellowship awards, but also successfully attracting
highly talented young scientists to apply for B.I.F. fellowships.
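In a negative binomial regression of citation counts, a percentage difference such as "37% more citations" corresponds to the exponentiated coefficient (rate ratio) of the group dummy. The sketch below only illustrates this interpretation; the coefficients are back-derived from the reported rate ratios, and the baseline of 10 citations is invented.

```python
# Hedged sketch of how a negative binomial regression coefficient maps onto
# the "37% more citations" statement. Only the rate ratios (1.37 straight
# counts, 1.49 complete counts) come from the study; the baseline expected
# citation count of 10 is a made-up illustration.
from math import exp, log

b0 = log(10.0)   # hypothetical intercept: rejected applicants' expected citations
b1 = log(1.37)   # coefficient implied by a 37% higher citation rate (straight counts)

def expected_citations(approved: int) -> float:
    """E[citations] = exp(b0 + b1 * approved), approved in {0, 1}."""
    return exp(b0 + b1 * approved)

rejected = expected_citations(0)   # baseline group
approved = expected_citations(1)   # approved applicants
print(round(approved / rejected, 2))  # rate ratio exp(b1) = 1.37
```

The same logic gives exp(b1) = 1.49 for complete counts of citations; the percentage statements in the abstract are exactly these rate ratios minus one.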
Excellence in Research for Australia (ERA) is an attempt by the Australian Research Council to rate Australian universities on a 5-point scale within 180 Fields of Research using metrics and peer evaluation by an evaluation committee. Some of the bibliometric data contributing to this ranking suffer from statistical issues associated with skewed distributions. Other data are standardised year by year, placing undue emphasis on the most recent publications, which may not yet have reliable citation patterns. The bibliometric data offered to the evaluation committees are extensive, but lack effective syntheses such as the h-index and its variants. The indirect H2 index is objective, can be computed automatically and efficiently, and is resistant to manipulation; it is a good indicator of impact that could assist the ERA evaluation committees and similar evaluations internationally.
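As a point of reference for the synthesis indicators mentioned above, the basic h-index can be computed in a few lines (this is the standard Hirsch definition, not the indirect H2 index, whose construction is not spelled out here):

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # papers ordered by citation count
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this paper still has at least `rank` citations
        else:
            break
    return h

# A researcher with papers cited 10, 8, 5, 4, and 3 times has h = 4:
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Variants such as the H2 family apply the same thresholding idea at a second level (e.g. over per-author h-indices rather than per-paper citation counts), which is what makes them resistant to manipulation by a few highly cited papers.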