This paper investigates the extent to which staff editors’ evaluations of submitted manuscripts—that is, internal evaluations
carried out before external peer reviewing—are valid. To answer this question we utilized data on the manuscript reviewing
process at the journal Angewandte Chemie International Edition. The results of this study indicate that the initial internal evaluations are valid. Nevertheless, external review appears
indispensable for deciding on the publication worthiness of manuscripts: (1) For the majority of submitted manuscripts,
staff editors are uncertain about publication worthiness; (2) there is a statistically significant difference in the proportion
of “Rejection” decisions between the editors' initial evaluation and the final editorial decision (after peer review); (3) three-quarters
of the manuscripts that were rated negatively at the initial internal evaluation but accepted for publication after the peer
review had far above-average citation counts.
The journal impact factor (JIF), proposed by Garfield in 1955, is one of the most prominent and widely used citation-based indicators of the performance and significance of a scientific journal. The JIF is simple, clearly defined, comparable over time, and easily calculated from data provided by Thomson Reuters, but it suffers from serious technical and methodological flaws. This paper discusses one of the core problems: the JIF is affected by bias factors (e.g., document type) that have nothing to do with the prestige or quality of a journal. To address this problem, we suggest using the generalized propensity score methodology based on the Rubin Causal Model. Citation data for papers in all journals in the ISI subject category “Microscopy” (Journal Citation Reports) are used to illustrate the proposal.
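The JIF definition the abstract builds on is the standard two-year ratio: citations received in year Y by items the journal published in years Y-1 and Y-2, divided by the number of citable items in those two years. A minimal sketch, using hypothetical counts that are not taken from the study:

```python
def journal_impact_factor(citations_in_year, items_prev1, items_prev2):
    """Two-year JIF: citations received this year by items published in
    the two preceding years, divided by the citable items in those years."""
    return citations_in_year / (items_prev1 + items_prev2)

# Hypothetical example: 228 citations in year Y to 60 + 54 citable items
jif = journal_impact_factor(228, 60, 54)  # 228 / 114 = 2.0
```

The document-type bias the abstract criticizes enters through the denominator: which items count as "citable" (articles and reviews, but not, e.g., editorials) changes the ratio independently of journal quality.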
Citation analysis for evaluative purposes requires reference standards, as publication activity and citation habits differ
considerably among fields. Reference standards based on journal classification schemes are fraught with problems in the case
of multidisciplinary and general journals and are limited with respect to their resolution of fields. To overcome these shortcomings
of journal classification schemes, we propose a new reference standard for chemistry and related fields that is based on the
sections of the Chemical Abstracts database. We determined the values of the reference standard for research articles published in 2000 in the biochemistry
sections of Chemical Abstracts as an example. The results show that citation habits vary extensively not only between fields but also within fields. Overall, the sections of Chemical Abstracts seem to be a promising basis for reference standards in chemistry and related fields for four reasons: (1) The wider coverage
of the pertinent literature, (2) the quality of indexing, (3) the assignment of papers published in multidisciplinary and
general journals to their respective fields, and (4) the resolution of fields on a lower level (e.g. mammalian biochemistry)
than in journal classification schemes (e.g. biochemistry & molecular biology).
We investigated committee peer review for awarding long-term fellowships to post-doctoral researchers as practiced by the
Boehringer Ingelheim Fonds (B.I.F.) - a foundation for the promotion of basic research in biomedicine. Assessing the validity
of selection decisions requires a generally accepted criterion for research impact. A widely used approach is to use citation
counts as a proxy for the impact of scientific research. Therefore, a citation analysis was conducted for articles published prior to
the applicants' approval or rejection for a B.I.F. fellowship. Based on our model estimation (negative binomial
regression model), journal articles that had been published by applicants approved for a fellowship award (n = 64) prior to applying for the B.I.F. fellowship award can be expected to have 37% (straight counts of citations) and 49%
(complete counts of citations) more citations than articles that had been published by rejected applicants (n = 333). Furthermore, comparison with international scientific reference values revealed (a) that articles published by successful
and non-successful applicants are cited considerably more often than the “average” publication and (b) that excellent research
performance can be expected more of successful than non-successful applicants. The findings confirm that the foundation is
not only achieving its goal of selecting the best junior scientists for fellowship awards, but also successfully attracting
highly talented young scientists to apply for B.I.F. fellowships.
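In a negative binomial regression with a log link, a coefficient for the approval indicator translates into a multiplicative incidence rate ratio, exp(beta); the reported 37% and 49% citation surpluses correspond to rate ratios of about 1.37 and 1.49. A minimal sketch of this interpretation, with coefficients chosen here purely for illustration (they are not estimates from the study):

```python
import math

def rate_ratio(beta):
    """Log-link count model: exp(beta) is the multiplicative effect of a
    one-unit change in the predictor on the expected citation count."""
    return math.exp(beta)

# Hypothetical coefficients that would reproduce the reported surpluses:
beta_straight = math.log(1.37)   # straight counts of citations
beta_complete = math.log(1.49)   # complete counts of citations

surplus_straight = (rate_ratio(beta_straight) - 1) * 100  # about 37%
surplus_complete = (rate_ratio(beta_complete) - 1) * 100  # about 49%
```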
Summary In science, peer review is the best-established method of assessing manuscripts for publication and applications for research fellowships and grants. However, the fairness of peer review, its reliability, and whether it achieves its aim of selecting the best science and scientists have often been questioned. The paper presents the first comprehensive study on committee peer review for the selection of doctoral (Ph.D.) and post-doctoral research fellowship recipients. We analysed the selection procedure followed by the Boehringer Ingelheim Fonds (B.I.F.), a foundation for the promotion of basic research in biomedicine, with regard to the reliability, fairness and predictive validity of the procedure - the three quality criteria for professional evaluations. We analysed a total of 2,697 applications, 1,954 for doctoral and 743 for post-doctoral fellowships. In 76% of the cases, the fellowship award decision was characterized by agreement between reviewers. Similar figures for reliability have been reported for the grant selection procedures of other major funding agencies. With regard to fairness, we analysed whether potential sources of bias, i.e., gender, nationality, major field of study and institutional affiliation, could have influenced decisions made by the B.I.F. Board of Trustees. For post-doctoral fellowship applications, no statistically significant influence of any of these variables could be observed. For doctoral fellowship applications, we found evidence of an institutional, major field of study and gender bias, but not of a nationality bias. The most important aspect of our study was to investigate the predictive validity of the procedure, i.e., whether the foundation achieves its aim to select as fellowship recipients the best junior scientists. Our bibliometric analysis showed that this is indeed the case and that the selection procedure is thus highly valid: research articles by B.I.F.
fellows are cited considerably more often than the “average” paper (average citation rate) published in the journal sets corresponding to the fields “Multidisciplinary”, “Molecular Biology & Genetics”, and “Biology & Biochemistry” in Essential Science Indicators (ESI) from the Institute for Scientific Information (ISI, Philadelphia, Pennsylvania, USA). Most of the fellows publish within these fields.
Authors: Christoph Neuhaus, Andreas Litscher, and Hans-Dieter Daniel
The database host STN International allows for extensive citation analysis in the SCISEARCH database (Science Citation Index
Expanded) and in the CAplus database (Chemical Abstracts). Along with its powerful browsing, searching, and analyzing facilities,
STN International also offers a script language. In this paper we examine the usefulness of the script language in the automation
of citation analysis in SCISEARCH and CAplus.
Authors: Lutz Bornmann, Rüdiger Mutz, and Hans-Dieter Daniel
In the grant peer review process we can distinguish various evaluation stages in which assessors judge applications on a rating
scale. Bornmann et al. show that latent Markov models are well suited to the statistical modeling of peer review processes.
The main objective of this short communication is to test the influence of the applicants’ gender on the modeling of a peer
review process by using latent Markov models. We found differences in transition probabilities from one stage to the other
for applications for a doctoral fellowship submitted by male and female applicants.
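A latent Markov model describes movement between (latent) evaluation stages through a transition probability matrix, and a gender effect of the kind the abstract reports shows up as different matrices for male and female applicants. A minimal sketch of one such transition step, with all probabilities invented for illustration (they are not estimates from the study):

```python
# Two latent states over evaluation stages ("favourable", "unfavourable");
# every number below is hypothetical, chosen only to show the mechanics.

def step(dist, transition):
    """Propagate a state distribution one stage forward: dist @ transition."""
    n = len(transition)
    return [sum(dist[i] * transition[i][j] for i in range(n)) for j in range(n)]

# Hypothetical gender-specific transition matrices (each row sums to 1):
T_male   = [[0.8, 0.2], [0.3, 0.7]]
T_female = [[0.7, 0.3], [0.3, 0.7]]

start = [0.5, 0.5]                    # initial distribution over states
after_male   = step(start, T_male)    # [0.55, 0.45]
after_female = step(start, T_female)  # [0.50, 0.50]
```

The difference between `after_male` and `after_female` after one step is the kind of gender-dependent transition behaviour the study tests for.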
Authors: Lutz Bornmann, Christoph Neuhaus, and Hans-Dieter Daniel
Taking the interactive open access journal Atmospheric Chemistry and Physics as an example, this study examines whether Thomson Reuters, for the Journal Citation Reports, correctly calculates the Journal Impact Factor (JIF) of a journal that publishes several versions of a manuscript within a two-stage publication process. The results of this study show that the JIF of the journal is not overestimated through the two-stage publication process.