scientific field convenes to evaluate scientific outcomes.
In journal peer review, reviewers sought by the editor normally provide the editor with a written review and overall publication recommendation. “The editor, on the basis of the reviews and
Authors: Lutz Bornmann, Markus Wolf, and Hans-Dieter Daniel
dictionary for the word analysis makes possible direct comparison of the results of different text analyses.
In this study we compared three types of texts from journal peer review. For one, we examined agreement and difference between comments that
Authors: Lutz Bornmann, Christophe Weymuth, and Hans-Dieter Daniel
Using data from a comprehensive evaluation study of the peer review process of Angewandte Chemie International Edition (AC-IE), in this study we examined how referees’ comments differ between manuscripts rejected at AC-IE and later
published in either a low-impact journal (Tetrahedron Letters, n = 54) or a high-impact journal (Journal of the American Chemical Society, n = 42). For this purpose, we performed a content analysis of the comments that led to the rejection of the manuscripts at AC-IE.
For the content analysis, a classification scheme with thematic areas developed by Bornmann et al. (2008) was used. As the results of the analysis demonstrate, a large number of negative comments from referees in the areas “Relevance
of contribution” and “Design/Conception” are clear signs that a manuscript rejected at AC-IE will not be published later in
a high-impact journal. The number of negative statements in the areas “Writing/Presentation,” “Discussion of results,” “Method/Statistics,”
and “Reference to the literature and documentation,” on the other hand, had no statistically significant influence on the
probability that a rejected manuscript would later be published in a low- or high-impact journal. The results of this study
have various implications for authors, journal editors, and referees.
Authors: Lutz Bornmann, Hermann Schier, Werner Marx, and Hans-Dieter Daniel
Schubert (Scientometrics, 78:559–565) showed that “a Hirsch-type index can be used for assessing single highly cited publications by calculating the h index of the set of papers citing the work in question” (p. 559). To demonstrate that this single publication h index is a useful yardstick for comparing the quality of different publications, the index should be strongly related to assessment by peers. In a comprehensive research project, we investigated the peer review process of Angewandte Chemie International Edition. The data set contains manuscripts reviewed in the year 2000 and either accepted by the journal or rejected but published elsewhere. Single publication h index values were calculated for a total of 1,814 manuscripts. The results show a correlation in the expected direction between peer assessments and single publication h index values: after publication, manuscripts with positive ratings by the journal’s reviewers show on average higher h index values than manuscripts with negative ratings by reviewers (and later published elsewhere). However, our findings do not support Schubert’s assumption that the additional dimension of indirect citation influence contributes to a more refined picture of the most cited papers.
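Schubert’s definition quoted above can be sketched in a few lines: compute the ordinary Hirsch index, but over the citation counts of the papers that cite the publication in question. This is a minimal illustration; the function names are ours, and the citation counts in the example are invented rather than drawn from the AC-IE data set.

```python
def h_index(citation_counts):
    """Classic Hirsch index: the largest h such that h of the papers
    have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        # Once cites < rank, no later rank can qualify (counts are sorted
        # in descending order), so h keeps its last valid value.
        if cites >= rank:
            h = rank
    return h

def single_publication_h_index(citing_paper_citations):
    """Schubert's single publication h index: the h index computed over
    the set of papers citing the work in question."""
    return h_index(citing_paper_citations)
```

For example, a publication whose citing papers have citation counts [10, 8, 5, 4, 3, 0] receives a single publication h index of 4, because four of its citing papers have at least four citations each.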
Authors: Lutz Bornmann, Irina Nast, and Hans-Dieter Daniel
The case of Dr. Hwang Woo Suk, the South Korean stem-cell researcher, is arguably the highest profile case in the history
of research misconduct. The discovery of Dr. Hwang’s fraud led to fierce criticism of the peer review process (at Science). To find answers to the question of why the journal peer review system did not detect scientific misconduct (falsification
or fabrication of data) not only in the Hwang case but also in many other cases, an overview is needed of the criteria that
editors and referees normally consider when reviewing a manuscript. Do they look at all for signs of scientific misconduct
when reviewing a manuscript? We conducted a quantitative content analysis of 46 research studies that examined editors’ and
referees’ criteria for the assessment of manuscripts and their grounds for accepting or rejecting manuscripts. The total of
572 criteria and reasons from the 46 studies could be assigned to nine main areas: (1) ‘relevance of contribution,’ (2) ‘writing
/ presentation,’ (3) ‘design / conception,’ (4) ‘method / statistics,’ (5) ‘discussion of results,’ (6) ‘reference to the
literature and documentation,’ (7) ‘theory,’ (8) ‘author’s reputation / institutional affiliation,’ and (9) ‘ethics.’ None of the criteria or reasons that were assigned to the nine main areas refers to or is related to possible falsification or
fabrication of data. In a second step, the study examined what main areas take on high and low significance for editors and
referees in manuscript assessment. The main areas that are clearly related to the quality of the research underlying a manuscript
frequently emerged in the analysis as important: ‘theory,’ ‘design / conception,’ and ‘discussion of results.’