Authors: Chia-Lin Chang, Michael McAleer and Les Oxley
The paper analyses what makes a great journal great in the sciences, based on quantifiable Research Assessment Measures (RAM). Alternative RAM are discussed, with an emphasis on the Thomson Reuters ISI Web of Science database (hereafter ISI). Various ISI RAM that are calculated annually or updated daily are defined and analysed, including the classic 2-year impact factor (2YIF), 5-year impact factor (5YIF), Immediacy (or 0-year impact factor, 0YIF), Eigenfactor, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, Zinfluence, PI-BETA (Papers Ignored—By Even The Authors), Impact Factor Inflation (IFI), and three new RAM, namely the Historical Self-citation Threshold Approval Rating (H-STAR), 2-Year Self-citation Threshold Approval Rating (2Y-STAR), and Cited Article Influence (CAI). The RAM data are analysed for the 6 most highly cited journals in 20 highly varied and well-known ISI categories in the sciences, where the journals are chosen on the basis of 2YIF. The application to these 20 ISI categories could serve as a template for other ISI categories in the sciences and social sciences, and as a benchmark for newer journals across a range of ISI disciplines. In addition to evaluating the 6 most highly cited journals in each of the 20 ISI categories, the paper highlights the similarities and differences among the alternative RAM, finds that several RAM capture similar performance characteristics for the most highly cited scientific journals, and shows that PI-BETA is not highly correlated with the other RAM and hence conveys additional information about research performance. To provide a meta-analysis summary of the RAM, which are predominantly ratios, harmonic mean rankings of the 13 RAM are presented for the 6 most highly cited journals in each of the 20 ISI categories.
It is shown that emphasizing the impact factor of a journal, specifically the 2-year impact factor, to the exclusion of other informative RAM can lead to a distorted evaluation of journal performance and influence across disciplines, especially in view of inflated journal self-citations.
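As a sketch of the meta-analysis step described above, the harmonic mean of a journal's ranks across several RAM can be computed as follows; the RAM names are from the abstract, but the rank values and their combination here are invented for illustration only, not taken from the paper's data.

```python
from statistics import harmonic_mean

# Hypothetical ranks of a single journal across a handful of RAM
# (rank 1 = best within its ISI category); values are illustrative.
ranks = {"2YIF": 2, "5YIF": 3, "Eigenfactor": 1, "h-index": 4, "PI-BETA": 6}

# Harmonic mean ranking: n / sum(1/r_i). Because it is dominated by
# the smallest (best) ranks, it suits the ratio-type RAM described above.
hm = harmonic_mean(ranks.values())
print(round(hm, 3))  # prints 2.222
```

Journals would then be ordered by this harmonic mean within each ISI category, lower values indicating stronger overall performance across the RAM.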
Authors: Alireza Abbasi, Jörn Altmann and Junseok Hwang
Although there are many studies quantifying the academic performance of researchers, such as measuring scientific performance by the number of publications, there are no studies quantifying the collaboration activities of researchers. This study addresses that shortcoming. Based on three measures, namely the collaboration network structure of researchers, the number of collaborations with other researchers, and the productivity index of co-authors, two new indices, the RC-Index and the CC-Index, are proposed for quantifying the collaboration activities of researchers and scientific communities. After applying these indices to a data set generated from the publication lists of five schools of information systems, the study concludes with a discussion of the shortcomings and advantages of these indices.