Search Results
Abstract
A method is presented to display the comparative impact of scientific publications relative to their environment (e.g., journals). Furthermore, the method offers a new approach to establishing a journal's impact as measured by received citations. Moreover, this impact measurement can differentiate between various types of publications (editorials and letters, normal papers, reviews, etc.). It is argued that the method presented is more useful for library and research evaluation policies than the ISI impact factor.
Summary
Quantitative and qualitative scientific evaluations of the research performance of Thai researchers were carried out with regard to their international publications and citations in four subject categories, namely Clinical Medicine, Chemistry, Material Sciences, and Engineering. This work used citations to publications of Thai researchers in the Science Citation Index (SCI) database during 1998-2002 as a data source. Article impact factors (AIF), position impact factors (PIF) and journal impact factors (JIF) were calculated and compared for the quantitative evaluation. The positions and significance levels (cited contents) of the citations were considered for the qualitative assessment. For the quantitative evaluation, Thai researchers in Clinical Medicine produced the highest article quantity and number of times cited, while Material Sciences showed the lowest. Clinical Medicine had the highest AIF value, while Engineering exhibited the lowest. Each article by Thai researchers was found to be cited more than once within a citing article, especially articles in Clinical Medicine. For the qualitative assessment, most articles by Thai scholars were cited in the Introduction and Results & Discussion sections of the citing articles. Only non-Thai researchers in Clinical Medicine preferred to use the Discussion of Thai articles when discussing their own work, whereas articles in Chemistry, Material Sciences and Engineering were cited as general references. Less than 1.5% of the research works of Thai scholars were cited as “the pioneer” for the research communities of the subject categories of interest.
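The summary above does not define the AIF and PIF; for orientation, the journal impact factor (JIF) with which they are compared is conventionally computed for a given year Y as

    \mathrm{JIF}_{Y} \;=\; \frac{C_{Y}(P_{Y-1}) + C_{Y}(P_{Y-2})}{N_{Y-1} + N_{Y-2}}

where C_Y(P_{Y-k}) is the number of citations received in year Y by the items a journal published in year Y-k, and N_{Y-k} is the number of citable items published in year Y-k. How the study's article-level and position-based variants modify this definition is not stated in the summary.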
Abstract
By employing the Pearson correlation and Fisher and t-tests, the present study analyzes and compares scientometric data, including number of source items, number of citations, impact factor, immediacy index, citing half-life and cited half-life, for essential journals in physics, chemistry and engineering from the SCI JCR on the Web 2002. The results of the study reveal that for all the scientometric indicators except the cited half-life, there is no significant mean difference between physics and chemistry, indicating similar citation behavior among the scientists. There is no significant mean difference in the citing half-life among the three subjects. A significant mean difference is generally observed for most of the scientometric indicators between engineering and physics (or chemistry), demonstrating the difference in citation behavior between engineering researchers and scientists in physics or chemistry. Significant correlations among number of source items, number of citations, impact factor and immediacy index, and between cited half-life and citing half-life, generally prevail for each of the three subjects. In contrast, there is in general no significant correlation between the cited half-life and the other scientometric indicators. The three subjects present the same strength of correlation between number of source items and number of citations, between number of citations and impact factor, and between cited half-life and citing half-life.
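A minimal sketch (not the authors' code) of the kind of comparison described above, using invented placeholder values: a Fisher F-test for equality of variances and a t-test for the difference in means between two subjects, plus a Pearson correlation between two indicators within one subject.

    # Hypothetical journal-level indicators for two subjects; all values are invented.
    import numpy as np
    from scipy import stats

    physics_if = np.array([2.1, 3.4, 1.8, 2.7, 4.0, 2.2])
    engineering_if = np.array([0.9, 1.2, 0.7, 1.5, 1.1, 0.8])

    # Fisher's F-test for equality of variances (two-sided).
    f_stat = physics_if.var(ddof=1) / engineering_if.var(ddof=1)
    df1, df2 = len(physics_if) - 1, len(engineering_if) - 1
    p_var = 2 * min(stats.f.sf(f_stat, df1, df2), stats.f.cdf(f_stat, df1, df2))

    # t-test for the mean difference; equal_var is chosen from the F-test result.
    t_stat, p_mean = stats.ttest_ind(physics_if, engineering_if, equal_var=p_var > 0.05)

    # Pearson correlation between two indicators within one subject,
    # e.g. number of citations vs. impact factor (again invented values).
    physics_citations = np.array([1200, 3400, 900, 2100, 5000, 1500])
    r, p_corr = stats.pearsonr(physics_citations, physics_if)

    print(f"F = {f_stat:.2f} (p = {p_var:.3f}), t = {t_stat:.2f} (p = {p_mean:.3f}), r = {r:.2f}")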
Abstract
In this paper, our objective is to delineate some of the problems that can arise in using research output for performance evaluation. Research performance in terms of the Impact Factor (IF) of papers, say of scientific institutions in a country, can depend critically on co-authored papers in a situation where internationally co-authored papers are known to have significantly different (higher) impact factors compared with purely indigenous papers. Thus, international collaboration not only serves to increase the overall output of research papers of an institution; the contribution of such papers to the average Impact Factor of the institutional output can also be disproportionately high. To quantify this effect, an index of gain in impact through foreign collaboration (GIFCOL) is defined such that it ensures comparability between institutions with differing proportions of collaborative output. A case study of major Indian institutions is undertaken, in which Cluster Analysis is used to distinguish between intrinsically high-performance institutions and those that gain disproportionately in terms of the perceived quality of their output as a result of international collaboration.
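The GIFCOL formula itself is not given in the abstract; the following is only a hypothetical illustration (not the authors' definition) of the underlying effect, i.e. how a small share of internationally co-authored papers can lift an institution's average impact factor above what its purely indigenous output would yield.

    # All impact-factor values below are invented for illustration only.
    indigenous = [1.0, 1.2, 0.8, 1.1]   # IFs of purely indigenous papers
    collaborative = [3.5, 2.8, 4.1]     # IFs of internationally co-authored papers

    mean_all = sum(indigenous + collaborative) / (len(indigenous) + len(collaborative))
    mean_indigenous = sum(indigenous) / len(indigenous)

    # Relative lift in the average impact factor attributable to collaborative papers.
    gain = (mean_all - mean_indigenous) / mean_indigenous
    print(f"average IF {mean_all:.2f} vs. indigenous-only {mean_indigenous:.2f}; relative gain = {gain:.0%}")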
Abstract
The objective of this article is to examine in which respects journal usage data differ from citation data. This comparison is conducted both at the journal level and on a paper-by-paper basis. At the journal level, we define a so-called usage impact factor and a usage half-life in analogy to the corresponding Thomson citation indicators. The usage data were provided by Science Direct for the subject category “oncology”. Citation indicators were obtained from the JCR, and article citations were retrieved from the SCI and Scopus. Our study shows that downloads and citations have different obsolescence patterns. While the average cited half-life was 5.6 years, we computed a mean usage half-life of 1.7 years for the year 2006. We identified a strong correlation between citation frequencies and the number of downloads for our journal sample. The relationship was weaker when the analysis was performed on a paper-by-paper basis because of variation in the citation-to-download ratio among articles. The correlation between the usage impact factor and Thomson’s journal impact factor was also “only” moderate because of the different obsolescence patterns of downloads and citations.
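A hedged sketch of the two journal-level indicators named above, assuming the usage impact factor mirrors the two-year citation impact factor with downloads in place of citations, and that the usage half-life is the median age of the downloaded articles; the download and publication counts are invented.

    # Downloads recorded in 2006, broken down by publication year of the downloaded article.
    downloads_2006 = {2006: 5200, 2005: 4100, 2004: 2600, 2003: 900, 2002: 400, 2001: 200}
    articles_published = {2005: 180, 2004: 170}

    # Usage impact factor for 2006: downloads in 2006 to items published in 2004-2005,
    # divided by the number of items published in 2004-2005.
    uif_2006 = (downloads_2006[2005] + downloads_2006[2004]) / (articles_published[2005] + articles_published[2004])

    # Usage half-life: years back from 2006 needed to accumulate half of the downloads,
    # with linear interpolation inside the year in which the 50% mark is crossed.
    total, cum, half_life = sum(downloads_2006.values()), 0, 0.0
    for age, year in enumerate(sorted(downloads_2006, reverse=True)):
        if cum + downloads_2006[year] >= total / 2:
            half_life = age + (total / 2 - cum) / downloads_2006[year]
            break
        cum += downloads_2006[year]

    print(f"usage impact factor = {uif_2006:.1f}, usage half-life = {half_life:.1f} years")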
Abstract
Scientific production has been evaluated from very different perspectives, the best known of which are essentially based on the impact factors of the journals included in the Journal Citation Reports (JCR). This has not prevented simultaneous warnings regarding the dangers of their indiscriminate use when making comparisons, because the biases incorporated in the elaboration of these impact factors produce significant distortions, which may invalidate the results obtained. Notable among such biases are those generated by differences in the propensity to cite of the different areas, journals and/or authors, by variations in the period over which impact materialises, and by the varying presence of knowledge areas in the sample of journals contained in the JCR. While the traditional evaluation method consists of standardisation by subject categories, recent studies have criticised this approach and offered new possibilities for making inter-area comparisons. In view of such developments, the present study proposes a novel approach to the measurement of scientific activity, in an attempt to lessen the aforementioned biases. This approach combines the use of a new impact factor, calculated for each journal, with the grouping of the institutions under evaluation into homogeneous groups. An empirical application is undertaken to evaluate the scientific production of Spanish public universities in the year 2000. This application considers both the articles published in the multidisciplinary databases of the Web of Science (WoS) and the data concerning the journals contained in the Sciences and Social Sciences Editions of the Journal Citation Reports (JCR). All this information is provided by the Institute for Scientific Information (ISI) via its Web of Knowledge (WoK).
Abstract
The Science Citation Index Journal Citation Reports (JCR), published by the Institute for Scientific Information (ISI) and designed to rank, evaluate, categorize and compare journals, is used in a wide scientific context as a tool for evaluating researchers and research work through the use of just one of its indicators, the impact factor. With the aim of obtaining an overall and synthetic perspective of impact factor values, we studied the frequency distributions of this indicator using the box-plot method. Using this method we divided the journals listed in the JCR into five groups (low, lower central, upper central, high and extreme). These groups position a journal in relation to its competitors. Thus, the group designated as extreme contains the journals with high impact factors which are deemed prestigious by the scientific community. We used the JCR data from 1996 to determine these groups, first for all subject categories combined (all 4779 journals) and then for each of the 183 ISI subject categories. We then replaced the indicator value for each journal with the name of the group in which it was classified. The journal's group may differ from one subject category to another. In this article, we present a guide for evaluating journals constructed as described above. It provides a comprehensive and synthetic view of two of the most used sections of the JCR. It makes it possible to form more accurate and complete judgements of and through the journals, and avoids an oversimplified view of the complex reality of the world of journals. It immediately reveals the scientific subject category in which a journal is best positioned. Also, whereas it used to be difficult to make intra- and interdisciplinary comparisons, this is now possible without having to consult the different sections of the JCR. We construct this guide each year using the indicators published in the JCR by the ISI.
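A minimal sketch of the five-group classification described above, assuming the box-plot boundaries are the first quartile, the median, the third quartile and the upper fence Q3 + 1.5 * IQR; the exact cut-offs used by the authors are not given in the abstract.

    import numpy as np

    def classify(impact_factors):
        # Assign each impact factor to one of the five box-plot groups.
        ifs = np.asarray(impact_factors, dtype=float)
        q1, median, q3 = np.percentile(ifs, [25, 50, 75])
        upper_fence = q3 + 1.5 * (q3 - q1)

        def group(x):
            if x > upper_fence:
                return "extreme"
            if x > q3:
                return "high"
            if x > median:
                return "upper central"
            if x > q1:
                return "lower central"
            return "low"

        return [group(x) for x in ifs]

    # Hypothetical impact factors for the journals of one subject category.
    print(classify([0.3, 0.7, 1.1, 1.4, 1.9, 2.4, 3.0, 9.8]))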
Abstract
Activity shares in different types of research work for coauthors of scientific papers were determined by questionnaire methods. It was found, for example, that first authors perform about 70% of the total work needed for two-authored papers, decreasing to 34% for papers with five authors. From the Total Activity Shares determined for coauthors, Total Team Contribution Factors could be calculated for cooperating teams. Total as well as Intramural and Extramural Team Cooperativeness for research teams were obtained by relating the shares of impact factor scores of the investigated teams to the total.
Abstract
Hirsch’s concept of the h-index was used to define a similarity measure for journals. The h-similarity is easy to calculate from the publicly available data of the Journal Citation Reports and allows a plausible interpretation. On the basis of h-similarity, a relative eminence indicator of journals was determined: the ratio of a journal's JCR impact factor to the weighted average impact factor of similar journals. This standardization allows journals from disciplines with a lower average citation level (mathematics, engineering, etc.) to get into the top lists.
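The abstract does not spell out the weighting scheme; assuming the weights are the pairwise h-similarities s_{jk}, the relative eminence of a journal j with similar-journal set S_j would read

    E_j \;=\; \frac{\mathrm{IF}_j}{\left. \sum_{k \in S_j} s_{jk}\,\mathrm{IF}_k \right/ \sum_{k \in S_j} s_{jk}}

i.e. the journal's own JCR impact factor divided by the similarity-weighted mean impact factor of its h-similar journals.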