Search Results
Abstract
Comparative assessment of the journal literature produced by laboratories and institutions working in different fields is a difficult exercise. The impact factor of the journals is not a suitable indicator on its own, since citation practices vary across fields. This study corrects for that variation using a measure, the “subfield corrected impact factor”, and applies it to the journal papers produced by the laboratories of the Indian Council of Scientific and Industrial Research. This measure made it possible to compare the impact of journal literature across different fields.
Summary The purpose of this study is to analyze and compare journal citation data, from Journal Citation Reports on the Web 2000, for general and internal medicine and for surgery. The source items and five kinds of citation data, i.e., citation counts, impact factor, immediacy index, citing half-life and cited half-life, are examined, and the correlation between each of the fifteen pairs of citation data is determined using Pearson correlation tests. Fisher’s Z-transform was employed to test for a significant difference between the Pearson correlation coefficients for each pair of citation data across these two subject areas. The results of this work reveal that frequently published journals are cited more often and have higher impact factors and immediacy indices; in addition, they usually have short citing half-lives (i.e., they usually cite current literature). The impact factor and immediacy index each correlate significantly with citation counts. A significant correlation also exists between impact factor and immediacy index. However, cited half-life shows no correlation with the other citation data, except citing half-life. For journals of general and internal medicine and of surgical medicine, there is no significant difference in the Pearson correlation coefficients for the following pairs of citation data: source items and citation counts, source items and impact factor, source items and citing half-life, citation counts and citing half-life, impact factor and citing half-life, immediacy index and citing half-life, and cited half-life and citing half-life.
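The comparison of correlation coefficients described above can be sketched with the standard Fisher r-to-z procedure. This is a minimal illustration of the general technique, not the paper's own code; the function name and the sample sizes in the example are assumptions for illustration only.

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Compare two independent Pearson correlations via Fisher's Z-transform.

    Returns the z statistic; |z| > 1.96 indicates a significant
    difference at the 5% level (two-tailed).
    """
    z1 = math.atanh(r1)  # Fisher transform of the first correlation
    z2 = math.atanh(r2)  # Fisher transform of the second correlation
    # Standard error of the difference z1 - z2
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (z1 - z2) / se

# Hypothetical example: r = 0.80 from 100 medicine journals versus
# r = 0.72 from 80 surgery journals (illustrative numbers only).
z = fisher_z_test(0.80, 100, 0.72, 80)
```

With these illustrative inputs the statistic falls below 1.96, i.e., the two correlations would not differ significantly, matching the kind of conclusion the abstract reports for several pairs.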
One journal, the Asia-Pacific Education Researcher, which was indexed starting in 2007, was published locally. Most of the Philippine-authored papers were published in journals with no or low impact factors. Table 3
Abstract
Publication and citation data for the thirty journals listed in the Dermatology & Venereal Diseases category of the 1996 edition of the Journal Citation Reports (JCR) on CD-ROM and seven dermatology journals not listed in the JCR-1996 were retrieved online from DIMDI and analysed with respect to short- and long-term impact factors, ratios of cited to uncited papers, as well as knowledge export and international visibility. The short-term impact factors (calculated according to the rules applied in the JCR) are very similar to their JCR counterparts; thus there are only minor changes in the rankings according to JCR impact factors and those calculated on the basis of online data. The non-JCR journals rank within the upper (two titles) and the lower third of the 37 journals (one title being at the upper end of the last third and the other four titles being at the very end of the list). Ranking the journals according to their long-term impact factors results in no major changes in a journal's position. Normalized mean citation rates, which give a more direct impression of a journal's citedness in relation to the average citedness of its subfield, are also shown. Ratios of cited to uncited papers in general parallel the impact factors, i.e., journals with higher (constructed) impact factors have a higher percentage of cited papers. For each journal, the Gini concentration coefficient was calculated as a measure of the unevenness of the citation distribution. In general, journals with higher (constructed) impact factors have higher Gini coefficients, i.e., the higher the impact factor, the more uneven the citation distribution. Knowledge export and international visibility were measured by determining the distinct categories to which the citing journals have been assigned ("citing subfields") and the distinct countries to which the citing authors belong ("citing countries"), respectively. Each journal exhibits a characteristic profile of citing subfields and citing countries.
Normalized rankings based on knowledge export and international visibility (relating the number of published papers to the number of distinct subfields and distinct countries) are to a large extent different from the impact factor rankings. It is concluded that the additional data given, especially the data on knowledge export and international visibility, are necessary ingredients of a comprehensive description of a journal's significance and its position within its subject category.
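The Gini concentration coefficient used above is the standard inequality measure applied to a journal's per-paper citation counts. A minimal sketch of that computation, assuming the common sorted-rank formula (the paper's exact implementation is not given in the abstract):

```python
def gini(values):
    """Gini concentration coefficient of a citation distribution.

    0 means perfectly even (every paper cited equally); values near 1
    mean citations are concentrated on a few papers.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Sorted-rank formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Illustrative: an even distribution scores 0, a highly skewed one
# (all citations on one of four papers) scores 0.75.
gini([5, 5, 5, 5])   # -> 0.0
gini([0, 0, 0, 10])  # -> 0.75
```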
Abstract
Citations from 1980 to 1988, obtained for fifty biomedical journals covered by the Journal Citation Reports (JCR), are studied. In purely numerical terms, the citation evolution of each journal, including its impact factor (IF), depends essentially on three variables: (i) the yearly rate of increase of items that can be cited (citable items), (ii) the relative yearly increment in citing journals, and (iii) the relative yearly increment in citations. The mechanics of this give rise to three standard patterns for journal citations: (i) annual impact factors increase each year (ascending evolution), (ii) annual impact factors remain the same each year (constant evolution), and (iii) annual impact factors decrease each year (descending evolution). The reason some journal citation profiles do not fit the standard patterns is presumably that forces are at work that can alter the numerical mechanics described. The concepts of saturation/unsaturation of the demand for scientific information are introduced, showing how they are reflected in the impact factor figures for the journals cited.
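The mechanics the abstract describes rest on the conventional two-year impact factor ratio. A minimal sketch of that ratio (the standard JCR definition, not this paper's code; the numbers in the example are illustrative only):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Conventional two-year journal impact factor: citations received in
    year Y to items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Illustrative numbers only. If citations and citable items grow at the
# same relative rate, the annual IF stays flat ("constant evolution");
# citations growing faster gives "ascending evolution", slower gives
# "descending evolution".
if_example = impact_factor(450, 300)  # -> 1.5
```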
Abstract
The bibliometric indicators currently used to assess scientific production have a serious flaw: a notable bias is produced when different subfields are compared. In this paper we demonstrate the existence of this bias using the impact factor (IF) indicator. The impact factor is related to the quality of a published article, but only when each specific subfield is taken separately: only 15.6% of the subfields we studied were found to have homogeneous means. The bias involved can be very misleading when bibliometric estimators are used as a basis for assigning research funds. To improve this situation, we propose a new estimator, the RPU, based on a normalization of the impact factor that minimizes bias and permits comparison among subfields. The RPU of a journal is calculated with the formula RPU = 10(1 − exp(−IF/x)), where IF is the impact factor of the journal and x is the mean IF of the subfield to which the journal belongs. The RPU retains the advantages of the impact factor (simplicity of calculation, immediacy and objectivity) and increases the proportion of homogeneous subfields from 15.6% to 93.7%.
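The RPU formula above is simple enough to sketch directly. This follows the expression given in the abstract; the function name is an assumption for illustration.

```python
import math

def rpu(impact_factor, subfield_mean_if):
    """RPU normalization from the abstract:
    RPU = 10 * (1 - exp(-IF / x)),
    where x is the mean IF of the journal's subfield.
    Bounded in [0, 10), comparable across subfields."""
    return 10.0 * (1.0 - math.exp(-impact_factor / subfield_mean_if))

# A journal sitting exactly at its subfield mean always scores the same,
# regardless of how citation-heavy the subfield is:
rpu(2.0, 2.0)  # == rpu(8.0, 8.0) == 10 * (1 - e**-1), about 6.32
```

This illustrates why the normalization minimizes the cross-subfield bias: only the ratio IF/x matters, so journals are scored relative to their own subfield's citation culture.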
Abstract
Peer review is fundamental to science as we know it, but is also a source of delay in getting discoveries communicated to the world. Researchers have investigated the effectiveness and bias of various forms of peer review, but little attention has been paid to the relationships among journal reputation, rejection rate, number of submissions received and time from submission to acceptance. In 22 ecology/interdisciplinary journals for which data could be retrieved, higher impact factor is positively associated with the number of submissions. However, higher impact factor journals tend to be significantly quicker in moving from submission to acceptance so that journals which receive more submissions are not those which take longer to get them through the peer review and revision processes. Rejection rates are remarkably high throughout the journals analyzed, but tend to increase with increasing impact factor and with number of submissions. Plausible causes and consequences of these relationships for journals, authors and peer reviewers are discussed.
Abstract
Impact factors are a widely accepted means for the assessment of journal quality. However, journal editors have ways to influence the impact factor of their journals, for example, by requesting that authors cite additional papers published recently in that journal, thus increasing the self-citation rate. I calculated self-citation rates of journals ranked in the Journal Citation Reports of ISI in the subject category “Ecology” (n = 107). On average, self-citation was responsible for 16.2 ± 1.3% (mean ± SE) of the impact factor in 2004. Self-citation rates decrease with increasing journal impact, but even high-impact journals show large variation. Six journals suspected of requesting additional citations showed high self-citation rates, which increased over the last seven years. To avoid further deliberate increases in self-citation rates, I suggest taking journal-specific self-citation rates into account for journal rankings.
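The quantity reported above is the fraction of a journal's impact-factor citations that the journal contributes to itself. A minimal sketch of that share and of one possible self-citation-adjusted ranking score; both function names are assumptions, and the correction shown is an illustration of how the suggested adjustment could look, not the author's published method.

```python
def self_citation_rate(self_citations, total_citations):
    """Share of a journal's impact-factor citations coming from the
    journal itself (illustrative definition; the exact counting window
    is not specified in this abstract)."""
    return self_citations / total_citations

def corrected_impact_factor(impact_factor, self_rate):
    """Impact factor with the self-citation share removed -- one
    hypothetical form of a journal-specific correction."""
    return impact_factor * (1.0 - self_rate)

# Illustrative: a journal with IF 2.0 where 16.2% of the counted
# citations are self-citations (matching the reported mean).
corrected_impact_factor(2.0, 0.162)  # -> 1.676
```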
Abstract
Using an online survey, we asked researchers in the field of environmental and resource economics how they themselves would rank a representative list of journals in their field. The results of this ranking are then compared to the ordering based on the journals’ impact factors as published by Thomson Scientific. The two sets of rankings appear to be positively correlated, but statistically the null hypothesis that the two rankings are uncorrelated cannot be rejected. This observation suggests that researchers judge the current quality of journals on other factors in addition to impact factors.