Search Results

Showing items 61–70 of 463 for the query "Impact factors".

  • "Impact factors" x
  • Refine by Access: All Content x
Clear All

Abstract  

In a general framework, given a set of articles and their received citations (the time periods of publication or citation are not important here), one can define the impact factor (IF) as the total number of received citations divided by the total number of publications (articles). The uncitedness factor (UF) is defined as the fraction of articles that received no citations. It is intuitively clear that IF should be a decreasing function of UF. This is confirmed by the results in [van Leeuwen & Moed, 2005], but all the given examples show a typical shape seldom seen in informetrics: a horizontal S-shape (first convex, then concave). Adopting a simple model for the publication-citation relation, we prove this horizontal S-shape in this paper, showing that this general functional relationship can be explained.
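The two quantities defined in this abstract can be computed directly from a list of per-article citation counts. A minimal sketch (function names and data are illustrative, not from the paper):

```python
def impact_factor(citations):
    """IF: total received citations divided by the number of articles."""
    return sum(citations) / len(citations)

def uncitedness_factor(citations):
    """UF: fraction of articles that received no citations."""
    return sum(1 for c in citations if c == 0) / len(citations)

# Citations received by five hypothetical articles
counts = [0, 0, 3, 5, 12]
print(impact_factor(counts))       # 4.0
print(uncitedness_factor(counts))  # 0.4
```

The claimed relationship is that, across journals, higher UF tends to go with lower IF, following a horizontal S-shaped curve.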

Restricted access
Scientometrics
Authors: Mark Elkins, Christopher Maher, Robert Herbert, Anne Moseley, and Catherine Sherrington

Abstract  

To determine the degree of correlation among journal citation indices that reflect the average number of citations per article, the most recent journal ratings were downloaded from the websites publishing four journal citation indices: the Institute for Scientific Information’s journal impact factor index, Eigenfactor’s article influence index, SCImago’s journal rank index and Scopus’ trend line index. Correlations were determined for each pair of indices, using ratings from all journals that could be identified as having been rated on both indices. Correlations between the six possible pairings of the four indices were tested with Spearman’s rho. Within each of the six possible pairings, the prevalence of identifiable errors was examined in a random selection of 10 journals and among the 10 most discordantly ranked journals on the two indices. The number of journals that could be matched within each pair of indices ranged from 1,857 to 6,508. Paired ratings for all journals showed strong to very strong correlations, with Spearman’s rho values ranging from 0.61 to 0.89, all p < 0.001. Identifiable errors were more common among scores for journals that had very discordant ranks on a pair of indices. These four journal citation indices were significantly correlated, providing evidence of convergent validity (i.e. they reflect the same underlying construct of average citability per article in a journal). Discordance in the ranking of a journal on two indices was in some cases due to an error in one index.
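The pairwise comparison described here rests on Spearman's rank correlation, which correlates the rank orders of two ratings rather than their raw values. A self-contained sketch using the classic no-ties formula (the journal ratings below are invented, not taken from the study):

```python
def rank(values):
    """Rank each value (1 = smallest); assumes no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman_rho(x, y):
    """Spearman's rho via the d^2 formula: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical ratings for the same seven journals on two indices
jif = [31.2, 9.8, 5.4, 3.1, 2.7, 1.9, 0.8]  # e.g. journal impact factor
sjr = [14.5, 6.2, 2.9, 2.4, 1.1, 1.3, 0.4]  # e.g. SCImago journal rank
print(round(spearman_rho(jif, sjr), 3))      # 0.964
```

Because only ranks enter the calculation, indices on very different numeric scales can still show near-perfect agreement, which is the sense in which the four indices are said to measure the same underlying construct.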

Restricted access

Abstract  

The aim of this study was to ascertain the possible effect of journal self-citations on the increase in the impact factors of journals in which this scientometric indicator rose by a factor of at least four in only a few years. Forty-three journals were selected from the Thomson Reuters (formerly ISI) Journal Citation Reports as meeting the above criterion. Eight journals in which the absolute number of citations was lower than 20 in at least two years were excluded, so the final sample consisted of 35 journals. We found no proof of widespread manipulation of the impact factor through the massive use of journal self-citations.

Restricted access

Abstract  

There is an evident need for the most scrupulous possible assessment of the fruits of research (in the context considered here, namely publications), with a qualitative, hence in-depth, analysis of the single products of research. But this would require time and competences which not all policy makers have at their disposal. It was hoped that quantitative procedures, apparently objective and easy to apply, would be able to surmount these difficulties. The diffusion of the quantitative evaluation of research is, in other words, the policy makers' adaptive response to the need to increase controls on the efficiency of public spending, since public investment clearly could not be determined at the outset on the basis of the market's spontaneous, decentralised balancing mechanisms. An essential step towards the prevention of the distortions most likely to result from quantitative evaluation is the adoption of quantitative procedures for evaluating the editorial policies of scientific journals, or rather of journals which claim to be scientific. Such procedures must be designed to highlight any distortions caused by the non-optimal editorial policies of journals. With quantitative evaluation, in fact, journals play a crucial role in the formation of public science policies. They thus have to be subjected to specific monitoring to make sure that their conduct fits the prerequisites necessary for them to perform their semi-official activity as certifiers of the quality of the products of research. The phenomena of the production, divulgation and fruition of scientific discovery are, of course, so complex that it is necessary to weigh them not with a single indicator, however helpful it may be, but with a constellation of indicators.
We received confirmation of the reliability of the impact factor as an instrument to monitor the quality of research and as a means of evaluating the research itself. This is a reassuring result for the current formulation of public policies and confirms the substantial honesty of the competition mechanisms of the scientific enterprise.

Restricted access

Abstract  

Some important bibliometric characteristics of chemistry journals were studied. Contrary to expectations, impact factors calculated asynchronously for shorter and longer periods yield similar values. A new overlap measure for journals is suggested, based on the frequency distribution of references by journal.

Restricted access

Summary  

This paper identifies and presents some characteristics of the psychology journals included in each of the Journal Citation Reports (JCR) categories in 2002. The study shows that most of the journals belong to the categories of Multidisciplinary Psychology (102) and Clinical Psychology (83). Their ranking is seen to vary depending on the category, and the same journal may occupy different positions in different JCR categories. Journals included in the categories of Biological Psychology, Experimental Psychology and Multidisciplinary Psychology had the highest impact factor (IF).

Restricted access

Abstract  

Impact factors for the 20 journals ranked first by the Journal Citation Reports (JCR) were compared with the same indicator calculated from citation data obtained from the Scopus database. A significant discrepancy was observed: although results differed from title to title, Scopus generally found more citations than were listed in the JCR. This also affected the ranking of the journals. A more thorough examination of two selected titles showed that the divergence resulted mainly from differences in the coverage of the two products, although other important factors also play their part.

Restricted access

Abstract  

The purpose of this study is to analyze the hypothetical changes in the 2002 impact factor (IF) of the biomedical journals included in the Science Citation Index-Journal Citation Reports (SCI-JCR) by also taking into account citations coming from 83 non-indexed Spanish journals on different medical specialties. A further goal of the study is to identify the subject categories of the SCI-JCR with the largest increase in their IF, and to estimate the 2002 hypothetical impact factor (2002 HIF) of these 83 non-indexed Spanish journals. It is demonstrated that the inclusion of citations from a selection of non-SCI-JCR-indexed Spanish medical journals in the SCI-JCR-indexed journals produces a slight increase in their 2002 IF, especially in journals edited in the USA and the UK. More than half of the non-indexed Spanish journals have a higher 2002 HIF than that of the SCI-JCR-indexed journal with the lowest IF in the same subject category.

Restricted access

Summary  

We investigated the distribution of citations included in documents labeled by the ISI as “editorial material” and how they contribute to the impact factor of journals in which the citing items were published. We studied all documents classified by the ISI as “editorial material” in the Science Citation Index between 1999 and 2004 (277,231 records corresponding to editorial material published in 6141 journals). The results show that most journals published only a few documents that included 1 or 2 citations that contributed to the impact factor, although a few journals published many such documents. The data suggest that manipulation of the impact factor by publishing large amounts of editorial material with many citations to the journal itself is not a widely used strategy to increase the impact factor.

Restricted access

Summary  

We present a new approach to study the structure of the impact factor of academic journals. This new method is based on calculation of the fraction of citations that contribute to the impact factor of a given journal that come from citing documents in which at least one of the authors is a member of the cited journal's editorial board. We studied the structure of three annual impact factors of 54 journals included in the groups “Education and Educational Research” and “Psychology, Educational” of the Social Sciences Citation Index. The percentage of citations from papers authored by editorial board members ranged from 0% to 61%. In 12 journals, for at least one of the years analysed, 50% or more of the citations that contributed to the impact factor were from documents published in the journal itself. Given that editorial board members are considered to be among the most prestigious scientists, we suggest that citations from papers authored by editorial board members should be given particular consideration.
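The metric described in this abstract is a simple proportion: of the citations contributing to a journal's IF, the share whose citing paper has at least one author on the cited journal's editorial board. A hypothetical sketch (all names and data are invented for illustration):

```python
def board_citation_share(citing_papers, board_members):
    """Fraction of IF-contributing citations whose citing paper has at
    least one author on the cited journal's editorial board.
    Each citing paper is represented as a set of author names."""
    board = set(board_members)
    hits = sum(1 for authors in citing_papers if board & set(authors))
    return hits / len(citing_papers)

board = {"A. Smith", "B. Jones", "C. Diaz"}   # hypothetical board
citing = [
    {"A. Smith", "D. Lee"},   # board member among the authors
    {"E. Kim"},
    {"B. Jones"},             # board member
    {"F. Wu", "G. Patel"},
]
print(board_citation_share(citing, board))    # 0.5
```

Applied per journal and per year, this fraction is what the study reports as ranging from 0% to 61% across the 54 journals analysed.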

Restricted access