Search Results
Summary In a recent article Sombatsompop et al. (2004) proposed a new way of calculating a synchronous journal impact factor. Their proposal seems quite interesting and will be discussed in this note. Their index will be referred to as the Median Impact Factor (MIF). I explain every step in detail so that readers with little mathematical background can understand and apply the procedure. Illustrations of the procedure are presented. Some attention is given to the estimation of the median cited age in case it is larger than ten years. I think the idea introduced by Sombatsompop, Markpin and Premkamolnetr has great theoretical value, as they are - to the best of my knowledge - the first to consider impact factors that use not years as a basic ingredient, but an element of the actual form of the citation curve. The MIF is further generalized to the notion of a percentile impact factor.
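The core idea, reading a percentile (such as the median) off a journal's own cumulative citation curve instead of using a fixed number of years, can be sketched in a few lines. This is an illustrative toy computation of a percentile cited age, not Sombatsompop et al.'s exact MIF formula; the function name and the citation counts are hypothetical.

```python
def percentile_cited_age(citations_by_age, p=0.5):
    """Age (in years, 1-indexed) by which a fraction p of all citations
    to a journal has accrued. p=0.5 gives the median cited age; other
    values of p generalize this to a percentile version."""
    total = sum(citations_by_age)
    cumulative = 0
    for age, cites in enumerate(citations_by_age, start=1):
        cumulative += cites
        if cumulative >= p * total:
            return age
    return len(citations_by_age)

# hypothetical citation counts at ages 1..6 for one journal
print(percentile_cited_age([10, 30, 25, 15, 12, 8], p=0.5))  # → 3
```

Because the answer depends on the shape of the journal's own citation curve rather than on a calendar window, the estimation problem the note discusses (a median cited age beyond the observed range) corresponds to the fallback branch here.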
journals (currently more than 8,200 journals from more than 3,300 publishers in 60 countries) are listed in the JCR with a series of bibliometric data and indicators (e.g., total citations, Journal Impact Factor (JIF), Journal Immediacy Index, Journal Cited
Summary In this study, journal impact factors play a central role. In addition to this important bibliometric indicator, which revolves around the average impact of a journal in a two-year timeframe, related aspects of journal impact measurement are studied. Aspects like the output volume, the percentage of publications not cited, and the citation frequency distribution within a set timeframe are researched, and put in perspective with the 'classical' journal Impact Factor. In this study it is shown that these aspects of journal impact measurement play a significant role and are strongly inter-related. Especially the separation between journals on the basis of differences in output volume seems to be relevant, as can be concluded from the different results in the analysis of journal impact factors, the degree of uncitedness, and the share of a journal's contents above or below the impact factor value.
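A toy sketch of why these aspects interact: with a skewed citation distribution, the mean-based impact factor sits above what most articles achieve, so uncitedness and the share of items below the IF value carry real information. The function and the citation counts below are hypothetical, not the study's own method or data.

```python
def impact_profile(cites_per_item):
    """Toy per-journal profile: the mean citation rate (an IF-like value),
    the fraction of uncited items, and the fraction of items cited less
    often than the mean (illustrating the skewed distribution)."""
    n = len(cites_per_item)
    mean = sum(cites_per_item) / n
    uncited = sum(1 for c in cites_per_item if c == 0) / n
    below_mean = sum(1 for c in cites_per_item if c < mean) / n
    return mean, uncited, below_mean

# hypothetical, typically skewed citation counts for 10 articles
mean, uncited, below = impact_profile([0, 0, 0, 1, 1, 2, 2, 3, 5, 26])
print(mean, uncited, below)  # → 4.0 0.3 0.8
```

In this made-up example one heavily cited article lifts the mean to 4.0 while 30% of items are uncited and 80% sit below the IF-like value, which is the kind of inter-relation the study describes.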
Summary
Based on the convolution formula of the disturbed aging distribution (Egghe & Rousseau, 2000) and the transfer function model of the publishing delay process, we establish the transfer function model of the disturbed citing process. Using the model, we make simulative investigations of disturbed citation distributions and impact factors under different average publication delays. These simulative results show that the larger the increase in the average publication delay in a scientific field, the further the citation distribution curves shift backwards and the more the impact factors of journals in the field fall. Based on some theoretical hypotheses, it is shown that there exists theoretically an approximate inverse linear relation between the field (or discipline) average publication delay and the journal impact factor.
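A minimal numerical sketch of the qualitative claim, assuming a synthetic unimodal citation-age curve rather than the authors' transfer-function model: a publication delay shifts the citation curve backwards, so the fixed two-year counting window catches only the rising part of the curve, and an IF-like value falls as the delay grows.

```python
import math

def citation_curve(age):
    # synthetic unimodal citation-age curve, peaking around age 2
    return age * math.exp(-age / 2.0) if age > 0 else 0.0

def two_year_if(delay):
    # with an average publication delay, citations counted in the usual
    # two-year window were generated at earlier effective ages of the
    # cited articles, i.e. on the rising part of the curve
    return citation_curve(1 - delay) + citation_curve(2 - delay)

# the IF-like value falls monotonically as the average delay grows
for d in (0.0, 0.5, 1.0):
    print(f"delay={d:.1f}  IF-like value={two_year_if(d):.3f}")
```

This is only a caricature of the effect's direction; the approximate inverse linear relation in the abstract comes from the authors' theoretical hypotheses, not from this toy curve.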
Summary
Based on the impact factors of the journals recorded in the JCR from 1998 to 2003, this paper establishes a fluctuation model for discipline development. Using the Fluctuation Strength Coefficient, we then analyse and evaluate the developing trends of the disciplines in recent years.
Abstract
An attempt is made to correlate bibliometric data of journals (impact factors, half-life) for scientific disciplines in the exact sciences with bibliotheconomic data (subscription prices, prices per article and holdings). Data are presented for 5399 journals in 131 disciplines, as mentioned in the Journal Citation Reports 1900 (Science Citation Index).
Abstract
We weighted the output of SCI items from Australian universities using journal impact factors. This provides us with an accessible quality indicator of science journal publishing, and allows us to scale for institutional size in terms of output and research staff. Use of this indicator for the 20 pre-1987 Australian universities demonstrates that although some universities rank highly on output, when scaled for institutional size they are overtaken by some of the smaller, more recently established universities.
Abstract
Measurement of research activity still remains a controversial question. The use of the impact factor from the Institute for Scientific Information (ISI) is quite widespread nowadays for carrying out evaluations of all kinds; however, the calculation formula employed by ISI to construct its impact factors biases the results in favour of knowledge fields which are better represented in the sample, cite more on average, and whose citations are concentrated in the early years of the articles. In the present work, we put forward a theoretical proposal for how aggregated normalization should deal with these biases, allowing scientific production to be compared between fields, institutions and/or authors in a neutral manner. The technical complexity of such work, together with data limitations, leads us to propose some adjustments to the impact factor proposed by ISI which, although they do not completely solve the problem, reduce it and point the way towards more neutral evaluations. The proposal is empirically applied to three analysis levels: single journals, knowledge fields and the set of journals from the Journal Citation Reports.
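The direction of such an adjustment can be illustrated with a generic field-normalization sketch. This is a common relative-impact construction, not the authors' specific aggregated-normalization formula, and all numbers are made up: dividing a journal's IF by the mean IF of its field removes the between-field level differences described above.

```python
def normalized_if(journal_if, field_mean_if):
    """Relative impact: a journal's IF divided by the mean IF of its
    field. Values above 1 mean above the field average, so journals
    from high-citing and low-citing fields become comparable."""
    return journal_if / field_mean_if

# made-up numbers: the same raw IF reads very differently by field
print(normalized_if(2.0, 4.0))  # → 0.5 (below average in a high-citation field)
print(normalized_if(2.0, 1.0))  # → 2.0 (well above average in a low-citation field)
```

Note this simple ratio corrects only the average-citation-level bias; the sample-coverage and citation-age biases the abstract mentions need the fuller adjustments the authors propose.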
Introduction The series of papers (Vanclay 2008a, b, 2009, 2011, 2012), reporting the opinions, comments and “findings” of the author about the Journal Citation Reports (JCR), the Journal Impact Factor (JIF) and Web of
Introduction In the lead article of this topical issue entitled “Impact Factor: Outdated artefact or stepping-stone of journal certification?” Jerome K. Vanclay focuses primarily on data errors in the database of Thomson