Search Results
Abstract
The article describes a method for the online determination of the journal impact factor (JIF). The method is very simple and can be used both for the ISI-defined journal impact factor and for the calculation of other generalised journal impact factors. However, the direct online method fails for non-ISI journals, i.e. journals not indexed by ISI in its three citation databases. For such journals only the “External Cited Impact Factor” associated with citations from ISI journals (ECIF(ISI)) can be determined online by the common method. As an extra benefit, the online method makes it possible to determine the geographical distribution of citations and citable units in relation to any given JIF, i.e. the international impact of a particular journal in a given year. The method is illustrated by calculating the generalised JIF, self-citations and ECIF(ISI), as well as the international impact, for Journal of Documentation and Scientometrics.
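The standard two-year JIF calculation referred to in this abstract can be sketched as follows; the function name and all counts are hypothetical illustrations, not part of the article's online method.

```python
# Minimal sketch of the standard two-year journal impact factor:
# citations received in year Y to items published in years Y-1 and Y-2,
# divided by the citable items published in those two years.

def journal_impact_factor(citations_to_year, citable_items, year):
    """Two-year JIF for `year`, from counts keyed by publication year."""
    window = (year - 1, year - 2)
    cites = sum(citations_to_year.get(y, 0) for y in window)
    items = sum(citable_items.get(y, 0) for y in window)
    return cites / items if items else 0.0

# Hypothetical journal, evaluated for 2000: citations received in 2000,
# broken down by the cited items' publication year.
citations = {1998: 120, 1999: 150}
items = {1998: 80, 1999: 100}   # citable items published each year
print(journal_impact_factor(citations, items, 2000))  # 270 / 180 = 1.5
```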
Abstract
The uncitedness factor of a journal is its fraction of uncited articles. Given a set of journals (e.g. in a field) we can determine the rank-order distribution of these uncitedness factors. Here we use the Central Limit Theorem, which is valid for uncitedness factors since they are fractions, hence averages. A similar result was proved earlier for the impact factors of a set of journals. We then combine the two rank-order distributions, eliminating the rank and yielding the functional relation between the impact factor and the uncitedness factor. It is proved that this decreasing relation has an S-shape: first convex, then concave, with the inflection point at (μ′, μ), where μ is the average of the impact factors and μ′ is the average of the uncitedness factors.
Abstract
In recent years bibliometricians have paid increasing attention to methodological problems in research evaluation, among these the choice of the most appropriate indicators for evaluating the quality of scientific publications, and thus for evaluating the work of single scientists, research groups and entire organizations. Much literature has been devoted to analyzing the robustness of various indicators, and many works warn against the risks of using easily available and relatively simple proxies, such as the journal impact factor. The present work continues this line of research, examining whether the claim that the impact factor should always be avoided in favour of citations is valid, or whether the use of the impact factor could be acceptable, even preferable, in certain circumstances. The evaluation was conducted by observing all scientific publications in the hard sciences by Italian universities for the period 2004–2007. Sensitivity analyses of performance were conducted with changing indicators of quality and years of observation.
Abstract
In recent years, the critical evaluation of scientific productivity has been carried out with the help of the Journal Citation Reports ranking of journals. The relative performance of each journal is derived from a simple calculation called the Impact Factor. This measure has been widely criticized by scientometricians, but alternative proposals have never been adopted, perhaps due to their complexity, but also to economic limitations. For informetric purposes this situation has led to a worrying lack of standardization and, worst of all, renders many studies useless for comparative purposes. In order to enhance the comparative value of the impact factor, we develop a simple new method that increases the time period used for its calculation. This new index has advantages over the old one.
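The abstract's idea of lengthening the calculation window can be sketched by parameterising the standard two-year formula; the window lengths and counts below are illustrative assumptions, not the authors' exact index.

```python
def windowed_impact_factor(citations_to_year, citable_items, year, window=2):
    """Impact factor for `year`, computed over the previous `window`
    publication years instead of the usual two."""
    years = range(year - window, year)
    cites = sum(citations_to_year.get(y, 0) for y in years)
    items = sum(citable_items.get(y, 0) for y in years)
    return cites / items if items else 0.0

# Hypothetical per-year counts for one journal:
citations = {1996: 60, 1997: 90, 1998: 120, 1999: 150}
items = {1996: 70, 1997: 75, 1998: 80, 1999: 100}
print(windowed_impact_factor(citations, items, 2000, window=2))  # classic 2-year
print(windowed_impact_factor(citations, items, 2000, window=4))  # extended window
```

A longer window smooths year-to-year fluctuations, which is the comparability gain the abstract argues for.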
(1997), who finds that citation rates of papers determine the impact factor of journals, and not vice versa). It should be borne in mind that the acceptance of a paper for publication in a journal is typically based on one or more of the Editor
Abstract
Currently the Journal Impact Factor (JIF) attracts considerable attention as a component in the evaluation of the quality of research in and between institutions. This paper reports on a questionnaire study of publishing behaviour, of researchers' preferences when seeking new knowledge, and of the possible influence of the JIF on these variables. 54 Danish medical researchers active in the field of diabetes research took part. We asked the researchers to prioritise a series of scientific journals with respect to which journals they prefer for publishing research and for gaining new knowledge. In addition, we asked the researchers to indicate whether or not the JIF of the prioritised journals had any influence on these decisions. Furthermore, we explored the researchers' perceptions of the degree to which the JIF can be considered a reliable, stable or objective measure of the scientific quality of journals. Finally, we asked the researchers to judge the applicability of the JIF as a measure for research evaluation. One remarkable result is that approximately 80% of the researchers share the opinion that the JIF does indeed influence which journals they would prefer for publishing. Accordingly, we found a statistically significant correlation between how the researchers ranked the journals and the JIF of the ranked journals. Another notable result is that no significant correlation, measured by JIF, exists between the journals in which the researchers have actually published and the journals in which they would prefer to publish in the future. This could be taken as an indicator of the actual motivational influence of the JIF on the publication behaviour of the researchers. That is, the impact factor actually works in our case. It seems that the researchers find it fair and reliable to use the Journal Impact Factor for research evaluation purposes.
Abstract
Selecting an appropriate set of scientific journals which best meets the users' needs and the dynamics of science requires weight parameters by which journals can be ranked. Previous methods are based on the simple counting of relevant articles, or hits, in SDI runs. The new method proposed here combines hit numbers in SDI runs with journals' impact factors into a weight parameter called Selective Impact. The experimental results show that ranking by Selective Impact leads to higher-quality conclusions being drawn from journal rank distributions.
Abstract
This article proposes a new index, the “Article-Count Impact Factor” (ACIF), for evaluating journal quality in light of citation behaviour, in comparison with the ISI journal impact factor. The ACIF index is the ratio of the number of articles cited in the current year to the source items published in that journal during the previous two years. In this work, we used as the data source 171 journal titles in materials categories published in the years 2001–2004 in international journals indexed in the Science Citation Index Expanded (SCI) database. It was found that the ACIF index could be used as an alternative tool for assessing journal quality, particularly where the assessed journals have the same (equal or similar) JIF values. The experimental results suggest that the higher the ACIF value, the greater the number of articles being cited. Changes in ACIF values depended more on the JIF values than on the total number of articles. Polymer Science had the greatest ACIF values, suggesting that articles in Polymer Science had a greater “citation per article” rate than those in Metallurgical Engineering and Ceramics. It was also suggested that, in order to increase the JIF by 1.000, the Ceramics category required more articles to be cited than the Metallurgical Engineering and Polymer Science categories.
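The ACIF ratio defined in this abstract can be sketched directly; the journal figures below are hypothetical examples, not data from the study.

```python
def acif(cited_article_ids, source_items_prev_two_years):
    """Article-Count Impact Factor: number of distinct articles cited in
    the current year, divided by the source items published in the
    journal over the previous two years."""
    return len(set(cited_article_ids)) / source_items_prev_two_years

# Hypothetical journal: 60 distinct articles cited this year, out of
# 150 source items published over the previous two years.
cited = [f"art-{i}" for i in range(60)]
print(acif(cited, 150))  # 0.4
```

Because the numerator counts *distinct* cited articles rather than total citations, two journals with equal JIF can have different ACIF values, which is the tie-breaking use the abstract describes.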
Abstract
This article introduces the Impact Factor squared or IF2-index, an h-like indicator of research performance. This indicator reflects the degree to which large entities such as countries and/or their states participate in top-level research in a field or subfield. The IF2-index uses the Journal Impact Factor (JIF) of research publications instead of the number of citations. The same concept is applied to other h-type indexes and their results compared to the IF2-index. These JIF-based indexes are then used to assess the overall performance of cancer research in Australia and its states over 8 years, from 1999 to 2006. The IF2-index has three advantages when evaluating larger research units: firstly, it provides a stable value that does not change over time, reflecting the degree to which a research unit participated in top-level research in a given year; secondly, it can be calculated close to the publication date of yearly datasets; and finally, it provides an additional dimension when a full article-based citation analysis is not feasible. As the index reflects the degree of participation in top-level research, it may favor larger units when units of different sizes are compared.
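One plausible reading of an h-type rule applied to JIFs, as this abstract describes, is sketched below; the authors' precise definition may differ, so the cut-off rule and the JIF values used are assumptions for illustration only.

```python
def h_type_index_over_jifs(jifs):
    """Assumed h-type rule over journal impact factors: the largest h
    such that the unit has h publications in journals with JIF >= h."""
    ranked = sorted(jifs, reverse=True)
    h = 0
    for i, jif in enumerate(ranked, start=1):
        if jif >= i:
            h = i
        else:
            break
    return h

# Hypothetical unit with five publications in journals of these JIFs:
print(h_type_index_over_jifs([5.2, 4.1, 3.0, 2.5, 1.0]))  # 3
```

Because journal JIFs are fixed once the yearly Journal Citation Reports appear, such an index can indeed be computed soon after publication and does not drift as citations accrue, matching the stability advantage the abstract claims.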
Summary
This paper estimates the long-term impact of journals aggregated in 24 different fields, using a simple logistic diffusion model, and relates the results to the current impact factor. Results show that while the current and the long-term impact factors have a high correlation coefficient, some fields are systematically slower-moving than others, as they often differ in the proportion of the overall impact through time that occurs in the short term.