Authors: Bárbara Lancho-Barrantes, Vicente Guerrero-Bote, and Félix Moya-Anegón
A study is described of the rank/JIF (Journal Impact Factor) distributions in the high-coverage Scopus database, using recent
data and a three-year citation window. It includes a comparison with an older study of the Journal Citation Report categories
and indicators, and a determination of the factors most influencing the distributions. While all the specific subject areas
fit a negative logarithmic law fairly well, those with a greater External JIF have distributions with a more sharply defined
peak and a longer tail—something like an iceberg. No S-shaped distributions, such as those predicted by Egghe, were found. A strong
correlation was observed between the knowledge export and import ratios. Finally, data from both Scopus and ISI were used
to characterize the rank/JIF distributions by subject area.
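The negative logarithmic law mentioned above can be illustrated with a simple least-squares fit of JIF against the logarithm of rank. This is a minimal sketch, not the paper's actual fitting procedure; the function name, interface, and the exact functional form JIF(r) ≈ a − b·ln(r) are assumptions:

```python
import math

def fit_neg_log_law(jif_values):
    """Fit JIF(r) ~ a - b*ln(r) to a descending rank/JIF curve
    by ordinary least squares of JIF on ln(rank)."""
    jif = sorted(jif_values, reverse=True)   # rank 1 = highest JIF
    x = [math.log(r) for r in range(1, len(jif) + 1)]
    mx = sum(x) / len(x)
    my = sum(jif) / len(jif)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, jif))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx                        # negative for a decaying curve
    return my - slope * mx, -slope           # (a, b)
```

Fitting synthetic data generated as 5 − 1.2·ln(r) recovers a ≈ 5 and b ≈ 1.2.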
Nanotechnology is an emerging field of science with the potential to generate new products, enhance existing ones, and transform
the production process. US patent data are used to track the emergence of nanotechnologies since 1978. The nanotechnologies
that have undergone the most development are identified using patent citation data and co-citation patterns of patents are
examined to define clusters of related nanotechnologies. The potential for economic impact of the emerging nanotechnologies
is assessed using a generality index.
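A generality index of this kind is conventionally computed, following Trajtenberg and colleagues, as one minus the Herfindahl concentration of the technology classes of citing patents. The sketch below, with an assumed interface, illustrates the idea:

```python
def generality_index(citing_class_counts):
    """Generality = 1 - sum of squared shares of citing-patent
    technology classes (a Herfindahl-style concentration measure).
    Values near 1 mean citations come from many different classes,
    suggesting broad (general) applicability."""
    total = sum(citing_class_counts)
    return 1.0 - sum((c / total) ** 2 for c in citing_class_counts)
```

A patent cited only from one class scores 0; one cited evenly from two classes scores 0.5.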
The most popular method for judging the impact of biomedical articles is the citation count, i.e., the number of citations received.
Its most significant limitation is that it cannot evaluate articles at the time of publication, since citations
accumulate over time. This work presents computer models that accurately predict citation counts of biomedical publications
within a deep horizon of 10 years using only predictive information available at publication time. Our experiments show that
it is indeed feasible to accurately predict future citation counts with a mixture of content-based and bibliometric features
using machine learning methods. The models pave the way for practical prediction of the long-term impact of publications, and
their statistical analysis provides greater insight into citation behavior.
Authors: Pedro Albarrán, Juan Crespo, Ignacio Ortuño, and Javier Ruiz-Castillo
In this paper, scientific performance is identified with the impact that journal articles have through the citations they
receive. In 15 disciplines, as well as in all sciences as a whole, the EU share of total publications is greater than that
of the U.S. However, as soon as the citations received by these publications are taken into account the picture is completely
reversed. Firstly, the EU share of total citations is still greater than that of the U.S. in only seven fields. Secondly, the mean
citation rate in the U.S. is greater than in the EU in every one of the 22 fields studied. Thirdly, since standard indicators—such
as normalized mean citation ratios—are silent about what takes place in different parts of the citation distribution, this
paper compares the publication shares of the U.S. and the EU at every percentile of the world citation distribution in each
field. It is found that in seven fields the initial gap between the U.S. and the EU widens as we advance towards the more
cited articles, while in the remaining 15 fields—except for Agricultural Sciences—the U.S. always surpasses the EU when it
counts, namely, at the upper tail of citation distributions. Finally, for all sciences as a whole the U.S. publication share
becomes greater than that of the EU for the top 50% of the most highly cited articles. The data used refers to 3.6 million
articles published in 1998–2002, and the more than 47 million citations they received in 1998–2007.
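The percentile-by-percentile comparison of publication shares described above can be sketched as follows. This is a hypothetical helper; field normalization and the paper's actual indicator definitions are omitted:

```python
def share_at_top(citations, regions, region, top_fraction):
    """Share of a region's articles among the top `top_fraction`
    most-cited articles in a field (e.g. top_fraction=0.5 for the
    top 50% of the citation distribution)."""
    order = sorted(range(len(citations)),
                   key=lambda i: citations[i], reverse=True)
    k = max(1, round(top_fraction * len(order)))
    top = order[:k]
    return sum(1 for i in top if regions[i] == region) / k
```

Sweeping `top_fraction` from 1.0 down towards 0 traces how a region's share evolves as attention narrows to the most highly cited articles.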
Witnessing a substantial growth rate in its scientific production, Iran is considered one of the recently rising stars on the
scientific contribution scene. However, its impact on scientific progress is largely unknown, especially at the global level. Studying
Iran’s scholarly publications and recognition in the SCI, the present communication tries to clarify the country’s science system
performance using regression analyses and then to compare its performance to that of the world, using Relative Citation Rate
(RCR) and Relative Subfield Citedness (RW). The results of the regression analyses reveal that although Iran displays considerable
weaknesses in its performance, it is increasingly recognized as its outputs grow. According to the RCR values, Iran performed
at/above the global level in 21 subfields. However, the RW values show that the country’s performance is above the global
level in only two subfields. Although Iran is very far from an ideal situation, this evidence can be considered a herald
of a successful movement towards a prosperous scientific future.
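The Relative Citation Rate used above is conventionally the ratio of a paper set's mean observed citation rate to the mean citation rate expected from the publishing journals. A minimal sketch, assuming per-paper observed and journal-expected counts are given:

```python
def relative_citation_rate(observed, expected):
    """RCR = mean observed citation rate / mean expected citation
    rate, where `expected` holds the average citation rate of the
    journal that published each paper. RCR >= 1 indicates
    performance at or above the journal-based baseline."""
    return (sum(observed) / len(observed)) / (sum(expected) / len(expected))
```

So a country whose papers collect, on average, exactly as many citations as their journals' averages scores an RCR of 1.0.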
The investigators studied author research impact using the number of citers per publication that an author’s research has
attracted, as opposed to the more traditional measure of citations. A focus on citers provides a complementary measure
of an author’s reach or influence in a field, whereas citations, although possibly numerous, may not reflect this reach, particularly
if many citations are received from a small number of citers. In this exploratory study, Web of Science was used to tally
citer and citation-based counts for 25 highly cited researchers in information studies in the United States and 26 highly
cited researchers from the United Kingdom. Outcomes of the tallies based on several measures, including an introduced ch-index,
were used to determine whether differences arise in author rankings when using citer-based versus citation-based counts. The
findings indicate a strong correlation between some citation and citer-based measures, but not with others. The findings of
the study have implications for the way authors’ research impact may be assessed.
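The ch-index introduced in the study follows the familiar h-type construction, applied to distinct-citer counts rather than citation counts. The generic computation can be sketched as:

```python
def h_type_index(counts):
    """Largest h such that h items each have a count >= h.
    With per-publication citation counts this is the h-index;
    with per-publication distinct-citer counts it gives a
    ch-style index."""
    h = 0
    for i, c in enumerate(sorted(counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h
```

For example, an author whose five publications attracted 10, 8, 5, 4, and 3 distinct citers would score 4, since four publications each drew at least 4 citers.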
Authors: Mark Elkins, Christopher Maher, Robert Herbert, Anne Moseley, and Catherine Sherrington
To determine the degree of correlation among journal citation indices that reflect the average number of citations per article,
the most recent journal ratings were downloaded from the websites publishing four journal citation indices: the Institute
of Scientific Information’s journal impact factor index, Eigenfactor’s article influence index, SCImago’s journal rank index and Scopus’ trend line index. Correlations were determined for each pair of indices, using ratings from all journals that could be identified as
having been rated on both indices. Correlations between the six possible pairings of the four indices were tested with Spearman’s
rho. Within each of the six possible pairings, the prevalence of identifiable errors was examined in a random selection of
10 journals and among the 10 most discordantly ranked journals on the two indices. The number of journals that could be matched
within each pair of indices ranged from 1,857 to 6,508. Paired ratings for all journals showed strong to very strong correlations,
with Spearman’s rho values ranging from 0.61 to 0.89, all p < 0.001. Identifiable errors were more common among scores for journals that had very discordant ranks on a pair of indices.
These four journal citation indices were significantly correlated, providing evidence of convergent validity (i.e. they reflect
the same underlying construct of average citability per article in a journal). Discordance in the ranking of a journal on
two indices was in some cases due to an error in one index.
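The Spearman's rho used to compare each pair of indices is the Pearson correlation computed on ranks, with average ranks assigned to ties. A self-contained sketch (in practice a library routine such as `scipy.stats.spearmanr` would be used):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the
    ranks of x and y, with average ranks for tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # group tied values and assign them their average rank
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    mx = sum(rx) / len(rx)
    my = sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Any strictly increasing relationship between two indices yields rho = 1.0 regardless of its shape, which is why rho suits ratings on different scales.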
In order to measure the degree to which Google Scholar can compete with bibliographical databases, search results from Google
Scholar are compared with those from Thomson’s ISI WoS (Institute for Scientific Information, Web of Science). For earth science literature,
85% of documents indexed by ISI WoS were recalled by Google Scholar. The ranks of records displayed in Google Scholar and ISI
WoS are compared by means of Spearman’s footrule. For impact measures, the h-index is investigated. Similarities in the measures were significant for the two sources.
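Spearman's footrule, used above to compare result rankings, sums the absolute rank differences over items common to both rankings. A minimal sketch, with rankings given as lists of identifiers ordered best-first:

```python
def spearman_footrule(ranking_a, ranking_b):
    """Sum of |position difference| for items present in both
    ranked lists; 0 means the shared items are ordered identically."""
    pos_b = {item: i for i, item in enumerate(ranking_b)}
    return sum(abs(i - pos_b[item])
               for i, item in enumerate(ranking_a)
               if item in pos_b)
```

Identical rankings score 0; reversing a three-item list scores 4 (2 + 0 + 2).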
Traditional co-citation analysis does not take into account the proximity of references co-cited by an article. Some references are cited within the same sentence, whereas other references may be cited in further
The paper introduces a concept for measuring the interpretive fragmentation of scientific fields through the analysis of their
citation networks. Transitive closure in two-mode networks is the basis of the proposed measurement. To test the validity of
the concept, two analyses are presented. One compares the integrity of two social sciences, sociology and economics, and a
natural science, biophysics. The results are in line with the widely held opinion that, because of the lack of cumulative and
consensual knowledge-production mechanisms, the social sciences are more disintegrated. Sociology is considerably more fragmented
than economics, as the different paradigm structures of these disciplines would predict. As a second test, the fragmentation
of scholarly communication inside and between the sub-fields of sociology is measured. The results correctly indicate that
meaning-making processes are taking place inside invisible colleges.