) also compared the overall S&T publications output of India, China, and South Korea across twenty broad subjects, as defined by the Scopus bibliographical database, in terms of selected indicators.
The objectives of the
results in high values of A.
The aim of the present paper is fourfold: (1) to analyze the nature of the Hirsch constant A of the publication output of different professors working in different institutions in Poland and their Hirsch index h
Authors: Krisztina Károly, Bea Winkler, and Péter Kiszl
not (only) European researchers. The quantitative analysis of the articles also shows that international cooperation plays a crucial role in the publication output of the journal. The figures reported so far demonstrate that the journal works
The term “European Paradox” describes the perceived failure of the EU to capture full benefits of its leadership of science
as measured by publications and some other indicators. This paper investigates what might be called the “American Paradox,”
the decline in scientific publication share of the U.S. despite world-leading investments in research and development (R&D)
— particularly as that decline has accelerated in recent years. A multiple linear regression analysis was made of which inputs
to the scientific enterprise are most strongly correlated with the number of scientific papers produced. Research investment
was found to be much more significant than labor input, government investment in R&D was much more significant than that by
industry, and government non-defense investment was somewhat more significant than its defense investment. Since the EU actually
leads the U.S. in this key component, this could account for gradual loss of U.S. paper share and EU assumption of leadership
of scientific publication in the mid-1990s. More recently the loss of U.S. share has accelerated, and three approaches were used to analyze this phenomenon: (1) A companion paper shows that the SCI database has not significantly changed to be less favorable to the
U.S.; thus the decline is real and is not an artifact of the measurement methods. (2) Budgets of individual U.S. research
agencies were correlated with overall paper production and with papers in their disciplines. Funding for the U.S. government
civilian, non-healthcare sector was flat in the last ten years, resulting in declining share of papers. Funding for its healthcare
sector sharply increased, but there were few additional U.S. healthcare papers. While this inefficiency contributes to loss
of U.S. share, it is merely a specific example of the general syndrome that increased American investments have not produced
increased publication output. (3) In fact the decline in publication share appears to be due to rapidly increasing R&D investments
by China, Taiwan, S. Korea, and Singapore. A model shows that in recent years it is a country’s share of world investment that is most predictive of its publication share. While the U.S. has increased its huge R&D investment, its investment share still declined because of even more rapidly increasing
investments by these Asian countries. This has likely led to their sharply increased share of scientific publication, which
must result in declining shares for others: the U.S. and, more recently, the EU.
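The multiple linear regression described above can be sketched as an ordinary least-squares fit of paper output on research inputs. The data and variable names below (government R&D, industry R&D, research labor) are invented, noiseless placeholders for illustration, not the paper's actual dataset; with noiseless synthetic data the coefficients are recovered exactly.

```python
import numpy as np

# Hypothetical inputs per year: government R&D, industry R&D, research labor
# (arbitrary units). These numbers are made up for demonstration only.
X = np.array([
    [1.0, 2.0, 3.0],
    [2.0, 1.0, 4.0],
    [3.0, 3.0, 2.0],
    [4.0, 2.0, 5.0],
    [5.0, 4.0, 3.0],
])
# Fictional paper counts, constructed as y = 1 + 2*gov + 0.5*ind + 1*labor
y = np.array([7.0, 9.5, 10.5, 15.0, 16.0])

# Add an intercept column and solve the least-squares problem; the size of
# each fitted coefficient indicates how strongly that input is associated
# with paper output, which is the shape of the analysis in the abstract.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, b_gov, b_ind, b_labor = coef
```

In this toy setup the government-R&D coefficient comes out largest, mimicking the abstract's finding that government research investment is the strongest correlate, but of course real conclusions require real data and significance testing.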
The publication and citation records of a group of 34 senior members of the faculty of the Department of Chemistry at Technion-Israel Institute of Technology over the period 1980–90 have been analyzed, under the contention that dealing with a small group makes it possible to pay adequate attention to the methodology of the measurement and analysis processes. Choosing the most suitable index for measuring publications output has been considered in detail; it is suggested that it is essential to make allowances both for the number of co-authors and for the lengths of publications in order to obtain a more valid measure than is provided by a simple count of equally weighted publications. Analogously, it is argued that simple citation counts provide an inadequate measure of the impact that publications make on the group outside the authors' immediate circle, and thus that it is necessary to subtract self-citations and divide the credit for a citation among the co-authors of the publication. Results of the analysis show that, in agreement with all previous findings, a few members (perhaps fewer than 20%) produce more than half the publications and receive more than half the citations of the group as a whole.
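The adjusted counting the abstract advocates can be sketched as follows. The exact weighting formulas are an assumption for illustration (credit each paper by its page length split among co-authors; credit citations after removing self-citations and splitting among co-authors); the paper's own scheme may differ in detail.

```python
def fractional_publication_score(papers):
    """papers: list of (n_pages, n_authors) tuples.

    Illustrative weighting: credit each paper by its length,
    divided among its co-authors, instead of counting every
    publication as 1.
    """
    return sum(pages / authors for pages, authors in papers)

def adjusted_citation_score(records):
    """records: list of (citations, self_citations, n_authors) tuples.

    Illustrative adjustment: subtract self-citations, then divide
    the remaining citation credit among the co-authors.
    """
    return sum((c - s) / a for c, s, a in records)

# Example: a 10-page paper with 2 authors and a 6-page paper with 3 authors.
pub_score = fractional_publication_score([(10, 2), (6, 3)])   # 5.0 + 2.0 = 7.0
cite_score = adjusted_citation_score([(10, 2, 2), (5, 1, 4)]) # 4.0 + 1.0 = 5.0
```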
The output of a total of 860 publications in physics for the period 1938–1987 is used to analyse the mainstream of physics research in Turkey. The productivity and growth characteristics of the research in experimental and theoretical areas as well as in different subfields and institutions in the country are briefly discussed. The total output is also assessed by its citation impact.
Authors: Thomas Anderson, Robin Hankin, and Peter Killworth
An individual’s h-index corresponds to the number h of his/her papers that have each received at least h citations. When the citation count of an article exceeds h, however, as is the case for the hundreds or even thousands of citations that accompany the most highly cited papers, no additional credit is given (these citations fall outside the so-called “Durfee square”). We propose a new bibliometric index, the “tapered h-index” (hT), that positively enumerates all citations while scoring them on an equitable basis with h.
The career progressions of hT and h are compared for six eminent scientists in contrasting fields. Calculated hT for year 2006 ranged between 44.32 and 72.03, with a corresponding range in h of 26 to 44. We argue that the hT-index is superior to h, both theoretically (it scores all citations) and because it shows smooth increases from year to year, as compared with the
irregular jumps seen in h. Conversely, the original h-index has the benefit of being conceptually easy to visualise. Qualitatively, the two indices show remarkable similarity
(they are closely correlated), such that either can be applied with confidence.
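The two indices above can be sketched in a few lines. The hT weighting used here, where the j-th citation of the paper ranked i contributes 1 / (2·max(i, j) − 1), is the tapered-index scheme as commonly stated; treat this as an illustration rather than the paper's verbatim definition.

```python
def h_index(citations):
    """Classic h-index: the largest h such that h papers each have
    at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break  # citations are descending, so no later rank can qualify
    return h

def tapered_h_index(citations):
    """Tapered h-index (hT), assumed weighting: the j-th citation of the
    paper ranked i (by descending citation count) contributes
    1 / (2 * max(i, j) - 1). Every citation thus adds some credit,
    tapering off outside the Durfee square."""
    cites = sorted(citations, reverse=True)
    total = 0.0
    for i, c in enumerate(cites, start=1):
        for j in range(1, c + 1):
            total += 1.0 / (2 * max(i, j) - 1)
    return total
```

For a single paper with one citation both indices give 1; extra citations on the same paper then raise hT smoothly (1 + 1/3 + 1/5 + ...) while h stays at 1, which is the "smooth increases" behavior the abstract describes.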
Cross-field comparison of scientometric indicators is severely hindered by the differences in publication and citation habits of science fields. However, by relating publication and citation indicators to proper field-specific reference standards, relative indicators can be built, which may prove rather useful in the comparative assessment of scientists, groups, institutions, or countries. The use of relational charts in displaying the indicators broadens the scope of such assessments. Relative indicators of chemistry research in 25 countries are presented as an illustrative example.
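A minimal sketch of one such relative indicator, assuming the common construction of dividing a unit's citations-per-paper by a field-specific world baseline; the baseline figure below is invented for illustration.

```python
def relative_citation_rate(citations, n_papers, field_baseline):
    """Ratio of a unit's citations-per-paper to the world average
    citations-per-paper of its field. Values above 1 indicate
    above-world-average impact within that field, which is what makes
    the measure comparable across fields with different citation habits."""
    return (citations / n_papers) / field_baseline

# Hypothetical example: 100 papers attracting 450 citations in a field
# whose (invented) world average is 3.0 citations per paper.
rcr = relative_citation_rate(citations=450, n_papers=100, field_baseline=3.0)
# 4.5 citations/paper against a baseline of 3.0 gives 1.5
```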
This paper seeks to provide current indicators on Indian science and technology for measuring the country’s progress in research.
The study uses 11 years of publication data on India and the top 20 productive countries, drawn from the Scopus database for the period 1996 to 2006. The study examines country performance on several measures, including country publication
share in the world research output, country publication share in various subjects in the national context and in the global
context, patterns of research communication in core Indian domestic and international journals, geographical distribution
of publications, share of international collaborative papers at the national level as well as across subjects and characteristics
of high-productivity institutions, scientists, and cited papers. The paper also compares the similarity of the Indian research profile with those of the top 20 productive countries. The findings of the study should be of special significance to planners and policy-makers, as they have implications for the long-term S&T planning of the country.
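Two of the measures mentioned above can be sketched simply. Publication share is a straightforward percentage; for profile similarity, cosine similarity between subject-wise publication vectors is a common choice in such studies, but using it here is our assumption, since the abstract does not name the measure.

```python
import math

def publication_share(country_papers, world_papers):
    """A country's percentage share of world publication output."""
    return 100.0 * country_papers / world_papers

def profile_similarity(profile_a, profile_b):
    """Cosine similarity between two subject-wise publication profiles
    (equal-length vectors of paper counts per subject). 1.0 means the two
    countries distribute their output across subjects in identical
    proportions; 0.0 means no subject overlap. Assumed measure, for
    illustration only."""
    dot = sum(a * b for a, b in zip(profile_a, profile_b))
    norm_a = math.sqrt(sum(a * a for a in profile_a))
    norm_b = math.sqrt(sum(b * b for b in profile_b))
    return dot / (norm_a * norm_b)
```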