Search Results

You are looking at items 101–110 of 239.


Abstract  

To measure the degree to which Google Scholar can compete with bibliographical databases, search results from Google Scholar are compared with those from Thomson's ISI Web of Science (WoS). For earth science literature, 85% of the documents indexed by ISI WoS were recalled by Google Scholar. The rankings of records displayed in Google Scholar and ISI WoS are compared by means of Spearman's footrule. As an impact measure, the h-index is investigated. The similarities in measures between the two sources were significant.
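For readers unfamiliar with the comparison measure, a minimal sketch of Spearman's footrule follows; the document identifiers and the handling of items missing from one ranking are assumptions for illustration, not details taken from the paper.

```python
def spearman_footrule(ranking_a, ranking_b):
    """Spearman's footrule: the sum of absolute rank differences for
    items that appear in both rankings (lists ordered best-first)."""
    pos_a = {doc: i for i, doc in enumerate(ranking_a)}
    pos_b = {doc: i for i, doc in enumerate(ranking_b)}
    common = set(pos_a) & set(pos_b)  # assumption: compare shared items only
    return sum(abs(pos_a[doc] - pos_b[doc]) for doc in common)

# Hypothetical result lists from the two sources, best-ranked first.
google_scholar = ["doc1", "doc2", "doc3", "doc4"]
isi_wos = ["doc2", "doc1", "doc3", "doc5"]
print(spearman_footrule(google_scholar, isi_wos))  # -> 2
```

A footrule of 0 would mean the shared records appear in identical order in both sources.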


Abstract  

Relationships between the journal download immediacy index (DII) and several citation indicators are studied. The Chinese full-text database CNKI is used for data collection. Results suggest that the DII can be considered an independent indicator, but that it also has predictive value for other indicators, such as a journal's h-index. When a journal cannot yet have an impact factor, because its citation history within the database is too short, the DII can be used for a preliminary evaluation. The article provides results for the CNKI database as a whole and, additionally, some detailed information about agricultural and forestry journals.
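As a worked reading of the definition (an assumption on our part: the DII is taken here to mirror the citation immediacy index, with downloads in place of citations), a minimal sketch:

```python
def download_immediacy_index(downloads_in_year, items_published_in_year):
    """DII for a journal in year Y, assuming it mirrors the citation
    immediacy index: downloads in Y of items published in Y, divided
    by the number of items the journal published in Y."""
    if items_published_in_year == 0:
        return 0.0
    return downloads_in_year / items_published_in_year

# Invented figures for illustration only.
print(download_immediacy_index(4200, 150))  # -> 28.0
```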


In this paper we present characteristics of the statistical correlation between the Hirsch (h-) index and several standard bibliometric indicators, as well as with the results of peer review judgment. We use the results of a large evaluation study of 147 university chemistry research groups in the Netherlands, covering the work of about 700 senior researchers during the period 1991-2000. Thus, we deal with research groups rather than individual scientists, as we consider the research group the most important work-floor unit in research, particularly in the natural sciences. Furthermore, we restrict the citation period to a three-year window instead of 'lifetime counts' in order to focus on the impact of recent work and thus on current research performance. Results show that the h-index and our bibliometric 'crown indicator' both relate in a quite comparable way to peer judgments. For smaller groups in fields with 'less heavy citation traffic', however, the crown indicator appears to be the more appropriate measure of research performance.
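The 'crown indicator' referred to here is commonly written CPP/FCSm in the CWTS literature; a minimal sketch under that reading, with invented numbers:

```python
def crown_indicator(total_citations, publications, field_mean_citation_rate):
    """Crown indicator in its classic CPP/FCSm form: a group's citations
    per publication divided by the mean citation rate of its fields.
    Values above 1.0 indicate above-field-average impact."""
    cpp = total_citations / publications
    return cpp / field_mean_citation_rate

# Invented example: 480 citations to 60 papers in a field averaging 5.0.
print(crown_indicator(480, 60, 5.0))  # -> 1.6
```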


Abstract  

In general information production processes (IPPs), we define productivity as the total number of sources, and we present a choice of seven possible definitions of performance: the mean or median number of items per source, the fraction of sources with a certain minimum number of items, and the h-, g-, R- and hw-index. We give an overview of the literature on different types of IPPs and in each case interpret “performance” in these concrete settings. Examples are found in informetrics (including webometrics and scientometrics), linguistics, econometrics and demography. In Lotkaian IPPs we study these interpretations of “performance” as a function of the productivity of these IPPs. We show that the mean and median number of items per source, as well as the fraction of sources with a certain minimum number of items, are increasing functions of the productivity if and only if the Lotkaian exponent is a decreasing function of the productivity. We show that this property implies that the g-, R- and hw-indices are increasing functions of the productivity and, finally, that it implies that the h-index is an increasing function of productivity. We conclude that the h-index is the indicator that best shows the increasing relation between productivity and performance.
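Three of the seven performance measures named above have short operational definitions; a minimal sketch of the h-, g- and R-indices in their standard forms follows (the hw-index is omitted), with invented citation counts:

```python
import math

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cs = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cs, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations."""
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cs, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def r_index(citations):
    """Square root of the total citations received by the h-core."""
    cs = sorted(citations, reverse=True)
    return math.sqrt(sum(cs[:h_index(cs)]))

cites = [23, 17, 9, 8, 6, 3, 1, 0]     # invented item counts per source
print(h_index(cites), g_index(cites))  # -> 5 8
print(round(r_index(cites), 2))        # -> 7.94
```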


Erratum to: Scientometrics, DOI 10.1007/s11192-012-0671-3. Since the original publication of the article, the author would like to inform the readers that the concept of the “2 year synchronous h-index for journals” was first


In a recent note, Abt (2011) argues that, since an individual's h-index increases over career length, “it is unfair to compare the h-index of a young person with that of a mature researcher”. He therefore proposes dividing “a person's h


Abstract  

Recent research has shown that simple graphical representations of research performance can be obtained using two-dimensional maps based on impact (i) and citations (C). The product of impact and citations leads to an energy term (E). Indeed, using E as the third coordinate, three-dimensional landscape maps can be prepared. In this paper, instead of using the traditional impact factor and total citations received for journal evaluation, Article Influence™ and Eigenfactor™ are used as substitutes. Article Influence becomes a measure of quality (i.e. a proxy for the impact factor) and Eigenfactor a proxy for size/quantity (like citations); taken together, their product is an energy-like term. This can be used to measure the influence/prestige of a journal. It is also possible to propose a p-factor (where p = E^(1/3)) as an alternative measure of the prestige or prominence of a journal, playing a role equivalent to that of the h-index.
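The arithmetic behind the proposed p-factor is compact; a minimal sketch, assuming E is simply the product of the two substitute measures as described, with invented journal scores:

```python
def p_factor(article_influence, eigenfactor):
    """p = E**(1/3), where the energy-like term E is the product of a
    quality proxy (Article Influence) and a quantity proxy (Eigenfactor)."""
    energy = article_influence * eigenfactor
    return energy ** (1.0 / 3.0)

# Invented illustrative values, not real journal scores.
print(round(p_factor(article_influence=2.5, eigenfactor=0.08), 3))  # -> 0.585
```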


Abstract  

Given the current availability of different bibliometric indicators and of production and citation data sources, two questions immediately arise: do the indicators' scores differ when computed on different data sources? More importantly, do the indicator-based rankings significantly change when computed on different data sources? We provide a case study of computer science scholars and journals evaluated on the Web of Science and Google Scholar databases. The study concludes that Google Scholar computes significantly higher indicator scores than Web of Science. Nevertheless, citation-based rankings of both scholars and journals do not significantly change when compiled on the two data sources, while rankings based on the h-index show a moderate degree of variation.
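A minimal sketch of the kind of ranking comparison involved, using Spearman rank correlation on invented h-index scores (the study's actual statistical procedure may differ):

```python
from scipy.stats import spearmanr

# Invented h-index scores for the same five scholars in each source.
wos_scores = [12, 9, 15, 7, 11]
gs_scores = [20, 16, 22, 13, 14]  # higher absolute values, similar order

rho, pval = spearmanr(wos_scores, gs_scores)
print(f"rank correlation: {rho:.2f} (p = {pval:.3f})")  # rho = 0.90
```

High rho with systematically higher raw scores is exactly the pattern the abstract reports for citation-based indicators.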


Abstract  

The characteristic scores and scales (CSS), introduced by Glänzel and Schubert (J Inform Sci 14:123–127, 1988) and further studied in subsequent papers by Glänzel, can be calculated exactly in a Lotkaian framework. We prove that these CSS are simple exponents of the average number of items per source in general IPPs. The proofs are given using size-frequency functions as well as rank-frequency functions. We note that CSS do not necessarily have to be defined as averages; medians can be used as well. For these CSS we also present exact formulae in the Lotkaian framework, and both types of CSS are compared. We also link these formulae with the h-index.
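CSS have a simple iterative construction; a minimal sketch, assuming the usual reading in which each score is the mean of the papers cited at or above the previous score, with invented citation counts:

```python
def characteristic_scores(citations, k=3):
    """Characteristic scores b_1..b_k: b_1 is the mean citation rate of
    all papers; each subsequent b_j is the mean over the papers cited at
    least b_{j-1} times (the threshold choice is an assumption here)."""
    scores, subset = [], list(citations)
    for _ in range(k):
        if not subset:
            break
        b = sum(subset) / len(subset)
        scores.append(b)
        subset = [c for c in subset if c >= b]
    return scores

# Invented citation counts for illustration.
print(characteristic_scores([0, 1, 1, 2, 3, 5, 8, 20]))  # -> [5.0, 11.0, 20.0]
```

Replacing the mean with a median inside the loop gives the median-based variant the abstract mentions.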


Abstract  

This paper explores the relationship between patenting and publishing in the field of nanotechnology at Chinese universities. With their growing patent portfolios, Chinese universities are becoming a main technological source for nanotechnology development, which is extremely important in China. Matching the names of patentees to the names of research paper authors at Chinese universities, we find 6,321 authors with patents, i.e. inventor-authors, and 65,001 without any patent. Research performance is measured using three indicators: publication counts, total citations and the h-index of each researcher. We find that the research performance of authors who also hold patents is better than that of authors without a patent, and that most high-quality research is performed by inventor-authors. Our findings indicate that patent-oriented research may produce better results.
