Search Results

You are looking at 1 - 10 of 14 items for

  • Author or Editor: Gangan Prathap
  • Refine by Access: All Content

Abstract

The h-index is now used almost as a canonical tool for the research assessment of individuals, research faculties and institutions, and even for comparing the performance of journals and countries. However, its limitations have also been noted, and many Hirsch-type variants have been proposed. In this paper, the recently proposed "mock h-index" is compared with the "tapered h-index".
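
A minimal sketch of the baseline index and one of the two variants named above, assuming the Anderson-Hankin-Killworth (2008) definition of the tapered h-index, in which the citation at paper rank r and citation column k is credited 1/(2*max(r, k) - 1). The mock h-index is not reproduced, since its formula is not given in this abstract; the citation record is hypothetical.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def tapered_h_index(citations):
    """Each citation is credited 1/(2*max(rank, k) - 1), so a full
    Durfee square of side h scores exactly h."""
    ranked = sorted(citations, reverse=True)
    total = 0.0
    for rank, c in enumerate(ranked, start=1):
        for k in range(1, c + 1):
            total += 1.0 / (2 * max(rank, k) - 1)
    return total

cites = [10, 8, 5, 4, 3, 0]        # hypothetical citation record
print(h_index(cites))              # -> 4
print(round(tapered_h_index(cites), 3))
```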

Restricted access

Abstract

Journals have been ranked on the basis of impact factors for a long time. The impact factor is a quality indicator, and it often favours review journals with few articles. Integrated impact indicators try to factor in size (quantity) as well, and are correlated with the total number of citations. The total number of papers in a portfolio can be considered a zeroth order performance indicator and the total number of citations a first order performance indicator. Indicators like the h-index and the g-index are actually performance indicators in that they integrate both quality and quantity assessment into a single number. The p-index is another variant of this class of performance indicators and is based on the cube root of a second order performance indicator called the exergy indicator. The Eigenfactor score and Article Influence are, respectively, first order quantity and quality indicators. In this paper, we confirm the above relationships.
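
A minimal sketch of this hierarchy, assuming the quality proxy i = C/P used in the later abstracts (so the second order exergy term is iC = C^2/P and the p-index is its cube root); the portfolio numbers are hypothetical.

```python
def indicator_hierarchy(P, C):
    """P: zeroth order; C: first order; exergy X = i*C = C**2/P:
    second order; the p-index is the cube root of the exergy term."""
    i = C / P                 # quality: citations per paper
    X = i * C                 # second order exergy indicator (= C**2 / P)
    p = X ** (1.0 / 3.0)      # p-index
    return {"P": P, "C": C, "i": i, "X": X, "p": p}

print(indicator_hierarchy(P=50, C=400))   # hypothetical portfolio
```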

Restricted access

Abstract

Quality, quantity, performance… An unresolved challenge in performance evaluation, in a very general context that goes beyond scientometrics, has been to determine a single indicator that can combine the quality and quantity of output or outcome. Toward this end, we start from metaphysical considerations and propose introducing a new term, quasity, to describe those quantity terms which incorporate a degree of quality and best measure the output. The product of quality and quasity then becomes an energy term which serves as a performance indicator. Lessons from kinetics, bibliometrics and sportometrics are used to build up this theme.
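
A toy sketch of the quality-quasity product, assuming (as in the E = iC abstract further down) that the citation count C = iP plays the role of quasity while i = C/P is the quality; both portfolios are hypothetical.

```python
def energy(P, C):
    """Performance "energy" E = quality * quasity = (C/P) * C = i**2 * P."""
    i = C / P          # quality: impact per paper
    return i * C       # quasity C weighted once more by quality

# Two hypothetical portfolios with the same citation total: the energy
# term rewards the one that earns its citations with higher quality.
for P, C in [(50, 400), (200, 400)]:
    print(f"P={P:3d}  C={C}  i={C / P:4.1f}  E={energy(P, C):7.1f}")
```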

Restricted access

Abstract

In this comment, we re-evaluate an example using a "thermodynamic" paradigm to show how bibliometrics can incorporate normalization into the evaluative process. The motivation is a recent exchange in the pages of this journal between two groups that have taken different positions on how normalization should be done.

Restricted access

Abstract  

Recent research has shown that simple graphical representations of research performance can be obtained from two-dimensional maps based on impact (i) and citations (C). The product of impact and citations leads to an energy term (E); indeed, using E as the third coordinate, three-dimensional landscape maps can be prepared. In this paper, instead of the traditional impact factor and total citations received, Article Influence™ and Eigenfactor™ are used as substitutes for journal evaluation. Article Influence becomes a measure of quality (i.e. a proxy for the impact factor) and Eigenfactor is a proxy for size/quantity (like citations); taken together, the product is an energy-like term that can be used to measure the influence/prestige of a journal. It is also possible to propose a p-factor (where p = E^{1/3}) as an alternative measure of the prestige or prominence of a journal, playing a role equivalent to that of the h-index.
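
A short sketch of this substitution, with hypothetical journal scores: Article Influence stands in for quality, Eigenfactor for quantity, their product is the energy-like term E, and p = E^{1/3} is the proposed p-factor.

```python
journals = {
    "Journal A": {"article_influence": 2.5, "eigenfactor": 0.040},
    "Journal B": {"article_influence": 0.8, "eigenfactor": 0.150},
}

for name, s in journals.items():
    E = s["article_influence"] * s["eigenfactor"]   # energy-like term
    p = E ** (1.0 / 3.0)                            # p-factor
    print(f"{name}: E = {E:.4f}, p = {p:.4f}")
```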

Restricted access

Abstract

Quantitative assessment of information production processes requires the definition of a robust citation performance indicator. This is particularly so where a normalization mechanism is needed to correct for quality across fields and disciplines. In this paper, we offer insights from the "thermodynamic" approach, in terms of quality, quantity and quasity, and of energy, exergy and entropy, to show how the recently introduced expected value measure can be rationalized and improved. The normalized energy indicator E is proposed as a suitable single-number scalar indicator of a scientist's or group's performance (i.e. as a multiplicative product of quality and quantity) when complete bibliometric information is available.
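
One plausible reading of the normalization step, sketched under stated assumptions: each paper's citations are divided by a field-expected citation rate (a stand-in for the expected value measure), the mean of these ratios gives the normalized quality i, and E = i * (i * P) is the multiplicative quality-quantity product; all numbers below are made up for illustration.

```python
def normalized_energy(citations, expected):
    """Normalized quality i = mean(c_k / e_k); energy E = i**2 * P."""
    x = [c / e for c, e in zip(citations, expected)]  # relative impact per paper
    P = len(x)
    i = sum(x) / P            # normalized quality (mean relative impact)
    return i * (i * P)        # E = quality * quality-weighted quantity

cites    = [12, 3, 30, 7]     # hypothetical citation counts
expected = [10, 5, 20, 10]    # hypothetical field baselines
print(normalized_energy(cites, expected))   # -> 4.0
```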

Restricted access

Abstract  

The h-index has captured the imagination of scientometricians and bibliometricians to such an extent that one can now virtually divide the history of the subject into a pre-Hirsch and a post-Hirsch period. Beyond its academic value, it is now used as a tool for the research assessment of individuals, research faculties and institutions, and even for comparing the performance of journals and countries. Since its introduction, many Hirsch-type variants have been proposed to overcome perceived limitations of the original index. In this paper, using ideas from mathematical modeling, another mock h-index is proposed which may complement the h-index and give it better resolving power.

Restricted access

Abstract

Scalar measures of research performance (Energy, Exergy, and Entropy or EEE) are based on what can be called the bibliometrics-thermodynamics consilience. Here, their application to the percentile ranking normalization scheme is demonstrated.
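
A brief sketch of the three scalars applied to percentile scores, assuming the definitions used in this line of work: energy E = sum of x_k^2, exergy X = (sum of x_k)^2 / P, and entropy S = E - X (P times the variance of the scores); the percentile ranks below are hypothetical.

```python
def eee(percentiles):
    """Energy, exergy, and entropy of a set of percentile scores."""
    x = [p / 100.0 for p in percentiles]   # percentile ranks scaled to [0, 1]
    P = len(x)
    E = sum(v * v for v in x)              # energy: total second moment
    X = sum(x) ** 2 / P                    # exergy: coherent (mean) part
    S = E - X                              # entropy: dispersion about the mean
    return E, X, S

print(eee([99, 75, 50, 10]))               # hypothetical percentile ranks
```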

Restricted access

Abstract  

An indicator called the performance index (p-index), which can effectively combine the size and quality of scientific papers, mocking what the h-index does, emerges from an energy-like term E = iC, where the quality i is expressed as the ratio of citations C to papers published P. In this paper, we demonstrate how this energy paradigm can be used for bibliometric research assessment. The technique is illustrated by applying it to the research assessment of all the countries listed in Essential Science Indicators. Partitioning is easily done using contour lines on the two-dimensional iCE (impact–Citations–Energy) map.
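
A minimal sketch of this bookkeeping at the country level; the publication and citation counts are invented, not taken from Essential Science Indicators. On the (i, C) map, contours of constant E are the hyperbolas iC = E, which is what partitions the iCE map.

```python
countries = {
    "Country A": {"P": 100_000, "C": 1_200_000},
    "Country B": {"P": 40_000, "C": 700_000},
}

for name, d in countries.items():
    i = d["C"] / d["P"]        # quality: citations per paper
    E = i * d["C"]             # energy: E = iC = C**2 / P
    p = E ** (1.0 / 3.0)       # p-index
    print(f"{name}: i = {i:.2f}, E = {E:.3e}, p = {p:.1f}")
```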

Restricted access