Browse

Showing items 31–40 of 76 in Mathematics and Statistics, filtered to content accessible to me.
Scientometrics
Authors: Thomas Gurney, Edwin Horlings, and Peter van den Besselaar

Abstract

Key to accurate bibliometric analyses is the ability to correctly link individuals to their corpus of work, with an optimal balance between precision and recall. We have developed an algorithm that performs this disambiguation task with very high recall and precision. The method addresses the problem of records discarded because of null data fields, and the resulting effect on recall, precision, and F-measure. We have implemented a dynamic approach to similarity calculations based on all available data fields. We also include differences in author contribution and the age difference between publications, both of which have meaningful effects on overall similarity measurements, resulting in significantly higher recall and precision for the returned records. The results are presented for a test dataset of heterogeneous catalysis publications and show high average F-measure scores and substantial improvements over previous, stand-alone techniques.
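
To make the "dynamic approach to similarity calculations" concrete, here is a minimal sketch of a pairwise record similarity that skips null fields rather than discarding the record, and that penalizes large publication-age gaps. The field names, weights, and the 0.02-per-year decay are illustrative assumptions, not the authors' implementation.

```python
from difflib import SequenceMatcher

def field_similarity(a, b):
    """String similarity in [0, 1]; returns None when either field is null."""
    if not a or not b:
        return None  # null fields are skipped, not scored as zero
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def record_similarity(rec1, rec2, weights):
    """Weighted similarity over whichever fields both records actually have.

    Null fields simply drop out of the calculation instead of forcing the
    record to be discarded, which is the recall problem the abstract raises.
    """
    num, den = 0.0, 0.0
    for field, w in weights.items():
        s = field_similarity(rec1.get(field), rec2.get(field))
        if s is not None:
            num += w * s
            den += w
    if den == 0:
        return 0.0
    # Penalize large publication-year gaps (age difference between papers);
    # the linear decay rate here is an assumption.
    gap = abs(rec1.get("year", 0) - rec2.get("year", 0))
    return (num / den) * max(0.0, 1.0 - 0.02 * gap)

weights = {"coauthors": 3.0, "affiliation": 2.0, "journal": 1.0, "title": 1.0}
r1 = {"coauthors": "Horlings; van den Besselaar", "affiliation": "Rathenau",
      "journal": "Scientometrics", "title": "Author disambiguation", "year": 2011}
r2 = {"coauthors": "Horlings", "affiliation": None,  # null field is skipped
      "journal": "Scientometrics", "title": "Disambiguation of authors", "year": 2012}
print(record_similarity(r1, r2, weights))
```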

Open access

Abstract

Performance measures of individual scholars tend to ignore context. I introduce contextualised metrics: cardinal and ordinal pseudo-Shapley values that measure a scholar's contribution to (perhaps power over) her own school and her market value to other schools should she change jobs. I illustrate the proposed measures with business scholars and business schools in Ireland. Although conceptually superior, the power indicators imply a ranking of scholars within a school that is identical to the corresponding conventional performance measures. The market value indicators imply an identical ranking within schools and a very similar ranking between schools. The ordinal indices contextualise performance measures further and thus deviate further from the corresponding conventional indicators. As the ordinal measures are discontinuous by construction, a natural classification of scholars emerges. Averaged over schools, the market values offer little extra information beyond the corresponding production and impact measures. The ordinal power measure indicates the robustness or fragility of an institution's place in the rank order; it is only weakly correlated with the concentration of publications and citations.
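
A minimal sketch of the idea behind the cardinal measures, under a simplifying assumption: "power" is the school's score with the scholar minus without her, and "market value" is the best such marginal gain at any other school. The additive value function used here (a plain sum of member citation counts) is my stand-in, not the paper's aggregation.

```python
def school_value(members):
    """Illustrative value function: total citations of the school's members."""
    return sum(members.values())

def power(scholar, school):
    """Marginal contribution of the scholar to her own school."""
    without = {k: v for k, v in school.items() if k != scholar}
    return school_value(school) - school_value(without)

def market_value(scholar, home, other_schools):
    """Best marginal gain the scholar would bring to another school."""
    gain = lambda s: school_value({**s, scholar: home[scholar]}) - school_value(s)
    return max(gain(s) for s in other_schools)

home = {"A": 120, "B": 45, "C": 30}
elsewhere = [{"D": 80, "E": 60}, {"F": 200}]
print(power("A", home))                     # 120
print(market_value("A", home, elsewhere))   # 120 under an additive value function
```

With an additive value function both measures collapse to the scholar's own score, which echoes the abstract's observation that the cardinal rankings coincide with conventional measures; the paper's actual value function is not additive.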

Open access

Abstract

The first part of the paper assesses international databases with respect to their coverage of historical publications (representation and relevance compared with a model database). The second part addresses the question of whether historiography is governed by bibliometric rules similar to those of the exact sciences or has a specific character of its own. The empirical basis for this part of the research is a database prepared ad hoc: the Citation Index of the History of Polish Media (CIHPM). Among the many typically historical features, the main focus is on linguistic localism, the specific character of publishing forms, differences in the citing of various sources (contributions and syntheses), and the specific character of authorship (the Lorenz curve and Lotka's law). Somewhat more attention is devoted to the half-life indicator and its role in the diachronic study of a scientific field; a new indicator (HL14), depicting the distribution of citations younger than the half-life, is also introduced. Additionally, selected parameters for the body of historical science (citations, HL14, the Hirsch index, number of publications, volume, and others) are compared and correlated.
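
The abstract does not spell out HL14's definition, but its two ingredients are clear: the half-life (median age of citations) and the share of citations below an age cutoff. Here is a sketch of both under that reading; the sample ages and the 14-year cutoff are illustrative assumptions.

```python
import statistics

def half_life(citation_ages):
    """Median age of citations: half of all citations are younger than this."""
    return statistics.median(citation_ages)

def share_younger_than(citation_ages, threshold):
    """Fraction of citations younger than a given age threshold (in years)."""
    return sum(a < threshold for a in citation_ages) / len(citation_ages)

# Illustrative ages (in years) of references cited in a history-of-media corpus.
ages = [1, 2, 3, 5, 8, 12, 14, 20, 35, 60]
print(half_life(ages))                 # 10.0
print(share_younger_than(ages, 14))    # share of citations below a 14-year cutoff
```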

Open access

Abstract

Citation distributions are so skewed that using the mean or any other central tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated.
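
As a pointer to how I3 is built from percentiles, here is the general form in which Leydesdorff and collaborators have presented the indicator; x_i denotes the percentile (rank) value of class i and f(x_i) the number of papers in that class. This is a sketch of the construction, not a substitute for the paper's definitions.

```latex
% Integrated Impact Indicator: sum the percentile value of each class,
% weighted by the number of papers falling in that class.
I3 = \sum_{i} x_i \, f(x_i)
```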

Open access

Abstract

The purpose of this study is to examine efficiency and its determinants in a set of higher education institutions (HEIs) from several European countries by means of non-parametric frontier techniques. Our analysis is based on a sample of 259 public HEIs from 7 European countries over the period 2001–2005. We conduct a two-stage DEA analysis (Simar and Wilson, J Econometrics 136:31–64), first evaluating DEA scores and then regressing them on potential covariates using a bootstrapped truncated regression. Results indicate considerable variability of efficiency scores within and between countries. Unit size (economies of scale), the number and composition of faculties, sources of funding, and the gender composition of staff are found to be among the crucial determinants of these units' performance. Specifically, we find evidence that a higher share of funds from external sources and a higher number of women among academic staff improve the efficiency of the institution.
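
The two-stage procedure starts from DEA efficiency scores. As an illustration of stage one only, here is the textbook input-oriented, constant-returns DEA envelopment program solved with scipy; the inputs and outputs (staff, budget, graduates, publications) are invented, and stage two (the bootstrapped truncated regression on covariates) is omitted for brevity.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented, constant-returns DEA efficiency for each unit.

    X: (n_units, n_inputs), Y: (n_units, n_outputs). Minimize theta subject to
    a composite of peers producing at least unit k's outputs with at most
    theta times unit k's inputs. A stand-in for the full Simar-Wilson pipeline.
    """
    n = X.shape[0]
    scores = []
    for k in range(n):
        # Decision variables: theta, lambda_1..lambda_n
        c = np.r_[1.0, np.zeros(n)]
        # Composite outputs must cover unit k's outputs: -Y'l <= -y_k
        A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
        b_out = -Y[k]
        # Composite inputs must fit within theta * x_k: X'l - theta*x_k <= 0
        A_in = np.hstack([-X[[k]].T, X.T])
        b_in = np.zeros(X.shape[1])
        res = linprog(c, A_ub=np.vstack([A_out, A_in]), b_ub=np.r_[b_out, b_in],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

X = np.array([[20.0, 300], [40, 500], [30, 350]])  # e.g. staff, budget
Y = np.array([[60.0, 35], [100, 40], [90, 50]])    # e.g. graduates, publications
print(dea_ccr_input(X, Y))  # 1.0 = efficient, < 1.0 = inefficient
```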

Open access

Abstract

I propose a new method (Pareto weights) to objectively attribute citations to co-authors. Previous methods either profess ignorance about the seniority of co-authors (egalitarian weights) or rely in an ad hoc way on the order of authors (rank weights). Pareto weights are based on the respective citation records of the co-authors: they are proportional to the probability of observing the number of citations obtained. Assuming a Pareto distribution, such weights can be computed with a simple, closed-form equation, but they require a few iterations and data on a scholar, her co-authors, and her co-authors' co-authors. The use of Pareto weights is illustrated with a group of prominent economists. In this case, Pareto weights are very different from rank weights. They are more similar to egalitarian weights but can deviate by up to a quarter in either direction (for reasons that are intuitive).
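
A hedged sketch of one plausible reading of the scheme: each co-author's share of a paper's citations is proportional to how probable that citation count is under a Pareto distribution fitted to her own record, and since the fitted records depend on the weights, a few iterations are needed. The density, the fixed minimum, and the record-scaling shortcut below are my assumptions, not the paper's closed-form equation.

```python
import math

def pareto_alpha(citations, c_min=1.0):
    """Maximum-likelihood Pareto shape (Hill estimator) for a citation record."""
    vals = [max(c, c_min) for c in citations]
    s = sum(math.log(v / c_min) for v in vals)
    return len(vals) / s if s > 0 else 10.0  # steep tail if record is degenerate

def pareto_density(c, alpha, c_min=1.0):
    return alpha * c_min ** alpha / max(c, c_min) ** (alpha + 1)

def pareto_weights(paper_citations, records, iterations=5):
    """Split one paper's citations among its co-authors.

    `records` holds each co-author's citation counts on her other papers.
    Scaling a whole record by the current weight is a crude stand-in for
    reweighting it paper by paper (which is what pulls in the co-authors'
    co-authors in the actual method).
    """
    weights = [1.0 / len(records)] * len(records)  # start from egalitarian weights
    for _ in range(iterations):
        alphas = [pareto_alpha([w * c for c in rec])
                  for w, rec in zip(weights, records)]
        dens = [pareto_density(paper_citations, a) for a in alphas]
        total = sum(dens)
        weights = [d / total for d in dens]
    return weights

records = [[120, 80, 60, 40], [5, 3, 2]]  # a senior and a junior co-author
print(pareto_weights(100, records))       # most credit goes to the senior author
```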

Open access

Abstract

Two commonly used ideas in the development of citation-based research performance indicators are the idea of normalizing citation counts based on a field classification scheme and the idea of recursive citation weighing (like in PageRank-inspired indicators). We combine these two ideas in a single indicator, referred to as the recursive mean normalized citation score indicator, and we study the validity of this indicator. Our empirical analysis shows that the proposed indicator is highly sensitive to the field classification scheme that is used. The indicator also has a strong tendency to reinforce biases caused by the classification scheme. Based on these observations, we advise against the use of indicators in which the idea of normalization based on a field classification scheme and the idea of recursive citation weighing are combined.
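
To see why such an indicator can reinforce classification biases, here is a minimal sketch of the construction the abstract warns about: citations weighted by the citing papers' scores, renormalized within each field class on every iteration. The citation matrix, field assignment, and PageRank-style damping factor are invented for illustration; this is not CWTS's exact indicator.

```python
import numpy as np

def recursive_mncs(C, field, iterations=20, damping=0.5):
    """Recursive, field-normalized citation scores.

    C[i, j] = 1 if paper j cites paper i; `field` assigns each paper to a
    class in the classification scheme.
    """
    n = C.shape[0]
    score = np.ones(n)
    for _ in range(iterations):
        weighted = C @ score  # citations weighted by the citing papers' scores
        new = np.empty(n)
        for f in set(field):
            idx = [i for i in range(n) if field[i] == f]
            mean = weighted[idx].mean() or 1.0  # guard against an uncited class
            new[idx] = weighted[idx] / mean     # normalize within the class
        score = damping * new + (1 - damping)   # damping keeps scores non-degenerate
    return score

C = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
field = ["a", "a", "b", "b"]
print(recursive_mncs(C, field))  # try a different `field` split to see the sensitivity
```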

Open access

Abstract

The obsolescence and "durability" of scientific literature have been debated for many years, especially regarding the proper calculation of bibliometric indicators. The effects of "delayed recognition" on impact indicators are of interest not only to bibliometricians but also to research managers and scientists themselves. It has been suggested that the "Mendel syndrome" is a potential drawback when assessing individual researchers through impact measures. If publications from particular researchers need more time than "normal" to be properly acknowledged by their colleagues, the impact of these researchers may be underestimated with common citation windows. In this paper, we answer the question of whether the bibliometric indicators for scientists can be significantly affected by the Mendel syndrome. Applying a methodology developed previously for the classification of papers according to their durability (Costas et al., J Am Soc Inf Sci Technol 61(8):1564–1581; J Am Soc Inf Sci Technol 61(2):329–339), the scientific production of 1,064 researchers working at the Spanish Council for Scientific Research (CSIC) in three different research areas has been analyzed. Cases of potential "Mendel syndrome" are rare among researchers, and those found do not significantly outperform the impact of researchers with a standard pattern of citation reception. The analysis of durability could be included as a parameter when choosing the citation windows used in the bibliometric analysis of individuals.
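
The classification of papers by durability is the methodological core here. The following sketch shows the general idea under invented thresholds (a 3-year window and 25%/75% cut-offs); the actual method of Costas et al. uses its own reference values.

```python
def early_share(citation_years, pub_year, window=3):
    """Fraction of a paper's citations received within `window` years of publication."""
    if not citation_years:
        return 0.0
    early = sum(y - pub_year <= window for y in citation_years)
    return early / len(citation_years)

def durability_class(citation_years, pub_year):
    share = early_share(citation_years, pub_year)
    if share < 0.25:
        return "delayed"            # candidate "Mendel syndrome" paper
    if share > 0.75:
        return "flash in the pan"   # impact concentrated right after publication
    return "normal"

mendel = [2005, 2008, 2009, 2010, 2010, 2011]   # citations arrive late
print(durability_class(mendel, pub_year=2000))  # delayed
```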

Open access

Abstract

In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change its old "crown" indicator in citation analysis into a new one. Waltman et al. (Scientometrics 87:467–481) argue that this change does not affect rankings at various aggregated levels. However, the CWTS data are not publicly available for testing and criticism. We therefore comment using previously published data from Van Raan (Scientometrics 67(3):491–502) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was significantly correlated neither with the two parameters developed by CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and citations per paper/mean field citation score (CPP/FCSm), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572). Given the high correlations between the old and new "crown" indicators, one can expect that the lack of correlation with the peer-review-based quality indicator applies equally to the newly developed ones.
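
The test at issue, here and in the reply below, is a rank correlation between peer judgment and a field-normalized citation indicator. A minimal sketch with made-up numbers (the ratings and CPP/FCSm values below are invented for illustration):

```python
from scipy.stats import spearmanr

peer_rating = [4, 5, 3, 2, 4, 5, 1, 3]                 # peer-review quality scores
cpp_fcsm    = [1.1, 2.3, 0.8, 1.4, 0.9, 1.8, 0.5, 1.2] # field-normalized citation scores

rho, p = spearmanr(peer_rating, cpp_fcsm)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```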

Open access
Scientometrics
Authors: Ludo Waltman, Nees Jan van Eck, Thed N. van Leeuwen, Martijn S. Visser, and Anthony F. J. van Raan

Abstract

Opthof and Leydesdorff (Scientometrics) reanalyze data reported by Van Raan (Scientometrics 67(3):491–502) and conclude that there is no significant correlation between average citation scores measured using the CPP/FCSm indicator on the one hand and the quality judgment of peers on the other. We point out that Opthof and Leydesdorff draw their conclusions from a very limited amount of data. We also criticize their statistical methodology. Using a larger amount of data and a more appropriate statistical methodology, we do find a significant correlation between the CPP/FCSm indicator and peer judgment.

Open access