Search Results

You are looking at 1–6 of 6 items for:

  • Author or Editor: Anthony F. J. van Raan
  • Access: Content accessible to me

Abstract

In this paper we present a compilation of journal impact properties in relation to other bibliometric indicators, combining results from our earlier studies with new findings. We argue that journal impact, even when calculated in a sufficiently advanced way, becomes important in evaluation practices based on bibliometric analysis only at an aggregate level. In the relation between average journal impact and the actual citation impact of research groups, the influence of research performance is substantial. Top-performance and lower-performance groups publish in roughly the same range of journal impact values, but top-performance groups are, on average, more successful across the entire range of journal impact. We find that for high field citation-density groups, a larger size implies a lower average journal impact, whereas for groups in low field citation-density regions, a larger size implies a considerably higher average journal impact. Finally, we find that top-performance groups have relatively fewer self-citations than lower-performance groups, and that this fraction decreases with increasing journal impact.
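
A minimal sketch of the group-level quantities discussed above. The Publication fields and the computation are illustrative assumptions, not the indicators actually used in the study:

```python
# Toy model of a research group's publication list; all fields are assumed.
from dataclasses import dataclass

@dataclass
class Publication:
    journal_impact: float   # impact value of the publishing journal
    citations: int          # total citations received
    self_citations: int     # citations coming from the group's own papers

def group_indicators(pubs: list[Publication]) -> dict[str, float]:
    n = len(pubs)
    total_cites = sum(p.citations for p in pubs)
    return {
        "avg_journal_impact": sum(p.journal_impact for p in pubs) / n,
        "citations_per_pub": total_cites / n,
        # Share of received citations that are self-citations; the abstract
        # reports that this share is lower for top-performance groups.
        "self_citation_fraction": (sum(p.self_citations for p in pubs) / total_cites
                                   if total_cites else 0.0),
    }

group = [Publication(2.1, 30, 4), Publication(1.4, 12, 3), Publication(3.0, 55, 5)]
print(group_indicators(group))
```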

Open access

Abstract

The obsolescence and “durability” of scientific literature have been debated for many years, especially with regard to the proper calculation of bibliometric indicators. The effects of “delayed recognition” on impact indicators are important, and of interest not only to bibliometricians but also to research managers and scientists themselves. It has been suggested that the “Mendel syndrome” is a potential drawback when assessing individual researchers through impact measures: if the publications of particular researchers need more time than “normal” to be properly acknowledged by their colleagues, the impact of these researchers may be underestimated under common citation windows. In this paper, we address the question of whether the bibliometric indicators of scientists can be significantly affected by the Mendel syndrome. Applying a methodology developed previously for the classification of papers according to their durability (Costas et al., J Am Soc Inf Sci Technol 61(8):1564–1581, 2010a; J Am Soc Inf Sci Technol 61(2):329–339, 2010b), we analyzed the scientific production of 1,064 researchers working in three different research areas at the Spanish Council for Scientific Research (CSIC). Cases of potential “Mendel syndrome” are rare among researchers, and these cases do not significantly outperform the impact of researchers with a standard citation reception pattern. The analysis of durability could be included as a parameter when choosing the citation windows used in the bibliometric analysis of individuals.
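
A toy illustration of durability-based classification. The actual criteria are defined in Costas et al. (2010a, 2010b); the rule and thresholds below are purely hypothetical:

```python
# Illustrative only: classify a paper by the share of its citations that
# arrive in the first years after publication. Thresholds are invented.
def classify_durability(citations_per_year: list[int],
                        early_years: int = 3,
                        low: float = 0.2, high: float = 0.6) -> str:
    """citations_per_year[i] = citations received i years after publication."""
    total = sum(citations_per_year)
    if total == 0:
        return "uncited"
    early_share = sum(citations_per_year[:early_years]) / total
    if early_share < low:
        return "delayed"          # candidate "Mendel syndrome" paper
    if early_share > high:
        return "flash in the pan"
    return "normal"

print(classify_durability([0, 0, 1, 2, 8, 12, 9]))  # -> delayed
```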

Open access

Abstract

We applied a set of standard bibliometric indicators to monitor the scientific state of the art of 500 universities worldwide and constructed a ranking on the basis of these indicators (Leiden Ranking 2010). We find a dramatic and hitherto largely underestimated language effect in bibliometric, citation-based measurements of research performance when comparing a ranking based on all publications covered by the Web of Science (WoS) with one based only on English-language WoS-covered publications, particularly for Germany and France.
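
A minimal sketch of the comparison described above, with an invented data model (the Leiden Ranking itself uses far more refined indicators):

```python
# Rank universities on a simple citations-per-publication score, computed
# once over all publications and once over the English-language subset.
def rank(universities: dict[str, list[tuple[int, str]]],
         english_only: bool = False) -> list[str]:
    """universities maps name -> list of (citations, language) per publication."""
    scores = {}
    for name, pubs in universities.items():
        subset = [c for c, lang in pubs if not english_only or lang == "en"]
        scores[name] = sum(subset) / len(subset) if subset else 0.0
    return sorted(scores, key=scores.get, reverse=True)

unis = {
    "U1": [(10, "en"), (2, "de"), (1, "de")],  # low-cited non-English papers
    "U2": [(6, "en"), (5, "en"), (4, "en")],
}
print(rank(unis))                      # all publications: U2 first
print(rank(unis, english_only=True))   # English only: U1 moves to the top
```

Non-English publications tend to receive fewer citations, so including them depresses the all-publication scores of universities that publish heavily in their national language, which is the effect the abstract reports for Germany and France.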

Open access
Scientometrics
Authors: Reindert K. Buter, Ed C. M. Noyons, and Anthony F. J. van Raan

Abstract

We define converging research as the emergence of an interdisciplinary research area from fields that did not show interdisciplinary connections before. This paper presents a procedure to search for converging research, using journal subject categories as a proxy for fields and citations to measure interdisciplinary connections, together with an application of this procedure. The search consists of two phases: a quantitative phase in which pairs of citing and cited fields are located that show a significant change in the number of citations, followed by a qualitative phase in which thematic focus is sought in the publications associated with the located pairs. Applying this search to Web of Science publications published between 1995 and 2005, we located 38 candidate converging pairs; 27 of these showed thematic focus, and 20 also showed a similar focus in the reciprocal pair.
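
The quantitative phase can be sketched as follows; the fixed growth factor and count threshold below stand in for the paper's statistical significance test and are assumptions:

```python
# Count citations between pairs of journal subject categories in an early and
# a late period, and flag pairs whose citation traffic grows sharply.
from collections import Counter

def citing_cited_counts(citations: list[tuple[str, str, int]],
                        period: range) -> Counter:
    """citations: (citing_field, cited_field, year) triples."""
    return Counter((a, b) for a, b, y in citations if y in period)

def candidate_converging_pairs(citations, early=range(1995, 2000),
                               late=range(2000, 2006),
                               factor=3.0, min_late=50):
    before = citing_cited_counts(citations, early)
    after = citing_cited_counts(citations, late)
    return [pair for pair, n in after.items()
            if n >= min_late and n > factor * before.get(pair, 0)]

cites = [("Computer Science", "Biology", 2003),
         ("Computer Science", "Biology", 2004),
         ("Physics", "Chemistry", 1996),
         ("Physics", "Chemistry", 2002)]
print(candidate_converging_pairs(cites, min_late=2))
# -> [('Computer Science', 'Biology')]
```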

Open access
Scientometrics
Authors: Ludo Waltman, Nees Jan van Eck, Thed N. van Leeuwen, Martijn S. Visser, and Anthony F. J. van Raan

Abstract

We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
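
The abstract does not spell out the two mechanisms, but in the literature the current crown indicator (CPP/FCSm) is described as a ratio of averages and the new crown indicator (MNCS) as a mean of ratios. A minimal sketch under that reading:

```python
# "Expected" citations = the field- and year-specific average number of
# citations a publication of that type would receive.
def cpp_fcsm(citations: list[float], expected: list[float]) -> float:
    # Ratio of averages: total citations over total expected citations.
    return sum(citations) / sum(expected)

def mncs(citations: list[float], expected: list[float]) -> float:
    # Mean of ratios: average of each publication's normalized score.
    return sum(c / e for c, e in zip(citations, expected)) / len(citations)

cites = [10.0, 0.0, 2.0]
field_avg = [5.0, 0.5, 2.0]
print(cpp_fcsm(cites, field_avg))  # 12 / 7.5 = 1.6
print(mncs(cites, field_avg))      # (2.0 + 0.0 + 1.0) / 3 = 1.0
```

In the mean-of-ratios form, a recent publication with a very small expected value can dominate the average, which illustrates why the abstract stresses that recent publications should be handled with special care.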

Open access
Scientometrics
Authors: Ludo Waltman, Nees Jan van Eck, Thed N. van Leeuwen, Martijn S. Visser, and Anthony F. J. van Raan

Abstract

Opthof and Leydesdorff (Scientometrics, 2011) reanalyze data reported by Van Raan (Scientometrics 67(3):491–502, 2006) and conclude that there is no significant correlation between average citation scores measured using the CPP/FCSm indicator and the quality judgment of peers. We point out that Opthof and Leydesdorff draw their conclusions from a very limited amount of data, and we criticize their statistical methodology. Using a larger amount of data and a more appropriate statistical methodology, we do find a significant correlation between the CPP/FCSm indicator and peer judgment.
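
As an illustration of the kind of test at stake, a rank correlation between indicator values and peer ratings might look as follows; the data are invented, and the paper's statistical methodology is more involved than this single test:

```python
# Correlate per-group CPP/FCSm values with peer review ratings.
from scipy.stats import spearmanr

cpp_fcsm_scores = [0.8, 1.1, 1.4, 1.9, 2.3]   # hypothetical research groups
peer_ratings    = [2, 3, 3, 4, 5]             # hypothetical 5-point peer scale

rho, p_value = spearmanr(cpp_fcsm_scores, peer_ratings)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```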

Open access