Search Results

You are looking at 1 - 6 of 6 items for

  • Author or Editor: Thed N. van Leeuwen

Summary

In this study, journal impact factors play a central role. In addition to this important bibliometric indicator, which revolves around the average impact of a journal within a two-year timeframe, related aspects of journal impact measurement are studied: the output volume, the percentage of publications that remain uncited, and the citation frequency distribution within a set timeframe, each put in perspective against the 'classical' journal impact factor. The study shows that these aspects of journal impact measurement play a significant role and are strongly interrelated. In particular, separating journals by differences in output volume appears relevant, as can be concluded from the differing results in the analysis of journal impact factors, the degree of uncitedness, and the share of a journal's contents cited above or below its impact factor value.
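
A minimal sketch of the three interrelated quantities, assuming only a flat list of per-publication citation counts; the toy numbers and variable names are illustrative assumptions, not data from the study.

    # Hedged sketch: a two-year-style impact factor, the share of uncited
    # publications, and the share of items cited below the IF value.
    # The citation counts are invented toy data, not data from the study.
    citations = [0, 0, 1, 1, 2, 3, 5, 8, 13, 40]   # citations per publication

    n = len(citations)                              # output volume
    impact_factor = sum(citations) / n              # mean citations per item
    pct_uncited = 100 * sum(1 for c in citations if c == 0) / n
    pct_below_if = 100 * sum(1 for c in citations if c < impact_factor) / n

    print(f"output volume: {n}")
    print(f"impact factor (mean citation rate): {impact_factor:.2f}")   # 7.30
    print(f"uncited: {pct_uncited:.0f}%")                               # 20%
    print(f"cited below the IF value: {pct_below_if:.0f}%")             # 70%

Even this toy distribution shows the pattern the study discusses: a few highly cited papers pull the mean above the bulk of the journal's contents.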

Restricted access

Abstract

The obsolescence and “durability” of scientific literature have been important elements of debate for many years, especially regarding the proper calculation of bibliometric indicators. The effects of “delayed recognition” on impact indicators are of interest not only to bibliometricians but also to research managers and scientists themselves. It has been suggested that the “Mendel syndrome” is a potential drawback when assessing individual researchers through impact measures: if publications from particular researchers need more time than “normal” to be properly acknowledged by their colleagues, the impact of these researchers may be underestimated with common citation windows. In this paper, we address the question of whether the bibliometric indicators of scientists can be significantly affected by the Mendel syndrome. Applying a methodology developed previously for the classification of papers according to their durability (Costas et al., J Am Soc Inf Sci Technol 61(8):1564–1581, 2010a; J Am Soc Inf Sci Technol 61(2):329–339, 2010b), we analyzed the scientific production of 1,064 researchers working at the Spanish Council for Scientific Research (CSIC) in three different research areas. Cases of potential “Mendel syndrome” are rarely found among researchers, and these cases do not significantly outperform the impact of researchers with a standard pattern of reception of their citations. The analysis of durability could be included as a parameter in the choice of citation windows used in the bibliometric analysis of individuals.
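
The durability classification of Costas et al. is not spelled out in this abstract; the sketch below only illustrates the general idea of flagging papers whose citations arrive disproportionately late in a fixed citation window. The function name and the 70% threshold are assumptions for illustration, not the published method.

    # Illustrative sketch of a delayed-recognition flag (not the Costas et
    # al. classification itself): a paper counts as "delayed" when most of
    # its citations arrive in the later half of a fixed citation window.

    def is_delayed(citations_per_year, late_share_threshold=0.7):
        """citations_per_year: counts for years 1..n after publication."""
        total = sum(citations_per_year)
        if total == 0:
            return False                  # uncited papers are not classified
        half = len(citations_per_year) // 2
        late_share = sum(citations_per_year[half:]) / total
        return late_share >= late_share_threshold

    # Toy examples over a ten-year window:
    print(is_delayed([0, 0, 1, 0, 1, 4, 6, 9, 12, 15]))   # True: late uptake
    print(is_delayed([5, 8, 6, 4, 3, 2, 1, 1, 0, 0]))     # False: normal decay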

Open access

Summary  

This paper introduces a citation-based methodology to characterize and measure the magnitude and intensity of knowledge flows and knowledge spillovers from the public research sector to basic and strategic research in the private sector. We present results derived from an interrelated series of statistical analyses based on Private-to-Public Citations (PrPuCs) within the reference lists of research articles produced by industrial researchers during the years 1996-2003. The first part of the results provides an overview of PrPuC statistics for OECD countries worldwide. Overall, 70% to 80% of the references within corporate research papers relate to papers produced by public research organizations. When controlling for the size of their public-sector research bases, Switzerland and the United States appear to be the major suppliers of 'citable' scientific knowledge for industrial research: the value of their Corporate Citation Intensity (CCI) exceeds the statistically expected value by more than 25%. A country's CCI performance turns out to be closely related to the citation impact of its entire domestic science base. The second section deals with an exploratory case study devoted to Electrical Engineering and Telecommunications, one of the corporate sector's major research areas. The findings include a list of the major citing and cited sources at the level of countries and organizations, as well as an analysis of PrPuCs as a “missing link” connecting intra-science citations and citations received from corporate science-based patents.
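
As a rough illustration of the two headline statistics, the sketch below computes the public-sector share of a corporate reference list and a CCI-style observed-to-expected ratio. All numbers are invented, and the size-normalized baseline model behind the expected value is not specified in the abstract, so the expected count here is a pure placeholder.

    # Illustrative sketch with invented numbers: the PrPuC share of a
    # corporate reference list, and a CCI-style observed/expected ratio.
    corporate_refs_total = 1000      # references in corporate research papers
    refs_to_public_papers = 760      # of which point to public-sector papers

    prpuc_share = 100 * refs_to_public_papers / corporate_refs_total
    print(f"PrPuC share: {prpuc_share:.0f}%")    # abstract reports 70-80%

    # CCI compared against a statistically expected value; the baseline
    # model is not given in the abstract, so "expected" is a placeholder.
    observed = 1260
    expected = 1000
    print(f"CCI exceeds expectation by {100 * (observed / expected - 1):.0f}%")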

Restricted access

Abstract

We applied a set of standard bibliometric indicators to monitor the scientific state of the art of 500 universities worldwide and constructed a ranking on the basis of these indicators (Leiden Ranking 2010). We find a dramatic and hitherto largely underestimated language effect in bibliometric, citation-based measurements of research performance when comparing a ranking based on all Web of Science (WoS) covered publications with one based on English-language WoS covered publications only, particularly for Germany and France.
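
A minimal sketch of the comparison behind the language effect, with invented per-paper records: it contrasts a mean citation rate computed over all publications with one computed over English-language publications only.

    # Toy illustration of the language effect: non-English papers tend to
    # attract fewer citations, so excluding them raises a university's mean
    # citation rate. The records are invented, not Leiden Ranking data.
    papers = [
        {"lang": "en", "cites": 12},
        {"lang": "en", "cites": 8},
        {"lang": "de", "cites": 1},
        {"lang": "fr", "cites": 0},
        {"lang": "en", "cites": 15},
    ]

    def mean_cites(records):
        return sum(p["cites"] for p in records) / len(records)

    print(f"all WoS publications: {mean_cites(papers):.2f}")   # 7.20
    en_only = [p for p in papers if p["lang"] == "en"]
    print(f"English only: {mean_cites(en_only):.2f}")          # 11.67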

Open access
Scientometrics
Authors: Ludo Waltman, Nees Jan van Eck, Thed N. van Leeuwen, Martijn S. Visser, and Anthony F. J. van Raan

Abstract

We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
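
The abstract does not define the two mechanisms. In the surrounding literature the contrast is usually drawn as ratio-of-averages (the current CPP/FCSm crown indicator) versus average-of-ratios (the new crown indicator); the sketch below shows that reading on invented citation counts c and field/year expected values e, so treat it as an assumed interpretation rather than the authors' exact formulas.

    # Hedged sketch of the two normalizations, read as the familiar
    # ratio-of-averages vs. average-of-ratios contrast; data are invented.
    c = [10, 2, 2, 6]         # citations of each publication
    e = [5.0, 4.0, 0.5, 3.0]  # expected citations for its field and year

    ratio_of_averages = (sum(c) / len(c)) / (sum(e) / len(e))
    average_of_ratios = sum(ci / ei for ci, ei in zip(c, e)) / len(c)

    print(f"ratio of averages: {ratio_of_averages:.2f}")    # 1.60
    print(f"average of ratios: {average_of_ratios:.2f}")    # 2.12

The third item mimics a recent publication with a very low expected citation rate; it dominates the average of ratios, which is one way to see why the abstract says recent publications must be handled with special care.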

Open access
Scientometrics
Authors: Ludo Waltman, Nees Jan van Eck, Thed N. van Leeuwen, Martijn S. Visser, and Anthony F. J. van Raan

Abstract

Opthof and Leydesdorff (Scientometrics, 2011) reanalyze data reported by Van Raan (Scientometrics 67(3):491–502, 2006) and conclude that there is no significant correlation between average citation scores measured using the CPP/FCSm indicator on the one hand and the quality judgment of peers on the other. We point out that Opthof and Leydesdorff draw their conclusions from a very limited amount of data. We also criticize the statistical methodology they use. Using a larger amount of data and a more appropriate statistical methodology, we do find a significant correlation between the CPP/FCSm indicator and peer judgment.
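
The abstract does not state which statistical test the authors consider more appropriate; as a generic illustration only, the sketch below computes a Spearman rank correlation between CPP/FCSm-style scores and ordinal peer ratings on invented data (scipy is an assumed dependency).

    # Generic illustration, not the authors' methodology: rank correlation
    # between indicator values and peer ratings, on invented data.
    from scipy.stats import spearmanr

    cpp_fcsm = [0.8, 1.1, 1.4, 0.6, 2.0, 1.7, 0.9, 1.3]  # score per group
    peer_rating = [2, 3, 4, 1, 5, 4, 2, 3]               # ordinal judgment

    rho, p_value = spearmanr(cpp_fcsm, peer_rating)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")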

Open access