Search Results

Showing 1 - 10 of 17 items for Author or Editor: Henk Moed (all content).


Restricted access

Abstract  

This contribution focuses on the application of bibliometric techniques to research activities in China, based on data extracted from the Science Citation Index (SCI) and related Citation Indexes produced by the Institute for Scientific Information (ISI). The main conclusion is that bibliometric analyses based on the ISI databases in principle provide useful and valid indicators of the international position of Chinese research activities, provided that these analyses deal properly with the relatively large number of national Chinese journals covered by the ISI indexes. It is argued that it is important to distinguish between a national and an international point of view. To assess Chinese research activities from a national perspective, it is appropriate to use scientific literature databases with good coverage of Chinese periodicals, such as the Chinese Science Citation Database (CSCD), produced at the Chinese Academy of Sciences. Assessment of the position of Chinese research from an international perspective should be based on the ISI databases, but it is suggested that national Chinese journals be excluded from this analysis. In addition, it is proposed to compute an indicator of international publication activity: the number of articles in journals processed for the ISI indexes, excluding national Chinese journals, expressed as a percentage of the total number of articles published in national Chinese or other journals, whether or not those journals are processed for the ISI indexes. This indicator can only be calculated by properly combining the CSCD and ISI indexes.
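The proposed indicator reduces to a simple ratio once the article counts are available. A minimal sketch, in which the function name and all counts are illustrative assumptions (the real counts would come from combining the CSCD and ISI indexes, as the abstract notes):

```python
def international_publication_activity(isi_articles, isi_national_cn, total_articles):
    """Proposed indicator: articles in ISI-indexed journals, excluding
    national Chinese journals, as a share of all articles published in
    national Chinese or other journals (ISI-indexed or not)."""
    return (isi_articles - isi_national_cn) / total_articles

# Hypothetical counts for one research unit:
share = international_publication_activity(
    isi_articles=1200,    # articles in journals processed for the ISI indexes
    isi_national_cn=400,  # of those, articles in national Chinese journals
    total_articles=5000,  # all articles, from combined CSCD + ISI data
)
print(f"{share:.0%}")  # → 16%
```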

Restricted access

Abstract  

In a bibliometric study of nine research departments in the field of biotechnology and molecular biology, indicators of research capacity, output and productivity were calculated, taking into account the researchers' participation in scientific collaboration as expressed in co-publications. In a quantitative approach, rankings of the departments based on a number of different research performance indicators were compared with one another. The results were discussed with members of all nine departments involved. Two publication strategies were identified, denoted a 'quantity of publication' and a 'quality of publication' strategy, and two strategies with respect to scientific collaboration were outlined, one focusing on multilateral and the other on bilateral collaborations. Our findings suggest that rankings of departments may be influenced by specific publication and management strategies, which in turn may depend upon a department's phase of development or personnel structure. As a consequence, differences in rankings cannot be interpreted merely in terms of the quality or significance of research. It is suggested that the problem of assigning papers resulting from multilateral collaboration to the contributing research groups has not yet been solved properly, and that more research is needed into the influence of a department's state of development and personnel structure upon the values of bibliometric indicators. A possible implication at the science policy level is that different requirements should hold for departments of different age or personnel structure.
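One common scheme for taking co-publications into account when computing departmental output is fractional counting. The abstract does not specify which scheme was used (and indeed flags the assignment problem as unsolved), so the following is purely an illustrative sketch:

```python
def fractional_counts(papers):
    """Fractionally assign each paper to its contributing departments:
    a paper with n distinct departments adds 1/n to each department's
    output count. This is one standard scheme for co-publications,
    shown here only as an illustration."""
    counts = {}
    for paper in papers:
        depts = set(paper)        # distinct contributing departments
        share = 1.0 / len(depts)  # each department's fraction of this paper
        for d in depts:
            counts[d] = counts.get(d, 0.0) + share
    return counts

# Three hypothetical co-publications between departments A, B and C:
print(fractional_counts([["A", "B"], ["A"], ["B", "C"]]))
```

With whole counting instead, each collaborating department would receive a full paper, which is exactly how multilateral collaboration can inflate rankings.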

Restricted access

Abstract  

A longitudinal analysis of UK science covering almost 20 years revealed, in the years prior to each Research Assessment Exercise (RAE 1992, 1996 and 2001), three distinct bibliometric patterns that can be interpreted in terms of scientists’ responses to the principal evaluation criteria applied in the RAE. When the RAE 1992 requested total publication counts, UK scientists substantially increased their article production. When the RAE 1996 announced a shift in evaluation criteria from ‘quantity’ to ‘quality’, UK authors gradually increased their number of papers in journals with a relatively high citation impact. And during 1997–2000, institutions raised their number of active research staff by stimulating staff members to collaborate, or at least to co-author, more intensively, although their joint paper productivity did not increase. This finding suggests that, along the way towards the RAE 2001, evaluated units in a sense shifted back from ‘quality’ to ‘quantity’. The analysis also observed a slight upward trend in overall UK citation impact, corroborating conclusions from an earlier study. The implications of these findings for the use of citation analysis in the RAE are briefly discussed.

Restricted access

Abstract  

This paper discusses the development and application of journal impact indicators in a number of bibliometric studies commissioned by Dutch organizations and institutions and conducted at our institute during the past five years. An outline is given of the research questions addressed in these studies and of their policy context. For each study, the appropriateness of using the journal impact indicators produced by the Institute for Scientific Information (ISI) is evaluated. Alternative journal impact measures were developed and are shown to be more appropriate in the particular research and policy contexts than the ISI measures; users considered these measures highly useful. The studies also revealed methodological flaws in the ISI journal impact factors.

Restricted access

Abstract

The empirical question addressed in this contribution is: How does the relative frequency at which authors in a research field cite ‘authoritative’ documents in the reference lists in their papers vary with the number of references such papers contain? ‘Authoritative’ documents are defined as those that are among the ten percent most frequently cited items in a research field. It is assumed that authors who write papers with relatively short reference lists are more selective in what they cite than authors who compile long reference lists. Thus, by comparing in a research field the fraction of references of a particular type in short reference lists to that in longer lists, one can obtain an indication of the importance of that type. Our analysis suggests that in basic science fields such as physics or molecular biology the percentage of ‘authoritative’ references decreases as bibliographies become shorter. In other words, when basic scientists are selective in referencing behavior, references to ‘authoritative’ documents are dropped more readily than other types. The implications of this empirical finding for the debate on normative versus constructive citation theories are discussed.
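The comparison described above can be sketched in a few lines. The ten-percent threshold follows the abstract's definition of 'authoritative' documents; the cutoff between short and long reference lists and all data are illustrative assumptions:

```python
def authoritative_documents(citation_counts, top_fraction=0.10):
    """'Authoritative' documents: the items among the ten percent most
    frequently cited in the field (per the abstract's definition)."""
    ranked = sorted(citation_counts, key=citation_counts.get, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return set(ranked[:cutoff])

def authoritative_share_by_length(papers, authoritative, short_max=10):
    """Fraction of references pointing to authoritative documents,
    tallied separately for short and long reference lists (the cutoff
    of 10 references is an illustrative assumption)."""
    tallies = {"short": [0, 0], "long": [0, 0]}  # [authoritative refs, all refs]
    for refs in papers:
        bucket = tallies["short"] if len(refs) <= short_max else tallies["long"]
        bucket[0] += sum(1 for r in refs if r in authoritative)
        bucket[1] += len(refs)
    return {k: hits / total for k, (hits, total) in tallies.items() if total}
```

The paper's finding for basic science fields would correspond to the 'short' share coming out below the 'long' share.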

Restricted access

Abstract  

Methods were developed to allow quality assessment of academic research in linguistics in all sub-disciplines. Data were obtained from samples of respondents from Flanders and the Netherlands, as well as a world-wide sample, who evaluated journals, publishers, and scholars. Journals and publishers were ranked by several methods. First, we weighted the number of times journals or publishers were ranked as 'outstanding', 'good', or 'occasionally/not at all good'. To reduce the influence of unduly positive or negative biases of respondents, the most extreme ratings were trimmed. A second weight reflects the (international) visibility of journals and publishers. Here, journals or publishers nominated by respondents from various countries or samples received a greater weight than journals or publishers nominated by respondents from one country or one sample only. Thirdly, a combined index reflects both quality and international visibility. Its use is illustrated on the output of scholars in linguistics. Limitations and potentials for application of bibliometric methods in output assessments are discussed.
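The first ranking method — weighted rating counts with the most extreme ratings trimmed — might look as follows. The numeric weights and the choice of trimming one rating per end are illustrative assumptions; the abstract only states that the three rating levels were weighted and extremes trimmed:

```python
def trimmed_quality_score(ratings, trim=1):
    """Weighted quality score for a journal or publisher from respondent
    ratings. Trimming the highest and lowest ratings damps unduly
    positive or negative respondents. The 3/2/1 weights are assumed
    here for illustration, not taken from the paper."""
    weights = {"outstanding": 3, "good": 2, "occasionally/not at all good": 1}
    scores = sorted(weights[r] for r in ratings)
    if trim and len(scores) > 2 * trim:
        scores = scores[trim:-trim]  # drop the most extreme ratings
    return sum(scores) / len(scores)

# Five hypothetical respondent ratings for one journal:
ratings = ["outstanding", "good", "good",
           "occasionally/not at all good", "outstanding"]
print(trimmed_quality_score(ratings))
```

The second (visibility) weight would then multiply this score by how many distinct countries or samples nominated the venue.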

Restricted access