Search Results

You are looking at 1–10 of 243 items for:

  • "Bibliometric indicators"

scientific activities. A considerable part consisted of bibliometric indicators (National Science Board 1973; Elkana et al. 1978; Moed et al. 1992). The example set by the NSF was followed by a large number of countries in the second half of the 1980s

Restricted access

Abstract  

Given the current availability of different bibliometric indicators and of production and citation data sources, two questions immediately arise: do the indicators' scores differ when computed on different data sources? More importantly, do the indicator-based rankings change significantly when computed on different data sources? We provide a case study of computer science scholars and journals evaluated on the Web of Science and Google Scholar databases. The study concludes that Google Scholar yields significantly higher indicator scores than Web of Science. Nevertheless, citation-based rankings of both scholars and journals do not change significantly between the two data sources, while rankings based on the h index show a moderate degree of variation.
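A common way to quantify whether rankings change across data sources, as this abstract investigates, is a rank correlation coefficient. A minimal pure-Python sketch using Spearman's rho (the function name and the sample rankings are illustrative, not taken from the paper):

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for two rankings of the same items (no ties).

    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
    difference between an item's ranks in the two lists.
    """
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Illustrative: the same five scholars ranked on two data sources.
wos_ranks = [1, 2, 3, 4, 5]
gs_ranks = [1, 3, 2, 4, 5]
print(spearman_rho(wos_ranks, gs_ranks))  # close to 1: rankings barely change
```

A value near 1 indicates that the two sources order the scholars almost identically even if the absolute indicator scores differ, which is the distinction the abstract draws.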

Abstract  

The article deals with the various problems of implementing publication indicators at the departmental level in West German universities. The German university system relies mostly on social and informal control mechanisms. Bibliometric indicators can provide adequate information for effective social control in such a system. However, they will only be accepted and effective if they are valid, thoroughly reliable and robust. A successful adaptation of individual goals and behaviour depends largely on the particular interests and incentives of the faculty members across various departmental arrangements.

Abstract  

In a bibliometric study of nine research departments in the field of biotechnology and molecular biology, indicators of research capacity, output and productivity were calculated, taking into account the researchers' participation in scientific collaboration as expressed in co-publications. In a quantitative approach, rankings of departments based on a number of different research performance indicators were compared with one another. The results were discussed with members from all nine departments involved. Two publication strategies were identified, denoted as a quantity of publication and a quality of publication strategy, and two strategies with respect to scientific collaboration were outlined, one focusing on multi-lateral and a second on bi-lateral collaborations. Our findings suggest that rankings of departments may be influenced by specific publication and management strategies, which in turn may depend upon the phase of development of the departments or their personnel structure. As a consequence, differences in rankings cannot be interpreted merely in terms of quality or significance of research. It is suggested that the problem of assigning papers resulting from multi-lateral collaboration to the contributing research groups has not yet been solved properly, and that more research is needed into the influence of a department's state of development and personnel structure upon the values of bibliometric indicators. A possible implication at the science policy level is that different requirements should hold for departments of different age or personnel structure.

Abstract  

We have developed a method to obtain robust quantitative bibliometric indicators for several thousand scientists. This allows us to study the dependence of bibliometric indicators (such as number of publications, number of citations, Hirsch index...) on the age, position, etc. of CNRS scientists. Our data suggest that the normalized h-index (h divided by the career length) is not constant for scientists with the same productivity but different ages. We also compare the predictions of several bibliometric indicators on the promotions of about 600 CNRS researchers. Contrary to previous publications, our study encompasses most disciplines, and shows that no single indicator is the best predictor for all disciplines. Overall, however, the Hirsch index h provides the least bad correlations, followed by the number of papers published. It is important to realize, however, that even h recovers only half of the actual promotions. The number of citations and the mean number of citations per paper are definitely not good predictors of promotion. Due to space constraints, this paper is a short version of a more detailed article (Jensen et al., 2008b).
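The two indicators this abstract centers on can be computed directly from a scientist's citation list. A minimal sketch (function names and the example numbers are illustrative, not from the paper):

```python
def h_index(citations):
    """Hirsch index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

def normalized_h(citations, career_years):
    """h divided by career length, the normalization discussed in the abstract."""
    return h_index(citations) / career_years

# Example: six papers with these citation counts, over an 8-year career.
print(h_index([10, 8, 5, 4, 3, 0]))          # → 4
print(normalized_h([10, 8, 5, 4, 3, 0], 8))  # → 0.5
```

The abstract's observation is that this normalized value still varies with age for equally productive scientists, i.e. dividing by career length does not fully remove the age dependence.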

Abstract  

Bibliometric analyses of scientific publications provide quantitative information that enables evaluators to obtain a useful picture of a team's research visibility. In combination with peer judgements and other qualitative background knowledge, these analyses can serve as a basis for discussions about research performance quality. However, many mathematicians are not convinced that citation counts do in fact provide useful information in the field of mathematics. According to these mathematicians, citation and publication habits differ completely from scholarly fields such as chemistry or physics. Therefore, it is impossible to derive valid information regarding research performance from citation counts. The aim of this study is to obtain more insight into the significance of citation-based indicators in the field of mathematics. To what extent do citation scores mirror the opinions of experts concerning the quality of a paper or a journal? A survey was conducted to answer this question. "Top" journals, as qualified by experts, receive significantly higher citation rates than "good" journals. These "good" journals, in turn, have significantly higher scores than journals with the qualification "less good". Top publications, recorded in the ISI database, receive on average 15 times more citations than the mean score within the field of mathematics as a whole. In conclusion, the experts' views on top publications or top journals correspond very well to bibliometric indicators based on citation counts.

Abstract

Surnames have been used as a proxy in studies on health care for various ethnic groups and have also been applied to ascribe ethnicity in studies on the genetic structure of a population. The aim of this study was to use a surname-based bibliometric indicator to assess the representation of Jewish authors in US biomedical journals. A further aim was to test the hypothesis that the representation of Jewish authors in US biomedical journals corresponds to their representation among US Nobel Prize winners in Medicine, 1960–2009. From among articles published 1960–2009 in all journals covered by Medline (>5,000), and in the top 10 US biomedical journals, we counted articles by authors from the following three groups: Kohenic–Levitic surnames, other common Jewish surnames, and the most frequent non-Jewish surnames in the USA. The frequency of a surname in the US population (1990 US Census) was used to calculate the expected number of scientific publications: the total number of published articles multiplied by the surname's frequency. The actual number of articles with that surname was also determined. The ratio of the actual to the expected number of articles was used as a measure of representation proportionality. It was found that the ratio of actual to expected numbers of articles in both Jewish groups is close to 10 across all (>5,000) journals, and close to 20 in the top 10 journals. The ratio of actual to expected numbers of Jewish Nobel Laureates in the USA is also close to 20. In conclusion, the representation of Jewish authors in the top 10 US biomedical journals corresponds to the representation of Jewish Nobel Laureates among US laureates. We hypothesize that the disproportionate representation of Jewish scientists as authors in top biomedical journals and among Nobel Prize laureates in Medicine is mostly due to their overrepresentation as research participants, not because of increased chances of reward for a Jewish researcher per se.
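The actual-to-expected ratio this abstract describes is simple arithmetic: the expected article count for a surname is the total number of articles times that surname's population frequency. A sketch of the calculation (the function name and the numbers are illustrative, not the study's data):

```python
def representation_ratio(actual_articles, total_articles, surname_frequency):
    """Ratio of actual to expected articles for a surname group.

    expected = total published articles * frequency of the surname in the
    population (in the study, frequencies came from the 1990 US Census).
    """
    expected = total_articles * surname_frequency
    return actual_articles / expected

# Illustrative numbers only: a surname carried by 0.01% of the population,
# a corpus of 1,000,000 articles -> 100 expected articles; 1,000 actual
# articles yields a ratio of about 10 (the abstract's "close to 10").
print(representation_ratio(1_000, 1_000_000, 0.0001))  # ratio ≈ 10
```

A ratio of 1 would mean proportional representation; values well above 1, as reported in the abstract, indicate overrepresentation relative to population frequency.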

Abstract  

The results of a study evaluating the research performance of two Greek university Departments of Mathematics are presented. To achieve this, elements from the Sussex and Leiden methodologies of constructing and using bibliometric indicators were used. Comparison of the two groups was based on their similarities. The convergence-of-bibliometric-indicators procedure, as applied in the Leiden methodology, was used together with a number of new bibliometric indicators. The results show that bibliometric indicators, if applied properly, may give very interesting information on the research performance and the nature of the research carried out in university departments.

drugs in the top specialty journals. It remains to be seen whether the predictive value of this indicator is also good for new drug development in fields other than analgesics. In conclusion, a bibliometric indicator based on initial predominancy of

Abstract  

This contribution discusses basic technical-methodological issues with respect to data collection and the construction of bibliometric indicators, particularly at the macro or meso level. It focuses on the use of the Science Citation Index. Its aim is to highlight important decisions that have to be made in the process of data collection and the construction of bibliometric indicators. It illustrates differences in the methodologies applied by several important producers of bibliometric indicators: the Institute for Scientific Information (ISI); CHI Research, Inc.; the Information Science and Scientometrics Research Unit (ISSRU) in Budapest; and the Centre for Science and Technology Studies at Leiden University (CWTS). The observations made in this paper illustrate the complexity of the process of standardisation of bibliometric indicators. Moreover, they provide possible explanations for the divergence of results obtained in different studies. The paper concludes with a few general comments on the need for standardisation in the field of bibliometrics.
