Search Results

You are looking at 1 - 5 of 5 items for

  • Author or Editor: Markus von Ins

Abstract  

The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is, publication in peer-reviewed journals, is still increasing, although there are large differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time, publication through new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases, which means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that SCI coverage is especially low in some of the scientific areas with the highest growth rates, including computer science and the engineering sciences. The role of conference proceedings, open access archives and publications published directly on the web is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases for measuring scientific productivity or output and the growth rate of science. Because of this declining coverage and this challenge, it is problematic that SCI has been, and still is, used as the dominant source for science indicators based on publication and citation counts. The limited data available for the social sciences show that the growth rate in SSCI was remarkably low and indicate that SSCI coverage was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and the Arts and Humanities Citation Index (AHCI). The declining coverage of these citation databases therefore calls the use of this source into question.
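The growth rates discussed above can be made concrete with a short calculation: given publication counts at two points in time, the implied compound annual growth rate and doubling time follow directly. The counts below are illustrative assumptions, not figures from the study.

```python
import math

def annual_growth_rate(n_start: float, n_end: float, years: int) -> float:
    """Compound annual growth rate implied by two counts `years` apart."""
    return (n_end / n_start) ** (1.0 / years) - 1.0

def doubling_time(rate: float) -> float:
    """Years for output to double at a constant annual growth rate."""
    return math.log(2.0) / math.log(1.0 + rate)

# Hypothetical counts: a database indexing 1.0M records in 1987 and
# 2.0M in 2007 has grown at ~3.5% per year, doubling every 20 years.
r = annual_growth_rate(1_000_000, 2_000_000, 20)
print(f"growth rate: {r:.3%}, doubling time: {doubling_time(r):.1f} years")
```

Comparing such implied growth rates between a citation database and the underlying literature is one simple way to detect the declining coverage the abstract describes.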

Open access

Abstract  

The small size of institutes and publication clusters is a problem when determining citation indices. To improve the citation indexing of small sets of publications (fewer than 50 or 100 publications), a method is proposed. In addition, a method for error calculation is given for large sets of publications; here, the classical methods of citation indexing remain valid.

Restricted access
Scientometrics
Authors: Marianne Gauffriau, Peder Larsen, Isabelle Maye, Anne Roulin-Perriard and Markus von Ins

Abstract  

The literature on publication counting demonstrates the use of various terminologies and methods. Many scientific publications give no information at all about the counting methods used. There is a lack of knowledge and agreement about the kind of information provided by the various methods, about their theoretical and technical limitations, and about the size of the differences that result from using different methods. The need for precise definitions and terminology has been expressed repeatedly, but without success. Counting methods for publications are defined and analysed here with the use of set and measure theory. The analysis depends on definitions of basic units of analysis (three chosen for examination), objects of study (three chosen for examination) and score functions (five chosen for examination). The score functions define five classes of counting methods. However, in a number of cases different combinations of basic units of analysis, objects of study and score functions give identical results. The result is therefore a characterization of 19 counting methods: five complete counting methods, five complete-normalized counting methods, two whole counting methods, two whole-normalized counting methods, and five straight counting methods. When scores for objects of study are added, the value obtained can be identical to or higher than the score for the union of the objects of study. Some classes of counting methods, including the complete, complete-normalized and straight counting methods, are therefore additive; others, including the whole and whole-normalized counting methods, are non-additive. An analysis of the differences between scores obtained by different score functions, and therefore between different counting methods, is presented.
In this analysis we introduce a new kind of object of study, the class of cumulative-turnout networks, which contain full information on cooperation. The cumulative-turnout network of an author, institution or country consists of all authors, institutions or countries contributing to its publications. The analysis leads to an interpretation of the results of score functions and to the definition of new indicators for scientific cooperation. We also define a number of related networks: internal cumulative-turnout networks, external cumulative-turnout networks, underlying networks, internal underlying networks and external underlying networks. These networks open new opportunities for quantitative studies of scientific cooperation.
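The five classes of score functions named in the abstract (complete, complete-normalized, whole, whole-normalized and straight counting) can be sketched as follows for country-level counting. The toy publication list and country codes are illustrative assumptions, not data from the paper; each publication is represented by the list of its authors' countries.

```python
from collections import Counter

# One entry per author; order matters only for straight counting.
pubs = [
    ["DK", "DK", "CH"],       # 3 authors: two Danish, one Swiss
    ["CH"],                   # single-author Swiss paper
    ["US", "DK", "US", "US"], # 4 authors from two countries
]

def complete(pubs):
    """Each country scores 1 per author from that country."""
    score = Counter()
    for authors in pubs:
        score.update(authors)
    return score

def complete_normalized(pubs):
    """Fractional counting: each author contributes 1/len(authors)."""
    score = Counter()
    for authors in pubs:
        for c in authors:
            score[c] += 1.0 / len(authors)
    return score

def whole(pubs):
    """Each country on the paper scores 1, however many of its authors."""
    score = Counter()
    for authors in pubs:
        score.update(set(authors))
    return score

def whole_normalized(pubs):
    """Each participating country scores 1/(number of countries)."""
    score = Counter()
    for authors in pubs:
        countries = set(authors)
        for c in countries:
            score[c] += 1.0 / len(countries)
    return score

def straight(pubs):
    """Only the first author's country scores 1."""
    return Counter(authors[0] for authors in pubs)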

Restricted access
Scientometrics
Authors: Marianne Gauffriau, Peder Larsen, Isabelle Maye, Anne Roulin-Perriard and Markus von Ins

Abstract  

Using a database of publications established at CEST and covering the period 1981 to 2002, the differences in national scores obtained by different counting methods have been measured. The results are supported by analysing data from the literature. Special attention has been paid to the comparison between the EU and the USA. There are large differences between scores obtained by different methods; in one instance, the reduction in score from whole to complete-normalized (fractional) counting is 72 per cent. The literature often gives too little information about the methods used, and shows no sign of a clear and consistent terminology or of agreement on the properties of, and results from, different methods. Whole counting favours certain countries, especially those with a high level of international cooperation. The problems increase with time because of the ever-growing national and international cooperation in research and the rising average number of authors per publication. The need for a common understanding and a joint effort to rectify the situation is stressed.
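The gap between whole and fractional (complete-normalized) counting, and why whole counting favours countries with much international cooperation, can be shown with a toy case. The numbers below are assumed purely for illustration and are unrelated to the 72 per cent figure reported in the abstract.

```python
# Toy case: a small country contributes 1 author to each of 100 papers,
# every paper having 5 authors in total (the other 4 from abroad).
papers = 100
authors_per_paper = 5
own_authors_per_paper = 1

# Whole counting: 1 full credit per paper the country appears on.
whole_score = papers * 1
# Fractional counting: credit proportional to the country's author share.
fractional_score = papers * own_authors_per_paper / authors_per_paper

reduction = 100 * (whole_score - fractional_score) / whole_score
print(reduction)  # 80.0 — heavy cooperation inflates whole-counting scores
```

The more foreign co-authors a country's papers have, the larger this reduction becomes, which is why rising international cooperation makes the choice of counting method increasingly consequential.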

Restricted access
Scientometrics
Authors: Stefan Hornbostel, Susan Böhmer, Bernd Klingsporn, Jörg Neufeld and Markus von Ins

Abstract  

The German Research Foundation's (DFG) Emmy Noether Programme aims to fund excellent young researchers in the postdoctoral phase and, in particular, to open up an alternative to the traditional route to professorial qualification via the Habilitation (venia legendi). This paper evaluates the funding programme with a combination of methods: questionnaires, interviews, appraisals of the reviews, and bibliometric analyses. The key success criteria are the frequency of professorial appointments and excellent research performance demonstrated in the form of publications. Such postdoc programme evaluations have rarely been conducted before. In professional terms, approved applicants are clearly better placed, and personal career satisfaction is also higher among funding recipients. Concerning publications and citations, only minor performance differences could be identified between approved and rejected applicants. Nevertheless, we can confirm that, on average, the reviewers indeed selected the slightly better performers from a relatively homogeneous group of very high-performing applicants. However, a comparison between approved and rejected applicants did not show that participation in the programme had decisively influenced research performance in the examined fields of medicine and physics.

Restricted access