Search Results

You are looking at 1 - 5 of 5 items for

  • Author or Editor: Peder Larsen

Abstract  

The proceedings of the ISSI conferences in Stockholm, 2005, and Madrid, 2007, contain 85 contributions based on publication counting. The methods used in these contributions have been analyzed. The counting methods used are stated explicitly in 26 contributions and can be derived implicitly from the discussion of methods in 10 contributions. Only five contributions justify the choice of method, and only one reports the different results obtained by using different methods. In seven contributions, the non-additive results from whole counting cause problems in the calculation of shares, but these problems are not mentioned. Only 11 contributions give a term (or terms) for the counting method(s) used. To illustrate the problems, 11 of the contributions are discussed in detail. The conclusion is that 40 years of publication counting have resulted neither in general agreement on definitions of methods and terminology nor in any kind of standardization.
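The share-calculation problem mentioned above can be made concrete. A minimal sketch (hypothetical data, not from the paper): under whole counting, a co-published paper is credited in full to every participating country, so national "shares" of the publication total can sum to more than 100%.

```python
# Illustrative sketch (hypothetical data): why whole counting is non-additive.
# Each publication is the set of countries of its authors; whole counting
# credits every participating country with the full publication.

publications = [
    {"DK", "SE"},   # Danish-Swedish co-publication
    {"DK"},
    {"SE", "DE"},
    {"DE"},
]

def whole_counts(pubs):
    counts = {}
    for countries in pubs:
        for c in countries:
            counts[c] = counts.get(c, 0) + 1
    return counts

counts = whole_counts(publications)
shares = {c: n / len(publications) for c, n in counts.items()}

print(counts)                   # each country is credited twice
print(sum(shares.values()))     # 1.5 -> whole-counting "shares" sum to 150%
```

Because the co-published papers are counted twice (once per country), dividing each national count by the total number of publications no longer yields shares that sum to one.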

Restricted access

Abstract  

The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is, publication in peer-reviewed journals, is still increasing, although there are big differences between fields, and there is no indication that the growth rate has decreased in the last 50 years. At the same time, publication through new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases, which means that SCI covered a decreasing share of the traditional scientific literature. There are also clear indications that SCI coverage is especially low in some of the scientific areas with the highest growth rates, including computer science and the engineering sciences. The role of conference proceedings, open access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this is only partially reflected in the databases. The new publication channels challenge the use of the big databases for measuring scientific productivity or output and the growth rate of science. Because of this challenge and the declining coverage, it is problematic that SCI has been, and still is, used as the dominant source for science indicators based on publication and citation numbers. The limited data available for the social sciences show that the growth rate in SSCI was remarkably low and indicate that SSCI coverage declined over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and the Arts and Humanities Citation Index (AHCI); the declining coverage of these citation databases therefore calls the use of this source into question.

Open access

Summary

For all rankings of countries' research output based on numbers of publications or citations relative to population, GDP, R&D and public R&D expenditure, or other national characteristics, the counting method is decisive. Total counting (full credit to a country when at least one of the authors is from that country) and fractional counting (a country receives a fraction of full credit equal to the fraction of its authors on a publication) give widely different results. Counting methods must be stated, rankings based on different counting methods cannot be compared, and fractional counting is to be preferred.
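The two counting methods defined above can be sketched side by side. A minimal sketch with hypothetical author-affiliation data (country codes and publication lists are invented for illustration):

```python
# Sketch of the two counting methods described above (hypothetical data).
# Total counting: a country gets full credit if at least one author is from it.
# Fractional counting: a country's credit equals the fraction of its authors.

from collections import defaultdict

# Each publication as a list of author affiliations (one country per author).
publications = [
    ["US", "US", "DK"],          # 3 authors: 2 from the US, 1 from Denmark
    ["DK"],
    ["US", "DE", "DE", "DE"],
]

total = defaultdict(float)
fractional = defaultdict(float)

for authors in publications:
    for country in set(authors):      # full credit, at most once per country
        total[country] += 1
    for country in authors:           # 1/n credit per author
        fractional[country] += 1 / len(authors)

for c in sorted(total):
    print(c, total[c], round(fractional[c], 3))
# DE 1.0 0.75
# DK 2.0 1.333
# US 2.0 0.917
```

Note that the fractional scores sum exactly to the number of publications (the method is additive), whereas the total-counting scores sum to 5 for only 3 publications, which is why rankings based on the two methods cannot be compared.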

Restricted access
Scientometrics
Authors: Marianne Gauffriau, Peder Larsen, Isabelle Maye, Anne Roulin-Perriard and Markus von Ins

Abstract  

The literature on publication counting demonstrates the use of various terminologies and methods. In many scientific publications, no information at all is given about the counting methods used. There is a lack of knowledge and agreement about the sort of information provided by the various methods, about their theoretical and technical limitations, and about the size of the differences obtained by using them. The need for precise definitions and terminology has been expressed repeatedly, but without success. Counting methods for publications are defined and analysed with the use of set and measure theory. The analysis depends on definitions of basic units of analysis (three chosen for examination), objects of study (three chosen for examination) and score functions (five chosen for examination). The score functions define five classes of counting methods. However, in a number of cases different combinations of basic units of analysis, objects of study and score functions give identical results. The result is therefore a characterization of 19 counting methods: five complete counting methods, five complete-normalized counting methods, two whole counting methods, two whole-normalized counting methods, and five straight counting methods. When scores for objects of study are added, the value obtained can be identical with or higher than the score for the union of the objects of study. Some classes of counting methods, including the complete, complete-normalized and straight counting methods, are therefore additive; others, including the whole and whole-normalized counting methods, are non-additive. An analysis is presented of the differences between scores obtained by different score functions, and therefore of the differences obtained by different counting methods.
In this analysis we introduce a new kind of object of study, the class of cumulative-turnout networks, which contain full information on cooperation. The cumulative-turnout network for an object of study comprises all authors, institutions or countries contributing to the publications of that author, institution or country. The analysis leads to an interpretation of the results of score functions and to the definition of new indicators for scientific cooperation. We also define a number of related networks: internal cumulative-turnout networks, external cumulative-turnout networks, underlying networks, internal underlying networks and external underlying networks. These networks open new opportunities for quantitative studies of scientific cooperation.
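A cumulative-turnout network as described above can be sketched at the country level. A minimal sketch under hypothetical data (the country codes and publication sets are invented; the function name is ours, not the paper's):

```python
# Hypothetical sketch of a country-level cumulative-turnout network:
# for a focal country, the set of all countries appearing on the
# publications that the focal country participates in.

publications = [
    {"CH", "FR"},
    {"CH", "DE", "US"},
    {"FR", "US"},
    {"CH"},
]

def cumulative_turnout(pubs, focal):
    """Union of the country sets of all publications involving `focal`."""
    network = set()
    for countries in pubs:
        if focal in countries:
            network |= countries
    return network

net = cumulative_turnout(publications, "CH")
print(sorted(net))            # ['CH', 'DE', 'FR', 'US']
print(sorted(net - {"CH"}))   # external network: ['DE', 'FR', 'US']
```

Removing the focal country itself, as in the last line, corresponds to the external variant of the network; the paper's internal and underlying variants would restrict or relax the membership condition analogously.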

Restricted access
Scientometrics
Authors: Marianne Gauffriau, Peder Larsen, Isabelle Maye, Anne Roulin-Perriard and Markus von Ins

Abstract  

Using a publication database established at CEST and covering the period from 1981 to 2002, the differences in national scores obtained by different counting methods have been measured; the results are supported by analysing data from the literature. Special attention has been paid to the comparison between the EU and the USA. There are big differences between scores obtained by different methods: in one instance, the reduction in score from whole to complete-normalized (fractional) counting is 72 per cent. In the literature, not enough information is often given about the methods used, and there is no sign of a clear and consistent terminology or of agreement on the properties of, and results from, the different methods. In fact, whole counting favours certain countries, especially those with a high level of international cooperation. The problems are growing over time because of ever-increasing national and international cooperation in research and the increasing average number of authors per publication. The need for a common understanding and a joint effort to rectify the situation is stressed.

Restricted access