A severe criticism of the use of citation indicators for the measurement of a research group's performance holds that these indicators reflect, at least partly, the size of the scientific activity in the subfield or topic in which the group works. In this contribution an attempt is made to substantiate this claim within the framework of Price's theory on the processes of knowledge growth. Empirical evidence is presented that among a number of subfields from the natural and life sciences significant differences exist with respect to Price's index, and that the citation scores of research groups tend to be high in subfields showing a high value of Price's index and other characteristics of reference patterns. These findings suggest that groups sharing an intellectual focus with other researchers tend to obtain higher citation scores than groups working more on their own.
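Price's index is conventionally computed as the share of a publication's cited references that are at most five years old at the time of citing; subfields with a fast-moving research front show high values. A minimal sketch of that computation (the reference years and the five-year window below are illustrative assumptions, not data from the study):

```python
def prices_index(reference_years, citing_year, window=5):
    """Share of cited references at most `window` years old.

    A value near 1 suggests work at an active research front;
    a low value suggests reliance on older, archival literature.
    """
    recent = sum(1 for y in reference_years if citing_year - y <= window)
    return recent / len(reference_years)

# Hypothetical reference list of a paper published in 1995:
ages = [1994, 1993, 1991, 1985, 1979, 1994, 1992, 1990]
print(round(prices_index(ages, 1995), 2))  # 0.75
```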
This contribution discusses basic technical-methodological issues in data collection and the construction of bibliometric indicators, particularly at the macro or meso level, focusing on the use of the Science Citation Index. Its aim is to highlight the important decisions that have to be made in this process. It illustrates differences in the methodologies applied by several important producers of bibliometric indicators: the Institute for Scientific Information (ISI); CHI Research, Inc.; the Information Science and Scientometrics Research Unit (ISSRU) at Budapest; and the Centre for Science and Technology Studies (CWTS) at Leiden University. The observations made in this paper illustrate the complexity of standardising bibliometric indicators and provide possible explanations for the divergence of results obtained in different studies. The paper concludes with a few general comments on the need for standardisation in the field of bibliometrics.
This study of multinational publication (publications involving authors from more than one country) focuses on a viable method of fractionation, which can be used in on-line bibliometric research. Fractionation occurs when the credit for co-authored papers is added only partially to the total of publications of countries or authors. We attempted to find an empirical relation between the share of a country's papers in a field that are multinationally co-authored and the degree of fractionation which results. A linear regression analysis yielded a significant correlation of –0.95. The fractionation method is the first that can be applied to publication data collected on-line. A comparison is made with fractionation by first-author (i.e., first-address) counting. Application of the method to British scientific output for 1984–1989 suggests that British output was stable. The fractionation method can be applied both to the natural and life sciences and to the social and behavioral sciences. Findings suggest that similar processes of multinational publication are prevalent in both types of science. Implications of the model are discussed.
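The difference between full (integer) credit and fractionated credit for multinationally co-authored papers can be sketched as follows; the country codes and paper set are hypothetical, and equal splitting among countries is one simple fractionation scheme, not necessarily the one fitted in the study:

```python
def integer_counts(papers):
    """Each country appearing on a paper receives full credit (1.0)."""
    totals = {}
    for countries in papers:
        for c in set(countries):
            totals[c] = totals.get(c, 0.0) + 1.0
    return totals

def fractional_counts(papers):
    """Credit for a co-authored paper is split equally among its countries."""
    totals = {}
    for countries in papers:
        unique = set(countries)
        share = 1.0 / len(unique)
        for c in unique:
            totals[c] = totals.get(c, 0.0) + share
    return totals

# Hypothetical set: one purely British paper and two multinational ones.
papers = [["GB"], ["GB", "NL"], ["GB", "NL", "US"]]
print(integer_counts(papers)["GB"])     # 3.0
print(fractional_counts(papers)["GB"])  # 1 + 0.5 + 1/3, about 1.83
```

The gap between the two totals grows with the share of multinational papers, which is the empirical relation the abstract describes.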
This article presents an exploratory analysis of publication delays in the science field. Publication delay is defined as the time period between submission of an article to a scientific journal and its publication. We obtained a first indication that these delays are longer for journals in the fields of mathematics and the technical sciences than they are in other fields of science. We suggest the use of data on publication delays in the analysis of the effects of electronic publishing on reference/citation patterns. A preliminary analysis on a small sample suggests that, under rather strict assumptions, the cited half-life of references may be reduced by a factor of about 2 if publication delays decrease radically.
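The cited half-life is essentially the median age of the references being cited. A minimal sketch of the mechanism the abstract points at, with hypothetical reference ages and an assumed uniform two-year reduction in delay:

```python
def cited_half_life(ages):
    """Median age (in years) of a list of cited-reference ages."""
    s = sorted(ages)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Hypothetical reference ages; a two-year drop in publication delay is
# modelled (crudely) as shifting every reference age down by two years.
ages = [1, 2, 3, 4, 5, 6, 8, 10]
shifted = [max(a - 2, 0) for a in ages]
print(cited_half_life(ages))     # 4.5
print(cited_half_life(shifted))  # 2.5
```

Under this toy assumption the half-life shrinks by roughly a factor of 2, illustrating how shorter publication delays could compress observed reference ages.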
An appropriate delimitation of scientific subfields constitutes one of the key problems in bibliometrics. Several methods have been explored for this task. The main ones are co-citation analysis, co-word analysis, the use of indexing systems based on controlled or uncontrolled keywords, and finally the use of a classification of scientific journals into subfields or categories. In our contribution we will explore a new method, which is based on cognitive words from addresses (corporate sources) in scientific publications. Cognitive address words are words referring to scientific (sub)fields, methods or objects of research that appear in the institutional affiliations of the publishing authors (e.g., Department of Pharmacology, AIDS Research Center). We will focus on the Science Citation Index (SCI), published by the Institute for Scientific Information. Our methods will be applied to a multidisciplinary set of articles extracted from the journals Science and Nature.
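The core of such a method is matching words in affiliation strings against a lexicon of field-denoting terms. A minimal sketch, in which the lexicon and the subfield labels are purely illustrative assumptions:

```python
import re

# Hypothetical lexicon mapping cognitive address words to subfields:
LEXICON = {
    "pharmacology": "Pharmacology",
    "aids": "Biomedical Research",
    "physics": "Physics",
    "geology": "Earth Sciences",
}

def classify_address(address):
    """Return subfields suggested by cognitive words in an affiliation."""
    words = re.findall(r"[a-z]+", address.lower())
    return sorted({LEXICON[w] for w in words if w in LEXICON})

print(classify_address("Dept. of Pharmacology, AIDS Research Center"))
# ['Biomedical Research', 'Pharmacology']
```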
This paper analyses the phenomenon in which a publication referring to the oeuvre of a research group (i.e. all the articles published by its members) cites several articles rather than one article from that oeuvre (multiple citations, MC). It is shown that significant differences exist between research groups with respect to the frequency at which MC to their respective oeuvres occur, and that these differences affect to some extent rankings of these groups based on citation counts. In order to find an explanation for our results, four factors are discussed: (1) the impact of a research group; (2) mutual multiple citing arrangements; (3) the size of a group's oeuvre; and (4) the degree of common intellectual interest between the research activities in a group. No definite conclusions can be drawn yet on the extent to which these factors are responsible for the observed patterns in the MC frequency. We conclude, however, that attempts to identify top or sub-top groups in comparative evaluations based on citation analysis should be performed with the greatest care.
It is shown that the Journal Impact Factor as published by ISI, an indicator increasingly used as a measure of the quality of scientific journals, is misleading when two leading journals in chemistry, Angew. Chem. and J. Am. Chem. Soc., are compared. A detailed analysis of the various kinds of publications in both journals over the period 1982–1994 shows that the overall impact factors based on publications and citations in two consecutive years for JACS communications (5.27 for 1993) are significantly higher than those of Angew. Chem. (3.26 for 1993). Even when all types of articles, i.e. including reviews, are included in the impact factors, JACS has a higher score than Angew. Chem. (5.07 vs. 4.03 in 1993). Critical and accurate analysis of citation figures is required when such data are used in science policy decisions, such as library subscriptions. It is proposed that when impact factor values for several journals are compared, only similar publication types be considered.
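The two-year impact factor criticised here is the number of citations received in year Y by items a journal published in years Y−1 and Y−2, divided by the number of citable items it published in those two years. A minimal sketch with hypothetical counts (not the actual JACS or Angew. Chem. data):

```python
def impact_factor(citations_in_y, items_y_minus_1, items_y_minus_2):
    """Two-year impact factor: citations in year Y to items from
    years Y-1 and Y-2, divided by citable items in those years."""
    return citations_in_y / (items_y_minus_1 + items_y_minus_2)

# Hypothetical journal: 1500 citations received in 1993 to the
# 150 + 135 citable items it published in 1991 and 1992.
print(round(impact_factor(1500, 150, 135), 2))  # 5.26
```

The abstract's point is that the numerator and denominator mix publication types; restricting both to comparable item types (e.g. communications only) changes the ranking.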
In a study of the Dutch publication output in physics we tested methods of delimiting fields by journal categories in the Science Citation Index (SCI), compared to the classification of individual publications into subfields in the subject-specific database Physics Briefs (PHYS). Different methods of measuring national scientific output were compared as well. In this paper we report the main findings on these issues, based on a study of six selected subfields in physics. The main conclusion with respect to the use of different classification methods is that in most of the selected fields in physics the method which delimits fields by journal categories yields an incomplete picture of the output of a country, particularly because this method neglects a considerable number of articles published in general journals. With respect to different methods of counting publications, the Dutch data in Physics Briefs corroborated that: (1) so-called integer-counted world shares are very much influenced by the degree of internationalisation, and (2) first-author counting gives a satisfactory approximation of fractional counting. Citation indicators based on first-author counting, however, may be distorted in fields with a large fraction of internationally co-authored publications.
This paper reviews a range of studies conducted by the authors on indicators reflecting scholarly journal impact. A critical examination of the journal impact data in the Journal Citation Reports (JCR), published by the Institute for Scientific Information (ISI), has shown that the JCR impact factor is inaccurate and biased towards journals revealing a rapid maturing or decline in impact. In addition, it was found that the JCR cited half-life is an inappropriate measure of the decline of journal impact. More appropriate impact measures of scholarly journals are proposed. A new classification system is explored, describing both the maturing and the decline of journal impact as measured through citations. Suggestions for future research are made, analysing in more detail the distribution of citations among papers in a journal.