Search Results
their subunits can be rewarded accordingly. Bibliometric indicators of publication output, like the efficiency indicator referred to above, and citation impact are increasingly used. Such indicators offer the possibility of high transparency, as well as
Abstract
One of the most crucial points of citation-based assessments is to find proper reference standards to which the otherwise meaningless plain citation counts can be compared. Using such standards, mere absolute numbers can be turned into relative indicators, suitable for cross-national and cross-field comparisons. In the present study, three possible choices of reference standards for citation assessments are discussed. Citation rates of publications under study can be compared to the average citation rates of the papers of the publishing journals to result in the Relative Citation Rate (RCR), an indicator successfully used in several comparative scientometric analyses (see, e.g., Refs 1–5). A more customized reference set is defined by the related records in the new CD Edition of the Science Citation Index database. Using the so-called bibliographic coupling technique, a set of papers with a high measure of similarity in their lists of references is assigned to every single paper of the database. Besides being an excellent retrieval tool, related records provide a suitable reference set to assess the relative standing of a given set of papers as measured by citation indicators. The third choice introduced in this study is specifically designed for assessing journals. For this purpose, the set of journals cited by the journal in question seems to be a useful basis for comparison. The pros and cons of the three choices are discussed and several examples are given.
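As a rough illustration of the indicator: the RCR of a paper set is commonly computed as the ratio of the citations the papers actually received to the citations expected from the average citation rates of their publishing journals. The following minimal Python sketch assumes both quantities are already available per paper; the function name and data are hypothetical.

    # Minimal sketch of a Relative Citation Rate (RCR) computation.
    # Each paper is a (citations_received, journal_mean_citation_rate)
    # pair; both inputs are hypothetical and assumed precomputed.
    def relative_citation_rate(papers):
        observed = sum(cites for cites, _ in papers)  # citations actually received
        expected = sum(mean for _, mean in papers)    # citations expected from journal averages
        return observed / expected if expected else None

    # Example: RCR > 1 means the set is cited above its journals' averages.
    print(relative_citation_rate([(12, 8.0), (3, 4.5), (10, 6.0)]))  # ~1.35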
Abstract
The small size of institutes and publication clusters is a problem when determining citation indices. To improve the citation indexing of small sets of publications (fewer than 50 or 100 publications), a method is proposed. In addition, a method for error calculation is given for large sets of publications, for which the classical methods of citation indexing remain valid.
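The abstract does not spell out the error calculation; for large sets, one conventional choice (not necessarily the method proposed above) is to attach a standard error of the mean to the average citation rate, as in this Python sketch with hypothetical counts.

    # Generic error estimate for the mean citation rate of a large set:
    # the standard error of the mean. Illustrative only; not necessarily
    # the specific error-calculation method of the paper summarized above.
    import math

    def mean_citation_rate_with_error(citations):
        n = len(citations)
        mean = sum(citations) / n
        var = sum((c - mean) ** 2 for c in citations) / (n - 1)  # sample variance
        return mean, math.sqrt(var / n)                          # (mean, standard error)

    counts = [0, 2, 5, 1, 3, 8, 0, 4, 2, 6]  # hypothetical citation counts
    mean, err = mean_citation_rate_with_error(counts)
    print(f"{mean:.2f} +/- {err:.2f}")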
Abstract
Background: Citation analysis for evaluative purposes typically requires normalization against some control group of similar papers. Selection of this control group is an open question. Objectives: Gain a better understanding of control group requirements for credible normalization. Approach: Performed citation analysis on prior publications of two proposing research units, to help estimate team research quality. Compared citations of each unit's publications to citations received by thematically and temporally similar papers. Results: Identification of thematically similar papers was very complex and labor intensive, even with relatively few control papers selected. Conclusions: A credible citation analysis for determining performer or team quality should have the following components: – Multiple technical experts to average out individual bias and subjectivity; – A process for comparing performer or team output papers with a normalization base of similar papers; – A process for retrieving a substantial fraction of candidate normalization base papers; – Manual evaluation of many candidate normalization base papers to obtain high thematic similarity and statistical representation.
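The normalization step described above (comparing each output paper against thematically and temporally similar papers) reduces to a simple ratio once the control group is chosen, which the study identifies as the labor-intensive part. A minimal Python sketch with hypothetical data:

    # Sketch of citation normalization against matched control groups:
    # each paper's citation count is divided by the mean citations of its
    # thematically and temporally similar control papers (hypothetical data).
    def normalized_scores(unit_papers, control_groups):
        scores = []
        for cites, controls in zip(unit_papers, control_groups):
            baseline = sum(controls) / len(controls)  # expected citations for this paper
            scores.append(cites / baseline if baseline else None)
        return scores  # score > 1: paper outperforms its control group

    print(normalized_scores([14, 3], [[5, 9, 7], [4, 6, 5]]))  # [2.0, 0.6]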
Abstract
The application of methods of quantitative analysis makes it possible to evaluate the impact of scientific journals on one another. These methods are used to determine the significance of similar scientific journals by their cross-citations, taking into account data from the Journal Citation Reports (JCR). They also help to improve the Journal Citation Reports structure and widen its uses for the evaluation of scientific journals. The above methods are applied to analyse critically the principles of ranking journals in package 1 and the tabular contents of JCR's packages 2 and 3, as well as to study frequency distributions of the journals both in time and space.
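One classic way to turn such cross-citation data into influence measures (offered here only as a related illustration, not as the exact method above) is an eigenvector weighting in which a journal counts as influential when it is cited by influential journals. A minimal power-iteration sketch in Python, with a hypothetical three-journal matrix:

    # Eigenvector-style influence weights over a journal cross-citation
    # matrix C, where C[i][j] = citations from journal i to journal j.
    # Generic illustration; not the specific JCR-based procedure above.
    def influence_weights(C, iterations=100):
        n = len(C)
        w = [1.0 / n] * n
        for _ in range(iterations):
            # a journal inherits weight from its citing journals, in
            # proportion to where those journals' references go
            new = [sum(w[i] * C[i][j] / sum(C[i]) for i in range(n))
                   for j in range(n)]
            total = sum(new)
            w = [x / total for x in new]
        return w

    C = [[0, 30, 10], [20, 0, 5], [15, 10, 0]]  # hypothetical cross-citations
    print(influence_weights(C))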
Abstract
Bibliographic records are extensively used in the study of citations. Based on ISI data, this paper examines citation patterns of the publications of South African scientists in recent years. In particular, the paper focuses on how citations relate to the collaborative dimensions of South African scientists' publications. The study reveals that the number of citations received by a publication varies not only according to the collaboration but also to the type of collaboration of the authors involved in its production. Furthermore, it emerges that the citation impact of publications differs from discipline to discipline and from affiliating sector to sector, regardless of collaboration.
Abstract
Citation-based measures of research interactivity are derived starting from the array of bibliographic intercitations known as the citation matrix. These measures may be applied to any publishing aggregates, such as journals, fields of research or nations, and are size-normalized, providing size-independent measures of interactivity. Interactivity measures are defined for pairs of units, for a unit within a system, and for a system as a whole.
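To make the size-normalization idea concrete, the sketch below defines one plausible pairwise interactivity measure: observed intercitations between two units relative to what their overall citing and cited volumes would predict. This is an illustrative formalization under stated assumptions, not the paper's exact definition.

    # One plausible size-normalized interactivity measure for a pair of
    # units i, j, given a citation matrix C (C[i][j] = citations from
    # unit i to unit j). Illustrative only; not the paper's formula.
    def interactivity(C, i, j):
        total = sum(sum(row) for row in C)                # all citations in the system
        observed = C[i][j] + C[j][i]                      # intercitations between i and j
        out_i, out_j = sum(C[i]), sum(C[j])               # citing volumes
        in_i = sum(row[i] for row in C)                   # cited volumes
        in_j = sum(row[j] for row in C)
        expected = (out_i * in_j + out_j * in_i) / total  # size-based expectation
        return observed / expected if expected else None  # > 1: above expectation

    C = [[0, 12, 3], [10, 0, 2], [4, 1, 0]]  # hypothetical citation matrix
    print(interactivity(C, 0, 1))  # ~1.94: units 0 and 1 strongly interactive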
Abstract
In honor of the centennial of the American Astronomical Society, we asked 53 senior astronomers to select what they thought were the most important papers published in the Astronomical Journal or Astrophysical Journal during this century. This selection of important papers gives us the opportunity to determine whether important papers invariably produce high citation counts. We compared those papers with control papers that appeared immediately before and after the important papers. We found that the important papers published before 1950 produced, on average, 11 times as many citations as the controls, and those published after 1950, 5.1 times as many. Of the important papers, 92% produced more citations than the average for the control papers. Therefore, important papers almost invariably produce many more citations than others, and citation counts are good measures of importance or usefulness. An appraisal of the 53 papers shows that three are primarily useful collections of data or descriptions, 46 are fundamental studies giving important results, and four are both useful and fundamental. The lifetimes of the 53 important papers average 2.5 times longer than those of the controls. The ages of the authors of these important papers ranged from 23 to 70, with a mean of 39±11 years, indicating that astronomers can write important papers at any age.
inform researchers and policy makers, as well as laypersons. The present study uses an author co-citation analysis (ACA) approach to identify major specialties, laboratories, researchers, and research groups in the stem cell research field, and to
Summary
The present paper addresses the objective of developing forward indicators of research performance using bibliometric information on the UK science base. Most research indicators rely primarily on historical time series relating to inputs to, activity within, and outputs from the research system. Policy makers wish to be able to monitor changing research profiles in a more timely fashion, the better to determine where new investment is having the greatest effect. Initial (e.g. 12 months from publication) citation counts might be useful as a forward indicator of the long-term (e.g. 10 years from publication) quality of research publications, but - although there is literature on citation-time functions - no study to evaluate this specifically has been carried out by Thomson ISI or any other analysts. Here, I describe the outcomes of a preliminary study to explore these citation relationships, drawing on the UK National Citation Report held by Evidence Ltd under licence from Thomson ISI for OST policy use. Annual citation counts typically peak at around the third year after publication. I show that there is a statistically highly significant correlation between initial (years 1-2) and later (years 3-10) citations in six research categories across the life and physical sciences. The relationship holds over a wide range of initial citation counts. Papers that attract more than a definable but field-dependent threshold of citations in the initial period after publication are usually among the top 1% (the most highly cited papers) for their field and year. Some papers may take off slowly but can later join the high-impact group. It is important to recognise that the statistical relationship is applicable to groups of publications. The citation profiles of individual articles may be quite different. Nonetheless, it seems reasonable to conclude that leading indicators of research excellence could be developed. This initial study should now be extended across a wider range of fields to test the initial outcomes: earlier papers suggest the model holds in economics. Additional statistical tests should be applied to explore and model the relationship between initial, later and total citation counts, and thus to create a general tool for policy application.
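The core statistical test described (correlating initial with later citation counts across a group of papers) can be sketched as follows; the data are hypothetical, and Pearson correlation is used as one reasonable choice among several.

    # Sketch of the early-vs-later citation correlation discussed above:
    # Pearson r between years 1-2 and years 3-10 counts for a group of
    # papers. Hypothetical data; the relationship holds for groups of
    # publications, not for individual articles.
    import math

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    early = [1, 4, 0, 9, 2, 15, 3]     # citations in years 1-2 (hypothetical)
    later = [5, 18, 2, 40, 7, 70, 11]  # citations in years 3-10 (hypothetical)
    print(f"r = {pearson(early, later):.3f}")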