Background: Citation analysis for evaluative purposes typically requires normalization against some control group of similar papers. Selection of this control group is an open question. Objectives: Gain a better understanding of control group requirements for credible normalization. Approach: Performed citation analysis on prior publications of two proposing research units to help estimate team research quality. Compared citations of each unit's publications to citations received by thematically and temporally similar papers. Results: Identification of thematically similar papers was very complex and labor intensive, even with relatively few control papers selected. Conclusions: A credible citation analysis for determining performer or team quality should have the following components:
– Multiple technical experts, to average out individual bias and subjectivity;
– A process for comparing performer or team output papers with a normalization base of similar papers;
– A process for retrieving a substantial fraction of candidate normalization base papers;
– Manual evaluation of many candidate normalization base papers, to obtain high thematic similarity and statistical representation.
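The normalization the abstract describes can be sketched in a few lines: divide a paper's citation count by the mean count of its manually vetted control group. This is a minimal illustration under assumed numbers; the citation counts and control-group size below are hypothetical, not taken from the study.

```python
# Sketch of control-group citation normalization (hypothetical data).
from statistics import mean

def normalized_citation_score(paper_citations, control_citations):
    """Ratio of a paper's citation count to the mean of its control group.

    A score above 1.0 suggests above-average impact relative to
    thematically and temporally similar papers; below 1.0, below-average.
    """
    baseline = mean(control_citations)
    if baseline == 0:
        raise ValueError("control group has no citations; baseline undefined")
    return paper_citations / baseline

# Hypothetical counts: a unit's paper with 24 citations, compared against
# five manually evaluated control papers from the same topic and year.
score = normalized_citation_score(24, [10, 15, 8, 20, 7])
print(round(score, 2))  # 24 / 12.0 = 2.0
```

The abstract's conclusions bear directly on the quality of `control_citations`: the ratio is only as credible as the thematic similarity and size of the control group behind it.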
A comprehensive discussion on the use of citation analysis to rate scientific performance and the controversy surrounding it. The general adverse criticism that citation counts include an excessive number of negative citations (citations to incorrect results worthy of attack), self-citations (citations to the works of the citing authors), and citations to methodological papers is analyzed. Included are a discussion of measurement problems such as counting citations for multiauthored papers, distinguishing between more than one person with the same last name (homographs), and what it is that citation analysis actually measures. It is concluded that as the scientific enterprise becomes larger and more complex, and its role in society more critical, it will become more difficult, expensive and necessary to evaluate and identify the largest contributors. When properly used, citation analysis can introduce a useful measure of objectivity into the evaluation process at relatively low financial cost.
This paper examines a number of the criticisms that citation analysis has been subjected to over the years. It is argued that
many of these criticisms have been based on only limited examinations of data in particular contexts and it remains unclear
how broadly applicable these problems are to research conducted at different levels of analysis, in specific fields, and among
various national data sets. Relevant evidence is provided from analysis of Australian and international data. Citation analysis
is likely to be most reliable when data is aggregated and at the highly-cited end of the distribution. It is possible to make
valid inferences about individual cases, although considerable caution should be used. Bibliometric measures should be viewed
as a useful supplement to other research evaluation measures rather than as a replacement.
Authors: Ji-ping Gao, Kun Ding, Li Teng, and Jie Pang
Since Narin and his colleagues pioneered and further developed ‘patent citation analysis’ (Narin and Noma 1985; Narin et al. 1997; Narin and Olivastro 1998), an optimal method to analyze the interaction
The purpose of this study is to map semiconductor literature using journal co-citation analysis. The journal sample was gathered
from the INSPEC database from 1978 to 1997. In the co-citation analysis, the data compiled were counts of the number of times
two journal titles were jointly cited in later publications. It is assumed that the more two journals are cited together,
the closer the relationship between them. The journal set used was the 30 most productive journals in the field of semiconductors.
Counts of co-citations to the set of semiconductor journals were retrieved from SciSearch database, accessed through Dialog.
Cluster analysis and multi-dimensional scaling were employed to create two-dimensional maps of journal relationships in the
cross-citation networks. The following results were obtained through this co-citation study: The 30 journals fall fairly clearly
into three clusters. The major cluster of journals, containing 17 titles, is in the subject of physics. The second cluster,
consisting of 9 journals, includes journals primarily on material science. The remaining cluster represents research areas
in the discipline of electrical and electronic engineering. All co-cited journals share similar co-citation profiles, reflected
in high positive Pearson correlations. Two hundred and ninety-six pairs (68%) correlate at greater than 0.70, indicating a strong relationship among semiconductor journals. The five journals appearing in pairs with co-citation frequencies over 100,000 are Physical Review B, Condensed Matter; Physical Review Letters; Applied Physics Letters; Journal of Applied Physics; and Solid State Communications.
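The counting step at the core of journal co-citation analysis is simple to sketch: for every citing paper, tally each pair of journals that appear together in its reference list. The journal names and reference lists below are illustrative toy data, not the study's INSPEC sample, and in practice the resulting counts would feed Pearson correlation, cluster analysis, and multi-dimensional scaling.

```python
# Minimal sketch of journal co-citation counting (illustrative data).
from collections import Counter
from itertools import combinations

# Each citing paper is represented by the set of journals it cites.
citing_papers = [
    {"Phys. Rev. B", "Phys. Rev. Lett.", "Appl. Phys. Lett."},
    {"Phys. Rev. B", "Appl. Phys. Lett.", "J. Appl. Phys."},
    {"Phys. Rev. B", "Phys. Rev. Lett."},
    {"J. Appl. Phys.", "Appl. Phys. Lett."},
]

cocitations = Counter()
for refs in citing_papers:
    # Sort so each unordered journal pair maps to one canonical key.
    for a, b in combinations(sorted(refs), 2):
        cocitations[(a, b)] += 1

for pair, count in cocitations.most_common(3):
    print(pair, count)
```

The assumption "the more two journals are cited together, the closer their relationship" is exactly what these pair counts operationalize; the mapping step then turns each journal's row of counts into a profile vector for correlation and scaling.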
This paper reports the first results of the extension of citation analysis to 'non-source' items, which is one strand of an
extensive study of quantitative performance indicators used in the assessment of research. It would be presumptuous to draw
firm conclusions from this first foray into the realm of non-source citations; however, our analysis is based on an extensive
experimental database of over 30,000 publications, so the results can be viewed as strong pointers to possible generalised
outcomes. We show that it is possible to mine ISI databases for references to a comprehensive oeuvre of items from whole institutions.
Many types of publications are visible in the ISI data - books, book chapters, journals not indexed by ISI, and some conference
publications. When applied to the assessment of university departments, non-source citations can have a significant effect on rankings, though
this does not follow in all cases. The investment of time, effort, and money in a significantly extended analysis will not
be equally beneficial in all fields. However, a considerable amount of testing is required to confirm our initial results.
The purpose of this study is to map semiconductor literature by author co-citation analysis in order to highlight major subject
specializations in semiconductors and identify authors and their relationships within these specialties and within the field.
Forty-six of the most productive authors were included in the sample list. Author samples were gathered from the INSPEC database
from 1978 to 1997. The relatively low author co-citation frequencies indicate weak connections among authors who publish in semiconductor journals and large differences among their research areas. The six author pairs co-cited more than 100 times are M. Cardona and G. Lucovsky; T. Ito and K. Kobayashi; M. Cardona and G. Abstreiter; A. Y. Cho and
H. Morkoc; C. R. Abernathy and W. S. Hobson; H. Morkoc and I. Akasaki. The Pearson correlation coefficient of author co-citation
varies widely, i.e., from -0.17 to 0.92. This shows that some authors with high positive correlations are related in certain
ways and co-cited, while other authors with high negative correlations may be rarely or never related and co-cited. Cluster
analysis and multi-dimensional scaling are employed to create two-dimensional maps of author relationships in the cross-citation
networks. It is found that the authors fall fairly clearly into three clusters. The first cluster covers authors in physics
and its applications. The authors in the second group are experts in electrical and electronic engineering. The third group
includes specialists in materials science. Because of its interdisciplinary nature and diverse subjects, semiconductor literature
lacks a strong group of core authors. The field consists of several specialties around a weak center.
Authors: Christoph Neuhaus, Andreas Litscher, and Hans-Dieter Daniel
The database host STN International allows for extensive citation analysis in the SCISEARCH database (Science Citation Index
Expanded) and in the CAplus database (Chemical Abstracts). Along with its powerful browsing, searching and analyzing facilities,
STN International also features scripts. In this paper we examine the usefulness of the script language in the automation
of citation analysis in SCISEARCH and CAplus.
The present paper addresses some of the many possible uses of citations, including bookmark, intellectual heritage, impact
tracker, and self-serving purposes. The main focus is on the applicability of citation analysis as an impact or quality measure.
If a paper's bibliography is viewed as consisting of a directed (research impact or quality) component related to intellectual
heritage and random components related to specific self-interest topics, then for large numbers of citations from many different
citing papers, the most significant intellectual heritage (research impact or quality) citations will aggregate and the random
author-specific self-serving citations will be scattered and not accumulate. However, there are at least two limitations to
this model of citation analysis for stand-alone use as a measure of research impact or quality. First, the reference to intellectual
heritage could be positive or negative. Second, there could be systemic biases which affect the aggregate results, and one
of these, the “Pied Piper Effect”, is described in detail. Finally, the results of a short citation study comparing Russian
and American papers in different technical fields are presented. The questions raised in interpreting this data highlight
a few of the difficulties in attempting to interpret citation results without supplementary information.
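The directed-plus-random model described above can be illustrated with a toy simulation: each citing paper's bibliography combines a directed component (citations to a shared intellectual heritage) with a random, author-specific component. All parameters here are illustrative assumptions, not figures from the study.

```python
# Toy simulation of the directed + random citation model (assumed parameters).
import random

random.seed(42)

heritage_papers = ["H1", "H2", "H3"]      # shared intellectual heritage
pool = [f"P{i}" for i in range(200)]      # large pool of other papers

counts = {}
for _ in range(1000):                     # 1000 citing papers
    bibliography = list(heritage_papers)  # directed component: always cited
    bibliography += random.sample(pool, 10)  # random self-interest component
    for cited in bibliography:
        counts[cited] = counts.get(cited, 0) + 1

top = sorted(counts, key=counts.get, reverse=True)[:3]
print(top)  # heritage papers dominate the aggregate counts
```

Each heritage paper accumulates one citation per citing paper, while any single pool paper is expected to collect only about 1000 × 10/200 = 50, so aggregation separates the directed signal from the scattered random citations, exactly as the model predicts. The limitations noted above (negative citations, systemic biases such as the "Pied Piper Effect") are what this simple model leaves out.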
Leydesdorff (1998) addresses the history of citations and citation analysis, and the transformation of a reference mechanism into a purportedly
quantitative measure of research impact/quality. The present paper examines different facets of citations and citation analysis,
and discusses the validity of citation analysis as a useful measure of research impact/quality.
This paper is an investigation of the knowledge sources of Korean innovation studies using citation analysis, based on a Korean
database during 1993–2004. About two-thirds of the knowledge has come from foreign sources, and 94% of these are English-language materials.
Research Policy is the most frequently cited journal followed by Harvard Business Review, R&D Management and American Economic Review. An analysis of who cites the most highly cited journal is also included. Neo-Schumpeterians in Korea cite more papers from
Research Policy than general researchers, and there is no difference between groups in the year of citation.