Authors: Li Ying Yang, Ting Yue, Jie Lan Ding, and Tao Han
In this study, we try to detect and unveil the differences in disciplinary structure between two kinds of countries with different S&T levels. Bibliometric methods are used to compare
Today science policy makers in many countries worry about a brain drain, i.e., about permanently losing their best scientists
to other countries. However, such a brain drain has proven to be difficult to measure. This article reports a test of bibliometric
methods that could possibly be used to study the brain drain on the micro-level. An investigation of elite mobility must solve
the three methodological problems of delineating a specialty, identifying a specialty's elite and identifying international
mobility and migration. The first two problems were preliminarily solved by combining participant lists from elite conferences
(Gordon conferences) and citation data. Mobility was measured by using the address information of publication databases. The
delineation of specialties has been identified as the crucial problem in studying elite mobility on the micro-level. Policy
concerns of a brain drain were confirmed by measuring the mobility of the biomedical Angiotensin specialty.
This paper gives an overview of the potentials and limitations of bibliometric methods for the assessment of strengths and weaknesses in research performance, and for monitoring scientific developments. We distinguish two different methods. In the first application, research performance assessment, the bibliometric method is based on advanced analysis of publication and citation data. We show that the resulting indicators are very useful, and in fact an indispensable element next to peer review in research evaluation procedures. Indicators based on advanced bibliometric methods offer much more than only numbers. They provide insight into the position of actors at the research front in terms of influence and specializations, as well as into patterns of scientific communication and processes of knowledge dissemination. After a discussion of technical and methodological problems, we present practical examples of the use of research performance indicators. In the second application, monitoring scientific developments, bibliometric methods based on advanced mapping techniques are essential. We discuss these techniques briefly and indicate their most important potentials, particularly their role in foresight exercises. Finally, we give a first outline of how both bibliometric approaches can be combined into a broader and more powerful methodology to observe scientific advancement and the role of actors.
Several bibliometric methods of assessing the research performance of departments are examined: intranational comparison of departments, comparison with foreign departments of good standing, and comparison with a bibliometric world average. In the study, two Dutch experimental psychology departments were compared with one good US and one outstanding UK department. The better of the Dutch departments performed below both foreign departments. However, using the method involving Journal Citation Scores, it was shown that this Dutch department has recently scored above the world average, while the other department consistently scored below it. The best picture is obtained when both methods are combined, which shows that the better Dutch department ranks in the sub-top of the world, while the other department performs below average.
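The world-average comparison described above can be illustrated with a short sketch. It compares a department's citations per paper with the mean citation score of the journals it publishes in, used here as a stand-in for the world average; the function name and all data are invented for illustration and are not taken from the study.

```python
# Hedged sketch: ratio of actual citations to the journal-based "expected"
# citation count. A ratio above 1.0 indicates performance above the world
# average of the department's own journal set. Data below are hypothetical.

def cpp_to_jcs_ratio(papers):
    """papers: list of (citations_received, journal_mean_citation_score)."""
    total_cites = sum(cites for cites, _ in papers)
    total_expected = sum(jcs for _, jcs in papers)
    return total_cites / total_expected

dept = [(12, 8.0), (3, 5.5), (20, 10.0)]  # hypothetical publication records
print(f"CPP/JCS = {cpp_to_jcs_ratio(dept):.2f}")  # > 1 means above journal average
```

Aggregating before dividing (rather than averaging per-paper ratios) keeps highly cited papers in high-impact journals from being washed out by the long tail of uncited papers.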
Authors: Primož Južnič, Stojan Pečlin, Matjaž Žaucer, Tilen Mandelj, Miro Pušnik, and Franci Demšar
The paper discusses the role of scientometric indicators in peer-review selection of research project proposals. An ex post
facto evaluation was made of three calls for research project proposals in Slovenia: 2003 with a peer review system designed
in a way that conflict of interest was not avoided effectively, 2005 with a sound international peer-review system with minimized
conflict of interest influence but a limited number of reviewers, and 2008 with a combination of scientometric indicators
and a sound international peer review with minimized conflict of interest influence. The hypothesis was that the three different
peer review systems would have different correlations with the same set of scientometric indicators. In the last two decision-making
systems (2005 and 2008) where conflict of interest was effectively avoided, we have a high percentage (65%) of projects that
would have been selected in the call irrespective of the method (peer review or bibliometrics solely). In contrast, in the
2003 call there is a significantly smaller percentage (49%) of projects that would have been selected in the call irrespective
of the method (peer review or bibliometrics solely). It was shown that while scientometric indicators can hardly replace the
peer-review system as the ultimate decision-making and support system, they can, on the one hand, reveal its weaknesses and,
on the other, verify peer-review scores and minimize conflict of interest where necessary.
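The overlap percentages reported above (65% vs. 49%) can be sketched as a simple set comparison: the share of top-ranked proposals that would have been funded under either ranking alone. The function and scores below are invented for illustration, not the study's data.

```python
# Hedged sketch: fraction of the top-n peer-review picks that also appear
# in the top-n of a purely bibliometric ranking of the same proposals.

def selection_overlap(peer_scores, biblio_scores, n_funded):
    """peer_scores, biblio_scores: dicts mapping proposal ID -> score."""
    top_peer = set(sorted(peer_scores, key=peer_scores.get, reverse=True)[:n_funded])
    top_biblio = set(sorted(biblio_scores, key=biblio_scores.get, reverse=True)[:n_funded])
    return len(top_peer & top_biblio) / n_funded

peer = {"P1": 9.1, "P2": 8.4, "P3": 7.0, "P4": 6.2}    # hypothetical review scores
biblio = {"P1": 120, "P3": 95, "P2": 40, "P4": 30}     # hypothetical indicator values
print(selection_overlap(peer, biblio, 2))
```

A high overlap, as in the 2005 and 2008 calls, suggests the two rankings largely agree; a low one, as in 2003, flags that something other than merit (e.g., conflict of interest) may be steering the peer review.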
Describes a new method of evaluating the scientific output of laboratories engaged in diverse fields of research. This method helps to evaluate outputs that are quite recent and not yet amenable to citation analysis. For the purpose of analysis, the impact factors of the journals in which papers are published are considered. A method for normalising the impact factor of journals is described, and normalised impact factors are also used in the analysis. It is found that the normalised impact factor tends to give better results than the simple impact factor. The analysis generates numerous performance indicators, such as the average impact factor and normalised impact factor for each laboratory and for the research complex (such as CSIR) as a whole; the average impact factor and normalised impact factor for each scientist of a laboratory and of the research complex; and the spectral distribution of papers falling within various ranges of impact factors and normalised impact factors. By comparing performance over several years, the trend of research activity of each laboratory can also be obtained.
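One common way to normalise impact factors across fields is sketched below. The assumption here, which may differ from the paper's exact procedure, is that a journal's impact factor is divided by the maximum impact factor in its subject field, so that laboratories publishing in low-citation fields are not penalised. All field data are invented.

```python
# Hedged sketch: field-relative impact-factor normalisation (an assumed
# scheme, not necessarily the one used in the paper). Scales each journal's
# IF to [0, 1] against the best journal in its own field.

def normalised_if(journal_if, field_ifs):
    """journal_if: IF of the journal; field_ifs: IFs of all journals in its field."""
    return journal_if / max(field_ifs)

math_field = [0.4, 0.9, 1.2]    # hypothetical IFs of mathematics journals
bio_field = [2.0, 5.0, 10.0]    # hypothetical IFs of biomedical journals

print(normalised_if(1.2, math_field))  # top maths journal -> 1.0
print(normalised_if(5.0, bio_field))   # mid-ranked biomedical journal -> 0.5
```

Under raw impact factors, the biomedical paper (IF 5.0) would dominate the mathematics paper (IF 1.2); after normalisation their field-relative standings can be compared on the same scale.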
This paper builds on previous research concerned with the classification and specialty mapping of research fields. Two methods are put to the test in order to decide whether significant differences in the mapping results for the research front of a science field occur when they are compared. The first method was based on document co-citation analysis, where papers citing co-citation clusters were assumed to reflect the research front. The second method was bibliographic coupling, where the citing papers were likewise assumed to reflect the research front. The application of these methods resulted in two different types of aggregations of papers: (1) groups of papers citing clusters of co-cited works and (2) clusters of bibliographically coupled papers. The comparison of the two methods' mapping results was pursued by matching word profiles of groups of papers citing a particular co-citation cluster with word profiles of clusters of bibliographically coupled papers. Findings suggested that the research front was portrayed in two considerably different ways by the methods applied. It was concluded that the results of this study would support a further comparative study of these methods on a more detailed and qualitative ground. The original data set encompassed 73,379 articles from the fifty most cited environmental science journals listed in the Journal Citation Reports, science edition, downloaded from the Science Citation Index on CD-ROM.
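The core relation behind the second method, bibliographic coupling, can be sketched in a few lines: two citing papers are coupled in proportion to the references they share. The reference lists below are illustrative placeholders, not data from the study.

```python
# Hedged sketch: bibliographic coupling strength as the count of cited
# works two papers have in common (co-citation would instead count how
# often two *cited* works appear together in later reference lists).

def coupling_strength(refs_a, refs_b):
    """refs_a, refs_b: reference lists of two citing papers."""
    return len(set(refs_a) & set(refs_b))

paper_a = ["Smith1990", "Lee1995", "Chen2001"]   # hypothetical references
paper_b = ["Lee1995", "Chen2001", "Diaz1998"]
print(coupling_strength(paper_a, paper_b))  # 2 shared references
```

Clustering papers by this pairwise strength yields the "clusters of bibliographically coupled papers" the abstract contrasts with co-citation-based groups.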
Authors: Claire Creaser, Charles Oppenheim, and Mark A. C. Summers
This paper reports on one aspect of a project which investigated the publication and dissemination behaviour of UK researchers and the effect of research assessment on this behaviour. Using a bibliometric method