This paper gives an overview of the potentials and limitations of bibliometric methods for the assessment of strengths and weaknesses in research performance, and for monitoring scientific developments. We distinguish two different methods. In the first application, research performance assessment, the bibliometric method is based on advanced analysis of publication and citation data. We show that the resulting indicators are very useful, and in fact an indispensable element next to peer review in research evaluation procedures. Indicators based on advanced bibliometric methods offer much more than only numbers. They provide insight into the position of actors at the research front in terms of influence and specializations, as well as into patterns of scientific communication and processes of knowledge dissemination. After a discussion of technical and methodological problems, we present practical examples of the use of research performance indicators. In the second application, monitoring scientific developments, bibliometric methods based on advanced mapping techniques are essential. We discuss these techniques briefly and indicate their most important potentials, particularly their role in foresight exercises. Finally, we give a first outline of how both bibliometric approaches can be combined into a broader and more powerful methodology to observe scientific advancement and the role of actors.
In this presentation we argue that the core research activities of scientometrics fall in four interrelated areas: science
and technology indicators, information systems on science and technology, the interaction between science and technology,
and cognitive as well as socio-organisational structures in science and technology.
We emphasize that an essential condition for the healthy development of the field is a careful balance between application
and basic work, in which the applied side is the driving force. In other words: scientometrics is primarily a field of applied
science. This means that the interaction with users is at least as important as the interaction with colleague scientists. We
state that this situation is very stimulating: it strengthens methodology and it activates basic work. We consider the idea
that scientometrics lacks theoretical content, or is otherwise in a ‘crisis-like’ situation, to be groundless.
Scientometrics is in a typical developmental stage in which the creativity of its individual researchers and the ‘climate’
and facilities of their institutional environments determine the progress in the field and, particularly, its relation with
other disciplines. These aspects also contribute substantially to the reputation of scientometrics as a research field respected
by the broader scientific community. This latter point is important, both to let quantitative studies of science and technology
take more advantage of an academic environment and to keep the field innovative, and thus attractive in terms of applications,
in the longer term.
In this paper we discuss geometrical properties of information space as represented by the phenomenon of co-citation clustering. More specifically, the size distribution of co-citation clusters is studied and interpreted in terms of fractal dimensions.
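A common route to a fractal-dimension reading of a cluster-size distribution is to fit the exponent of an assumed power law. The sketch below illustrates this on synthetic data under a continuous Pareto assumption; the data, the exponent value, and the maximum-likelihood estimator used here are illustrative assumptions, not the paper's actual method or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "cluster sizes" drawn from a continuous Pareto law
# P(s > x) = x^(-(alpha - 1)) with minimum size 1 (illustrative only).
alpha_true = 2.0
u = rng.random(5000)
sizes = (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood (Hill) estimate of the power-law exponent
# for a continuous Pareto distribution with s_min = 1:
#   alpha_hat = 1 + n / sum(ln s_i)
alpha_hat = 1.0 + len(sizes) / np.sum(np.log(sizes))
print(alpha_hat)
```

With 5000 samples the estimate should land close to the true exponent; in a fractal interpretation, such an exponent is the quantity one would relate to a dimension of the information space.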
There is an ongoing discussion on the influence of international collaboration on impact as measured by citation-based indicators.
Collaborative work generally involves more authors than ‘no collaboration’ work, so the phenomenon of self-citation
will be stronger (there are more authors who can cite themselves). Thus collaboration can be seen as an important ‘amplifier’ of measured
impact. Although this effect is certainly possible and has recently been demonstrated, it should not be considered the only,
or even the major, explanation of the higher impact found when comparing ‘no collaboration’ with international collaboration. Using
data from an extensive bibliometric study of astronomical research in the Netherlands, we show that higher rates of self-citation
in international collaboration do not play any significant role as ‘impact amplifier’. The central point is that proper impact
measurement must involve corrections for self-citations.
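The self-citation correction described above can be sketched as follows, assuming a toy data model in which each paper is a record of an identifier, an author list, and the identifiers it cites. The record layout, field names, and the "shared author" definition of a self-citation are assumptions for illustration, not the study's actual data or protocol.

```python
def corrected_citation_counts(papers):
    """Count citations received by each paper, excluding author
    self-citations (citing and cited paper share >= 1 author)."""
    authors = {p["id"]: set(p["authors"]) for p in papers}
    counts = {p["id"]: 0 for p in papers}
    for citing in papers:
        for cited_id in citing["cites"]:
            if cited_id not in authors:
                continue  # citation to a paper outside the data set
            if set(citing["authors"]) & authors[cited_id]:
                continue  # self-citation (shared author): excluded
            counts[cited_id] += 1
    return counts

papers = [
    {"id": "A", "authors": ["Smith", "Li"], "cites": []},
    {"id": "B", "authors": ["Li"], "cites": ["A"]},          # self-cites A via Li
    {"id": "C", "authors": ["Garcia"], "cites": ["A", "B"]},  # external citations
]
print(corrected_citation_counts(papers))  # {'A': 1, 'B': 1, 'C': 0}
```

Paper B's citation of A is discarded because author Li appears on both, so A's corrected count comes only from the external citer C; this is the sense in which self-citations can no longer act as an ‘impact amplifier’.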
In this paper we take a position in the ‘citation theory’ debate. First we revisit relevant earlier work of our group and try
to assemble the findings. We criticise the constructivist fashion in sociology of science concerning citation practices. With
statistical arguments we show the strong limitations of any ‘citation theory’ at the ‘citer side’. We emphasize that citations
should be conceived of as ‘binding properties’ of an individual publication, from which many types of structuring follow.
As keywords have such binding properties as well, and as there are empirically established relations between
the citation domain and the word domain, it is useless to develop a model concerning citations only. We envisage an interesting
development, both theoretically and empirically, of what we would like to call ‘bibliometric chemistry’.