We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. Its functionality is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, we provide an overview of VOSviewer’s functionality for displaying bibliometric maps. In the second part, we discuss the technical implementation of specific parts of the program. Finally, in the third part, we demonstrate VOSviewer’s ability to handle large maps by using the program to construct and display a co-citation map of 5,000 major scientific journals.
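To make the notion of a journal co-citation map concrete, the following minimal Python sketch counts journal co-citations from per-publication reference lists. The input format, function name, and journal names are hypothetical illustrations, not a description of VOSviewer’s actual data processing.

    from itertools import combinations
    from collections import Counter

    def journal_cocitations(reference_lists):
        """Count how often each pair of journals is cited together in
        the reference list of the same citing publication."""
        counts = Counter()
        for refs in reference_lists:
            # sorted(set(...)) removes duplicate references to the same
            # journal and gives each pair a canonical order.
            for pair in combinations(sorted(set(refs)), 2):
                counts[pair] += 1
        return counts

    # Toy usage: three citing publications and the journals they cite.
    refs = [["JASIST", "Scientometrics"],
            ["J. Informetrics", "JASIST", "Scientometrics"],
            ["J. Informetrics", "Scientometrics"]]
    print(journal_cocitations(refs).most_common(3))

The resulting co-citation counts form a symmetric similarity matrix, which is the typical input for laying out a map of journals.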
Two commonly used ideas in the development of citation-based research performance indicators are the idea of normalizing citation counts based on a field classification scheme and the idea of recursive citation weighting (as in PageRank-inspired indicators). We combine these two ideas in a single indicator, referred to as the recursive mean normalized citation score indicator, and we study the validity of this indicator. Our empirical analysis shows that the proposed indicator is highly sensitive to the field classification scheme that is used. The indicator also has a strong tendency to reinforce biases caused by the classification scheme. Based on these observations, we advise against the use of indicators that combine normalization based on a field classification scheme with recursive citation weighting.
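As a rough illustration of how the two ideas can be combined (one common formalization, not necessarily the exact definitions used in the paper): the mean normalized citation score of a set of $n$ publications is

    \[
    \mathrm{MNCS} = \frac{1}{n} \sum_{i=1}^{n} \frac{c_i}{e_i},
    \]

where $c_i$ is the number of citations of publication $i$ and $e_i$ is the expected number of citations for publications in the same field and publication year. A recursive, PageRank-like variant replaces the raw count $c_i$ by a weighted count in which each citation is weighted by the normalized score of the citing publication,

    \[
    c_i^{(k+1)} = \sum_{j \,:\, j \text{ cites } i} \frac{c_j^{(k)} / e_j}{r_j},
    \]

with $r_j$ the number of references of publication $j$, iterated until the scores converge. Because $e_i$ depends entirely on the field classification scheme, any bias in the scheme enters every ratio and is then propagated, and potentially amplified, through the recursion.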
A term map visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subjective and labor-intensive. To overcome these disadvantages, we propose a methodology for automatic term identification, and we use this methodology to select the terms to be included in a term map. To evaluate the proposed methodology, we use it to construct a term map of the field of operations research. The quality of the map is assessed by a number of operations research experts. Their assessments indicate that, in general, the proposed methodology performs quite well.
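As a rough illustration of what automatic term selection can look like, the sketch below ranks candidate phrases by document frequency and keeps the most frequent ones. This simple heuristic is a hypothetical stand-in; the methodology proposed in the paper is more sophisticated than plain frequency counting.

    from collections import Counter

    def select_terms(documents, candidate_phrases, top_k=10):
        """Rank candidate phrases by the number of documents in which
        they occur and return the top_k most frequent ones."""
        doc_freq = Counter()
        for doc in documents:
            text = doc.lower()
            for phrase in candidate_phrases:
                if phrase in text:
                    doc_freq[phrase] += 1
        return [phrase for phrase, _ in doc_freq.most_common(top_k)]

    # Toy usage on three abstract-like snippets.
    docs = ["Linear programming and integer programming models ...",
            "A heuristic for the traveling salesman problem ...",
            "Integer programming formulations of routing problems ..."]
    print(select_terms(docs, ["integer programming",
                              "traveling salesman problem"]))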
We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
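The distinction between two such normalization mechanisms is often summarized as a ratio of averages versus an average of ratios. Assuming the indicators discussed are of the CPP/FCSm and MNCS type, the contrast can be sketched as

    \[
    \mathrm{CPP/FCSm} = \frac{\sum_{i=1}^{n} c_i}{\sum_{i=1}^{n} e_i}
    \qquad \text{versus} \qquad
    \mathrm{MNCS} = \frac{1}{n} \sum_{i=1}^{n} \frac{c_i}{e_i},
    \]

with $c_i$ the citation count of publication $i$ and $e_i$ its field- and year-dependent expected citation count. The average-of-ratios form also makes clear why recent publications need special care: for these publications $e_i$ is close to zero, so a single ratio $c_i / e_i$ can dominate the average.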
Opthof and Leydesdorff (Scientometrics, 2011) reanalyze data reported by Van Raan (Scientometrics 67(3):491–502, 2006) and conclude that there is no significant correlation between average citation scores measured using the CPP/FCSm indicator on the one hand and the quality judgment of peers on the other. We point out that Opthof and Leydesdorff draw their conclusions from a very limited amount of data. We also criticize the statistical methodology they use. Using a larger amount of data and a more appropriate statistical methodology, we do find a significant correlation between the CPP/FCSm indicator and peer judgment.