Measuring the efficiency of scientific research activity presents critical methodological aspects, many of which have not
been sufficiently studied. Although many studies have assessed the relation of research productivity and quality to academic
rank, little is known about the extent of distortion in national university performance rankings when academic
rank and other labor factors are not considered as normalization factors. This work presents a comparative analysis
that aims to quantify the sensitivity of bibliometric rankings to the choice of input, where input is measured either as the
number of researchers on staff alone or with their cost also taken into account. The field of observation consists of all 69
Italian universities active in the hard sciences. Performance measures are based on the 81,000 publications produced during
the 2004–2006 triennium by all 34,000 research staff, with analysis carried out at the level of individual disciplines, 187
in total. The effect of the switch from labor to cost seems to be minimal except for a few outliers.
In national research assessment exercises that take the peer review approach, research organizations are evaluated on the
basis of a subset of their scientific production. The size of the subset varies from nation to nation, but is typically
set as a proportional function of the number of researchers employed at each research organization. However, scientific fertility
varies from discipline to discipline, meaning that the representativeness of such a subset also varies by discipline.
The rankings resulting from the assessments could be quite sensitive to the size of the share of articles selected for evaluation.
The current work examines this issue, developing empirical evidence of variations in ranking due to changes in the size
of the subset of products evaluated. The field of observation is the hard-sciences scientific production
of the entire Italian university system, from 2001 to 2003.
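The sensitivity test described above can be sketched as follows. The organizations and impact scores are invented: each organization is ranked on the average impact of its top share of publications, and the ranking is recomputed as that share varies.

```python
# Invented illustration of ranking sensitivity to the evaluated share of output.
# Impact scores per publication are sorted in descending order for each org.
orgs = {
    "UnivA": [9, 8, 3, 2, 1, 1],  # a few strong papers, long weak tail
    "UnivB": [6, 6, 5, 5, 4, 4],  # uniformly solid output
}

def ranking(share):
    """Rank organizations (best first) by mean impact of their top `share` of papers."""
    scores = {}
    for name, impacts in orgs.items():
        k = max(1, round(share * len(impacts)))
        scores[name] = sum(impacts[:k]) / k
    return sorted(scores, key=scores.get, reverse=True)
```

With a one-third share "UnivA" ranks first on the strength of its best papers; evaluating the full production reverses the order, which is exactly the ranking instability at issue.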
There is an evident and rapid trend towards the adoption of evaluation exercises for national research systems, for purposes that include improving allocative efficiency in the public funding of individual institutions. However, the desired macroeconomic aims could be compromised if internal redistribution of government resources within each research institution does not follow a consistent logic: the intended effects of national evaluation systems can result only if a “funds for quality” rule is followed at all levels of decision-making. The objective of this study is to propose a bibliometric methodology for: (i) large-scale comparative evaluation of research performance by individual scientists, research groups and departments within research institutions, to inform selective funding allocations; and (ii) assessment of strengths and weaknesses by field of research, to inform strategic planning and control. The proposed methodology has been applied to the hard science disciplines of the Italian university research system for the period 2004–2006.
This paper presents a methodology for measuring the technical efficiency of research activities. It is based on the application
of data envelopment analysis to bibliometric data on the Italian university system. For that purpose, different input values
(research personnel by level and extra funding) and output values (quantity, quality and level of contribution to actual scientific
publications) are considered. Our study aims to overcome some of the limitations of the methodologies proposed so far
in the literature, in particular by surveying the scientific production of universities by individual author name.
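A data envelopment analysis of this kind can be sketched in miniature. The figures below are invented, and the input/output choice (staff and funding as inputs, weighted publications as output) only mirrors the categories named above; this is a standard input-oriented CCR model solved as a linear program, a sketch rather than the paper's exact specification.

```python
# Minimal input-oriented CCR DEA sketch (constant returns to scale).
# All data are invented: rows are decision-making units (e.g. universities),
# X columns are inputs (staff, funding), Y columns are outputs (weighted pubs).
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 300.0],
              [30.0, 200.0],
              [40.0, 500.0]])
Y = np.array([[60.0],
              [50.0],
              [70.0]])

def dea_efficiency(k, X, Y):
    """Efficiency of DMU k: minimize theta s.t. a convex cone of peers
    produces at least DMU k's outputs using at most theta * its inputs.
    A score of 1.0 means DMU k lies on the efficient frontier."""
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    c = np.zeros(n + 1)
    c[0] = 1.0              # decision vars: [theta, lambda_1..lambda_n]
    A_ub, b_ub = [], []
    for i in range(m):      # sum_j lambda_j * x_ji <= theta * x_ki
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):      # sum_j lambda_j * y_jr >= y_kr
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.fun

scores = [dea_efficiency(k, X, Y) for k in range(len(X))]
```

Here the first two toy units are efficient and the third is dominated by a combination of them, so its score falls below 1, indicating the proportional input reduction that would bring it to the frontier.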
The state of the art on the issue of sex differences in research efficiency agrees in recognizing higher performance for
males; however, there are divergences in explaining the possible causes. One of the causes advanced is that there are sex differences
in the availability of aptitude at the “high end”. By comparing sex differences in the concentration and performance of Italian
academic star scientists with those in the complement of the population, this work aims to verify whether star, or “high-end”,
scientists play a preponderant role in determining the higher performance among males. The study reveals the existence of a greater relative
concentration of males among star scientists, as well as a performance gap between male and female star scientists that is
greater than for the rest of the population. In the latter subpopulation, the performance gap between the two sexes proves
truly marginal.
National research assessment exercises are becoming regular events in ever more countries. The present work contrasts the peer-review and bibliometrics approaches in the conduct of these exercises. The comparison is conducted in terms of the essential parameters of any measurement system: accuracy, robustness, validity, functionality, time and costs. Empirical evidence shows that for the natural and formal sciences, the bibliometric methodology is by far preferable to peer-review. Setting up national databases of publications by individual authors, derived from Web of Science or Scopus databases, would allow much better, cheaper and more frequent national research assessments.
It is widely recognized that collaboration between the public and private research sectors should be stimulated and supported,
as a means of favoring innovation and regional development. This work takes a bibliometric approach, based on co-authorship
of scientific publications, to propose a model for comparative measurement of the performance of public research institutions
in collaboration with domestic industry. The model relies on an identification and
disambiguation algorithm developed by the authors to link each publication to its real authors. An example of application
of the model is given, for the case of the academic system and private enterprises in Italy. The study demonstrates that for
each scientific discipline and each national administrative region, it is possible to measure the performance of individual
universities in both intra-regional and extra-regional collaboration, normalized with respect to advantages of location. Such
results may be useful in informing regional policies and merit-based public funding of research organizations.
The literature dedicated to the analysis of the difference in research productivity between the sexes tends to agree in indicating
better performance for men. Through bibliometric examination of the entire population of research personnel working in the
scientific-technological disciplines of the Italian university system, this study confirms the presence of significant differences
in productivity between men and women. The differences are, however, smaller than reported in much of the literature,
confirming an ongoing tendency towards decline, and are more noticeable for quantitative performance indicators
than other indicators. The gap between the sexes shows significant sectorial differences. In spite of the generally better
performance of men, there are scientific sectors in which the performance of women does not prove to be inferior.
The study presents a time-series analysis of the field-standardized average impact of Italian research compared to the world average. The approach is purely bibliometric, based on a census of the full scientific production of all Italian public research organizations active in 2001–2006 (hard sciences only). The analysis is conducted both at the sectorial level (aggregated, by scientific discipline, and for single fields within disciplines) and at the organizational level (by type of organization and for single organizations). The essence of the methodology should be replicable in all other national contexts. It offers support to policy-makers and administrators for strategic analysis aimed at identifying strengths and weaknesses of national research systems and institutions.
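The field-standardization step at the heart of such an analysis can be illustrated with a minimal sketch; the citation counts and world baselines below are invented, and the helper name is hypothetical.

```python
# Hypothetical sketch of field-standardized impact: each publication's citation
# count is divided by the world average for its field (and, in practice, its
# publication year), and the ratios are then averaged. All figures are invented.
publications = [
    {"field": "Physics",   "citations": 12, "world_avg": 8.0},
    {"field": "Chemistry", "citations": 5,  "world_avg": 10.0},
    {"field": "Physics",   "citations": 4,  "world_avg": 8.0},
]

def standardized_impact(pubs):
    """Mean of field-normalized citation scores (1.0 = world average)."""
    return sum(p["citations"] / p["world_avg"] for p in pubs) / len(pubs)

score = standardized_impact(publications)
```

Dividing by the field baseline before averaging is what makes impact comparable across disciplines with very different citation densities; a score below 1.0, as in this toy set, indicates below-world-average impact.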
National research evaluation exercises provide a comparative measure of the research performance of a nation's institutions, and as such represent a tool for stimulating research productivity, particularly if the results are used to inform selective funding by government. While one school of thought welcomes frequent changes in evaluation criteria, in order to prevent the subjects evaluated from adopting opportunistic behaviors, it is evident that the “rules of the game” should above all be functional towards policy objectives, and therefore be known with adequate forewarning prior to the evaluation period. Otherwise, the risk is that policy-makers will find themselves faced with a dilemma: should they reward the universities that responded best to the criteria in effect at the outset of the observation period, or those that rank best according to rules that emerged during or after the observation period? This study verifies whether and to what extent some universities are penalized, instead of rewarded, for good behavior in pursuit of the objectives of the “known” rules of the game, by comparing the research performance of Italian universities for the period of the nation's next evaluation exercise (2004–2008): first as measured according to criteria available at the outset of the period, and then according to those announced at the end of the period.