Search Results

You are looking at 1 - 5 of 5 items for

  • Author or Editor: Flavia Di Costa

Abstract

This work analyses the links between individual research performance and academic rank. A typical bibliometric methodology is used to study the performance of all Italian university researchers active in the hard sciences for the period 2004–2008. The objective is to characterize the performance of full, associate and assistant professors along various dimensions, in order to verify whether performance differences exist among the ranks in general and within single disciplines.

Restricted access

Abstract

National research evaluation exercises provide a comparative measure of the research performance of a nation's institutions, and as such represent a tool for stimulating research productivity, particularly if the results are used to inform selective funding by government. While one school of thought welcomes frequent changes in evaluation criteria in order to prevent the subjects evaluated from adopting opportunistic behaviors, it is evident that the “rules of the game” should above all serve policy objectives, and therefore be known sufficiently in advance of the evaluation period. Otherwise, the risk is that policy-makers will find themselves facing a dilemma: should they reward the universities that responded best to the criteria in effect at the outset of the observation period, or those that rank best according to rules that emerged during or after it? This study verifies whether, and to what extent, some universities are penalized rather than rewarded for good behavior, i.e. for pursuing the objectives of the “known” rules of the game, by comparing the research performance of Italian universities over the period covered by the nation's next evaluation exercise (2004–2008): first as measured according to criteria available at the outset of the period, and then according to those announced at its end.

Restricted access

Abstract

In recent years bibliometricians have paid increasing attention to methodological problems in research evaluation, among them the choice of the most appropriate indicators for evaluating the quality of scientific publications, and thus the work of individual scientists, research groups and entire organizations. Much of the literature has been devoted to analyzing the robustness of various indicators, and many works warn against the risks of using easily available and relatively simple proxies, such as the journal impact factor. The present work continues this line of research, examining whether use of the impact factor should always be avoided in favour of citations, or whether it could be acceptable, or even preferable, in certain circumstances. The evaluation was conducted by observing all scientific publications in the hard sciences by Italian universities for the period 2004–2007. Sensitivity analyses of performance were conducted while varying the quality indicator and the years of observation.

Restricted access

Abstract

Research policies in the more developed nations are increasingly oriented towards the introduction of productivity incentives and competition mechanisms intended to increase efficiency in research institutions. Assessments of the effects of these policy interventions on public research activity often neglect the normal, inherent variation in the performance of research institutions over time. In this work, we propose a cross-time bibliometric analysis of the research performance of all Italian universities in two consecutive periods (2001–2003 and 2004–2008) not affected by national policy interventions. Findings show that productivity and impact increased at the level of individual scientists, while at the level of universities significant variation in rank was observed.

Restricted access

Abstract

The development of bibliometric techniques has reached such a level as to suggest their integration with, or even total substitution for, classic peer review in national research assessment exercises, at least as far as the hard sciences are concerned. In this work we compare the university rankings produced by the first Italian evaluation exercise, based on peer review, with the results of bibliometric simulations. The comparison shows great differences between the peer-review and bibliometric rankings for both excellence and productivity.

Restricted access