Search Results

You are looking at 11 - 18 of 18 items for

  • Author or Editor: Giovanni Abramo

Abstract

National research evaluation exercises provide a comparative measure of the research performance of a nation's institutions, and as such represent a tool for stimulating research productivity, particularly if the results inform selective funding by government. While one school of thought welcomes frequent changes in evaluation criteria in order to prevent the subjects evaluated from adopting opportunistic behaviors, it is evident that the “rules of the game” should above all serve policy objectives, and should therefore be known sufficiently in advance of the evaluation period. Otherwise, policy-makers risk facing a dilemma: should they reward the universities that responded best to the criteria in effect at the outset of the observation period, or those that rank best under rules that emerged during or after it? This study examines whether, and to what extent, some universities are penalized rather than rewarded for good behavior in pursuit of the objectives of the “known” rules of the game, by comparing the research performance of Italian universities over the period of the nation's next evaluation exercise (2004–2008): first as measured by the criteria available at the outset of the period, and then by those announced at its end.

Restricted access

Abstract

The study presents a time-series analysis of the field-standardized average impact of Italian research compared to the world average. The approach is purely bibliometric, based on a census of the full scientific production of all Italian public research organizations active in 2001–2006 (hard sciences only). The analysis is conducted both at the sectoral level (aggregated, by scientific discipline, and for single fields within disciplines) and at the organizational level (by type of organization and for single organizations). The essence of the methodology should be replicable in all other national contexts. It offers support to policy-makers and administrators in strategic analyses aimed at identifying the strengths and weaknesses of national research systems and institutions.
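The field standardization described above can be sketched in a few lines: each publication's citation count is divided by the world average for its field and year, and the normalized scores are then averaged per organization. All data below (organizations, fields, averages) are purely hypothetical placeholders, not values from the study.

```python
# Sketch of field-standardized average impact, on hypothetical data:
# normalize each publication's citations by the world average for its
# (field, year), then average the normalized scores per organization.
from collections import defaultdict

# Hypothetical world average citations per (field, year)
world_avg = {("chemistry", 2003): 8.0, ("physics", 2003): 10.0}

# Hypothetical publications: (organization, field, year, citations)
pubs = [
    ("Org A", "chemistry", 2003, 12),
    ("Org A", "physics", 2003, 5),
    ("Org B", "chemistry", 2003, 4),
]

scores = defaultdict(list)
for org, field, year, cites in pubs:
    scores[org].append(cites / world_avg[(field, year)])

avg_impact = {org: sum(v) / len(v) for org, v in scores.items()}
# Org A: (12/8 + 5/10) / 2 = 1.0 (exactly world average)
# Org B: 4/8 = 0.5 (half the world average)
```

A score above 1 indicates impact above the world average for the organization's field mix, which is what makes cross-field comparisons meaningful.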

Restricted access

Abstract

In recent years bibliometricians have paid increasing attention to methodological problems in research evaluation, among them the choice of the most appropriate indicators for evaluating the quality of scientific publications, and thus the work of single scientists, research groups and entire organizations. Much literature has been devoted to analyzing the robustness of various indicators, and many works warn against the risks of using easily available and relatively simple proxies, such as the journal impact factor. The present work continues this line of research, examining whether the impact factor should indeed always be avoided in favour of citation counts, or whether its use could be acceptable, or even preferable, in certain circumstances. The evaluation was conducted by observing all scientific publications in the hard sciences by Italian universities for the period 2004–2007. Sensitivity analyses of performance were conducted while varying the quality indicator and the years of observation.

Restricted access

Abstract

Policy makers, at various levels of governance, generally encourage the development of research collaboration. However, the underlying determinants of collaboration are not completely clear. In particular, the literature lacks studies that, taking the individual researcher as the unit of analysis, attempt to understand whether and to what extent a researcher's scientific performance affects his or her degree of collaboration with foreign colleagues. The current work examines the international collaborations of Italian university researchers for the period 2001–2005 and relates them to each individual's research performance. The results of the investigation, which assumes co-authorship as a proxy of research collaboration, show that both research productivity and the average quality of output have positive effects on the degree of international collaboration achieved by a scientist.
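The co-authorship proxy can be illustrated with a minimal sketch: a paper counts as an international collaboration for a researcher when at least one co-author holds a foreign affiliation. All names, affiliations, and the data layout here are hypothetical, chosen only to make the counting rule concrete.

```python
# Hypothetical records: each paper lists its authors and the set of
# country codes of the authors' affiliations.
papers = [
    {"authors": ["rossi"], "countries": {"IT"}},
    {"authors": ["rossi", "smith"], "countries": {"IT", "UK"}},
    {"authors": ["rossi", "bianchi"], "countries": {"IT"}},
]

def international_share(researcher, papers, home="IT"):
    """Fraction of a researcher's papers with at least one foreign co-author."""
    mine = [p for p in papers if researcher in p["authors"]]
    intl = [p for p in mine if p["countries"] - {home}]
    return len(intl) / len(mine) if mine else 0.0

# rossi: 1 of 3 papers has a foreign co-author -> share of 1/3
```

This share can then be regressed against productivity or impact indicators, which is the kind of relation the study investigates.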

Restricted access
Authors: Giovanni Abramo, Ciriaco Andrea D'Angelo and Flavia Di Costa

Abstract

Research policies in the more developed nations are ever more oriented towards the introduction of productivity incentives and competition mechanisms intended to increase efficiency in research institutions. Assessments of the effects of these policy interventions on public research activity often neglect the normal, inherent variation in the performance of research institutions over time. In this work, we propose a cross-time bibliometric analysis of the research performance of all Italian universities in two consecutive periods (2001–2003 and 2004–2008) not affected by national policy interventions. Findings show that productivity and impact increased at the level of individual scientists, while at the university level significant variation in rank was observed.

Restricted access

Abstract

An increasing number of nations allocate public funds to research institutions on the basis of rankings obtained from national evaluation exercises. Therefore, in non-competitive higher education systems where top scientists are dispersed among all the universities, rather than concentrated among a few, there is a high risk of penalizing those top scientists who work in lower-performance universities. Using a 5 year bibliometric analysis conducted on all Italian universities active in the hard sciences from 2004 to 2008, this work analyzes the distribution of publications and relevant citations by scientists within the universities, measures the research performance of individual scientists, quantifies the intensity of concentration of top scientists at each university, provides performance rankings for the universities, and indicates the effects of selective funding on the top scientists of low-ranked universities.

Restricted access

Abstract

The development of bibliometric techniques has reached such a level as to suggest their integration with, or total substitution for, classic peer review in national research assessment exercises, as far as the hard sciences are concerned. In this work we compare the university rankings produced by the first Italian evaluation exercise, based on peer review, with the results of bibliometric simulations. The comparison shows great differences between the peer-review and bibliometric rankings for both excellence and productivity.

Restricted access

Abstract

National research assessment exercises are conducted in different nations over varying periods. The choice of the publication period to be observed must address often contrasting needs: it must ensure the reliability of the results of the evaluation, but also allow frequent assessments. In this work we attempt to identify the most appropriate, or optimal, publication period to observe. To do so, we analyze how individual researchers' productivity rankings vary with the length of the publication period within 2003–2008, for the over 30,000 Italian university scientists in the hard sciences. First we analyze the variation in rankings for pairs of contiguous and overlapping publication periods, and show that the variations reduce markedly for periods longer than 3 years. We then show the strong randomness of performance rankings over publication periods of under 3 years. We conclude that a 3-year publication period seems reliable, particularly for physics, chemistry, biology and medicine.
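The stability of rankings across publication windows can be gauged with a rank correlation. The sketch below uses Spearman's formula (valid in the absence of ties) on synthetic productivity figures; the window data are invented for illustration, not taken from the study.

```python
# Compare researchers' productivity rankings over two contiguous
# publication windows via Spearman rank correlation, dependency-free.
def ranks(values):
    """Rank positions (1 = highest value); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho via the classical d^2 formula (no ties)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Synthetic productivity scores in two 3-year windows
window_a = [12.0, 8.5, 5.0, 3.2]   # e.g. 2003-2005
window_b = [11.0, 9.0, 4.5, 3.8]   # e.g. 2006-2008
rho = spearman(window_a, window_b)  # identical ordering -> rho = 1.0
```

High correlation between contiguous windows signals stable rankings; the study's finding is that such stability emerges only for windows of 3 years or more.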

Restricted access