Search Results

Showing 1–10 of 11 items for Author or Editor: Ciriaco Andrea D’Angelo

Abstract

There is an evident and rapid trend towards the adoption of evaluation exercises for national research systems for purposes, among others, of improving allocative efficiency in public funding of individual institutions. However, the desired macroeconomic aims could be compromised if internal redistribution of government resources within each research institution does not follow a consistent logic: the intended effects of national evaluation systems can result only if a “funds for quality” rule is followed at all levels of decision-making. The objective of this study is to propose a bibliometric methodology for: (i) large-scale comparative evaluation of research performance by individual scientists, research groups and departments within a research institution, to inform selective funding allocations; and (ii) assessment of strengths and weaknesses by field of research, to inform strategic planning and control. The proposed methodology has been applied to the hard science disciplines of the Italian university research system for the period 2004–2006.
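The abstract does not give the indicator's exact formula, but a minimal sketch of one plausible field-normalized productivity score per scientist, assuming (hypothetically) fractional counting of co-authored papers and citations rescaled by the field's world average, might look like this:

```python
def productivity_score(papers, field_avg_citations):
    """Illustrative field-normalized productivity score (an assumption,
    not necessarily the study's exact indicator).

    papers: list of (field, citations, n_authors) tuples for one scientist.
    field_avg_citations: {field: world-average citations per paper}.
    """
    score = 0.0
    for field, citations, n_authors in papers:
        # Rescale citations by the field's world average, then divide by
        # the number of co-authors (fractional counting).
        normalized = citations / field_avg_citations[field]
        score += normalized / n_authors
    return score

papers = [("chemistry", 12, 3), ("chemistry", 4, 2)]
avg = {"chemistry": 8.0}
print(productivity_score(papers, avg))  # (12/8)/3 + (4/8)/2 = 0.75
```

Such per-scientist scores can then be aggregated to the group, department, or institution level for comparative rankings.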

Restricted access

Abstract

National research assessment exercises are becoming regular events in ever more countries. The present work contrasts the peer-review and bibliometric approaches in the conduct of these exercises. The comparison is conducted in terms of the essential parameters of any measurement system: accuracy, robustness, validity, functionality, time and costs. Empirical evidence shows that for the natural and formal sciences, the bibliometric methodology is far preferable to peer review. Setting up national databases of publications by individual authors, derived from the Web of Science or Scopus databases, would allow much better, cheaper and more frequent national research assessments.


Abstract

National research evaluation exercises provide a comparative measure of research performance of the nation's institutions, and as such represent a tool for stimulating research productivity, particularly if the results are used to inform selective funding by government. While a school of thought welcomes frequent changes in evaluation criteria in order to prevent the subjects evaluated from adopting opportunistic behaviors, it is evident that the “rules of the game” should above all be functional towards policy objectives, and therefore be known with adequate forewarning prior to the evaluation period. Otherwise, the risk is that policy-makers will find themselves faced by a dilemma: should they reward universities that responded best to the criteria in effect at the outset of the observation period, or those that rank best according to rules that emerged during or after the observation period? This study verifies if and to what extent some universities are penalized instead of rewarded for good behavior, in pursuit of the objectives of the “known” rules of the game, by comparing the research performances of Italian universities for the period of the nation's next evaluation exercise (2004–2008): first as measured according to criteria available at the outset of the period, and then according to those announced at the end of the period.


Abstract

The study presents a time-series analysis of the field-standardized average impact of Italian research compared to the world average. The approach is purely bibliometric, based on a census of the full scientific production from all Italian public research organizations active in 2001–2006 (hard sciences only). The analysis is conducted both at the sectorial level (aggregated, by scientific discipline and for single fields within disciplines) and at the organizational level (by type of organization and for single organizations). The essence of the methodology should be replicable in all other national contexts. It offers support to policy-makers and administrators for strategic analysis aimed at identifying strengths and weaknesses of national research systems and institutions.
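The core of a field-standardized impact time series can be sketched as follows; the per-year, per-field world averages and the input format are illustrative assumptions, not the study's actual data model:

```python
from collections import defaultdict

def standardized_impact_series(papers, world_avg):
    """Illustrative sketch: mean field-standardized citation ratio per year.

    papers: iterable of (year, field, citations) for the national output.
    world_avg: {(year, field): world-average citations per paper}.
    A value of 1.0 means impact exactly at the world average.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for year, field, citations in papers:
        # Each paper's citations are rescaled by the world average
        # for its own field and publication year.
        totals[year] += citations / world_avg[(year, field)]
        counts[year] += 1
    return {y: totals[y] / counts[y] for y in totals}

series = standardized_impact_series(
    [(2001, "physics", 10), (2001, "physics", 6)],
    {(2001, "physics"): 8.0},
)
print(series)  # {2001: 1.0}
```

Plotting such a series against the constant world baseline of 1.0 gives the cross-time comparison the abstract describes.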


Abstract

Policy makers, at various levels of governance, generally encourage the development of research collaboration. However, the underlying determinants of collaboration are not completely clear. In particular, the literature lacks studies that, taking the individual researcher as the unit of analysis, attempt to understand if and to what extent the researcher's scientific performance might impact his/her degree of collaboration with foreign colleagues. The current work examines the international collaborations of Italian university researchers for the period 2001–2005, and puts them in relation to each individual's research performance. The results of the investigation, which assumes co-authorship as a proxy of research collaboration, show that both research productivity and average quality of output have positive effects on the degree of international collaboration achieved by a scientist.
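With co-authorship as the proxy, a researcher's degree of international collaboration can be operationalized as the share of his/her papers with at least one foreign-affiliated co-author. A minimal sketch, assuming a simplified input of affiliation country codes per paper (the study's actual operationalization may differ):

```python
def international_collaboration_rate(papers, home_country="IT"):
    """Illustrative proxy: fraction of a researcher's papers with at least
    one co-author affiliated outside the home country.

    papers: list of per-paper lists of author-affiliation country codes.
    """
    if not papers:
        return 0.0
    intl = sum(
        1 for countries in papers
        if any(c != home_country for c in countries)
    )
    return intl / len(papers)

papers = [["IT", "IT"], ["IT", "US"], ["IT", "FR", "IT"]]
print(international_collaboration_rate(papers))  # 2/3 ≈ 0.667
```

This rate can then be regressed on the same researcher's productivity and average output quality to test the relationship the abstract reports.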


Abstract

Development of bibliometric techniques has reached such a level as to suggest integrating them with, or entirely substituting them for, classic peer review in national research assessment exercises, as far as the hard sciences are concerned. In this work we compare the ranking lists of universities produced by the first Italian evaluation exercise, conducted through peer review, with the results of bibliometric simulations. The comparison shows the great differences between peer-review and bibliometric rankings for both excellence and productivity.
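One common way to quantify how much two university rankings differ is a rank correlation such as Spearman's; this is a standard measure, not necessarily the one used in the study. A minimal sketch, assuming tie-free rankings given as name-to-rank dicts:

```python
def spearman(rank_a, rank_b):
    """Spearman rank correlation between two rankings of the same
    universities, given as {name: rank} dicts; assumes no ties.
    Returns 1.0 for identical rankings, -1.0 for fully reversed ones.
    """
    names = sorted(rank_a)
    n = len(names)
    # Classic formula: rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
    d2 = sum((rank_a[u] - rank_b[u]) ** 2 for u in names)
    return 1 - 6 * d2 / (n * (n**2 - 1))

peer = {"Univ A": 1, "Univ B": 2, "Univ C": 3}
biblio = {"Univ A": 3, "Univ B": 2, "Univ C": 1}
print(spearman(peer, biblio))  # -1.0 (fully reversed)
```

A low correlation between the peer-review list and the bibliometric simulation would signal exactly the kind of divergence the abstract reports.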


Abstract

Research policies in the more developed nations are ever more oriented towards the introduction of productivity incentives and competition mechanisms intended to increase efficiency in research institutions. Assessments of the effects of these policy interventions on public research activity often neglect the normal, inherent variation in the performance of research institutions over time. In this work, we propose a cross-time bibliometric analysis of research performance by all Italian universities in two consecutive periods (2001–2003 and 2004–2008) not affected by national policy interventions. Findings show that productivity and impact increased at the level of individual scientists. At the level of individual universities, significant variation in rank was observed.


Abstract

The present study proposes a bibliometric methodology for measuring the degree of correspondence between a regional industry's demand for research collaboration and the supply from public laboratories. The methodology also permits measurement of the intensity and direction of the regional flows of knowledge in public–private collaborations. The aim is to provide a diagnostic instrument for regional and national policy makers, complementing existing instruments, for planning interventions to re-balance the sectorial public supply of knowledge with industrial absorptive capacity and to maximize the appropriability of knowledge spillovers. The methodology is applied to university–industry collaborations in the hard sciences in all Italian administrative regions.


Abstract

An increasing number of nations allocate public funds to research institutions on the basis of rankings obtained from national evaluation exercises. Therefore, in non-competitive higher education systems where top scientists are dispersed among all the universities, rather than concentrated in a few, there is a high risk of penalizing those top scientists who work in lower-performance universities. Using a five-year bibliometric analysis conducted on all Italian universities active in the hard sciences from 2004 to 2008, this work analyzes the distribution of publications and relevant citations by scientists within the universities, measures the research performance of individual scientists, quantifies the intensity of concentration of top scientists at each university, provides performance rankings for the universities, and indicates the effects of selective funding on the top scientists of low-ranked universities.
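The concentration of top scientists at each university can be quantified, for example, as the share of its staff falling in the national top decile of individual performance; this operationalization and the input format are illustrative assumptions:

```python
def top_scientist_concentration(scores_by_univ, top_fraction=0.10):
    """Illustrative sketch: per university, the share of its scientists
    whose performance score falls in the national top decile.

    scores_by_univ: {university: [individual performance scores]}.
    """
    # Pool all scientists nationally and find the top-decile threshold.
    all_scores = sorted(
        (s for v in scores_by_univ.values() for s in v), reverse=True
    )
    cutoff_index = max(1, int(len(all_scores) * top_fraction))
    threshold = all_scores[cutoff_index - 1]
    return {
        u: sum(1 for s in v if s >= threshold) / len(v)
        for u, v in scores_by_univ.items()
    }

data = {"Univ A": [10, 9, 1, 1, 1], "Univ B": [2, 2, 2, 2, 2]}
print(top_scientist_concentration(data))  # {'Univ A': 0.2, 'Univ B': 0.0}
```

If such shares are similar across universities while funding follows institutional rank, top scientists at low-ranked universities are penalized, which is the effect the study examines.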


Abstract

National research assessment exercises are conducted in different nations over varying periods. The choice of the publication period to be observed must balance often contrasting needs: it has to ensure the reliability of the results issuing from the evaluation, but also permit frequent assessments. In this work we attempt to identify the most appropriate, or optimal, length of the publication period to be observed. For this, we analyze how individual researchers' productivity rankings vary with the length of the publication period within the period 2003–2008, for the over 30,000 Italian university scientists in the hard sciences. First we analyze the variation in rankings referring to pairs of contiguous and overlapping publication periods, and show that the variations reduce markedly with periods above 3 years. We then show the strong randomness of performance rankings over publication periods under 3 years. We conclude that the choice of a 3 year publication period would seem reliable, particularly for physics, chemistry, biology and medicine.
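The comparison of rankings from contiguous publication periods can be sketched as follows; the per-year output data, the ranking rule (total output, ties broken arbitrarily) and the mean-rank-shift measure are illustrative assumptions, not the study's exact procedure:

```python
def window_rankings(output_by_year, start, length):
    """Illustrative sketch: rank scientists by total output over the
    publication window [start, start + length); rank 1 = most productive.

    output_by_year: {scientist: {year: output}}.
    """
    totals = {
        s: sum(y.get(yr, 0) for yr in range(start, start + length))
        for s, y in output_by_year.items()
    }
    ordered = sorted(totals, key=totals.get, reverse=True)
    return {s: i + 1 for i, s in enumerate(ordered)}

def mean_rank_shift(r1, r2):
    """Average absolute change in rank between two rankings."""
    return sum(abs(r1[s] - r2[s]) for s in r1) / len(r1)

data = {"a": {2003: 5, 2004: 1}, "b": {2003: 2, 2004: 4}}
r1 = window_rankings(data, 2003, 1)  # a ranked 1st, b 2nd
r2 = window_rankings(data, 2004, 1)  # b ranked 1st, a 2nd
print(mean_rank_shift(r1, r2))  # 1.0
```

Repeating this for windows of increasing length and plotting the mean rank shift against window length would reveal the stabilization above 3 years that the abstract reports.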
