An approach to research evaluation is described that integrates output indicators from four stages downstream in the innovation process: immediate, intermediate, pre-ultimate and ultimate outputs. Indexes of leading output indicators are constructed and integrated cumulatively into an overall index of key output indicators, the integrated figure of merit (IFM). Data for the indicators are obtained from records and key informants, and the indicators are aggregated using normalized weights. The paper also discusses the limitations and the methodological, conceptual and political/organizational issues of such an approach to research evaluation.
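The cumulative, weight-based integration described above might be sketched as follows. All indicator names, scores and weights here are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch of an integrated figure of merit (IFM):
# indicators are grouped by stage and aggregated with normalized weights,
# then the stage indexes are combined into a single overall figure.

def normalize(weights):
    """Scale a dict of weights so they sum to 1."""
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

def stage_index(indicators, weights):
    """Weighted sum of indicator scores for one stage."""
    w = normalize(weights)
    return sum(w[name] * score for name, score in indicators.items())

# Illustrative data only -- names, scores and weights are invented.
# Each stage maps to (indicator scores, raw weights).
stages = {
    "immediate":    ({"papers": 0.8, "reports": 0.6}, {"papers": 2, "reports": 1}),
    "intermediate": ({"citations": 0.7},              {"citations": 1}),
    "pre_ultimate": ({"patents": 0.4},                {"patents": 1}),
    "ultimate":     ({"adoption": 0.5},               {"adoption": 1}),
}

# Cumulative integration: here simply the mean of the stage indexes.
ifm = sum(stage_index(ind, w) for ind, w in stages.values()) / len(stages)
print(round(ifm, 3))
```

The normalization step keeps the IFM comparable across stages even when the raw weights are elicited on different scales from different informants.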
Bibliometric data are being used by leading and fast-growing countries in science for research evaluation purposes. In the UK, the allocation of public funds to the universities will be carried out mainly according to these data. “The Government
For more than 10 years, the obligation to perform a research evaluation of JRC activities has been included in Council decisions on research programmes. From 1984 to 1986, eight Peer Panel reviews were performed, one for each programme, followed by an overall assessment by the JRC Scientific Council. For the 1988–1991 research programme, mid-term and final evaluations were entrusted to expert Panels for the whole JRC. For the last programme, 1992–1994, a new approach was introduced by charging Visiting Groups with evaluating each JRC Institute. Internal evaluation through questionnaires and bibliometric analyses was also attempted at the JRC. The merits of the various approaches are highlighted, and specific considerations are briefly discussed concerning the control and support functions of evaluations, quantitative and qualitative assessments, distributed or centralised evaluations, and single- or multi-stage evaluations.
A bibliometric analysis was made of an area of veterinary research, avian virology, in the context of seeking quantitative indicators to assist research evaluation for the UK Agricultural and Food Research Council (AFRC). In one approach, a list was made of world publications in avian virology using the CAB database, which is the most appropriate literature source in terms of subject specificity and breadth of coverage. Means were sought to minimise the labour input required for citation studies of this kind; results based on peak-year citations only were similar to those from the more widely used four-year count, in terms of country ranking and time trends. In the second method, the publication outputs of several avian virology research groups were assessed in terms of expected citations, i.e., the average number of citations per paper received by the journals in which the groups published, as compared with the actual citations received. The rankings of the groups were the same in both methods. This second approach, while giving only approximate citation rates, has the advantage of requiring only in-house data. It seems more appropriate for the ex-post evaluation of the output of research groups in the context of agricultural and food research, and it is suggested that further studies on journal-based indicators are warranted.
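The journal-based comparison in the second method can be sketched as follows. Group names and all figures are invented for illustration; they are not data from the study:

```python
# Hypothetical sketch of the journal-based indicator: compare a group's
# actual citations with the citations "expected" from the journals it
# published in (journal mean citations per paper * number of papers).

def expected_citations(papers):
    """papers: list of (journal_mean_cites_per_paper, n_papers) tuples."""
    return sum(mean * n for mean, n in papers)

# Invented example groups: actual citation counts and journal profiles.
groups = {
    "group_A": {"actual": 120, "papers": [(4.0, 20), (2.5, 8)]},  # expected: 100
    "group_B": {"actual": 60,  "papers": [(3.0, 30)]},            # expected: 90
}

# A ratio above 1 means the group is cited more than the average paper
# in its journals; below 1, less.
for name, g in groups.items():
    ratio = g["actual"] / expected_citations(g["papers"])
    print(name, round(ratio, 2))
```

The appeal noted in the abstract is visible here: the calculation needs only the group's own publication list and published journal citation averages, not an external citation search.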
arguably more significant, influence of research, but they too bypass any judgment of fundamental quality (Adams and Smith 2007).
Research evaluation makes great use of indicators of research activity and performance. The focus of such evaluation
Authors: Hildrun Kretschmer, Alexander Pudovkin and Johannes Stegmann
Pudovkin, A., Kretschmer, H., Stegmann, J., & Garfield, E. (2012). Research evaluation. Part I: Productivity and citedness of a German medical research institution. Scientometrics. doi:10.1007/s11192-012-0659-z
The present paper addresses some of the many possible uses of citations, including bookmark, intellectual heritage, impact
tracker, and self-serving purposes. The main focus is on the applicability of citation analysis as an impact or quality measure.
If a paper's bibliography is viewed as consisting of a directed (research impact or quality) component related to intellectual
heritage and random components related to specific self-interest topics, then for large numbers of citations from many different
citing papers, the most significant intellectual heritage (research impact or quality) citations will aggregate and the random
author-specific self-serving citations will be scattered and will not accumulate. However, there are at least two limitations to
this model of citation analysis for stand-alone use as a measure of research impact or quality. First, the reference to intellectual
heritage could be positive or negative. Second, there could be systemic biases which affect the aggregate results, and one
of these, the “Pied Piper Effect”, is described in detail. Finally, the results of a short citation study comparing Russian
and American papers in different technical fields are presented. The questions raised in interpreting these data highlight
a few of the difficulties in attempting to interpret citation results without supplementary information.
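The directed-plus-random model of a bibliography can be illustrated with a small simulation. All parameters here are invented: each citing paper cites a shared set of intellectual-heritage references plus a few references drawn at random, and when counts are taken across many citing papers the heritage references dominate while the self-serving ones stay scattered:

```python
# Toy simulation of the directed + random citation model: shared
# heritage references accumulate across citing papers, while
# paper-specific random references remain scattered.
import random
from collections import Counter

random.seed(42)  # reproducible illustration

HERITAGE = ["H1", "H2", "H3"]                 # shared heritage references
RANDOM_POOL = [f"R{i}" for i in range(200)]   # pool of self-interest references

counts = Counter()
for _ in range(500):                          # 500 citing papers
    refs = HERITAGE + random.sample(RANDOM_POOL, 5)
    counts.update(refs)

# The three most-cited references: the heritage set accumulates
# (500 citations each), while each random reference averages ~12.
top = [ref for ref, _ in counts.most_common(3)]
print(top)
```

This is only the idealized case; as the abstract notes, negative citations and systemic biases such as the “Pied Piper Effect” can distort the aggregate in ways the simulation does not capture.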
Leydesdorff (1998) addresses the history of citations and citation analysis, and the transformation of a reference mechanism into a purportedly
quantitative measure of research impact/quality. The present paper examines different facets of citations and citation analysis,
and discusses the validity of citation analysis as a useful measure of research impact/quality.
Authors:Giovanni Abramo, Ciriaco Andrea D'Angelo and Fulvio Viel
entire economic systems. These countries are turning to national research evaluation exercises to pursue all or some of the following objectives: (i) stimulating greater efficiency in research activity; (ii) resource allocation based on merit; (iii
Authors: Luigi Di Caro, Mario Cataldi and Claudio Schifanella
“Results” section we report real case scenarios (taken from the web application presented in the “Appendix” section), user studies and analyses of the data that highlight both the validity and the reliability of our research evaluation approach