Search Results

You are looking at 1 - 10 of 474 items for:

  • "Excellence"
  • Refine by Access: All Content

citizenship. Altbach (2007) described "world class universities" in a more specific way, indicating that the key elements of a world class university include excellence in research, top professors, academic freedom, governance, adequate

Restricted access

-KM excellence framework for an industry is a lifetime experience. Thus, dynamics inside the industry are quickly changing. There is a huge divergence in managerial and workforce direction in the industry. With the expansion in the technology revolution, the

Open access

benchmarking some European universities included in Excellence Initiatives. The rest of the paper is organized as follows. The "Methodological aspects in building robust ranks" section presents the methodological aspects of obtaining robust ranks by

Restricted access

. 2004; Gazni and Didegah 2011; Gorraiz et al. 2011). Our study on RC is an attempt to estimate the degree of internationalization of academic institutions and regions. Furthermore, potential influences of RC on excellence initiatives of

Restricted access

Abstract  

This paper introduces a citation-based "systems approach" for analyzing the various institutional and cognitive dimensions of scientific excellence within national research systems. The methodology, covering several aggregate levels, focuses on the most highly cited research papers in the international journal literature. The distribution of these papers across institutions and disciplines enables objective comparisons of their (possible) international-level scientific excellence. By way of example, we present key results from a recent series of analyses of the research system in the Netherlands in the mid-1990s, focusing on the performance of the universities across the major scientific disciplines within the context of the entire system's scientific performance. Special attention is paid to their contribution to the world's top 1% and top 10% most highly cited research papers. The findings indicate that these high-performance papers provide a useful analytical framework, in terms of transparency, cognitive and institutional differentiation, and scope for domestic and international comparisons, yielding new indicators for identifying "world class" scientific excellence at the aggregate level. The average citation scores of these academic "Centres of Scientific Excellence" appear to be inadequate predictors of their production of highly cited papers. However, further critical reflection and in-depth validation studies are needed to establish the true potential of this approach for science policy analyses and the evaluation of research performance.
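
The percentile-based tallying this abstract describes can be illustrated with a short sketch: find the global citation thresholds for the top 1% and top 10% of papers, then count how many of each institution's papers clear them. The flat (institution, citations) input and the function name are assumptions made for illustration; the study itself works with field- and level-specific reference sets, which this sketch omits.

```python
# Illustrative sketch only: global top-percentile thresholds and per-institution counts.
# The data layout and names are assumptions, not the authors' actual data model.
import numpy as np
from collections import defaultdict

def top_percentile_counts(papers, pct):
    """papers: list of (institution, citations). Returns {institution: number of
    papers at or above the global (100 - pct)th citation percentile}."""
    citations = np.array([c for _, c in papers])
    threshold = np.percentile(citations, 100 - pct)
    counts = defaultdict(int)
    for inst, c in papers:
        if c >= threshold:
            counts[inst] += 1
    return dict(counts)

# Toy example: three institutions, a handful of papers each.
papers = [("A", 120), ("A", 3), ("B", 45), ("B", 2), ("B", 1), ("C", 200), ("C", 8)]
print(top_percentile_counts(papers, pct=10))  # papers in the top 10% most cited
print(top_percentile_counts(papers, pct=1))   # papers in the top 1% most cited
```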

Restricted access

Summary  

A high level of citation to an author's work is, in general, testimony that the work has been noted and used by peers. High citation counts correlate with other forms of recognition and reward, and are a key indicator of research performance among bibliometric indicators. The Institute for Scientific Information (ISI) defines a 'highly cited researcher' (HCR) as one of the 250 most cited authors of journal papers in any discipline. Citation data for 20 years (1981-1999) are used to calculate the share of HCRs for countries in 21 subject areas. We find that the US dominates in all subject areas (US share ~40-90%). Based on the number of highly cited researchers in a country, an index of citation excellence is proposed. We find that the rank order of countries based on this index is in conformity with our general understanding of research excellence, whereas the more frequently used indicator, citations per paper, gives an unacceptable rank order owing to an inherent bias toward very small countries. Additionally, a high value of the index of citation excellence is found to be associated with a higher concentration of highly cited researchers in the affiliating organizations.
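
A minimal sketch of an HCR-based country index follows. The summary does not give the formula, so the index below (a country's share of HCRs divided by its share of papers) is only one plausible reading, not the authors' definition; all names and numbers are illustrative.

```python
# Assumed, simplified index: HCR share relative to publication share per country.
def hcr_share(hcr_counts):
    total = sum(hcr_counts.values())
    return {c: n / total for c, n in hcr_counts.items()}

def citation_excellence_index(hcr_counts, paper_counts):
    shares = hcr_share(hcr_counts)
    paper_total = sum(paper_counts.values())
    return {c: shares[c] / (paper_counts[c] / paper_total) for c in hcr_counts}

# Toy numbers only; not data from the study.
hcrs = {"US": 180, "UK": 30, "NL": 10}
papers = {"US": 900_000, "UK": 250_000, "NL": 60_000}
print(citation_excellence_index(hcrs, papers))
```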

Restricted access

Abstract  

Evaluation studies of scientific performance conducted during the past years increasingly focus on the identification of research of the 'highest quality', 'top' research, or 'scientific excellence'. This shift in focus has led to the development of new bibliometric methodologies and indicators. Technically, it means a shift from bibliometric impact scores based on average values, such as the average impact of all papers published by the unit under evaluation, towards indicators reflecting the top of the citation distribution, such as the number of 'highly cited' or 'top' articles. In this study we present a comparative analysis of a number of standard and new indicators of research performance or 'scientific excellence', using techniques applied in studies conducted by CWTS in recent years. It is shown that each type of indicator reflects a particular dimension of the general concept of research performance. Consequently, the application of a single indicator may provide an incomplete picture of a unit's performance. It is argued that the various types of indicators need to be combined in order to offer policy makers and evaluators valid and useful assessment tools.
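
The contrast between average-based impact scores and top-of-the-distribution indicators can be sketched as follows. This is a toy illustration under assumed inputs, not the CWTS indicator definitions.

```python
# Assumed sketch: an average-based indicator versus a 'top of the distribution' indicator
# for one unit's papers, measured against a stand-in reference set.
import numpy as np

def average_impact(unit_citations, reference_citations):
    """Mean citations of the unit's papers divided by the reference mean."""
    return np.mean(unit_citations) / np.mean(reference_citations)

def top_share(unit_citations, reference_citations, pct=10):
    """Fraction of the unit's papers at or above the reference top-`pct`% threshold."""
    threshold = np.percentile(reference_citations, 100 - pct)
    return np.mean(np.array(unit_citations) >= threshold)

reference = np.random.default_rng(0).poisson(5, size=10_000)  # stand-in field baseline
unit = [0, 1, 2, 2, 3, 4, 60, 75]                             # few papers, two big hits
print(average_impact(unit, reference))  # pulled up by the two highly cited papers
print(top_share(unit, reference))       # counts directly how many papers are 'top'
```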

Restricted access

excellence. 1 One of us recently (Jan. 20, 2011) received access to this data in response to a request from the Dean of the Academic Medical Center of the University of Amsterdam. This communication was first submitted

Open access

and level of excellence, devoted to sensors, actuators and microsystems. The objective is to instil a culture of business collaboration in the academic world. The MESA+ LAB complex also functions as a test bank for small firms, pharmaceutical products

Restricted access

Summary

As citation practices strongly depend on fields, field normalisation is recognised as necessary for the fair comparison of figures in bibliometrics and evaluation studies. However, fields may be defined at various levels, from small research areas to broad academic disciplines, and normalisation values are therefore expected to vary. The aim of this project was to test the stability of citation ratings of articles as the level of observation, and hence the basis of normalisation, changes. A conventional classification of science based on ISI subject categories and their aggregates at various scales was used, namely at five levels: all science, large academic discipline, sub-discipline, speciality and journal. Among various normalisation methods, we selected a simple ranking method (quantiles), based on the citation score of the article within each particular aggregate (journal, speciality, etc.) it belonged to at each level. The study was conducted on articles in the full SCI range, for publication year 1998 with a four-year citation window. Stability is measured in three ways: overall comparison of article rankings; individual trajectories of articles; and survival of the top-cited class across levels. Overall rank correlations on the observed empirical structure are benchmarked against two fictitious sets that keep the same embedded structure of articles but reassign citation scores in either a totally ordered or a totally random distribution. These sets act as a 'worst case' and a 'best case', respectively, for the stability of citation ratings. The results show that: (a) the average citation rankings of articles change substantially with the level of observation; (b) observation at the journal level is very particular, and its results differ greatly, in all test circumstances, from those at all other levels; (c) the lack of cross-scale stability is confirmed when looking at the distribution of individual trajectories of articles across the levels; and (d) when considering the top-cited fractions, a standard measure of excellence, the contents of the 'top-cited' set are found to be completely dependent on the level of observation. The instability of impact measures should not be interpreted as a lack of robustness but rather as the co-existence of various perspectives, each with its own form of legitimacy. A follow-up study will focus on the micro levels of observation and will be based on a structure built around bibliometric groupings rather than conventional groupings based on ISI subject categories.
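
The quantile-ranking idea can be sketched as follows: rank each article's citations within the aggregate it belongs to at a given level, then compare the ranks obtained at two different levels. The grouping labels and the use of a Spearman correlation as the stability measure are illustrative assumptions, not the study's exact protocol.

```python
# Assumed sketch: per-article citation quantiles computed within each aggregate at a
# given level, compared across two levels of observation.
import numpy as np
from scipy.stats import rankdata, spearmanr

def quantile_rank_within(citations, groups):
    """Per-article citation quantile in (0, 1], computed within each group."""
    citations, groups = np.asarray(citations), np.asarray(groups)
    out = np.empty(len(citations))
    for g in np.unique(groups):
        mask = groups == g
        out[mask] = rankdata(citations[mask]) / mask.sum()
    return out

citations = [50, 3, 8, 1, 40, 6, 2, 90]
journal    = ["J1", "J1", "J1", "J2", "J2", "J2", "J2", "J2"]  # fine-grained level
discipline = ["D1"] * 8                                        # coarse level: one field

r_journal = quantile_rank_within(citations, journal)
r_field   = quantile_rank_within(citations, discipline)
print(spearmanr(r_journal, r_field))  # cross-level stability of the rankings
```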

Restricted access