As a first step, it is important to clearly define the scope of the present bibliometric assessment. Other assessments on related topics (Costanza et al. 2004; Kajikawa et al. 2007; Ma and Stern 2006) seem to have been too inclusive
Bibliometric research assessment has matured into a quantitative phase using more meaningful measures and analogies. In this paper, we propose a thermodynamic analogy and introduce what are called the energy, exergy and entropy terms associated with a bibliometric sequence. This can be displayed as a time series (variation over time), in event terms (variation as papers are published), and also in the form of phase diagrams (energy–exergy–entropy representations). It is exergy which is the most meaningful single-number scalar indicator of a scientist's performance, while entropy then becomes a measure of the unevenness (disorder) of the publication portfolio.
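The abstract does not spell out the formulas. A minimal sketch, assuming the common Prathap-style formulation (energy E = Σc_i², exergy X = C²/P with C the total citations over P papers, and an entropy-like disorder term S = E − X, which is zero for a perfectly even portfolio):

```python
def eee_indicators(citations):
    """Energy-exergy-entropy terms for a citation portfolio.

    Assumed formulation (not stated in the abstract):
      energy  E = sum of c_i^2
      exergy  X = C^2 / P, with C total citations and P paper count
      entropy S = E - X, zero when every paper is cited equally
    """
    p = len(citations)
    c = sum(citations)
    energy = sum(ci * ci for ci in citations)
    exergy = c * c / p
    entropy = energy - exergy
    return energy, exergy, entropy

# Two portfolios with the same total citations (C = 20, P = 4):
# an even one has zero "entropy", an uneven one does not.
even = eee_indicators([5, 5, 5, 5])      # E=100, X=100.0, S=0.0
uneven = eee_indicators([17, 1, 1, 1])   # E=292, X=100.0, S=192.0
```

Note how exergy depends only on the totals C and P, so the entropy term isolates exactly the unevenness of the distribution, matching the abstract's reading of entropy as portfolio disorder.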
Authors: Jonas Lundberg, Anette Fransson, Mats Brommels, John Skår and Inger Lundkvist
This study demonstrates that the choice of search strategy for article identification has an impact on the evaluation and policy analysis of research areas. We have assessed the scientific production in two areas at one research institution during a ten-year period. We explore the recall and precision of three article identification strategies: journal classifications, keywords and authors. Our results show that the different search strategies have varying recall (0.38–1.00) and precision (0.50–1.00). In conclusion, uncritical analysis based on rudimentary article identification strategies may lead to misinterpretation of the development of research areas, and thus provide incorrect data for decision-making.
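The recall and precision figures above follow the standard retrieval definitions: recall is the fraction of the relevant articles a strategy finds, precision the fraction of its hits that are relevant. A minimal sketch with hypothetical article IDs:

```python
def recall_precision(retrieved, relevant):
    """Recall and precision of a search strategy against a gold set.

    retrieved: set of article IDs returned by the strategy
               (e.g. journal classes, keywords, or author names)
    relevant:  set of article IDs known to belong to the research area
    """
    hits = len(retrieved & relevant)
    recall = hits / len(relevant)
    precision = hits / len(retrieved)
    return recall, precision

# Hypothetical example: a keyword search that misses two relevant
# papers and pulls in one off-topic paper.
gold = {"p1", "p2", "p3", "p4"}
keyword_hits = {"p1", "p2", "p5"}
r, p = recall_precision(keyword_hits, gold)  # recall 0.5, precision 2/3
```

A strategy can score 1.00 on one measure and poorly on the other (an author search may retrieve everything relevant plus much noise), which is why the study reports both.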
In 1987, an analysis of the CHI/NSF Science Literature Indicators Data-Base by the author and his colleagues suggested that the UK's percentage share of the world publication and citation totals had continued to fall over 1981–84, although at a slower rate than previously. That finding has recently been challenged by Braun, Glänzel and Schubert who, by combining 28 publication-based indicators, concluded that there was no statistically significant evidence for such a decline. This paper examines the reasons for the discrepancy. It is argued that the methodology of Braun et al. is seriously flawed, as well as being inconsistent with work that they have published elsewhere. By adopting a more consistent and realistic set of indicators and applying them to the data of Braun et al., one arrives at results entirely consistent with those derived from the CHI/NSF data-base.
No new arguments or evidence that undermine our conviction that available scientometric measures do not indicate a statistically significant decline of British science in the first half of the eighties have been found in Martin's reply.
This paper examines the contribution of Indian universities to the mainstream scientific literature during 1987–1989 along two distinct, but inter-related dimensions of quantity and quality of research output. The quantity of output is assessed through the number of articles published in journals covered by Science Citation Index, while the quality of output is assessed through the impact factors of journals in which the articles are published. The impact factors are normalized to eliminate the confounding effects of their covariates, viz. the subject field and the nature of journal. A number of relative indicators are constructed for inter-field and inter-institution comparisons, viz. publication effectiveness index, relative quality index, activity index and citability index. Inter-field comparisons are made at the level of eight macrofields: Mathematics, Physics, Chemistry, Biology, Earth & Space Sciences, Agriculture, Medical Sciences and Engineering & Technology. Inter-institution comparisons cover thirty-three institutions which had published at least 150 articles in three years. The structure of correlations of these institutions with eight macrofields is analyzed through correspondence analysis of the matrices of activity and citability profiles. Correspondence analysis yields a mapping of institutions which reveals the structure of science as determined by the cumulative effect of resource allocation decisions taken in the past for different fields and institutions, i.e., the effect of national science policy.
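The abstract does not reproduce the indicator formulas; a minimal sketch of one of them, assuming the standard textbook form of the activity index (the field's share of an institution's output relative to the field's share of the whole system's output):

```python
def activity_index(inst_field, inst_total, all_field, all_total):
    """Activity Index (AI), standard form.

    inst_field: institution's papers in the field
    inst_total: institution's papers in all fields
    all_field:  all institutions' papers in the field
    all_total:  all institutions' papers in all fields

    AI > 1 means the institution is relatively more active in the
    field than the system average; AI = 1 means average activity.
    The exact normalizations used in the paper are not given in the
    abstract, so treat this as an illustrative assumption.
    """
    return (inst_field / inst_total) / (all_field / all_total)

# Hypothetical numbers: a university with 40 of its 200 papers in
# Chemistry, in a system where 2000 of 20000 papers are Chemistry.
ai = activity_index(40, 200, 2000, 20000)  # (0.2 / 0.1) = 2.0
```

Computing such an index per institution and per macrofield yields exactly the kind of activity-profile matrix to which the paper applies correspondence analysis.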
For many years, the ISI Web of Knowledge from Thomson Reuters was the sole publication and citation database covering all areas of science, making it an invaluable tool in bibliometric analysis. In 2004, Elsevier introduced Scopus, which is rapidly becoming a good alternative. Several attempts have been made to compare these two instruments from the point of view of journal coverage, whether for research or for bibliometric assessment of research output. This paper attempts to answer the question that all researchers ask, i.e., what is to be gained by searching both databases? Or, if you are forced to opt for one of them, which should you prefer? To answer this question, a detailed paper-by-paper study is presented of the coverage achieved by ISI Web of Science and by Scopus of the output of a typical university. After considering the set of Portuguese universities, the detailed analysis is made for two of them for 2006, the two being chosen for a comprehensiveness typical of most European universities. The general conclusion is that about two thirds of the documents referenced in either database may be found in both, while a fringe of one third is referenced in only one or the other. The citation impact of the core documents present in both databases is higher, but the impact of the fringe documents present in only one database should not be disregarded, as some high-impact documents may be found among them.
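A paper-by-paper coverage comparison like this reduces to set operations once records from the two databases have been matched. A minimal sketch, assuming matched identifiers (e.g. DOIs); the record-matching step itself is the hard part and is not shown:

```python
def coverage_overlap(wos_ids, scopus_ids):
    """Split a university's output into the core indexed by both
    databases and the fringes indexed by only one.

    wos_ids, scopus_ids: sets of matched record identifiers.
    """
    core = wos_ids & scopus_ids
    only_wos = wos_ids - scopus_ids
    only_scopus = scopus_ids - wos_ids
    union = wos_ids | scopus_ids
    return {
        "core_share": len(core) / len(union),
        "fringe_share": (len(only_wos) + len(only_scopus)) / len(union),
    }

# Hypothetical record sets illustrating a 2/3 core, 1/3 fringe split.
wos = {"d1", "d2", "d3", "d4", "d5"}
scopus = {"d1", "d2", "d3", "d4", "d6"}
shares = coverage_overlap(wos, scopus)  # core 4/6, fringe 2/6
```

Comparing the mean citation impact of the core set against each fringe set would then reproduce the kind of analysis the paper reports.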
Authors: Giovanni Abramo, Ciriaco Andrea D'Angelo and Tindaro Cicero
appropriate compromise between greater frequency of execution of the evaluation exercise and greater robustness of performance rankings, we simulate several bibliometric assessment scenarios, varying the length of the publication period, but fixing the citation