For many years, the ISI Web of Knowledge from Thomson Reuters was the sole publication and citation database covering all
areas of science, making it an invaluable tool for bibliometric analysis. In 2004, Elsevier introduced Scopus, which is
rapidly becoming a good alternative. Several attempts have been made to compare these two instruments from the point of
view of journal coverage, either for research or for bibliometric assessment of research output.
This paper attempts to answer a question that all researchers ask: what is to be gained by searching both databases?
Or, if forced to opt for one of them, which should you prefer? To answer this question, a detailed paper-by-paper
study is presented of the coverage achieved by ISI Web of Science and by Scopus of the output of a typical university. After
considering the set of Portuguese universities, a detailed analysis is made for two of them for 2006, the two being chosen
for a comprehensiveness typical of most European universities. The general conclusion is that about two thirds of the documents
referenced in either of the two databases may be found in both, while a fringe of one third is referenced in only one or
the other. The citation impact of the documents in the core present in both databases is higher, but the impact of the fringe
present in only one of the databases should not be disregarded, as some high-impact documents may be found among it.
The assessment of individual researchers using bibliometric indicators is more complex than that of a region, country, or university. For large scientific bodies, averages over a large number of researchers and their outputs are generally believed to give an indication of the quality of the research work. For an individual, detailed peer evaluation of his or her research outputs is required, and even this may fail in the short term to provide a final, long-term assessment of the relevance and originality of the work. Scientometric assessment at the individual level is not an easy task, not only because of the smaller number of publications being evaluated, but also because other factors can significantly influence the bibliometric indicators applied. Citation practices vary widely among disciplines and subdisciplines, and this may explain the lack of good bibliometric indicators at the individual level. The main goal of this study was to develop an indicator whose calculation considers some of the aspects that must be taken into account when assessing scientific performance at the individual level. The indicator developed, the hnf index, considers the different citation cultures of each field and the number of authors per publication. The results showed that the hnf index can be used to assess the scientific performance of individual researchers and to follow the performance of a researcher over time.
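The abstract names the two corrections the hnf index applies (field citation cultures and number of authors per publication) but does not give its formula. As an illustration only, the two ideas can be sketched as a field-normalized, author-fractionalized Hirsch-type index; the input format and the rescaling step below are assumptions, not the authors' definition.

```python
# Illustrative sketch only: the hnf index's exact formula is not given
# in the abstract. This reproduces the two ideas it names -- correcting
# for field citation cultures and for the number of authors -- as a
# field-normalized, author-fractionalized h-index.

def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    h = 0
    for rank, v in enumerate(sorted(values, reverse=True), start=1):
        if v >= rank:
            h = rank
        else:
            break
    return h

def hnf_sketch(papers, field_mean_citations, scale=10):
    """papers: list of (citations, n_authors, field) tuples.
    field_mean_citations: mean citations per paper in each field,
    used to correct for differing citation cultures.
    The scale factor is an illustrative choice that makes the
    normalized values comparable in magnitude to citation counts."""
    normalized = [
        citations / field_mean_citations[field] / n_authors
        for citations, n_authors, field in papers
    ]
    return h_index([round(x * scale) for x in normalized])
```

For example, a two-author biology paper with 20 citations in a field averaging 10 citations per paper contributes a normalized value of 1.0, the same as a solo-authored mathematics paper with 10 citations in a field averaging 10.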
This paper presents the journal relative impact (JRI), an indicator for the scientific evaluation of journals. The JRI considers in its calculation the different citation cultures presented by the Web of Science subject categories. The JRI is calculated using a variable citation window, defined taking into account the time required in each subject category for the maturation of citations. The type of document considered in each subject category depends on its output in relation to the citations. The scientific performance of each journal is considered relative to each subject category it belongs to, allowing the comparison of journals from different fields. The results obtained show that the JRI can be used to assess the scientific performance of a given journal, and that the SJR and SNIP should be used to complement the information provided by the JRI. The JRI exhibits good properties such as stability over time and predictability.
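The abstract states that the JRI uses a citation window chosen per Web of Science subject category and compares a journal against each category it belongs to, but does not give the formula. The ratio-and-average construction below is an assumed reconstruction used only to make those two ideas concrete.

```python
# Illustrative sketch only: the JRI formula is not specified in the
# abstract. This sketch assumes the journal's mean citations per paper
# (counted within each category's own citation window) are divided by
# the category-wide mean over the same window, then averaged across
# the categories the journal belongs to.

def jri_sketch(journal_cites_per_paper, category_baseline, categories):
    """journal_cites_per_paper: mean citations per paper received by
    the journal within each category's citation window
    (dict: category -> value).
    category_baseline: mean citations per paper across the whole
    category over the same window (dict: category -> value).
    categories: the subject categories the journal belongs to."""
    ratios = [
        journal_cites_per_paper[cat] / category_baseline[cat]
        for cat in categories
    ]
    # A relative impact of 1.0 means the journal is cited at the
    # category average; averaging over categories lets multi-category
    # journals be compared with journals from other fields.
    return sum(ratios) / len(ratios)
```

Dividing by a category baseline computed over that category's own window is what would allow, say, a mathematics journal and a cell-biology journal to be compared despite very different citation maturation times.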