Search Results

You are looking at 1 - 10 of 53 items for: "Journal ranking"

Abstract

Two paradigmatic approaches to the normalisation of citation-impact measures are discussed. The results of the mathematical manipulation of standard indicators such as citation means, notably journal Impact Factors (called a posteriori normalisation), are compared with citation measures obtained from fractional citation counting (called a priori normalisation). The distributions of two subfields of the life sciences and of mathematics are chosen for the analysis. It is shown that both methods provide indicators that are useful tools for the comparative assessment of journal citation impact.
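The a priori approach (fractional citation counting) is easy to sketch in code. A minimal illustration, assuming each citing record carries the length of its own reference list; the function and field names are hypothetical:

```python
def fractional_citation_count(citing_papers):
    """A priori normalisation: each citation is weighted by
    1 / (length of the citing paper's reference list), so citations
    from long reference lists contribute less."""
    return sum(1.0 / p["n_references"] for p in citing_papers)

# Two citing papers: one with 10 references, one with 50.
score = fractional_citation_count([{"n_references": 10},
                                   {"n_references": 50}])
# 1/10 + 1/50 = 0.12, versus 2.0 under whole counting
```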

Abstract  

For the first time, the impact of different ranking parameters on one and the same experimentally obtained set of 610 journals is studied. The significance of the three journal ranking parameters Selective Journal Productivity, Selective Impact, and Collectivity is established. Significant parameters cause strong re-ranking in journal rank distributions and, in the transition between individual and collective parameters, also change the shape of the cumulated curves. No parameter can replace another; each carries essential information on the communication process. The author's concept is open to further parameters and stresses the role of humans in decision making. The connection between simple behavioural principles and scientometrics is emphasized. The holography principle and the maximum speed principle are claimed to be the most promising.

Abstract  

Selecting an appropriate set of scientific journals which best meets the users' needs and the dynamics of science requires the use of weight parameters by which journals can be ranked. Previous methods are based on the simple counting of relevant articles, or hits, in SDI runs. The new method proposed combines hit numbers in SDI runs and journals' impact factors into a weight parameter called Selective Impact. The experimental results obtained show that ranking by Selective Impact leads to higher-quality conclusions being drawn from journal rank distributions.
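The abstract describes Selective Impact as a combination of SDI hit counts and impact factors; a minimal sketch, assuming the parameter is the simple product of the two (the exact combination rule is not given in the abstract, and all names and numbers below are illustrative):

```python
def selective_impact(hits, impact_factor):
    """Hypothetical form: Selective Impact as the product of the
    SDI hit count and the journal's impact factor."""
    return hits * impact_factor

journals = [("A", 40, 0.5),   # many hits, low IF  -> SI = 20.0
            ("B", 10, 3.0),   # few hits, high IF  -> SI = 30.0
            ("C", 25, 1.0)]   #                    -> SI = 25.0
ranked = sorted(journals, key=lambda j: selective_impact(j[1], j[2]),
                reverse=True)
names = [j[0] for j in ranked]  # ['B', 'C', 'A']: re-ranked vs. hits alone
```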

Summary  

The inter-citation journal group is defined as a group of journals with inter-citation relations. In this paper, based on the 2003 JCR, an inter-citation relation matrix of 10 medical journals is established. Based on the transfer-function model of the disturbed citing process, a formula for a journal's impact factor as disturbed by the publication delays of the journals in the group is derived, and the change in every journal's impact factor caused by an increase in each journal's average publication delay is simulated. In the inter-citation journal group, when a journal's publication delay increases, the impact factors of all journals decrease, and the rankings of journals by impact factor may change. The closer the citation relation between two journals, the stronger their interaction and the larger the decrease in their impact factors caused by increases in their publication delays.
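The impact factor at the centre of this simulation is the standard JCR two-year ratio, which is why longer publication delays depress it: citations that arrive late fall outside the citation window. A minimal sketch of the standard definition (the numbers are illustrative):

```python
def two_year_impact_factor(citations_in_year, items_prev_two_years):
    """Standard JCR impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2. Citations delayed past the window
    are simply lost, lowering the ratio."""
    return citations_in_year / items_prev_two_years

if_value = two_year_impact_factor(300, 150)  # 2.0
```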

Abstract  

In the course of the study of scientific journals' rank distributions, two new parameters are defined reflecting collective properties of journals in a network where the journals are linked to each other through co-usage of user profiles for which they contain relevant papers. The first, Collectivity C, is a pure structure parameter, whereas Selective Collectivity N·C uses the Collectivity C of a journal as a weight factor for the number of hits N produced in a retrospective search in a data file. The corresponding rank distributions show, besides the expected re-ranking effect, considerable deviations from a distribution ranked according to the parameter Selective Journal Productivity N.
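The weighting step can be sketched directly: Collectivity C acts as a weight on the hit count N. How C is derived from profile co-usage is not specified in the abstract, so the values below are placeholders chosen only to show the re-ranking effect:

```python
def selective_collectivity(n_hits, collectivity_c):
    """Selective Collectivity = N * C: the hit count N weighted by the
    journal's structure parameter C (placeholder values below)."""
    return n_hits * collectivity_c

journals = {"J1": (100, 0.2),   # N*C = 20.0
            "J2": (60, 0.5)}    # N*C = 30.0
by_n = sorted(journals, key=lambda j: journals[j][0], reverse=True)
by_nc = sorted(journals,
               key=lambda j: selective_collectivity(*journals[j]),
               reverse=True)
# by_n == ['J1', 'J2'] but by_nc == ['J2', 'J1']: the weight re-ranks
```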

Abstract  

Using strictly the same parameters (the identical two publication years (2004–2005) and the identical one-year citation window (2006)), the IF 2006 was compared with the h-index 2006 for two samples of “Pharmacology and Pharmacy” and “Psychiatry” journals computed from the ISI Web of Science. For the two samples, the IF and the h-index rankings of the journals are very different. The correlation coefficient between the IF and the h-index is high for Psychiatry but lower for Pharmacology. The linearity test performed between the h-index and $IF^{\alpha/(\alpha+1)} \cdot n^{1/(\alpha+1)}$ showed the great sensitivity of the model to α. The IF and the h-index can be completely complementary when evaluating journals of the same scientific discipline.
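The fitted quantity can be evaluated numerically; a minimal sketch in which the function name and the value of α are illustrative assumptions, not taken from the paper:

```python
def h_model(impact_factor, n, alpha):
    """Evaluate IF^(alpha/(alpha+1)) * n^(1/(alpha+1)); for alpha = 1
    this reduces to sqrt(IF * n)."""
    return impact_factor ** (alpha / (alpha + 1)) * n ** (1 / (alpha + 1))

h_est = h_model(4.0, 100, alpha=1.0)  # sqrt(4 * 100) = 20.0
```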
Abstract  

Citation analyses were performed for Australian social science journals to determine the differences between data drawn from Web of Science and Scopus. These data were compared with the tier rankings assigned by disciplinary groups to the journals for the purposes of a new research assessment model, Excellence in Research for Australia (ERA), due to be implemented in 2010. In addition, citation-based indicators, including an extended journal impact factor, the h-index, and a modified journal diffusion factor, were calculated to assess whether subsequent analyses influence the ranking of journals. The findings suggest that the Scopus database provides a higher number of citations for more of the journals. However, there appears to be very little association between the assigned tier ranking of journals and their rank derived from citation data. The implications for Australian social science researchers are discussed in relation to the use of citation analysis in the ERA.
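Of the indicators listed, the h-index has the most compact definition: the largest h such that h of the journal's papers have at least h citations each. A minimal sketch (the citation counts are illustrative):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

h = h_index([10, 8, 5, 4, 3])  # 4: four papers have >= 4 citations each
```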

Abstract  

The qualitative label ‘international journal’ is used widely, including in national research quality assessments. We determined the practicability of analysing internationality quantitatively using 39 conservation biology journals, providing a single numeric index (IIJ) based on 10 variables covering the countries represented in the journals’ editorial boards, authors and authors citing the journals’ papers. A numerical taxonomic analysis refined the interpretation, revealing six categories of journals reflecting distinct international emphases not apparent from simple inspection of the IIJs alone. Categories correlated significantly with journals’ citation impact (measured by the Hirsch index), with their rankings under the Australian Commonwealth’s ‘Excellence in Research for Australia’ and with some countries of publication, but not with listing by ISI Web of Science. The assessments do not reflect on quality, but may aid editors planning distinctive journal profiles, or authors seeking appropriate outlets.

Scientometrics
Authors: J. A. García, Rosa Rodriguez-Sánchez, J. Fdez-Valdivia, and J. Martinez-Baena

attempt made to differentiate between the quality, visibility or impact of the different journals when funding is allocated, there is little incentive to strive for publication in a prestigious journal. In journal ranking models, the ranking score is a

discernable influence as “flops”. Overall, the blockbuster hypothesis suggests that common performance measures, such as journal rankings, are driven by the relative number of blockbusters and flops in each journal. Our central premise is that the creation of
