Search Results

You are looking at 1–8 of 8 items for

  • Author or Editor: Loet Leydesdorff
  • Refine by Access: Content accessible to me

Abstract  

The two Journal Citation Reports of the Science Citation Index 2004 and the Social Science Citation Index 2004 were combined in order to analyze and map journals and specialties at the edges of, and in the overlap between, the two databases. For journals which belong to the overlap (e.g., Scientometrics), the merger mainly enriches our insight into the structure that can be obtained from the two databases separately; but in the case of scientific journals which are more marginal in either database, the combination can provide a new perspective on the position and function of these journals (e.g., Environment and Planning B: Planning and Design). The combined database additionally enables us to map citation environments comprehensively in terms of the various specialties. Using the vector-space model, visualizations are provided for specialties that are part of the overlap (information science, science & technology studies). On the basis of the resulting visualizations, “betweenness”, a measure from social network analysis, is suggested as an indicator of the interdisciplinarity of journals.
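As an illustration of the betweenness measure proposed here, the following minimal Python sketch computes betweenness centrality on a toy journal citation network using the networkx library. The journal names and citation counts are invented for the example and are not taken from the JCR data analyzed in the article.

```python
# Minimal sketch: betweenness centrality as an interdisciplinarity indicator.
# Journals and citation counts are invented for illustration.
import networkx as nx

# Aggregated journal-journal citations: (citing, cited, count)
citations = [
    ("Scientometrics", "J. Informetrics", 120),
    ("J. Informetrics", "Scientometrics", 95),
    ("Scientometrics", "Soc. Stud. Sci.", 40),
    ("Soc. Stud. Sci.", "Scientometrics", 35),
    ("J. Informetrics", "JASIST", 80),
    ("JASIST", "J. Informetrics", 70),
    ("Soc. Stud. Sci.", "Res. Policy", 55),
    ("Res. Policy", "Soc. Stud. Sci.", 50),
]

G = nx.DiGraph()
for citing, cited, n in citations:
    G.add_edge(citing, cited, weight=n)

# A journal with high betweenness lies on many shortest paths between other
# journals, i.e., it bridges otherwise separate citation environments.
bc = nx.betweenness_centrality(G)  # unweighted shortest paths, for simplicity
for journal, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(f"{journal:20s} {score:.3f}")
```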

Open access

Abstract

Using aggregated journal–journal citation networks, the measurement of the knowledge base in empirical systems is factor-analyzed in two cases of interdisciplinary development during the period 1995–2005: (i) the development of nanotechnology in the natural sciences, and (ii) the development of communication studies as an interdiscipline between social psychology and political science. The results are compared with a case of stable development: the citation networks of core journals in chemistry. These citation networks are intellectually organized by networks of expectations in the knowledge base at the specialty (that is, above-journal) level. The “structuration” of structural components (over time) can be measured as configurational information. The latter is compared with the Shannon-type information generated in the interactions among the structural components: the difference between these two measures provides a measure of the redundancy generated by the specification of a model in the knowledge base of the system. This knowledge base acts (against the entropy law) to variable extents upon the knowledge infrastructures provided by the observable networks of relations.
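The Shannon-type calculation behind the abstract's comparison can be made concrete with a small sketch. The snippet below computes the mutual information among three variables, T(x,y,z) = H(x) + H(y) + H(z) − H(x,y) − H(x,z) − H(y,z) + H(x,y,z), which Leydesdorff has used in this line of work as a configurational measure; negative values are read as redundancy. The joint distribution is invented for illustration.

```python
# Sketch: mutual information in three dimensions as a configurational measure.
# The joint distribution p over three binary "structural components" is invented.
import math

p = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.05, (0, 1, 0): 0.05, (0, 1, 1): 0.20,
    (1, 0, 0): 0.05, (1, 0, 1): 0.20, (1, 1, 0): 0.20, (1, 1, 1): 0.05,
}

def H(dist):
    """Shannon entropy in bits of a probability dictionary."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

def marginal(dims):
    """Marginal distribution over the given axes, e.g. dims=(0, 2) for (x, z)."""
    out = {}
    for key, q in p.items():
        k = tuple(key[d] for d in dims)
        out[k] = out.get(k, 0.0) + q
    return out

Hx, Hy, Hz = (H(marginal((d,))) for d in range(3))
Hxy, Hxz, Hyz = H(marginal((0, 1))), H(marginal((0, 2))), H(marginal((1, 2)))
Hxyz = H(p)

T = Hx + Hy + Hz - Hxy - Hxz - Hyz + Hxyz
print(f"T(x,y,z) = {T:.3f} bits")  # negative -> redundancy among the components
```

For this distribution T ≈ −0.28 bits: the three components share more structure than any pairwise inspection reveals, which is the kind of redundancy the abstract contrasts with the Shannon-type information generated in interactions.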

Open access

Abstract

Journal impact factors (IFs) can be considered historically as the first attempt to normalize citation distributions, by using averages over two years. However, it has been recognized that citation distributions vary among fields of science and that one needs to normalize for this. Furthermore, the mean, or any other central-tendency statistic, is not a good representation of a citation distribution because these distributions are skewed. Important steps have been taken toward solving these two problems during the last few years. First, one can normalize at the article level by using the citing audience as the reference set. Second, one can use non-parametric statistics to test the significance of differences among ratings. A proportion of most-highly cited papers (the top-10% or top-quartile) on the basis of fractional counting of the citations may provide an alternative to the current IF. This indicator is intuitively simple, allows for statistical testing, and accords with the state of the art.
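A minimal sketch of the proposed indicator follows, assuming a citing-side ("audience") normalization in which each citation is weighted by one over the length of the citing paper's reference list, and the top-10% boundary is taken from a reference set. All numbers are invented for illustration.

```python
# Sketch: share of a unit's papers among the top-10% most-highly cited,
# with citations counted fractionally (1 / length of the citing paper's
# reference list). All data below are invented.
import numpy as np

rng = np.random.default_rng(0)

# For every paper in the reference set: the reference-list lengths of the
# papers that cite it (field differences show up in these lengths).
reference_set = [rng.integers(10, 60, size=rng.integers(0, 40)) for _ in range(1000)]

def fractional_citations(citing_ref_counts):
    # Each citation contributes 1/(reference-list length of the citing paper).
    return float(np.sum(1.0 / citing_ref_counts)) if len(citing_ref_counts) else 0.0

scores = np.array([fractional_citations(c) for c in reference_set])
threshold = np.percentile(scores, 90)    # top-10% boundary of the reference set

unit_papers = scores[:50]                # pretend the first 50 papers are "ours"
ptop10 = float(np.mean(unit_papers > threshold))
print(f"top-10% threshold: {threshold:.3f}; unit's share in top-10%: {ptop10:.2%}")
```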

Open access

Katy Börner: Atlas of science: visualizing what we know

The MIT Press, Cambridge, MA/London, UK, 2010, US$20

Scientometrics
Author:
Loet Leydesdorff
Open access

Abstract  

The Journal Citation Reports of the Science Citation Index 2004 were used to delineate a core set of nanotechnology journals and a nanotechnology-relevant set. In comparison with 2003, the core set has grown and the relevant set has decreased. This suggests a higher degree of codification in the field of nanotechnology: the field has become more focused in terms of citation practices. Using the citing patterns among journals at the aggregate level, a core group of ten nanotechnology journals in the vector space can be delineated on the criterion of betweenness centrality. National contributions to this core group of journals are evaluated for the years 2003, 2004, and 2005. Additionally, the specific class of nanotechnology patents in the database of the U.S. Patent and Trademark Office (USPTO) is analyzed to determine whether non-patent literature references can be used as a source for delineating the knowledge base in terms of scientific journals. These references are primarily to general science journals and letters, and are therefore not specific enough for the purpose of delineating a journal set.
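The delineation step described here can be sketched as follows: journals' citing patterns are treated as vectors, cosine similarities above a threshold define edges, and betweenness centrality is the criterion for the core group. The journal names, citing profiles, and the 0.65 threshold below are invented; the article works with the aggregated JCR 2004 citing patterns.

```python
# Sketch: delineating core journals by betweenness centrality in the
# cosine-normalized vector space of citing patterns. All numbers invented.
import networkx as nx
import numpy as np

journals = ["Nano Lett.", "Nanotechnology", "Small",
            "J. Phys. Chem. B", "Appl. Phys. Lett."]
# Rows: how often each journal cites each of six reference categories.
citing = np.array([
    [90, 60, 80, 10,  5, 10],
    [80, 70, 85, 15, 10, 10],
    [85, 65, 80, 20, 10, 15],
    [95, 20, 30, 80, 60, 10],
    [10, 90, 40, 70, 85, 20],
], dtype=float)

# Cosine-normalize the citing patterns (the "vector space").
unit = citing / np.linalg.norm(citing, axis=1, keepdims=True)
cos = unit @ unit.T

G = nx.Graph()
G.add_nodes_from(journals)
for i in range(len(journals)):
    for j in range(i + 1, len(journals)):
        if cos[i, j] >= 0.65:            # illustrative similarity threshold
            G.add_edge(journals[i], journals[j])

bc = nx.betweenness_centrality(G)
print(sorted(bc.items(), key=lambda kv: -kv[1]))
# The journal with the highest betweenness bridges otherwise separate
# neighborhoods and would be a candidate for the core group.
```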

Open access

Abstract

Citation distributions are so skewed that using the mean or any other central tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated.
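In its simplest form, I3 assigns each paper its percentile rank (0–100) within a reference set and sums these ranks over the unit's papers. The sketch below follows that reading; the citation counts are invented for illustration.

```python
# Sketch of the Integrated Impact Indicator (I3) in its simplest form:
# sum of the papers' percentile ranks within a reference set.
# Citation counts are invented for illustration.
from scipy import stats

reference_citations = [0, 0, 1, 1, 2, 3, 3, 5, 8, 13, 21, 34, 55, 89]
unit_citations = [1, 5, 34]  # "our" papers

# percentileofscore is non-parametric: no distributional assumption is made.
percentiles = [stats.percentileofscore(reference_citations, c, kind="weak")
               for c in unit_citations]
i3 = sum(percentiles)
print(percentiles, "I3 =", round(i3, 1))
```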

Open access

Abstract

In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change its old “crown” indicator in citation analysis into a new one. Waltman et al. (Scientometrics 87:467–481, 2011a) argue that this change does not affect rankings at various aggregated levels. However, the CWTS data are not publicly available for testing and criticism. We therefore comment by using previously published data of Van Raan (Scientometrics 67(3):491–502, 2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was not significantly correlated with the two parameters developed by CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and citations per paper/mean field citation score (CPP/FCSm), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572, 2005). Given the high correlations between the old and new “crown” indicators, one can expect that the lack of correlation with the peer-review-based quality indicator applies equally to the newly developed ones.
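The change at issue can be illustrated with a short sketch, assuming (as discussed in this literature) that the old crown indicator is a ratio of means while the new one is a mean of ratios; the citation counts and field expectations below are invented.

```python
# Sketch of the old vs. new "crown" indicator:
# old CPP/FCSm = ratio of means; new (MNCS-type) = mean of ratios.
# All numbers are invented for illustration.
import numpy as np

citations = np.array([10, 0, 3, 25, 1], dtype=float)  # citations per paper
expected  = np.array([ 4, 2, 3,  8, 1], dtype=float)  # mean field citation scores

old_crown = citations.mean() / expected.mean()        # ratio of means
new_crown = np.mean(citations / expected)             # mean of ratios
print(f"old: {old_crown:.2f}  new: {new_crown:.2f}")  # e.g., 2.17 vs. 1.52
```

The two formulas weight highly cited papers differently at the level of the set, which is why the critique focuses on how either correlates with peer review rather than on the rankings they produce.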

Open access

Abstract  

This article explores the emergence of knowledge from scientific discoveries and its effects on the structure of scientific communication. Network analysis is applied to understand this emergence institutionally, as changes in the journals; semantically, as changes in the codification of meaning in terms of words; and cognitively, as the new knowledge becomes the emergent foundation of further developments. The discovery of fullerenes in 1985 is analyzed as the scientific discovery that triggered a process which led to research on nanotubes.

Open access