Browse Our Mathematics and Statistics Journals

Mathematics and statistics journals publish papers on the theory and application of mathematics, statistics, and probability. Most such journals have a broad scope encompassing the major mathematical fields: logic and foundations, algebra and number theory, analysis (including differential equations, functional analysis, and operator theory), geometry, topology, combinatorics, probability and statistics, numerical analysis and computation theory, mathematical physics, and more.

Mathematics and Statistics


Abstract

The obsolescence and “durability” of scientific literature have been important elements of debate for many years, especially regarding the proper calculation of bibliometric indicators. The effects of “delayed recognition” on impact indicators are of interest not only to bibliometricians but also to research managers and scientists themselves. It has been suggested that the “Mendel syndrome” is a potential drawback when assessing individual researchers through impact measures: if publications from particular researchers need more time than “normal” to be properly acknowledged by their colleagues, the impact of these researchers may be underestimated under common citation windows. In this paper, we address the question of whether the bibliometric indicators of scientists can be significantly affected by the Mendel syndrome. Applying a methodology developed previously for the classification of papers according to their durability (Costas et al., J Am Soc Inf Sci Technol 61(8):1564–1581, 2010a; J Am Soc Inf Sci Technol 61(2):329–339, 2010b), we analyzed the scientific production of 1,064 researchers working at the Spanish Council for Scientific Research (CSIC) in three different research areas. Cases of potential “Mendel syndrome” are rarely found among researchers, and these cases do not significantly outperform the impact of researchers with a standard pattern of citation reception. The analysis of durability could be included as a parameter when choosing the citation windows used in the bibliometric analysis of individuals.
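The classification in Costas et al. is percentile-based and normalized within fields; purely as a toy illustration of the underlying idea (the window and thresholds below are invented for this sketch, not taken from the paper), a paper's citation history can be labeled by the share of citations it receives early:

```python
def durability_class(yearly_citations, window=3, low=1/3, high=2/3):
    """Crude durability label from a paper's year-by-year citation counts.

    A small early share of citations suggests delayed recognition (a
    "Mendel syndrome" candidate); a large early share suggests a
    "flash in the pan". Thresholds are illustrative only.
    """
    total = sum(yearly_citations)
    if total == 0:
        return "normal"
    share_early = sum(yearly_citations[:window]) / total
    if share_early < low:
        return "delayed"
    if share_early > high:
        return "flash in the pan"
    return "normal"
```

For example, a paper cited as `[0, 0, 1, 5, 9]` over five years would be labeled delayed, while `[8, 4, 2, 1, 0]` would be a flash in the pan.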

Open access

Abstract

In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change its old “crown” indicator in citation analysis into a new one. Waltman et al. (Scientometrics 87:467–481, 2011a) argue that this change does not affect rankings at various aggregated levels. However, the CWTS data are not publicly available for testing and criticism. We therefore comment by using previously published data of Van Raan (Scientometrics 67(3):491–502, 2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was not significantly correlated with either of the two parameters developed by CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and citations per paper/mean field citation score (CPP/FCSm), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572, 2005). Given the high correlations between the old and new “crown” indicators, one can expect that the lack of correlation with the peer-review-based quality indicator applies equally to the newly developed ones.

Open access
Scientometrics
Authors: Ludo Waltman, Nees Jan van Eck, Thed N. van Leeuwen, Martijn S. Visser, and Anthony F. J. van Raan

Abstract

Opthof and Leydesdorff (Scientometrics, 2011) reanalyze data reported by Van Raan (Scientometrics 67(3):491–502, 2006) and conclude that there is no significant correlation between average citation scores measured using the CPP/FCSm indicator on the one hand and the quality judgment of peers on the other. We point out that Opthof and Leydesdorff draw their conclusions from a very limited amount of data. We also criticize the statistical methodology used by Opthof and Leydesdorff. Using a larger amount of data and a more appropriate statistical methodology, we do find a significant correlation between the CPP/FCSm indicator and peer judgment.
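The dispute above turns on how citation indicators correlate with peer ratings, for which a rank correlation is the usual tool. A minimal, standard-library sketch of Spearman's rank correlation with tie handling (an illustration of the general technique, not the exact methodology of either side):

```python
def rank(values):
    """Average ranks, 1-based; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values starting at position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Any strictly increasing relationship, however nonlinear, yields a coefficient of 1, which is why rank correlations are preferred for skewed citation data.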

Open access

Katy Börner: Atlas of science: visualizing what we know

The MIT Press, Cambridge, MA/London, UK, 2010, US$20

Scientometrics
Author:
Loet Leydesdorff
Open access

Abstract

Using aggregated journal–journal citation networks, the measurement of the knowledge base in empirical systems is factor-analyzed in two cases of interdisciplinary developments during the period 1995–2005: (i) the development of nanotechnology in the natural sciences and (ii) the development of communication studies as an interdiscipline between social psychology and political science. The results are compared with a case of stable development: the citation networks of core journals in chemistry. These citation networks are intellectually organized by networks of expectations in the knowledge base at the specialty (that is, above-journal) level. The “structuration” of structural components (over time) can be measured as configurational information. The latter is compared with the Shannon-type information generated in the interactions among structural components: the difference between these two measures provides us with a measure for the redundancy generated by the specification of a model in the knowledge base of the system. This knowledge base incurs (against the entropy law) to variable extents on the knowledge infrastructures provided by the observable networks of relations.
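The configurational information mentioned above is, in information-theoretic terms, the ternary interaction information among three dimensions of a system, which (unlike ordinary mutual information) can be negative and is then read as redundancy. A small sketch of this quantity computed from a joint probability distribution (sign conventions differ across the literature; this is an illustration, not the paper's measurement apparatus):

```python
from collections import Counter
from math import log2

def entropy(joint, dims):
    """Shannon entropy (bits) of the marginal over the given dimensions.

    `joint` maps tuples (x, y, z) to probabilities summing to 1.
    """
    marginal = Counter()
    for cell, p in joint.items():
        marginal[tuple(cell[d] for d in dims)] += p
    return -sum(p * log2(p) for p in marginal.values() if p > 0)

def configurational_information(joint):
    """Ternary interaction information:
    I(x:y:z) = H(x) + H(y) + H(z) - H(xy) - H(xz) - H(yz) + H(xyz).
    Zero for independent variables; negative values indicate redundancy."""
    H = lambda *dims: entropy(joint, dims)
    return (H(0) + H(1) + H(2)
            - H(0, 1) - H(0, 2) - H(1, 2)
            + H(0, 1, 2))
```

For three independent fair bits the measure is 0; for the XOR configuration (z determined jointly by x and y) it is -1 bit, the kind of negative value interpreted above as redundancy generated at the system level.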

Open access

Abstract

We applied a set of standard bibliometric indicators to monitor the scientific state of the art of 500 universities worldwide and constructed a ranking on the basis of these indicators (Leiden Ranking 2010). We find a dramatic and hitherto largely underestimated language effect in the bibliometric, citation-based measurements of research performance when comparing the ranking based on all Web of Science (WoS) covered publications with one based on only English-language WoS covered publications, particularly for Germany and France.

Open access

Abstract

This exploratory study aims at answering the following research question: Are the h-index and some of its derivatives discriminatory when applied to rank social scientists with different epistemological beliefs and methodological preferences? This study reports the results of five Tobit and two negative binomial regression models taking as dependent variable the h-index and six of its derivatives, using a dataset combining bibliometric data collected with the PoP software with cross-sectional data of 321 Quebec social scientists in Anthropology, Sociology, Social Work, Political Science, Economics and Psychology. The results reveal an epistemological/methodological effect making positivists and quantitativists globally more productive than constructivists and qualitativists.

Open access

Abstract

Scientific authorship has important implications in science since it reflects the contribution to research of the different individual scientists and it is considered by evaluation committees in research assessment processes. This study analyses the order of authorship in the scientific output of 1,064 permanent scientists at the Spanish CSIC (WoS, 1994–2004). The influence of age, professional rank and bibliometric profile of scientists over the position of their names in the byline of publications is explored in three different research areas: Biology and Biomedicine, Materials Science and Natural Resources. There is a strong trend for signatures of younger researchers and those in the lower professional ranks to appear in the first position (junior signing pattern), while more veteran or highly-ranked ones, who tend to play supervisory functions in research, are proportionally more likely to sign in the last position (senior signing pattern). Professional rank and age have an effect on authorship order in the three fields analysed, but there are inter-field differences. Authorship patterns are especially marked in the most collaboration-intensive field (i.e. Biology and Biomedicine), where professional rank seems to be more significant than age in determining the role of scientists in research as seen through their authorship patterns, while age has a more significant effect in the least collaboration-intensive field (Natural Resources).

Open access
Scientometrics
Authors: Ludo Waltman, Nees Jan van Eck, Thed N. van Leeuwen, Martijn S. Visser, and Anthony F. J. van Raan

Abstract

We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
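The two mechanisms compared above differ in where the field normalization is applied: the old crown indicator divides total citations by total field-expected citations (a ratio of sums), while the new one averages the per-publication ratios. A minimal sketch of the two calculations (variable names are ours; `expected` stands for the field- and year-specific expected citation rate of each publication):

```python
def old_crown(citations, expected):
    """Old crown indicator (CPP/FCSm style): ratio of sums.

    Total citations divided by total expected citations, so highly
    cited fields dominate the aggregate.
    """
    return sum(citations) / sum(expected)

def new_crown(citations, expected):
    """New crown indicator (MNCS style): mean of per-publication ratios.

    Each publication is normalized first, then averaged, so every
    publication carries equal weight.
    """
    return sum(c / e for c, e in zip(citations, expected)) / len(citations)
```

With many publications the two values tend to agree, matching the finding that differences are very small at high aggregation levels; for a handful of publications with uneven expected rates they can diverge noticeably.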

Open access

Abstract

The h-index has received enormous attention as an indicator that measures the quality of researchers and organizations. We investigate, with the help of a simulation, to what degree authors can inflate their h-index through strategic self-citations. We extended Burrell's publication model with a procedure for placing self-citations, following three different strategies: random self-citations, recent self-citations, and h-manipulating self-citations. The results show that authors can considerably inflate their h-index through self-citations. We propose the q-index as an indicator of how strategically an author has placed self-citations, which serves as a tool to detect possible manipulation of the h-index. The results also show that the best strategy for a high h-index is publishing papers that are highly cited by others. Productivity also has a positive effect on the h-index.
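The h-index itself is simple to compute, and the h-manipulating strategy can be illustrated in a few lines. The following is a minimal sketch of the idea only, not the authors' simulation model (which extends Burrell's publication process); the targeting rule in the second function is our own simplification:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def add_h_manipulating_selfcite(citations):
    """Place one self-citation on the highest-cited paper outside the h-core,
    the paper most likely to push the h-index up. Mutates and returns the list."""
    h = h_index(citations)
    order = sorted(range(len(citations)), key=lambda i: -citations[i])
    for i in order:
        if citations[i] <= h:  # first paper not already securely in the h-core
            citations[i] += 1
            break
    return citations
```

Starting from citation counts `[4, 4, 4, 3]` (h-index 3), a single well-placed self-citation yields `[4, 4, 4, 4]` and an h-index of 4, whereas a randomly placed one usually changes nothing.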

Open access