Browse

Showing 41–50 of 70 items for: Mathematics and Statistics

Abstract

The h-index has received enormous attention as an indicator of the quality of researchers and organizations. We investigate, with the help of a simulation, to what degree authors can inflate their h-index through strategic self-citations. We extended Burrell's publication model with a procedure for placing self-citations, following three different strategies: random self-citations, recent self-citations, and h-manipulating self-citations. The results show that authors can considerably inflate their h-index through self-citations. We propose the q-index as an indicator of how strategically an author has placed self-citations, which serves as a tool to detect possible manipulation of the h-index. The results also show that the best strategy for a high h-index is publishing papers that are highly cited by others. Productivity also has a positive effect on the h-index.
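The h-index itself, and the h-manipulating strategy, are easy to make concrete. The sketch below is a minimal illustration of the idea, not the paper's simulation code (which builds on Burrell's publication model): each targeted self-citation goes to the author's own paper closest below the current h-threshold.

```python
def h_index(citations):
    """h = the largest h such that at least h papers have >= h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def targeted_self_citation(citations):
    """One h-manipulating self-citation: cite the own paper whose
    citation count is closest below the current h-threshold, the
    cheapest way to push h towards h + 1."""
    cites = list(citations)
    h = h_index(cites)
    below = [c for c in cites if c <= h]
    if not below:  # every paper already exceeds the threshold
        return cites
    cites[cites.index(max(below))] += 1
    return cites

# With citation counts [10, 8, 6, 4, 4, 2, 1] (h = 4), two targeted
# self-citations raise h to 5; randomly placed ones usually would not,
# since they mostly land on papers that do not affect the threshold.
```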

Open access

Abstract

Inventions combine technological features. When features are barely related, burdensomely broad knowledge is required to identify the situations they share. When features are overly related, burdensomely broad knowledge is required to identify the situations that distinguish them. Thus, according to my first hypothesis, when features are moderately related, the costs of connecting and the costs of synthesizing are jointly minimized, and the most useful inventions emerge. I also hypothesize that continued experimentation with a specific set of features is likely to lead to the discovery of decreasingly useful inventions, because the earlier-identified connections reflect the more common consumer situations. Covering data from all industries, the empirical analysis provides broad support for the first hypothesis. Regressions testing the second hypothesis are inconclusive when industry types are examined individually. This study is nevertheless an exploratory investigation, and future research should test refined hypotheses with more sophisticated data, such as that used in literature-based discovery research.

Open access

Abstract

This study presents a historical overview of the International Conference on Human-Robot Interaction (HRI). It summarizes the conference's growth, internationalization, and patterns of collaboration. Rankings for countries, organizations, and authors are provided. Furthermore, an analysis of the military funding for HRI papers is performed: approximately 20% of the papers are funded by the US military. The proportion of papers from the US is around 65%, and the dominant role of the US is challenged only by the strong position of Japan, in particular through the contributions of ATR.

Open access

Abstract  

This paper presents a methodology to aggregate multidimensional research output. Using a tailored version of the non-parametric Data Envelopment Analysis model, we account for the large heterogeneity in research output and for individual researcher preferences by endogenously weighting the various output dimensions. The approach offers three important advantages over traditional approaches: (1) flexibility in aggregating different research outputs into an overall evaluation score; (2) a reduced impact of measurement errors and atypical observations; and (3) a correction for the influence of a wide variety of factors outside the evaluated researcher's control. As a result, research evaluations represent actual research performance more faithfully. The methodology is illustrated on a data set covering all faculty members at a large polytechnic university in Belgium. The sample includes questionnaire items on the motivation and perception of the researchers. This allows us to explore whether the motivation and background characteristics (such as age, gender, and retention) of the researchers explain variations in measured research performance.
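The endogenous weighting idea can be illustrated with the "benefit of the doubt" DEA formulation, a common special case of the model family the abstract refers to (the paper uses a tailored version, so this is only a sketch with invented data and dimensions). Each researcher receives the output weights most favourable to them, subject to no researcher scoring above 1. For two output dimensions the linear program can be solved by enumerating the vertices of the feasible weight region:

```python
def bod_score(k, outputs, eps=1e-9):
    """Benefit-of-the-doubt score for unit k over two output dimensions.

    Maximizes w . y_k subject to w . y_j <= 1 for every unit j and
    w >= 0, by enumerating candidate vertices of the feasible weight
    region (valid because an LP optimum lies at a vertex)."""
    cands = [(0.0, 0.0)]
    for y1, y2 in outputs:            # where constraint lines meet the axes
        if y1 > eps:
            cands.append((1.0 / y1, 0.0))
        if y2 > eps:
            cands.append((0.0, 1.0 / y2))
    n = len(outputs)
    for i in range(n):                # pairwise constraint intersections
        a1, a2 = outputs[i]
        for j in range(i + 1, n):
            b1, b2 = outputs[j]
            det = a1 * b2 - a2 * b1
            if abs(det) > eps:
                cands.append(((b2 - a2) / det, (a1 - b1) / det))
    feasible = [
        (w1, w2) for w1, w2 in cands
        if w1 >= -eps and w2 >= -eps
        and all(w1 * y1 + w2 * y2 <= 1 + eps for y1, y2 in outputs)
    ]
    y1k, y2k = outputs[k]
    return max(w1 * y1k + w2 * y2k for w1, w2 in feasible)
```

With hypothetical outputs [(4, 1), (1, 4), (2, 2)] (say, normalized publication and teaching scores for three researchers), each specialized researcher scores 1.0 under their own most favourable weights, while the generalist scores 0.8.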

Open access
Scientometrics
Authors: Reindert K. Buter, Ed. C. M. Noyons, and Anthony F. J. Van Raan

Abstract

We define converging research as the emergence of an interdisciplinary research area from fields that did not previously show interdisciplinary connections. This paper presents a process to search for converging research, using journal subject categories as a proxy for fields and citations to measure interdisciplinary connections, as well as an application of this search. The search consists of two phases: a quantitative phase in which pairs of citing and cited fields are located that show a significant change in the number of citations, followed by a qualitative phase in which thematic focus is sought in the publications associated with the located pairs. Applying this search to publications from the Web of Science published between 1995 and 2005, we located 38 candidate converging pairs, 27 of which showed thematic focus; 20 also showed a similar focus in the reciprocal pair.
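The quantitative phase can be sketched as a simple filter over cross-field citation counts in two time windows. The data, threshold, and growth test below are placeholders of my own, not the paper's actual significance test:

```python
def flag_converging_pairs(before, after, min_growth=2.0, min_cites=100):
    """Flag (citing field, cited field) pairs whose citation count grew
    markedly between two periods.

    `before` and `after` map pairs to citation counts; a pair is
    flagged when its recent count is large and at least `min_growth`
    times its earlier count (new pairs count as growth from 1)."""
    flagged = []
    for pair, recent in after.items():
        earlier = before.get(pair, 0)
        if recent >= min_cites and recent >= min_growth * max(earlier, 1):
            flagged.append(pair)
    return sorted(flagged)
```

The flagged pairs would then feed the qualitative phase, in which thematic focus is checked in the publications behind each pair.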

Open access

Abstract  

Collaboration between researchers and between research organizations is generally considered a desirable course of action, in particular by some funding bodies. However, collaboration within a multidisciplinary community, such as the Computer–Human Interaction (CHI) community, can be challenging. We performed a bibliometric analysis of the CHI conference proceedings to determine whether papers whose authors come from different organizations or countries receive more citations than papers authored by members of a single organization. There was no significant difference between these three groups (single organization, multiple organizations, multiple countries), indicating no citation advantage for collaboration. Furthermore, we tested whether papers written by authors from different organizations or countries receive more best-paper awards or at least award nominations. Papers from a single organization did receive significantly fewer nominations than collaborative papers.

Open access

Abstract  

In science, a relatively small pool of researchers garners a disproportionately large number of citations. Still, very little is known about the social characteristics of highly cited scientists. This is unfortunate, as these researchers wield a disproportionate impact on their fields, and the study of highly cited scientists can enhance our understanding of the conditions that foster highly cited work, of the systematic social inequalities that exist in science, and of scientific careers more generally. This study provides information on this understudied subject by examining the social characteristics and opinions of the 0.1% most cited environmental scientists and ecologists. Overall, the social characteristics of these researchers tend to reflect broader patterns of inequality in the global scientific community. However, while their social characteristics mirror those of other scientific elites in important ways, they differ in others, revealing findings that are both novel and surprising, perhaps indicating multiple pathways to becoming highly cited.

Open access

Abstract  

A collection of coauthored papers is the new norm for doctoral dissertations in the natural and biomedical sciences, yet there is no consensus on how to partition authorship credit between PhD candidates and their coauthors. Guidelines for PhD programs vary but tend to specify only a suggested range for the number of papers to be submitted for evaluation, sometimes supplemented with a requirement that the PhD candidate be the principal author on the majority of submitted papers. Here I use harmonic counting to quantify the actual amount of authorship credit attributable to individual PhD graduates from two Scandinavian universities in 2008. Harmonic counting corrects for the inherent inflationary and equalizing biases of routine counting methods, thereby allowing the bibliometrically identifiable amount of authorship credit in approved dissertations to be analyzed with unprecedented accuracy. Unbiased partitioning of authorship credit between graduates and their coauthors provides a post hoc bibliometric measure of current PhD requirements, and sets a de facto baseline for the requisite scientific productivity of these contemporary PhDs at a median value of approximately 1.6 undivided papers per dissertation. Comparison with previous census data suggests that this baseline has shifted over the past two decades as a result of a decrease in the number of submitted papers per candidate and an increase in the number of coauthors per paper. A simple solution to this shifting-baseline syndrome would be to benchmark the amount of unbiased authorship credit deemed necessary for successful completion of a specific PhD program, and then monitor for departures from this level over time. Harmonic partitioning of authorship credit also facilitates cross-disciplinary and inter-institutional analysis of the scientific output of different PhD programs. Juxtaposing bibliometric benchmarks with current baselines may thus assist the development of harmonized guidelines and transparent transnational quality-assurance procedures for doctoral programs, by providing a robust and meaningful standard for further exploration of the causes of intra- and inter-institutional variation in the amount of unbiased authorship credit per dissertation.
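Harmonic counting itself is a simple formula: the author at rank i on an N-author paper receives (1/i) / (1 + 1/2 + ... + 1/N) of the paper's credit, so the shares across all authors sum to one. A minimal sketch (plain rank-based harmonic credit; the study may apply variants for shared first or last authorship):

```python
def harmonic_credit(rank, n_authors):
    """Harmonic share for the author at 1-based position `rank` on a
    paper with `n_authors` authors: (1/rank) / (1 + 1/2 + ... + 1/n)."""
    return (1.0 / rank) / sum(1.0 / j for j in range(1, n_authors + 1))

def graduate_credit(papers):
    """Total harmonic credit for a graduate, given a (rank, n_authors)
    pair for each paper in the dissertation."""
    return sum(harmonic_credit(rank, n) for rank, n in papers)
```

For example, the first author of a three-author paper receives 6/11 ≈ 0.55 of one paper's credit, so a dissertation of three such first-authored papers would amount to roughly 1.6 undivided papers.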

Open access
Scientometrics
Authors: Cathelijn J. F. Waaijer, Cornelis A. van Bochove, and Nees Jan van Eck

Abstract

Bibliometric mapping of scientific articles based on keywords and technical terms in abstracts is now frequently used to chart scientific fields. In contrast, no such mapping has yet been applied to the full texts of non-specialist documents. Editorials in Nature and Science are such non-specialist documents, reflecting the views of the two most widely read scientific journals on science, technology, and policy issues. We use the VOSviewer mapping software to chart the topics of these editorials. A term map and a document map are constructed, and clusters are distinguished in both. The validity of the document clustering is verified by a manual analysis of a sample of the editorials. This analysis confirms the homogeneity of the clusters obtained by mapping and augments them with further detail. As a result, the analysis provides reliable information on the distribution of the editorials over topics and on differences between the journals. The most striking difference is that Nature devotes more attention to internal science policy issues and Science more to the political influence of scientists.

Open access

Abstract  

The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including the Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is, publication in peer-reviewed journals, is still increasing, although there are big differences between fields. There is no indication that the growth rate has decreased in the last 50 years. At the same time, publication through new channels, for example conference proceedings, open archives, and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases, meaning that SCI has been covering a decreasing share of the traditional scientific literature. There are also clear indications that SCI's coverage is especially low in some of the scientific areas with the highest growth rates, including computer science and the engineering sciences. The role of conference proceedings, open access archives, and publications on the web is increasing, especially in scientific fields with high growth rates, but this is only partially reflected in the databases. The new publication channels challenge the use of the big databases for measuring scientific productivity or output and the growth rate of science. Because of this challenge and the declining coverage, it is problematic that SCI has been, and still is, used as the dominant source for science indicators based on publication and citation counts. The limited data available for the social sciences show that the growth rate in SSCI was remarkably low and indicate that SSCI's coverage declined over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI, and the Arts and Humanities Citation Index (AHCI); the declining coverage of these citation databases therefore calls the use of this source into question.

Open access