Lindsey recently examined the precision of the manuscript review process using a stochastic model. The study reported that the low reliability found by previous studies leads journals to publish a large number of papers that should have been rejected and to reject an equally large number of papers that should have been accepted. Hargens and Herting have criticized this view, and this paper addresses their criticisms. It includes an examination of sociology journals using impact scores and notes the differences between journals. Part of the variation among sociology journals derives from their editorial operations, central to which is the reviewing of manuscripts for publication. Not all journals perform this task equally well. The consequences of poor editorial management are discussed. To improve the quality of published work, journals need to reduce the low reliability of the current manuscript review process.
The credibility of the publication system in science is determined in large part by the precision of the manuscript review process. Studies on the precision of the review process in scientific journals have reported conflicting results. This paper reviews those studies and re-examines the data reported. The findings indicate that highly selective decision-making with imprecise reviewers results in outcomes that are only slightly better than chance.
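The effect described above can be made concrete with a small Monte Carlo sketch. This is not the paper's actual stochastic model; the reliability, acceptance rate, and sample size below are illustrative assumptions only.

```python
# Illustrative simulation (assumed parameters, not the paper's model): how low
# reviewer reliability degrades a highly selective accept/reject decision.
import numpy as np

rng = np.random.default_rng(0)
n_papers = 100_000
reliability = 0.25        # assumed variance share of the score explained by true merit
accept_rate = 0.10        # assumed fraction of submissions accepted

quality = rng.standard_normal(n_papers)                    # latent "true" merit
noise = rng.standard_normal(n_papers)                      # reviewer error
score = np.sqrt(reliability) * quality + np.sqrt(1 - reliability) * noise

accepted = score >= np.quantile(score, 1 - accept_rate)
deserving = quality >= np.quantile(quality, 1 - accept_rate)

precision = (accepted & deserving).mean() / accepted.mean()
print(f"share of accepted papers in the true top {accept_rate:.0%}: {precision:.2f}")
print(f"chance baseline: {accept_rate:.2f}")
```

With these assumed values the accepted set contains substantially more deserving papers than chance would, but far fewer than a perfectly reliable process, which is the pattern the abstract describes.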
In this paper we investigate the retrieval capabilities of six Internet search engines on a simple query. As a case study, the query "Erdos" was chosen. Paul Erdos was a world-famous Hungarian mathematician who passed away in September 1996. Existing work on search engine evaluation considers only the first ten or twenty results returned by the search engine, and therefore estimation of the engines' recall has not been attempted so far. In this work we retrieved all 6681 documents that the search engines pointed at and thoroughly examined them. Thus we could calculate the precision of the whole retrieval process, study the overlap between the results of the engines, and give an estimate of the recall of the searches. The precision of the engines is high, recall is very low, and the overlap is minimal.
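A sketch of how such measurements can be computed from full result sets follows. This is not the authors' code; the engine names, URLs, and relevance judgements are hypothetical, and the pooled-recall definition is one common assumption, not necessarily the estimate used in the study.

```python
# Illustrative sketch: given the sets of URLs returned by each engine and a set
# of manually judged relevant URLs, compute precision, pairwise overlap, and a
# pooled recall estimate. All data below is hypothetical.
from itertools import combinations

retrieved = {                       # hypothetical results per engine
    "engine_a": {"u1", "u2", "u3"},
    "engine_b": {"u2", "u4"},
}
relevant = {"u1", "u2", "u4"}       # hypothetical relevance judgements

for name, docs in retrieved.items():
    print(name, "precision:", len(docs & relevant) / len(docs))

# Overlap: Jaccard similarity between each pair of engines.
for (a, da), (b, db) in combinations(retrieved.items(), 2):
    print(a, b, "overlap:", len(da & db) / len(da | db))

# Pooled recall: treat the union of all relevant retrieved documents as an
# approximation of everything there is to find.
pooled = set.union(*retrieved.values()) & relevant
for name, docs in retrieved.items():
    print(name, "recall (pooled):", len(docs & relevant) / len(pooled))
```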
Authors: Morteza Maghrebi, Ali Abbasi, Saeid Amiri, Reza Monsefi, and Ahad Harati
…an explicit definition is suggested. Then, through a case study of articles retrieved by the previous LQs, a collective and abridged lexical query (CALQ) is proposed. It will be shown that CALQ has both high precision and recall…
Authors: Thomas Gurney, Edwin Horlings, and Peter van den Besselaar
…corpus of work, with an optimal balance between precision and recall when querying the larger dataset in which their corpus resides. This is especially important where bibliometrics is used for evaluation purposes. The most common problem encountered is…
Authors: Jian Wang, Kaspars Berzins, Diana Hicks, Julia Melkers, Fang Xiao, and Diogo Pinheiro
…Applying the boosted trees algorithm to the papers of low-false-rate authors (i.e. 3,862 papers by 91 authors), a one-run experiment achieves a testing misclassification error of 0.55%, a testing recall of 99.84%, and a testing precision of 99.60%. The testing…
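The kind of evaluation quoted in this snippet can be sketched as follows. This is not the study's pipeline or data; the synthetic dataset and model settings are assumptions, shown only to make the reported metrics concrete.

```python
# Minimal sketch (assumed synthetic data, not the study's): train a
# boosted-trees classifier and report misclassification error, recall, and
# precision on a held-out test set.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(f"misclassification error: {1 - accuracy_score(y_te, pred):.2%}")
print(f"recall: {recall_score(y_te, pred):.2%}")
print(f"precision: {precision_score(y_te, pred):.2%}")
```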
Distribution and statistical assumptions
Non-normal distribution (Weale et al. 2004). Journals cannot be ranked with great precision (Greenwood 2007). No statistics to inform significance (Leydesdorff and Opthof 2010).
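The non-normality point can be illustrated with a quick check. The lognormal citation counts below are an assumption chosen only because citation data is typically right-skewed; this is not data from the cited studies.

```python
# Illustrative check (assumed skewed data): citation counts per journal are
# typically far from normal, so normal-theory rankings and significance
# statements are fragile.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
citations = rng.lognormal(mean=1.5, sigma=1.0, size=500).round()  # skewed counts

print("mean:", citations.mean(), "median:", np.median(citations))
print("skewness:", stats.skew(citations))
print("Shapiro-Wilk p-value:", stats.shapiro(citations).pvalue)  # tiny => non-normal
```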
Authors: Abraão D. C. Nascimento, Kássio F. Silva, Gauss M. Cordeiro, Morad Alizadeh, Haitham M. Yousof, and G. G. Hamedani
We study some mathematical properties of a new generator of continuous distributions called the Odd Nadarajah-Haghighi (ONH) family. In particular, three special models in this family are investigated, namely the ONH gamma, beta and Weibull distributions. The family density function is given as a linear combination of exponentiated densities. Further, we propose a bivariate extension and various characterization results of the new family. We determine the maximum likelihood estimates of ONH parameters for complete and censored data. We provide a simulation study to verify the precision of these estimates. We illustrate the performance of the new family by means of a real data set.
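A simulation study for MLE precision of the kind the abstract mentions can be sketched generically. The ONH density itself is not reproduced here; a Weibull model stands in for an ONH special case, and the sample size, replications, and true parameter are assumptions.

```python
# Generic sketch of an MLE precision simulation (Weibull stand-in, assumed
# parameters; not the ONH family itself): repeatedly simulate, fit, and
# summarize bias and MSE of the shape estimate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_shape, n, reps = 1.8, 200, 500
estimates = []
for _ in range(reps):
    sample = stats.weibull_min.rvs(true_shape, size=n, random_state=rng)
    shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(sample, floc=0)
    estimates.append(shape_hat)

estimates = np.array(estimates)
print("bias:", estimates.mean() - true_shape)
print("MSE:", ((estimates - true_shape) ** 2).mean())
```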
Authors: Jean-Charles Lamirel, Claire Francois, Shadi Al Shehabi, and Martial Hoffmann
The information analysis process includes a cluster analysis or classification step associated with an expert validation of the results. In this paper, we propose new measures of Recall/Precision for estimating the quality of cluster analysis. These measures derive both from Galois lattice theory and from the Information Retrieval (IR) domain. As opposed to classical measures of inertia, they have the main advantage of being independent both of the classification method and of the difference between the intrinsic dimension of the data and that of the clusters. We present two experiments on the basis of the MultiSOM model, an extension of Kohonen's SOM model, as a cluster analysis method. Our first experiment on patent data shows how our measures can be used to compare viewpoint-oriented classification methods, such as MultiSOM, with a global cluster analysis method, such as WebSOM. Our second experiment, which is part of the EICSTES EEC project, is an original Webometrics experiment that combines content and link classification starting from a large non-homogeneous set of web pages. This experiment highlights the fact that break-even points between our different measures of Recall/Precision can be used to determine an optimal number of clusters for web data classification. The contents of the clusters obtained when using different break-even points are compared to determine the quality of the resulting maps.
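The general idea of scoring clusters against their members' descriptors can be illustrated simply. This is not the Galois-lattice-derived measures the paper proposes; the labels and clustering below are hypothetical, and the dominant-label scoring is a deliberately simplified stand-in.

```python
# Simplified illustration (hypothetical data; not the paper's measures):
# score each cluster's precision and recall against its members' labels.
from collections import Counter

clusters = {                              # hypothetical clustering of labeled docs
    "c1": ["patents", "patents", "web"],
    "c2": ["web", "web", "patents", "web"],
}
label_totals = Counter(l for docs in clusters.values() for l in docs)

for name, docs in clusters.items():
    label, hits = Counter(docs).most_common(1)[0]   # dominant label in cluster
    precision = hits / len(docs)                    # share of members matching it
    recall = hits / label_totals[label]             # share of that label captured
    print(f"{name}: label={label} precision={precision:.2f} recall={recall:.2f}")
```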
…topic would be a wide-scale study of the changes in the number and proportion of citations and self-citations from past to more recent years (i.e. the years that are used to compute the Impact Factor).
Vanclay notes that…