Search Results
You are looking at 1 - 10 of 19 items for Author or Editor: Judit Bar-Ilan
Abstract
In this paper the reactions of Usenet News users to “mad cow disease” are examined. Thousands of newsgroups on an extremely wide variety of subjects exist, and anyone with access to the Internet can express his or her thoughts freely in this medium. We collected information on the news items relevant to “mad cow disease” for a period of one hundred days, starting very close to the eruption of the crisis. The analysis of the collected information reveals some similarities between the bibliometric characteristics of news items in an electronic medium and those of the printed scientific literature. As far as we know, this is one of the first attempts to systematically apply bibliometric methods to the Internet.
Abstract
Google Scholar and Scopus are recent rivals to Web of Science. In this paper we examined these three citation databases through the citations of the book “Introduction to informetrics” by Leo Egghe and Ronald Rousseau. Scopus citations are comparable to Web of Science citations when limiting the citation period to 1996 and onwards (the citation coverage of Scopus): each database covered about 90% of the citations located by the other. Google Scholar missed about 30% of the citations covered by Scopus and Web of Science (90 citations), but another 108 citations located by Google Scholar were not covered by either Scopus or Web of Science. Google Scholar performed considerably better than reported in previous studies; however, at this point in time it is not very “user-friendly” as a bibliometric data collection tool. Such “microscopic” analysis of the citing documents retrieved by each of the citation databases allows a deeper understanding of the similarities and the differences between the databases.
Abstract
This paper compares the h-indices of a list of highly-cited Israeli researchers based on citation counts retrieved from the Web of Science, Scopus and Google Scholar, respectively. In several cases the results obtained through Google Scholar are considerably different from those based on the Web of Science and Scopus. Data cleansing is discussed extensively.
Abstract
In this paper we examine the applicability of the concept of the h-index to topics, where a topic has index h if h of the publications on the topic received at least h citations each and the rest of the publications on the topic received at most h citations each. We discuss methodological issues related to the computation of the h-index of topics (denoted the h-b index by BANKS [2006]). Data collection for computing the h-b index is much more complex than computing the index for authors, research groups and/or journals, and has several limitations. We demonstrate the methods on a number of informetric topics, among them the h-index itself.
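As a rough illustration of the definition above, the following Python sketch (not part of the original study; the citation counts are hypothetical) computes the h-index of a list of citation counts. The same computation applies whether the counts belong to an author, a journal, or a topic (the h-b index of BANKS [2006]); the difficulty discussed in the paper is collecting reliable citation data for a topic, which the sketch does not address.

def h_index(citation_counts):
    # Largest h such that h of the items received at least h citations each.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts, for illustration only:
print(h_index([25, 17, 12, 9, 8, 5, 3, 1]))  # -> 5: five items have at least 5 citations each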
Abstract
In September 2008 Thomson Reuters added the Conference Proceedings Citation Indexes for Science and for the Social Sciences and Humanities to the ISI Web of Science (WOS). This paper examines how this change affects the publication and citation counts of highly cited computer scientists. Computer science is a field where proceedings are a major publication venue. The results show that most of the highly cited publications of the sampled researchers are journal publications, but these highly cited items receive more than 40% of their citations from proceedings papers. The paper also discusses issues related to double counting, i.e., when a given work is published both in conference proceedings and, later on, as a journal paper.
Abstract
In this paper we investigate the retrieval capabilities of six Internet search engines on a simple query. As a case study the query "Erdos" was chosen. Paul Erdos was a world-famous Hungarian mathematician who passed away in September 1996. Existing work on search engine evaluation considers only the first ten or twenty results returned by the search engine; therefore, the recall of the engines has not been approximated so far. In this work we retrieved all 6681 documents that the search engines pointed at and thoroughly examined them. Thus we could calculate the precision of the whole retrieval process, study the overlap between the results of the engines, and give an estimate of the recall of the searches. The precision of the engines is high, recall is very low, and the overlap is minimal.
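For readers unfamiliar with the measures, the following Python sketch (illustrative only; the result sets and the relevant pool are hypothetical, not data from the study) shows how precision, an estimated relative recall, and pairwise overlap can be computed once every retrieved document has been examined and judged.

def precision(retrieved, relevant_pool):
    # Fraction of the retrieved documents that are relevant.
    return len(retrieved & relevant_pool) / len(retrieved) if retrieved else 0.0

def relative_recall(retrieved, relevant_pool):
    # Recall estimated against the pool of relevant documents found by all engines,
    # since the true set of relevant documents on the Web is unknown.
    return len(retrieved & relevant_pool) / len(relevant_pool) if relevant_pool else 0.0

def overlap(results_a, results_b):
    # Share of documents returned by both engines, relative to all documents
    # returned by either engine (Jaccard-style overlap).
    union = results_a | results_b
    return len(results_a & results_b) / len(union) if union else 0.0

# Hypothetical result sets, identified by URL:
engine_a = {"url1", "url2", "url3", "url4"}
engine_b = {"url3", "url5"}
relevant_pool = {"url1", "url3", "url4", "url5"}

print(precision(engine_a, relevant_pool))        # 0.75
print(relative_recall(engine_a, relevant_pool))  # 0.75
print(overlap(engine_a, engine_b))               # 0.2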
Abstract
In this study we carried out a content analysis of Web pages containing the search term "S&T indicators", which were located by an extensive search of the Web. Our results clearly show that the Web is a valuable information source on this topic. Major national and international institutions and organizations publish the full text of their reports on the Web, or allow free downloading of these reports in non-HTML formats. In addition to direct information, a number of pages listing and linking to major reports, programs and organizations were also located.
Abstract
We present different methods of data collection from the Web for informetric purposes. For each method, some studies utilizing it are reviewed, and the advantages and shortcomings of each technique are discussed. The paper emphasizes that data collection must be carried out with great care. Since the Web changes constantly, the findings of any study are valid only in the time frame in which it was carried out, and they depend on the quality of the data collection tools, which are usually not under the control of the researcher. At the current time, the quality and reliability of most of the available search tools are not satisfactory; thus, informetric analyses of the Web mainly serve as demonstrations of the applicability of informetric methods to this medium, and not as a means for obtaining definitive conclusions. A possible solution is for the scientific world to develop its own search and data collection tools.
Abstract
Paul Erdos was a world-famous Hungarian mathematician who passed away in September 1996. Documents on the World Wide Web mentioning Paul Erdos's name were systematically collected. These documents were categorized using the method of content analysis. This work enables us to draw some conclusions about the ways authors of Internet documents picture Paul Erdos. This is the first work we know of that thoroughly examines the content of a huge collection of documents on a specific topic on the Internet.
Abstract
Link analysis has proved to be very fruitful on the Web; Google's very successful ranking algorithm is based on link analysis. There are only a few studies that have analyzed links qualitatively; most studies are quantitative. Our purpose was to characterize these links in order to gain a better understanding of why links are created. We limited the study to the academic environment, and as a specific case we chose to characterize the interlinkage between the eight Israeli universities.