To examine whether primary-citation indexing can be taken as an unbiased representation of all-author indexing, we compared the cited first-author counts (straight counts) with the cited all-author counts (complete counts) in two psychology journals over two publication years. Although rather high correlations were found between straight counts and complete counts, the correlations differed between journals of the same discipline, between publication years of the same journal, and with the seniority of cited authors. No effect of alphabetical name ordering was found. The results are discussed against the background of possible weighting procedures for all-author indexing.
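The comparison described above can be illustrated with a small sketch. It uses Pearson's correlation coefficient and entirely hypothetical per-author tallies (the paper's actual data and procedure are not reproduced here); `tallies` and `pearson` are illustrative names, not from the study.

```python
# Illustrative sketch: comparing "straight counts" (citations credited to
# first authors only) with "complete counts" (citations credited to every
# author) via Pearson's r. Data below are invented for demonstration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical tallies: each record is (straight_count, complete_count)
# for one cited author in one journal volume.
tallies = [(12, 15), (3, 9), (0, 4), (7, 8), (1, 6), (20, 22)]
straight = [s for s, _ in tallies]
complete = [c for _, c in tallies]

r = pearson(straight, complete)
print(f"r = {r:.3f}")
```

A high r, as in this toy example, would suggest straight counts track complete counts closely; the paper's point is that the strength of this association varied by journal, year, and author seniority, so a single correlation should not be taken as universal.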
Scientific results of empirical research depend on the methods used. The selection of empirical methods by scientists is not determined solely by the subject of research or by theory; social and historical (in our investigation, national) conditions also affect the application of methods. This hypothesis was corroborated using journals in psychology, psychiatry, and sociology from different countries. The national impact on method preference varies among these disciplines. Conclusions are drawn concerning the generalizability of empirical results across disciplines and across countries.
The theoretical introductions of empirical journal articles were analyzed for factors that determine citation habits. Own-country biases and English-American predominance in citations were not found consistently. The preferred language of the cited publications and the absolute citation frequencies depended on both the discipline and the country in which a journal is published. Relative citation frequencies (citations related to the length of the available text), however, were found to be rather constant across countries (within psychology and psychiatry, respectively), indicating no such dependence.
We investigated three rival hypotheses concerning scientific communication and recognition: the performance hypothesis and two alternative assumptions, the reputation hypothesis and the resource hypothesis. The performance hypothesis reflects the norm of universalism in the sense given by Merton; the reputation hypothesis predicts a Matthew Effect (scientists receive communication and recognition on the basis of their reputation); and the resource hypothesis assumes that communication with other scientists serves as an asset used to defend one's own research results.
Using bibliometric methods, we assessed whether assuming an important scientific position enhances scientific impact and prestige. Specifically, we explored whether a person's assumption of editorship responsibilities for a psychology journal increases the frequency with which that person is cited in the Social Sciences Citation Index. The database consisted of ten psychology journals, seven premier American and three German journals, covering the years 1981 to 1995. Citation rates for the years before, during, and after periods of editorship were compared for three groups: editors cited in the journal they edited, editors cited in a journal they did not edit, and non-editors. During their editorship, editors received an increased citation rate in the journal they edited; this effect was found for American journals but not for German journals. These findings indicate that, for American journals, assuming editorship responsibilities for a major psychology journal increases one's scientific impact, at least as reflected by a measure of citation rate. A careful examination of the ages of the non-editors' citations revealed that the post-editorship citation rates of editors and comparable non-editors do not differ significantly. The reputation hypothesis (Matthew Effect) is therefore preferred for interpreting the results, because it captures the cumulative nature of prestige-oriented citations. The results contradict the convention of treating citation rates as pure performance measures.