An analysis of three major problems in the application of bibliometric research performance indicators is presented in three separate sections. In the first section, the influence of field-dependent citation practices is analysed. The results indicate that rankings of publications from different fields, based on citation counts, can be seriously affected by differences in citation characteristics between those fields. If certain assumptions hold, one should expect high (short-term) citation levels in Biochemistry, Cell Biology and Biophysics. Medium citation levels are to be expected in Experimental and Molecular Physics, Physical and Organic Chemistry, Pharmacology and Plant Physiology, and low citation levels in Mathematics, Taxonomy, Pharmacognosy and Inorganic Solid State Chemistry. In the second section, time-dependent factors are studied. It is shown that trend analyses of output and impact based on bibliometric scores can be disturbed by changes in the SCI database and in publication and citation practices. One of the disturbing factors is shown to be the inclusion of so-called Books into the SCI database in 1977. Finally, in the third section, a case is presented which illustrates the consequences of operating on incomplete bibliometric data in the evaluation of scientific performance. A completeness percentage of 99% for publication data is proposed as a standard in evaluations of the performance of small university research groups.
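The field-dependence problem discussed in the first section can be sketched with a toy example: ranking papers by raw citation counts favours high-citation fields, while dividing each count by its field's mean citation rate (one common normalization, not necessarily the one used in the study) puts fields on a more comparable footing. All field names and citation numbers below are invented for illustration.

```python
# Hypothetical illustration of field-dependent citation levels:
# raw citation counts favour heavily-citing fields; normalizing each
# paper's count by its field's mean makes fields more comparable.
papers = [
    {"title": "A", "field": "Biochemistry", "citations": 40},
    {"title": "B", "field": "Biochemistry", "citations": 60},
    {"title": "C", "field": "Mathematics",  "citations": 5},
    {"title": "D", "field": "Mathematics",  "citations": 15},
]

# Mean citation count per field
counts_by_field = {}
for p in papers:
    counts_by_field.setdefault(p["field"], []).append(p["citations"])
field_means = {f: sum(c) / len(c) for f, c in counts_by_field.items()}

# Raw ranking: Biochemistry papers dominate simply because that field
# cites more heavily in the short term.
raw = sorted(papers, key=lambda p: p["citations"], reverse=True)

# Field-normalized ranking: each paper is scored relative to its field mean.
normalized = sorted(
    papers,
    key=lambda p: p["citations"] / field_means[p["field"]],
    reverse=True,
)

print([p["title"] for p in raw])         # → ['B', 'A', 'D', 'C']
print([p["title"] for p in normalized])  # → ['D', 'B', 'A', 'C']
```

Under the raw ranking both Biochemistry papers come first; after normalization the Mathematics paper D, which sits well above its field's mean, ranks highest. This is the kind of reversal the abstract warns about when comparing fields on unadjusted citation counts.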