Nine intercomparisons on the determination of mercury present as an impurity in different materials have been organised. Twenty-four laboratories
in 10 different countries took part, mainly using activation analysis and atomic absorption techniques. The results show that
the reliability of the data produced and reported in this area leaves much to be desired, and that continuous control of the
quality of mercury analysis in the extremely low concentration range is essential.
We have developed a novel enrichment apparatus for environmental tritium analysis called SPET (Solid Polymer Electrolysis
for Tritium Water). It generates no explosive gas, requires no aqueous electrolyte, terminates enrichment rapidly, and its
volume reduction is, in principle, unlimited. An automatic shutdown system gives uniform conditions on every run, making the
handling and determination of tritium concentrations straightforward. The reliability of SPET was studied using standard water
at environmental tritium concentration, and the reproducibility error was found to be within 4%, which is sufficient for environmental measurements.
The immune system found in biological organisms at higher evolutionary levels is a distributed, multilayered system that is robust and able to identify infectious pathogens, injury, disease, and other harmful effects. Properties and abilities of this kind, such as self-healing and survivability, would therefore be advantageous in many mechatronic applications, which often impose demanding robustness and reliability requirements. Motivated by these observations, this paper focuses on the modeling and simulation of artificial embryonic structures, with the aim of developing VLSI hardware architectures able to imitate the operation of cells and organisms with robustness similar to that of their biological counterparts in nature. The implementation of self-healing algorithms and artificial immune properties is investigated and tested on the developed models. The theoretical and simulation approaches presented were validated on an FPGA-based embryonic network architecture (an embryonic machine), built to implement in silicon the fault-tolerance and survival properties of living organisms.
There are different criteria for designing a geodetic network in an optimal way. An optimal network can be regarded as one having high precision, high reliability, and low cost. Accordingly, a single-objective model can be defined for each of these criteria, with the two remaining criteria imposed as constraints. Sometimes the constraints are contradictory, so that some of them are violated. In this contribution, these models are reviewed mathematically, and it is shown numerically, using a simulated network, how to prepare them for the optimization process. We found that the reliability model yields only small position changes relative to those obtained with the precision model. The precision and cost models may eliminate some observations, whereas the reliability model tends to preserve the number of observations. In our numerical studies, no contradictions arose in the reliability model, which therefore appears more suitable for the design of geodetic and deformation networks.
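The reliability criterion referred to above is classically quantified, in the Baarda tradition, through the redundancy numbers of the observations. The following is an illustrative sketch only, not the specific models of this paper; the toy design matrix and weights are hypothetical:

```python
import numpy as np

# Baarda-style internal reliability: redundancy numbers are the diagonal
# of R = I - A (A^T P A)^{-1} A^T P for design matrix A and weight
# matrix P. A high r_i means observation i is well checked by the others.

def redundancy_numbers(A, P):
    """Redundancy numbers of the observations (A and P assumed full rank)."""
    N = A.T @ P @ A                                    # normal matrix
    R = np.eye(A.shape[0]) - A @ np.linalg.solve(N, A.T @ P)
    return np.diag(R)

# Hypothetical levelling-style network: 4 observations, 2 unknown heights.
A = np.array([[ 1.0, 0.0],
              [ 0.0, 1.0],
              [-1.0, 1.0],
              [ 1.0, 0.0]])
P = np.eye(4)                                          # equal weights

r = redundancy_numbers(A, P)
print(r, r.sum())   # the redundancies sum to n - u = 4 - 2 = 2
```

Observations with very low redundancy numbers are poorly controlled; a reliability-oriented design would avoid eliminating them, which is consistent with the behaviour of the reliability model described above.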
A method for the rapid evaluation of 90Sr in environmental matrices has been developed. The procedure takes one day, compared with the 20–25 d required by traditional methods. It is based on a partial measurement of the sample activity and on data extrapolation. This work reports the methodology and its reliability.
The aim of this research work is to analyse, within the Eurocode (EC) framework, the dimensioning of the vertical load-bearing structures of buildings. First, the bearing resistance of spread foundations is analysed. The goal is to evaluate the EC-7 method in the light of the safety demanded by the current standard and by the reliability method. The present work describes how the latter can be determined.
This paper examines the peer review procedure of a national science funding organization (Swiss National Science Foundation)
by means of the three most frequently studied criteria: reliability, fairness, and validity. The analyzed data consist of
496 applications for project-based funding from biology and medicine from the year 1998. Overall reliability is found to be
fair with an intraclass correlation coefficient of 0.41 with sizeable differences between biology (0.45) and medicine (0.20).
Multiple logistic regression models reveal only scientific performance indicators as significant predictors of the funding
decision while all potential sources of bias (gender, age, nationality, and academic status of the applicant, requested amount
of funding, and institutional surrounding) are non-significant predictors. Bibliometric analysis provides evidence that the
decisions of a public funding organization for basic project-based research are in line with the future publication success
of applicants. The paper also argues for an expansion of approaches and methodologies in peer review research by increasingly
focusing on process rather than outcome and by including a more diverse set of methods, e.g. content analysis. Such an expansion
will be necessary to advance peer review research beyond the abundantly treated questions of reliability, fairness, and validity.
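The intraclass correlation coefficient reported above can be computed in several ways; a minimal sketch of the one-way random-effects form ICC(1,1), a common choice for inter-reviewer agreement, is shown below on synthetic scores (not the SNSF application data):

```python
import numpy as np

# One-way random-effects intraclass correlation:
#   ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW)
# where MSB/MSW are the between-target and within-target mean squares.

def icc_oneway(ratings):
    """ratings: (n_targets, k_raters) array of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)               # between
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical scores: 5 proposals, each rated by 3 reviewers on a 1-6 scale.
scores = np.array([[5, 5, 4],
                   [2, 3, 2],
                   [4, 4, 5],
                   [1, 2, 2],
                   [6, 5, 6]])
print(round(icc_oneway(scores), 2))   # → 0.89
```

The synthetic raters above agree closely, hence the high value; the 0.41 reported for the real data indicates substantially weaker agreement.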
A statistical model for citation processes, a particular version of a non-homogeneous birth process, is analysed in the context
of predictions of future citation rates. Important properties of the process were already studied by the author in earlier
papers. Although the applicability of the model was demonstrated by several examples, practical aspects of predictions and
questions of statistical reliability had not been tackled so far. The present study focuses on demonstrating the possibility
of true predictions and on analysing the statistical reliability of predictions based on the mean value function E(X(t) − X(s) | X(s) = i) of citation processes. The citation rates for papers published in 1980 and 1991 were recorded in the periods 1980 through
1995, and 1991 through 1995, respectively, in all science areas. It is shown that parameters of mean value functions estimated
for earlier time periods can be applied to more recent years, too. As a by-product, the model may serve as a validation tool
for the particular choice of citation windows in evaluation studies.
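Independently of the parametric birth-process model, the conditional mean E(X(t) − X(s) | X(s) = i) can be estimated empirically by grouping papers on their citation count at time s. A minimal sketch with made-up citation counts (the years and data below are hypothetical, chosen only to illustrate the estimator):

```python
from collections import defaultdict

# Empirical estimate of the conditional mean increment
# E(X(t) - X(s) | X(s) = i): group papers by their citation count i at
# time s and average the later increment within each group.

def mean_increment(counts_s, counts_t):
    """counts_s[j], counts_t[j]: citations of paper j at times s and t."""
    groups = defaultdict(list)
    for cs, ct in zip(counts_s, counts_t):
        groups[cs].append(ct - cs)
    return {i: sum(v) / len(v) for i, v in sorted(groups.items())}

counts_1991 = [0, 0, 1, 1, 2, 2, 5, 5]     # X(s), hypothetical
counts_1995 = [1, 0, 3, 2, 5, 4, 11, 9]    # X(t), hypothetical

print(mean_increment(counts_1991, counts_1995))
# → {0: 0.5, 1: 1.5, 2: 2.5, 5: 5.0}
```

Such empirical conditional means are what the fitted mean value function must reproduce when parameters estimated on earlier periods are applied to later ones.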
Authors: Luca Laura Kummer, Jan Govaere, and Borisz Egri
de Bruijn, C. M., Wensing, T. and van Nieuwstadt, R. A. (2003): Reliability of the glutaraldehyde test to measure gamma-globulin levels in foals and the use of this test to check colostrum intake of foals. Tijdschr. Diergeneeskd. 8
According to the definition of the reliability-based citation impact factor (R-impact factor) proposed by Kuo & Rupe and the cumulative
citation age distribution model, a mathematical expression of the relationship between R-impact factor and impact factor is
established in this paper. By simulation of the change processes of the R-impact factor and impact factor in the manipulation
process of the impact factor, it is found that the effect of manipulation can be partly corrected by the R-impact factor in
some cases. Based on the Journal Citation Reports database, impact factors of 4 normal journals and 4 manipulated journals
were collected. The journals’ R-impact factors and self-cited rates in the previous two years were calculated for each year
during the period 2000 to 2007, and the various characteristics influenced by the manipulation were analyzed. We find that the
R-impact factor is fairer than the impact factor for journals with relatively short cited half-lives. Finally,
some issues about using the R-impact factor as a measure for evaluating scientific journals are discussed.
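The abstract does not reproduce Kuo & Rupe's exact R-impact-factor formula, so the sketch below shows only the two standard ingredients the analysis works with: the classic two-year impact factor and the self-cited rate. The journal figures are hypothetical:

```python
# Two standard journal indicators; the R-impact-factor formula itself
# is not reproduced here.

def impact_factor(cites_to_prev2, items_prev2):
    """Classic 2-year impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return cites_to_prev2 / items_prev2

def self_cited_rate(self_cites, total_cites):
    """Share of a journal's received citations that come from the
    journal itself; a high value can signal manipulation."""
    return self_cites / total_cites

# Hypothetical journal: 300 citations in 2007 to its 2005-2006 papers,
# 120 citable items in 2005-2006, 90 of the citations self-citations.
print(impact_factor(300, 120))    # → 2.5
print(self_cited_rate(90, 300))   # → 0.3
```

Excessive self-citation inflates the numerator of the impact factor without reflecting outside use, which is the kind of manipulation the R-impact factor is argued to partly correct.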