Authors: R. Lindstrom, R. Zeisler, and R. Greenberg
The basic assumptions of activation analysis are that the induced radioactivity is proportional to the amount of analyte,
and that the quantity of radioactivity can be related simply to the number of counts observed. Quantitative measurement of
activity (and of its uncertainty) is not always simple, especially when accuracy better than a few percent is sought. Recent
work with 77Ge and 76As has demonstrated that the accuracy of half-lives in the literature is sometimes insufficient. Despite these and other problems,
quantitative understanding and documentation of uncertainties can be accomplished, providing demonstrable quality assurance
and supporting claims of traceability to the Système International.
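The proportionality between induced activity and analyte amount, and the Poisson statistics of the observed counts, can be sketched in a few lines. This is a minimal illustration of a simple counting model, not the paper's actual uncertainty budget; the function name and all numeric values are hypothetical, and decay during the count is assumed negligible.

```python
import math

def activity_from_counts(net_counts, live_time_s, efficiency, gamma_yield):
    """Estimate activity (Bq) and its Poisson uncertainty from a net peak area.

    Assumes counts follow Poisson statistics and that decay during the
    count is negligible; parameter names are illustrative only.
    """
    rate = net_counts / live_time_s                 # count rate (counts/s)
    activity = rate / (efficiency * gamma_yield)    # decays/s = Bq
    rel_unc = 1.0 / math.sqrt(net_counts)           # relative Poisson uncertainty
    return activity, activity * rel_unc

# Illustrative numbers: 10 000 net counts in 1000 s of live time.
a, ua = activity_from_counts(net_counts=10_000, live_time_s=1000.0,
                             efficiency=0.05, gamma_yield=0.85)
```

Real work at the sub-percent level must fold in efficiency, half-life, and timing uncertainties as well, which is precisely where inaccurate literature half-lives become a limiting factor.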
Four different approaches to PIXE data obtained in repeated measurements on thick standards have been evaluated in terms of precision and accuracy. Both precision and accuracy were found to be best when determinations relative to an external standard were normalized to an assumed composition of 100% oxides.
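The oxide normalization described above can be sketched as follows: element concentrations are converted to their oxide equivalents and rescaled so that the oxide sum equals 100%. The conversion factors and concentrations below are illustrative placeholders, not values from the study.

```python
# Element-to-oxide mass conversion factors (illustrative subset).
OXIDE_FACTOR = {"Si": 2.139, "Al": 1.889, "Fe": 2.690}

def normalize_to_oxides(elem_wt_pct):
    """Scale element wt% so the corresponding oxide sum equals 100%."""
    oxide_sum = sum(OXIDE_FACTOR[e] * c for e, c in elem_wt_pct.items())
    scale = 100.0 / oxide_sum
    return {e: c * scale for e, c in elem_wt_pct.items()}

# Hypothetical raw results from an external-standard determination.
raw = {"Si": 25.0, "Al": 8.0, "Fe": 5.0}
norm = normalize_to_oxides(raw)
```

Such a closure constraint suppresses common multiplicative errors (e.g., in beam charge or sample positioning), which is one plausible reason the normalized approach performed best.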
Authors: Sabrina Oebel, Alexander Gotschy, Ingo Paetsch, Cosima Jahnke, Sven Plein, Rolf Gebker, Sandra Hamada, Michael Frick, Jochen von Spiczak, Malgorzata Polacin, Frank Enseleit, Nikolaus Marx, Thomas F. Lüscher, Frank Ruschitzka, Sebastian Kozerke, Hatem Alkadhi, and Robert Manka
Studies have shown high sensitivity and diagnostic accuracy of CMR perfusion imaging in single- and multi-center investigations, which also included a direct comparison with single-photon emission computed tomography (SPECT) [5, 6].
This paper describes some highlights from the author’s efforts to improve neutron activation analysis (NAA) detection limits
through development and optimization of radiochemical separations, as well as to improve the overall accuracy of NAA measurements
by identifying, quantifying, and reducing measurement biases and uncertainties. Efforts to demonstrate the metrological basis
of NAA and to establish it as a “Primary Method of Measurement” will be discussed.
The different approaches to monostandard activation analysis are evaluated critically in order to cast them into a common
formulation. The nuclear data relevant to the method, selected and verified by experiment, are presented for general
application. The accuracy of the method for multielement analysis is assessed by comparing the analytical results for
various reference materials obtained by different methods and laboratories with those from monostandard activation analysis.
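The core of a monostandard (single-comparator) determination can be sketched as a ratio of specific count rates scaled by a factor that lumps together the relevant nuclear data. This is a schematic illustration under simplifying assumptions (equal irradiation and counting geometry), not the paper's exact formulation; the function name and all numbers are hypothetical.

```python
def monostandard_concentration(asp_analyte, asp_comparator, k_factor):
    """Mass fraction of the analyte from a single-comparator measurement.

    asp_* are decay-corrected specific count rates (counts/s/g);
    k_factor lumps the nuclear data (cross sections, gamma yields,
    detection efficiencies). Illustrative only.
    """
    return asp_analyte / (asp_comparator * k_factor)

# Hypothetical values: a trace element measured against a comparator.
w = monostandard_concentration(asp_analyte=2.0,
                               asp_comparator=4000.0,
                               k_factor=0.5)   # mass fraction, g/g
```

The method's accuracy thus hinges directly on the quality of the lumped nuclear data, which is why the abstract emphasizes experimental selection and verification of those data.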
The results of elevation measurements obtained by leveling with an optical beam as the straightness reference contain, besides the desired constant height, a variable component caused by refraction. This component has traditionally been treated as an error (random and partly systematic). However, these “errors” arise from physical causes and do not follow statistical regularities, so they cannot be characterized by a mean-square error. Height fluctuations induced by refraction obey physical laws, which makes the classical methods of accuracy evaluation and height-network adjustment flawed.
Gamma-ray spectrometry losses through pulse processing dead time and pile-up are best assayed with an external pulse technique. In this work, the virtual pulse generator technique as implemented commercially with the Westphal loss free counting (LFC) module is set up and tested with four high resolution gamma-ray spectrometers. Dual source calibration and decaying source techniques are used in the evaluation of the accuracy of the correction technique. Results demonstrate the reliability of the LFC with a standardized conventional pulse processing system. The accurate correction during high rate counting, including during rapid decay of short lived activities, has been the basis for highly precise determinations in reference materials studies.
A sequence of neural networks has been applied to high-accuracy regression in 3D, with the data represented in the form z = f(x,y). The first network in the sequence estimates the values of the dependent variable as usual, while the second estimates the error of the first network, the third estimates the error of the second, and so on. Provided the relative error of each network in the sequence is less than 100%, the sum of the estimated values converges to the values to be estimated, so the estimation error can be reduced very significantly and effectively. To illustrate the method, the geoid of Hungary was estimated via an RBF-type network. The computations were carried out with the integrated symbolic-numeric system Mathematica.
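The stage-wise residual scheme can be sketched with simple least-squares RBF models standing in for the networks: each stage fits the residual left by the sum of the previous stages. This is a toy Python/NumPy illustration of the principle (the original work used Mathematica and geoid data); the target surface, center placement, and widths are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_rbf(x, y, centers, width, lam=1e-6):
    """Regularized least-squares Gaussian-RBF fit; returns a predictor."""
    def design(pts):
        d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))
    G = design(x)
    w = np.linalg.solve(G.T @ G + lam * np.eye(len(centers)), G.T @ y)
    return lambda pts: design(pts) @ w

# Illustrative target surface z = f(x, y) on scattered points.
x = rng.uniform(-1, 1, size=(200, 2))
z = np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

centers = rng.uniform(-1, 1, size=(30, 2))
residual, models = z.copy(), []
for _ in range(3):                       # each stage fits the previous error
    f = fit_rbf(x, residual, centers, width=0.5)
    models.append(f)
    residual = residual - f(x)

pred = sum(f(x) for f in models)         # summed estimates of all stages
```

If each stage's relative error is below 100%, the residual norm shrinks geometrically, which is the convergence argument made in the abstract.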
The newly revised ANSI N42.14 [1, 2] has provided analysis software developers with a set of well-defined, consistent, and unbiased procedures designed to evaluate the accuracy and limitations of peak search and peak area analysis programs. This work uses two of the procedures outlined in this standard to evaluate five peak analysis algorithms currently in use in Canberra and Nuclear Data software packages. The first procedure examines a program's behavior as the centroid separation and peak height ratio of a doublet are varied. A previous review of these data [3] demonstrated significant peak area inaccuracies at peak separations at or below 1.5 FWHM. We will discuss improvements made to some of these programs and the impact on the doublet results. The second procedure examines a program's behavior as the Compton continuum beneath a fixed peak area is increased. For the same five algorithms we will discuss the dependence of peak area on Compton continuum and also explore the limits of peak detectability.
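The doublet procedure amounts to generating synthetic spectra in which two Gaussian peaks sit on a continuum with a controlled separation (in FWHM units) and height ratio, then scoring each program's recovered areas. A minimal sketch of such a test-spectrum generator follows; the function name and all parameter values are illustrative, not taken from the standard.

```python
import numpy as np

def synthetic_doublet(channels, centroid, separation_fwhm, height_ratio,
                      fwhm=3.0, continuum=10.0):
    """Two Gaussian peaks on a flat continuum, as used to exercise
    peak-search codes.

    separation_fwhm gives the centroid separation in FWHM units;
    the second peak is scaled by height_ratio. Illustrative values.
    """
    sigma = fwhm / 2.3548                       # FWHM -> standard deviation
    c2 = centroid + separation_fwhm * fwhm      # second-peak centroid
    peak1 = 1000.0 * np.exp(-0.5 * ((channels - centroid) / sigma) ** 2)
    peak2 = height_ratio * 1000.0 * np.exp(-0.5 * ((channels - c2) / sigma) ** 2)
    return continuum + peak1 + peak2

ch = np.arange(100, dtype=float)
# A 1.0-FWHM separation, 2:1 height-ratio doublet -- near the regime
# where earlier reviews found significant area inaccuracies.
spec = synthetic_doublet(ch, centroid=45.0, separation_fwhm=1.0,
                         height_ratio=0.5)
```

Sweeping `separation_fwhm` and `height_ratio` over a grid, and comparing fitted to true areas, reproduces the essence of the first procedure; raising `continuum` against a fixed peak area mirrors the second.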
Errors in preparing standards, especially multielement standards, are extremely important if accurate results are desired from neutron activation analysis (NAA). It is often convenient to prepare standards for NAA from single- or multi-element solutions which are then deposited onto (or into) a suitable matrix, such as filter paper or quartz vials. There are many potential sources of error in preparing single-element standards, including: impurities and non-stoichiometric composition of the element or compound used to prepare the standard solutions; evaporative losses of solvent; inaccuracy of calibration and imprecision of the pipettes used; moisture content of elements or compounds used; contamination from reagents, equipment, the laboratory environment, or the final matrix of the standard; instability of standard solutions (i.e., losses via precipitation or adsorption); and losses of volatile elements during dissolution and/or irradiation. Additional sources of error in preparing multielement standards include instability of mixed, multielement solutions and cross-contamination of one element by the addition of a second element. Procedures previously used by the author at NIST to prepare multielement standards with concentrations accurate to about one percent are described. Additional techniques needed to prepare multielement standards with accuracies better than 1 percent will be discussed.
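When the individual error sources listed above are independent, their relative contributions combine in quadrature in a GUM-style uncertainty budget for the prepared standard. The sketch below illustrates that combination; the component names and magnitudes are hypothetical placeholders, not NIST figures.

```python
import math

def combined_rel_uncertainty(components_pct):
    """Combine independent relative uncertainty components (in %) in
    quadrature, as in a GUM-style budget for a prepared standard.

    Component values are illustrative placeholders.
    """
    return math.sqrt(sum(u * u for u in components_pct))

# Hypothetical budget for a single-element standard solution (all in %).
budget = {"stoichiometry": 0.2, "pipetting": 0.3,
          "moisture": 0.1, "solution stability": 0.2}
u_total = combined_rel_uncertainty(budget.values())
```

Such a budget makes clear that reaching sub-percent accuracy requires controlling every component to a few tenths of a percent, since no single dominant term can simply be eliminated.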