A method for the absolute estimation of software quality in doublet deconvolution, based on a system of penalty points, is presented. The method is an effective means both for independent evaluation and intercomparison of programs for doublet processing and for fine tuning of these programs. The application of the suggested method is demonstrated and discussed using IAEA test spectrum No. 400 as an example.
The ASPRO-NUC software is based on new, improved algorithms developed and tested in the laboratory and intended for routine analysis. The package consists of the program ASPRO for gamma-ray spectrum processing (peak search, multiplet deconvolution by the method of moments, computation of correction coefficients for the geometry and material of the radioactive source), a program for isotope identification, and a program for NAA by relative standardization. All output information is loaded into a database (Paradox v.3.5 format) that supports queries, report creation, planning of routine analyses, cost estimation, support of an analytical survey network, etc. The ASPRO-NUC package also includes an extensive nuclear database containing evaluated decay and activation data (reactor, fast-neutron generator, Cf-252 source). The database environment allows easy integration of a gamma spectrometer into a flexible information shell and the creation of a logical system for information management.
A new computing algorithm has been developed to calculate the correction coefficient for a voluminous radiation source. The algorithm is based on the concept of a virtual efficiency center, whose location, depending on the radiation energy, can be computed relative to a point arbitrarily fixed on the detector axis. A scheme for the interaction of the gamma-radiation with the detector is proposed to minimize the number of preliminary experiments needed to determine the empirical functions required by the method. Data are presented for the computation of the linear attenuation coefficient of any material of known chemical composition and density. The validity of the developed approach has been proved by comparison of real and computed radioactivities for 5 different radiation energies in 12 cylindrical source-detector geometries.
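The linear attenuation coefficient of a material of known composition and density is conventionally obtained from the mixture rule, μ = ρ·Σᵢ wᵢ(μ/ρ)ᵢ, where wᵢ are elemental mass fractions and (μ/ρ)ᵢ are elemental mass attenuation coefficients at the energy of interest. A minimal sketch of that rule (the numerical coefficients below are illustrative placeholders, not evaluated data from the paper):

```python
# Sketch: linear attenuation coefficient of a compound via the mixture rule,
#   mu = rho * sum_i w_i * (mu/rho)_i
# where w_i are mass fractions and (mu/rho)_i are elemental mass attenuation
# coefficients at the given gamma energy.

def linear_attenuation(density_g_cm3, mass_fractions, mass_atten_cm2_g):
    """Return the linear attenuation coefficient (cm^-1) of a material.

    density_g_cm3     -- material density in g/cm^3
    mass_fractions    -- {element: mass fraction}, fractions summing to 1
    mass_atten_cm2_g  -- {element: mass attenuation coeff. in cm^2/g}
    """
    mu_over_rho = sum(w * mass_atten_cm2_g[el]
                      for el, w in mass_fractions.items())
    return density_g_cm3 * mu_over_rho

# Example: water, with placeholder elemental coefficients (cm^2/g).
water_fractions = {"H": 0.1119, "O": 0.8881}
coeffs = {"H": 0.1263, "O": 0.0636}  # illustrative values only
mu = linear_attenuation(1.0, water_fractions, coeffs)
```

In practice the elemental (μ/ρ)ᵢ values would be interpolated from tabulated data at each radiation energy of interest.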
New equations have been derived and new analytical procedures developed which allow determinations to be carried out in cases when the conditions of substoichiometric separation are not fulfilled.
To achieve the highest possible sensitivity of analysis for environmental samples it is common practice to use both a high
efficiency detector and a close measurement geometry with a large sample size (e.g. Marinelli beaker). Under such conditions,
the typical efficiency calibration procedure results in a biased activity value for many nuclides due to the true coincidence
summing effect. While there are a few methods to correct for this effect with special calibration standards, such calibrations
can be both time-consuming and expensive. Due to these calibration difficulties, the true coincidence summing effect is often
simply ignored. Recently, it has been demonstrated that the coincidence summing correction can be performed mathematically
even for voluminous sources. This new method consists of an integration of the coincidence correction factor over the sample
volume while taking into account its chemical composition and the container. In this paper, we will discuss the latest approaches
for establishing the peak efficiency and peak-to-total efficiency curves, which are required for this method. These approaches
have been tested for HPGe detectors of two different relative efficiencies.
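One common way to establish a peak-efficiency curve over the calibration points is to fit ln(efficiency) as a low-order polynomial in ln(energy). A minimal sketch of that approach; the calibration energies and efficiencies below are hypothetical illustration data, not measurements from this work:

```python
# Sketch: peak-efficiency curve as a polynomial fit in log-log space,
#   ln(eff) = poly(ln(E)).
import numpy as np

def fit_efficiency_curve(energies_keV, efficiencies, order=2):
    """Fit ln(eff) = poly(ln(E)) and return a callable eff(E)."""
    coeffs = np.polyfit(np.log(energies_keV), np.log(efficiencies), order)
    return lambda e: np.exp(np.polyval(coeffs, np.log(e)))

# Hypothetical HPGe calibration points (energy in keV, absolute efficiency).
energies = np.array([122.0, 344.0, 662.0, 1173.0, 1332.0])
effs = np.array([0.052, 0.031, 0.019, 0.012, 0.011])
eff = fit_efficiency_curve(energies, effs)
```

A peak-to-total curve can be fitted in the same way, using P/T ratios in place of the peak efficiencies.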
The separation of fluoride by extraction with a toluene solution of triphenyltin chloride has been studied. Quantitative isolation of fluoride from solutions over a wide acidity range (pH 4.0–11.5) has been established. It is suggested that interferences by Ca, Mg, Fe, and Al can be avoided by masking these elements with sulfate and hydroxide ions. Interference by phosphate ions can be overcome in a similar fashion. Interfering halides can be masked by mercury nitrate. The detection limit for fluorine determination is about 3 μg for a neutron generator flux of 2·10¹¹ n·cm⁻²·s⁻¹. A method for fluorine assay in water using a neutron generator, with a detection limit of 1 ppm, has been developed.
An electronic database (DB) containing recently evaluated k0 and related data has been developed. The tables composing the DB are relationally linked to support data integrity. The purpose of the DB development is to provide an official source of data for the electronic synchronization of the input parameters needed for the k0 methodology, which is being developed in numerous laboratories. Such a solution saves time when updating, ensures the quality of the primary data and hence of the analysis results, and, through the recording of the updating history, preserves the traceability of the data over time.
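The role of relational linking in protecting data integrity can be sketched with a toy schema: foreign-key constraints guarantee that every k0 value refers to an existing nuclide record, and a timestamp column records the updating history. The table and column names below are hypothetical, not those of the actual DB described here:

```python
# Sketch: relationally linked tables with foreign-key constraints, as one way
# to enforce the integrity of primary k0 data. Hypothetical schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""CREATE TABLE nuclide (
    id INTEGER PRIMARY KEY,
    symbol TEXT NOT NULL UNIQUE)""")
con.execute("""CREATE TABLE k0_value (
    id INTEGER PRIMARY KEY,
    nuclide_id INTEGER NOT NULL REFERENCES nuclide(id),
    energy_keV REAL NOT NULL,
    k0 REAL NOT NULL,
    updated TEXT NOT NULL)""")  # 'updated' records the revision history

con.execute("INSERT INTO nuclide(symbol) VALUES ('Au-198')")
nid = con.execute("SELECT id FROM nuclide WHERE symbol = 'Au-198'").fetchone()[0]
con.execute("INSERT INTO k0_value(nuclide_id, energy_keV, k0, updated)"
            " VALUES (?, 411.8, 1.0, '2002-01-01')", (nid,))

# A k0 entry referencing a nonexistent nuclide is rejected, which is the
# integrity guarantee the relational links provide:
rejected = False
try:
    con.execute("INSERT INTO k0_value(nuclide_id, energy_keV, k0, updated)"
                " VALUES (999, 411.8, 1.0, '2002-01-01')")
except sqlite3.IntegrityError:
    rejected = True
```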
Authors: V. Kolotov, V. Atrashkevich, and S. Gelsema
To analyze a voluminous radioactive source with the highest possible sensitivity, it is necessary to use both a high efficiency detector and an optimal measurement geometry. The optimal geometry implies positioning the source as close to the detector as possible; it also implies selecting the shape of the source so as to reach the highest possible efficiency (e.g., a Marinelli beaker). Under such measurement conditions, true coincidences may cause systematic errors exceeding ten percent for some radionuclides. A method for estimating the effect of these coincidences was developed. It is based on direct computation of the effect by integration of a function involving the experimentally obtained detection efficiency for the space around the detector. It was found that for the tested detector, with a relative efficiency of 15%, the so-called intrinsic peak-to-total calibration may be used in the course of such an integration: it has been shown that the P/T ratio for a given energy in the working space around the detector may be considered constant. Some results from a peak-to-total calibration study in the presence of scattering material are also given.
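The integration over the source volume can be sketched numerically for the simplest case, a two-gamma cascade measured from a cylindrical source, using the constant-P/T finding above to derive total efficiency from peak efficiency. The efficiency model `eps_peak` below is a made-up placeholder; a real application would use the experimentally mapped detection efficiency around the detector:

```python
# Sketch: volume-averaged summing-out correction for a two-gamma cascade over
# a cylindrical source, assuming a constant peak-to-total ratio P/T in the
# working space around the detector (as found for the tested 15% detector).

def summing_correction(eps_peak, pt_ratio, radius_cm, height_cm, n=20):
    """Numerically average the cascade-summing loss over the source volume.

    Returns the factor by which the observed peak area must be multiplied.
    Midpoint rule in cylindrical coordinates (r, z); axial symmetry is
    assumed, so the angular integral drops out.
    """
    num = den = 0.0
    for i in range(n):
        r = (i + 0.5) * radius_cm / n
        for j in range(n):
            z = (j + 0.5) * height_cm / n
            ep = eps_peak(r, z)
            et = ep / pt_ratio           # total efficiency via constant P/T
            w = r                        # Jacobian of cylindrical coordinates
            num += w * ep                # undisturbed peak response
            den += w * ep * (1.0 - et)   # response reduced by summing-out
    return num / den

# Placeholder peak efficiency falling off with distance from the detector.
eps = lambda r, z: 0.05 / (1.0 + 0.1 * (r * r + z * z))
corr = summing_correction(eps, pt_ratio=0.3, radius_cm=3.5, height_cm=4.0)
```

For real cascades the loss term generalizes to the full coincidence scheme of the nuclide, but the structure of the volume integral is the same.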
Authors: M. Koskelo, R. Venkataraman, and V. Kolotov
We have shown that it is sufficiently accurate to use MCNP peak-to-total calibration results to correct for cascade summing effects in a gamma-ray spectrum. It is also sufficient to use only approximate detector characterization data together with an empirical peak-to-total calibration to obtain good cascade summing correction results. The intrinsic P/T curves of detectors with the same efficiency are very similar, so the curve may be considered a common characteristic of the whole family of detectors with a given efficiency.
It has been demonstrated that pixel-by-pixel processing of a series of autoradiography images to reveal the decay dynamics of the induced radionuclides is an efficient approach for mapping radionuclides in a sample in activation autoradiography. The concept of a virtual scanner, with corresponding software for linearization of the dependence of optical density on scanner response (luminosity), has been introduced. The concept unifies the subsequent processing of autoradiograms, irrespective of how the digital image was obtained. Algorithms and software for estimating the decay parameters of a radionuclide mixture for each pixel, using a series of coaxially positioned images, have been developed. The software is able to generate a set of derivative meta-images allowing a conclusion to be drawn about the presence of the inclusions in question. To increase the reliability of radionuclide mapping, it is suggested to analyze the distribution of half-life values estimated for the pixels of image zone(s) selected by a special mask.
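For the simplest case of a single dominant radionuclide per pixel, the per-pixel decay-parameter estimation reduces to a log-linear fit of intensity versus time across the image stack. A minimal sketch under that assumption; the images here are plain numpy arrays of linearized intensity, whereas real data would first need background subtraction and the optical-density linearization described above:

```python
# Sketch: pixel-by-pixel half-life estimation from a time series of
# coaxially aligned (co-registered) autoradiography images, assuming a
# single-exponential decay per pixel: I(t) = I0 * exp(-lambda * t).
import numpy as np

def half_life_map(images, times_h):
    """Fit ln(I) = ln(I0) - lambda*t per pixel; return half-lives in hours."""
    t = np.asarray(times_h, dtype=float)
    stack = np.log(np.stack(images, axis=0))      # shape (n_times, H, W)
    # Closed-form least-squares slope for every pixel at once:
    t_mean = t.mean()
    slope = ((t - t_mean)[:, None, None]
             * (stack - stack.mean(axis=0))).sum(axis=0) \
            / ((t - t_mean) ** 2).sum()
    lam = -slope                                   # decay constants, 1/h
    return np.log(2.0) / lam

# Synthetic example: every pixel decays with a 15 h half-life.
t = [0.0, 5.0, 10.0, 20.0]
imgs = [100.0 * np.exp(-np.log(2.0) / 15.0 * ti) * np.ones((4, 4)) for ti in t]
hl = half_life_map(imgs, t)
```

Analyzing the distribution of the resulting half-life values within a masked zone, as suggested above, then amounts to histogramming `hl` over the masked pixels.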