Search Results

You are looking at 1–6 of 6 items for:

  • Author or Editor: L. Sjöberg
  • Earth and Environmental Sciences

In precise geoid modelling the combination of terrestrial gravity data and an Earth Gravitational Model (EGM) is standard. The proper combination of these data sets is of great importance, and spectral combination is the alternative utilized here. In this method data from satellite gravity gradiometry (SGG), terrestrial gravity and an EGM are combined in a least squares sense by minimizing the expected global mean square error. The spectral filtering process also allows the SGG data to be downward continued to the Earth’s surface without solving a system of equations, which is likely to be ill-conditioned. Each practical formula is presented as a combination of one or two integral formulas and the harmonic series of the EGM. Numerical studies show that the kernels of the integral part of the geoid and gravity anomaly estimators approach zero at a spherical distance of about 5°. Also shown (by the expected root mean square errors) is the necessity of combining EGM08 with local data, such as terrestrial gravimetric data and/or SGG data, to attain 1-cm accuracy in local geoid determination.
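
A minimal sketch of the spectral combination idea, under the simplifying assumption of unbiased estimators with mutually uncorrelated errors per spherical harmonic degree: minimizing the expected global mean square error then gives weights proportional to the inverse error degree variances. The function name and all variance numbers below are invented for illustration and are not taken from the paper.

    import numpy as np

    # Illustrative only: per spherical-harmonic degree n, each data source i
    # contributes an estimate with error degree variance sigma2[i, n].  Under
    # the assumption of unbiased, uncorrelated errors, the least-squares
    # weights are proportional to the inverse variances (normalized per degree).
    def spectral_weights(error_degree_variances):
        """error_degree_variances: array of shape (n_sources, n_degrees)."""
        inv = 1.0 / np.asarray(error_degree_variances)
        return inv / inv.sum(axis=0)

    # Fabricated error degree variances for an EGM, terrestrial gravity and SGG.
    sigma2 = np.array([
        [0.1, 0.2, 0.8, 2.0],   # EGM: accurate at low degrees
        [1.5, 1.0, 0.6, 0.4],   # terrestrial gravity: better at high degrees
        [0.5, 0.4, 0.7, 1.5],   # SGG
    ])
    print(spectral_weights(sigma2))   # per-degree weights for each source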

Restricted access

The modification of Stokes’ formula allows the user to compensate for the lack of global coverage of gravity data by a combination of terrestrial gravity and a global geopotential model. The minimization of the errors of truncation, gravity data and potential coefficients can be treated in a least-squares sense, which is the basic ingredient of the Royal Institute of Technology (KTH) approach proposed by Sjöberg in 1984. This article presents the results from a joint project between KTH and the National Land Survey of Sweden, whose main purpose is to evaluate the KTH approach numerically and to compute a gravimetric geoid model for Sweden. The new geoid model (KTH06) was computed based on the least-squares modification of Stokes’ formula, the GRACE global geopotential model, a high-resolution digital terrain model and the NKG gravity anomaly database. The KTH06 was fitted to 1162 GPS/levelling points by a 7-parameter transformation, yielding an overall fit of 19 mm and 0.17 ppm. The fit is even smaller than the estimated internal accuracy of the geoid model (28 mm). If we assume that the accuracies of the GPS and levelling heights are 10 mm and 5 mm, respectively, it follows that the accuracy of the gravimetric geoid heights is of the order of 11 mm. Also, we found a significant difference between the KTH06 and NKG2004 models in rough topographic areas (up to 36 cm). As the major ground data and global geopotential model were almost the same in the two models, we believe that several factors come into play in interpreting the discrepancies between them: the method for eliminating outliers from the gravity database, the densification of gravity observations by interpolation with the high-resolution digital elevation model before Stokes’ integration, the potential of the LSM kernel to match the errors of the terrestrial gravity data, the GGM and the truncation error in an optimum way, and the effect of applying more precise correction terms in the KTH approach compared to the remove-compute-restore method. It is concluded that the least-squares modification method with additive corrections is a very promising alternative for geoid computation.
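
A rough sketch of fitting a gravimetric geoid to GPS/levelling points by least squares. For brevity the classical 4-parameter corrector surface is shown; the 7-parameter model used for KTH06 contains additional terms that are not reproduced here, and all coordinates and geoid differences below are fabricated.

    import numpy as np

    # Illustrative only: fit a parametric corrector surface to the differences
    # dN = N_GPS/levelling - N_gravimetric at the fit points (metres).
    def fit_corrector_surface(lat_deg, lon_deg, dN):
        phi, lam = np.radians(lat_deg), np.radians(lon_deg)
        A = np.column_stack([
            np.ones_like(phi),           # bias
            np.cos(phi) * np.cos(lam),   # datum-shift / tilt terms
            np.cos(phi) * np.sin(lam),
            np.sin(phi),
        ])
        x, *_ = np.linalg.lstsq(A, dN, rcond=None)
        return x, dN - A @ x             # parameters and residuals

    # Fabricated points; for KTH06 the RMS of such residuals over 1162
    # GPS/levelling points is the quoted 19 mm fit.
    lat = np.array([59.3, 60.1, 62.4, 63.8, 65.6, 67.9])
    lon = np.array([18.1, 15.6, 17.2, 20.3, 22.2, 21.0])
    dN  = np.array([0.021, 0.018, 0.005, -0.015, 0.009, -0.012])
    x, res = fit_corrector_surface(lat, lon, dN)
    print("parameters:", x, "RMS fit (m):", np.sqrt(np.mean(res**2)))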

Restricted access

There are numerous methods to modify Stokes’ formula, usually with the common feature of reducing the truncation error committed by the lack of gravity data in the far zone, resulting in an integral formula over the near zone combined with an Earth Gravity Model that mainly contributes the long-wavelength information. Here we study the reverse problem, namely to estimate the geoid height with data missing in a cap around the computation point but available in the far zone outside the cap. Secondly, we also study the problem of gravity data available only in a spherical ring around the computation point. In both cases the modified Stokes formulas are derived using Molodensky and least squares types of solutions. The numerical studies show that the Molodensky type of modification is useless, while the least squares type efficiently suppresses the various errors contributing to the geoid error. The least squares methods can therefore be used for estimating geoid heights in regions with gravity data gaps, such as polar regions, great lakes and some developing countries lacking gravity data.
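
For reference, the sketch below evaluates the original Stokes kernel and a generically modified kernel of the form S^L(psi) = S(psi) - sum_{n=2}^{L} (2n+1)/2 * s_n * P_n(cos psi). The Wong-Gore choice s_n = 2/(n-1) is used purely as an example; the least-squares modifications studied in the paper determine the s_n from the error degree variances of the data, which is not shown here.

    import numpy as np
    from scipy.special import eval_legendre

    def stokes_kernel(psi):
        """Closed-form Stokes kernel S(psi), psi in radians (psi > 0)."""
        s = np.sin(psi / 2.0)
        return (1.0 / s - 6.0 * s + 1.0 - 5.0 * np.cos(psi)
                - 3.0 * np.cos(psi) * np.log(s + s ** 2))

    def modified_stokes_kernel(psi, s_n, L):
        """S^L(psi) = S(psi) - sum_{n=2}^{L} (2n+1)/2 * s_n[n] * P_n(cos psi)."""
        t = np.cos(psi)
        corr = sum((2 * n + 1) / 2.0 * s_n[n] * eval_legendre(n, t)
                   for n in range(2, L + 1))
        return stokes_kernel(psi) - corr

    # Example: Wong-Gore-type parameters up to L = 20, which simply remove the
    # low-degree part of the kernel (a least-squares modification would instead
    # solve for s_n from the error degree variances of the data).
    L = 20
    s_n = {n: 2.0 / (n - 1) for n in range(2, L + 1)}
    psi = np.radians([0.5, 1.0, 2.0, 5.0])
    print(modified_stokes_kernel(psi, s_n, L))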

Restricted access

The Moho depth can be determined using seismic and/or gravimetric methods. These methods will not yield the same result, as they are based on different hypotheses as well as different types, qualities and distributions of data. Here we present a new global model for the Moho computed by a stochastic combination of seismic and gravimetric Moho models. This method employs condition equations in the spectral domain for the seismic and gravimetric models as well as degree-order variance component estimation to optimally weight the corresponding harmonics in the combination. The preliminary data for the modelling are the seismic model CRUST2.0 and a new gravimetric Moho model based on the inverse solution of the Vening Meinesz-Moritz isostatic hypothesis and the global Earth Gravitational Model EGM08. Numerical results show that this stochastic combination agrees better with the seismic Moho model (3.6 km rms difference) than with the gravimetric one. The model should be a candidate for densifying the often sparse data of CRUST2.0. We expect that this way of combining seismic and gravimetric data would be even more fruitful in a regional study.
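
A simplified sketch of the combination step under strong assumptions (independent errors, one variance component per degree, no condition equations): each harmonic degree of the combined model is a weighted mean of the seismic and gravimetric coefficients, with weights given by the inverse variance components. All numbers below are fabricated.

    import numpy as np

    # Illustrative only: combine seismic and gravimetric Moho coefficients
    # degree by degree, weighting each model by the inverse of its estimated
    # variance component for that degree.
    def combine_models(c_seis, c_grav, var_seis, var_grav):
        w_s, w_g = 1.0 / var_seis, 1.0 / var_grav
        return (w_s * c_seis + w_g * c_grav) / (w_s + w_g)

    # Fabricated per-degree coefficients (km) and variance components (km^2).
    c_seis   = np.array([30.0, 4.0, -1.5, 0.8])
    c_grav   = np.array([28.5, 3.6, -1.1, 1.0])
    var_seis = np.array([1.0, 2.0, 4.0, 8.0])   # seismic noisier at high degrees
    var_grav = np.array([4.0, 3.0, 2.0, 1.0])   # gravimetric noisier at low degrees
    print(combine_models(c_seis, c_grav, var_seis, var_grav))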

Restricted access

In global navigation satellite system (GNSS) carrier phase data processing, cycle slips are limiting factors and affect the quality of the estimators in general. When differencing phase observations, a problem in the phase ambiguity parameterization may arise, namely linear relations between some of the parameters. These linear relations must be considered as additional constraints in the system of observation equations. Neglecting these constraints results in poorer estimators, which becomes significant when ambiguity resolution is required. As a clue to detecting the problem in GNSS processing, we focused on the equivalence of the undifferenced and differenced observation equations. With differenced observables this equivalence is preserved only if certain constraints, which formulate the linear relations between some of the ambiguity parameters, are added to the differenced observation equations. To show the necessity of the additional constraints, an example is given using real data of a permanent station from the network of the International GNSS Service (IGS). The results are notable for GNSS software developers.
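
A toy numerical sketch of the point about constraints, with invented numbers rather than the IGS data set of the paper: three differenced ambiguity parameters satisfy an exact linear relation, and adding that relation to the observation equations (here as a heavily weighted pseudo-observation) restores information that the unconstrained differenced solution ignores.

    import numpy as np

    # Toy example: dN12 = N1-N2, dN23 = N2-N3 and dN13 = N1-N3 are introduced
    # as separate unknowns, although they obey dN12 + dN23 - dN13 = 0.
    rng = np.random.default_rng(1)
    truth = np.array([3.0, -2.0, 1.0])                # satisfies the relation
    A = np.eye(3)                                     # one observation per parameter
    y = A @ truth + rng.normal(scale=0.05, size=3)

    # Unconstrained least squares on the differenced observations
    x_free, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Constrained solution: append the exact relation with a large weight
    C = np.array([[1.0, 1.0, -1.0]])
    A_c = np.vstack([A, 1e6 * C])
    y_c = np.concatenate([y, [0.0]])
    x_con, *_ = np.linalg.lstsq(A_c, y_c, rcond=None)

    print("free:       ", x_free, " relation residual:", (C @ x_free).item())
    print("constrained:", x_con, " relation residual:", (C @ x_con).item())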

Restricted access

The problem of handling outliers in a deformation monitoring network is of special importance, because the existence of outliers may lead to false deformation parameters. One of the approaches to detecting outliers is to use robust estimators. In this case the network points are computed by such a robust method, implying that the adjustment result resists systematic observation errors and, in particular, is insensitive to gross errors and even blunders. Since there are different approaches to robust estimation, the resulting estimated networks may differ. In this article, different robust estimation methods, such as the M-estimation of Huber, the “Danish” method and L1-norm estimation, are reviewed and compared with the standard least squares method to assess their potential to detect outliers in the Tehran Milad tower deformation network. The numerical studies show that the L1-norm detects and down-weights the outliers best, so it is selected as the preferred approach, although its solution lacks uniqueness. For comparison, Baarda’s data snooping method can achieve similar results when the magnitude of an outlier is large enough to be detected, but the robust methods are faster than the sequential data snooping process.
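
A generic sketch of robust M-estimation by iteratively reweighted least squares with Huber weights, not the Milad tower adjustment itself; the Danish method and L1-norm estimation fit the same scheme with different weight functions (for the L1-norm the weight would be 1/|residual|). The straight-line observations below are fabricated and contain one gross error.

    import numpy as np

    def huber_irls(A, y, k=1.5, n_iter=20):
        """Robust fit of y ~ A x by IRLS with the Huber weight function;
        k is the tuning constant in units of the robust scale."""
        x = np.linalg.lstsq(A, y, rcond=None)[0]            # ordinary LS start
        for _ in range(n_iter):
            r = y - A @ x
            scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # MAD-based scale
            u = np.abs(r) / scale
            w = np.where(u <= k, 1.0, k / u)                # Huber weights
            sw = np.sqrt(w)
            x = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        return x, w

    # Fabricated straight-line (levelling-type) example with one blunder.
    A = np.column_stack([np.ones(8), np.arange(8.0)])
    y = A @ np.array([10.0, 0.5])
    y[3] += 0.8                                             # gross error
    x_rob, w = huber_irls(A, y)
    print("robust parameters:", x_rob)
    print("most down-weighted observation:", int(np.argmin(w)))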

Restricted access