Modelling residual systematic errors in GPS positioning: Methodologies and comparative studies

C. Satirapod, J. Wang, C. Rizos
School of Surveying and Spatial Information Systems, The University of New South Wales, Sydney, NSW 2052, Australia

Abstract. Since its introduction to civilian users in the early 1980s, the Global Positioning System (GPS) has played an increasingly important role in high precision surveying and geodetic applications. Like traditional geodetic network adjustment, data processing for precise GPS static positioning is invariably performed using the least squares method. To employ the least squares method for GPS relative positioning, both the functional and stochastic models of the GPS measurements need to be defined. The functional model describes the mathematical relationship between the GPS observations and the unknown parameters, while the stochastic model describes the statistical characteristics of the GPS observations. The stochastic model is therefore dependent on the choice of the functional model. A double-differencing technique is commonly used for constructing the functional model. In current stochastic models it is usually assumed that all the one-way measurements have equal variance and that they are statistically independent. These functional and stochastic models form the basis of standard GPS data processing algorithms. However, such algorithms can neither completely eliminate systematic errors in GPS measurements nor account for them satisfactorily. These systematic errors can have a significant effect on both the ambiguity resolution process and the GPS positioning results, which is a potentially critical problem for high precision GPS positioning applications. It is therefore necessary to develop an appropriate data processing algorithm that can effectively deal with systematic errors in a non-deterministic manner.
Recently, several approaches have been suggested to mitigate the impact of systematic errors on GPS positioning results: the semi-parametric model, the use of wavelets, and new stochastic modelling methodologies. These approaches rest on different bases and have different implications for data processing. This paper aims to compare the above three methods, both theoretically and numerically.

Keywords. Iterative stochastic modelling, wavelet-based approach, semi-parametric model, precise GPS relative positioning

1. Introduction

The classical least squares technique has been widely used in the processing of GPS data. It is well known that the least squares procedure is based on the formulation of a mathematical model, consisting of the functional model and the stochastic model. To achieve optimal results, both the functional model and the stochastic model have to be correctly defined. However, it is impossible to model all systematic errors within the functional model due to the lack of knowledge of the phenomena causing these errors. The unmodelled systematic errors (e.g. orbital errors, atmospheric errors, multipath) therefore remain in the measurements, and thus degrade the accuracy and reliability of the positioning results. It is hence necessary to develop a technique that can satisfactorily take the unmodelled systematic errors into account through the enhancement of either (or both) the functional and stochastic models. Several data processing techniques have recently been developed in an attempt to deal effectively with unmodelled systematic errors. Examples of such techniques include the semi-parametric and penalised least-squares technique introduced by Jia et al. (2000), the iterative stochastic modelling procedure proposed by Wang et al. (2001), and the wavelet-based approach suggested by Satirapod et al. (2001). These techniques use different bases and have different implications for data processing.
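The dependence of the stochastic model on the chosen functional model, noted above, can be made concrete through covariance propagation: even when the one-way measurements are assumed to have equal variance and to be statistically independent, the double-differencing operation produces mathematically correlated observables. The following is a minimal sketch of this propagation, not an implementation from any of the cited techniques; the four-satellite geometry and the 3 mm phase noise are illustrative assumptions only:

```python
import numpy as np

# Between-receiver single differences to 4 satellites; satellite 1 is
# taken as the reference for forming double differences (illustrative).
n_sat = 4
sigma = 0.003  # assumed 3 mm phase noise (illustrative value)

# Standard stochastic model: equal variance, statistically independent
cov_oneway = sigma**2 * np.eye(n_sat)

# Double-differencing operator D: row i forms (obs_{i+1} - obs_1)
D = np.hstack([-np.ones((n_sat - 1, 1)), np.eye(n_sat - 1)])

# Covariance propagation law: cov_dd = D * C * D^T.  The result has
# 2 on the diagonal and 1 off the diagonal (in units of sigma^2),
# i.e. the double-differenced observables are correlated even though
# the underlying one-way measurements were assumed independent.
cov_dd = D @ cov_oneway @ D.T
print(cov_dd / sigma**2)
```

Any refinement of the one-way stochastic model (e.g. satellite-elevation-dependent variances, or temporal correlation) must therefore be propagated through the same operator D before it can be used in the double-differenced least squares adjustment.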