Journal of Glaciology, Vol. 34, No. 117, 1988

SNOW-SLOPE STABILITY - A PROBABILISTIC APPROACH

By H. CONWAY* and J. ABRAHAMSON

(Department of Chemical and Process Engineering, University of Canterbury, Christchurch, New Zealand)

ABSTRACT. Measurements of snow properties across and down snow slopes have been used to calculate a safety margin - the difference between the basal shear strength and the applied static stress. Areas of basal deficit exist when the applied shear stress exceeds the basal shear strength (the safety margin is negative), and basal areas are pinned when the safety margin is positive. As the size of a deficit increases, stresses within the overlying slab also increase, and these may be sufficient to cause an avalanche. Measurements made on five slopes (four of which had avalanched) were characterized by considerable spatial variability, and the safety margin has been treated as a random function which varies over the slope. Statistical models of Vanmarcke (1977[a], 1983) have been applied to determine the most likely size of deficit required for avalanching (95% confidence). In one case, an avalanche occurred when the length of deficit was only 2.9 m, and in the other cases the length was always less than 7 m. This size of deficit is small compared with the total area of many avalanche slopes, which suggests that avalanches initiate from small zones of deficit, and makes it difficult to locate a deficit with just a few tests. The optimum sampling interval and number of tests required to yield an adequate estimate of the statistical parameters of the safety margin are also discussed.

INTRODUCTION

Because we can hope to make only a few measurements of snow strength and stress over any particular slope, we need to estimate the continuous, spatially varying properties from a finite number of "point" measurements. Furthermore, we would like to minimize the number of measurements required to represent a slope adequately.
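The safety-margin idea described in the abstract (strength minus applied stress, with contiguous negative runs forming a zone of basal deficit) can be sketched numerically. All values below are illustrative assumptions, not the field measurements reported in this paper.

```python
import numpy as np

# Illustrative sketch: safety margin = basal shear strength - applied
# static shear stress at each test point; contiguous runs with a
# negative margin form a zone of basal deficit. Numbers are invented.
strength = np.array([950., 1100., 820., 780., 760., 990., 1200., 870.])  # Pa
stress = np.full(8, 900.)            # applied static stress (Pa), assumed
spacing = 1.0                        # distance between test points (m), assumed

margin = strength - stress           # safety margin at each point (Pa)
deficit = margin < 0                 # True where stress exceeds strength

# Longest contiguous deficit run, converted to a length in metres.
best = run = 0
for d in deficit:
    run = run + 1 if d else 0
    best = max(best, run)
print("deficit length ~", best * spacing, "m")   # -> 3.0 m here
```

With only a few widely spaced tests, a deficit of this size (a few metres) is easy to miss entirely, which is the sampling problem the paper addresses.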
Several techniques have been used to extrapolate properties over large areas from just a few measurements (e.g. Kriging - see Krige, 1966; or Monte Carlo simulations - see Harr, 1977, p. 503-54; or Nguyen and Chowdhury, 1985), and we have chosen a technique proposed by Vanmarcke (1977[a], 1983). A series of contiguous point measurements made at intervals over a slope may be treated as a "stationary random process". Such a process can be characterized by a mean value, a variance or standard deviation, and a measure of the influence between adjacent measurements. The influence (or correlation structure) of a random process is commonly represented either by a correlation function or by a Fourier transform, but Vanmarcke (1977[a], [b], 1983) has proposed a new approach using a "moving average" technique. A small amount of local averaging is allowed, and this serves to smooth micro-scale fluctuations which can give rise to excessive sensitivity. Adjacent measurements are averaged to generate a locally averaged random function with a changed variance. The averaging procedure can be extended over larger lengths to produce a family of functions.

* Present address: Geophysics Program AK-50, University of Washington, Seattle, Washington 98195, U.S.A.

As an example of the extended averaging procedure, measurements of basal shear strength (taken from case 2 discussed below) have been averaged and plotted in Figure 1a. The figure shows the original "point" measurements and also the measurements averaged over 2.67 and 6.23 m. As the averaging length is increased, fluctuations about the mean tend to cancel, causing the variance to diminish. The manner in which the variance diminishes is a reflection of the correlation between adjacent measurements, and a variance function is defined as the ratio of the variance of the locally averaged process to the variance of the original point process. For the example shown in Figure 1a, the variance function is:
        Γ²(L) = var σ_bL / var σ_b0                    (1)

where var σ_b0 and var σ_bL are respectively the variance of point measurements of shear strength, and the variance of measurements which have been averaged over length L. This function (plotted in Figure 1b) fully characterizes the correlation structure of the original process and contains the same information as a correlation function or a spectral density function. The decay in variance can be described in terms of a "scale of fluctuation", δ, which defines the scale at which increased averaging commences to have a significant influence on the variance. Numerous specific analytical models of the variance function can be used, and one of the simplest takes the form:

        Γ²(L) = 1        for L ≤ δ,
              = δ/L      for L > δ.                    (2)

This function provides a reasonable approximation of the variance function, especially when the averaging length exceeds about 2δ, and is also plotted for the example in Figure 1b. Random processes for which δ → 0 have no long-term memory, and under extended local averaging the variance decay is in inverse proportion to L² (rather than L). This further simplifies calculations because values can be assumed to be independent. However, any random process may be sufficiently described by a mean, variance, and scale of fluctuation, making for simple derivations of the mean-square derivative, mean threshold-crossing rates, and the probability distribution of extreme values.

Physically, there exists an approximate relationship between δ and the average length the point process is above (or below) its mean value. In fact, the half-wavelength of the process, λ/2, can be approximated by (Vanmarcke, 1977[a], 1983):

        λ/2 = (π/2)^(1/2) δ ≈ 1.25δ.                   (3)
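The local-averaging procedure and the variance function of equation (1) can be sketched in a few lines, with the simple model of equation (2) for comparison. The sample spacing, mean strength, and scale of fluctuation below are assumed values for a synthetic series, not the measurements from the cases discussed in this paper.

```python
import numpy as np

# Sketch of Vanmarcke's local-averaging approach on synthetic
# "point" shear-strength data (all parameters here are assumptions).
rng = np.random.default_rng(0)

dx = 0.3                 # sample spacing (m), assumed
n = 512                  # number of point measurements
delta = 2.0              # scale of fluctuation delta (m), assumed
# Build a correlated series by moving-averaging white noise over a
# window of roughly delta, shifted to a plausible mean strength.
w = max(1, int(delta / dx))
noise = rng.normal(size=n + w)
points = 1000.0 + 300.0 * np.convolve(noise, np.ones(w) / w, mode="valid")[:n]

def variance_function(x, window):
    """Gamma^2(L), equation (1): variance of the process locally
    averaged over `window` samples, divided by the point variance."""
    averaged = np.convolve(x, np.ones(window) / window, mode="valid")
    return averaged.var() / x.var()

# As the averaging length L grows, the variance diminishes; the
# simple model of equation (2) is 1 for L <= delta, delta/L beyond.
for window in (1, 4, 16, 64):
    L = window * dx
    model = min(1.0, delta / L)
    print(f"L = {L:5.1f} m  Gamma^2 = {variance_function(points, window):.3f}"
          f"  model = {model:.3f}")

# Half-wavelength approximation, equation (3): lambda/2 ~ 1.25 delta.
print("lambda/2 ~", round(np.sqrt(np.pi / 2) * delta, 2), "m")
```

For window = 1 the ratio is exactly 1 by construction; the empirical decay only approximates the model, since equation (2) is itself an approximation to the true variance function.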