J13.3  VERIFYING MESOSCALE MODEL PRECIPITATION FORECASTS USING AN ACUITY-FIDELITY APPROACH

Stephen F. Marshall*, Peter J. Sousounis, and Todd A. Hutchinson
WSI Corporation, Andover, MA

1. INTRODUCTION

Precipitation has been a target of verification for mesoscale numerical model forecasts since the early days of numerical weather prediction. For daily or longer time periods, a mean square error or threat score approach may be suitable. However, for verifying forecasts of hourly precipitation, particularly from models with very fine grid spacing, such traditional approaches can unfairly penalize one model while unfairly rewarding another.

A classic example is the decrease in a model's precipitation skill scores as its grid spacing is decreased. To the human eye, simulations on a finer mesh often appear to represent mesoscale features better than coarser simulations do. However, traditional metrics often rank such fine mesh simulations as inferior to coarser runs from the same model (Mass et al. 2002). Subjectively, this effect appears to be caused by improper location or timing of the mesoscale features in the fine mesh model. The fine mesh model is penalized heavily both for a lack of precipitation at the locations and times where it was observed and for having too much precipitation at nearby times and locations where precipitation was not observed. By contrast, a coarser forecast tends to predict smoother fields, e.g., weaker precipitation over greater areas, potentially leading to smaller point-by-point penalties and hence a superior skill score.

Recently, a verification strategy has been developed at WSI Corporation to evaluate forecasts with fine grid spacings and short temporal frequencies more fairly. This method differs from traditional techniques in the way it associates forecast and observational data to form forecast-observation pairs. In this scheme, the skill of the forecast is measured by two metrics called acuity and fidelity.
Acuity represents the model's skill at detecting the features of the observed data. The acuity of a forecast is calculated for each observed data point by finding the best matching forecast for that observation. Instead of automatically associating an observation with the forecast that shares its location and time, the best match is obtained by minimizing a cost function calculated between the target observation and many candidate forecast data. The candidate forecast datum that produces the smallest penalty is deemed the best match and is therefore associated with the observation.

Fidelity represents the faithfulness of the model's predictions to the observed data. The fidelity of a forecast is calculated much like the acuity, except that the roles of the observations and forecasts are reversed. Thus, for each target forecast datum, the best matching observation is found within a multidimensional field of candidate observations.

In this paper, we develop a cost function for verifying precipitation forecasts using the acuity-fidelity method and explore the sensitivities of this cost function's parameters. The validity of the verification scheme is demonstrated by visually comparing precipitation output from different models with the corresponding acuity-fidelity scores. The utility of the acuity-fidelity technique is demonstrated by comparing its results to those from a more traditional threat score approach.

*Corresponding Author Address: Stephen F. Marshall, WSI Corporation, 400 Minuteman Road, Andover, MA 01810. Email: smarshall@wsi.com.

2. METHODOLOGY

To assess precipitation forecasts, a cost function was defined with four components: one each for errors in distance, time, and intensity, and a fourth term to account for missed events:

J = J_s + J_t + J_i + J_e    (1)

In this study, intensity refers to one-hour accumulated precipitation; in general, it could be any dependent variable.
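As a concrete illustration of the pairing scheme, the best-match search behind the acuity and fidelity calculations can be sketched in a few lines. Everything here is hypothetical: the simple distance-plus-time cost is a stand-in for the full four-term cost function developed in this section, the event velocity value is arbitrary, and the data layout (dictionaries with x, y, t keys) is an assumption made only for the sketch.

```python
def pair_cost(obs, fcst, u_e=10.0):
    """Placeholder cost between one observed and one forecast datum:
    spatial separation (km) plus time error (h) scaled by an assumed
    event velocity u_e (km/h), so both terms share distance units."""
    dist = ((obs["x"] - fcst["x"]) ** 2 + (obs["y"] - fcst["y"]) ** 2) ** 0.5
    return dist + u_e * abs(obs["t"] - fcst["t"])

def acuity(observations, forecasts):
    """Mean best-match penalty over all observations: each observed
    datum is associated with the candidate forecast datum that
    minimizes the cost function, rather than with the forecast that
    merely shares its location and time."""
    penalties = [min(pair_cost(o, f) for f in forecasts)
                 for o in observations]
    return sum(penalties) / len(penalties)

def fidelity(observations, forecasts):
    """Same search with the roles reversed: for each target forecast
    datum, find the best matching observation."""
    return acuity(forecasts, observations)
```

Note that the brute-force minimum over all candidates shown here is the simplest possible search; in practice the candidates would presumably be restricted to a spatial and temporal neighborhood of the target datum.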
To calculate a total acuity or fidelity penalty, all the cost function components must be converted into common units. We chose to convert the time, intensity, and event penalties into equivalent distances using the following component definitions:

J_s = x      (2)
J_t = U_e t  (3)
J_i = D_I I  (4)
J_e = f(J_miss, intensity regimes)  (5)

In these equations, the variables x, t, and I represent the absolute differences in position, time, and intensity, respectively, between an observed datum and a forecast datum. U_e is the characteristic event velocity used to relate temporal and spatial errors, and D_I is the distance-intensity ratio used to relate intensity and spatial errors. J_miss is the maximum value of J and represents the worst possible penalty; the minimum penalty is 0. The intensity regimes are a list of intensity values that define categories within the intensity continuum.
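The four components above can be sketched as code. The parameter values, the data layout, and in particular the form of the missed-event term J_e are assumptions made for illustration only: the text states just that J_e is a function of J_miss and the intensity regimes, so the regime-mismatch rule below is one plausible realization, not the authors' actual formulation.

```python
U_E = 10.0      # characteristic event velocity, km/h (assumed value)
D_I = 50.0      # distance-intensity ratio, km per mm/h of error (assumed value)
J_MISS = 500.0  # maximum (worst possible) penalty, km (assumed value)
REGIMES = [0.2, 2.5, 7.6]  # intensity category boundaries, mm/h (assumed)

def regime(intensity):
    """Index of the intensity category a value falls into."""
    return sum(intensity >= r for r in REGIMES)

def cost(obs, fcst):
    """Total penalty J = J_s + J_t + J_i + J_e between one observed
    and one forecast datum (dicts with x, y in km, t in h, i in mm/h),
    with every component expressed as an equivalent distance."""
    # J_s: absolute difference in position (Eq. 2)
    j_s = ((obs["x"] - fcst["x"]) ** 2 + (obs["y"] - fcst["y"]) ** 2) ** 0.5
    # J_t: time error converted to distance via the event velocity (Eq. 3)
    j_t = U_E * abs(obs["t"] - fcst["t"])
    # J_i: intensity error converted via the distance-intensity ratio (Eq. 4)
    j_i = D_I * abs(obs["i"] - fcst["i"])
    # J_e: hypothetical missed-event rule -- full penalty when the pair
    # falls in different intensity regimes (Eq. 5, form assumed here)
    j_e = J_MISS if regime(obs["i"]) != regime(fcst["i"]) else 0.0
    # J_miss caps the total, since it is the worst possible penalty
    return min(j_s + j_t + j_i + j_e, J_MISS)
```

With this formulation a perfect match scores 0 and no pairing can score worse than J_miss, matching the stated bounds on the penalty.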