LETTER
Communicated by Steven Nowlan

Neural Network Uncertainty Assessment Using Bayesian Statistics: A Remote Sensing Application

F. Aires
faires@giss.nasa.gov
Department of Applied Physics and Applied Mathematics, Columbia University, NASA Goddard Institute for Space Studies, New York, NY 10025, U.S.A., and CNRS/IPSL/Laboratoire de Météorologie Dynamique, École Polytechnique, 91128 Palaiseau Cedex, France

C. Prigent
catherine.prigent@obspm.fr
CNRS, LERMA, Observatoire de Paris, Paris 75014, France

W. B. Rossow
wrossow@giss.nasa.gov
NASA Goddard Institute for Space Studies, New York, NY 10025, U.S.A.

Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land.

The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model.

The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can effectively represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a black box model

Neural Computation 16, 2415–2458 (2004) © 2004 Massachusetts Institute of Technology