Model-based Type B uncertainty evaluations of measurement: towards more objective evaluation strategies

Marcel Boumans
Department of Economics, University of Amsterdam, Netherlands
EIPE, Erasmus University Rotterdam, Netherlands

Article history: Available online 10 April 2013

Keywords: Grey-box model; Objectivity; Uncertainty evaluation; Model validation

Abstract: This article proposes a more objective Type B evaluation. This can be achieved when Type B uncertainty evaluations are model-based. This implies, however, grey-box modelling and validation instead of white-box modelling and validation, which are appropriate for Type A evaluation.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

GUM [1] distinguishes between two types of evaluations of uncertainties. Type A evaluation is a method of evaluation of uncertainty by the statistical analysis of series of observations. Type B evaluation is a method of evaluation of uncertainty by means other than the statistical analysis of series of observations. The purpose of the Type A and Type B classification is to indicate the two different ways of evaluating uncertainty components, but both types of evaluation are based on probability distributions, and the uncertainty components resulting from either type are quantified by variances or standard deviations of these distributions. So, the main difference between these two types is the way we arrive at these quantified variances. In the case of a Type A evaluation, the variance is calculated from a series of repeated observations and is the familiar statistically estimated variance. In the case of a Type B evaluation, the variance is evaluated by "using available knowledge" ([1], pp. 6-7).
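The Type A route described above can be made concrete with a short sketch. The code below computes the GUM Type A standard uncertainty of a measurement result: the experimental standard deviation of the mean of n repeated observations. The readings used are purely illustrative values, not data from the article.

```python
import math
import statistics

def type_a_standard_uncertainty(observations):
    """Type A evaluation: the experimental standard deviation of the
    mean, s / sqrt(n), where s is the sample standard deviation
    (divisor n - 1) of n repeated observations."""
    n = len(observations)
    s = statistics.stdev(observations)  # statistically estimated std. dev.
    return s / math.sqrt(n)

# Hypothetical series of repeated readings of the same measurand
readings = [100.02, 99.98, 100.01, 100.03, 99.99, 100.00]
u_a = type_a_standard_uncertainty(readings)
```

Here the variance is obtained entirely from the observed frequency of the readings themselves, which is what makes the Type A evaluation statistical in character.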
Thus a Type A variance is obtained from a probability density function derived from an observed frequency distribution, while a Type B variance is obtained from "an assumed probability density function based on the degree of belief that an event will occur" ([1], p. 7), an a priori distribution, usually called a subjective probability distribution.

A note in GUM [1] to the description of Type B evaluation of an uncertainty component clarifies this notion of "available knowledge" by describing it as being "usually based on a pool of comparatively reliable information" ([1], p. 7). This pool consists of ([1], p. 11):

- previous measurement data;
- experience with or general knowledge of the behaviour and properties of relevant materials and instruments;
- manufacturer's specifications;
- data provided in calibration and other certificates;
- uncertainties assigned to reference data taken from handbooks.

Moreover, to explicate this idea of a Type B evaluation, it is noted that "The proper use of the pool of available information for a Type B evaluation of standard uncertainty calls for insight based on experience and general knowledge, and is a skill that can be learned with practice." So, even though a Type B evaluation is meant to be a "scientific judgement" ([2], p. 5), it is apparently more subjective than a Type A evaluation.

Measurement 46 (2013) 3775-3777
http://dx.doi.org/10.1016/j.measurement.2013.04.003
Address: University of Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam, Netherlands. E-mail address: m.j.boumans@uva.nl
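Two common Type B routes drawing on this pool of information can be sketched as follows, using the standard GUM conversions: a manufacturer's tolerance treated as a rectangular distribution of half-width a gives u = a/sqrt(3), and an expanded uncertainty U reported on a calibration certificate is divided by its coverage factor k. The numerical inputs are illustrative assumptions, not values from the article.

```python
import math

def type_b_rectangular(half_width):
    """Type B evaluation from a manufacturer's tolerance +/- a,
    assuming a rectangular distribution: u = a / sqrt(3)."""
    return half_width / math.sqrt(3)

def type_b_from_certificate(expanded_uncertainty, coverage_factor=2.0):
    """Type B evaluation from a calibration certificate that reports
    an expanded uncertainty U with coverage factor k: u = U / k."""
    return expanded_uncertainty / coverage_factor

# Hypothetical pool of "available knowledge":
u_spec = type_b_rectangular(0.05)                      # spec sheet: +/- 0.05
u_cal = type_b_from_certificate(0.02, coverage_factor=2.0)  # certificate
```

In both cases the probability density function is assumed rather than observed, which is exactly where the subjective, judgement-based element of a Type B evaluation enters.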