INTERNATIONAL COMMITTEES          Accred Qual Assur (2000) 5 : 346–348

Stephen L.R. Ellison

Uncertainties in qualitative testing and analysis

The following document has been developed by the EURACHEM Measurement Uncertainty Working Group¹. It is presented with a view to developing policy and promoting work on the topic. Comments on the content and the issues raised are invited, and should be addressed to the working group secretary (above).

¹ Members of the working group at the time of publication are as follows: A Williams (Chairman), S Ellison (Secretary), M Berglund, W Haesselbarth, K Hedegaard, R Kaarls, M Mansson, M Rosslein, R Stephany, A van der Veen, W Wegscheider, H van de Wiel, R Wood. The group includes representatives from other bodies as follows: CITAC: Pan Xiu Rong, M Salit, A Squirrell, K Yasuda. AOAC International: R Johnson, Jung-Keun Lee, D Mowrey. IAEA: P De Regge, A Fajgelj. EA: D Galsworthy.

1 Introduction

Uncertainties associated with quantitative measurement results have been the subject of considerable activity since the publication of the ISO Guide on the topic [1]. By comparison, the issue of uncertainties in qualitative testing and analysis (referred to elsewhere as "identification certainty" [2]) has received less attention. With the publication of ISO 17025:1999, however, interest in uncertainties in testing operations has increased. The problems of establishing uncertainty associated with qualitative tests, such as 'pass/fail', identity and comparative identity tests, have accordingly become more important.

This paper sets out some of the main issues arising for analysts in testing laboratories and accreditation bodies interested in the assessment of uncertainty in qualitative testing. While it does not provide detailed statistical methods for the characterisation of uncertainties in qualitative testing, it does provide general guidance on the main issues.
2 Importance of uncertainty in qualitative testing

Broadly, qualitative testing provides a simple statement or categorisation of a test item or material. Decisions are invariably taken as a result; for example, whether or not to issue a batch of fertiliser, whether water is fit to drink, whether a person is in possession of a controlled substance, or whether a newly synthesised material has the desired structure. Clearly, incorrect classifications – such as 'passing' a product when in fact it is unfit for use – carry risks to all parties. To control those risks, professionals involved in testing take pains to ensure that their methods lead to acceptably low risks of incorrect classification.

It follows that, at some point in the development of any such test method, an evaluation must be made of the risk of incorrect classification. For most such methods, therefore, it is reasonable to expect a laboratory to establish, or have access to, information on the risks of incorrect results.

An important exception is the use of standard test methods, established by groups outside the particular laboratory as fit for the purpose in question. The laboratory may well have limited, or even no, access to the risk information leading to that decision. However, such methods invariably specify a test procedure in some detail, and the laboratory will generally be expected to show that those factors which are within its control do indeed meet the requirements of the test method. That, in turn, may involve demonstrating that the uncertainty of reference values, calibration operations or intermediate measurements leading to a decision is sufficiently small.

² Partial class membership is used extensively in "fuzzy logic" systems, but the relevant terminology and treatment are very rare in ordinary testing activities.
3 Forms of uncertainty information in qualitative testing

Qualitative testing generally relates to categorical statements, such as 'present/absent', 'pass/fail', chemical species, or perhaps membership of a class of compounds. Such classification statements are not usually associated with a range of expression; one does not, in general reporting, speak of an artefact or material being a 90% pass, or 99% present². The typical form of uncertainty information is, as a result, probabilistic in nature. That is, one gives an indication of the probability of a given classification being correct.

The most familiar and widely used form of such information is, at present, the use of false response rates, particularly "false positive rates" and "false negative rates".

Probably the most important alternative to simple statements of false response rates is the use of values derived from Bayes' theorem (a summary of Bayes' theorem is given in reference 2). Examples include the likelihood ratio (an indication of the additional information provided by a test result) and the posterior probability, an indication of the probability of an object fitting a given category given a test result. Bayesian estimates are particularly widely used in evaluating forensic evidence, for example DNA matching or blood group matching. Further details can be found elsewhere [ref. 2 and references cited therein]. Bayesian estimates can be calculated by appropriate combination of false positive and false negative rates.

4 Nomenclature relating to qualitative testing uncertainties

The nomenclature for qualitative testing is not fully developed. An example will illustrate a current problem. The term 'false negative rate' can, in principle, have two quite different interpretations:

i) The chance, or frequency, of negative responses given that the response should be positive.
Broadly, this is the fraction of ‘true positive’ test items that return negative responses.
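Interpretation i) above, together with the "appropriate combination" of false response rates into Bayesian estimates mentioned in section 3, can be sketched as follows. This is an illustrative sketch only; the function names and the example figures are assumptions for demonstration, not values from this paper.

```python
def false_negative_rate(n_false_negatives, n_true_positive_items):
    """Interpretation i): the fraction of truly positive test items
    that return a negative response."""
    return n_false_negatives / n_true_positive_items

def posterior_probability(prior, fn_rate, fp_rate):
    """Bayes' theorem: probability that an item is truly positive,
    given a positive test response.

    prior   -- probability that the item is positive before testing
    fn_rate -- P(negative response | item truly positive)
    fp_rate -- P(positive response | item truly negative)
    """
    sensitivity = 1.0 - fn_rate  # P(positive response | item truly positive)
    # Total probability of observing a positive response:
    p_positive = sensitivity * prior + fp_rate * (1.0 - prior)
    return sensitivity * prior / p_positive

def likelihood_ratio(fn_rate, fp_rate):
    """Likelihood ratio for a positive response:
    P(positive | item positive) / P(positive | item negative)."""
    return (1.0 - fn_rate) / fp_rate
```

For example, with an assumed 5% false negative rate, 2% false positive rate and a prior probability of 0.01, the posterior probability of a truly positive item given a positive response is about 0.32, and the likelihood ratio is 47.5; the example illustrates how modest false response rates can still leave substantial classification uncertainty when the prior probability is low.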