ON THE LINEAR SEPARABILITY OF DIAGNOSTIC MODELS

John W. Sheppard
Department of Computer Science
The Johns Hopkins University
3400 N. Charles Street
Baltimore, MD 21218
jsheppa2@jhu.edu

Stephyn G. W. Butcher
Department of Computer Science
The Johns Hopkins University
3400 N. Charles Street
Baltimore, MD 21218
stephyn@jhu.edu

Abstract—As new approaches and algorithms are developed for system diagnosis, it is important to reflect on existing approaches to determine their strengths and weaknesses. Of particular concern is identifying potential causes of false pulls during maintenance. Within the aerospace community, one approach to system diagnosis—based on the D-matrix derived from test dependency modeling—is used widely, yet little theoretical assessment of the merits of the approach has been performed; past assessments have been limited largely to empirical analysis and case studies. In this paper, we provide a theoretical assessment of the representation power of the D-matrix and identify algorithms and model types for which the D-matrix is appropriate. Finally, we relate the processing of the D-matrix to several diagnostic approaches and suggest how to extend the D-matrix to take advantage of the strengths of those approaches.

INTRODUCTION

Within the aerospace community and similar communities producing large, complex systems (e.g., the Department of Defense), considerable attention has been given to developing diagnostic systems based on a specific modeling paradigm—dependency modeling. Many available tools map their models into the so-called “D-matrix” (from “dependency” matrix) and derive diagnostic strategies from this matrix. Recent research has even demonstrated a functional “equivalence” between a variety of graphical diagnostic models, such as the behavioral Petri net, the bipartite Bayesian network, and the multi-signal flow model [13].
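For concreteness, a D-matrix can be viewed as a Boolean matrix whose rows are fault signatures and whose columns are tests, with diagnosis amounting to matching observed test outcomes against rows. The following minimal sketch (all matrix entries and fault names are hypothetical, chosen only for illustration) shows this row-matching inference and also previews the linear-separability result argued in this paper: because distinct binary signatures are distinct vertices of the Boolean hypercube, a simple perceptron can separate any one fault from the rest.

```python
# Hypothetical 4-fault, 4-test D-matrix: entry D[i][j] = 1 iff
# test j depends on (i.e., can detect) fault i. Rows are fault signatures.
D = [
    [1, 1, 0, 0],  # f1
    [0, 1, 1, 0],  # f2
    [0, 0, 1, 1],  # f3
    [1, 0, 0, 1],  # f4
]
faults = ["f1", "f2", "f3", "f4"]

def diagnose(outcomes):
    """Return the faults whose signature exactly matches the
    observed test outcomes (1 = fail, 0 = pass)."""
    return [f for f, row in zip(faults, D) if row == outcomes]

print(diagnose([0, 1, 1, 0]))  # → ['f2']

# Preview of linear separability: a perceptron trained to separate
# f1's signature from the other signatures converges to a separating
# hyperplane (guaranteed here, since the signatures are distinct).
w, b = [0.0] * 4, 0.0
labels = [1, -1, -1, -1]  # one-vs-rest: f1 against all other faults
for _ in range(100):
    mistakes = 0
    for x, y in zip(D, labels):
        if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, x)]  # perceptron update
            b += y
            mistakes += 1
    if mistakes == 0:  # converged: every signature correctly classified
        break

signs = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in D]
print(signs)  # → [1, -1, -1, -1]
```

The exact-match `diagnose` function is intentionally naive; as the paper discusses, real test outcomes may be corrupted by false alarms, which is precisely why the representational limits of the D-matrix matter.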
The multi-signal flow model is of particular interest because it is one model for which tools based on the D-matrix have been developed [5]. Motivated by the widespread use of models based on the D-matrix, the derivative “diagnostic inference model” has been proposed by the IEEE as a standard representation of this kind of model in IEEE Std 1232-2002 [11], and this standard is a candidate for inclusion in the DoD’s automatic test system framework [7] and the associated Automatic Test Markup Language (ATML) initiative [2].

Previously, Sheppard and Kaufman asserted that false alarms generally arise from multiple sources: human error, unpredictable or unmodeled environmental conditions, instrument uncertainty, or test design issues [20]. Such false alarms can lead to false pulls and unnecessary maintenance actions. The test community’s desire to identify causes of false pulls during system maintenance motivates the work in this paper. In addition to false alarms, false pulls can be attributed to ineffective diagnostics arising from incomplete models, inaccurate models, or erroneous reasoning. We focus on incomplete diagnostic models in this paper.

In the following, we consider the diagnostic problem from the perspective of pattern classification [8] and prove a significant result on the representation power of models based on the D-matrix. We believe this result is well known in the pattern classification community, but it is not as well known within the automatic test community. Specifically, we will prove that a diagnostic model based upon the D-matrix instantiates a linearly separable classification problem. Given this characteristic, we then assess a number of diagnostic inference algorithms that, when applied to the D-matrix, either indicate limitations in diagnostic