Predetection Fusion: Resolution Cell Grid Effects

CONSTANTINO RAGO
PETER WILLETT, Senior Member, IEEE
University of Connecticut

MARK ALFORD
Rome Laboratory

If members of a suite of sensors from which fusion is to be carried out are not colocated, it is unreasonable to assume that they share a common resolution cell grid; this is generally ignored in the data fusion community. We explore the effects of such “noncoincidence,” and we find that what at first seems to be a problem can in fact be exploited. The idea is that a target is known to be confined to an intersection of overlapping resolution cells, and this intersection is generally small. We examine noncoincidence from two viewpoints: tracking and detection. With respect to tracking our analysis is first static, by which is meant that we establish the decrease in measurement error, and then dynamic, meaning that the overall effect on the tracking problem is quantified. The detection viewpoint considers noncoincidence as it impacts a predetection fusion system. Specifically, the role of the fusion rule is examined, and the use of noncoincidence to improve detection performance (rather than tracking performance) is explored.

Manuscript received January 19, 1995; revised July 17 and October 20, 1997.

IEEE Log No. T-AES/35/3/06400.

This research was supported through Rome Laboratory under AFOSR Contract F30602-93-C-0183.

Authors’ current addresses: C. Rago, Scientific Systems, 500 W. Cummings Park, Ste. 300, Woburn, MA 01801; P. Willett, Dept. of Electrical and Systems Engineering, U-157, University of Connecticut, Storrs, CT 06269, E-mail: (willett@eng2.uconn.edu); M. Alford, OCTM, Rome Laboratory, Griffiss AFB, NY 13441.

0018-9251/99/$10.00 © 1999 IEEE

I. INTRODUCTION

A. Background

Observations from a suite of two or more radars can be fused at three levels: as tracks, as detections, and as observations.
In the first, each sensor performs its own tracking, and the resulting estimated target trajectories and covariances are combined. In the second, all sensor detections are combined as input to a single tracking algorithm. It is common to refer to these as track fusion and post-detection fusion, respectively; schemes for each are available in [1, 2] and their references. In the third approach, often called predetection fusion, target reports from the individual sensors are combined and evaluated prior to being passed to a tracking routine; an excellent treatment and bibliography of the topic is in [3]. The distinction between the second and third approaches lies in the location of the hit/no-hit decision-making: in the former it is local, and in the latter it is fused. The three schemes are hierarchical in the location of fusion within the data-processing chain, and it is apparent that the predetection case, with its fusion of the “rawest” of the data, should offer the best performance. There are challenges, however, as follows.

1) The resolution cell “grids” from the various sensors in general do not line up with one another, and hence it is not immediately clear how to associate reports from resolution cells that overlap but do not coincide.

2) Unless particular care is taken, the sensors will not be time-synchronized. As such, due to target motion, reports from the same target can appear in resolution cells which do not overlap.

These problems are daunting, and perhaps explain the concentration of algorithmic results on the upper parts of the hierarchy, and the parallel abundance of theoretical results on decentralized decision-making among common hypotheses (i.e., all sensors test the same resolution cell at the same time). It is our intention in this work to examine the first of the above problems: that of resolution-cell disagreement.
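The following one-dimensional sketch (illustrative only, not from the paper; the grid widths, offset, and target position are arbitrary choices) shows why resolution-cell disagreement can be exploited: a target detected by both sensors must lie in the intersection of the two reporting cells, which is narrower than either cell alone, so the variance of a uniformly distributed position estimate shrinks.

```python
# Hypothetical 1-D example: two sensors with equal cell width but
# misaligned grids both report the cell containing the target. The
# target is then confined to the intersection of the two cells.

def cell(index, width, offset):
    """Boundaries of resolution cell `index` on a grid with given width/offset."""
    lo = offset + index * width
    return lo, lo + width

def intersection(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if hi > lo else None

width, offset2 = 1.0, 0.3   # sensor 1 aligned at 0; sensor 2 offset by 0.3
target = 2.55
i1 = int(target // width)              # cell index reported by sensor 1
i2 = int((target - offset2) // width)  # cell index reported by sensor 2
c1 = cell(i1, width, 0.0)              # (2.0, 3.0)
c2 = cell(i2, width, offset2)          # (2.3, 3.3)
joint = intersection(c1, c2)           # (2.3, 3.0): narrower than either cell

# Variance of a position uniform over an interval of length d is d**2 / 12,
# so confining the target to the intersection reduces measurement error.
var_single = (c1[1] - c1[0]) ** 2 / 12
var_joint = (joint[1] - joint[0]) ** 2 / 12
print(var_joint < var_single)          # True
```

With coincident grids the intersection would equal the full cell and nothing would be gained; it is precisely the misalignment that yields the narrower joint cell.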
We show, in fact, that the noncoincidence among grids amounts, somewhat surprisingly, to enhanced resolution: making proper use of the overlap can actually improve performance. To do so we must ignore the second problem, and synchrony among the sensors’ observations is assumed.

We are concerned that the discussion may alienate practically minded readers at this point. It is not realistic that each member of a suite of sensors has its attention focused on the same target at the same time. With reference to Fig. 1, if the two near sensors are assumed to scan in lock-step and are surveying a

778 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 35, NO. 3 JULY 1999