A Novel ATR Classifier Exploiting Pose Information

Jose C. Principe, Qun Zhao, Dongxin Xu
Computational NeuroEngineering Laboratory
University of Florida
Gainesville, FL 32611
principe@cnel.ufl.edu

1.0 Abstract

This paper describes a new architecture for ATR classifiers based on the premise that the pose of the target is known within a precision of 10 degrees. We recently developed such a pose estimator. The advantage of our classifier is that pose information reduces the complexity of the input space, so fewer features are needed to classify targets with a higher degree of accuracy. Moreover, the classifier can be trained discriminantly, which also improves performance. Although our work is very preliminary, performance comparable with the standard template matcher was obtained on the MSTAR database.

2.0 Introduction

ATR classifiers can be broadly divided into two types following the taxonomy in [1]: one class in one network (OCON) and all classes in one network (ACON). Conceptually, the OCON network can be thought of as an ACON topology with all the cross-class connections in the upper layers removed. Template matchers belong to the first group, while MLP (multilayer perceptron) classifiers belong to the second. The two are developed with very different data models in mind and therefore have very different characteristics. We can also expect different performance, with the edge tilting towards the ACON.

Template matchers create a classifier using information exclusively from a single class. One template is therefore needed per target, and the classifier is linear, followed by a winner-take-all network. It is easy to increase the number of classes by simply developing extra templates; no modification is needed to the existing templates. The big problem, however, is that the more templates we add, the more likely the classifier is to make mistakes.
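The template-matching scheme described above, one linear score per class followed by a winner-take-all stage, can be sketched as follows. This is a minimal illustration only: the normalized cross-correlation score and the function names are our own choices, not the paper's implementation.

```python
import numpy as np

def classify_by_template(chip, templates):
    """Score an image chip against one template per class and pick the winner.

    Each score is a linear operation on the chip (here, a hypothetical
    zero-mean normalized cross-correlation); the final argmax plays the
    role of the winner-take-all network.
    """
    scores = []
    a = chip.ravel() - chip.mean()
    for t in templates:
        b = t.ravel() - t.mean()
        scores.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)))
    return int(np.argmax(scores)), scores
```

Adding a new class only requires appending another template to the list; the existing templates are untouched, which is exactly the incremental property noted above.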
The second class is trained with exemplars of every class using the rules of statistical pattern recognition. These systems are normally nonlinear, such as quadratic classifiers or multilayer perceptrons. The big advantage is that training is discriminant, i.e. each class is trained against all the other classes. This means that the classifier has to be retrained from scratch if one more class is added to the test suite, but each time the classifier has the possibility of choosing "features" that best distinguish each class from all the others. Although performance also degrades with the number of classes, this degradation is slower than with template matchers.

There is an intermediate class of systems in which some of the properties of the other classes are brought into the training of a template matcher as a penalty term. The minimum average correlation energy (MACE) filter is the best example of such a technique. As we noted in a previous paper [2], the MACE is still a compromise between pure template matchers and classifiers, and it is not easy to pick criteria to decide how best to train it. Conventional design "guidelines" are sub-optimal.
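To make the contrast with template matching concrete, discriminant (ACON-style) training can be sketched as below. Softmax regression stands in here for the nonlinear classifiers mentioned above, and all names are illustrative assumptions, not the paper's system. The key point is visible in the gradient step: every class's weights are updated using exemplars of all classes, which is why adding a class forces retraining from scratch.

```python
import numpy as np

def train_discriminant(X, y, n_classes, lr=0.1, epochs=200):
    """Minimal sketch of discriminant training (softmax regression).

    The cross-entropy gradient couples every class column of W to the
    exemplars of all classes, unlike a per-class template."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                       # one-hot targets
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)            # class posteriors
        W -= lr * X.T @ (P - Y) / len(X)             # joint gradient step
    return W

def predict(W, X):
    # Winner-take-all over the discriminantly trained class scores.
    return np.argmax(X @ W, axis=1)
```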