MP-SET-228 - 1 NATO UNCLASSIFIED RELEASABLE TO AUSTRALIA AND SOUTH AFRICA

On the use of image moments for ATR from SAR images

Carmine Clemente 1, Luca Pallotta 2, Domenico Gaglione 1, Antonio De Maio 3 and John J. Soraghan 1

1 University of Strathclyde, CESIP, EEE, 204 George Street, G1 1XW, Glasgow, UNITED KINGDOM
2 Centro Regionale Information Communication Technology (CeRICT) scrl, via Cinthia c/o Complesso Universitario di Monte Sant'Angelo, Fabbr. 8B, 80126 Napoli, ITALY
3 Università di Napoli "Federico II", DIETI, via Claudio 21, I-80125 Napoli, ITALY

carmine.clemente@strath.ac.uk, luca.pallotta@unina.it, domenico.gaglione@strath.ac.uk, ademaio@unina.it, j.soraghan@strath.ac.uk

ABSTRACT

Enhancing target recognition from Synthetic Aperture Radar (SAR) images is a challenging task that cannot, in general, be solved through a single sensor configuration or signal processing solution. In particular, solutions exploiting physical target modelling are not always able to deal with complex targets or with small differences between classes. This issue can be addressed by image processing techniques that represent the target in a reference domain where small differences and complex structures contribute significantly to the recognition task. The aim of this paper is to provide an overview of the use of image moments for Automatic Target Recognition (ATR) from SAR images. In particular, two families of image moments are considered: pseudo-Zernike and Krawtchouk. Both are computed from orthogonal two-dimensional polynomials that serve as a basis to represent the target images. The use of image moments brings advantages in terms of computational cost, flexibility, reliability and the capability to identify different targets.
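As a concrete illustration (not taken from this paper), the Krawtchouk family mentioned above can be sketched in a few lines of Python. The function names below are ours; the three-term recurrence, binomial weight and normalisation follow the standard weighted Krawtchouk definitions, so the rows of the resulting basis matrix are orthonormal and the 2-D moments are obtained by projecting the image onto that basis along each axis.

```python
import math
import numpy as np

def weighted_krawtchouk_basis(N, p=0.5):
    """Rows K[n, :] are weighted Krawtchouk polynomials of order
    n = 0..N-1 evaluated at x = 0..N-1; the rows are orthonormal."""
    M = N - 1                        # Krawtchouk parameter (support x = 0..M)
    x = np.arange(N, dtype=float)
    K = np.zeros((N, N))
    K[0] = 1.0
    K[1] = 1.0 - x / (p * M)
    # recurrence: p(M-n) K_{n+1}(x) = (Mp - 2np + n - x) K_n(x) - n(1-p) K_{n-1}(x)
    for n in range(1, M):
        K[n + 1] = ((M * p - 2 * n * p + n - x) * K[n]
                    - n * (1 - p) * K[n - 1]) / (p * (M - n))
    # binomial weight w(x) and squared norms rho(n)
    w = np.array([math.comb(M, k) for k in range(N)]) * p**x * (1 - p)**(M - x)
    rho = np.ones(N)
    for n in range(1, N):
        rho[n] = rho[n - 1] * (1 - p) / p * n / (M - n + 1)
    return K * np.sqrt(w / rho[:, None])   # weight and normalise each row

def krawtchouk_moments(img, p=0.5):
    """2-D Krawtchouk moment matrix Q of a square image f.
    Orthonormality gives exact reconstruction: f = K.T @ Q @ K."""
    K = weighted_krawtchouk_basis(img.shape[0], p)
    return K @ img @ K.T
```

In practice only a small block of low-order moments is retained as the feature vector; because the basis is orthonormal, keeping the full moment matrix would allow exact reconstruction of the image.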
Furthermore, these representations can be made rotation, scale and translation invariant, improving the operational robustness of the algorithms, for example by mitigating the lack of image registration between training and test observations. The capabilities of the image moments are discussed together with experimental validation of the algorithms. In particular, performance on the MSTAR dataset of military vehicles is discussed, while the Gotcha 3D dataset is considered for the civilian vehicle case.

1. Introduction

Target recognition of vehicles is a topic of increasing interest and demanding requirements. Knowledge of the vehicles deployed in a specific area of interest is fundamental to understanding what kind of threat has to be countered (e.g. a small Intercontinental Ballistic Missile launcher rather than a theatre missile launcher), or to understanding the activities at a specific site. Nowadays, there is growing interest in bringing the level of knowledge to an identification or characterization stage, where the actual capabilities of a vehicle can be understood from its equipment. For this reason, an Automatic Target Recognition (ATR) algorithm should include the capability to identify small differences among targets, such as a specific configuration of a multirole vehicle. Furthermore, ATR represents only one of the multiple tasks in which modern platforms are involved; for example, a UAV (Unmanned Aerial Vehicle) will acquire the radar echoes, perform the imaging using High Performance Computing (HPC) capabilities [1], maintain constant communication with a control centre or other platforms, and manage other systems such as Electro-Optical (EO) sensors. For this reason, the processing and the information extraction have to comply with the low Size, Weight And Power (SWAP) paradigm. In order to address the identification capability, reliability and low computational cost