8th INTERNATIONAL WORKSHOP ON ON-BOARD PAYLOAD DATA COMPRESSION (OBPDC 2022)
28-30 SEPTEMBER 2022

Benchmarking Deep Neural Networks on Space-Compatible Hardware for EO Data Extraction and Reduction

François de Vieilleville (1), Adrien Lagrange (1), Mahieu Verm (1), Nicolas Dublé (1), Nicolas-Marcel Lemoine (1), Roberto Camarero (2), Bertrand le Saux (3)

(1) Agenium Space, Campus 1, 1 Avenue de l'Europe, Bâtiment 1, 31400 Toulouse, France
Email: firstname.name@agenium.com

(2) ESA, ESTEC, Keplerlaan 1, PO Box 299, NL-2200 AG Noordwijk, The Netherlands
Email: roberto.camarero@esa.int

(3) ESA, PHILAB, Via Galileo Galilei 1, 00044 Frascati RM, Italy
Email: bertrand.le.saux@esa.int

ABSTRACT

The current trend for institutional satellite missions is to increase the spatial and spectral resolution of image products, possibly with a higher revisit rate, thus increasing the volume of acquired data. Moreover, the development of small satellite missions introduces new configurations where resources are more limited. Downlink capacity therefore becomes a bottleneck when the objective is to forward all the data to ground stations for analysis. One of the most radical solutions to lighten this burden is to bring some of the information-extraction tools directly on board. This opens the possibility of discarding useless or noisy images, or even of analysing the images entirely on board and downlinking only those containing specific objects of interest, for example. However, bringing efficient analysis algorithms on board is not a straightforward operation. We focus here on deep learning algorithms, known to be very efficient for many image analysis tasks. Many constraints imposed by the on-board processing context must be considered. First, the neural networks must be simplified and compressed in order to fit on the available hardware.
This means simplifying the architecture of the networks and quantizing all variables and computations to short floating-point numbers, or even to integers with a very low number of bits. The networks must then be translated and deployed on the available hardware, whether a small CPU, a SoC FPGA or an ASIC. Finally, the processing carried out by the algorithm must achieve acceptable performance in terms of throughput and power consumption. It is also essential to verify that there is no significant loss of accuracy compared to the same processing carried out on the ground.

The work presented here evaluates the throughput, power consumption and accuracy that such applications can achieve. Several use cases are considered, starting from cloud segmentation and cloud-versus-snow discrimination, and moving to more specific applications, namely forest segmentation and vessel detection. All these applications have been implemented to compare the performance of the large networks used on the ground with that of the small networks usable on board. The simplified algorithms were then tested on a set of devices including the Xilinx Zynq UltraScale+, Xilinx Zynq 7000 series, AMD G-series, Xilinx Kintex UltraScale KU040 and Intel Myriad 2. This set of devices gives a representative view of the performance achievable with different families of devices (SoC FPGA, CPU, ASIC). It is important to note that this performance is directly linked to the software tools delivered by the device manufacturers, or to the specific backends available. Limited results can thus stem either from inherent limitations of a device or from a lack of maturity of the software used to run neural networks on it. This study allows us to present important figures regarding the achievable power consumption, throughput and accuracy of deep neural networks at the edge for essential EO use cases.
1. INTRODUCTION

Over the last decade, technological improvements in compute capabilities and the miniaturization of processing payloads have enabled the development of affordable platforms, with the production of small satellites below 10 kg featuring a