Floating-Point Bitwidth Analysis via Automatic Differentiation

Altaf Abdul Gaffar 1, Oskar Mencer 2, Wayne Luk 1, Peter Y.K. Cheung 3 and Nabeel Shirazi 4

1 Department of Computing, Imperial College, London SW7 2BZ, UK.
2 Lucent, Bell Labs, Murray Hill, NJ 07974, USA.
3 Department of EEE, Imperial College, London SW7 2BT, UK.
4 Xilinx Inc., 2100 Logic Drive, San Jose, USA.

Abstract

Automatic bitwidth analysis is a key ingredient for high-level programming of FPGAs and high-level synthesis of VLSI circuits. The objective is to find the minimal number of bits needed to represent a value, in order to minimize circuit area and to improve the efficiency of the respective arithmetic operations, while satisfying user-defined numerical constraints. We present a novel approach to bitwidth – or precision – analysis for floating-point designs. The approach involves analysing the dataflow graph representation of a design to see how sensitive the output of a node is to changes in the outputs of other nodes: higher sensitivity requires higher precision and hence more output bits. We automate such sensitivity analysis with a mathematical method called automatic differentiation, which involves differentiating variables in a design with respect to other variables. We illustrate our approach by optimising the bitwidths of two examples: a Discrete Fourier Transform implementation and a Finite Impulse Response filter implementation.

1. Introduction

FPGAs are starting to provide sufficient area to implement floating-point computations. However, the large size of floating-point arithmetic units remains the main limitation on floating-point computation on FPGAs. One way to deal with this difficulty is to minimize the number of bits in the operands, which in turn minimizes the area, and possibly the latency, of the arithmetic operations.

Floating-point numbers consist of a fixed-point mantissa (m) and an integer exponent (e), representing the number m · 2^e.
As a consequence, the number of bits in the exponent determines the range of representable values, while the number of bits in the mantissa determines the available precision for a particular variable.

We can split the problem of minimizing the number of bits in the operands into two parts: (1) range analysis, and (2) precision analysis. Range analysis has received much attention in recent integer bitwidth analysis work [2], [9], [11]. Precision analysis is a separate problem: we are interested in the "sensitivity" of the output of a computation to a slight change in the inputs, or more specifically, the sensitivity of an output to the precision within an arithmetic unit. So far, research into precision analysis has mainly focused on fixed-point implementations [3], [4], [5], [8], [10].

The most straightforward method for minimizing the number of bits is to try out various bitwidths and observe the output for each configuration [6]. This technique, however, involves an enormous search space. In this work we focus on a more scalable method that dynamically computes the derivatives of the computed function, based on a method known as automatic differentiation. Automatic differentiation is well known within the optimization community; it enables the computation of all derivatives of the functions in a program. Since our initial evaluation reveals that the available automatic differentiation packages are very powerful but too slow for our purposes, we implement our own version of automatic differentiation, fully specialised to the task of precision analysis for floating-point computations.

The remainder of the paper is organized as follows. In Section 2 we explain the mathematical foundation of sensitivity analysis via differentiation, and the connection between sensitivity analysis and the minimal bitwidth of the mantissa of a floating-point number.
Section 3 details our implementation of automatic differentiation within a C++ library with user-defined types and overloaded operators. Section 4 describes the application examples, and Section 5 gives detailed results of our bitwidth analysis, including estimated area savings.

2. Approach

In this section, we provide a description of our approach to bitwidth analysis together with a presentation of the