International Journal of Science and Research (IJSR), India Online ISSN: 2319-7064
Volume 2 Issue 3, March 2013
www.ijsr.net

Design of IEEE-754 Floating Point Arithmetic Processor

J. Laxmi 1, R. Ramprakash 2

1 M.Tech Student, CVSR College of Engineering, laxmi402.jatti@gmail.com
2 Assistant Professor, ECE Department, CVSR College of Engineering, ramprakash.rampelli@gmail.com

Abstract: In this paper, we deal with the design of a 32-bit floating point arithmetic processor for RISC/DSP processor applications. It is capable of representing real numbers over a wide dynamic range. The floating point operations are incorporated into the design as functions, and their logic differs from that of ordinary integer arithmetic. Operands must first be converted into the standard IEEE floating point representation before any operation is performed on them. A standard single-precision floating point number is a 32-bit word segmented into three fields: a 1-bit sign, an 8-bit exponent, and a 23-bit mantissa (fraction). The exponent in this IEEE standard is represented in excess-127 format. All of the arithmetic functions, namely addition, subtraction, multiplication and division, are implemented by the processor. The main functional blocks of the floating point arithmetic processor design include an arithmetic logic unit (ALU), register organization, a control and decoding unit, a memory block, and 32-bit floating point addition, subtraction, multiplication and division blocks. This processor IP core can be embedded in many designs, for example as a co-processor for an embedded DSP or an embedded RISC controller. The overall system architecture is described in an HDL and verified through simulation and synthesis.

Keywords: single precision, double precision, floating point, ALU, FPGA

1. Introduction
A. Floating Point

In C, an operation is the effect of an operator on an expression. Specific to floating-point numbers, a floating-point operation is any mathematical operation (such as +, -, *, /) or assignment that involves floating-point numbers, as opposed to binary integer operations. Floating-point numbers have decimal points in them: the number 2.0 is a floating-point number because it has a decimal point, while the number 2 (without a decimal point) is a binary integer. Floating-point operations typically take longer to execute than simple binary integer operations. For this reason, most embedded applications avoid widespread use of floating-point math in favor of faster, smaller integer operations.

In computing, floating point describes a method of representing an approximation of a real number in a way that can support a wide range of values. The numbers are, in general, represented approximately to a fixed number of significant digits (the significand, or mantissa) and scaled using an exponent. The base for the scaling is normally 2, 10 or 16. The typical number that can be represented exactly is of the form:

significand x base^exponent

The idea of floating-point representation, as opposed to intrinsically integer fixed-point numbers which consist purely of a significand, is that adding an exponent component achieves a much greater dynamic range. For instance, to represent large values such as distances between galaxies, there is no need to keep all 39 decimal places down to the femtometre resolution employed in particle physics. Assuming that the best required resolution is in light years, only the 9 most significant decimal digits matter, whereas the 30 others carry pure noise and can thus be safely dropped. This is a saving of about 100 bits of storage. Instead of these 100 bits, far fewer are used to represent the scale (the exponent), e.g. 8 bits or 2 decimal digits. Now one number format can encode both astronomical and subatomic distances with the same 9 digits of accuracy.
But because 9 digits of significand plus 2 digits of scale are 100 times less accurate than the 11 digits of significand the same storage could otherwise hold, this is regarded as a precision-for-range trade-off. The example also shows that using scaling to extend the dynamic range produces another contrast with ordinary fixed-point numbers: the representable values are not uniformly spaced. Small values, those close to zero, can be represented with much higher resolution (1 femtometre) than distant ones, because a greater scale (light years) must be selected for encoding significantly larger values. That is, floating point cannot represent point coordinates with atomic accuracy in another galaxy, only close to the origin.

The term floating point refers to the fact that the radix point (decimal point or, more commonly in computers, binary point) can "float": it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component in the internal representation, and floating point can thus be thought of as a computer realization of scientific notation. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s the most commonly encountered representation is that defined by the IEEE 754 standard.

The speed of floating-point operations, commonly measured in performance benchmarks as FLOPS (floating-point operations per second), is an important machine characteristic, especially in software that performs large-scale mathematical calculations. A number representation (called a numeral system in mathematics) specifies some way of storing a number that may be encoded as a string of digits. The arithmetic is defined as a set of actions on the representation that simulate classical arithmetic operations.