International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 07 Issue: 01 | Jan 2020 www.irjet.net p-ISSN: 2395-0072
© 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 958
IMPLEMENTATION OF FLOATING POINT FFT PROCESSOR WITH SINGLE
PRECISION FOR REDUCTION IN POWER
R. Balasaraswathi¹, D. Divya², M. Harinikalayani³, I. Vivek Anand M.E.⁴, and Dr. T.S. Arun Samuel⁵

¹,²,³ Student, Department of ECE, National Engineering College, Kovilpatti, India
⁴,⁵ Department of ECE, National Engineering College, Kovilpatti, India
---------------------------------------------------------------------***----------------------------------------------------------------------
Abstract - Advances in VLSI technology have made it possible to build circuits that handle floating-point (FP) arithmetic. Requirements differ with the application: some processors offer a large repertoire of functions but deliver low performance, while others aim at the highest throughput, which demands more operations such as multiply and add and can increase latency. For real-time processing, performing a large number of FP operations is a major bottleneck because of the excessively long run time required. In many cases FP arithmetic needs additional operations such as alignment, normalization and rounding, causing a significant increase in area, power consumption and computational latency. This problem can be mitigated by employing fused FP add-subtract and dot-product units specially designed to perform those tedious tasks.

To achieve high performance while minimizing hardware complexity, existing rounding algorithms operating on the mantissa, exponent and sign generate two consecutive values in parallel and compute the rounded product from these values. This research work focuses on reducing computation time, area and power consumption compared with many existing floating-point adders by developing a new floating-point architecture. Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. The FFT processor architecture exploits the superior area-utilization efficiency of the single-path delay feedback (SDF) structure in memory and the single-path delay commutator (SDC) structure in the adder. The circuits are designed with Cadence Encounter RTL (digital design), and the simulation results are observed using the Cadence tool.
Key Words: Floating-point, ALU, Pipelining, Precision
1. INTRODUCTION
In digital signal processing, numbers are represented in either fixed-point or floating-point form [1]. This contribution deals with a binary representation of real numbers. The advantage of floating-point representation over fixed-point representation is that it supports a much wider range of values than integer representation. The Floating Point Unit (FPU) typically represents real numbers in a binary floating-point format [2], which increases speed and efficiency compared with fixed-point representation. Floating-point representation plays a major role in achieving accuracy and efficiency in digital and radar imaging and in reducing complexity during processing.
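The range advantage mentioned above can be seen by comparing a 32-bit integer with the IEEE 754 single-precision format. The sketch below uses only the Python standard library and hard-coded bit patterns for the float32 extremes; it is illustrative, not part of the processor design.

```python
# Contrast the representable range of a 32-bit two's-complement integer
# with IEEE 754 single precision (float32). Illustrative sketch only.
import struct

int32_max = 2**31 - 1  # 2147483647, about 2.1e9

# Largest finite float32: biased exponent 254, mantissa all ones -> 0x7F7FFFFF
float32_max = struct.unpack('>f', (0x7F7FFFFF).to_bytes(4, 'big'))[0]
# Smallest positive normal float32: biased exponent 1, mantissa 0 -> 0x00800000
float32_min_normal = struct.unpack('>f', (0x00800000).to_bytes(4, 'big'))[0]

print(f"int32 max:          {int32_max:.3e}")          # ~2.147e+09
print(f"float32 max:        {float32_max:.3e}")        # ~3.403e+38
print(f"float32 min normal: {float32_min_normal:.3e}") # ~1.175e-38
```

Both values occupy 32 bits, yet the floating-point format spans roughly 76 decimal orders of magnitude where the integer spans about 9, at the cost of a 24-bit significand.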
Floating-point units are designed for applications such as spacecraft, rocket launches and big data, since integer arithmetic lacks the range and precision these applications require; VLSI technology makes such units practical. There are many processors with fixed- or floating-point representation, and several blocks are used for arithmetic operations. In high-resolution radar imaging applications, Floating-Point (FP) Fast Fourier Transform (FFT) processors are often used to perform pulse compression.
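For reference, the computation such an FP FFT processor accelerates can be sketched as a radix-2 decimation-in-time FFT. This is a software illustration only; hardware pipelines such as SDF and SDC stream butterflies stage by stage rather than recursing.

```python
# Minimal radix-2 decimation-in-time FFT, the computation an FP FFT
# processor implements in hardware. Illustrative software sketch.
import cmath

def fft(x):
    """Recursive radix-2 DIT FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # FFT of even-indexed samples
    odd = fft(x[1::2])    # FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor W_n^k = exp(-2*pi*j*k/n) applied to the odd half
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle           # butterfly: add path
        out[k + n // 2] = even[k] - twiddle  # butterfly: subtract path
    return out

print(fft([1, 0, 0, 0]))  # impulse -> flat spectrum [1, 1, 1, 1]
```

Each butterfly performs one complex multiply and one add-subtract pair, which is exactly where the fused FP add-subtract and dot-product units discussed in the abstract pay off.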
2. FLOATING POINT
A system for representing numbers that would be too large or too small to store as integers is called floating point. Compared with fixed-point representation, floating-point representation is able to retain its resolution and accuracy [3]. A floating-point number consists of a sign, a mantissa and an exponent, as shown in Fig. 1. S is the sign bit (0 is positive and 1 is negative); the number is stored in sign-magnitude form. E is the exponent field: very large numbers have large positive exponents, while very small, close-to-zero numbers have negative exponents. The exponent field is what extends the range of representable values. M is the fraction field, or mantissa (the fraction after the binary point). The precision of FP numbers can be improved by using more bits in the fraction field.
Fig. 1. Representation of Floating Point
In 1985, the Institute of Electrical and Electronics Engineers (IEEE) established a technical standard for floating-point arithmetic, IEEE Standard 754 [4]. The IEEE 754 standard addressed many problems found in the diverse floating-point implementations of the time that made them difficult to use reliably and portably. Many hardware floating-point units use the standard.
In accordance with IEEE Standard 754, converting a decimal number to floating point consists of three steps: determining the sign, the exponent and the mantissa.
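The reverse direction, splitting a single-precision number into those three fields, can be sketched with the Python standard library. The function name `decode_float32` and the example value are illustrative, not from the paper.

```python
# Decode the sign, exponent and mantissa fields of an IEEE 754
# single-precision number. Illustrative sketch of the field layout.
import struct

def decode_float32(x):
    """Return (sign, biased exponent, mantissa) of x stored as float32."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31               # 1 sign bit
    exponent = (bits >> 23) & 0xFF  # 8 exponent bits, biased by 127
    mantissa = bits & 0x7FFFFF      # 23 fraction bits
    return sign, exponent, mantissa

# -6.5 = -1.101b * 2^2 -> sign 1, exponent 2 + 127 = 129,
# mantissa 0.625 * 2^23 = 5242880
print(decode_float32(-6.5))  # (1, 129, 5242880)
```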