International Journal of Computer Science Trends and Technology (IJCST) – Volume 3 Issue 3, May-June 2015
ISSN: 2347-8578  www.ijcstjournal.org

Arithmetic Coding for Lossless Data Compression – A Review

Ezhilarasu P [1], Krishnaraj N [2], Dhiyanesh B [3]
Associate Professor [1], Assistant Professor [3]
Department of Computer Science and Engineering
Hindusthan College of Engineering and Technology, Coimbatore
Head of the Department [2]
Department of Information Technology
Sree Sastha Institute of Engineering and Technology, Chennai
Tamil Nadu – India

ABSTRACT
In this paper, the arithmetic coding data compression technique is reviewed. First, arithmetic encoding is performed on a sample input. Decoding is then applied to the encoded result, which regenerates the original uncompressed input data. The compression ratio, space savings, and average bits per symbol are also calculated.

Keywords:- Arithmetic Coding, Compression, Encoding, Decoding.

I. INTRODUCTION
Data compression is the representation of data in such a way that the storage needed for the compressed data is less than the size of the input data. The decompression technique regenerates the source data. If some data is lost after decompression, the compression is called lossy compression; if no data is lost, it is called lossless compression. Arithmetic coding is a lossless compression technique. Every compression technique is judged on two important aspects: complexity in terms of space and complexity in terms of time.

Arithmetic coding generates variable-length codes. It bypasses the traditional approach of replacing each input character with a specific code word. Instead, it uses a combination of integers and floating-point numbers. The integers initially represent the two limits: an upper limit of one and a lower limit of zero. In the subsequent steps, these limits change into floating-point numbers that represent the input.
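The limit-narrowing process described above can be sketched as follows. This is a minimal illustration, not the paper's own worked example: the three-symbol probability model and the message "abc" are hypothetical, and a production coder would use integer renormalization rather than raw floating-point numbers.

```python
def build_ranges(probs):
    """Map each symbol to its cumulative [low, high) slice of [0, 1)."""
    ranges, cum = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (cum, cum + p)
        cum += p
    return ranges

def encode(message, probs):
    """Narrow [low, high) once per symbol; any number inside the final
    interval identifies the entire message."""
    ranges = build_ranges(probs)
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_low, s_high = ranges[sym]
        high = low + span * s_high   # upper limit shrinks or stays put
        low = low + span * s_low     # lower limit grows or stays put
    return (low + high) / 2          # pick a tag inside the interval

probs = {'a': 0.5, 'b': 0.3, 'c': 0.2}   # hypothetical model
tag = encode("abc", probs)               # final interval: [0.37, 0.40)
```

Each symbol selects its probability slice of the current interval, so the interval width after encoding equals the product of the symbol probabilities, which is what lets arithmetic coding approach the entropy bound.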
The output of arithmetic encoding is a collection of bits derived from the final floating-point number. In arithmetic decoding, that binary string is converted back into a fractional number, from which the original input is regenerated.

II. RELATED WORK
Shannon [1948] showed that it is possible to generate a better compression code from the probability model of the source, achieving the minimum average bits per symbol for a given input [1]. Fano [1949] also produced a near-optimal code in his work on data compression [2]. Huffman, a student of Fano, worked on producing a code better than Shannon-Fano coding. Shannon-Fano coding is a top-down approach; Huffman [1952] used a bottom-up approach to produce a more nearly optimal code than the work of his teacher [3].

The significant advantages of arithmetic coding are its flexibility and optimality. In most cases, Huffman coding also produces very nearly optimal codes [4, 5, 6, 7]. The main limitation of arithmetic coding is its slowness: Huffman coding and Lempel-Ziv coding are faster [8, 9]. Approximation techniques have been used to increase the speed of arithmetic coding [10, 11, 12, 13].

III. ARITHMETIC ENCODING
In the field of data compression, arithmetic coding is an entropy encoding technique. The floating-point number is calculated from the probabilities of the characters. In each step of arithmetic coding, the value of the lower limit either increases or remains the same, while the value of the upper limit either decreases or remains the same. Hence the lower limit is always greater than or equal to the previous lower limit, and the upper limit is always less than or equal to the previous upper limit.
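The decoding side inverts the narrowing step by step: it locates the probability slice containing the received fractional value, emits that slice's symbol, and rescales the value back to [0, 1). The sketch below assumes the same hypothetical three-symbol model as above and passes the message length explicitly instead of using an end-of-message symbol, which real coders typically employ.

```python
def build_ranges(probs):
    """Map each symbol to its cumulative [low, high) slice of [0, 1)."""
    ranges, cum = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (cum, cum + p)
        cum += p
    return ranges

def decode(tag, length, probs):
    """Undo one narrowing step per symbol: find the slice holding the
    tag, output that symbol, then stretch the slice back to [0, 1)."""
    ranges = build_ranges(probs)
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in ranges.items():
            if s_low <= tag < s_high:
                out.append(sym)
                tag = (tag - s_low) / (s_high - s_low)
                break
    return ''.join(out)

probs = {'a': 0.5, 'b': 0.3, 'c': 0.2}   # hypothetical model
# 0.385 lies inside the interval an encoder would produce for "abc"
message = decode(0.385, 3, probs)        # recovers "abc"
```

Because every step of decoding mirrors a step of encoding exactly, no information is lost, which is what makes arithmetic coding a lossless technique.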