INTERNATIONAL JOURNAL FOR INNOVATIVE RESEARCH IN MULTIDISCIPLINARY FIELD
ISSN: 2455-0620, Volume 4, Issue 10, Oct 2018
Monthly, Peer-Reviewed, Refereed, Indexed Journal with IC Value: 86.87, Impact Factor: 6.497
Publication Date: 31/10/2018. Available online on WWW.IJIRMF.COM

Design of Adaptive Compression Algorithm Elias Delta Code and Huffman

1 Eko Hariyanto, 2 Andysah Putera Utama Siahaan
Faculty of Science and Technology, Universitas Pembangunan Panca Budi, Medan, Indonesia
Email: 1 eko.hariyanto@dosen.pancabudi.ac.id, 2 andiesiahaan@gmail.com

1. INTRODUCTION:
Data compression is a branch of computer science derived from Information Theory, itself a branch of mathematics that developed around the end of the 1940s. The central figure of Information Theory is Claude Shannon of Bell Laboratories. Information Theory studies various aspects of information, including the storage and processing of messages. It also studies redundancy, the useless portion of a message: the more redundancy a message contains, the larger its size, and the effort to reduce redundancy is what ultimately gave birth to the subject of data compression. Information Theory uses entropy as a measure of how much information can be extracted from a message. The word "entropy" comes from thermodynamics; the higher the entropy of a message, the more information it contains. The entropy of a symbol is defined as the negative logarithm of its probability of occurrence, so the information content of a symbol x, measured in bits, is

E(x) = -log2 P(x)

where P(x) is the probability that x occurs. The entropy of a whole message is the sum of the entropies of all its symbols. Data is an important asset that must be protected and safeguarded [1][7]. Data compression is the process that converts an input, in the form of a source or raw data stream, into another data stream.
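The entropy definitions above can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; the function names are ours, and message_entropy uses the standard probability-weighted form of Shannon entropy in bits per symbol):

```python
import math

def symbol_entropy(p):
    """Information content, in bits, of a symbol with occurrence probability p:
    E(x) = -log2 P(x)."""
    return -math.log2(p)

def message_entropy(probabilities):
    """Shannon entropy of a message: the probability-weighted sum of the
    per-symbol information contents, i.e. average bits per symbol."""
    return sum(p * symbol_entropy(p) for p in probabilities)

# A symbol appearing half the time carries exactly 1 bit of information.
print(symbol_entropy(0.5))                    # 1.0
print(message_entropy([0.5, 0.25, 0.25]))     # 1.5
```

Note how a rarer symbol carries more information: symbol_entropy(0.25) is 2 bits, twice that of a symbol with probability 0.5, which is exactly the property variable-length codes such as Huffman exploit.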
Based on whether the compressed data can be reconstructed into the original data, data compression techniques are divided into two classes: lossless compression and lossy compression. Lossless compression allows the data to be restored to the original in full, without any loss of information, while lossy compression cannot completely restore the compressed data to the original during the decompression process [8]. Huffman coding, designed by David A. Huffman, was also used by Peter Fenwick in experiments to improve the performance of the Burrows-Wheeler transform [9]. The notion of puncturing comes from Error Control Codes (ECC), in which a codeword consists of the original data plus some check bits [10]; if several check bits are removed to shorten the code, the resulting code is called punctured. Adaptive coding is a variation of entropy encoding. It is well suited to data streams because it is dynamic and adapts to changes in the characteristics of the data. It requires more complicated encoders and decoders, which must keep their states synchronized, as well as more computing power. Adaptive coding exploits the model owned by a data compression method, that is, its prediction of the composition of the data; the encoder transmits the data contents by referring to that model.

2. THEORIES:
2.1 Data Compression
Data compression, in the context of computer science, is the science or art of representing the information contained in data in a denser form. The development of computers and multimedia has made data compression very important and useful in today's technology. Data compression is defined as a process that converts an input data stream into another data stream of smaller size. A data stream can be a file or a buffer in memory.
Data in the context of data compression encompasses all digital forms of information that can be processed by a computer program.

Abstract: Compression aims to reduce the size of data before storing it or moving it to a storage medium. Huffman and Elias Delta Code are the two algorithms used for the compression process in this research, and both are applied to compress text files. The two algorithms work in a similar way: they start by sorting characters based on their frequency, proceed to binary tree formation, and end with code formation. In the Huffman algorithm, the binary tree is formed from the leaves up to the root, which is called bottom-up tree construction. The Elias Delta Code method, in contrast, uses a different technique. Text file compression is performed by reading the input string from a text file and encoding it using both algorithms. The compression results show that, overall, the Huffman algorithm performs better than Elias Delta Code.

Key Words: Huffman, Elias Delta Code, adaptive, compression.
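For comparison with the Huffman scheme, the Elias Delta Code named in the abstract can be sketched as follows (a minimal encoder following the standard Elias delta definition; the function name is ours, not the paper's):

```python
def elias_delta(n):
    """Elias delta code for a positive integer n: encode the bit-length of n
    with Elias gamma, then append the bits of n after its leading 1."""
    assert n >= 1, "Elias delta is defined for positive integers"
    binary = bin(n)[2:]            # binary representation of n
    length_bits = bin(len(binary))[2:]  # binary representation of n's bit-length
    # Elias gamma of the length: (len(length_bits) - 1) zeros, then length_bits
    gamma = "0" * (len(length_bits) - 1) + length_bits
    return gamma + binary[1:]      # n's leading 1 is implied, so drop it

print(elias_delta(1))  # "1"
print(elias_delta(8))  # "00100000"
```

Unlike Huffman, this code is universal: it needs no frequency table, since each integer's codeword is fixed in advance, which is why the two methods can behave differently on the same text.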