Topics in Intelligent Computing and Industry Design (ICID) 2(2) (2020) 40-44
Website: www.intelcomp-design.com
DOI: 10.26480/etit.02.2020.40.44
Cite The Article: Neha Sharma, Usha Batra (2020). Evaluation of Lossless Algorithms for Data Compression. Topics in Intelligent Computing and Industry Design, 2(2): 40-44.
ISBN: 978-1-948012-17-1
EVALUATION OF LOSSLESS ALGORITHMS FOR DATA COMPRESSION
Neha Sharma a,*, Usha Batra b
a Research Scholar, G D Goenka University, Gurugram, 122103, India
b Assistant Dean, G D Goenka University, Gurugram, 122103, India
*Corresponding Author Email: nehasharma0110@gmail.com
This is an open access article distributed under the Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
ARTICLE DETAILS

Article History: Received 25 October 2020; Accepted 26 November 2020; Available online 03 December 2020

ABSTRACT
Nowadays, communication and exchange of information over the internet, which includes sending e-mails and text messages via online apps such as messengers, has become the need of the hour. While transmitting data, critical aspects such as the size of the message or file need to be handled with extreme care. Furthermore, transmission time is directly proportional to file size, i.e. a smaller file always takes less time to transmit. Compression techniques are used to decrease the size of a file without impacting its quality. This paper demonstrates the use of two lossless compression techniques on images so that they become suitable for information-security techniques such as steganography and cryptography. Thus, the objective of the paper is to reduce image size using the Huffman encoding and run-length encoding algorithms. The algorithms are implemented and their performance is analysed by evaluating the results on different parameters, such as compression ratio, compressed file size, and compression and decompression time. The paper concludes with an analysis of the results obtained.
KEYWORDS
Compressed File Size, Compression Ratio, Compression Time, Decompression Time.
1. INTRODUCTION
Data compression is a procedure that converts data from one form to another such that, post compression, it contains the same information but at a reduced size (Patil and Kulat, 2017). The main benefits of compressing data are reduced transmission bandwidth and reduced storage requirements. This is very useful because storing and transmitting large files requires huge resources. One of the major areas of application for data compression techniques is digital images. There are a number of applications of image processing, such as medical imaging and satellite imaging, where the image size is large and requires more storage capacity or higher bandwidth for transmission in its original form over a communication channel. After compression, when the size of the data is reduced, it gives us the leverage to send more data; reducing the size by half is equivalent to doubling the storage capacity. With this extra storage space, data can be stored hierarchically at better and higher levels, which also avoids extra load on the input/output devices of a computer system.
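The size argument above can be stated as two simple metrics. As a minimal sketch (our own illustration, assuming the common convention of compression ratio as original size over compressed size):

```python
def compression_ratio(original_size: int, compressed_size: int) -> float:
    """Ratio > 1 means the file shrank; 2.0 means half the original size."""
    return original_size / compressed_size

def space_saving(original_size: int, compressed_size: int) -> float:
    """Fraction of the original size eliminated by compression."""
    return 1 - compressed_size / original_size

# Halving a 10 MB file doubles the effective storage for that data:
print(compression_ratio(10_000_000, 5_000_000))  # 2.0
print(space_saving(10_000_000, 5_000_000))       # 0.5
```

Under this convention, a ratio of 2.0 corresponds exactly to the "half the size, double the capacity" observation made above.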
2. LITERATURE REVIEW
The RLE compression algorithm has been used widely because of its simplicity and low complexity. The algorithm is easy to implement, and researchers have made many modifications to it, such as varying the way the pixels are scanned, i.e. in row or column order (Karthikeyan, 2014). Other authors have focused on calculating the bit depth of the runs of repeated pixels and have used enhanced entropy coding (Suarjaya, 2012). Albahadily and Tsviatkou (2016) modified the run-length encoding algorithm to achieve a reduced encoded size and reduced encoding time. Canard et al. (2017) adapted the run-length algorithm to fit within the constraints of FHE execution and then analysed it to optimize FHE execution efficiency. A hybrid DWT-DCT algorithm has been applied (Rafea and Salman, 2018) to compress medical images, with an adaptive RLE algorithm used to encode the runs of zeros created by the hybrid algorithm, achieving better compression-ratio results. Khassaweneh and Alshorman (2020) used the Frei-Chen bases technique for compressing large image data and then a modified RLE algorithm to improve the compression factor without adding any distortion, so that a high-quality decompressed image is received. The use of the Huffman encoding algorithm also yields high-quality compression results. Erdal and Erguzen (2019) used Huffman encoding and arithmetic encoding to provide a solution for long bit sequences. Huffman encoding in general gives better compression results (Ajala et al., 2018); the authors combined Huffman encoding with the LZW algorithm to achieve a cheap, reliable and efficient system.
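The two algorithms surveyed above can be expressed compactly. The following is an illustrative sketch of our own (a simplification, not the implementation evaluated in this paper): RLE collapses runs of repeated symbols into (value, length) pairs, while Huffman coding assigns shorter codes to more frequent symbols; returning only the code lengths is enough to estimate the compressed size in bits.

```python
import heapq
from collections import Counter

def rle_encode(data: bytes):
    """Encode data as a list of (byte value, run length) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def rle_decode(runs):
    """Expand (value, length) pairs back into the original bytes."""
    return bytes(b for b, n in runs for _ in range(n))

def huffman_code_lengths(data: bytes):
    """Build a Huffman tree bottom-up and return each symbol's code length."""
    heap = [(count, i, {sym: 0})
            for i, (sym, count) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    next_id = len(heap)  # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # Merging two subtrees puts every symbol one level deeper.
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (w1 + w2, next_id, merged))
        next_id += 1
    return heap[0][2]

sample = b"aaaaaabbbc"
runs = rle_encode(sample)            # [(97, 6), (98, 3), (99, 1)]
assert rle_decode(runs) == sample    # lossless round trip
lengths = huffman_code_lengths(sample)
bits = sum(lengths[s] * n for s, n in Counter(sample).items())
print(bits)  # 14 bits versus 80 bits uncompressed
```

Note how both methods exploit different kinds of redundancy: RLE benefits only from consecutive repetition, whereas Huffman coding benefits from a skewed symbol-frequency distribution anywhere in the file, which is why the two are often compared on the parameters listed in the abstract.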
3. DATA COMPRESSION
There are two types of compression categories: Lossy Compression and
Lossless Compression.
3.1 Lossy Compression
A lossy compression technique is one that discards the less important data. A file compressed using a lossy technique will not be exactly the same as the original; after decompression we obtain a close approximation of the original file (Klein et al., 2019). Lossy compression shrinks the bit count by finding and eliminating unnecessary information.
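As a minimal sketch of this idea (our own hypothetical example, not from the paper), coarse quantization shows how lossy compression discards fine detail irreversibly:

```python
def quantize(samples, step):
    """Snap each sample onto a coarse grid; the fine detail is discarded
    for good, so decompression can only approximate the original."""
    return [round(s / step) * step for s in samples]

original = [12, 14, 15, 33, 35]
lossy = quantize(original, 10)
print(lossy)  # [10, 10, 20, 30, 40] - close to, but not equal to, the input
```

The quantized values need fewer bits to store, but no decoder can recover the exact original, which is precisely what distinguishes lossy from lossless compression.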
This paper was presented at the International Conference on Contemporary Issues in Computing (ICCIC-2020) - Virtual, IETE Sector V, Salt Lake, Kolkata, 25th-26th July 2020.