Chaos, Solitons and Fractals 106 (2018) 16–22
Fractal image compression using upper bound on scaling parameter

Swalpa Kumar Roy a,∗, Siddharth Kumar b, Bhabatosh Chanda c, Bidyut B. Chaudhuri a, Soumitro Banerjee d

a Computer Vision & Pattern Recognition Unit, Indian Statistical Institute, Kolkata 700108, India
b Department of Computer Science, San Jose State University, San Jose, CA 95192, USA
c Electronics & Communication Sciences Unit, Indian Statistical Institute, Kolkata 700108, India
d Department of Physical Sciences, Indian Institute of Science Education and Research, Mohanpur Campus, Kolkata 741246, India
Article info
Article history:
Received 14 June 2017
Revised 29 October 2017
Accepted 9 November 2017
Keywords:
Fractal coding speedup
Scaling parameter upper-bound
Image data compression
Abstract
This paper presents a novel approach to calculating the affine parameters of fractal encoding, in order to reduce its computational complexity. A simple but efficient approximation of the scaling parameter is derived that satisfies all the properties necessary to achieve convergence. It allows us to replace the costly process of matrix multiplication with a simple division of two numbers. We have also proposed a modified horizontal-vertical (HV) block partitioning scheme, and some new ways to improve the encoding time and decoded quality over their conventional counterparts. Experiments on standard images show that our approach yields performance similar to state-of-the-art fractal-based image compression methods, in much less time.
© 2017 Elsevier Ltd. All rights reserved.
1. Introduction
Fractal image compression (FIC) is one of the important meth-
ods of gray scale image coding, fully automated by Jacquin [1]. The
basic FIC is based on the observation that images usually exhibit
affine redundancy. Here an image is segmented into a number of different-sized blocks, and the encoding process consists of approximating the small image blocks, called range blocks (RBs), from the larger blocks of the image, called domain blocks (DBs), by searching for the best-matching affine transformation over a DB pool, much akin to image compression by the vector quantization (VQ) method
[2]. In the encoding process, separate transformations for each RB
are obtained. For decoding, the set of affine transformations, when iterated upon an arbitrary initial image, produces a fixed point (attractor) that approximates the target image. This scheme, named
Partitioned Iterative Function System (PIFS), was proposed by Fisher
[3]. Although block matching is time-consuming, FIC offers a high compression ratio and very fast decoding. Since the domain search involves pixel-by-pixel matching as well as contraction and rotation in order to find a suitable affine transform, FIC encoding presents a computational challenge. Hence, reducing the matching complexity
has been the subject of comprehensive research efforts. Such methods may be grouped into classification-based approaches, feature-vector-based approaches, and meta-heuristic approaches.

∗ Corresponding author.
E-mail addresses: swalpa@students.iiests.ac.in (S.K. Roy), siddharth.kumar@sjsu.edu (S. Kumar), chanda@isical.ac.in (B. Chanda), bbc@isical.ac.in (B.B. Chaudhuri), soumitro@iiserkol.ac.in (S. Banerjee).
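To make the per-pair matching cost concrete: in the conventional scheme the paper seeks to accelerate, each candidate RB-DB pair requires a least-squares fit of a scaling s and offset o such that s·D + o ≈ R. The sketch below shows that baseline computation (function and variable names are ours, not from the paper; the domain block is assumed already downsampled to the range-block size):

```python
import numpy as np

def affine_fit(domain, rng):
    """Least-squares scaling s and offset o so that s*domain + o ~ rng.
    Returns (s, o, squared matching error)."""
    d = domain.ravel().astype(float)
    r = rng.ravel().astype(float)
    n = d.size
    # closed-form least-squares solution for s
    denom = n * np.dot(d, d) - d.sum() ** 2
    s = 0.0 if denom == 0 else (n * np.dot(d, r) - d.sum() * r.sum()) / denom
    o = (r.sum() - s * d.sum()) / n
    err = float(np.sum((s * d + o - r) ** 2))
    return s, o, err
```

Evaluating this fit for every DB in the pool is what makes exhaustive search expensive; the paper's upper bound on s is meant to cheapen exactly this step.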
Classification based approaches use one or more common char-
acteristics like mean and standard deviation of pixel intensity of
the image blocks as features. These features are then used to classify a DB into one of a fixed number of classes. The domain inspection is then limited to DBs in the same class as the RB, thereby reducing the search time. Fisher's [4] classification scheme is one such approach, which divides a block into four quadrants of the same
size. Then, for each quadrant, the average pixel intensity and the corresponding variance are calculated and used to classify the DBs into one of the 4P4 = 24 classes, based on their relative ordering. Hurtgen & Stiller [5] and Bhattacharya [6] modified Fisher's method and obtained improved performance.
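The ordering step of this classification can be sketched as follows. Only the quadrant-mean ordering is shown (the variance ordering is handled analogously), and `fisher_class` is an illustrative name, not from [4]:

```python
import numpy as np
from itertools import permutations

# one class per ordering of the four quadrant means: 4P4 = 24
PERM_INDEX = {p: i for i, p in enumerate(permutations(range(4)))}

def fisher_class(block):
    """Class index (0..23) of a square block, from the relative
    ordering of its quadrant mean intensities."""
    h, w = block.shape
    quads = (block[:h//2, :w//2], block[:h//2, w//2:],
             block[h//2:, :w//2], block[h//2:, w//2:])
    means = [float(q.mean()) for q in quads]
    order = tuple(int(i) for i in np.argsort(means))
    return PERM_INDEX[order]
```

Only DBs whose class index matches that of the RB need to be searched, which is where the speed-up comes from.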
Saupe [7] proposed a technique based on feature vectors for improving the DB search. It achieves good output fidelity and better compression efficiency with fewer comparisons. But the drawbacks of this method are that the encoding time is significantly increased and the feature vectors have a large dimension. Saupe's [8,9] idea of discarding low-variance DBs gives a considerable gain in speed, but it is still slower than some previous methods [10]. To further improve the selection of relevant DBs, Tong [11] proposed an adaptive method to eliminate the DBs with variance below a certain threshold, hence the name Adaptive Search. This method allows for a good speed-up by eliminating the
irrelevant blocks using the value of the scaling parameter only. Fur-
ther improvements in speed were obtained by discarding some of
https://doi.org/10.1016/j.chaos.2017.11.013