Exponential Image Enhancement in Daytime Fog Conditions
Mihai Negru, Sergiu Nedevschi and Radu Ioan Peter
Abstract— Images captured in fog conditions have degraded
contrast, which makes current image processing applications
sensitive and error-prone. In this paper we propose an efficient
single-image enhancement algorithm suitable for daytime fog
conditions, based on an original mathematical model for
computing the atmospheric veil that takes into account the
variation of fog density with distance. This model is inspired
by the functions that appear in partitions of unity in
differential geometry. In images captured in fog conditions,
the fog usually has a very low density in front of the camera,
and this density increases non-linearly with distance, so that
objects at greater distances are no longer visible. By using
our mathematical model we obtain superior reconstructions of
the original fog-free image compared to traditional methods.
Another advantage of our method is its ability to adapt the
model to the density of the fog. A quantitative and qualitative
evaluation is performed on both synthetic and real camera
images. This evaluation shows that our mathematical model is
more suitable for image enhancement in both homogeneous and
heterogeneous fog conditions. Our algorithm performs image
enhancement in real time for both color and grayscale images.
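The abstract mentions functions used to build partitions of unity in differential geometry. A classical example of such a function (illustrative only, not necessarily the exact function used in the paper's veil model) is the smooth step constructed from the bump function exp(-1/x), which transitions smoothly from 0 to 1 and could model a fog density that is negligible near the camera and saturates at a distance:

```python
import numpy as np

def bump(x):
    """Smooth non-analytic function: 0 for x <= 0, exp(-1/x) for x > 0."""
    out = np.zeros_like(x, dtype=float)
    pos = x > 0
    out[pos] = np.exp(-1.0 / x[pos])
    return out

def smooth_step(x):
    """C-infinity transition: 0 for x <= 0, 1 for x >= 1. This is the
    standard construction used to build partitions of unity."""
    return bump(x) / (bump(x) + bump(1.0 - x))

# Evaluate the transition over a range of normalized distances.
d = np.array([-0.5, 0.0, 0.25, 0.5, 0.75, 1.0, 2.0])
print(smooth_step(d))  # rises smoothly from 0 to 1
```

The denominator is never zero because for every x at least one of x and 1 - x is positive, so the function is well defined everywhere.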
I. INTRODUCTION
Various natural phenomena, such as haze, fog, mist, and rain,
can reduce the quality of images and diminish visibility. In
these situations the visibility distance is decreased by the
absorption and scattering of light by atmospheric particles.
The light reflected from objects in the captured scene is
attenuated by scattering along the line of sight of the camera.
Images of outdoor scenes captured in fog conditions are
drastically degraded. This weather phenomenon is especially
dangerous in driving situations, because drivers tend to
overestimate the visibility distance while traveling in fog
and drive at excessive speeds [1]. Due to the presence of fog,
the visibility distance decreases exponentially, making fog one
of the most dangerous weather conditions for driving.
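The exponential decrease of visibility mentioned above is classically described by Koschmieder's law, under which the apparent contrast of an object decays as exp(-beta*d) with distance d and extinction coefficient beta. A small numerical sketch (the beta value below is a hypothetical choice, not a value from this paper):

```python
import numpy as np

def apparent_contrast(c0, beta, d):
    """Koschmieder's law: contrast of an object with intrinsic
    contrast c0 seen through fog at distance d (metres)."""
    return c0 * np.exp(-beta * d)

def visibility_distance(beta):
    """Meteorological visibility: the distance at which apparent
    contrast drops below the conventional 5% perception threshold,
    i.e. V = -ln(0.05)/beta (approximately 3/beta)."""
    return -np.log(0.05) / beta

beta = 0.03  # per metre; a hypothetical, moderately dense fog
print(visibility_distance(beta))           # ~99.9 m
print(apparent_contrast(1.0, beta, 50.0))  # ~0.22, already a strong loss
```

Halving the visibility distance corresponds to doubling beta, which is why even modest increases in fog density are so dangerous for drivers.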
Some of the negative effects of fog on image quality are the
loss of contrast and the alteration of the natural colors in
the captured image. In addition, the scattering of the
transmitted light causes additional lightness in parts of the
image [2]. This effect is called air-light or the atmospheric
veil. To overcome these impediments we must either adapt the
operating parameters of the camera or try to detect the
presence of fog and remove its effects from the images. In
this work we focus on the second approach, namely restoring
the contrast and enhancing the quality of the original foggy
image.

Mihai Negru is with the Image Processing and Pattern Recognition
Group, Computer Science Department, Technical University of
Cluj-Napoca, Romania. Mihai.Negru@cs.utcluj.ro
Sergiu Nedevschi is the head of the Image Processing and Pattern
Recognition Group, Computer Science Department, Technical University
of Cluj-Napoca, Romania. Sergiu.Nedevschi@cs.utcluj.ro
Radu Ioan Peter is with the Mathematics Department, Technical
University of Cluj-Napoca, Romania. Ioan.Radu.Peter@math.utcluj.ro
Several algorithms have been proposed in the literature for
restoring the contrast of foggy images. These methods can be
categorized into two groups: model-based and non-model-based
enhancement techniques. Non-model-based methods perform image
enhancement relying only on the information obtained from the
image, such as histogram equalization or adaptive histogram
equalization [3], and approaches based on Retinex theory [4].
Unfortunately, these methods do not maintain color fidelity
and are not suitable for real-time computer vision.
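As a concrete instance of a non-model-based technique, global histogram equalization can be sketched in a few lines of NumPy. This is a minimal illustration (the function name and the toy image are ours); practical systems typically use the adaptive variant cited in [3]:

```python
import numpy as np

def equalize_histogram(gray):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities through the normalized cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[np.nonzero(hist)[0][0]]  # cdf of the darkest occupied bin
    scale = cdf[-1] - cdf_min
    lut = np.round((cdf - cdf_min) / max(scale, 1) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]

# A low-contrast "foggy" patch: intensities squeezed into a narrow band.
foggy = np.full((4, 4), 120, dtype=np.uint8)
foggy[2:, 2:] = 140
out = equalize_histogram(foggy)
print(out.min(), out.max())  # the two levels are spread to 0 and 255
```

The example also shows why such methods lose color fidelity: the remapping depends only on the intensity distribution, not on any physical model of the fog, so it stretches contrast regardless of the true scene radiance.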
Model-based contrast restoration techniques can be further
divided into two categories: those with given depth and those
with unknown depth. When the depth is assumed to be known,
this information can be used to restore the original contrast
of the image. The authors of [5], [6] and [7] studied different
haze removal approaches based on given depth information. The
depth is inferred from the altitude, tilt and position of the
camera [5], through the manual approximation of the sky area
and vanishing point in the captured image [6], or by
approximating the geometrical model of the analyzed image
scene [7]. Because the depth information in all these
approaches is provided by the user and is erroneous and
unreliable, these methods are not feasible for real-world
applications.
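For the given-depth category, the restoration step itself is direct under the standard fog image-formation model I(x) = J(x)t(x) + A(1 - t(x)) with transmission t(x) = exp(-beta*d(x)). The sketch below is our illustration of that inversion, not the exact procedure of the cited works; function names and parameter values are assumptions:

```python
import numpy as np

def restore_with_depth(I, depth, A=1.0, beta=0.05, t_min=0.05):
    """Invert the fog model when depth is known:
    J = (I - A*(1 - t)) / t, with t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)
    t = np.maximum(t, t_min)  # avoid amplifying noise at large distances
    J = (I - A * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)

# Round trip on synthetic data: fog a known scene, then restore it.
J_true = np.array([[0.2, 0.8], [0.5, 0.3]])
depth = np.array([[10.0, 10.0], [30.0, 30.0]])  # metres
t = np.exp(-0.05 * depth)
I_foggy = J_true * t + 1.0 * (1 - t)
J_rec = restore_with_depth(I_foggy, depth)
print(np.allclose(J_rec, J_true))  # True
```

The round trip succeeds only because the depth map is exact; this is precisely why the user-supplied, erroneous depth in the methods above makes them unreliable in practice.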
Methods for restoring contrast without depth information are
presented in [2], [8], [9] and [10]. They all use a single
image for performing image enhancement and a mathematical
model that describes the fog in the image. Oakley [2] assumes
that the distance between the camera and the points in the
scene is approximately constant, so that the air-light over
the whole image is uniform. He then estimates the air-light by
minimizing a cost function on the whole image. This cost
function is a scaled version of the standard deviation of the
normalized brightness in the image. This approach is only
suitable for simple contrast-loss correction of broadcast
images, and fails in scenes where the distance to the scene
points is not constant, such as driving scenarios.
The method proposed by Tan in [8] restores the contrast of the
original image by using a cost function in a Markov Random
Field setting to estimate the air-light. The proposed method
can produce halos at depth discontinuities.
In [9] the authors introduce the dark channel prior (DCP),
which states that in most non-sky scenes at least one
2014 IEEE 17th International Conference on
Intelligent Transportation Systems (ITSC)
October 8-11, 2014. Qingdao, China
978-1-4799-6078-1/14/$31.00 ©2014 IEEE