Abstract — A multilayer neural network based on multi-valued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific properties and advantages. Its backpropagation learning algorithm does not require differentiability of the activation function. The functionality of MLMVN is higher than that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the mapping being implemented make it possible to solve complex problems with a simpler network. The MLMVN can be used to solve non-standard recognition and classification problems that cannot be solved using other techniques. In this paper we use the MLMVN as a tool for the blur identification problem. Prior knowledge of the distorting operator and its parameters is of crucial importance in blurred image restoration.
I. INTRODUCTION
A multilayer neural network based on multi-valued neurons (MLMVN) was introduced in [1] and then investigated and developed further in [2]. This network consists of multi-valued neurons (MVN). The MVN is a neuron with complex-valued weights and an activation function defined as a function of the argument of the weighted sum. This activation function was proposed in 1971 in the pioneering paper of N. Aizenberg et al. [3].
The multi-valued neuron was introduced in [4]. It is based on the principles of multiple-valued threshold logic over the field of complex numbers formulated in [5] and then developed in [6]. A comprehensive overview of the discrete-valued MVN, its properties, and its learning is presented in [6]. The continuous-valued MVN and its learning are considered in [1],[2]. The most important properties of MVN are: complex-valued weights; inputs and outputs coded by the K-th roots of unity (the discrete-valued MVN) or lying on the unit circle (the continuous-valued MVN); and an activation function that maps the complex plane onto the unit circle. It is important that MVN learning reduces to movement along the unit circle. The MVN learning algorithm is based on a simple linear error-correction rule and does not require differentiability of the activation function.

This work was supported in part by the Collaborative Research Center for Computational Intelligence of the University of Dortmund (SFB 531, Dortmund, Germany) and by the Academy of Finland, project No. 213462 (Finnish Centre of Excellence program, 2006-2011).
Igor Aizenberg is with Texas A&M University-Texarkana, P.O. Box 5518, 2600 N. Robison Rd., Texarkana, Texas 75505 USA, e-mail: igor.aizenberg@tamut.edu.
Dmitriy Paliy and Jaakko T. Astola are with the Tampere International Center for Signal Processing, Tampere University of Technology, P.O. Box 553, 33101 Tampere, Finland, e-mail: firstname.lastname@tut.fi.
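To make the description above concrete, the following is a minimal numpy sketch (not taken from the paper; the function names are illustrative) of the two MVN activation functions it mentions: the discrete one, which maps a complex weighted sum to the nearest of the K sectors bounded by the K-th roots of unity, and the continuous one, which projects the weighted sum onto the unit circle.

```python
import numpy as np

def mvn_activation(z, K):
    """Discrete MVN activation (illustrative): maps the complex weighted
    sum z to the K-th root of unity exp(i*2*pi*j/K), where j is the index
    of the sector of the complex plane containing z."""
    ang = np.angle(z) % (2 * np.pi)           # argument of z in [0, 2*pi)
    j = int(np.floor(K * ang / (2 * np.pi)))  # sector index, 0..K-1
    return np.exp(2j * np.pi * j / K)

def mvn_activation_continuous(z):
    """Continuous MVN activation (illustrative): projects z onto the
    unit circle, preserving its argument."""
    return z / abs(z)
```

Both variants map the whole complex plane onto the unit circle, which is why learning reduces to movement along the circle.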
Different applications of MVN have been considered in recent years: MVN as the basic neuron in cellular neural networks [6], in neural-based associative memories [6],[7]-[10], in a variety of pattern recognition systems [10]-[12], and in the MLMVN [1],[2]. MLMVN outperforms a classical multilayer feedforward network and different kernel-based networks in terms of learning speed, network complexity, and classification/prediction rate on such popular benchmark problems as parity-n, the two spirals, sonar, and Mackey-Glass time-series prediction [1],[2]. These properties show that MLMVN is more flexible and adapts faster than other solutions. In this paper we apply MLMVN to identify the blur and its parameters, which is a key problem in image restoration.
Usually blur refers to the low-pass distortions introduced into an image. It can be caused, e.g., by relative motion between the camera and the original scene, by an out-of-focus optical system, by atmospheric turbulence (optical satellite imaging), by aberrations in the optical system, etc. [13]. Any spatially invariant blur can be expressed by a convolution kernel in an integral equation [14],[15]. Hence, restoration (deblurring) of a blurred image is an ill-posed inverse problem [16], and regularization is commonly used when solving it [16].
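The convolution model of spatially invariant blur can be sketched as follows (a minimal numpy illustration, not from the paper; the Gaussian PSF and function names are assumptions for the example). Since the blur is a convolution, it acts as a product in the Fourier domain, which is also what makes inversion ill-posed wherever the kernel's frequency response is small.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Illustrative Gaussian point spread function, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def blur(image, psf):
    """Spatially invariant blur as circular convolution, computed in the
    Fourier domain: blurred = IFFT( FFT(image) * FFT(psf) )."""
    pad = np.zeros_like(image, dtype=float)
    k = psf.shape[0]
    pad[:k, :k] = psf
    # shift so the PSF center sits at the origin (circular convolution)
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))
```

Because the PSF sums to one, blurring preserves the mean intensity of the image while attenuating its high-frequency content.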
There is a variety of sophisticated and efficient deblurring techniques, such as deconvolution based on the Wiener filter [13],[17], nonparametric image deblurring using local polynomial approximation with spatially adaptive scale selection based on the intersection of confidence intervals rule [17], Fourier-wavelet regularized deconvolution [18], and the expectation-maximization algorithm for wavelet-based image deconvolution [19]. All these techniques assume prior knowledge of the blurring kernel, characterized by the point spread function (PSF), and of its parameters.
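As an illustration of why the PSF must be known in advance, a minimal sketch of the classical Wiener deconvolution mentioned above is given below (numpy, not from the paper; the constant noise-to-signal ratio `nsr` and function names are assumptions of this example).

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Classical Wiener deconvolution in the Fourier domain, assuming the
    PSF is known: X_hat = Y * conj(H) / (|H|^2 + NSR), where H is the
    frequency response of the PSF and NSR an assumed constant
    noise-to-signal power ratio acting as regularization."""
    pad = np.zeros_like(blurred, dtype=float)
    k = psf.shape[0]
    pad[:k, :k] = psf
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    Y = np.fft.fft2(blurred)
    X_hat = Y * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X_hat))
```

The NSR term keeps the inversion stable where |H| is small; with a wrong PSF the same formula amplifies the mismatch instead of removing the blur, which is exactly the motivation for blur identification.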
When the blurring operator is unknown, image restoration becomes a blind deconvolution problem [20]-[22]. Most methods for solving it are iterative and therefore computationally costly. Due to the presence of noise, they suffer from stability and convergence problems [23].
An original solution of the blur identification problem based on MVN neural networks was proposed in [12] and [24]. Any blur specifically distorts the
Multilayer Neural Network based on Multi-Valued Neurons and the Blur Identification Problem
Igor Aizenberg, Member, IEEE, Dmitriy Paliy, and Jaakko T. Astola, Fellow, IEEE
2006 International Joint Conference on Neural Networks, Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada, July 16-21, 2006