Robust Identification of Uncertain Dynamical
Systems where Adaptation is Impossible
James T. Lo and Devasis Bassu
Department of Mathematics and Statistics
University of Maryland Baltimore County
Baltimore, MD 21250, U.S.A.
e-mail: jameslo@umbc.edu
Abstract - This paper shows that training with the risk-averting error criterion yields a robust system identifier in the presence of an uncertain environmental parameter that is impossible to adapt to. Numerical results comparing least-squares and risk-averting identifiers illustrate the efficacy of the proposed method.
I. Introduction
Robust control and signal processing have been intensively studied over the past 20 years for averting excessively large or disastrous errors [1], [7], [6], [2].
There have been three situations in which robust processing has been used. First, an environmental parameter for the processor (i.e., the controller or signal processor) is observable, but an adaptive processor is difficult to design. Second, the environmental parameter is unobservable. Third, the operating environment involves a fine feature or dynamics under-represented in the measurements, or is very complex (e.g., nonlinear), so that an analytic closed-form solution is difficult, if not impossible, to obtain.
The adaptive neural networks with long- and short-term memories described in [3], [4] are simple, systematic, general, and effective for adaptation to an observable parameter. They even allow adaptation to an unobservable parameter, and eliminate the need for robust processing, as long as the parameter stays constant long enough for a proper adjustment of the short-term memory. It is shown in [5] that fine features and under-represented dynamics can be treated effectively by robust neural processors. This leaves open only the problem of processing in the presence of an uncertain environmental parameter that is impossible to adapt to.
We note that robust control and signal processing have been studied mainly for linear environments, and robust processors have been developed for the same. However, the same cannot be said about nonlinear environments. Moreover, robustness in the literature usually refers to robustness with respect to the pessimistic H∞ and minimax criteria. Although the risk-sensitive criterion has been used, the idea of different degrees of robustness has not been found in the control and signal processing literature by the present authors.

This work was supported in part by the National Science Foundation under Grant No. ECS0114619 and the Army Research Office under Contract DAAD19-99-C-0031. The contents of this paper do not necessarily reflect the position or the policy of the Government. Devasis Bassu is also with Applied Research, Telcordia Technologies, 445 South Street, Morristown, NJ 07960 (e-mail: dbassu@telcordia.com).
The purpose of this paper is to show that the foregoing remaining problem can be easily yet effectively resolved by the use of a risk-averting error criterion that is slightly but significantly different from the ordinary risk-sensitive criterion. An adaptive training method based on the risk-averting criterion is used here to train robust neural networks. A description of the method can be found in a companion paper by the authors, also presented at the IJCNN'02.
A fundamental problem used for illustration is the identification of a dynamical system, called a plant,

y(t) = f(y(t-1), ..., y(t-p), u(t), ..., u(t-q), ξ(t), c)    (1)

in the presence of an environmental parameter c that changes so fast that adaptation to it is impossible, where u(t) and y(t) are the input and output of the dynamical system at time t, and ξ(t) is a random driver that has an unbiased effect on the system output.
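As a concrete illustration (not from the paper), a minimal simulation of a plant of the form (1) might look like the following sketch, assuming a hypothetical nonlinear f with p = 2 past outputs and q = 1 past input, a zero-mean Gaussian driver ξ(t), and a parameter c that switches randomly at every step, i.e., too fast to adapt to:

```python
import numpy as np

rng = np.random.default_rng(0)

def plant_step(y_hist, u_hist, xi, c):
    # Hypothetical nonlinear f (illustrative only; the paper does not
    # specify f). The environmental parameter c scales the nonlinearity.
    return (c * np.tanh(y_hist[0]) - 0.3 * y_hist[1]
            + 0.5 * u_hist[0] + 0.1 * u_hist[1] + xi)

T = 200
y = np.zeros(T)
u = rng.uniform(-1.0, 1.0, T)            # system input
for t in range(2, T):
    c = rng.choice([0.5, 1.5])            # c changes at every step:
                                          # adaptation is impossible
    xi = 0.05 * rng.standard_normal()     # zero-mean (unbiased) driver
    y[t] = plant_step((y[t-1], y[t-2]), (u[t], u[t-1]), xi, c)
```

The resulting input-output pairs (u(t), y(t)) can then serve as training data for a neural identifier, with c hidden from the identifier.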
The primary objective is to compare the performances of the neural networks (NNs) trained as identifiers of (1), called neural identifiers, with respect to the RA (risk-averting) error criterion J_{λ,p} and the risk-neutral error criterion J_{0,p} (i.e., L_p) given below:

J_{λ,p}(w) = Σ_{ω∈S} Σ_{t=1}^{T} exp(λ |e(t, w, ω)|^p)    (2)

J_{0,p}(w) = Σ_{ω∈S} Σ_{t=1}^{T} |e(t, w, ω)|^p    (3)

e(t, w, ω) := y(t, ω) − f̂(t, w, ω)    (4)

where f̂(t, w, ω) denotes the output of the neural identifier with weights w subject to the input u(t, ω); u(t, ω)
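To make the contrast between (2) and (3) concrete, the following minimal sketch (assuming NumPy; the function name J is ours, not the paper's) evaluates both criteria over a single error sequence. It shows how the exponential in the RA criterion penalizes one large error far more than the risk-neutral L_p criterion does, even when the two sequences have the same L_p sum:

```python
import numpy as np

def J(e, lam=0.0, p=2):
    # Risk-averting criterion (2) for lam > 0; the lam = 0 branch
    # returns the risk-neutral L_p criterion (3).
    if lam == 0.0:
        return np.sum(np.abs(e) ** p)
    return np.sum(np.exp(lam * np.abs(e) ** p))

e_uniform = np.array([1.0, 1.0, 1.0, 1.0])   # errors spread evenly
e_outlier = np.array([2.0, 0.0, 0.0, 0.0])   # same L2 sum, one large error

# Risk-neutral criterion cannot distinguish the two sequences:
assert J(e_uniform) == J(e_outlier) == 4.0
# The RA criterion heavily penalizes the sequence with the outlier:
assert J(e_outlier, lam=1.0) > J(e_uniform, lam=1.0)
```

This is exactly the behavior exploited for robustness: minimizing J_{λ,p} drives the identifier away from weights that produce occasional disastrous errors, even at the cost of slightly larger typical errors.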
0-7803-7278-6/02/$10.00 ©2002 IEEE