Copyright © IFAC Fault Detection, Supervision and Safety of
Technical Processes, Washington, D.C., USA, 2003
DYNAMIC NEURAL NETWORKS FOR
ACTUATOR FAULT DIAGNOSIS:
APPLICATION TO THE DAMADICS
BENCHMARK PROBLEM
Krzysztof Patan *,1  Thomas Parisini **

* Institute of Control and Computation Engineering,
University of Zielona Góra,
ul. Podgórna 50, 65-246 Zielona Góra, Poland,
K.Patan@issi.uz.zgora.pl
** Dept. of Electrical, Electronic and Computer
Engineering, DEEI-University of Trieste,
Via Valerio 10, 34127 Trieste, Italy,
parisini@univ.trieste.it
Abstract: The paper presents results achieved during the realization of the interna-
tional project DAMADICS (Development and Application of Methods for Actuator
Diagnosis in Industrial Control Systems). The proposed fault detection and isola-
tion system is designed using a bank of dynamic neural networks. Each network is
trained using a stochastic approximation method, which can be viewed as a fast
alternative to back-propagation-based algorithms. Simulations are carried out
using real process data recorded at the Lublin Sugar Factory, Poland.
Copyright © 2003 IFAC
Keywords: Actuators, benchmark examples, fault diagnosis, dynamic models,
neural networks, stochastic approximation, performance indexes.
1. INTRODUCTION
Methods of FDI based on system identification
and residual generation have been intensively
studied for the last two decades. One of the most
important classes of FDI methods is neural
modelling (Frank and Koppen-Seliger, 1997; Chen
and Patton, 1999; Patton et al., 2000). Artificial
neural networks can be applied to nonlinear sys-
tems. They are useful when there are no math-
ematical models of the diagnosed system, hence,
analytical models and parameter-identification al-
gorithms cannot be applied. One of the most inter-
esting solutions of the dynamic system identifica-
tion problem is the application of neural networks
1 This work was supported by the EU FP 5 project
DAMADICS
composed of Dynamic Neuron Models (DNM)
(Ayoubi, 1994; Patan and Parisini, 2002b). Such
neuron models consist of an adder module, a
linear dynamic system - an Infinite Impulse Re-
sponse (IIR) filter - and a nonlinear activation
module. Thus, the DNM activation depends on its
current inputs as well as on inputs and outputs at
preceding time instants. The relatively complex DNM
allows one to design a neural network with a feed-forward
multi-layer structure (Patan and Parisini, 2002a).
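A minimal sketch of such a dynamic neuron may help fix the structure described above: a weighted adder feeding an IIR filter, followed by a nonlinear activation. The second-order filter, the random coefficient initialization, and the tanh activation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class DynamicNeuron:
    """Sketch of a Dynamic Neuron Model (DNM): adder module,
    IIR filter, and nonlinear activation (here tanh)."""

    def __init__(self, n_inputs, order=2):
        rng = np.random.default_rng(0)
        self.w = rng.normal(scale=0.5, size=n_inputs)   # adder weights
        self.b = rng.normal(scale=0.5, size=order + 1)  # IIR feed-forward coefficients
        self.a = rng.normal(scale=0.1, size=order)      # IIR feedback coefficients
        self.v_hist = np.zeros(order + 1)  # v(k), v(k-1), ... (adder outputs)
        self.z_hist = np.zeros(order)      # z(k-1), z(k-2), ... (filter outputs)

    def step(self, x):
        v = self.w @ x                                   # adder module
        self.v_hist = np.roll(self.v_hist, 1)
        self.v_hist[0] = v
        z = self.b @ self.v_hist - self.a @ self.z_hist  # IIR filter
        self.z_hist = np.roll(self.z_hist, 1)
        self.z_hist[0] = z
        return float(np.tanh(z))                         # nonlinear activation

neuron = DynamicNeuron(n_inputs=3)
# Even with a constant input, the output evolves over time
# because of the filter's internal state.
outputs = [neuron.step(np.array([1.0, 0.5, -0.2])) for _ in range(5)]
```

The internal filter state is what gives the neuron its dependence on preceding time instants, which in turn allows a purely feed-forward layer arrangement to model dynamic behaviour.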
Derivation of the optimal neural network param-
eters, however, is not a trivial problem. It is an
optimization problem over an error surface with a
very rich topology. Gradient-based algorithms are
often ineffective in such cases, because they usually
converge to one of the local minima.
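One widely used stochastic approximation scheme of the kind mentioned in the abstract is SPSA (simultaneous perturbation stochastic approximation), which estimates the gradient from only two loss evaluations per step instead of a back-propagation pass. Whether the paper uses exactly this variant is an assumption, and the gain values below are illustrative only.

```python
import numpy as np

def spsa_step(loss, theta, a=0.1, c=0.01, rng=None):
    """One SPSA update: perturb all parameters simultaneously with a
    random +/-1 vector and form a two-sided finite-difference gradient
    estimate. The gains a and c are illustrative constants."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # For +/-1 perturbations, 1/delta_i == delta_i, so multiply elementwise.
    g_hat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) * delta
    return theta - a * g_hat

# Usage: minimize a simple quadratic as a stand-in for a network loss.
rng = np.random.default_rng(1)
f = lambda t: float(np.sum(t ** 2))
theta = np.array([2.0, -3.0])
for _ in range(200):
    theta = spsa_step(f, theta, rng=rng)
```

Because the cost of one update is independent of the number of parameters, such schemes can be attractive for the rich parameter spaces of dynamic neural networks, at the price of noisy gradient estimates.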