World Applied Sciences Journal 5 (4): 517-521, 2008
ISSN 1818-4952
© IDOSI Publications, 2008
Corresponding Author: M. Mehrabinezhad, Department of Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran
A Discretisation Method for Solving Time Optimal Control Problems
M.H. Farahi, A.V. Kamyad, M. Mehrabinezhad and M.R. Zarrabi
Department of Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran
Abstract: Classical methods are largely inadequate for solving nonlinear time optimal control problems. In
this paper an approach to solving such problems is considered. The method was first presented by
Badakhshan et al. in [1]; in this paper it is extended to time optimal control problems.
In this approach the time optimal control problem is transformed into a problem in the calculus of variations,
which is then solved by a discretisation method.
Key words: Nonlinear programming • optimal control • time optimal control • discretisation
INTRODUCTION
Optimal control problems are widely used in
industry; the goal is to steer a dynamical system
from a given initial point to a given target while
minimizing energy, time or some other specified
cost. In time optimal control problems, a dynamical
system is to be steered to the target in minimum
time. A neural network approach for controlling such
systems is proposed in [2] and time-optimal control
of disturbance-rejection tracking systems is considered
in [3]. The Bang-Bang principle of time optimal
controls for the heat equation is presented in [4] and a
discussion of time optimal control of integrator
switched systems is given in [5]. This paper deals
with this class of problems and a discrete method is
proposed to solve them.
Definition 1: A classical control problem has the
following general form:

$$\dot{x}(t) = g(x(t), u(t), t), \qquad x(a) = x_a, \quad x(b) = x_b, \tag{1}$$
where $g : A \times U \times [a,b] \to \mathbb{R}^n$ is a nonlinear continuous function, $t \in (a,b) \subset \mathbb{R}$, $x(t) \in A \subset \mathbb{R}^n$ and $u(t) \in U \subset \mathbb{R}^m$. The sets $A$ and $U$ are given compact sets, $x(t) = (x_1(t), \dots, x_n(t))$ is the continuous state function and $u(t) = (u_1(t), \dots, u_m(t))$ is the control function, which is assumed to be measurable on $[a,b]$. The initial point $x_a$ and the end point $x_b$ are given [6, 7].
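As a concrete illustration (our own example, not from the paper), consider the double integrator $\dot{x}_1 = x_2$, $\dot{x}_2 = u$ with a bounded control. A short Euler simulation sketches how a control $u(t)$ drives the state of a system of the form (1); the function names and the bang-bang-style control below are illustrative choices:

```python
# Illustrative sketch: Euler simulation of the double integrator
# x1' = x2, x2' = u on [a, b] under a fixed control u(t).
def simulate(u, x0, a=0.0, b=1.0, n=100):
    """Integrate x'(t) = g(x(t), u(t), t) with Euler steps,
    where g is the double integrator g(x, u, t) = (x2, u)."""
    h = (b - a) / n
    x1, x2 = x0
    for k in range(n):
        t = a + k * h
        uk = u(t)
        x1, x2 = x1 + h * x2, x2 + h * uk
    return (x1, x2)

# Bang-bang-style control: accelerate for half the horizon, then brake.
u = lambda t: 1.0 if t < 0.5 else -1.0
xb = simulate(u, (0.0, 0.0))   # final state (x1(b), x2(b))
```

With this control the velocity $x_2$ rises to $0.5$ and returns to $0$, and the position $x_1$ ends near $0.25$, matching the continuous-time solution.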
Definition 2: A classical optimal control problem has
the following general form:

$$\begin{aligned}
\min\ & J(x(t), u(t), t) = \int_a^b f_0(x(t), u(t), t)\,dt\\
\text{s.t.}\ & \dot{x}(t) = g(x(t), u(t), t), && x(t) \in A \subset \mathbb{R}^n,\\
& x(a) = x_a, && u(t) \in U \subset \mathbb{R}^m,\\
& x(b) = x_b, && t \in (a,b) \subset \mathbb{R},
\end{aligned} \tag{2}$$
where $f_0 : A \times U \times [a,b] \to \mathbb{R}$ is a continuous function and all other functions and variables are as defined in Definition 1.
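For a fixed candidate trajectory and control, the functional $J$ of Definition 2 can be approximated by numerical quadrature. The sketch below (our own example: the grid, the trapezoidal rule and the trajectory are illustrative choices) uses the time-optimal running cost $f_0 \equiv 1$, for which $J$ is simply the elapsed time $b - a$:

```python
import numpy as np

# Sketch: trapezoidal approximation of J = ∫_a^b f0(x(t), u(t), t) dt
# for trajectory/control values sampled on a time grid t.
def cost(f0, x, u, t):
    """Trapezoidal rule applied to f0 along the sampled trajectory."""
    vals = np.array([f0(xi, ui, ti) for xi, ui, ti in zip(x, u, t)])
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(t) / 2.0))

t = np.linspace(0.0, 2.0, 101)   # grid on [a, b] = [0, 2]
x = t ** 2                       # some sampled state trajectory
u = 2 * t                        # corresponding control samples
f0 = lambda xi, ui, ti: 1.0      # time-optimal running cost f0 ≡ 1
J = cost(f0, x, u, t)            # ≈ b - a = 2, the elapsed time
```

Choosing $f_0 \equiv 1$ is exactly what turns problem (2) into a time optimal control problem: minimizing $J$ then means minimizing the transfer time.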
PRELIMINARIES
Consider again the nonlinear system (1). One defines an
error function as follows:

$$E(\dot{x}(t), x(t), u(t), t) = \|\dot{x}(t) - g(x(t), u(t), t)\|_p,$$

where the norm $\|\cdot\|_p$ is defined as

$$\|f\|_p = \Big(\sum_{i=1}^{n} |f_i|^p\Big)^{1/p}, \qquad p \ge 1.$$
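These two definitions translate directly into code. The sketch below (function names are ours) evaluates the $p$-norm and the pointwise error $E$ for a given system $g$; on an exact trajectory of (1) the error vanishes, while any mismatch between $\dot{x}$ and $g$ is penalized:

```python
# Sketch of the p-norm ||f||_p and the pointwise error
# E = ||xdot - g(x, u, t)||_p from the text, for a given system g.
def p_norm(f, p=2):
    """||f||_p = (sum_i |f_i|^p)^(1/p), p >= 1."""
    return sum(abs(fi) ** p for fi in f) ** (1.0 / p)

def error(xdot, x, u, t, g, p=2):
    """E(xdot, x, u, t) = ||xdot - g(x, u, t)||_p."""
    gx = g(x, u, t)
    return p_norm([xd - gi for xd, gi in zip(xdot, gx)], p)

# Example: double integrator g(x, u, t) = (x2, u). An exact derivative
# gives zero error; a perturbed derivative gives a positive error.
g = lambda x, u, t: (x[1], u)
e_exact = error((2.0, 3.0), (1.0, 2.0), 3.0, 0.0, g)       # 0.0
e_perturbed = error((2.0, 4.0), (1.0, 2.0), 3.0, 0.0, g)   # 1.0
```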
Using this error function and
considering the control problem (1), one may write the
following optimization problem:

$$\begin{aligned}
\min\ & J(\dot{x}(t), x(t), u(t), t) = \int_a^b E(\dot{x}(t), x(t), u(t), t)\,dt\\
\text{s.t.}\ & x(a) = x_a, && u(t) \in U \subset \mathbb{R}^m, && x(t) \in A \subset \mathbb{R}^n,\\
& x(b) = x_b, && \dot{x}(t) \in B \subset \mathbb{R}^n, && t \in [a,b] \subset \mathbb{R}.
\end{aligned} \tag{3}$$
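The discretisation idea behind problem (3) can be sketched numerically: discretise $[a,b]$ into nodes, approximate $\dot{x}$ by forward differences and minimize the summed error over the unknown state and control values at the nodes. The example below is a minimal illustration under our own choices (a scalar test system $\dot{x} = u$, $x(0)=0$, $x(1)=1$, $U = [-1,1]$, forward differences, $p = 2$, and SciPy's L-BFGS-B solver), not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the discretisation of problem (3) for the scalar system
# x'(t) = u(t), x(0) = 0, x(1) = 1, u(t) in [-1, 1] (our test problem):
# minimize  sum_k ((x_{k+1} - x_k)/h - u_k)^2 * h  over the node values.
a, b, n = 0.0, 1.0, 20
h = (b - a) / n

def total_error(z):
    # z packs the interior states x_1..x_{n-1} and controls u_0..u_{n-1};
    # the boundary conditions x(a) = 0, x(b) = 1 are built in directly.
    x = np.concatenate(([0.0], z[:n - 1], [1.0]))
    u = z[n - 1:]
    return np.sum(((x[1:] - x[:-1]) / h - u) ** 2) * h

z0 = np.zeros(2 * n - 1)
bounds = [(None, None)] * (n - 1) + [(-1.0, 1.0)] * n   # U = [-1, 1]
res = minimize(total_error, z0, bounds=bounds, method="L-BFGS-B")
```

At the minimum the total error is driven to (numerically) zero, i.e. the discretised trajectory satisfies the dynamics, which is precisely the mechanism by which minimizing (3) recovers a solution of (1).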
Lemma 1: If $E(\dot{x}(t), x(t), u(t), t)$ is a continuous
function on $B \times A \times U \times [a,b]$, then