Optimal Control of Switching Surfaces

Y. Wardi⋆, M. Egerstedt⋆, M. Boccadoro†, and E. Verriest⋆

⋆ {ywardi,magnus,verriest}@ece.gatech.edu
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA 30332, USA

† boccadoro@diei.unipg.it
Dipartimento di Ingegneria Elettronica e dell'Informazione
Università di Perugia
06125, Perugia, Italy

Abstract— This paper studies the problem of optimal switching-surface design for hybrid systems. In particular, a formula is derived for computing the gradient of a given integral performance cost with respect to the switching-surface parameters. The formula reflects the hybrid nature of the system in that it is based on a costate variable having a discrete element and a continuous element. A numerical example with a gradient-descent algorithm suggests the potential viability of the formula in optimization.

Keywords. Optimal Control, Hybrid Systems, Switching Surfaces, Gradient Descent, Numerical Algorithms

I. INTRODUCTION

This paper investigates an optimal-control approach to hybrid dynamical systems in which modal switching occurs whenever the state reaches a suitable switching surface. The switching surfaces are controlled by free variables (parameters), which have to be determined so as to minimize a performance cost functional defined on the state trajectory. Application domains of such optimal control problems include robotics [1], [9], manufacturing systems [3], [8], power converters [10], and scheduling of medical treatment [18]. The problem addressed here is how to characterize the gradient of the cost functional with respect to the switching-surface control parameters, and then use it in optimization algorithms. The special structure of the hybrid dynamical system lends itself to an especially simple computation of the gradient, and holds out promise of effective optimization in the aforementioned (as well as other) application areas.
The general framework for optimal control of hybrid dynamical systems, which has influenced many subsequent developments, was defined in [6]. Following this work, Refs. [16], [17] derived variants of the maximum principle. At the same time, the question of numerical optimization algorithms has received significant interest. In particular, the problem of computing optimal control laws given a partition of the state space [13], or a fixed set of switching surfaces [15], [16], [17], [19], has been investigated. Refs. [2], [12], [15] addressed a timing optimization problem in piecewise-linear systems with quadratic costs, and derived homogeneous regions in the state space that determine the optimal switching times. However, the problem of optimal design of switching surfaces has not yet been fully addressed.

In this paper, the switching surfaces are defined by solution sets of equations of the form $g(x,a) = 0$, where $x \in \mathbb{R}^n$ is the state variable and $a \in \mathbb{R}^k$ is the control parameter; here $0 \in \mathbb{R}^k$, and $g : \mathbb{R}^n \times \mathbb{R}^k \to \mathbb{R}^k$ is a continuously differentiable function. In fact, we assume that there are a number of such switching points, with possibly different switching surfaces and control parameters. The main challenge is to develop a formula for the gradient of the cost functional with respect to these control parameters that is computationally simple, so that it can be deployed in an iterative optimization procedure. A first attempt resulted in a fairly complicated and time-consuming formula [4]. This paper derives a much simpler formula by defining an appropriate costate equation.

(The work of Wardi has been partly supported by a grant from the Georgia Tech Manufacturing Research Center. The work of Egerstedt has been partly supported by the National Science Foundation under Grant #0237971 ECS (NSF CAREER), and by a grant from the Georgia Tech Manufacturing Research Center.)
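To make the setup concrete, the following is a minimal simulation sketch, not taken from the paper: a two-mode system whose mode transition is triggered when the trajectory crosses a parameterized surface $g(x,a) = 0$. The dynamics `f1`, `f2`, the scalar affine surface, and the forward-Euler integrator are all illustrative assumptions.

```python
# Hypothetical example (not the paper's system): a hybrid trajectory that
# switches from mode f1 to mode f2 when it crosses the surface g(x, a) = 0.

def g(x, a):
    # Assumed affine switching surface: x1 - a = 0 (scalar for simplicity)
    return x[0] - a

def f1(x):
    return [1.0, 0.5 * x[1]]   # mode-1 dynamics (illustrative)

def f2(x):
    return [-0.2 * x[0], 1.0]  # mode-2 dynamics (illustrative)

def simulate(a, x0=(0.0, 1.0), T=2.0, dt=1e-3):
    """Forward-Euler integration with a one-time mode switch at the surface."""
    x = list(x0)
    mode = f1
    switch_time = None
    t = 0.0
    while t < T:
        if switch_time is None and g(x, a) >= 0.0:
            mode = f2            # transition when the surface is reached
            switch_time = t
        dx = mode(x)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return x, switch_time

x_final, tau = simulate(a=0.5)
print(tau)   # approximately 0.5: under f1, x1 grows at unit rate until x1 = a
```

Moving the parameter `a` moves the surface, which shifts the switching time and hence the trajectory and the cost; the paper's contribution is an efficient formula for exactly this sensitivity.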
We point out that the associated optimality condition is based on variational principles and hence may be derivable from classical results on optimal control (e.g., [7], Ch. 3), but here we provide a direct derivation and proof based on the problem's specific structure. The gradient formula will then be used in a descent algorithm to optimize an example problem.

The rest of the paper is organized as follows. Section 2 derives the formula for the gradient, Section 3 presents an example, and Section 4 concludes the paper.

II. PROBLEM FORMULATION AND GRADIENT FORMULA

Consider the following dynamical system defined on the interval $[0,T]$:

$$\dot{x}(t) = f(x(t), t), \qquad x(0) = x_0, \tag{1}$$
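Before the derivation, the following sketch illustrates the kind of descent iteration the gradient formula is intended to drive. It is an assumption-laden stand-in: the cost `cost(a)` is a hypothetical smooth function of the surface parameter, and the analytic, costate-based gradient of the paper is replaced here by a central finite difference.

```python
# Illustrative gradient descent on a surface parameter a (not the paper's
# algorithm): the gradient is approximated by central finite differences,
# standing in for the costate-based formula derived in Section 2.

def cost(a):
    # Hypothetical smooth cost with a known minimizer at a = 1.0
    return (a - 1.0) ** 2 + 0.1 * (a - 1.0) ** 4

def descend(a0, step=0.2, eps=1e-6, iters=200):
    a = a0
    for _ in range(iters):
        grad = (cost(a + eps) - cost(a - eps)) / (2.0 * eps)  # central difference
        a -= step * grad
    return a

a_star = descend(a0=3.0)
print(round(a_star, 3))   # -> 1.0
```

In practice, each finite-difference probe requires a full trajectory simulation per parameter component, which is exactly the expense the paper's costate-based gradient formula is designed to avoid.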