Automatica 42 (2006) 777–782
www.elsevier.com/locate/automatica

Brief paper

Approximate robust dynamic programming and robustly stable MPC

Jakob Björnberg (a), Moritz Diehl (b,*)

(a) Centre for Mathematical Sciences, Wilberforce Road, CB3 0WB Cambridge, UK
(b) IWR, University of Heidelberg, Im Neuenheimer Feld 368, D-69120 Heidelberg, Germany

Received 20 January 2005; received in revised form 30 June 2005; accepted 22 December 2005. Available online 14 February 2006.

Abstract

We present a technique for approximate robust dynamic programming that is suitable for linearly constrained polytopic systems with piecewise affine cost functions. The approximation method uses polyhedral representations of the cost-to-go function and feasible set, and can considerably reduce the computational burden compared to recently proposed methods for exact robust dynamic programming [Bemporad, A., Borrelli, F., & Morari, M. (2003). Min–max control of constrained uncertain discrete-time linear systems. IEEE Transactions on Automatic Control, 48(9), 1600–1606; Diehl, M., & Björnberg, J. (2004). Robust dynamic programming for min–max model predictive control of constrained uncertain systems. IEEE Transactions on Automatic Control, 49(12), 2253–2257]. We show how to apply the method to robust MPC, and give conditions that guarantee closed-loop stability. We finish by applying the method to a state-constrained tutorial example, a parking car with uncertain mass.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Dynamic programming; Receding horizon control; Min–max model predictive control; Robustness

1. Introduction

Robust model predictive control (MPC), originally proposed by Witsenhausen (1968), is an emerging control technique based on a worst-case optimization of future system behaviour.
While a key assumption in classical MPC is that the system is deterministic and known, in robust MPC the system is not assumed to be known exactly, and the optimization is performed against the worst-case predicted system behaviour. Robust MPC thus typically leads to min–max optimization problems, which arise either from an open-loop or from a closed-loop formulation of the optimal control problem (Lee & Yu, 1997). In this paper, we are concerned with the less conservative, but computationally more demanding, closed-loop formulation (for an interesting open-loop approach treating the same system class as this paper, see Pluymers, Rossiter, Suykens, & De Moor, 2005). We regard discrete-time polytopic systems with piecewise affine cost and linear constraints only.

For this problem class, the closed-loop formulation of robust MPC leads to multi-stage min–max optimization problems that can be attacked by a scenario tree formulation (Kerrigan & Maciejowski, 2004; Scokaert & Mayne, 1998) or by robust dynamic programming (DP) approaches (Bemporad, Borrelli, & Morari, 2003; Diehl & Björnberg, 2004; Mayne, 2001). Note that the scenario tree formulation treats a single optimization problem for one initial state only, whereas DP produces the feedback solution for all possible initial states. Unfortunately, the computational burden of both approaches quickly becomes prohibitive even for small-scale systems as the length of the prediction horizon increases.

This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Martin Guay under the direction of Editor Frank Allgöwer.
* Corresponding author. E-mail addresses: jeb76@cam.ac.uk (J. Björnberg), m.diehl@iwr.uni-heidelberg.de (M. Diehl).
doi:10.1016/j.automatica.2005.12.016
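To make the growth of the computational burden concrete: in the scenario tree formulation, the uncertainty realizes one of the polytope's vertices at each stage, so with m vertices and a prediction horizon of N steps the tree has m^N leaf scenarios. The following minimal sketch (the vertex count and horizon lengths are illustrative assumptions, not values from the paper) shows the exponential blow-up:

```python
# Scenario-tree size for closed-loop min-max MPC over a polytopic system:
# with m uncertainty vertices per stage, the tree branches m-fold at each
# of the N prediction steps, giving m**N leaf scenarios.
def num_scenarios(m: int, N: int) -> int:
    return m ** N

# Even two vertices per stage (hypothetical numbers) grow quickly:
for N in (5, 10, 15):
    print(N, num_scenarios(2, N))  # 5 32, 10 1024, 15 32768
```

This exponential scenario count is what motivates the DP route, and in turn the approximation technique of this paper, which trades optimality for a polyhedral representation of bounded complexity.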
The contribution of this article is a novel approximation technique for robust DP that considerably reduces the computational burden. This comes at the expense of optimality, but the method still generates robustly stable feedback laws that respect control and state constraints under all circumstances. We first show, in Section 2, how the robust DP recursion can be compactly formulated entirely in terms of operations on sets. For the problem class we consider, these sets are polyhedra and can be explicitly computed (Bemporad et al., 2003; Diehl & Björnberg, 2004), as reviewed in Section 3. In Section 4, we generalize an approximation technique originally proposed in Lincoln and Rantzer (2002) (for deterministic DP). This allows
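The backbone of the robust DP recursion is the robust Bellman backup J_k(x) = min_u max_j [ l(x,u) + J_{k+1}(A_j x + B_j u) ], where j ranges over the vertices of the uncertainty polytope. The sketch below illustrates this backup by brute force on a one-dimensional grid; the system, its two vertex realizations, and the stage cost are all hypothetical, and the grid-plus-interpolation representation is only a toy stand-in for the polyhedral cost-to-go representation developed in the paper:

```python
import numpy as np

# Hypothetical 1-D polytopic system x+ = a*x + b*u, where (a, b) realizes
# one of the vertices of the uncertainty polytope at each stage.
vertices = [(1.0, 0.8), (1.2, 1.0)]    # assumed vertex realizations
controls = np.linspace(-1.0, 1.0, 21)  # finite control grid (approximation)
states = np.linspace(-2.0, 2.0, 81)    # state grid for the cost-to-go J

def stage_cost(x, u):
    # Piecewise affine stage cost: a max of affine pieces, here |x| + |u|.
    return max(x, -x) + max(u, -u)

def bellman_backup(J):
    """One robust DP step: J_new(x) = min_u max_j [ l(x,u) + J(a_j*x + b_j*u) ]."""
    J_new = np.empty_like(J)
    for i, x in enumerate(states):
        best = np.inf
        for u in controls:
            # Worst case over the polytope vertices; np.interp evaluates the
            # tabulated J, clamping at the grid boundary (a crude truncation).
            worst = max(stage_cost(x, u) + np.interp(a * x + b * u, states, J)
                        for a, b in vertices)
            best = min(best, worst)
        J_new[i] = best
    return J_new

J = np.zeros_like(states)   # terminal cost J_N = 0
for _ in range(5):          # five backward DP steps
    J = bellman_backup(J)
```

In the exact method reviewed in Section 3, the min, max, and composition steps in this backup are instead carried out as explicit operations on polyhedra, which is what makes the set-based formulation of Section 2 computable.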