Automatica 41 (2005) 595–604
www.elsevier.com/locate/automatica

Incorporating state estimation into model predictive control and its application to network traffic control

Jun Yan, Robert R. Bitmead
Department of Mechanical & Aerospace Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0411, USA

Received 10 July 2003; received in revised form 21 October 2004; accepted 4 November 2004

Abstract

Model predictive control (MPC) is of interest because it is one of the few control design methods that preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion, with best estimates of the states used in place of the exact state. This paper explores the inclusion of state estimates and their interaction with constraints by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. Under a Gaussian assumption, the original problem is approximated by a standard deterministically constrained MPC problem for the conditional mean process of the state, with the conditional covariances of the state estimates appearing in a tightening of the constraints. A 'closed-loop covariance' is introduced to reduce the infeasibility and conservativeness caused by using long-horizon, open-loop prediction covariances. The resulting control law is applied to a telecommunications network traffic control problem as an example.

© 2005 Elsevier Ltd. All rights reserved.

Keywords: Model predictive control; Network control; Constrained control

1. Introduction

Model predictive control (MPC) is an increasingly significant and popular control approach because of its use of a possibly nonlinear multivariable process model and its ability to handle constraints on inputs, states and outputs.
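As a concrete illustration of the constraint tightening summarized in the abstract, the sketch below shows how a probabilistic state constraint P(x ≤ b) ≥ 1 − ε is converted, under the Gaussian assumption, into a deterministic constraint on the conditional mean that is tightened by a multiple of the prediction standard deviation. The function name and the numerical values are illustrative, not taken from the paper.

```python
from statistics import NormalDist

def tightened_bound(b, sigma, eps):
    """Deterministic bound on the conditional mean x_bar such that
    x_bar <= returned value implies P(x <= b) >= 1 - eps
    when x ~ N(x_bar, sigma**2)."""
    z = NormalDist().inv_cdf(1.0 - eps)  # Gaussian quantile, e.g. ~1.645 for eps = 0.05
    return b - z * sigma

# Illustrative numbers: constraint x <= 10, one-step prediction
# standard deviation 2, allowed violation probability 5%.
print(round(tightened_bound(10.0, 2.0, 0.05), 3))  # ~6.710: the mean must satisfy a stricter bound
```

Note that the tightening grows with the prediction covariance; over a long open-loop prediction horizon the covariance, and hence the tightening, can become large enough to render the problem infeasible, which motivates the closed-loop covariance introduced later in the paper.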
It uses open-loop constrained optimization of finite-horizon control criteria in a receding-horizon approach. A model is used to predict the future behavior of the system up to the horizon, starting from its current state, and a constrained optimization based on the prediction yields an optimal open-loop control sequence over the complete horizon. Only the first element in this sequence is applied to the plant. New measurements available at the next sample time permit the calculation of an updated initial state value, and the optimization is then re-solved. The introduction of each output measurement, via the mechanism of state update, results in the overall method yielding a closed-loop control.

☆ This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor I. Paschalidis under the direction of Editor I. Petersen. Research supported by USA National Science Foundation Grants ECS-0200449 and ECS-0225530.
* Corresponding author.
E-mail addresses: junyan@mae.ucsd.edu (J. Yan), rbitmead@mae.ucsd.edu (R.R. Bitmead).
0005-1098/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.automatica.2004.11.022

The receding-horizon approach behind MPC relies on state estimation, even though most analyses of the stability, feasibility and performance of these schemes treat the controller as a full-state-feedback strategy (Mayne, Rawlings, Rao, & Scokaert, 2000). From MPC's early linear unconstrained variants, such as generalized predictive control and its connection to LQG (Clarke, Mohtadi, & Tuffs, 1987; Bitmead, Gevers, & Wertz, 1990), the formulation of MPC has included state estimation either inherently or explicitly in the construction of the predictor, using observer polynomials or via the Kalman filter. However, to our knowledge, there has been no satisfactory treatment of the inclusion of state