SIAM J. CONTROL AND OPTIMIZATION, Vol. 33, No. 3, pp. 937-959, May 1995. © 1995 Society for Industrial and Applied Mathematics.

SINGULAR OPTIMAL STOCHASTIC CONTROLS II: DYNAMIC PROGRAMMING*

ULRICH G. HAUSSMANN† AND WULIN SUO‡

Abstract. The dynamic programming principle for a multidimensional singular stochastic control problem is established in this paper. Assuming Lipschitz continuity of the data, it is shown that the value function is continuous and is the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation.

Key words. singular controls, control rules, value function, dynamic programming principle, Hamilton-Jacobi-Bellman equation, viscosity solution

AMS subject classifications. 49J30, 49A55, 60G44, 93E20

1. Introduction. In [8] we applied a direct method to study the existence of optimal controls for the stochastic control problem in which the state is governed by the stochastic differential equation
$$
x_t = x + \int_s^t b(\theta, x_\theta, u_\theta)\, d\theta + \int_s^t \sigma(\theta, x_\theta, u_\theta)\, dB_\theta + \int_s^t g(\theta)\, dv_\theta
$$
on some filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$, where $b(\cdot,\cdot,\cdot)$, $\sigma(\cdot,\cdot,\cdot)$, $g(\cdot)$ are given deterministic functions, $(B_t,\ t \ge 0)$ is a $d$-dimensional Brownian motion (in fact, $B$ need not be $d$-dimensional), $x$ is the initial state at time $s$, and $u : [0,T] \to U$, $v : [0,T] \to \mathbb{R}^k$, with $v$ nondecreasing componentwise, stand for the controls. The expected cost has the form
$$
J(\alpha) = E\left[ \int_s^T f(t, x_t, u_t)\, dt + \int_{[s,T)} c(t) \cdot dv_t \right],
$$
where $f(\cdot,\cdot,\cdot) : [0,T] \times \mathbb{R}^d \times U \to \mathbb{R}$ and $c(\cdot) : [0,T] \to \mathbb{R}^k$ are given. We assume that the cost of applying the singular control is positive, i.e., $c_i(\cdot) > 0$, $i = 1, \ldots, k$. For this type of problem, the reader may consult the paper by Haussmann and Suo [8] and the list of references therein. This paper is a continuation of Haussmann and Suo [8].
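For orientation, the dynamic programming equation associated with singular control problems of this type typically takes the form of a variational inequality. The following display is a generic sketch under standard conventions, not the precise equation derived in this paper: writing $L^u \varphi = b(t,x,u) \cdot D_x \varphi + \tfrac{1}{2} \operatorname{tr}\!\big( \sigma\sigma^{\top}(t,x,u)\, D_x^2 \varphi \big)$ for the controlled generator and $g_i(t)$ for the $i$th column of $g(t)$, the value function $V$ heuristically satisfies

$$
\min\Big\{ \partial_t V + \inf_{u \in U}\big[ L^u V + f(t,x,u) \big],\ \min_{1 \le i \le k}\big[ c_i(t) + g_i(t) \cdot D_x V \big] \Big\} = 0.
$$

In the region where no singular action is optimal the first term vanishes, recovering the usual Hamilton-Jacobi-Bellman equation, while $c_i(t) + g_i(t) \cdot D_x V = 0$ characterizes states at which an instantaneous displacement in the direction $g_i$ is optimal.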
As is well known for the classical stochastic control problem, the dynamic programming principle is satisfied and, if the value function has appropriate regularity, it satisfies a second-order nonlinear partial differential equation, the Hamilton-Jacobi-Bellman equation (cf. Fleming and Rishel [4] and Lions [10], among others). This is still the case for singular stochastic control, where the Hamilton-Jacobi-Bellman equation is a second-order variational inequality (see Fleming and Soner [5] and the list of references in Haussmann and Suo [8]). In this paper, in §3 we adopt a probabilistic approach used in Haussmann [6], Haussmann and Lepeltier [7], and El Karoui, Nguyen, and Jeanblanc-Picqué [3] to establish the dynamic programming principle under very mild conditions on the data. Then, in §4, assuming Lipschitz continuity of the coefficients, we prove

*Received by the editors June 15, 1993; accepted for publication (in revised form) January 12, 1993. This work was supported by Natural Sciences and Engineering Research Council of Canada grant 88051.
†Department of Mathematics, University of British Columbia, Vancouver, British Columbia, Canada V6T 1Z2.
‡Faculty of Management, University of Toronto, Toronto, Ontario, Canada M5S 1V4.