APPLICATIONES MATHEMATICAE 30,3 (2003), pp. 287–304

Raúl Montes-de-Oca (México)
Alexander Sakhanenko (Puebla)
Francisco Salem-Silva (Puebla)

ESTIMATES FOR PERTURBATIONS OF GENERAL DISCOUNTED MARKOV CONTROL CHAINS

Abstract. We extend previous results of the same authors ([11]) on the effects of perturbation in the transition probability of a Markov cost chain for discounted Markov control processes. Assuming that conditions of Lyapunov and Harris type hold for each stationary policy, we obtain upper bounds for the index of perturbations, defined as the difference of the total expected discounted costs for the original Markov control process and the perturbed one. We present examples that satisfy our conditions.

2000 Mathematics Subject Classification: 93E20, 90C40.
Key words and phrases: index of perturbations, discounted Markov control processes, Lyapunov condition, Harris condition.
Research of F. Salem-Silva supported by grant VIEP – BUAP II 33G01.

1. Introduction. This paper deals with discounted Markov control processes (DMCPs) with discrete time, a general state space X and (possibly) unbounded one-step cost functions (see [6–8]). The main problem in the theory of DMCPs is to determine an optimal policy (see [6, 7]) with respect to an objective function equal to the total expected discounted cost V_α(x, π), where α ∈ (0, 1) is a discount factor, x is an initial state of the process and π is a policy that we apply (see Section 2 for definitions). In many cases, however, the transition probability Q of the given DMCP is incompletely known, because it may be obtained through estimation or approximation (see [3]). In practice, we only have a theoretical approximation Q̃ of Q. In this case, if we know that the approximating process with transition probability Q̃ has an optimal policy π̃*, we may, in a natural way, use this policy to control the original process (corresponding to Q). As a consequence, the total expected discounted cost will increase, because π̃* is not an