Distributionally Robust Optimization with Expected Constraints via Optimal Transport

Diego Fonseca and Mauricio Junca
Department of Mathematics, Universidad de los Andes, Bogotá, Colombia
{df.fonseca,mj.junca20}@uniandes.edu.co

Abstract. We consider a stochastic program with expected value constraints. We analyze this problem in a general context via the Distributionally Robust Optimization (DRO) approach using 1- or 2-Wasserstein metrics, where the ambiguity set depends on the decision. We show that this approach can be reformulated as a finite-dimensional optimization problem which, in some cases, is convex. Additionally, we establish criteria to determine the feasibility of the problem in terms of the Wasserstein radius and the level of the constraint. Finally, we present numerical results in the contexts of inventory management and portfolio optimization. In the portfolio optimization context, we demonstrate the advantages of our approach over some existing non-robust methods using real financial market data.

Keywords: Stochastic program · Expected constraints · Wasserstein metric · Mean-variance model.

1 Introduction

In this work we consider stochastic programs with expected value constraints given by the following formulation:

\[
J = \min_{x \in \mathbb{R}^m} \; \Phi(x, F, P) \quad \text{subject to} \quad \mathbb{E}_P[G(x, \xi)] \ge \mu, \quad x \in \mathcal{X}, \tag{1}
\]

where F and G are functions such that F, G : R^m × R^n → R, ξ ∈ R^n is a random vector with (unknown) probability distribution P supported on Ξ ⊆ R^n, and X ⊆ R^m is a set of constraints on the decision vectors. In addition, the objective function Φ is a risk function that depends on the performance function F. When Φ(x, F, P) := E_P[F(x, ξ)], this problem appears in various contexts such as finance [16,9], operations research [21], and machine learning [25,22]. Most attempts to solve (1) use Sample Average Approximation (SAA), where samples of ξ are used to replace expected values by sample means.
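As a minimal sketch of the SAA idea for problem (1), the snippet below replaces both expected values by sample means and solves the resulting deterministic program with a general-purpose solver. The performance function F, the constraint function G, the level µ, and the sampling distribution are all hypothetical choices for illustration only; they are not the models studied in this paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.normal(1.0, 0.5, size=(500, 2))  # i.i.d. samples of the random vector xi

# Hypothetical performance and constraint functions (illustration only):
# F(x, xi) = squared tracking cost, G(x, xi) = linear "return" x . xi
F = lambda x, s: np.sum((x - s) ** 2, axis=1)
G = lambda x, s: s @ x
mu = 1.0  # hypothetical constraint level

# SAA: sample means stand in for E_P[F] and E_P[G]
obj = lambda x: F(x, xi).mean()
cons = [{"type": "ineq", "fun": lambda x: G(x, xi).mean() - mu}]  # mean of G >= mu

res = minimize(obj, x0=np.zeros(2), constraints=cons)
print(res.x, G(res.x, xi).mean())
```

Because the optimizer sees only the empirical distribution, the resulting decision inherits the sampling noise; this is precisely the sensitivity that motivates the DRO formulation discussed next.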
Strategies based on stochastic gradient descent methods are used in [1,15,37]. These strategies are sensitive to the quality of the sample, and the out-of-sample performance can be poor; specifically, the constraints tend not to

arXiv:2111.04663v1 [math.OC] 8 Nov 2021