arXiv:1909.11924v1 [math.OC] 26 Sep 2019

Minibatch stochastic subgradient-based projection algorithms for solving convex inequalities

Ion Necoara and Angelia Nedić

I. Necoara is with the Department of Automatic Control and Systems Engineering, University Politehnica Bucharest, 060042 Bucharest, Romania. E-mail: ion.necoara@acse.pub.ro. A. Nedić is with the Electrical, Computer and Energy Engineering Department at Arizona State University, Tempe, AZ, USA. E-mail: angelia.nedich@asu.edu. This work is supported by the Executive Agency for Higher Education, Research and Innovation Funding (UEFISCDI), Romania, PNIII-P4-PCE-2016-0731, project ScaleFreeNet, no. 39/2017.

Abstract—This paper deals with the convex feasibility problem, where the feasible set is given as the intersection of a (possibly infinite) number of closed convex sets. We assume that each set is specified algebraically as a convex inequality, where the associated convex function is general (possibly non-differentiable). For finding a point satisfying all the convex inequalities we design and analyze random projection algorithms using special subgradient iterations and extrapolated stepsizes. Moreover, the iterate updates are performed based on parallel random observations of several constraint components. For these minibatch stochastic subgradient-based projection methods we prove sublinear convergence results and, under a linear regularity condition for the functional constraints, linear convergence rates. We also derive conditions under which these rates depend explicitly on the minibatch size. To the best of our knowledge, this work is the first to derive conditions showing when minibatch stochastic subgradient-based projection updates have better complexity than their single-sample variants.

Index Terms—Convex inequalities, minibatch stochastic subgradient projections, extrapolation, convergence analysis.

I. INTRODUCTION

Finding a point in the intersection of a collection of closed convex sets, that is, the convex feasibility problem, represents a modeling paradigm for solving many engineering and physics problems, such as optimal control [8], [31], robust control [1], sensor networks [7], image recovery [9], data compression [21], neural networks [33], and machine learning [18]. Projection methods are very attractive in applications since they are able to handle problems of huge dimension and with a very large number of convex sets in the intersection. Projection methods were first used for solving systems of linear equalities [19] and linear inequalities [22], and were then extended to general convex feasibility problems, e.g. in [5], [10], [15], [25], [23]. For example, the alternating projection algorithm, which represents one of the first iterative algorithms for feasibility problems, relies at each iteration on orthogonal projections onto the given individual sets, taken in a random, cyclic or greedy order [11], [12], [17], [25], [30]. Otherwise, if the projection method uses, at the current iteration, an average of multiple projections of the current iterate onto a subfamily of sets, then it can be viewed as a minibatch projection algorithm [2], [3], [23], [9]. The convergence properties and even the inherent limitations of projection methods have been intensely analyzed over the last decades, as can be seen e.g. in [2], [3], [10], [12], [23], [25], [30] and the references therein.
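For orientation, the setting treated in this paper can be sketched in the following generic form; the precise problem statement and standing assumptions are given in Section II, so the index set I and the functions f_i used here are illustrative only:
\[
\text{find } x \in X := \bigcap_{i \in I} X_i, \qquad X_i := \{\, x \in \mathbb{R}^n : f_i(x) \le 0 \,\},
\]
where each f_i is convex (possibly nondifferentiable), so that each level set X_i is closed and convex. The classical Polyak subgradient step [30] for a single inequality f_i(x) ≤ 0 takes, for a subgradient g^k ∈ ∂f_i(x^k),
\[
x^{k+1} = x^k - \frac{\max\{f_i(x^k),\, 0\}}{\|g^k\|^2}\, g^k \qquad (\text{whenever } g^k \neq 0),
\]
which leaves x^k unchanged when the inequality already holds; the methods proposed below combine such steps with random minibatches of constraints and extrapolated stepsizes.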
Contributions. In this paper we consider convex feasibility problems with a (possibly) infinite intersection of constraints. In contrast to the classical approach, where the constraints are usually represented as an intersection of simple sets that are easy to project onto, in this paper we consider that each constraint set is given as the level set of a convex but not necessarily differentiable function. For finding a point satisfying all convex inequalities we propose projection algorithms using Polyak's subgradient update (see [30]). Moreover, the iterate updates are performed based on parallel random observations of several constraint components and novel (adaptive) extrapolated stepsize strategies. For these minibatch stochastic subgradient-based projection methods we derive sublinear convergence results and, under a linear regularity condition for the functional constraints, we prove linear convergence rates. We also derive conditions under which these rates depend explicitly on the minibatch size (the number of sets onto which we project at each iteration). To the best of our knowledge, this work is the first to derive theoretical conditions, in terms of the geometric properties of the functional constraints, that explain when minibatch stochastic subgradient-based projection updates have better complexity than their non-minibatch variants. More explicitly, the convergence estimates for our parallel projection algorithms depend on the key parameters L and L_N, defined in (11), which determine whether minibatching helps (L, L_N < 1) or not (L = L_N = 1) and by how much (the smaller L or L_N, the better the complexity). Our algorithms are applicable to the situation where the whole constraint set of the problem is not known in advance, but is rather learned over time through observations. These algorithms are also of interest for convex feasibility problems where the constraints are known but their number is either large or not finite.

Content. In Section II we introduce our feasibility problem and derive some preliminary results. In Section III we present Polyak's stochastic subgradient projection method [30] and derive its convergence rate under more general assumptions than those in [30]. In Section IV we consider minibatch variants with (adaptive) extrapolated stepsizes and derive the corresponding convergence rates depending on the minibatch size. We provide some concluding remarks in Section V.

Notation. We work in a finite-dimensional space R^n, where a vector is viewed as a column vector. We use 〈x, y〉 to denote the inner product of two vectors x, y ∈ R^n, and