TP4 - 350

Algorithms for Globally Solving D.C. Minimization Problems via Concave Programming*

Shih-Mim Liu and G. P. Papavassilopoulos
Dept. of Electrical Engineering - Systems, University of Southern California, Los Angeles, CA 90089-2563
Email: shihmiml@bode.usc.edu, yorgos@nyquist.usc.edu

Abstract. Several methods have been proposed for solving d.c. programming problems, but very little work has been done on parallel approaches. In this paper, three algorithms suitable for parallel implementation are presented to solve a d.c. problem via an equivalent concave minimization problem. To distribute the computational load as evenly as possible, a simplex subdivision process such as bisection, triangulation, or another simplex partition procedure (cf. [7]) is employed. Some numerical test results are reported and a comparison of the algorithms is given.

1. Introduction

Consider the d.c. (difference of two convex functions) global optimization problem

(DC)    minimize f(x) - g(x)    subject to x ∈ D,

where D = {x ∈ R^n : h_i(x) ≤ 0, i = 1, ..., m}, and the h_i, f, and g are finite convex functions on R^n. Assume that D is compact and nonempty; then problem (DC) has a global solution. In the literature, d.c. problems play an important role in nonconvex programming, from both a practical and a theoretical viewpoint (cf. [7]). Indeed, d.c. problems are encountered frequently in engineering, and several methods have been proposed to solve this class of problems (cf. [7], [6]). However, only a few approaches have been tested numerically. Although some of them may be efficient, in particular for problems with special structure such as a separable or quadratic objective function (e.g. [13], [7] and references therein), very little has been done in parallel. Essentially, the proposed algorithms rely on three different transformations of problem (DC):
(1) an equivalent concave minimization problem, (2) an equivalent convex minimization problem subject to an additional reverse convex constraint, (3) the canonical d.c. problem (cf. [7], [3], [12], [8], [14], [1], [6]).

The purpose of this paper is to propose three algorithms suited to parallel implementation for solving problem (DC) via an equivalent concave minimization problem. The first approach (Algorithm 1), similar to Hoffman's algorithm [2], is an outer approximation method using cutting planes. Although finding the newly generated vertices is computationally expensive, numerical experiments with the serial algorithm indicate that it is much more efficient than the others ([11], [6], [7]) for the test problems given here. In addition, to investigate the parallel behavior of Algorithm 1, we try a parallel simulation incorporating the method described in [10]. The second algorithm (Algorithm 2), a simplicial procedure for solving the equivalent concave minimization problem, is much less efficient than the other two because of its inefficiency in detecting infeasible partition sets, the slow convergence rate of the bounds, and the fast growth of the number of linear constraints. Its efficiency, however, would improve in a parallel implementation. The last method (Algorithm 3), originally proposed by Horst et al. [6], has advantages similar to those of Algorithm 2: only a sequence of linear subprograms has to be solved, and both are appropriate for parallel computation with a suitable simplex partition procedure. During the parallel computation of Algorithm 1, only the updated upper bound needs to be communicated among processors in each iteration, so communication should not be a problem.
*Supported in part by NSF under Grant CCR-9222734.

In both Algorithms 2 and 3, in every iteration during the parallel computation all the processors have to communicate with each other and share the information they have obtained: the new upper bound, the new vertices created in the partition process, and the linear constraints added to define a new polyhedral set enclosing the feasible set. Since the amount of data to be passed is small, the communication overhead should not cause serious delay compared to the time spent solving the sequence of linear programs that constitutes the main computational load of these two algorithms. Therefore, we use a sequential computer to simulate the parallel behavior of the three algorithms, without modeling communication, on the test problems of Section 5.

The rest of this paper is organized as follows. The next section contains the basic idea of the methods. In Section 3 we discuss the fundamental implementation of these algorithms. Section 4 describes the algorithms in detail and proves their convergence. Some numerical test problems are given in Section 5. Conclusions are in the final section.

2. Basic Idea of the Methods

By introducing an additional variable t, problem (DC) can be rewritten as an equivalent global concave minimization problem of the form

(CP)    minimize t - g(x)    subject to f(x) ≤ t, x ∈ D.

Let D̃ = {(x, t) ∈ R^n × R : x ∈ D, f(x) ≤ t} be the feasible set of problem (CP). Given an n-simplex S with vertex set V(S) containing the feasible set D, a prism (generated by S) P = P(S) ⊂ R^n × R is defined by

P = {(x, t) ∈ R^n × R : x ∈ S, t_B ≤ t ≤ t_T}    (1)

where t_B = min{f(x) : x ∈ D} and t_T = max{f(v) : v ∈ V(S)}. Note that the former is a convex minimization problem which can be solved by any standard nonlinear programming method. Let t_B = f(x_B) (x_B ∈ D). The prism P has n + 1 vertical lines (parallel to the t-axis) which pass through the n + 1 vertices of S, respectively.

2.1.
An Outer Approximation Method Using Cutting Planes

Let the polyhedron P_0 = P. Obviously P_0 encloses D̃, and V(P_0) is known. Therefore, a lower bound for problem (CP) is determined by simply minimizing the objective function over all vertices of P_0 (the minimum of a concave function over a polytope is attained at a vertex). If the minimizing point is feasible for problem (CP), then it also provides an upper bound, and problem (CP) is solved. Otherwise, the point violates at least one constraint of problem (CP). In this case, we construct a hyperplane supporting a violated constraint which separates this minimizing point from D̃. In other words, this hyperplane cuts through the previous polyhedron P_0 and creates a new polyhedron P_1 which encloses the feasible set D̃ more tightly. All new vertices generated by the cut are easily determined by the method in [5]. Denote the vertex set of P_1 by V(P_1) and go to the next iteration. For the parallel computation procedure, we follow the approach in [10].

2.2. A Parallel Simplicial Algorithm

Here we introduce a simplicial algorithm to solve problem (CP) in parallel. In order to use a simplicial algorithm, we make a prismatic triangulation of P, i.e.

P = ∪_{i=1}^{r} P_i,

where r is an integer multiple of (n + 1), each P_i (i = 1, ..., r) is an (n + 1)-simplex, and each pair of simplices P_i, P_j (i ≠ j) intersects in at most
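To make the constructions of Section 2 concrete, the following is a minimal numerical sketch of the (DC) → (CP) reformulation and the prism bounds t_B, t_T of Eq. (1). The one-dimensional instance (f, g, D below) is an illustrative assumption of ours, not one of the paper's test problems:

```python
# Sketch of the (DC) -> (CP) reformulation and the prism of Eq. (1),
# on a hypothetical 1-D instance (our own choice, not from the paper):
#   f(x) = x^2 + 1   (convex),  g(x) = 2|x|  (convex),  D = [-2, 2].
# Here S = D is itself a 1-simplex containing D.

def f(x): return x * x + 1.0
def g(x): return 2.0 * abs(x)

V_S = [-2.0, 2.0]          # vertices of the enclosing simplex S

# t_B = min{f(x) : x in D} is a convex program; for simplicity we solve it
# on a fine grid (any standard NLP method would do).
grid = [-2.0 + 4.0 * k / 10000 for k in range(10001)]
t_B = min(f(x) for x in grid)     # attained at x = 0
t_T = max(f(v) for v in V_S)      # attained at x = +-2

# The prism P = {(x, t) : x in S, t_B <= t <= t_T} encloses the feasible
# set of (CP), and (DC) and (CP) share the same optimal value:
dc_val = min(f(x) - g(x) for x in grid)            # optimal value of (DC)
cp_val = min(f(x) - g(x) for x in grid)            # (CP) with t = f(x) active

print(t_B, t_T)        # prism height: 1.0 5.0
print(dc_val, cp_val)  # identical optimal values: 0.0 0.0
```

Because t - g(x) is concave in (x, t), its minimum over the prism (or any polytope containing D̃) is attained at a vertex, which is exactly what the vertex-based bounding steps of the three algorithms exploit.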
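Similarly, a single iteration of the cutting-plane scheme of Section 2.1 (vertex minimization, feasibility check, supporting-hyperplane cut) can be sketched on the same kind of toy instance; the instance and all names below are illustrative assumptions, not the paper's implementation:

```python
# One cutting-plane step of Section 2.1 on a hypothetical 1-D instance:
#   f(x) = x^2 + 1,  g(x) = 2|x|,  D = [-2, 2],  prism t-range [1, 5].

def f(x):  return x * x + 1.0
def fp(x): return 2.0 * x          # derivative of f (defines the cut)
def g(x):  return 2.0 * abs(x)

# P0 = prism = rectangle in the (x, t) plane.
P0 = [(-2.0, 1.0), (2.0, 1.0), (-2.0, 5.0), (2.0, 5.0)]

# Lower bound: minimize the concave objective t - g(x) over V(P0).
# (Ties at x = +-2; Python's min returns the first, (-2.0, 1.0).)
xk, tk = min(P0, key=lambda v: v[1] - g(v[0]))

# Feasibility check for (CP): we would need f(xk) <= tk.
assert f(xk) > tk                  # infeasible: f(-2) = 5 > 1, so cut

# Supporting hyperplane of {f(x) - t <= 0} at (xk, f(xk)):
#     t >= f(xk) + f'(xk) * (x - xk)
# Evaluated at x = xk, the cut requires t >= f(xk) = 5, which strictly
# excludes the infeasible vertex (xk, tk) while keeping all of D-tilde.
cut_at_xk = f(xk) + fp(xk) * (xk - xk)
print(cut_at_xk > tk)              # True: the vertex is cut off
```

Repeating this step yields the shrinking polyhedra P_0, P_1, ... of Section 2.1; the expensive part in higher dimensions is updating the vertex set after each cut, which is why the method of [5] is used for that purpose.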