VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY
INSTITUTE OF MATHEMATICS

Hoang Ngoc Tuan

DC ALGORITHMS AND NONCONVEX QUADRATIC PROGRAMMING

Speciality: Applied Mathematics
Speciality code: 62 46 01 12

SUMMARY OF PH.D. DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY IN MATHEMATICS

Supervisor: Prof. Dr. Hab. Nguyen Dong Yen

HANOI - 2015

The dissertation was written on the basis of the author's research works carried out at the Institute of Mathematics, Vietnam Academy of Science and Technology.

Supervisor: Prof. Dr. Hab. Nguyen Dong Yen

First referee:
Second referee:
Third referee:

To be defended at the Jury of the Institute of Mathematics, Vietnam Academy of Science and Technology: on ......, at ...... o'clock.

The dissertation is publicly available at:
• The National Library of Vietnam
• The Library of the Institute of Mathematics

Introduction

Convex functions have many nice properties. For instance, a convex function, say ϕ : R^n → R, is continuous, directionally differentiable, and locally Lipschitz at any point u ∈ R^n. In addition, ϕ is Fréchet differentiable almost everywhere on R^n, i.e., the set of points where the gradient ∇ϕ(x) exists is of full Lebesgue measure. It is also known that the subdifferential

    ∂ϕ(u) := {x* ∈ R^n : ⟨x*, x − u⟩ ≤ ϕ(x) − ϕ(u) ∀x ∈ R^n}

of a convex function ϕ : R^n → R ∪ {+∞} at u ∈ dom ϕ := {x ∈ R^n : ϕ(x) < +∞} is a closed, convex set. If x ∉ dom ϕ, then one puts ∂ϕ(x) = ∅. The Fermat Rule for convex optimization problems asserts that x̄ ∈ R^n is a solution of the minimization problem min{ϕ(x) : x ∈ R^n} if and only if 0 ∈ ∂ϕ(x̄). Convex analysis is a powerful machinery for dealing with convex optimization problems. Note that convex programming is an important branch of optimization theory, which continues to draw the attention of many researchers worldwide.
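Both the subgradient inequality defining ∂ϕ(u) and the Fermat Rule can be checked numerically on a one-dimensional instance. The sketch below is our own illustration (the function |·| and all names are ours, not from the dissertation): it verifies on a grid that the subgradients of |·| at 0 are exactly the slopes in [−1, 1], and that 0 ∈ ∂ϕ(x̄) at the minimizer x̄ = 0.

```python
# Illustrative check of the subgradient inequality and the Fermat Rule
# for the convex function phi(x) = |x| (a toy example of ours).

def phi(x):
    return abs(x)

def in_subdifferential(s, u, grid):
    """Check the defining inequality  s*(x - u) <= phi(x) - phi(u)  on a grid."""
    return all(s * (x - u) <= phi(x) - phi(u) + 1e-12 for x in grid)

grid = [i / 100.0 for i in range(-300, 301)]

# At u = 0, every slope s in [-1, 1] is a subgradient of |.|, and no other is:
assert in_subdifferential(0.5, 0.0, grid)
assert in_subdifferential(-1.0, 0.0, grid)
assert not in_subdifferential(1.5, 0.0, grid)

# Fermat Rule: 0 lies in the subdifferential at the minimizer x̄ = 0.
assert in_subdifferential(0.0, 0.0, grid)
```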
If f : R^n → R is a given function, and if there exist convex functions g : R^n → R and h : R^n → R such that f(x) = g(x) − h(x) for every x ∈ R^n, then f is called a d.c. function. The abbreviation "d.c." comes from the combination of the words "Difference (of) Convex functions". More generally, a function f : R^n → R̄, where R̄ = R ∪ {±∞}, is said to be a d.c. function if there are lower semicontinuous, proper, convex functions g, h : R^n → R̄ such that f(x) = g(x) − h(x) for all x ∈ R^n. The convention (+∞) − (+∞) = +∞ is used here. Despite their (possible) nonconvexity, d.c. functions still enjoy some good properties of convex functions. A minimization problem with a geometrical constraint

    min{f(x) = g(x) − h(x) : x ∈ C},    (0.1)

where f, g, and h are given as above, and C ⊂ R^n is a nonempty closed convex set, is a typical DC programming problem. Setting f̃(x) = (g(x) + δ_C(x)) − h(x), where δ_C, with δ_C(x) = 0 for all x ∈ C and δ_C(x) = +∞ for all x ∉ C, is the indicator function of the set C, one can easily transform (0.1) into

    min{f̃(x) : x ∈ R^n},    (0.2)

which is an unconstrained DC programming problem with f̃ a d.c. function. DC programming and DC algorithms (DCA, for brevity) treat the problem of minimizing a function f = g − h, with g, h being lower semicontinuous, proper, convex functions on R^n, on the whole space. Usually, g and h are called d.c. components of f. The DCA are constructed on the basis of the DC programming theory and the duality theory of J. F. Toland. It was Pham Dinh Tao who suggested a general DCA theory, which has been developed intensively by him and Le Thi Hoai An starting from their fundamental paper "Convex analysis approach to D.C. programming: Theory, algorithms and applications" (Acta Mathematica Vietnamica, Vol. 22, 1997). Note that DC programming is among the most successful convex analysis approaches to nonconvex programming.
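For an indefinite quadratic f(x) = (1/2)x^T A x + b^T x, one common d.c. decomposition (a standard choice, though not the only one) takes g(x) = (ρ/2)‖x‖² + b^T x and h(x) = (1/2)x^T(ρI − A)x, with ρ at least the largest eigenvalue of A. The sketch below uses our own toy data to verify that f = g − h holds identically and that both components are convex.

```python
# A hedged sketch of the quadratic d.c. decomposition f = g - h, where
#   g(x) = (rho/2)||x||^2 + b^T x   and   h(x) = 0.5 x^T (rho*I - A) x.
# Both g and h are convex once rho >= largest eigenvalue of A.
import numpy as np

A = np.array([[2.0, 0.0], [0.0, -3.0]])   # indefinite, hence f is nonconvex
b = np.array([1.0, -1.0])
rho = max(np.linalg.eigvalsh(A)) + 1.0    # ensures rho*I - A is PSD

def f(x): return 0.5 * x @ A @ x + b @ x
def g(x): return 0.5 * rho * x @ x + b @ x
def h(x): return 0.5 * x @ (rho * np.eye(2) - A) @ x

# rho*I - A is positive semidefinite, so h (and obviously g) is convex:
assert min(np.linalg.eigvalsh(rho * np.eye(2) - A)) >= 0

# f = g - h holds identically; spot-check at a few points:
for x in [np.zeros(2), np.array([1.0, 2.0]), np.array([-0.5, 3.0])]:
    assert abs(f(x) - (g(x) - h(x))) < 1e-12
```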
One wishes to make an extension of convex programming, not too wide, so that the powerful tools of convex analysis and convex optimization can still be used, but sufficiently large to cover the most important nonconvex optimization problems. The set of d.c. functions, which is closed under the basic operations usually considered in optimization, serves this purpose well. Note that the convexity of the two components of the objective function has been employed widely in DC programming, both to obtain essential theoretical results and to construct efficient solution methods. The DC duality scheme of J. F. Toland is an example of such essential theoretical results. To be more precise, Toland's Duality Theorem asserts that, under mild conditions, the dual problem of a DC program is also a DC program, and the two problems have the same optimal value.

Due to their local character, DCA (i.e., DC algorithms) do not ensure the convergence of an iteration sequence to a global solution of the problem in question. However, with the help of a restart procedure, DCA applied to trust-region subproblems can yield a global solution. In practice, DCA have been successfully applied to many different nonconvex optimization problems, for which they have proved to be more robust and efficient than many standard methods; in particular, DCA work well for large-scale problems. Note also that, with appropriate decompositions of the objective functions, DCA can generate several standard algorithms of convex and nonconvex programming.

This dissertation studies DCA applied to the minimization of a quadratic function on a Euclidean ball (the so-called trust-region subproblem) and to the minimization of a quadratic function on a polyhedral convex set. These problems play important roles in optimization theory. Let A ∈ R^{n×n} be a symmetric matrix, b ∈ R^n a given vector, and r > 0 a real number.
The nonconvex quadratic programming problem with a convex constraint

    min{ f(x) := (1/2) x^T A x + b^T x : ‖x‖² ≤ r² },

where ‖x‖ = (Σ_{i=1}^n x_i²)^{1/2} denotes the Euclidean norm of x = (x_1, ..., x_n)^T ∈ R^n and ^T denotes matrix transposition, is called the trust-region subproblem. One encounters this problem while applying the trust-region method (see, e.g., A. R. Conn, N. I. M. Gould, and P. L. Toint, "Trust-Region Methods", 2000) to solve the unconstrained problem of finding the minimum of a C²-smooth function ϕ : R^n → R. Having an approximate solution x^k at step k of the trust-region method, to get a better approximate solution x^{k+1} one finds the minimum of ϕ on a ball with center x^k and a radius depending on a ratio defined by some calculations on ϕ and the point x^k. If one replaces ϕ with its second-order Taylor expansion around x^k, an auxiliary problem of the form of the trust-region subproblem appears, and x^{k+1} is a solution of this problem.

Consider a quadratic programming problem under linear constraints of the form

    min{ f(x) := (1/2) x^T Q x + q^T x : Dx ≥ d },

where Q ∈ R^{n×n} and D ∈ R^{m×n} are given matrices, and q ∈ R^n and d ∈ R^m are given vectors. It is assumed that Q is symmetric. This class of optimization problems is well known and has various applications. Basic qualitative properties related to the solution existence, the structure of the solution set, first-order and second-order necessary and sufficient optimality conditions, stability, and differential stability of the problem can be found in the books of B. Bank, J. Guddat, D. Klatte, B. Kummer, and K. Tammer, "Non-Linear Parametric Optimization" (1982), R. W. Cottle, J.-S. Pang, and R. E. Stone, "The Linear Complementarity Problem" (1992), G. M. Lee, N. N. Tam, and N. D. Yen, "Quadratic Programming and Affine Variational Inequalities: A Qualitative Study" (2005), and the references therein.
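A tiny numeric illustration (entirely our own example, not from the text) of why the trust-region subproblem is genuinely nonconvex: when A is indefinite, the minimizer lies on the boundary ‖x‖ = r rather than at an unconstrained stationary point, as a brute-force search over the disk shows.

```python
# Brute-force illustration: for an indefinite A the trust-region
# subproblem's minimizer sits on the boundary ||x|| = r.
import numpy as np

A = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite
b = np.array([0.0, 0.0])
r = 1.0

def f(x): return 0.5 * x @ A @ x + b @ x

# Search a polar grid of the disk ||x|| <= r.
best, best_x = np.inf, None
for rad in np.linspace(0.0, r, 101):
    for th in np.linspace(0.0, 2 * np.pi, 361):
        x = rad * np.array([np.cos(th), np.sin(th)])
        if f(x) < best:
            best, best_x = f(x), x

# The minimum value -1 is attained at (0, ±1), i.e., on the boundary:
assert abs(np.linalg.norm(best_x) - r) < 1e-6
assert abs(best - (-1.0)) < 1e-3
```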
The structure of the solution set and of the Karush-Kuhn-Tucker point set of this problem differs greatly from that of the trust-region subproblem, since the constraint set of the latter is convex, compact, and has smooth boundary. Our aim is to study the convergence and the convergence rate of DCA applied to the two problems mentioned above. An open question and a conjecture raised in two papers by H. A. Le Thi, T. Pham Dinh, and N. D. Yen (J. Global Optim., Vol. 49, 2011, pp. 481–495, and Vol. 53, 2012, pp. 317–329) are completely solved in this dissertation. By using some advanced tools, we are able to obtain complete results on the convergence of DCA sequences. Moreover, convergence rates of DCA sequences are established here for the first time. The results of this dissertation complement and develop the corresponding published results of T. Pham Dinh and H. A. Le Thi (SIAM J. Optim., Vol. 8, 1998), T. Pham Dinh, H. A. Le Thi, and F. Akoa (Optim. Methods Softw., Vol. 23, 2008), and H. A. Le Thi, T. Pham Dinh, and N. D. Yen (J. Global Optim., Vol. 49, 2011; Vol. 53, 2012).

The dissertation has three chapters and a list of references. Chapter 1, "Preliminaries", presents basic concepts and results of the general theory of DC programming and DCA. Chapter 2, "Minimization of a Quadratic Function on a Euclidean Ball", considers an application of DCA to trust-region subproblems. Here we present in detail a useful restart procedure that allows the algorithm to find a global solution. We also answer in the affirmative the question raised by H. A. Le Thi, T. Pham Dinh, and N. D. Yen (2012) about the convergence of DCA. Furthermore, the convergence rate of DCA is studied. Chapter 3, "Minimization of a Quadratic Function on a Polyhedral Convex Set", investigates an application of DCA to indefinite quadratic programs under linear constraints. Here we solve in the affirmative a conjecture raised by H. A. Le Thi, T. Pham Dinh, and N. D.
Yen (2011) about the boundedness of the DCA sequences. First, by a direct proof, we obtain the boundedness of the DCA sequences for quadratic programs in R². Then, by using some error bounds for affine variational inequalities, we establish the R-linear convergence rate of the algorithm, and hence give a complete solution of the conjecture.

The results of Chapter 2 were published in the Journal of Global Optimization [5] (a joint work with N. D. Yen) and in the Journal of Optimization Theory and Applications [2]. Chapter 3 is written on the basis of the paper [3], published in the Journal of Optimization Theory and Applications, and of the paper [4], published in the Journal of Mathematical Analysis and Applications. The above results were reported by the author of this dissertation at the Seminar of the Department of Numerical Analysis and Scientific Computing of the Hanoi Institute of Mathematics, the 8th Vietnam-Korea Workshop "Mathematical Optimization Theory and Applications" (University of Dalat, December 8-10, 2011), the 5th International Conference on High Performance Scientific Computing (March 5-9, 2012, Hanoi, Vietnam), the Joint Congress of the French Mathematical Society (SMF) and the Vietnamese Mathematical Society (VMS) (University of Hue, August 20-24, 2012), the 8th Vietnamese Mathematical Conference (Nha Trang, August 10-14, 2013), and the 12th Workshop on Optimization and Scientific Computing (Ba Vi, April 23-25, 2014).

Chapter 1
Preliminaries

This chapter reviews some background material on DC algorithms. For more details, we refer to H. A. Le Thi and T. Pham Dinh's papers (1997, 1998), H. N. Tuan's Master dissertation ("DC Algorithms and Applications in Quadratic Programming", Hanoi, 2010), and the references therein.

1.1 Toland's Duality Theorem for DC Programs and Iteration Algorithms

Consider the space R^n equipped with the canonical inner product ⟨·, ·⟩. Then the dual space of R^n can be identified with R^n.
A function θ : R^n → R̄ is said to be proper if it does not take the value −∞ and is not identically equal to +∞, i.e., there is some x ∈ R^n with θ(x) ∈ R. The effective domain of θ is defined by dom θ := {x ∈ R^n : θ(x) < +∞}. Let Γ₀(R^n) be the set of all lower semicontinuous, proper, convex functions on R^n. The conjugate function g* of the function g ∈ Γ₀(R^n) is defined by

    g*(y) = sup{⟨x, y⟩ − g(x) : x ∈ R^n}  ∀y ∈ R^n.

Note that g* : R^n → R̄ is also a lower semicontinuous, proper, convex function. In the sequel, we use the convention (+∞) − (+∞) = +∞.

Definition 1.1. The optimization problem

    inf{f(x) := g(x) − h(x) : x ∈ R^n},    (P)

where g and h are functions belonging to Γ₀(R^n), is called a DC program. The functions g and h are called d.c. components of f.

Definition 1.2. For any g, h ∈ Γ₀(R^n), the DC program

    inf{h*(y) − g*(y) : y ∈ R^n},    (D)

is called the dual problem of (P).

Proposition 1.1 (Toland's Duality Theorem). The DC programs (P) and (D) have the same optimal value.

Definition 1.3. A vector x* ∈ R^n is said to be a local solution of (P) if f(x*) = g(x*) − h(x*) is finite (i.e., x* ∈ dom g ∩ dom h) and there exists a neighborhood U of x* such that g(x*) − h(x*) ≤ g(x) − h(x) for all x ∈ U. If U can be chosen as R^n, then x* is called a (global) solution of (P).

Proposition 1.2 (First-order optimality condition). If x* is a local solution of (P), then ∂h(x*) ⊂ ∂g(x*).

Definition 1.4. A point x* ∈ R^n satisfying ∂h(x*) ⊂ ∂g(x*) is called a stationary point of (P).

Definition 1.5. A point x* ∈ R^n is said to be a critical point of (P) if ∂g(x*) ∩ ∂h(x*) ≠ ∅.

If ∂h(x*) ≠ ∅ and x* is a stationary point of (P), then x* is a critical point of (P). The reverse implication does not hold in general. The idea of the theory of DC algorithms (DCA for brevity) is to construct two sequences {x^k} and {y^k} (approximate solution sequences of (P) and (D), respectively) in an appropriate way such that:

[...]
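For a smooth quadratic instance, the two DCA sequences can be written explicitly: y^k = ∇h(x^k) and x^{k+1} = ∇g*(y^k). The sketch below is our own toy example (here A is chosen positive definite so that f = g − h has the unique minimizer x* = −A⁻¹b): it runs this scheme with the quadratic decomposition g(x) = (ρ/2)‖x‖² + b^T x, h(x) = (1/2)x^T(ρI − A)x, and checks the monotone decrease of f along the DCA sequence.

```python
# A minimal sketch of the generic DCA scheme  y^k = ∇h(x^k),
# x^{k+1} = ∇g*(y^k)  for a smooth quadratic decomposition (our own data).
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])    # symmetric positive definite
b = np.array([1.0, -2.0])
rho = max(np.linalg.eigvalsh(A)) + 1.0    # makes h convex

def f(x): return 0.5 * x @ A @ x + b @ x

x = np.array([5.0, -7.0])                 # arbitrary starting point
values = [f(x)]
for _ in range(200):
    y = (rho * np.eye(2) - A) @ x         # y^k = ∇h(x^k)
    x = (y - b) / rho                     # x^{k+1} = ∇g*(y^k) = argmin(g - <., y^k>)
    values.append(f(x))

# The DCA sequence decreases f monotonically and reaches the minimizer:
assert all(v2 <= v1 + 1e-12 for v1, v2 in zip(values, values[1:]))
assert np.allclose(x, -np.linalg.solve(A, b), atol=1e-6)
```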
Investigations on the following issues would be of interest:
1. Sufficient conditions for the R-linear convergence of the DCA sequences studied in Chapter 2;
2. Q-linear convergence of the DCA sequences studied in Chapter 3;
3. Solving (3.1) globally by DCA;
4. Convergence and convergence rate of DCA sequences generated by proximal DC decomposition algorithms;
5. Applicability of the general DC algorithm discussed [...]

[...] version of DCA is no longer a local optimization algorithm. In fact, it is a very efficient global optimization algorithm to solve (P).

2.3 Auxiliary Results

This section presents 4 lemmas together with 2 corollaries, which are needed in establishing the convergence of DCA sequences.

2.4 Convergence Theorem

The main result of this section is the following.

Theorem 2.2. For every initial point x0 ∈ R^n, the DCA sequence [...]

[...] on the data sets. Then, by a different approach, we give a complete solution to Conjecture 1.

3.3 DCA Sequences in Two-Dimensional Spaces

Our result on the boundedness of the DCA sequences in two-dimensional spaces can be stated as follows.

Theorem 3.2. If (3.1) has a global solution and if n = 2, then every DCA sequence generated by Algorithm A is bounded.

The arguments for proving Theorem 3.2 cannot [...]

[...] a DC algorithm to the trust-region subproblem; sufficient conditions for the Q-linear convergence of the DCA iteration sequences in question;
- A direct proof of the boundedness of DCA sequences in two-dimensional quadratic programming; a theorem on the convergence and the R-linear convergence rate of DCA sequences for solving indefinite quadratic programs under linear constraints.
Our results complement [...]

[...] that has no multiple negative eigenvalues, then any DCA sequence of (2.1) converges to a KKT point", H. A. Le Thi, T. Pham Dinh, and N. D. Yen (J. Global Optim., 2012) have posed the following

Question. Under what conditions is the DCA sequence {x^k} convergent?
The next sections are aimed at solving completely the above Question. It will be proved that any DCA sequence constructed by the Pham Dinh–Le Thi algorithm [...]

[...] Hence, if (ρ − λ1)/(ρ + λ*) ∈ (0, 1), then {x^k} converges Q-linearly to x*. From Theorem 2.3 we can derive the following sufficient conditions for the linear convergence rate of DCA sequences for problem (2.1).

Theorem 2.4. Let {x^k} be a DCA sequence of problem (2.1) converging to a KKT point x*. Let λ* ≥ 0 be the Lagrange multiplier associated with x*. The following is valid: (a) If A is positive definite, [...]

Chapter 2
Minimization of a Quadratic Function on a Euclidean Ball

In this chapter, we prove that any DCA sequence constructed by the Pham Dinh–Le Thi algorithm for the trust-region subproblem converges to a Karush-Kuhn-Tucker point. We also obtain sufficient conditions for the Q-linear convergence of DCA sequences. In addition, we give two examples to show that, if the sufficient conditions are not satisfied, [...]

[...] subproblem (2.1) converges to a KKT point.

2.2.2 Restart Procedure

DCA for finding global solutions of (2.1):
Start. Compute λ1 (the smallest eigenvalue of A), λn (the largest eigenvalue of A), and an eigenvector u corresponding to λ1 by a suitable algorithm. Take ρ > max{0, λn}, x ∈ dom f, stop := false.
While stop = false do
1. Implement DCA with an initial point x0 := x to get a KKT point x*.
2. Take λ* [...]

[...] 4. If ‖x^{k+1} − x^k‖ < ε, then terminate the computation. Otherwise, increase k by 1 and resume the test (2.3).

For ε = 0, the above algorithm generates an infinite sequence {x^k}_{k≥0}, called a DCA sequence. Basic properties of DCA sequences produced by the above algorithm can be stated as follows.

Theorem 2.1 (Pham Dinh and Le Thi, 1998). For any k ≥ 1,

    f(x^{k+1}) ≤ f(x^k) − (1/2)(ρ + θ1) ‖x^{k+1} − x^k‖²,

where [...]
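With the decomposition g = (ρ/2)‖·‖² + δ_B (B the ball of radius r) and h = (ρ/2)‖·‖² − f, one DCA step for (2.1) reduces to computing y^k = (ρI − A)x^k − b and projecting y^k/ρ onto the ball. The following is a hedged sketch of this projected iteration (variable names and test data are ours; the stopping logic is simplified to a fixed iteration count instead of the test (2.3)).

```python
# A sketch of a projected DCA iteration for the trust-region subproblem
#   min{ 0.5 x^T A x + b^T x : ||x|| <= r }   (our own toy instance).
import numpy as np

def dca_trs(A, b, r, x0, rho, iters=500):
    x = x0.astype(float)
    for _ in range(iters):
        y = (rho * np.eye(len(b)) - A) @ x - b   # y^k = ∇h(x^k)
        x = y / rho                               # candidate x^{k+1}
        nx = np.linalg.norm(x)
        if nx > r:                                # project back onto the ball
            x = (r / nx) * x
    return x

A = np.array([[1.0, 0.0], [0.0, -2.0]])          # indefinite
b = np.array([0.5, 0.0])
r = 1.0
rho = max(np.linalg.eigvalsh(A)) + 1.0           # rho > max{0, lambda_n}

x = dca_trs(A, b, r, np.array([0.3, 0.1]), rho)

# The limit satisfies the KKT system  A x + b + lam * x = 0,  lam >= 0:
lam = -((A @ x + b) @ x) / (x @ x)
assert np.linalg.norm(A @ x + b + lam * x) < 1e-5
assert lam >= -1e-8 and np.linalg.norm(x) <= r + 1e-8
```

On this instance the iterates settle on the boundary point (−1/6, √35/6) with multiplier λ = 2, which is consistent with the restart procedure's goal of reaching a KKT point.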
[...] if lim sup_{k→∞} ‖x^k − x*‖^{1/k} < 1, then one says that {x^k} is R-linearly convergent to x*. It is well known that Q-linear convergence implies R-linear convergence, but the reverse implication is not true.

Theorem 2.3. Let a DCA sequence {x^k}_{k≥0} for the trust-region subproblem (2.1) be convergent to a KKT point x*, and let λ* be the Lagrange multiplier corresponding to x*. Then we have

    ‖x^{k+1} − x*‖ ≤ ((ρ − λ1)/(ρ + λ*)) ‖x^k − x*‖.
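The estimate of Theorem 2.3 can be spot-checked numerically. In the sketch below (our own data), A is positive definite and ‖A⁻¹b‖ < r, so the solution is interior, the multiplier λ* = 0, and each DCA step contracts the error by at most (ρ − λ1)/(ρ + λ*).

```python
# Numeric spot-check of the Theorem 2.3 estimate
#   ||x^{k+1} - x*|| <= (rho - lambda_1)/(rho + lambda*) * ||x^k - x*||
# on an instance with an interior solution (lambda* = 0).
import numpy as np

A = np.diag([2.0, 1.0])
b = np.array([0.2, -0.1])
r = 10.0
rho = max(np.linalg.eigvalsh(A)) + 1.0           # rho = 3
lam1 = min(np.linalg.eigvalsh(A))                # lambda_1 = 1
lam_star = 0.0                                   # interior solution
ratio_bound = (rho - lam1) / (rho + lam_star)    # = 2/3

x_star = -np.linalg.solve(A, b)                  # interior: ||x*|| < r
assert np.linalg.norm(x_star) < r

x = np.array([3.0, -4.0])
for _ in range(50):
    # DCA step; iterates stay inside the ball, so no projection is needed:
    x_next = ((rho * np.eye(2) - A) @ x - b) / rho
    # Each step satisfies the Theorem 2.3 estimate:
    err_next = np.linalg.norm(x_next - x_star)
    assert err_next <= ratio_bound * np.linalg.norm(x - x_star) + 1e-12
    x = x_next
```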