
Duality in convex optimization



Chapter 5. Duality in convex optimization
tvnguyen (University of Science), Convex Optimization

The Fermat rule

Proposition. Let f : IR^n → IR ∪ {+∞} be a closed convex and proper function. Then, for an element x* ∈ IR^n, the two following statements are equivalent:
(i) f(x*) ≤ f(x) for all x ∈ IR^n
(ii) 0 ∈ ∂f(x*)

The necessary and sufficient condition 0 ∈ ∂f(x*) is an extension of the classical optimality condition for convex C^1 functions: ∇f(x*) = 0. So finding the optimal solutions of f can be attacked by solving the generalized equation 0 ∈ ∂f(x).

The constrained convex problem

Consider the following optimization problem

    (P)   min { f0(x) | x ∈ C }

where f0 : IR^n → IR ∪ {+∞} is a closed convex and proper function (the objective function) and C is a closed convex nonempty subset of IR^n (the constraint set). Assume dom f0 ∩ C ≠ ∅.

Setting f = f0 + δ_C, this problem can be written in the equivalent form

    min { f(x) | x ∈ IR^n }

When dom f0 ∩ int C ≠ ∅, we have ∂f(x) = ∂f0(x) + ∂δ_C(x). So

    x* is an optimal solution of (P)  ⇔  0 ∈ ∂f0(x*) + ∂δ_C(x*)

To describe ∂δ_C(x), we need to introduce the notions of tangent cone and normal cone to C at x.

The tangent and normal cones

Definition. Let C be a closed convex nonempty subset of IR^n and let x ∈ C.
(a) The tangent cone to C at x, denoted T_C(x), is defined by

    T_C(x) = cl( ∪_{λ≥0} λ (C − x) )

It is the closure of the cone spanned by C − x.
(b) The normal cone N_C(x) to C at x is the polar cone of T_C(x):

    N_C(x) = { x* ∈ IR^n | ⟨x*, y⟩ ≤ 0 for all y ∈ T_C(x) }
           = { x* ∈ IR^n | ⟨x*, y − x⟩ ≤ 0 for all y ∈ C }

Illustration

[Figure: examples of tangent cones and normal cones.]

Properties

Proposition. Let C be a closed convex nonempty subset of IR^n and let x ∈ C. Then
(i) T_C(x) is a closed convex cone containing 0
(ii) T_C(x) = IR^n when x ∈ int C
(iii) N_C(x) is a closed convex cone containing 0
(iv) N_C(x) = {0} when x ∈ int C

Proposition. Let C be a closed convex nonempty subset of IR^n and let x ∈ C. Then ∂δ_C(x) = N_C(x).

The constrained convex problem

Consider again the optimization problem

    (P)   min { f0(x) | x ∈ C }

where f0 : IR^n → IR ∪ {+∞} is a closed convex and proper function and C is a closed convex nonempty subset of IR^n.

Proposition. Assume that the following qualification assumption is satisfied: dom f0 ∩ int C ≠ ∅. Then the following statements are equivalent:
(i) x* is an optimal solution to (P);
(ii) x* is a solution to the equation 0 ∈ ∂f0(x*) + N_C(x*);
(iii) x* ∈ C and there exists s ∈ ∂f0(x*) such that ⟨s, x − x*⟩ ≥ 0 for all x ∈ C.
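Condition (iii) is a variational inequality that can be tested numerically on a concrete instance. The sketch below is not from the slides: it assumes NumPy and the illustrative choice f0(x) = 0.5‖x − a‖² with C the unit box, so that the minimizer x* is the projection of a onto C, and it checks ⟨∇f0(x*), x − x*⟩ ≥ 0 on sampled points of C.

```python
# A minimal numerical sketch (not from the slides) of optimality condition (iii):
# f0(x) = 0.5*||x - a||^2 and C = [0, 1]^n (an illustrative box constraint).
# The minimizer of f0 over C is the projection x* = clip(a, 0, 1), and the
# variational inequality <grad f0(x*), x - x*> >= 0 must hold for every x in C.
import numpy as np

rng = np.random.default_rng(0)
n = 5
a = rng.normal(size=n)

x_star = np.clip(a, 0.0, 1.0)   # projection of a onto the box C
grad = x_star - a               # gradient of f0 at x*; here ∂f0(x*) = {grad}

# Sample many points of C and check <grad, x - x*> >= 0 (up to rounding error).
samples = rng.uniform(0.0, 1.0, size=(10_000, n))
gaps = (samples - x_star) @ grad
print("min over samples of <grad f0(x*), x - x*>:", gaps.min())   # expect >= -1e-12
```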
The mathematical programming problem

Consider the problem

    (P)   min f(x)   s.t.  g_i(x) ≤ 0,  i = 1, ..., m

where f : IR^n → IR ∪ {+∞} is closed convex and proper, and g_1, ..., g_m : IR^n → IR are convex. Here the constraint set C has the following specific form:

    C = { x ∈ IR^n | g_i(x) ≤ 0, i = 1, ..., m }

This problem is of fundamental importance: a large number of problems in decision sciences, engineering, and so forth can be written as mathematical programming problems.

N_C(x) when C = {x ∈ IR^n | g(x) ≤ 0}

Proposition. Let C = {x ∈ IR^n | g(x) ≤ 0} where g : IR^n → IR is convex (and thus also continuous). Assume that C satisfies the following Slater property: there exists some x0 ∈ C such that g(x0) < 0. Then, for every x ∈ C,

    N_C(x) = {0}           if g(x) < 0,
    N_C(x) = IR_+ ∂g(x)    if g(x) = 0.

As a consequence,

    s ∈ N_C(x)  ⇔  there exists λ ≥ 0 such that s ∈ λ ∂g(x) and λ g(x) = 0.

N_C(x) when C = {x ∈ IR^n | g_i(x) ≤ 0, i = 1, ..., m}

Proposition. Let C = ∩_{1≤i≤m} C_i where, for each i = 1, ..., m, C_i = {x ∈ IR^n | g_i(x) ≤ 0} and g_i : IR^n → IR is convex. Assume that C satisfies the following Slater property: there exists some x0 ∈ C such that g_i(x0) < 0, i = 1, ..., m. Then x0 ∈ ∩_i int C_i, δ_C = δ_{C_1} + ... + δ_{C_m}, and (by the subdifferential rule for the sum of convex functions) ∂δ_C = ∂δ_{C_1} + ... + ∂δ_{C_m}. As a consequence, for every x ∈ C,

    N_C(x) = N_{C_1}(x) + ... + N_{C_m}(x)

[...] (λ*_i > 0 ⇒ g_i(x*) = 0). In other terms, a multiplier associated with an inactive constraint (i.e., g_i(x*) < 0) is equal to zero.

Min-max duality

The basic concept is the concept of saddle point.

Definition 5.1.1. A saddle problem calls for a solution (x*, y*) of a double inequality of the form

    F(x*, y) ≤ F(x*, y*) ≤ F(x, y*)   for all x ∈ X, y ∈ Y

Such a point (x*, y*) is called a saddle point of F.

Example of a saddle point

[Figure: saddle surface of F(x, y) = x² − y².]
(x*, y*) = (0, 0) is a saddle point of F(x, y) = x² − y².

Saddle problem

Saddle point. When (x*, y*) [...]

Characterization of saddle points

Theorem 5.1.1. Let X × Y ⊆ IR^n × IR^q and let F : X × Y → IR be a given function. Then

    sup_{y∈Y} inf_{x∈X} F(x, y) ≤ inf_{x∈X} sup_{y∈Y} F(x, y)

For (x*, y*) ∈ X × Y, the following conditions are equivalent:
(1) (x*, y*) is a saddle point of F on X × Y;
(2) x* ∈ arg min_{x∈X} p(x), y* ∈ arg max_{y∈Y} d(y), and sup_{y∈Y} inf_{x∈X} F(x, y) = inf_{x∈X} sup_{y∈Y} F(x, y);
(3) p(x*) = d(y*) = F(x*, y*).

Lagrangian duality

Consider the problem

    (P)   min f(x)   s.t.  g_i(x) ≤ 0,  i = 1, ..., m

where f, g_i : IR^n → IR, i = 1, ..., m. We denote by p* the optimal value of (P). In view of writing problem (P) under the form of a min-max problem, we consider the Lagrangian function defined by

    L(x, λ) = f(x) + Σ_{i=1}^m λ_i g_i(x)

Lagrangian duality (I)

We use the min-max duality with X = IR^n, Y = IR^m_+ and F(x, λ) = L(x, λ). So we have

    p(x) = sup_{λ∈Y} L(x, λ)   and   d(λ) = inf_{x∈X} L(x, λ)

The corresponding optimization [...]

The dual problem

Problem (PD) will be denoted (D) and written under the form:

    (D)   max d(λ)   s.t.  λ ≥ 0

The function d(λ) = inf_{x∈IR^n} L(x, λ) is called the dual function.

Proposition. The dual function d(λ) is a concave function.
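To make the dual function concrete, here is a hand-checkable sketch (not from the slides) for the illustrative one-dimensional problem min 0.5x² subject to g(x) = 1 − x ≤ 0. Its Lagrangian is L(x, λ) = 0.5x² + λ(1 − x); the inner minimization gives x = λ, hence d(λ) = λ − 0.5λ², a concave function whose maximum over λ ≥ 0 equals the primal value p* = 0.5 (NumPy is assumed).

```python
# A hand-checkable sketch (not from the slides) of the dual function for the
# illustrative problem  min 0.5*x^2  s.t.  g(x) = 1 - x <= 0.
# Lagrangian: L(x, lam) = 0.5*x^2 + lam*(1 - x); minimizing over x gives x = lam,
# so d(lam) = lam - 0.5*lam^2, which is concave as the proposition states.
import numpy as np

def dual(lam):
    # d(lam) = inf_x L(x, lam), computed in closed form for this toy problem
    return lam - 0.5 * lam ** 2

lams = np.linspace(0.0, 3.0, 301)
d_vals = dual(lams)

p_star = 0.5                 # primal optimum: x* = 1, f(x*) = 0.5
d_star = d_vals.max()        # attained at lam* = 1 on this grid
print("weak duality  d(lam) <= p* for all lam >= 0:", bool(np.all(d_vals <= p_star + 1e-12)))
print("strong duality d* = p*                     :", bool(np.isclose(d_star, p_star)))
```

Note that Slater's condition holds for this toy problem (for instance x0 = 2 gives g(x0) = −1 < 0), which is consistent with strong duality being observed.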
[...]

Strong Duality Theorem II

Theorem. Assume that problem (P) is convex and that all the constraints are affine. Let x* be a solution to (P). Then the Lagrange multipliers λ* associated with x* are a solution to (D), and the strong duality property p* = d* holds.

Duality theory is interesting when the dual problem [...] when the strong duality property holds.

Solving the dual to get the solution of (P)

Let λ* be a solution to (D). In order to recover a primal solution from λ*, a strategy consists in finding x* such that (x*, λ*) is a saddle point of L. Observe that this strategy implies that the strong duality property [...]

[...] every point x* such that
(1) L(x*, λ*) = min_{x∈IR^n} L(x, λ*),
(2) all the constraints of problem (P) are satisfied at x*,
(3) λ*_i g_i(x*) = 0, i = 1, ..., m,
is a solution to problem (P).

Fenchel's duality

Consider the problem of minimizing a difference f(x) − g(x) where f is a proper convex function and −g is a proper convex function. Particular case: minimizing f over a convex set C (take g = −δ_C). The duality consists in the connection between minimizing f − g and maximizing the concave function g* − f*.

Proposition. Let f, −g be proper convex functions. One has

    inf { f(x) − g(x) } = sup { g* [...]
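Returning to the primal-recovery conditions (1)-(3) above, they can be verified directly on the toy problem used in the previous sketch. This is an illustration under the same assumptions (min 0.5x² s.t. g(x) = 1 − x ≤ 0, with dual solution λ* = 1), not an example from the slides.

```python
# A sketch (not from the slides) checking conditions (1)-(3) for the toy problem
#   min 0.5*x^2  s.t.  g(x) = 1 - x <= 0,   with dual solution lam* = 1.
# Condition (1) gives x* = argmin_x L(x, lam*) = lam*; we then check feasibility
# and the complementarity condition lam*_i * g_i(x*) = 0.
lam_star = 1.0                    # dual solution (maximizer of d(lam) = lam - 0.5*lam^2)
x_star = lam_star                 # (1) argmin_x {0.5*x^2 + lam*(1 - x)} = lam*

g_val = 1.0 - x_star              # constraint value g(x*)
print("(2) feasible, g(x*) <= 0:", g_val <= 1e-12)
print("(3) lam* * g(x*) = 0    :", abs(lam_star * g_val) <= 1e-12)
print("primal value 0.5*x*^2   :", 0.5 * x_star ** 2, "(equals d* = 0.5)")
```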
