Limiting Subgradients of the Marginal Function in Some Pathological Smooth Programming Problems

Thai Doan Chuong (a)

Abstract. In this paper we show that the results of Mordukhovich, Nam and Yen [6] on differential stability in parametric programming can be used to derive upper estimates for the limiting subgradients of the marginal function in some pathological smooth programming problems proposed by Gauvin and Dubeau [2].

1. Introduction

Let $\varphi : X \times Y \to \overline{\mathbb{R}}$ be a function taking values in the extended real line $\overline{\mathbb{R}} := [-\infty, \infty]$, and let $G : X \rightrightarrows Y$ be a set-valued mapping between Banach spaces. Consider the parametric programming problem

  minimize $\varphi(x, y)$ subject to $y \in G(x)$.  (1.1)

The extended-real-valued function

  $\mu(x) := \inf\{\varphi(x, y) \mid y \in G(x)\}$  (1.2)

is said to be the marginal function (or the value function) of (1.1). The solution map $M(\cdot)$ of the problem is defined by

  $M(x) := \{y \in G(x) \mid \mu(x) = \varphi(x, y)\}$.  (1.3)

For (1.1), we say that $\varphi$ is the objective function and $G$ is the constraint mapping.

Continuity and differentiability properties of $\mu$ in the case where $X = \mathbb{R}^n$, $Y = \mathbb{R}^m$, $\varphi$ is a smooth function (i.e., a $C^1$-function), and $G(x)$ is the set of all $y$ satisfying the parametric inequality/equality system

  $g_i(x, y) \le 0$, $i = 1, \dots, p$;  $h_j(x, y) = 0$, $j = 1, \dots, q$,  (1.4)

where $g_i : X \times Y \to \mathbb{R}$ ($i = 1, \dots, p$) and $h_j : X \times Y \to \mathbb{R}$ ($j = 1, \dots, q$) are smooth functions, were first studied by Gauvin and Tolle [3] and by Gauvin and Dubeau [2]. Their results and ideas have been extended and applied by many authors; see Mordukhovich, Nam and Yen [6], where the case in which $\varphi$ is a nonsmooth function and $G$ is an arbitrary set-valued map between Banach spaces is investigated, and the references therein.

We will show that the results of [6] on differential stability in parametric programming can be used to estimate the set of limiting subgradients (i.e., the limiting subdifferential) of the marginal function in six "pathological" smooth programming problems proposed by Gauvin and Dubeau [2]. Thus, general results on differentiability properties of the marginal function of (1.1) are very useful even in the classical finite-dimensional smooth setting of the problem. We also consider several examples illustrating the results of [6]. Unlike the corresponding examples in that paper, all the problems considered herein are smooth.

The emphasis in [1]-[3] was on the Clarke subgradients of $\mu$, while the main concern of [6] is the Fréchet and the limiting subgradients of $\mu$. The reader is referred to [4, 5] for interesting comments on the development of the concepts of subgradients just mentioned. Note that, under very mild assumptions on $X$ and $\varphi$, the convex hull of the limiting subdifferential of $\varphi$ at a given point $\bar{x} \in X$ coincides with the Clarke subdifferential of $\varphi$ at that point. So the limiting subdifferential can be considered as the (nonconvex) core of the corresponding Clarke subdifferential, and upper estimates for the limiting subdifferential of marginal functions can lead to sharp upper estimates for the Clarke subdifferential.

(Received 30/3/2007; revised 23/5/2007.)

2. Preliminaries

Let us recall some material on generalized differentiation, which is available in [4, 5]. All spaces considered are Banach, unless otherwise stated.

Definition 2.1. Let $\varphi : X \to \overline{\mathbb{R}}$ be an extended-real-valued function which is finite at $\bar{x}$. Given any $\varepsilon \ge 0$, we say that a vector $x^*$ from the topological dual space $X^*$ of $X$ is an $\varepsilon$-subgradient of $\varphi$ at $\bar{x}$ if

  $\liminf_{x \to \bar{x}} \dfrac{\varphi(x) - \varphi(\bar{x}) - \langle x^*, x - \bar{x} \rangle}{\|x - \bar{x}\|} \ge -\varepsilon$.  (2.1)

Denote by $\hat{\partial}_\varepsilon \varphi(\bar{x})$ the set of the $\varepsilon$-subgradients of $\varphi$ at $\bar{x}$. Clearly, $\hat{\partial}_0 \varphi(\bar{x}) \subset \hat{\partial}_\varepsilon \varphi(\bar{x})$ for every $\varepsilon \ge 0$. The set $\hat{\partial}\varphi(\bar{x}) := \hat{\partial}_0 \varphi(\bar{x})$ is called the Fréchet subdifferential of $\varphi$ at $\bar{x}$.
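In simple one-dimensional cases, Definition 2.1 can be probed numerically. The following Python sketch (an illustration with ad hoc helper names, not part of the paper's development) samples the difference quotient of (2.1) on a fine grid near $\bar{x}$; a finite grid can only suggest, not certify, the value of a liminf, so the test is heuristic.

```python
import numpy as np

def is_frechet_subgradient(phi, xbar, xstar, eps=0.0, h=1e-6, n=4001):
    """Heuristically test the liminf inequality (2.1) on a grid near xbar."""
    xs = xbar + np.linspace(-h, h, n)
    xs = xs[xs != xbar]                      # the quotient is undefined at xbar
    q = (phi(xs) - phi(xbar) - xstar * (xs - xbar)) / np.abs(xs - xbar)
    return bool(q.min() >= -eps - 1e-9)

# phi(x) = |x|: the Frechet subdifferential at 0 is [-1, 1].
print([s for s in (-1.5, -1.0, 0.0, 1.0, 1.5)
       if is_frechet_subgradient(np.abs, 0.0, s)])               # [-1.0, 0.0, 1.0]

# phi(x) = -|x|: no Frechet subgradient exists at 0.
print([s for s in (-1.0, 0.0, 1.0)
       if is_frechet_subgradient(lambda x: -np.abs(x), 0.0, s)]) # []
```

For $\varphi(x) = |x|$ the probe accepts exactly the candidates lying in $[-1, 1]$, while for $\varphi(x) = -|x|$ it accepts none, consistent with $\hat{\partial}\varphi(0) = \emptyset$.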
Definition 2.2. For a set-valued mapping $F : X \rightrightarrows X^*$ between a Banach space $X$ and its dual $X^*$, the sequential Painlevé-Kuratowski upper limit of $F(x)$ as $x \to \bar{x}$ is defined by

  $\operatorname{Lim\,sup}_{x \to \bar{x}} F(x) := \{x^* \in X^* \mid \exists$ sequences $x_k \to \bar{x}$ and $x_k^* \xrightarrow{w^*} x^*$ with $x_k^* \in F(x_k)$ for all $k = 1, 2, \dots\}$,

where $w^*$ denotes the weak* topology in $X^*$.

Definition 2.3. The limiting subdifferential (or the Mordukhovich/basic subdifferential) of $\varphi$ at $\bar{x}$ is defined by setting

  $\partial\varphi(\bar{x}) := \operatorname{Lim\,sup}_{x \xrightarrow{\varphi} \bar{x},\ \varepsilon \downarrow 0} \hat{\partial}_\varepsilon \varphi(x)$,  (2.2)

where $x \xrightarrow{\varphi} \bar{x}$ means that $x \to \bar{x}$ with $\varphi(x) \to \varphi(\bar{x})$. The singular subdifferential of $\varphi$ at $\bar{x}$ is given by

  $\partial^\infty \varphi(\bar{x}) := \operatorname{Lim\,sup}_{x \xrightarrow{\varphi} \bar{x},\ \varepsilon, \lambda \downarrow 0} \lambda \hat{\partial}_\varepsilon \varphi(x)$.  (2.3)

Remark 2.4 (see [4]). If $X$ is an Asplund space (i.e., a space whose separable subspaces have separable duals) and if $\varphi$ is lower semicontinuous around $\bar{x}$, then we can equivalently put $\varepsilon = 0$ in (2.2). Moreover, we have $\partial\varphi(\bar{x}) \ne \emptyset$ for every locally Lipschitzian function.

Definition 2.5. Let $X$ be a Banach space and $f : X \to \mathbb{R}$ a Lipschitzian function around $\bar{x}$. The Clarke subdifferential of $f$ at $\bar{x}$ is the set

  $\partial^{CL} f(\bar{x}) := \Big\{x^* \in X^* \;\Big|\; \langle x^*, v \rangle \le \limsup_{x' \to \bar{x},\, t \to 0^+} \dfrac{f(x' + tv) - f(x')}{t}$ for all $v \in X\Big\}$.  (2.4)

Remark 2.6 (see [4, Theorem 3.57]). For the Clarke subdifferential in Asplund spaces, we have

  $\partial^{CL} f(\bar{x}) = \operatorname{cl}^* \operatorname{co}[\partial f(\bar{x}) + \partial^\infty f(\bar{x})]$,  (2.5)

where "co" denotes the convex hull and "cl*" stands for the closure in the weak* topology of $X^*$.

Remark 2.7 (see [4]). If $\varphi : \mathbb{R}^n \to \mathbb{R}$ is strictly differentiable at $\bar{x}$, then

  $\partial^{CL}\varphi(\bar{x}) = \partial\varphi(\bar{x}) = \{\nabla\varphi(\bar{x})\}$.  (2.6)

The domain and the graph of a map $F : X \rightrightarrows Y$ are defined, respectively, by setting

  $\operatorname{dom} F := \{x \in X \mid F(x) \ne \emptyset\}$,  $\operatorname{gph} F := \{(x, y) \in X \times Y \mid y \in F(x)\}$.

3. Subgradients of the value function in smooth programming problems

Consider (1.1) in the special case where the objective function $\varphi$ is smooth and the constraint set is given by

  $G(x) := \{y \in Y \mid \varphi_i(x, y) \le 0,\ i = 1, \dots, m;\ \varphi_i(x, y) = 0,\ i = m + 1, \dots, m + r\}$,  (3.1)

with $\varphi_i : X \times Y \to \mathbb{R}$ ($i = 1, \dots, m + r$) being some given smooth functions. Such problems are called smooth programming problems.

Definition 3.1. The classical Lagrangian is defined by setting

  $L(x, y, \lambda) = \varphi(x, y) + \lambda_1 \varphi_1(x, y) + \dots + \lambda_{m+r} \varphi_{m+r}(x, y)$,  (3.2)

where the scalars $\lambda_1, \dots, \lambda_{m+r}$ (and also the vector $\lambda := (\lambda_1, \dots, \lambda_{m+r}) \in \mathbb{R}^{m+r}$) are the Lagrange multipliers.

Given a point $(\bar{x}, \bar{y}) \in \operatorname{gph} M$ in the graph of the solution map $M(\cdot)$, we consider the set of Lagrange multipliers

  $\Lambda(\bar{x}, \bar{y}) := \Big\{\lambda \in \mathbb{R}^{m+r} \;\Big|\; \nabla_y L(\bar{x}, \bar{y}, \lambda) := \nabla_y \varphi(\bar{x}, \bar{y}) + \sum_{i=1}^{m+r} \lambda_i \nabla_y \varphi_i(\bar{x}, \bar{y}) = 0;\ \lambda_i \ge 0$ and $\lambda_i \varphi_i(\bar{x}, \bar{y}) = 0$ for $i = 1, \dots, m\Big\}$.  (3.3)

Definition 3.2. We say that the Mangasarian-Fromovitz constraint qualification condition holds at $(\bar{x}, \bar{y})$ if

  (3.4) the gradients $\nabla\varphi_{m+1}(\bar{x}, \bar{y}), \dots, \nabla\varphi_{m+r}(\bar{x}, \bar{y})$ are linearly independent, and there is $w \in X \times Y$ such that $\langle \nabla\varphi_i(\bar{x}, \bar{y}), w \rangle = 0$ for $i = m + 1, \dots, m + r$, and $\langle \nabla\varphi_i(\bar{x}, \bar{y}), w \rangle < 0$ whenever $i \in \{1, \dots, m\}$ with $\varphi_i(\bar{x}, \bar{y}) = 0$.

Definition 3.3. We say that the solution map $M : \operatorname{dom} G \rightrightarrows Y$ admits a local upper Lipschitzian selection at $(\bar{x}, \bar{y})$ if there exists a single-valued mapping $h : \operatorname{dom} G \to Y$ which satisfies $h(\bar{x}) = \bar{y}$ and for which there are constants $\ell > 0$ and $\delta > 0$ such that $h(x) \in M(x)$ and $\|h(x) - h(\bar{x})\| \le \ell \|x - \bar{x}\|$ for all $x \in \operatorname{dom} G \cap B_\delta(\bar{x})$. Here $B_\delta(\bar{x}) := \{x \in X \mid \|x - \bar{x}\| < \delta\}$.
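The multiplier set (3.3) is easy to examine numerically in the examples below. The following sketch (with the hypothetical helper name in_Lambda and an ad hoc grid scan; it is an illustration, not the paper's method) checks the defining conditions of $\Lambda(\bar{x}, \bar{y})$ for given gradient data; run on the data of Example 3.6 below, it recovers grid points of the segment $\{(t, 1 - t) \mid 0 \le t \le 1\}$.

```python
import numpy as np

def in_Lambda(lam, grad_y_phi, grads_y_con, values_con, m, tol=1e-9):
    # Conditions (3.3): grad_y L = 0; lam_i >= 0 and lam_i * phi_i = 0 for i <= m.
    lam = np.asarray(lam, float)
    stat = grad_y_phi + sum(l * g for l, g in zip(lam, grads_y_con))
    if np.linalg.norm(stat) > tol:
        return False
    ineq = lam[:m]
    return bool(np.all(ineq >= -tol) and
                np.all(np.abs(ineq * np.asarray(values_con[:m])) <= tol))

# Data of Example 3.6 below at (xbar, ybar) = (0, (0, 0)):
# grad_y phi = (0, -1), grad_y phi_1 = (0, 1), grad_y phi_2 = (0, 1),
# and both inequality constraints are active (phi_1 = phi_2 = 0).
gy_phi = np.array([0.0, -1.0])
gy_con = [np.array([0.0, 1.0]), np.array([0.0, 1.0])]
vals = [0.0, 0.0]

hits = [(a, b) for a in np.linspace(-0.5, 1.5, 9)
               for b in np.linspace(-0.5, 1.5, 9)
               if in_Lambda((a, b), gy_phi, gy_con, vals, m=2)]
print(hits)   # only pairs with a + b = 1 and a, b >= 0
```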
The next statement follows from [6, Theorem 4.1].

Theorem 3.4 (Fréchet subgradients of value functions in smooth nonlinear programs in Asplund spaces). Let $\mu(\cdot)$ be defined by (1.2). Take $\bar{x} \in \operatorname{dom} M$ and $\bar{y} \in M(\bar{x})$, and assume that the gradients

  $\nabla\varphi_1(\bar{x}, \bar{y}), \dots, \nabla\varphi_{m+r}(\bar{x}, \bar{y})$  (3.5)

are linearly independent. Then we have the inclusion

  $\hat{\partial}\mu(\bar{x}) \subset \bigcup_{\lambda \in \Lambda(\bar{x}, \bar{y})} \Big\{\nabla_x \varphi(\bar{x}, \bar{y}) + \sum_{i=1}^{m+r} \lambda_i \nabla_x \varphi_i(\bar{x}, \bar{y})\Big\}$.  (3.6)

Furthermore, (3.6) reduces to the equality

  $\hat{\partial}\mu(\bar{x}) = \bigcup_{\lambda \in \Lambda(\bar{x}, \bar{y})} \Big\{\nabla_x \varphi(\bar{x}, \bar{y}) + \sum_{i=1}^{m+r} \lambda_i \nabla_x \varphi_i(\bar{x}, \bar{y})\Big\}$  (3.7)

if the solution map $M : \operatorname{dom} G \rightrightarrows Y$ admits a local upper Lipschitzian selection at $(\bar{x}, \bar{y})$.

From [6, Corollary 4.3] we obtain the following result.

Corollary 3.5. Suppose that the assumptions imposed in the first part of Theorem 3.4 hold, where the spaces $X$ and $Y$ are Asplund and the qualification condition (3.5) is replaced by (3.4). Then we have the inclusion (3.6), which reduces to the equality (3.7) provided that the solution map $M : \operatorname{dom} G \rightrightarrows Y$ admits a local upper Lipschitzian selection at $(\bar{x}, \bar{y})$.

Let us consider some examples of smooth programming problems illustrating the results obtained in Theorem 3.4 and Corollary 3.5 and the assumptions made therein. We start with examples showing that the upper Lipschitzian assumption of Theorem 3.4 is essential, but not necessary, for the equality in the Fréchet subgradient inclusion (3.6). For convenience, denote by "RHS" and "LHS" the expressions standing on the right-hand side and the left-hand side of inclusion (3.6), respectively.

Example 3.6 (cf. [2, Example 3.4]). Let $X = \mathbb{R}$, $Y = \mathbb{R}^2$, and $\bar{x} = 0$, $\bar{y} = (0, 0)$. Consider the marginal function $\mu(\cdot)$ in (1.2) with $\varphi(x, y) = -y_2$, $y = (y_1, y_2) \in G(x)$, where

  $G(x) := \{y = (y_1, y_2) \in \mathbb{R}^2 \mid \varphi_1(x, y) = y_2 - y_1^2 \le 0,\ \varphi_2(x, y) = y_2 + y_1^2 - x \le 0\}$.

Then we have

  $\mu(x) = \begin{cases} -x & \text{if } x \le 0 \\ -x/2 & \text{otherwise}; \end{cases}$  $M(x) = \begin{cases} \{y = (y_1, y_2) \in G(x) \mid y_2 = x/2\} & \text{if } x > 0 \\ \{y = (y_1, y_2) \in G(x) \mid y_2 = x\} & \text{otherwise}; \end{cases}$

and $\Lambda(\bar{x}, \bar{y}) = \{(t, 1 - t) \mid 0 \le t \le 1\}$. Furthermore, $\nabla\varphi_1(\bar{x}, \bar{y}) = (0, 0, 1)$ and $\nabla\varphi_2(\bar{x}, \bar{y}) = (-1, 0, 1)$ are linearly independent. Hence RHS $= [-1, 0]$. On the other hand, a direct computation based on (2.1) gives LHS $= [-1, -1/2]$; i.e., inclusion (3.6) is strict. Observe that the solution map $M(\cdot)$ above does not admit any upper Lipschitzian selection at $(\bar{x}, \bar{y})$. This example shows that the latter assumption is essential for the validity of the equality in (3.6) asserted by Theorem 3.4.

Example 3.7. Let $X = Y = \mathbb{R}$ and $\bar{x} = \bar{y} = 0$. Consider the marginal function $\mu(\cdot)$ in (1.2) with $\varphi(x, y) = (x - y^2)^2$ and $G(x) = \{y \in \mathbb{R} \mid \varphi_1(x, y) = -(1 + y)^2 \le 0\}$. One easily deduces from (1.2) and (1.3) that

  $\mu(x) = \begin{cases} x^2 & \text{if } x < 0 \\ 0 & \text{otherwise}; \end{cases}$  $M(x) = \begin{cases} \{0\} & \text{if } x < 0 \\ \{-\sqrt{x}, \sqrt{x}\} & \text{otherwise}; \end{cases}$  $\Lambda(\bar{x}, \bar{y}) = \{0\}$.

Furthermore, $\nabla\varphi_1(\bar{x}, \bar{y}) = (0, -2) \ne (0, 0)$. Hence RHS $= \{0\}$. Besides, LHS $= \hat{\partial}\mu(0) = \{0\}$. Thus (3.6) holds as an equality although the solution map $M(\cdot)$ does not admit any upper Lipschitzian selection at $(\bar{x}, \bar{y})$. We have seen that the upper Lipschitzian assumption is sufficient, but not necessary, for the equality assertion of Theorem 3.4.
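The closed forms in Example 3.7 can be cross-checked by brute force. A minimal sketch, assuming that the bounded grid $[-2, 2]$ suffices (the minimizers satisfy $|y| = \sqrt{x}$ for $x \ge 0$ and $y = 0$ for $x < 0$):

```python
import numpy as np

# Example 3.7: phi(x, y) = (x - y**2)**2 minimized over G(x) = R.
ys = np.linspace(-2.0, 2.0, 400001)

def mu(x):
    return float(np.min((x - ys**2) ** 2))

for x in (-0.5, -0.1, 0.0, 0.1, 0.5):
    print(x, mu(x), x**2 if x < 0 else 0.0)   # grid value vs. closed form
```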
The next example shows that (3.4) is weaker than (3.5).

Example 3.8. Let $X = \mathbb{R}^2$, $Y = \mathbb{R}^2$, and $\bar{x} = (0, 0)$, $\bar{y} = (0, 0)$. Consider the marginal function $\mu(\cdot)$ in (1.2) with $\varphi(x, y) = -y_2$, $y = (y_1, y_2) \in G(x)$, where

  $G(x) := \{y = (y_1, y_2) \in \mathbb{R}^2 \mid \varphi_1(x, y) = y_2 + y_1^4 x_1 + g(y_1) \le 0,\ \varphi_2(x, y) = y_2 - y_1^4 - x_2 \le 0,\ \varphi_3(x, y) = y_1^2 - 5 \le 0,\ \varphi_4(x, y) = -y_2 - 5 \le 0\}$,

where

  $g(y_1) = \begin{cases} y_1^8 \sin^4(2\pi/y_1) & \text{if } y_1 \ne 0 \\ 0 & \text{otherwise}. \end{cases}$

Then the gradients $\nabla\varphi_1(\bar{x}, \bar{y}) = (0, 0, 0, 1)$, $\nabla\varphi_2(\bar{x}, \bar{y}) = (0, -1, 0, 1)$, $\nabla\varphi_3(\bar{x}, \bar{y}) = (0, 0, 0, 0)$, $\nabla\varphi_4(\bar{x}, \bar{y}) = (0, 0, 0, -1)$ are not linearly independent; i.e., the qualification condition of Theorem 3.4 is violated. However, (3.4) is satisfied at $(\bar{x}, \bar{y})$; i.e., the results of Corollary 3.5 are applicable to this problem. It is easy to find that $\Lambda(\bar{x}, \bar{y}) = \{(t, 1 - t, 0, 0) \mid 0 \le t \le 1\}$. Thus we have an upper estimate for the Fréchet subdifferential of the value function:

  $\hat{\partial}\mu(\bar{x}) \subset \{0\} \times [-1, 0]$.

The next theorem follows from [6, Corollary 5.4].

Theorem 3.9 (limiting subgradients of value functions in smooth nonlinear programs in Asplund spaces). Let $M(\cdot)$ be the solution mapping defined by (1.3) with the constraint mapping $G(\cdot)$ defined by (3.1), where both spaces $X$ and $Y$ are Asplund. Take $\bar{x} \in \operatorname{dom} M$, $\bar{y} \in M(\bar{x})$, and suppose that the Mangasarian-Fromovitz constraint qualification (3.4) is satisfied at $(\bar{x}, \bar{y})$. Then one has the inclusions

  $\partial\mu(\bar{x}) \subset \bigcup_{\lambda \in \Lambda(\bar{x}, \bar{y})} \Big\{\nabla_x \varphi(\bar{x}, \bar{y}) + \sum_{i=1}^{m+r} \lambda_i \nabla_x \varphi_i(\bar{x}, \bar{y})\Big\}$,  (3.8)

  $\partial^\infty \mu(\bar{x}) \subset \bigcup_{\lambda \in \Lambda^\infty(\bar{x}, \bar{y})} \Big\{\sum_{i=1}^{m+r} \lambda_i \nabla_x \varphi_i(\bar{x}, \bar{y})\Big\}$,  (3.9)

where the set of multipliers $\Lambda(\bar{x}, \bar{y})$ is defined in (3.3) and where

  $\Lambda^\infty(\bar{x}, \bar{y}) := \Big\{\lambda \in \mathbb{R}^{m+r} \;\Big|\; \sum_{i=1}^{m+r} \lambda_i \nabla_y \varphi_i(\bar{x}, \bar{y}) = 0;\ \lambda_i \ge 0$ and $\lambda_i \varphi_i(\bar{x}, \bar{y}) = 0$ for $i = 1, \dots, m\Big\}$.

Moreover, (3.8) holds as an equality if $M(\cdot)$ admits a local upper Lipschitzian selection at $(\bar{x}, \bar{y})$.

As in Theorem 3.4, the upper Lipschitzian assumption of Theorem 3.9 is sufficient, but not necessary, to ensure the equality in the inclusion (3.8). Observe that $\partial^\infty\mu(\bar{x}) = \{0\}$ due to (3.9) if the $\varphi_i$ satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to $y$, i.e., when the full gradients of the $\varphi_i$ in (3.4) are replaced by $\nabla_y \varphi_i(\bar{x}, \bar{y})$. By the representation (2.5), the results obtained in Theorem 3.9 immediately imply an upper estimate for the Clarke subdifferential of the value function in smooth programming, which extends the well-known result of Gauvin and Dubeau [1, Theorem 5.3] established in finite dimensions.

4. Application to Gauvin-Dubeau's examples

Let us apply the results of Theorems 3.4 and 3.9 and Corollary 3.5 to compute or estimate the Fréchet, the limiting, and the Clarke subdifferentials of the value function in the "pathological" examples from Gauvin and Dubeau [2].

Example 4.1 (see [2, Example 2.1]). Let $X = Y = \mathbb{R}$. Consider the problem

  minimize $\varphi(x, y) = -y$ subject to $y \in G(x)$, $G(x) = \{y \in \mathbb{R} \mid \varphi_1(x, y) = g(y) - x \le 0\}$,

where

  $g(y) = \begin{cases} -(y + 1/2)^2 + 5/4 & \text{if } y \le 0 \\ e^{-y} & \text{otherwise}. \end{cases}$

We have $\mu(x) = \inf\{\varphi(x, y) = -y \mid y \in G(x)\}$. One can find that

  $G(x) = \begin{cases} \mathbb{R} & \text{if } x \ge 5/4 \\ (-\infty, -1/2 - \sqrt{5/4 - x}\,] \cup [-1/2 + \sqrt{5/4 - x}, +\infty) & \text{if } 1 \le x < 5/4 \\ (-\infty, -1/2 - \sqrt{5/4 - x}\,] \cup [-\ln x, +\infty) & \text{if } 0 < x < 1 \\ (-\infty, -1/2 - \sqrt{5/4 - x}\,] & \text{if } x \le 0; \end{cases}$

  $\mu(x) = \begin{cases} -\infty & \text{if } x > 0 \\ 1/2 + \sqrt{5/4 - x} & \text{otherwise}; \end{cases}$  $M(x) = \begin{cases} \emptyset & \text{if } x > 0 \\ \{y \in \mathbb{R} \mid y = -1/2 - \sqrt{5/4 - x}\} & \text{otherwise}. \end{cases}$

Let $\bar{x} = 0$ and $\bar{y} = -1/2 - \sqrt{5}/2$. Note that $\nabla\varphi_1(\bar{x}, \bar{y}) = (-1, \sqrt{5}) \ne (0, 0)$ and the solution map $M(\cdot)$ does not admit any upper Lipschitzian selection at $(0, -1/2 - \sqrt{5}/2)$. From (3.3) it follows that $\Lambda(\bar{x}, \bar{y}) = \{1/\sqrt{5}\}$. Hence, applying the results of Theorem 3.4 we obtain $\hat{\partial}\mu(\bar{x}) \subset \{-1/\sqrt{5}\}$. Similarly, using Theorem 3.9 instead of Theorem 3.4, we get $\partial\mu(\bar{x}) \subset \{-1/\sqrt{5}\}$.

Remark. A direct computation based on (2.1) and (2.2) gives $\hat{\partial}\mu(\bar{x}) = \partial\mu(\bar{x}) = \emptyset$.
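The case analysis of Example 4.1 for $x \le 0$ can be confirmed numerically; the grid bounds and tolerances below are ad hoc choices of this illustration.

```python
import numpy as np

def g(y):
    # Piecewise C^1 function of Example 4.1.
    return np.where(y <= 0, -(y + 0.5) ** 2 + 1.25, np.exp(-y))

ys = np.linspace(-30.0, 30.0, 3_000_001)

def mu(x):
    feas = ys[g(ys) <= x]                    # grid approximation of G(x)
    return np.inf if feas.size == 0 else -float(feas.max())

for x in (-1.0, -0.5, -0.1, 0.0):
    print(x, mu(x), 0.5 + np.sqrt(1.25 - x)) # columns agree for x <= 0
print(mu(0.25))  # about -30: for x > 0, G(x) is unbounded above, so mu(x) = -infinity
```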
Example 4.2 (see [2, Example 2.2]). Let $X = \mathbb{R}$, $Y = \mathbb{R}^2$. Consider the problem

  minimize $\varphi(x, y) = -y_2$ subject to $y = (y_1, y_2) \in G(x)$,

  $G(x) = \{y = (y_1, y_2) \in \mathbb{R}^2 \mid \varphi_1(x, y) = (y_1^2 + (y_2 - 1)^2 - 1/4)(y_1^2 + (y_2 + 1)^2 - 1) \le 0,\ \varphi_2(x, y) = y_1 - x = 0\}$, with $x \ge 0$.

We have

  $\mu(x) = \begin{cases} -1 - \sqrt{1/4 - x^2} & \text{if } 0 \le x \le 1/2 \\ 1 - \sqrt{1 - x^2} & \text{if } 1/2 < x \le 1 \\ +\infty & \text{if } x > 1; \end{cases}$

  $M(x) = \begin{cases} \{y = (y_1, y_2) \in G(x) \mid y_2 = 1 + \sqrt{1/4 - x^2}\} & \text{if } 0 \le x \le 1/2 \\ \{y = (y_1, y_2) \in G(x) \mid y_2 = -1 + \sqrt{1 - x^2}\} & \text{if } 1/2 < x \le 1 \\ \emptyset & \text{if } x > 1. \end{cases}$

For $\bar{x} := 1/2$ and $\bar{y} := (1/2, 1) \in M(\bar{x})$, we see that $\nabla\varphi_1(\bar{x}, \bar{y}) = (0, 13/4, 0)$ and $\nabla\varphi_2(\bar{x}, \bar{y}) = (-1, 1, 0)$ are linearly independent, and $\Lambda(1/2, (1/2, 1)) = \emptyset$. Hence, by Theorem 3.4 we obtain $\hat{\partial}\mu(\bar{x}) = \emptyset$. Similarly, using Theorem 3.9 instead of Theorem 3.4, we get $\partial\mu(\bar{x}) = \emptyset$. Taking into account the fact that $\partial^{CL}\mu(\bar{x}) = \operatorname{co}[\partial\mu(\bar{x}) + \partial^\infty\mu(\bar{x})]$, we obtain $\partial^{CL}\mu(\bar{x}) = \emptyset$.

For $\bar{x} := 1$ and $\bar{y} := (1, -1) \in M(\bar{x})$, we obtain $\hat{\partial}\mu(\bar{x}) = \partial\mu(\bar{x}) = \partial^{CL}\mu(\bar{x}) = \emptyset$.

Example 4.3 (see [2, Example 3.1]). Taking $X = Y = \mathbb{R}$, we consider the problem

  minimize $\varphi(x, y) = -y$ subject to $y \in G(x)$, $G(x) = \{y \in \mathbb{R} \mid \varphi_1(x, y) = y^3 - x \le 0\}$.

We have

  $G(x) = \{y \in \mathbb{R} \mid y \in (-\infty, \sqrt[3]{x}\,]\}$;  $\mu(x) = -\sqrt[3]{x}$;  $M(x) = \{y \in \mathbb{R} \mid y = \sqrt[3]{x}\}$.

For $\bar{x} := 0$ and $\bar{y} := 0$, we see that $\nabla\varphi_1(\bar{x}, \bar{y}) = (-1, 0)$ and $\Lambda(0, 0) = \emptyset$. Hence, applying Theorem 3.4 we obtain $\hat{\partial}\mu(\bar{x}) = \emptyset$. Similarly, using Theorem 3.9 instead of Theorem 3.4, we get $\partial\mu(\bar{x}) = \emptyset$. Taking into account the fact that $\partial^{CL}\mu(\bar{x}) = \operatorname{co}[\partial\mu(\bar{x}) + \partial^\infty\mu(\bar{x})]$, we obtain $\partial^{CL}\mu(\bar{x}) = \emptyset$.
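The emptiness of $\hat{\partial}\mu(\bar{x})$ in Example 4.3 is visible directly in (2.1): since $\mu(x) = -x^{1/3}$, the difference quotient equals $-x^{-2/3} - s$ for $x > 0$ and tends to $-\infty$ as $x \downarrow 0$ for every candidate $s$. A short numerical illustration:

```python
import numpy as np

mu = lambda x: -np.cbrt(x)
x = 10.0 ** -np.arange(1, 13)                # x -> 0+
for s in (-10.0, 0.0, 10.0):
    print(s, float(np.min((mu(x) - mu(0.0) - s * x) / np.abs(x))))
# each minimum is about -1e8 (roughly -x**(-2/3) at the smallest grid point),
# so no candidate s works and \hat\partial mu(0) is empty
```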
Example 4.4 (see [2, Example 3.2]). Taking $X = \mathbb{R}$, $Y = \mathbb{R}^2$, we consider the problem

  minimize $\varphi(x, y) = -y_2$ subject to $y = (y_1, y_2) \in G(x)$,

  $G(x) = \{y = (y_1, y_2) \in \mathbb{R}^2 \mid \varphi_1(x, y) = y_1^2 - 100 \le 0,\ \varphi_2(x, y) = g(y_1) - y_2 = 0,\ \varphi_3(x, y) = (y_1^8 - x) y_2 = 0\}$,

where

  $g(y_1) = \begin{cases} y_1^4 \cos(2\pi/y_1) & \text{if } y_1 \ne 0 \\ 0 & \text{otherwise}. \end{cases}$

One can find that

  $G(x) = \begin{cases} G(0) \cup \{(\pm\sqrt[8]{x}, g(\sqrt[8]{x}))\} & \text{if } 0 \le x \le 10^8 \\ \{(y_1, 0) \in \mathbb{R}^2 \mid y_1 = 0 \text{ or } y_1 = 4/(2k+1),\ k = 0, \pm 1, \pm 2, \dots\} & \text{otherwise}; \end{cases}$

  $\mu(x) = \begin{cases} \min\{0, -g(\sqrt[8]{x})\} & \text{if } 0 \le x \le 10^8 \\ 0 & \text{otherwise}; \end{cases}$  $M(x) = \begin{cases} \{(y_1, y_2) \in G(x) \mid y_2 = \max\{0, g(\sqrt[8]{x})\}\} & \text{if } 0 \le x \le 10^8 \\ \{(y_1, 0) \in \mathbb{R}^2 \mid y_1 = 0 \text{ or } y_1 = 4/(2k+1),\ k = 0, \pm 1, \pm 2, \dots\} & \text{otherwise}. \end{cases}$

Take $\bar{x} := 0$ and $\bar{y} := (4/(2k+1), 0)$, $k = 0, \pm 1, \pm 2, \dots$. For $k = 2n$, $n = 0, \pm 1, \pm 2, \dots$, note that

  $\nabla\varphi_1(\bar{x}, \bar{y}) = \Big(0, \dfrac{8}{4n+1}, 0\Big)$, $\nabla\varphi_2(\bar{x}, \bar{y}) = \Big(0, \dfrac{32\pi}{(4n+1)^2}, -1\Big)$, $\nabla\varphi_3(\bar{x}, \bar{y}) = \Big(0, 0, \Big(\dfrac{4}{4n+1}\Big)^8\Big)$,

so the gradients of the equality constraints are linearly independent and the qualification condition (3.4) holds at $(\bar{x}, \bar{y})$, while the solution map $M(\cdot)$ does not admit any upper Lipschitzian selection at $(0, (4/(4n+1), 0))$. From (3.3) it follows that

  $\Lambda(0, (4/(4n+1), 0)) = \{(0, 0, ((4n+1)/4)^8)\}$.

Hence, applying the results of Corollary 3.5 we obtain $\hat{\partial}\mu(\bar{x}) \subset \{0\}$. Similarly, for $k = 2n + 1$, $n = 0, \pm 1, \pm 2, \dots$, we obtain $\hat{\partial}\mu(\bar{x}) \subset \{0\}$. Using (3.8) and (3.9) in Theorem 3.9 and computing similarly we get $\partial\mu(\bar{x}) \subset \{0\}$ and $\partial^\infty\mu(\bar{x}) \subset \{0\}$. Taking into account the fact that $\partial^{CL}\mu(\bar{x}) = \operatorname{co}[\partial\mu(\bar{x}) + \partial^\infty\mu(\bar{x})]$, we obtain $\partial^{CL}\mu(\bar{x}) \subset \{0\}$.

Remark. Applying (2.1) and (2.2) again, we see that $0$ belongs neither to $\hat{\partial}\mu(\bar{x})$ nor to $\partial\mu(\bar{x})$; i.e., $\hat{\partial}\mu(\bar{x}) = \partial\mu(\bar{x}) = \emptyset$. This implies $\partial^{CL}\mu(\bar{x}) = \emptyset$. Therefore, (3.6) and (3.8) hold as strict inclusions. The inclusion (3.9) holds as an equality (i.e., $\partial^\infty\mu(\bar{x}) = \{0\}$) because $\varphi_i$, $i = 1, 2, 3$, satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to $y$.

Example 4.5 (see [2, Example 3.3]). Let $X = \mathbb{R}^2$, $Y = \mathbb{R}^2$. Consider the problem

  minimize $\varphi(x, y) = -y_2$ subject to $y = (y_1, y_2) \in G(x)$,

where

  $G(x) = \{y = (y_1, y_2) \in \mathbb{R}^2 \mid \varphi_1(x, y) = y_2 + y_1^4 x_1 + g(y_1) \le 0,\ \varphi_2(x, y) = y_2 - y_1^4 - x_2 \le 0,\ \varphi_3(x, y) = y_1^2 - 5 \le 0,\ \varphi_4(x, y) = -y_2 - 5 \le 0\}$,

  $g(y_1) = \begin{cases} y_1^8 \sin^4(2\pi/y_1) & \text{if } y_1 \ne 0 \\ 0 & \text{otherwise}. \end{cases}$

For $\bar{x} := (0, 0)$, the optimal solution set is

  $M(\bar{x}) = \{(0, 0)\} \cup \{(2/k, 0) \mid k = \pm 1, \pm 2, \dots\}$.

For $\bar{y} := (2/k, 0)$, $k = \pm 1, \pm 2, \dots$, we see that

  $\nabla\varphi_1(\bar{x}, \bar{y}) = \Big(\dfrac{16}{k^4}, 0, 0, 1\Big)$, $\nabla\varphi_2(\bar{x}, \bar{y}) = \Big(0, -1, -\dfrac{32}{k^3}, 1\Big)$, $\nabla\varphi_3(\bar{x}, \bar{y}) = \Big(0, 0, \dfrac{4}{k}, 0\Big)$, $\nabla\varphi_4(\bar{x}, \bar{y}) = (0, 0, 0, -1)$

are linearly independent. Besides, $\Lambda(\bar{x}, (2/k, 0)) = \{(1, 0, 0, 0)\}$. It follows from Theorem 3.4 that $\hat{\partial}\mu(\bar{x}) \subset \{(16/k^4, 0)\}$. Using (3.8) and (3.9) and performing a similar computation, we get $\partial\mu(\bar{x}) \subset \{(16/k^4, 0)\}$ and $\partial^\infty\mu(\bar{x}) \subset \{(0, 0)\}$. Since $\partial^{CL}\mu(\bar{x}) = \operatorname{co}[\partial\mu(\bar{x}) + \partial^\infty\mu(\bar{x})]$, we have $\partial^{CL}\mu(\bar{x}) \subset \{(16/k^4, 0)\}$.

For $\bar{y} := (0, 0)$, we see that $\nabla\varphi_1(\bar{x}, \bar{y}) = (0, 0, 0, 1)$, $\nabla\varphi_2(\bar{x}, \bar{y}) = (0, -1, 0, 1)$, $\nabla\varphi_3(\bar{x}, \bar{y}) = (0, 0, 0, 0)$, $\nabla\varphi_4(\bar{x}, \bar{y}) = (0, 0, 0, -1)$, so (3.5) fails while (3.4) is satisfied at $(\bar{x}, \bar{y})$; i.e., the results of Corollary 3.5 are applicable to this problem. It is easy to find that $\Lambda(\bar{x}, (0, 0)) = \{(t, 1 - t, 0, 0) \mid 0 \le t \le 1\}$. Hence, by Corollary 3.5 we get $\hat{\partial}\mu(\bar{x}) \subset \{0\} \times [-1, 0]$. Using (3.8) and (3.9) we have $\partial\mu(\bar{x}) \subset \{0\} \times [-1, 0]$ and $\partial^\infty\mu(\bar{x}) \subset \{(0, 0)\}$. Since $\partial^{CL}\mu(\bar{x}) = \operatorname{co}[\partial\mu(\bar{x}) + \partial^\infty\mu(\bar{x})]$, we get $\partial^{CL}\mu(\bar{x}) \subset \{0\} \times [-1, 0]$.

Remark. The inclusion (3.9) holds as an equality (i.e., $\partial^\infty\mu(\bar{x}) = \{(0, 0)\}$) because $\varphi_i$, $i = 1, 2, 3, 4$, satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to $y$.

Example 4.6 (see [2, Example 3.4]). Taking $X = \mathbb{R}$, $Y = \mathbb{R}^2$, we consider the problem

  minimize $\varphi(x, y) = -y_2$ subject to $y = (y_1, y_2) \in G(x)$,

  $G(x) = \{y = (y_1, y_2) \in \mathbb{R}^2 \mid \varphi_1(x, y) = y_2 - y_1^2 \le 0,\ \varphi_2(x, y) = y_2 + y_1^2 - x \le 0\}$.

It holds that

  $\mu(x) = \begin{cases} -x & \text{if } x \le 0 \\ -x/2 & \text{otherwise}; \end{cases}$  $M(x) = \begin{cases} \{y = (y_1, y_2) \in G(x) \mid y_2 = x/2\} & \text{if } x > 0 \\ \{y = (y_1, y_2) \in G(x) \mid y_2 = x\} & \text{otherwise}. \end{cases}$

For $\bar{x} := 0$ and $\bar{y} := (0, 0)$, we see that $\nabla\varphi_1(\bar{x}, \bar{y}) = (0, 0, 1)$ and $\nabla\varphi_2(\bar{x}, \bar{y}) = (-1, 0, 1)$ are linearly independent, and the solution map $M(\cdot)$ does not admit any upper Lipschitzian selection at $(\bar{x}, \bar{y})$. From (3.3) it follows that $\Lambda(0, (0, 0)) = \{(t, 1 - t) \mid 0 \le t \le 1\}$. Hence, by Theorem 3.4 we get $\hat{\partial}\mu(\bar{x}) \subset [-1, 0]$. Using (3.8) and (3.9) we obtain $\partial\mu(\bar{x}) \subset [-1, 0]$ and $\partial^\infty\mu(\bar{x}) \subset \{0\}$. Since $\partial^{CL}\mu(\bar{x}) = \operatorname{co}[\partial\mu(\bar{x}) + \partial^\infty\mu(\bar{x})]$, we get $\partial^{CL}\mu(\bar{x}) \subset [-1, 0]$.

Remark. In this example, the inclusions (3.6) and (3.8) are strict: a direct computation based on (2.1) and (2.2) gives $\hat{\partial}\mu(\bar{x}) = \partial\mu(\bar{x}) = [-1, -1/2]$. The inclusion (3.9) holds as an equality (i.e., $\partial^\infty\mu(\bar{x}) = \{0\}$) because $\varphi_1$ and $\varphi_2$ satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to $y$.
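Example 4.6 is small enough to verify by brute force. The sketch below (grid sizes chosen ad hoc) confirms the closed form of $\mu$ and the strictness of inclusion (3.6):

```python
import numpy as np

# Example 4.6: mu(x) = -max over y1 of min(y1**2, x - y1**2), on a grid.
y1 = np.linspace(-2.0, 2.0, 400001)

def mu_grid(x):
    return -float(np.max(np.minimum(y1 ** 2, x - y1 ** 2)))

for x in (-1.0, -0.25, 0.0, 0.25, 1.0):
    print(x, mu_grid(x), -x if x <= 0 else -x / 2)   # columns agree

# Probing (2.1) with the closed form recovers the Frechet subdifferential
# [-1, -1/2] at 0, strictly smaller than the estimate [-1, 0] from (3.6).
mu = lambda x: -x if x <= 0 else -x / 2
xs = [t * 10.0 ** -k for k in range(1, 9) for t in (-1.0, 1.0)]
for s in (-1.25, -1.0, -0.75, -0.5, -0.25):
    print(s, all((mu(x) - mu(0) - s * x) / abs(x) >= -1e-12 for x in xs))
# True exactly for -1 <= s <= -1/2
```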
Acknowledgement. I would like to thank Prof. N. D. Yen and Dr. N. Q. Huy for helpful discussions on the topic and useful remarks.

References

[1] J. Gauvin, F. Dubeau, Differential properties of the marginal function in mathematical programming, Math. Program. Study 19 (1982), 101-119.

[2] J. Gauvin, F. Dubeau, Some examples and counterexamples for the stability analysis of nonlinear programming problems, Math. Program. Study 21 (1984), 69-78.

[3] J. Gauvin, J. W. Tolle, Differential stability in nonlinear programming, SIAM J. Control Optim. 15 (1977), 294-311.

[4] B. S. Mordukhovich, Variational Analysis and Generalized Differentiation, I: Basic Theory, Springer, Berlin, 2006.

[5] B. S. Mordukhovich, Variational Analysis and Generalized Differentiation, II: Applications, Springer, Berlin, 2006.

[6] B. S. Mordukhovich, N. M. Nam, N. D. Yen, Subgradients of marginal functions in parametric mathematical programming, to appear in Mathematical Programming (2007).

Summary (translated from the Vietnamese). Limiting subdifferential of the optimal value function in some "pathological" smooth programming problems. In this paper, the results of Mordukhovich, Nam and Yen [6] on differential stability in parametric programming are used to derive upper estimates for the limiting subdifferential of the optimal value function in some "pathological" smooth programming problems proposed by Gauvin and Dubeau [2].

(a) Faculty of Mathematics, Dong Thap Pedagogical University (Trường Đại học Sư phạm Đồng Tháp).
