
J Optim Theory Appl. DOI 10.1007/s10957-015-0728-6

Second-Order Optimality Conditions with the Envelope-Like Effect for Set-Valued Optimization

P. Q. Khanh · N. M. Tung

Received: 18 August 2014 / Accepted: 13 March 2015
© Springer Science+Business Media New York 2015

Abstract  We consider Karush–Kuhn–Tucker second-order optimality conditions for nonsmooth set-valued optimization with attention to the envelope-like effect. To analyse the critical feasible directions, which produce this phenomenon, we use the contingent derivatives, the adjacent derivatives and the corresponding asymptotic derivatives, since directions are explicitly involved in these kinds of derivatives. To pursue strong multiplier rules, we impose cone-Aubin conditions to deal with the objective and constraint maps separately. In this way, we can invoke constraint qualifications of the Kurcyusz–Robinson–Zowe type. To our knowledge, some of the results are new; they will be indicated explicitly. The paper also discusses improvements or extensions of known results.

Keywords  Optimality condition · Second-order contingent derivative · Kurcyusz–Robinson–Zowe constraint qualification · Weak minimizer · Firm minimizer

Mathematics Subject Classification  90C29 · 49J52 · 90C46 · 90C48

Communicated by Jafar Zafarani.

P. Q. Khanh (pqkhanh@hcmiu.edu.vn), Department of Mathematics, International University, Vietnam National University Ho Chi Minh City, Linh Trung, Thu Duc, Ho Chi Minh City, Vietnam
N. M. Tung (nmtung@hcmus.edu.vn), Department of Mathematics and Computing, University of Science, Vietnam National University Ho Chi Minh City, Ho Chi Minh City, Vietnam

1 Introduction

In recent decades, set-valued optimization has received much attention from researchers. Set-valued optimization is an expanding branch of applied mathematics that deals with optimization problems where the objective map and the constraint maps are set-valued. Until now, many derivative-like notions have been proposed and applied to investigate optimality conditions in nonsmooth problems (see [1–5] for first-order conditions, [6–10] for higher-order conditions and the references therein). In the pathbreaking paper [2], Corley employed the contingent and circatangent derivatives to establish a first-order Fritz John necessary optimality condition. By using contingent epiderivatives, Götz and Jahn in [4] obtained a first-order Karush–Kuhn–Tucker (KKT) necessary optimality condition. With the well-known Dubovitskii–Milyutin approach, Isac and Khan in [5] established a Lagrange multiplier rule for set-valued optimization with generalized inequality constraints. Another fruitful approach in set-valued optimization is the dual space approach initiated by Mordukhovich (see [11,12]). Recently, second-order optimality conditions for scalar and vector optimization problems have been intensively developed, because they refine first-order conditions by second-order information, which is very helpful for recognizing optimal solutions as well as for designing numerical algorithms for computing them. We observe, in most related contributions in the literature, that the core of second-order necessary optimality conditions is a direct extension of the classical result in calculus that the second derivative of the objective map (or the Lagrange map in constrained problems) at minimizers is nonnegative. Kawasaki in [13] discovered that the second derivative of the Lagrangian at the minimal solution may be strictly negative in certain critical directions. He called this phenomenon the envelope-like effect.
The Kawasaki result was developed in [14,15] for $C^2$ scalar programming, in [16–18] for nonsmooth multiobjective programming, and in [19] for infinite-dimensional nonsmooth optimization. However, for set-valued optimization, we observe only paper [20] dealing with the envelope-like effect. Let us mention first some papers on second-order optimality conditions for set-valued optimization (not discussing the envelope-like effect). In [3], Durea employed second-order contingent derivatives to establish such conditions. In [6], Jahn et al. proposed a second-order contingent epiderivative and a generalized second-order contingent epiderivative and applied them to obtain optimality conditions in the primal form. Following the Dubovitskii–Milyutin approach, in [21], Khan and Tammer proved second-order optimality conditions in terms of second-order asymptotic contingent derivatives. Zhu et al. [10] proposed and used the second-order composed contingent derivative to establish dual second-order conditions. Higher-order necessary and sufficient optimality conditions for set-valued optimization can be seen in [7,9]. In all the above results concerning second-order necessary optimality conditions, the envelope-like effect was not considered. In [22], Studniarski introduced the concept of a higher-order local strict (known also as firm or isolated) minimizer for scalar programming and established necessary conditions and sufficient conditions for set-constrained minimization in infinite-dimensional spaces. This notion was extended to vector optimization in [23] and to set-valued vector optimization problems in [24], where conditions for local firm minimizers (of order 1) were obtained by using various generalized derivatives of set-valued maps. Li et al. in [8] established a primal second-order sufficient optimality condition for local firm minimizers of order 2 for set-valued optimization with inclusion constraints. Next, it is worth noticing that [20] considered the envelope-like effect only for Fritz John second-order conditions, not KKT ones. Motivated by the above arguments, in this paper we consider second-order KKT optimality conditions for set-valued optimization under set constraints and generalized inequality constraints with attention to the envelope-like effect. Furthermore, in [20], second-order approximations were used (as generalized derivatives), which do not explicitly involve directions, while the envelope-like effect occurs only in certain critical directions. Hence, we have chosen the second-order contingent and asymptotic contingent derivatives, since they have close relations to useful kinds of second-order tangent sets, which express approximating directions of the object under consideration. Moreover, in many papers, such as [3,7,9,10,16–19], the disjunction map, composed from the objective and the constraints, is often used in necessary conditions. Under assumptions on cone-Aubin properties, we get some stronger results involving the objective map and the constraint map separately. We obtain second-order KKT multiplier rules under second-order qualification conditions of the Kurcyusz–Robinson–Zowe (KRZ) type. We also compare these qualification conditions and some other existing ones. In our second-order sufficient conditions for firm minimizers, no convexity assumption is imposed. The organization of the paper is as follows. In Sect. 2, we collect definitions and preliminary facts for our later use. Section 3 is devoted to second-order necessary optimality conditions in the primal and dual forms. In Sect. 4, we discuss second-order sufficient optimality conditions without any convexity assumptions.
2 Preliminaries

Throughout the paper, if not otherwise stated, $X, Y$ and $Z$ will denote real Banach spaces. By $\mathbb{N}$, $\mathbb{R}^n$ and $\mathbb{R}^n_+$, we denote the set of the natural numbers, the $n$-dimensional Euclidean space and its nonnegative orthant, respectively (resp.). $B_X$ denotes the open unit ball of $X$ and $B_X(x, r)$ the open ball centred at $x$ with radius $r$ (similarly for other spaces). For $A \subseteq X$, $\operatorname{int}A$, $\operatorname{cl}A$, $\operatorname{bd}A$ and $\operatorname{conv}A$ stand for its interior, closure, boundary and convex hull, resp. The cone generated by $A$ is $\operatorname{cone}A := \{\lambda x : \lambda \ge 0,\ x \in A\}$. $X^*$ stands for the dual space of $X$ and $\langle\cdot,\cdot\rangle$ for the canonical pairing of any pair of dual spaces. For $t \in \mathbb{R}$, $t \downarrow 0$ means $t > 0$ and $t \to 0$.

Let $C \subseteq Y$ be a closed and convex cone, and let $Y$ be partially ordered by $C$. Given a set-valued map $F : X \rightrightarrows Y$, the domain, graph and epigraph of $F$ are $\operatorname{dom}F := \{x \in X : F(x) \ne \emptyset\}$, $\operatorname{gph}F := \{(x, y) \in X \times Y : y \in F(x)\}$, $\operatorname{epi}F := \{(x, y) \in X \times Y : y \in F(x) + C\}$, resp. $F(A) := \bigcup_{x \in A} F(x)$, and the profile/epigraphic map $F_+ : X \rightrightarrows Y$ is defined by $F_+(x) := F(x) + C$ for $x \in \operatorname{dom}F$. $F$ is said to be upper semicontinuous (in short, u.s.c.) at $x_0 \in X$ iff, for any neighbourhood $V$ of $F(x_0)$, there exists a neighbourhood $U$ of $x_0$ such that $F(U) \subseteq V$. $F$ is called a $C$-function iff, for all $x_1, x_2 \in X$ and $\lambda \in [0, 1]$, $\lambda F(x_1) + (1 - \lambda)F(x_2) \subseteq F(\lambda x_1 + (1 - \lambda)x_2) + C$. When $C$ contains (or equals, or is contained in) the nonnegative orthant, then "C-function" is specialized as "C-convex" (or "convex" or "strictly C-convex"). The terms "C-concave", "concave" and "strictly C-concave" are similarly defined. Clearly, $F$ is a $C$-function if and only if $\operatorname{gph}F_+$ is convex. $F$ is said to be $C$-Aubin at $(x_0, y_0) \in \operatorname{gph}F$ (see [24,25]) iff there exist neighbourhoods $U$ of $x_0$, $V$ of $y_0$, and $L > 0$ such that
$$F(x) \cap V \subseteq F(x') + C + L\|x - x'\|\operatorname{cl}B_Y, \quad \forall x, x' \in U.$$
If $C = \{0\}$, this is the well-known Aubin property. We recall notions of tangency and corresponding generalized derivatives.

Definition 2.1  Let $M \subseteq X$ and $x_0, u \in X$.

(i) The contingent cone (interior tangent cone, resp.) of $M$ at $x_0$ is
$$T(M, x_0) := \{u \in X : \exists t_n \downarrow 0,\ \exists u_n \to u,\ \forall n,\ x_0 + t_n u_n \in M\},$$
$$IT(M, x_0) := \{u \in X : \forall t_n \downarrow 0,\ \forall u_n \to u,\ \forall n \text{ large},\ x_0 + t_n u_n \in M\}.$$

(ii) The second-order contingent set (resp., adjacent set and interior set) of $M$ at $x_0$ in direction $u$ is
$$T^2(M, x_0, u) := \{w \in X : \exists t_n \downarrow 0,\ \exists w_n \to w,\ \forall n \in \mathbb{N},\ x_0 + t_n u + \tfrac12 t_n^2 w_n \in M\},$$
$$A^2(M, x_0, u) := \{w \in X : \forall t_n \downarrow 0,\ \exists w_n \to w,\ \forall n,\ x_0 + t_n u + \tfrac12 t_n^2 w_n \in M\},$$
$$IT^2(M, x_0, u) := \{w \in X : \forall t_n \downarrow 0,\ \forall w_n \to w,\ \forall n \text{ large},\ x_0 + t_n u + \tfrac12 t_n^2 w_n \in M\}.$$

(iii) The asymptotic second-order tangent cone (resp., adjacent cone and interior cone) of $M$ at $x_0$ in direction $u$ is
$$T''(M, x_0, u) := \{w \in X : \exists (t_n, r_n) \downarrow (0, 0),\ t_n r_n^{-1} \downarrow 0,\ \exists w_n \to w,\ \forall n \in \mathbb{N},\ x_0 + t_n u + \tfrac12 t_n r_n w_n \in M\},$$
$$A''(M, x_0, u) := \{w \in X : \forall (t_n, r_n) \downarrow (0, 0),\ t_n r_n^{-1} \downarrow 0,\ \exists w_n \to w,\ \forall n \in \mathbb{N},\ x_0 + t_n u + \tfrac12 t_n r_n w_n \in M\},$$
$$IT''(M, x_0, u) := \{w \in X : \forall (t_n, r_n) \downarrow (0, 0),\ t_n r_n^{-1} \downarrow 0,\ \forall w_n \to w,\ \forall n \text{ large},\ x_0 + t_n u + \tfrac12 t_n r_n w_n \in M\}.$$

$M \subseteq X$ is called second-order derivable (resp., asymptotic derivable) at $(x_0, u)$ iff $T^2(M, x_0, u) = A^2(M, x_0, u)$ (resp., $T''(M, x_0, u) = A''(M, x_0, u)$).

Note that, if $x_0 \notin \operatorname{cl}M$, then all the above tangent sets are empty, and if $u \notin T(M, x_0)$, then all the above second-order tangent sets are empty. Hence, we always assume conditions such as $x_0 \in \operatorname{cl}M$ and $u \in T(M, x_0)$.
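As a small illustration of Definition 2.1 (this example is not taken from the paper; it is supplied here for concreteness), consider $M = \{(x, y) \in \mathbb{R}^2 : y \ge x^2\}$, $x_0 = (0, 0)$ and $u = (1, 0) \in T(M, x_0)$. Dividing the defining inequality by $t$ and letting $t \downarrow 0$ gives $T(M, x_0) = \{(u_1, u_2) : u_2 \ge 0\}$. For the second-order set, $x_0 + t u + \tfrac12 t^2 w \in M$ means $\tfrac12 t^2 w_2 \ge (t + \tfrac12 t^2 w_1)^2$, i.e. $w_2 \ge 2 + 2 t w_1 + \tfrac12 t^2 w_1^2$, so
$$T^2(M, x_0, u) = \{(w_1, w_2) : w_2 \ge 2\},$$
while along the asymptotic scale $t_n r_n$ (with $t_n / r_n \to 0$) the quadratic term vanishes and
$$T''(M, x_0, u) = \{(w_1, w_2) : w_2 \ge 0\} = T(T(M, x_0), u).$$
In particular, $0 \notin T^2(M, x_0, u)$, although $T^2(M, x_0, u) \subseteq T''(M, x_0, u)$; this gap between the second-order set and the first-order cone is what produces the envelope-like effect in the necessary conditions below.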
Some known properties of second-order tangent sets used later are collected in the following (see more in [14,16,17,19,26]).

Proposition 2.1  Let $M \subseteq X$ and $x_0, u \in X$.

(i) $T^2(M, x_0, 0) = T''(M, x_0, 0) = T(M, x_0)$.
(ii) If $X$ is reflexive and $u \in T(M, x_0)$, then either $T^2(M, x_0, u)$ or $T''(M, x_0, u)$ is nonempty.

Let, in addition, $M$ be convex and $u \in T(M, x_0)$. Then, the following assertions hold.

(iii) If $T^2(M, x_0, u) \ne \emptyset$, then $T^2(M, x_0, u) \subseteq T''(M, x_0, u) = T(T(M, x_0), u)$. Additionally, if $0 \in T^2(M, x_0, u)$, then $T^2(M, x_0, u) = T''(M, x_0, u)$.
(iv) If $A^2(M, x_0, u) \ne \emptyset$, then $\operatorname{cl}IT^2(M, x_0, u) = A^2(M, x_0, u)$ and $A^2(M, x_0, u) + T(T(M, x_0), u) \subseteq A^2(M, x_0, u)$.
(v) If $A''(M, x_0, u) \ne \emptyset$, then $\operatorname{cl}IT''(M, x_0, u) = A''(M, x_0, u)$ and $A''(M, x_0, u) + T(T(M, x_0), u) \subseteq A''(M, x_0, u)$.

Definition 2.2  Let $F : X \rightrightarrows Y$, $(x_0, y_0) \in \operatorname{gph}F$ and $(u, v) \in X \times Y$.

(i) The contingent derivative $DF(x_0, y_0)$ of $F$ at $(x_0, y_0)$ is defined by $\operatorname{gph}DF(x_0, y_0) := T(\operatorname{gph}F, (x_0, y_0))$.
(ii) The second-order contingent derivative $D^2F(x_0, y_0, u, v)$ of $F$ at $(x_0, y_0)$ in direction $(u, v)$ is defined by $\operatorname{gph}D^2F(x_0, y_0, u, v) := T^2(\operatorname{gph}F, (x_0, y_0), (u, v))$. The adjacent derivative $D^{(2)}F(x_0, y_0, u, v)$ is similar, with $A^2$ replacing $T^2$.
(iii) The second-order asymptotic contingent derivative $D''F(x_0, y_0, u, v)$ of $F$ at $(x_0, y_0)$ in direction $(u, v)$ is defined by $\operatorname{gph}D''F(x_0, y_0, u, v) := T''(\operatorname{gph}F, (x_0, y_0), (u, v))$. The adjacent derivative $D^{('')}F(x_0, y_0, u, v)$ is similar, with $A''$ replacing $T''$.
(iv) The second-order composed contingent derivative $D^{c(2)}F(x_0, y_0, u, v)$ of $F$ at $(x_0, y_0)$ in direction $(u, v)$ is defined by $\operatorname{gph}D^{c(2)}F(x_0, y_0, u, v) := T(T(\operatorname{gph}F, (x_0, y_0)), (u, v))$.

Remark 2.1  (i) If $X \times Y$ is reflexive and $(u, v) \in T(\operatorname{gph}F, (x_0, y_0))$, then, by Proposition 2.1(ii), $T^2(\operatorname{gph}F, (x_0, y_0), (u, v)) \cup T''(\operatorname{gph}F, (x_0, y_0), (u, v)) \ne \emptyset$. Hence, $D^2F(x_0, y_0, u, v)(x) \cup D''F(x_0, y_0, u, v)(x) \ne \emptyset$ for all $x \in X$.
(ii) By Proposition 2.1(iii), if $F$ is a $C$-function and $T^2(\operatorname{gph}F_+, (x_0, y_0), (u, v)) \ne \emptyset$, then
$$T^2(\operatorname{gph}F_+, (x_0, y_0), (u, v)) \subseteq T''(\operatorname{gph}F_+, (x_0, y_0), (u, v)) = T(T(\operatorname{gph}F_+, (x_0, y_0)), (u, v)).$$
Therefore, for all $x \in X$, $D^2F_+(x_0, y_0, u, v)(x) \subseteq D''F_+(x_0, y_0, u, v)(x) = D^{c(2)}F_+(x_0, y_0, u, v)(x)$. In addition, if $0 \in D^2F_+(x_0, y_0, u, v)(0)$, then $D^2F_+(x_0, y_0, u, v)(x) = D''F_+(x_0, y_0, u, v)(x) = D^{c(2)}F_+(x_0, y_0, u, v)(x)$. (Note that the notion of a $C$-function is applied only here for comparing derivatives. Later, we do not assume that a map is a $C$-function.)
(iii) Since $T''(\operatorname{gph}F_+, (x_0, y_0), (u, v))$ is a cone, the second-order asymptotic contingent derivative is strictly positive homogeneous, i.e. $D''F_+(x_0, y_0, u, v)(t x) = t D''F_+(x_0, y_0, u, v)(x)$ for all $t > 0$.

3 Second-Order Necessary Optimality Conditions

In this section, let $C \subseteq Y$ be a closed and convex cone, which defines a partial order on $Y$ ($C$ is not necessarily pointed). Let $D$ be a convex cone in $Z$. Our set-valued vector optimization problem is
$$(\mathrm{P})\qquad \operatorname{Min}_C F(x) \quad \text{s.t.}\quad x \in S,\ \ G(x) \cap (-D) \ne \emptyset,$$
where $F : X \rightrightarrows Y$ and $G : X \rightrightarrows Z$ are nonempty-valued and $S$ is a nonempty subset of $X$. Let $\Omega := \{x \in X : x \in S,\ G(x) \cap (-D) \ne \emptyset\}$, $D(z_0) := \operatorname{cone}(D + z_0)$, and $(F, G)(x) := F(x) \times G(x)$. The following minimizer notions of vector optimization are discussed in this paper.

Definition 3.1  Let $x_0 \in \Omega$, $(x_0, y_0) \in \operatorname{gph}F$, and $m \in \mathbb{N}$.

(i) Supposing $\operatorname{int}C \ne \emptyset$, a pair $(x_0, y_0)$ is said to be a local weak minimizer of (P), denoted by $(x_0, y_0) \in \mathrm{LWMin(P)}$, iff there exists a neighbourhood $U$ of $x_0$ such that
$$(F(x) - y_0) \cap (-\operatorname{int}C) = \emptyset, \quad \forall x \in \Omega \cap U.$$
(ii) ([24,27]) A pair $(x_0, y_0)$ is called a local firm minimizer of order $m$ of (P), denoted by $(x_0, y_0) \in \mathrm{LFMin(P}, m)$, iff (a) $y_0 \in$
StrC F(x0 ) (i.e (F(x0 ) − y0 ) ∩ (−C\{0}) = ∅); (b) there exist a neighbourhood U of x0 and α > such that (F(x) + C) ∩ B(y0 , α x − x0 m ) = ∅, ∀x ∈ ∩ U \{x0 } We first establish a second-order necessary condition for LWMin(P) in a primal form Proposition 3.1 Let (x0 , y0 ) ∈ LWMin(P) and z ∈ G(x0 ) ∩ (−D) Then, for all u ∈ X , v ∈ D F+ (x0 , y0 ) (u) ∩ (−bdC), and w ∈ DG + (x0 , z )(u) ∩ (−clD(z )), one has (i) D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(x) ∩ I T (−C, v) × I T (−D, z , w) = ∅, ∀x ∈ I T (S, x0 , u); (ii) D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(x) ∩ I T (−C, v) × I T (−D, z , w) = ∅, ∀x ∈ I T (S, x0 , u) Proof We prove only part (i) because the proof of (ii) is similar Since (x0 , y0 ) ∈ LWMin(P), ∃U (a neighbourhood of x0 ), ∀x ∈ ∩ U , (F(x) − y0 ) ∩ (−intC) = ∅ Suppose to the contrary the existence of x ∈ I T (S, x0 , u) and (y, z) ∈ Y × Z such that there are tn ↓ 0, xn → x, and (yn , z n ) → (y, z), with y ∈ I T (−C, v) and z ∈ I T (−D, z , w), such that 1 y0 + tn v + tn2 yn ∈ F(x0 + tn u + tn2 xn ) + C, 2 123 (1) J Optim Theory Appl 1 z + tn w + tn2 z n ∈ G(x0 + tn u + tn2 xn ) + D 2 (2) Since x ∈ I T (S, x0 , u), for the preceding sequences tn and xn , there exists n ∈ N such that x0 +tn u + tn2 xn ∈ S for all n ≥ n Moreover, because z ∈ I T (−D, z , w), for the given tn and z n , there exists n ∈ N such that z + tn w + tn2 z n ∈ −D for all n ≥ n From (2), one has (G(x0 + tn u + tn2 xn ) +D) ∩ (−D) = ∅ for all n ≥ n , 1 and then, G(x0 + tn u + tn2 xn ) ∩ (−D) = ∅ Hence, x0 + tn u + tn2 xn ∈ for all 2 n ≥ max{n , n } Since I T (−C, v) = I T (−intC, v) and y ∈ I T (−C, v), for the preceding sequences tn and yn , there exists n ∈ N such that tn v + tn2 yn ∈ −intC for 2 all n ≥ n From (1), one has (F(x0 + tn u + tn xn ) − y0 ) ∩ (−intC) = ∅, ∀n ≥ n Then, we have, for all n ≥ max{n , n , n }, x0 + tn u + tn2 xn ∈ and (F(x0 + tn u + 2 tn xn ) − y0 ) ∩ (−intC) = ∅, a contradiction because (x0 , y0 ) ∈ LWMin(P) Under additional Aubin property assumptions for F and G, we obtain the following primal second-order necessary condition involving separately derivatives of F and G Proposition 3.2 Let (x0 , y0 ) ∈ LWMin(P) and z ∈ G(x0 ) ∩ (−D) Assume that F+ is C-Aubin at (x0 , y0 ) and G + is D-Aubin at (x0 , z ) Then, for all u ∈ X , v ∈ D F+ (x0 , y0 )(u) ∩(−bdC), and w ∈ DG + (x0 , z ) (u) ∩ (−clD(z )), (i) for all x ∈ A2 (S, x0 , u), D F+ (x0 , y0 , u, v)(x) × D (2) G + (x0 , z , u, w)(x) ∩ I T (−C, v) × I T (−D, z , w) = ∅; (ii) for all x ∈ A (S, x0 , u), D F+ (x0 , y0 , u, v)(x) × D ( ) G + (x0 , z , u, w)(x) ∩ I T (−C, v) × I T (−D, z , w) = ∅ Proof By reasons of similarity, we prove only part (i) There exists a neighbourhood U of x0 such that (F(x) − y0 ) ∩ (−intC) = ∅, ∀x ∈ ∩ U We define := {x ∈ X : G(x) ∩ (−D) = ∅} and claim that, for all x ∈ A2 (S, x0 , u), one has (I) if x ∈ I T ( , x0 , u), then D F+ (x0 , y0 , u, v)(x) ∩ I T (−C, v) = ∅; (II) if x ∈ / I T ( , x0 , u), then D (2) G + (x0 , z , u, w)(x) ∩ I T (−D, z , w) = ∅ First, we prove (I) Suppose there exists y ∈ D F+ (x0 , y0 , u, v)(x) ∩ I T (−C, v) Then, there are tn ↓ 0, xn → x, and yn → y such that, for all n ∈ N, 1 y0 + tn v + tn2 yn ∈ F(x0 + tn u + tn2 xn ) + C 2 (3) 123 J Optim Theory Appl Since x ∈ A2 (S, x0 , u) ∩ I T ( , x0 , u), for the preceding sequences tn , one has 1 xn → x and n ∈ N such that x0 + tn u + tn2 xn ∈ S ∩ , i.e x0 + tn u n + tn2 xn ∈ 2 for all n ≥ n Since F+ is C-Aubin at (x0 , y0 ), there exist a neighbourhood V of y0 and L F > such that, for large n, 1 F+ (x0 + tn u + tn2 xn ) ∩ V ⊆ 
F(x0 + tn u + tn2 xn ) + L F tn2 xn − xn BY + C 2 From (3), for large n, 1 y0 + tn v + tn2 yn ∈ F(x0 + tn u + tn2 xn ) + L F tn2 xn − xn BY + C 2 Then, for some bn ∈ BY , 1 y0 + tn v + tn2 (yn − L F xn − xn bn ) ∈ F(x0 + tn u + tn2 xn ) + C 2 As yn − L F xn − xn bn → y ∈ I T (−C, v) and I T (−C, v) = I T (−intC, v), there exists n ∈ N such that tn v + tn2 (yn − L F xn − xn bn ) ∈ −intC for all n ≥ n Therefore, (F(x0 + tn u + tn2 xn ) − y0 ) ∩(−intC) = ∅ for all n ≥ n This is a contradiction because (x0 , y0 ) is a local weak minimizer of (P) Next, we prove (II) Suppose there exists z ∈ D (2) G + (x0 , z , u, w)(x) ∩ I T (−D, z , w) For every tn ↓ 0, there exist sequences x¯n → x and z n → z such that, for all n ∈ N, 1 z + tn w + tn2 z n ∈ G(x0 + tn u + tn2 x¯n ) + D 2 (4) Since G + is D-Aubin at (x0 , z ), for every xn → x, there exist a neighbourhood V of z , L G > 0, and n ∈ N such that, for all n ≥ n , 1 G + (x0 + tn u + tn2 x¯n ) ∩ V ⊆ G(x0 + tn u + tn2 xn ) + L G tn2 x¯n − xn B Z + D 2 From (4), for large n and for some bn ∈ B Z , one has z + tn w + tn2 (z n − L G x¯n − xn bn ) ∈ G(x0 +tn u + tn2 xn )+ D As z n − L G x¯n − xn bn → z ∈ I T (−D, z , w), this leads to the existence of n ∈ N such that z + tn w + tn2 (z n − L G x¯n − xn bn ) ∈ −D for all n ≥ n Hence, for large n, G(x0 + tn u + tn2 xn ) ∩ (−D) = ∅, i.e x0 + tn u + tn2 xn ∈ and thus x ∈ I T ( , x0 , u), a contradiction Remark 3.1 (i) Propositions 3.1 and 3.2 can be modified for an arbitrary subset D of Z (not a convex cone) Namely, we need only to replace G + by G Indeed, the 123 J Optim Theory Appl inclusion (2) in the proof of Proposition 3.1 (resp, (4) in the proof of Proposition 1 3.2) becomes z + tn w + tn2 z n ∈ G(x0 + tn u + tn2 xn ), and for large n, one also 2 has z + tn w + tn z n ∈ −D Hence, G(x0 + tn u + tn2 xn ) ∩ (−D) = ∅ The 2 rest of the proof is similar to the proof of the above propositions We note that in [13,16–19], where single-valued maps ( f, g) is in the place of (F, G), D is a convex set and ( f, g) is used instead of the profile map ( f + , g+ ) (ii) By Remark 2.1(i), applying Propositions 3.1 and 3.2 with (u, (v, w)) = (0, (0, 0)), we immediately obtain the following first-order necessary optimality condition in the primal form, for all x ∈ I T (S, x0 ), D(F+ , G + )(x0 , (y0 , z ))(x) ∩ − int(C × D(z )) = ∅, D F+ (x0 , y0 )(x) × DG + (x0 , z )(x) ∩ − int(C × D(z )) = ∅ Propositions 3.1 and 3.2 provide second-order information for local weak minimizers when direction u ∈ X satisfies the first-order condition critically in the sense that v ∈ D F+ (x0 , y0 )(u)∩(−bdC) and w ∈ DG + (x0 , z )(u)∩−clD(z ) We note that in many known second-order necessary conditions, such a critical direction w is not considered For instance, w is only in −D in [7,9,10], in −int D − R+ z in [3], and in −D(z ) in [27] Only in directions w belonging to the additional part −(clD(z )\D(z )) can the so-called envelope-like effect occur, as we will see in Theorems 3.1 and 3.2 below (iii) Since v ∈ −C, z ∈ −D, and w ∈ −clD(z ), one has −intC × (−int D − R+ z ) ⊆ I T (−C, v) × I T (−D, z , w) Hence, Proposition 3.1(i) improves Proposition 3.8 in [3] As −intC − v ⊆ I T (−C, v), Proposition 3.1(i) sharpens Theorem 3.1 in [6] We note that in many higher-order necessary conditions for set-valued optimization, e.g [3,7,9,10], only −int D − R+ z or −D(z ) or −D are involved I T (−D, z , w) and I T (−D, z , w) in Propositions 3.1 and 3.2 play an important role in establishing necessary conditions in the dual form, as we will see in 
Theorems 3.1-3.4 below (iv) With the use of asymptotic objects, Proposition 3.1(ii) improves Theorem 6.2 in [21], and Proposition 3.2(ii) improves Theorem 6.1 in [21], since the authors of [21] used (F, G), not the profile map (F+ , G + ) as in our results (v) If epiG is second-order derivable at (x0 , y0 , u, v), one has D (2) G + (x0 , z , u, w) (x) = D G + (x0 , z , u, w)(x) Hence, because D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(x) ⊆ D F+ (x0 , y0 , u, v)(x) × D (2) G + (x0 , z , u, w)(x), Proposition 3.2(i) is stronger than Proposition 3.1(i) Similarly, if epiG is second-order asymptotic derivable at (x0 , y0 , u, v), then Proposition 3.2(ii) sharpens Proposition 3.1(ii) In [21], the Aubin property of F and G was imposed Here, we use the relaxed property that F+ is C-Aubin at (x0 , y0 ) and G + is D-Aubin at (x0 , z ) For the cone C ⊆ Y (resp, D ⊆ Z ), C ∗ := {c∗ ∈ Y ∗ : c∗ , c ≥ 0, ∀c ∈ C} is the polar cone of C (resp, D ∗ ) Then, it is not hard to check that, for z ∈ −D, [D(z )]∗ = N (−D, z ), the normal cone of −D at z Note that, if D is a convex cone, then N (−D, z ) = {d ∗ ∈ D ∗ : d ∗ , z = 0} 123 J Optim Theory Appl Now, we are able to establish a dual-form second-order necessary condition for local weak minimizers in terms of KKT multipliers Theorem 3.1 Let (x0 , y0 ) ∈ LWMin(P) and z ∈ G(x0 ) ∩ (−D) Then, for all u ∈ X , v ∈ D F+ (x0 , y0 )(u) ∩ (−bdC), and w ∈ DG + (x0 , z )(u) ∩ (−clD(z )), the following statements hold (i) For all x ∈ I T (S, x0 , u) and (y, z) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(x), there exists (c∗ , d ∗ ) ∈ C ∗ × N (−D, z )\{(0, 0)} such that c∗ , v = d ∗ , w = and c∗ , y + d ∗ , z ≥ supd∈A2 (−D,z ,w) d ∗ , d (ii) In particular, for (u, v, w) such that D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) is a convex set, there exists a common (c∗ , d ∗ ) ∈ C ∗ × N (−D, z )\{(0, 0)} such that c∗ , v = d ∗ , w = and c∗ , y + d ∗ , z ≥ supd∈A2 (−D,z ,w) d ∗ , d for all (x, y, z) mentioned in (i) Moreover, c∗ = if the following qualification condition of the KRZ type is fulfilled: {z ∈ Z : (y, z) ∈ cone(D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) −{0} × A2 (−D, z , w))} + D(z ) = Z Proof (i) By Proposition 3.1(i), for all x ∈ I T (S, x0 , u) and (y, z) ∈ D (F+ , G + ) / I T (−C, v) × I T (−D, z , w) By (x0 , (y0 , z ), u, (v, w))(x), one has (y, z) ∈ the standard separation theorem, we obtain (c∗ , d ∗ ) ∈ Y ∗ × Z ∗ \{(0, 0)}, such that, for all c ∈ I T (−C, v) and d ∈ I T (−D, z , w), c∗ , y + d ∗ , z ≥ c∗ , c + d ∗ , d (5) Since C is a convex cone, one has I T (−C, v) = int(cone(−C − v)) (according to Proposition 2.4 in [14]) It follows from (5) that c∗ , c ≤ for all c ∈ cone(−C − v) This leads to c∗ ∈ [cone(C + v)]∗ Because v ∈ −bdC, one has c∗ ∈ C ∗ and c∗ , v = According to Proposition 2.1, one has clI T (−D, z , w) = A2 (−D, z , w) and A2 (−D, z , w) + T (T (−D, z ), w) ⊆ A2 (−D, z , w) From (5), by taking c = 0, one has, for all d ∈ A2 (−D, z , w) and d ∈ T (T (−D, z ), w), c∗ , y + d ∗ , z ≥ d ∗ , d + d ∗ , d Because T (T (−D, z ), w) is a cone, d ∗ ∈ −[T (T (−D, z ), w)]∗ = {d ∗ ∈ N (−D, z ) : d ∗ , w = 0} By letting d = 0, we obtain c∗ , y + d ∗ , z ≥ supd∈A2 (−D,z ,w) d ∗ , d (ii) In view of Proposition 3.1(i), one has D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) ∩ I T (−C, v) ×I T (−D, z , w) = ∅ 123 J Optim Theory Appl By the assumed convexity of D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)), similarly as for part (i), we obtain (c∗ , d ∗ ) ∈ C ∗ × N (−D, z )\{(0, 0)} such that c∗ , v = d ∗ , w 
= and c∗ , y + d ∗ , z ≥ supd∈A2 (−D,z ,w) d ∗ , d for every (y, z) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) Now we prove that c∗ = under the qualification condition Supposing c∗ = 0, one has d ∗ , z ≥ supd∈A2 (−D,z ,w) d ∗ , d for every (y, z) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) By the qualification condition, ∃¯z ∈ Z , ∃t1 , t2 ≥ 0, ∃z ∈ {z ∈ Z : (y , z ) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w)) (I T (S, x0 , u))}, ˆ + t2 (d + z ) Since d ∗ ∈ D ∗ and dˆ ∈ A2 (−D, z , w), ∃d ∈ D, z¯ = t1 (z − d) ∗ d , z = 0, d ∗ , z¯ = t1 d ∗ , z − dˆ + t2 d ∗ , d +z ≥ t1 (supd∈A2 (−D,z ,w) d ∗ , d − d ∗ , dˆ ) ≥ Since z¯ ∈ Z is arbitrary, we have d ∗ = 0, a contradiction because (c∗ , d ∗ ) = (0, 0) Under a cone-Aubin property, we derive a deeper dual-form second-order necessary condition for local weak minimizers, where the assumptions on F and G are separate, and hence, we need only a constraint qualification involving G, not the objective F, as follows Theorem 3.2 Let (x0 , y0 ) ∈ gphF be a local weak minimizer of (P) and z ∈ G(x0 )∩ (−D) Assume that F+ is C-Aubin at (x0 , y0 ) and G + is D-Aubin at (x0 , z ) Then, for all u ∈ X , v ∈ D F+ (x0 , y0 )(u) ∩ (−bdC), and w ∈ DG + (x0 , z )(u) ∩ (−clD(z )), the following statements hold (i) For all x ∈ A2 (S, x0 , u), y ∈ D F+ (x0 , y0 , u, v)(x), and z ∈ D (2) G + (x0 , z , u, w)(x), there exists (c∗ , d ∗ ) ∈ C ∗ × N (−D, z )\{(0, 0)} such that c∗ , v = d ∗ , w = and c∗ , y + d ∗ , z ≥ supd∈A2 (−D,z ,w) d ∗ , d (ii) In particular, for (u, v, w) such that (D F+ (x0 , y0 , u, v), D (2) G + (x0 , z , u, w)) (A2 (S, x0 , u)) is convex, there exists a common (c∗ , d ∗ ) ∈ C ∗ × N (−D, z )\{(0, 0)} satisfying c∗ , v = d ∗ , w = and c∗ , y + d ∗ , z ≥ supd∈A2 (−D,z ,w) d ∗ , d for all (x, y, z) mentioned in (i) Moreover, c∗ = if the following constraint qualification of the KRZ type is satisfied: cone(D (2) G + (x0 , z , u, w)(A2 (S, x0 , u))− A2 (−D, z , w)) + D(z ) = Z (6) Proof Argue similarly as for Theorem 3.1, applying Proposition 3.2(i) instead of Proposition 3.1(i) In the next example, Theorem 3.1 rejects a candidate and supd∈A2 (−D,z ,w) d ∗ , d < 123 J Optim Theory Appl Example 3.1 Let X = R, Y = B2 , Z = B3 , S = [−2, 2], C = B2+ and D = {(d1 , d2 , d3 ) ∈ B3 : d2 d3 ≥ 2d12 , d2 ≤ 0, d3 ≤ 0} (D is taken from [9]), F(x) = {(y1 , y2 ) ∈ B2 : y1 ≥ −x , y1 − y2 ≥ x}, and G(x) = (x, + x , 3x) Consider x0 = 0, y0 = (0, 0) ∈ F(x0 ), and z = (0, 1, 0) ∈ G(x0 ) ∩ (−D) By direct computations, one has D(z ) = {(d1 , d2 , d3 ) ∈ B3 : d3 < 0} ∪ {(0, d2 , 0) : d2 ∈ R}, T (−D, z ) = {(d1 , d2 , d3 ) ∈ B3 : d3 ≥ 0}, N (−D, z ) = {α(0, 0, −1) : α ≥ 0}, D F+ (x0 , y0 )(u) = {(v1 , v2 ) ∈ B2 : v1 ≥ 0, v1 − v2 ≥ u}, and DG + (x0 , z )(u) = {(w1 , w2 , w3 ) ∈ B3 : w2 ≤ 0, w3 ≤ 3u, w2 (w3 − 3u) ≥ 2(w1 − u)2 } Take u = 1, v = (0, −1) ∈ D F+ (x0 , y0 )(u) ∩ −bdC, and w = (1, 0, 0) ∈ DG + (x0 , z )(u) ∩ −clD(z ) Direct calculations yield I T (S, x0 , u) = R and D (F, G)+ (x0 , (y0 , z ), u, (v, w))(x) = {(y1 , y2 , z , z , z ) ∈ B5 : y1 ≥ −2, y1 − y2 ≥ x, z ≤ 2} Choosing x = ∈ I T (S, x0 , u) and (y, z) = (−1, −1, 0, 0, 5) ∈ D (F, G)+ (x0 , (y0 , z ), u, (v, w))(0), for all c∗ = (c1 , c2 ) ∈ B2+ and d ∗ = α(0, 0, −1) ∈ N (−D, z ) with α ≥ 0, one has c∗ , y + d ∗ , z = −c1 − c2 − 5α On the other hand, A2 (−D, z , w) = {(d1 , d2 , d3 ) ∈ B3 : d3 ≥ 4} So, supd∈A2 (−D,z ,w) d ∗ , d = −4α Since c1 , c2 , α ≥ and ((c1 , c2 ), α) = 0, one has c∗ , y + d ∗ , z = −c1 − c2 − 5α < −4α Theorem 3.1 ensures that (x0 , y0 ) ∈ LWMin(P) In some 
cases, second-order contingent derivatives are trivial maps so that Theorems 3.1 and 3.2 cannot be employed Then, the following second-order KKT necessary condition in terms of asymptotic contingent derivatives may be helpful Theorem 3.3 Let (x0 , y0 ) ∈ LWMin(P) and z ∈ G(x0 ) ∩ (−D) Then, for all u ∈ X , v ∈ D F+ (x0 , y0 )(u) ∩ (−bdC), and w ∈ DG + (x0 , z )(u) ∩ (−clD(z )), the following statements hold (i) For all x ∈ I T (S, x0 , u) and (y, z) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(x), there exists (c∗ , d ∗ ) ∈ C ∗ × N (−D, z )\{(0, 0)} such that c∗ , v = d ∗ , w = and c∗ , y + d ∗ , z ≥ (ii) In particular, for (u, v, w) such that D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u) is convex, there exists a common (c∗ , d ∗ ) ∈ C ∗ × N (−D, z )\{(0, 0)} such that c∗ , v = d ∗ , w = and c∗ , y + d ∗ , z ≥ for all (x, y, z) mentioned in (i) Moreover, if the qualification condition {z ∈ Z : (y, z) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u))} + D(z ) = Z is satisfied, then c∗ = Proof (i) By Proposition 3.1(ii), for all x ∈ A (S, x0 , u), (y, z) ∈ D (F+ , G + ) / I T (−C, v) × I T (−D, z , w) The (x0 , (y0 , z ), u, (v, w))(x), one has (y, z) ∈ separation theorem yields (c∗ , d ∗ ) ∈ Y ∗ × Z ∗ \{(0, 0)} such that, for all c ∈ I T (−C, v) and d ∈ I T (−D, z , w), c∗ , y + d ∗ , z ≥ c∗ , c + d ∗ , d This inequality implies that c∗ ∈ C ∗ and c∗ , v = Because clI T (−D, z , w) = A (−D, z , w), the inequality becomes, for all d ∈ A (−D, z , w), c∗ , y + d ∗ , z ≥ d ∗ , d Since D is convex, A (−D, z , w) = T (T (−D, z ), w) Hence, d ∗ ∈ [T (T (−D, z ), w)]∗ , i.e d ∗ ∈ N (−D, z ) and d ∗ , w = Moreover, because A (−D, z , w) is a cone, one has c∗ , y + d ∗ , z ≥ 123 J Optim Theory Appl (ii) In view of Proposition 3.1(ii), D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) ∩ I T (−C, v) ×I T (−D, z , w) = ∅ Due to the assumed convexity of D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)), similarly as for part (i), we obtain (c∗ , d ∗ ) ∈ C ∗ × N (−D, z )\{(0, 0)} such that c∗ , v = d ∗ , w = and c∗ , y + d ∗ , z ≥ 0, for every (y, z) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) Now supposing the qualification condition and c∗ = One has d ∗ , z ≥ for every (y, z) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) Take arbitrarily z¯ ∈ Z By the qualification condition, there are t ≥ 0, z ∈ {z ∈ Z : (y , z ) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u))}, and d ∈ D such that z¯ = z + t (d+z ) Since d ∗ ∈ D ∗ and d ∗ , z = 0, d ∗ , z¯ ≥ Hence, d ∗ = 0, a contradiction because (c∗ , d ∗ ) = (0, 0) Theorem 3.3(ii) improves Theorem 4.1 of [10] Indeed, if F is a C-function, G is a D-function, and I T (S, x0 , u) is a convex set, then D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) is a convex set, and by Remark 2.1(iii), D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(x) = D c(2) (F+ , G + )(x0 , (y0 , z ), u, (v, w))(x) By Theorem 3.3(ii), we obtain a common (c∗ , d ∗ ) ∈ C ∗ × N (−D, z ) \{(0, 0)} such that c∗ , v = d ∗ , w = and c∗ , y + d ∗ , z ≥ for every (y, z) ∈ D c(2) (F+ , G + )(x0 , (y0 , z ), u, (v, w)) (I T (S, x0 , u)), as asserted in that Theorem 4.1 Furthermore, the result of Theorem 4.1 in [10] is only for w in −D and c∗ in C ∗ , while in Theorem 3.3, w is in −clD(z ) and c∗ satisfies additionally c∗ , v = Theorem 3.4 Let (x0 , y0 ) ∈ LWMin(P), z ∈ G(x0 ) ∩ (−D), F+ be C-Aubin at (x0 , y0 ), and G + be D-Aubin at (x0 , z ) Then, for all u ∈ X , v ∈ D F+ (x0 , y0 )(u) ∩ (−bdC), and w ∈ DG + (x0 , z )(u) ∩ 
(−clD(z )), (i) ∀x ∈ A (S, x0 , u), ∀y ∈ D F+ (x0 , y0 , u, v)(x), ∀z ∈ D ( ) G + (x0 , z , u, w)(x), ∃(c∗ , d ∗ ) ∈ C ∗ ×N (−D, z )\{(0, 0)}: c∗ , v = d ∗ , w = 0, c∗ , y + d ∗ , z ≥ (ii) In particular, for (u, v, w) such that (D F+ (x0 , y0 , u, v), D ( ) G + (x0 , z , u, w)) (A (S, x0 , u)) is convex, there exists a common (c∗ , d ∗ ) ∈ C ∗×N (−D, z )\{(0, 0)} such that c∗ , v = d ∗ , w = and c∗ , y + d ∗ , z ≥ for all (x, y, z) encountered in (i) Moreover, c∗ = if the following constraint qualification is satisfied D ( ) G + (x0 , z , u, w)(A (S, x0 , u)) + D(z ) = Z (7) Proof Repeat the proof of Theorem 3.3, with Proposition 3.2(ii) replacing Proposition 3.1(ii) Obviously, as a direct consequence of Theorem 3.3 with (u, (v, w)) = (0, (0, 0)), we immediately obtain 123 J Optim Theory Appl Corollary 3.1 Let (x0 , y0 ) ∈ LWMin(P), z ∈ G(x0 ) ∩ (−D), and D(F+ , G + )(x0 , (y0 , z ))(I T (S, x0 )) be convex Then, there exists (c∗ , d ∗ ) ∈ C ∗ ×N (−D, z )\{(0, 0)} such that c∗ , y + d ∗ , z ≥ for every (y, z) ∈ D(F+ , G + )(x0 , (y0 , z ))(I T (S, x0 )) Moreover, if the qualification condition of the KRZ type {z ∈ Z : (y, z) ∈ D(F+ , G + )(x0 , (y0 , z ))(I T (S, x0 ))} + D(z ) = Z holds, then c∗ = Remark 3.2 (i) Since A2 (−D, z , w) ⊆ cl[cone[cone(−D − z ) − w]] and d ∗ ∈ −[T (T (−D, z ), w)]∗ = −[cl[cone[cone (−D − z ) − w]]]∗ , supd∈A2 (−D,z ,w) d ∗ , d is nonpositive It may even be negative (see Example 3.1) Of course, this supremum vanishes if ∈ A2 (−D, z , w) So, for direction w satisfying this, the result takes the classical form For example, if w ∈ −D(z ), then ∈ A2 (−D, z , w) In particular, if D is a convex polyhedron, no envelope-like effect occurs since D(z ) is closed Since ∈ A (−D, z , w), in Theorems 3.3 and 3.4, the envelope-like effect does not occur (ii) If w ∈ −D(z ), then ∈ A2 (−D, z , w), and hence, condition (6) is implied by the condition cone(D (2) G + (x0 , z , u, w)(A2 (S, x0 , u)) + D(z ) = Z Note that, since A (−D, z , w) is a cone and D ( ) G + (x0 , z , u, w) is strictly positive homogeneous, coneD ( ) G + (x0 , z , u, w)(A (S, x0 , u)) = D ( ) G + (x0 , z , u, w)(A (S, x0 , u)) Hence, (6) and (7) are of the same type, but in terms of two kinds of derivatives In Theorem 3.1 (resp, Theorem 3.3), the condition {z ∈ Z : (y, z) ∈ cone(D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u)) −{0} × A2 (−D, z , w))} + D(z ) = Z (resp,{z ∈ Z : (y, z) ∈ D (F+ , G + )(x0 , (y0 , z ), u, (v, w))(I T (S, x0 , u))} +D(z ) = Z ) involves both F and G (and is called a qualification condition) In Theorems 3.2 and 3.4, the corresponding condition involves only G (and is called a constraint qualification) Our qualification condition extends the KRZ condition to the second-order case (see more details on second-order constraint qualifications in [28]) Similar conditions were also considered in [10] with second-order composed contingent derivatives The second-order constraint qualifications (6) and (7) are weaker than the following ones ¯ − (CQ)21 There exist x¯ ∈ A2 (S, x0 , u) and z¯ ∈ cone(D (2) G + (x0 , z , u, w)(x) A (−D, z , w)) such that z¯ ∈ I T (−D, z ) (CQ)22 The following conditions are satisfied (a) the graph of D (2) G + (x0 , z , u, w)(·) − A2 (−D, z , w) is closed and convex; (b) ∈ core(D (2) G + (x0 , z , u, w)(A2 (S, x0 , u)) − A2 (−D, z , w)); (c) A2 (S, x0 , u) and D (2) G + (x0 , z , u, w)(A2 (S, x0 , u)) − A2 (−D, z , w) are convex, where core(·) stands for the algebraic interior of a set (·) 123 J Optim Theory Appl (CQ)1 There exist x¯ ∈ A (S, x0 , u) and 
z¯ ∈ D z¯ ∈ I T (−D, z ) (CQ)2 The following conditions are satisfied ( )G ¯ + (x , z , u, w)( x) such that (a) the graph of D ( ) G + (x0 , z , u, w) is closed and convex; (b) ∈ coreD ( ) G + (x0 , z , u, w)(A (S, x0 , u)); (c) A (S, x0 , u) and D ( ) G + (x0 , z , u, w)(A (S, x0 , u)) are convex (CQ)21 and (CQ)1 are relaxed second-order Slater constraint qualifications Note that the condition (CQ)21 (resp, (CQ)1 ) is obviously implied by the following stronger condition: There exists x¯ ∈ A2 (S, x0 , u) (resp, x¯ ∈ A (S, x0 , u)) such that z + z¯ ∈ ¯ − A2 (−D, z , w) (z + z¯ ∈ −int D, ∀¯z ∈ −int D, ∀¯z ∈ D (2) G + (x0 , z , u, w)(x) D ( ) G + (x0 , z , u, w)(x), ¯ resp) Note that (CQ)1 improves the corresponding conditions in [10] (CQ)22 and (CQ)2 were used in [29] to get KKT multiplier rules for multiobjective optimization problems with inclusion constraints Proposition 3.3 The following assertions hold (i) each of (CQ)21 and (CQ)22 implies the constraint qualification (6); (ii) each of (CQ)1 and (CQ)2 implies the constraint qualification (7) Proof By reasons of similarity, we prove only (ii) If (CQ)1 holds, for the given 1 ¯ and z ∈ Z , −z − (¯z − z) ∈ D for large x, ¯ z¯ ∈ D ( ) G + (x0 , z , u, w) (x), n n n ∈ N As D b( ) G + (x0 , z , u, w) is strictly positive homogeneous and A (S, x0 , u) ¯ ⊆ D ( ) G + (x0 , z , u, w)(A (S, x0 , u)) is a cone, n z¯ ∈ D ( ) G + (x0 , z , u, w)(n x) 1 Then, z = n z¯ + n (−z − z¯ + z + z ) ∈ D ( ) G + (x0 , z , u, w)(A2 (S, x0 , u)) + n n D(z ) Because z ∈ Z is arbitrary, we have (7) If (CQ)2 holds, setting (x) := D ( ) G + (x0 , z , u, w)(x) + D(z ) By (CQ)2 , the graph of is closed and convex, ∈ core (A (S, x0 , u)), and (A (S, x0 , u)) is a convex set By the Robinson– ¯ Ursescu open mapping theorem (see [30–32]), for x¯ ∈ A (S, x0 , u) with ∈ (x), there exists > such that B(0, 1) ⊂ ((x¯ + B(0, 1)) ∩ A (S, x0 , u)) Therefore, B(0, 1) ⊂ D ( ) G + (x0 , z , u, w)(x) ¯ + D(z ) ⊂D ( ) G + (x0 , z , u, w)(A (S, x0 , u)) + D(z ), As D ( ) G + (x0 , z , u, w) is strictly positive homogeneous and A (S, x0 , u) is a cone, we obtain (7) In next example, Theorem 3.4 can be employed, while Theorems 3.1 and 3.2 cannot This example also illustrates that the constraint qualification (7) can be fulfilled even when the generalized second-order Slater condition in (CQ)1 does not hold Example 3.2 Let X = Y = Z = B2 , S = {(x1 , x2 ) ∈ B2 : x2 = |x1 | }, C = B2+ , D = {(z , z ) ∈ B2 : z ≤ 0, z = 0}, F(x1 , x2 ) = {(y1 , y2 ) ∈ B2 : y1 ≥ x22 − x1 , y2 ≥ x2 − x1 }, and G(x1 , x2 ) = {(z , z ) ∈ B2 : z ≥ x1 x2 } Consider x0 = (0, 0), y0 = (0, 0) ∈ F(x0 ), and z = ∈ G(x0 ) ∩ (−D) Then, 123 J Optim Theory Appl F+ is C-Aubin at (x0 , y0 ) and G + is D-Aubin at (x0 , z ), G + (x) = B2 , and, for u = (u , u ) ∈ X , one has D F+ (x0 , y0 )(u) = {(v1 , v2 ) ∈ B2 : v1 ≥ −u , v2 ≥ u − u }, DG + (x0 , z )(u) = B2 , and T (S, x0 ) = {(u , u ) ∈ B2 : u = 0} By taking u = (u , u ) = (1, 0), one has I T (S, x0 , u) = A2 (S, x0 , u) = ∅ So, Theorems 3.1 and 3.2 cannot be employed We apply Theorem 3.4 with A (S, x0 , u) = R × R+ Choosing v = (0, −1) ∈ D F+ (x0 , y0 )(u) ∩ (−bdC) and w = (0, 0) ∈ DG + (x0 , z )(u) ∩ − clD(z ), we have D F+ (x0 , y0 , u, v)(x1 , x2 ) = {(y1 , y2 ) ∈ B2 : y1 ≥ −x1 }, and D ( ) G + (x0 , z , u, w)(x1 , x2 ) = B2 Hence, (D F+ (x0 , y0 , u, v), D ( ) G + (x0 , z , u, w))(A (S, x0 , u)) is convex Since I T (−D, z ) = ∅, (CQ)1 is not fulfilled We see that D ( ) G + (x0 , z , u, w)(x1 , x2 ) + D(z ) = B2 , and so the constraint qualification in Theorem 3.4 is 
satisfied To check the necessary condition given in this theorem, we discuss all c∗ = (c1 , c2 ) ∈ B2+ \{(0, 0)} with c∗ , v = We have c1 > and c2 = By choosing x = (1, 0) ∈ A (S, x0 , u), y = (−1, 0) ∈ D F+ (x0 , y0 , u, v)(x), and z = (0, 0) ∈ D ( ) G + (x0 , z , u, w)(x), one has, for any c∗ = (c1 , c2 ) ∈ B2+ with c2 > and d ∗ ∈ N (−D, z), c∗ , y + d ∗ , z = −c1 < According to Theorem 3.4, (x0 , y0 ) is not a local weak minimizer of problem (P) In Theorem 4.1 of [10], c∗ ∈ C ∗ \{0}, but does not need to satisfy the condition ∗ c , v = So, if we take c∗ = (0, 1), then c∗ , y + d ∗ , z = Hence, Theorem 4.1 of [10] cannot be employed Second-Order Sufficient Optimality Conditions In this section, we consider sufficient optimality conditions for local firm minimizers since such conditions are also valid for local weak and Pareto minimizers Let C and D be closed cones (possibly nonconvex with empty interior) Definition 4.1 ([33]) A map X ⇒ Y is said to be compact at x0 iff, for any sequence (xn , yn ) ∈ gphF such that xn → x0 , there exists a subsequence (xn k , yn k ) → (x0 , y) for some y ∈ F(x0 ) ¯ tends to x, ¯ then (for Lemma 4.1 ([14]) Let dim X be finite and x0 ∈ S If xn ∈ S\{x} a subsequence) ¯ n → u for some u ∈ T (S, x), ¯ where tn = xn − x¯ ; (i) (xn − x)/t (ii) either there exists w ∈ T (S, x, ¯ u) ∩ u ⊥ such that (xn − x¯ − tn u)/ tn2 → w, or there exist w ∈ T (S, x, ¯ u) ∩ u ⊥ \{0} and rn ↓ such that tn /rn ↓ and (xn − x¯ − tn u)/ tn rn → w, where u ⊥ is the orthogonal complement of u in X Theorem 4.1 Assume that X, Y, and Z are finite dimensional, C is pointed, x0 ∈ , y0 ∈ StrC (F(x0 )), z ∈ G(x0 ) ∩ (−D), and F, G are compact at x0 Then, (x0 , y0 ) ∈ LFMin(P, 2) if (i) for all u ∈ T (S, x0 ), D F(x0 , y0 )(u) ∩ (−intC) = ∅ and DG(x0 , z )(u) ∩ T (−D, z ) = ∅; 123 J Optim Theory Appl (ii) for all u ∈ T (S, x0 ), v ∈ D F(x0 , y0 )(u) ∩ (−bdC) and w ∈ DG(x0 , z )(u) ∩ T (−D, z ), one has (a) ∀x ∈ T (S, x0 , u), ∀(y, z) ∈ (x): z ∈ T (−D, z , w), ∃(c∗ , d ∗ ) ∈ C ∗ × N (−D, z ): c∗ , v = d ∗ , w = 0, c∗ , y + d ∗ , z > 0; (x): z ∈ T (−D, z , w), ∃(c∗ , d ∗ ) ∈ C ∗ × (b) ∀x ∈ T (S, x0 , u), ∀(y, z) ∈ ∗ ∗ N (−D, z ): c , v = d , w = 0, c∗ , y + d ∗ , z > 0, where (x) := (y, z) : (y, z) ∈ D (F, G)(x0 , (y0 , z ), u, (v, w))(x), (x, y, z) ∈ (u, v, w)⊥ , (x) := {(y, z) : (y, z) ∈ D (F, G)(x0 , (y0 , z ), u, (v, w))(x), (x, y, z) ∈ (u, v, w)⊥ \{(0, 0, 0)} } Proof Suppose to the contrary that there exist xn ∈ S ∩ B X (x0 , ) ∩ , yn ∈ F(xn ), n z n ∈ G(xn ) ∩(−D), and cn ∈ C such that yn − y0 + cn ∈ BY (0, xn − x0 ) n (8) There exist (subsequences) yn , z n , y¯ ∈ F(x0 ), and z ∈ G(x0 ) ∩ (−D) such that yn → y¯ and z n → z By (8), cn → y0 − y¯ ∈ C As y0 ∈ StrC (F(x0 )), i.e (F(x0 ) − y0 ) ∩ (−C) = {0}, y¯ = y0 Hence, (xn , yn , z n ) → (x0 , y0 , z ) By Lemma 4.1(i), we have, for tn := (xn , yn , z n ) − (x0 , y0 , z ) , (u n , , wn ) := (xn , yn , z n ) − (x0 , y0 , z ) → (u, v, w) ∈ X × Y × Z \{(0, 0, 0)} tn Therefore, tn−1 (xn − x0 ) → u, and then, u ∈ T (S, x0 ) Since tn−1 (yn − y0 ) → v and yn ∈ F(xn ), v ∈ D F(x0 , y0 )(u) One also has tn−1 (z n − z ) → z and z n ∈ G(xn ) ∩ (−D), which implies that w ∈ DG(x0 , z )(u) ∩ T (−D, z ) It follows from (8) that tn−1 (yn −y0 )+tn−1 cn ∈ BY (0, tn−1 xn −x0 ) Since tn−1 xn −x0 ≤ tn → n and C is a cone, by letting n → ∞, one has v ∈ −C By assumption (i), one has v ∈ −bdC.By Lemma 4.1, it suffices now to consider the following two cases (using subsequences if necessary) First case: there exists (x, y, z) ∈ (u, v, w)⊥ such that (x¯n , y¯n , z¯ n 
) := 2tn−2 (xn , yn , z n ) − (x0 , y0 , z ) − tn (u, v, w) → (x, y, z) It follows that x¯n = 2tn−2 (xn − x0 − tn u) → x, and hence, x0 + tn u + tn2 x¯n ∈ S So, x ∈ T (S, x0 , u) Since y¯n = 2tn−2 (yn −y0 −tn v) → y, z¯ n = 2tn−2 (z n −z −tn w) → z and yn ∈ F(xn ), z n ∈ G(xn ) ∩ (−D), one has z + tn w + tn2 z¯ n ∈ (−D) and 2 2 y0 + tn v+ tn y¯n ∈ F(x0 + tn u + tn x¯n ), z + tn w + tn z¯ n ∈ G(x0 + tn u + tn2 x¯n ) 2 2 123 J Optim Theory Appl Now we have z ∈ T (−D, z , w) and x ∈ T (S, x0 , u), (y, z) ∈ D (F, G)(x0 , (y0 , z ), u, (v, w))(x) From (8), one has yn − y0 − tn v 2 tn Since 2tn−2 xn − x0 Therefore, + tn−1 cn + v tn ∈ BY (0, xn − x0 n tn ) ≤ 2, by letting n → ∞, one has y ∈ −cl(cone(C + v)) y ∈ D F(x0 , y0 , u, v)(x) ∩ (−cl(cone(C + v))), z ∈ D G(x0 , z , u, w)(x) ∩ T (−D, z , w) So, for c∗ ∈ C ∗ with c∗ , v = 0, one has c∗ , y ≤ For d ∗ ∈ N (−D, z ) with d ∗ , w = 0, one has d ∗ , z ≤ Therefore, c∗ , y + d ∗ , z ≤ 0, contradicting assumption (ii)(a) Second case: there exist rn ↓ and (x, y, z) ∈ (u, v, w)⊥ \{(0, 0, 0)} such that −1 rn tn ↓ and (xn , yn , z n ) := 2tn−1rn−1 (xn , yn , z n ) − (x0 , y0 , z ) − tn (u, v, w) → (x, y, z) Arguing similarly as in the first case, we have z ∈ T (−D, z , w), x ∈ T (S, x0 , u), and (y, z) ∈ D (F, G)(x0 , (y0 , z ), u, (v, w))(x) It follows from (8) that yn − y0 − tn v tn r n + tn−1 cn + v rn ∈ BY 0, xn − x0 n tn r n Since xn − x0 tn r n = xn − x0 2 tn tn tn ≤ → 0, rn rn by letting n → ∞, one has y ∈ −cl(cone(C + v)) The rest of this case is similar to the first case Note that the compactness of F at x0 is implied by either of the following conditions (a) F is u.s.c at x0 and F(x0 ) is a compact set (see [25]); (b) F is strong calm at (x0 , y0 ), i.e there exist L F > and a neighbourhood U of x0 such that, for every x ∈ U , F(x) ⊆ y0 + L F x − x0 B X These conditions are relatively restrictive (Condition (b) is used in [3] to establish sufficient conditions.) 
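As a quick numerical illustration of the difference between conditions (a) and (b) (the map below is a made-up example, not one from the paper), the following Python sketch estimates the constant $L_F$ that the strong calmness bound $F(x) \subseteq y_0 + L_F\|x - x_0\| B_Y$ would require for the compact-valued, u.s.c. map $F(x) = [x^2, x^2 + 1]$ at $(x_0, y_0) = (0, 0)$; the estimate blows up as $x \to x_0$, so (b) fails even though (a) holds and $F$ is compact at $x_0$.

```python
import numpy as np

# Hypothetical illustration (not from the paper): F(x) = [x^2, x^2 + 1] is compact-valued
# and upper semicontinuous at x0 = 0 (condition (a)), but it is not strongly calm at
# (x0, y0) = (0, 0) (condition (b)): the required constant L_F grows without bound.

def F_extreme_points(x):
    """Endpoints of the interval F(x) = [x**2, x**2 + 1]; they realize the largest distance to y0."""
    return np.array([x**2, x**2 + 1.0])

x0, y0 = 0.0, 0.0
needed_L = []
for x in np.linspace(1e-4, 1e-1, 500):                     # sample points approaching x0 from the right
    farthest = np.max(np.abs(F_extreme_points(x) - y0))    # max_{y in F(x)} |y - y0|
    needed_L.append(farthest / abs(x - x0))                # constant needed so that F(x) lies in y0 + L|x - x0| B_Y

print(f"largest required L_F on the sample: {max(needed_L):.1f}")  # about 1e4 here: no finite L_F works
```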
The compactness assumption in Theorem 4.1 is also restrictive Now, we use the cut set-valued map Fr : X ⇒ Y defined by Fr (x) := F(x) ∩ r (x)BY , where r : X → R+ , to improve Theorem 4.1 as follows Theorem 4.2 Let X, Y, and Z be finite dimensional, x0 ∈ , y0 ∈ StrC (Fr (x0 )), z ∈ G r (x0 ) ∩ (−D), and Fr , G r compact at x0 If the following conditions are satisfied 123 J Optim Theory Appl (i) for every x ∈ S, F(x) has the C-domination property (i.e F(x) ⊆ MinC F(x) + C), G(x) has the D-domination property; (ii) for every x ∈ S, MinC F(x) ⊆ r (x)BY and Min D G(x) ⊆ r (x)B Z ; (iii) for all u ∈ T (S, x0 ), D Fr (x0 , y0 )(u) ∩ (−intC) = ∅, and DG r (x0 , z )(u) ∩ T (−D, z ) = ∅; (iv) for all u ∈ T (S, x0 ), v ∈ D Fr (x0 , y0 )(u) ∩ (−bdC) and w ∈ DG r (x0 , z )(u) ∩ T (−D, z ), one has (a) for all x ∈ T (S, x0 , u) and (y, z) ∈ (x) with z ∈ T (−D, z , w), there exists (c∗ , d ∗ ) ∈ C ∗ ×N (−D, z ) with c∗ , v = d ∗ , w = such that c∗ , y + d ∗ , z > 0; (x) with z ∈ T (−D, z , w), there (b) for all x ∈ T (S, x0 , u) and (y, z) ∈ exists (c∗ , d ∗ ) ∈ C ∗ ×N (−D, z ) with c∗ , v = d ∗ , w = such that c∗ , y + d ∗ , z > 0, where (x) := (y, z) : (y, z) ∈ D (Fr , G r )(x0 , (y0 , z ), u, (v, w))(x), (x, y, z) ∈ (u, v, w)⊥ , (x) := (y, z) : (y, z) ∈ D (Fr , G r )(x0 , (y0 , z ), u, (v, w))(x), (x, y, z) ∈ (u, v, w)⊥ \{(0, 0, 0)} , then (x0 , y0 ) is a local firm minimizer of order of problem (P) Proof By applying Theorem 4.1, (iii) and (iv) imply that (x0 , y0 ) ∈ LFMin(Pr , 2), where (Pr ) MinC Fr (x) s.t x ∈ S, G r (x) ∩ (−D) = ∅ This means the existence of a neighbourhood U of x0 and α > such that (Fr (x) + C) ∩ B(y0 , α x − x0 ) = ∅, ∀x ∈ r ∩ U \{x0 }, where r := {x ∈ X : x ∈ S, G r (x) ∩ (−D) = ∅} Now, we prove that (x0 , y0 ) ∈ LFMin(P, 2) Suppose there exist x¯ ∈ , y¯ ∈ F(x), ¯ and c¯ ∈ C such that y¯ + c¯ ∈ B(y0 , α x¯ − x0 ) There ¯ and d ∈ d is z¯ ∈ G(x) ¯ such that z¯ ∈ −D By (i), there exists z ∈ Min D G(x) ¯ Thus, such that z¯ = z + d , and by (ii) one has z ∈ r (x)BY So, z ∈ G r (x) ¯ ∩ (−D) = ∅, i.e x¯ ∈ r Similarly, one has z + d ∈ −D, and hence, G r (x) ¯ and c ∈ C such that y¯ = y + c and y + c + c¯ ∈ B(y0 , α x¯ − x0 ) y ∈ Fr (x) ¯ + C) ∩ B(y0 , α x¯ − x0 ) = ∅, which is a contradiction Hence, (Fr (x) In next example, Theorem 4.1 is applicable, but the corresponding result of [3] is not Example 4.1 Let X = Y = B2 , Z = R, S = {(x1 , x2 ) ∈ B2 : x1 + |x2 | = 0}, C = B2+ , D = R+ , F(x) = {(y1 , y2 ) ∈ [−1, 1]×[−1, 1] : y1 ≥ x1 , y2 ≥ x12 +x22 }, and G(x) = {y ∈ R : y = x1 + x22 } Consider x0 = (0, 0), y0 = (0, 0), and z = Direct computations give T (S, x0 ) = {(x1 , x2 ) ∈ B2 : x1 = 0}, T (−D, z ) = −D, and, for (u , u ) ∈ B2 , D F(x0 , y0 )(u , u ) = {(v1 , v2 ) ∈ B2 : v1 ≥ |u |, v2 ≥ 0}, 123 J Optim Theory Appl DG(x0 , y0 )(u , u ) = {w ∈ R : w = u } Therefore, for all u ∈ T (S, x0 ), D F(x0 , y0 )(u)∩(−intC) = ∅, and DG(x0 , z )(u)∩T (−D, z ) = ∅ Hence, assumption (i) in Theorem 4.1 is verified Now, for u = (0, u ) ∈ T (S, x0 ), we consider direction v := (v1 , v2 ) ∈ D F(x0 , y0 )(u) ∩ (−bdC) and w ∈ DG(x0 , z )(u) ∩ T (−D, z ) It follows that v = (0, 0) and w = Since (u, v, w) = (0, 0, 0), u = For a direction u = (0, u ) ∈ T (S, x0 ), we have T (S, x0 , u) = ∅ and T (S, x0 , u) = R− × R So, only condition (ii)(b) in Theorem 4.1 needs to be considered We have T (−D, z , w) = −D and D (F, G)(x0 , (y0 , z ), u, (v, w))(x1 , x2 ) = {(y1 , y2 , z) ∈ B3 : y1 ≥ |x1 |, y2 ≥ 0, z = x1 } Since (y, z) ∈ (x), (x, y, z) ∈ (u, v, w)⊥ \{(0, 0, 0)} Hence, (x1 , x2 ), (0, u ) + (y1 , y2 ), 
(0, 0) + z, = 0, and so x2 = On the other hand, (x, y, z) = (0, 0, 0) So, if x1 = 0, then y1 + y2 > and thus |x1 | + y1 + y2 > Take c∗ = (3, 1) ∈ C ∗ and d ∗ = ∈ N (−D, z ) Then, c∗ , v = d ∗ , w = 0, and for all (y, z) ∈ (x) with z ∈ T (−D, z , w), one has c∗ , y + d ∗ , z = 3y1 + y2 + z ≥ |x1 |+ y1 + y2 + (|x1 | + x1 ) ≥ |x1 |+ y1 + y2 > Hence, condition (ii)(b) in Theorem 4.1 is fulfilled Therefore, (x0 , y0 ) ∈ LFMin(P, 2) We see that F is not strongly calm at (x0 , y0 ) and so the corresponding result in [3] cannot be employed Finally, as an application, we consider the special finite-dimensional case with single-valued f and g in the place of F and G Then, problem (P) becomes a nonsmooth multiobjective program In [16–18,27], some set-valued directional derivatives of single-valued maps are introduced as follows Let us recall the Painlevé–Kuratowski upper limit of a set-valued mapping : X ⇒ Y : limsupx→x¯ (x) := {y ∈ Y : ∃xk → x, ¯ ∃yk ∈ (xk ), yk → y} Definition 4.2 Let f : Bn → Bm be Fréchet differentiable at x0 and u, x ∈ Bn (i) The (set-valued) second-order Hadamard directional derivative of f at x0 in direction (u, x) is D f (x0 , u)(x) := limsupt↓0,x →x f (x0 + tu + 21 t x ) − f (x0 ) − t f (x0 )u 2t (ii) The (set-valued) asymptotic second-order Hadamard directional derivative of f at x0 in direction (u, x) is D f (x0 , u)(x) := limsupt↓0,r ↓0, t →0,x →x r 123 f (x0 + tu + 21 tr x ) − f (x0 ) − t f (x0 )u tr J Optim Theory Appl By Definition 4.2, with v = f (x0 )u and w = g (x0 )u, we have D F(x0 , y0 , u, v)(x) = D f (x0 , u)(x), D F(x0 , y0 , u, v)(x) = D f (x0 , u)(x), D G(x0 , z , u, w)(x) = D g(x0 , u)(x), D G(x0 , z , u, w)(x) = D g(x0 , u)(x) 2 The following statement is a direct consequence of Theorem 4.2 Corollary 4.1 Consider the nonsmooth multiobjective program (P) If the following conditions are satisfied: for all u ∈ T (S, x0 ) with f (x0 )u ∈ −bdC and g (x0 )u ∈ T (−D, g(x0 )), one has (i) ∀x ∈ T (S, x0 , u), ∀(y, z) ∈ D ( f, g)(x0 , u)(x): z ∈ T (−D, g(x0 ), g (x0 )u), (x, y, z) ∈ (u, f (x0 )u, g (x0 )u)⊥ , ∃(c∗ , d ∗ ) ∈ C ∗ × N (−D, g(x0 )): c∗ , f (x0 )u = d ∗ , g (x0 )u = 0, c∗ , y + d ∗ , z > 0; (ii) ∀x ∈ T (S, x0 , u), ∀(y, z) ∈ D ( f, g)(x0 , u)(x): z ∈ T (−D, g(x0 ), g (x0 )u), (x, y, z) ∈ (u, f (x0 )u, g (x0 )u)⊥ \{(0, 0, 0)}, ∃(c∗ , d ∗ ) ∈ C ∗ × N (−D, g(x0 )): c∗ , f (x0 )u = d ∗ , g (x0 )u = 0, c∗ , y + d ∗ , z > 0, then (x0 , y0 ) is a local firm minimizer of order Remark 4.1 (i) Corollary 4.1 improves Theorem in [16] and Theorem 3.2 in [18], in which the constraint x ∈ S is not considered, f , g are assumed stable at x0 in [16] and l-stable at x0 in [18] In [16,18], the multipliers (c∗ , d ∗ ) ∈ (x0 ), where (x ) := {(c∗ , d ∗ ) ∈ C ∗ × N (−D, g(x0 ))\{(0, 0)} : c∗ ◦ f (x0 ) + d ∗ ◦ g (x0 ) = 0}, while we only need (c∗ , d ∗ ) ∈ C ∗ × N (−D, z )\{(0, 0)} with c∗ , f (x0 )u = d ∗ , g (x0 )u = (ii) In [18,19], for the feasible set M := {x ∈ X : g(x) ∈ −D} the authors use T (M, x0 , u), T (M, x0 , u) to establish sufficient optimality conditions However, it is not easy to compute these sets By applying Corollary 4.1 with S = X , we not need to calculate them Perspectives The results of this paper can be extended to a set-valued optimization problem with mixed constraints consisting of generalized inequalities and inclusions Such a model includes more practical situations Another interesting development can be expected for a general case, where the interiors of the ordering cones C and D are empty In this case, one deals with quasi-minimizers and 
quasi-relative minimizers instead of weak minimizers. This general case is often met in practice, since the natural ordering cone (i.e. the positive cone) in many spaces, such as the $l^p$ and $L^p$ spaces, is nonsolid.

Conclusions

The second-order optimality conditions for set-valued optimization in this paper are different from most of the existing ones in the literature at the following points:

• the envelope-like effect is discussed for the first time for set-valued optimization;
• the (generalized) derivatives of the objective and the constraint maps are involved in a separate way, so that only a constraint qualification for the constraint needs to be imposed. Then, the results become stronger.

Acknowledgments  This work was supported by Vietnam National University, Ho Chi Minh City. The authors are grateful to the anonymous referees for their valuable remarks.

References

1. Aubin, J.P.: Contingent derivatives of set-valued maps and existence of solutions to nonlinear inclusions and differential inclusions. In: Nachbin, L. (ed.) Mathematical Analysis and Applications, Part A, pp. 160–229. Academic Press, New York (1981)
2. Corley, H.W.: Optimality conditions for maximizations of set-valued functions. J. Optim. Theory Appl. 58, 1–10 (1988)
3. Durea, M.: Optimality conditions for weak and firm efficiency in set-valued optimization. J. Math. Anal. Appl. 344, 1018–1028 (2008)
4. Götz, A., Jahn, J.: The Lagrange multiplier rule in set-valued optimization. SIAM J. Optim. 10, 331–344 (1999)
5. Isac, G., Khan, A.A.: Dubovitskii–Milyutin approach in set-valued optimization. SIAM J. Control Optim. 47, 144–162 (2008)
6. Jahn, J., Khan, A.A., Zeilinger, P.: Second-order optimality conditions in set optimization. J. Optim. Theory Appl. 125, 331–347 (2005)
7. Khanh, P.Q., Tuan, N.D.: Variational sets for multivalued mappings and a unified study of optimality conditions. J. Optim. Theory Appl. 139, 47–65 (2008)
8. Li, S.J., Zhu, S.K., Li, X.B.: Second order optimality conditions for strict efficiency of constrained set-valued optimization. J. Optim. Theory Appl. 155, 534–557 (2012)
9. Wang, Q.L., Li, S.J., Teo, K.L.: Higher-order optimality conditions for weakly efficient solutions in nonconvex set-valued optimization. Optim. Lett. 4, 425–437 (2010)
10. Zhu, S.K., Li, S.J., Teo, K.L.: Second-order Karush–Kuhn–Tucker optimality conditions for set-valued optimization. J. Global Optim. 58, 673–679 (2014)
11. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, vol. I: Basic Theory. Springer, Berlin (2006)
12. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, vol. II: Applications. Springer, Berlin (2006)
13. Kawasaki, H.: An envelope-like effect of infinitely many inequality constraints on second order necessary conditions for minimization problems. Math. Program. 41, 73–96 (1988)
14. Jiménez, B., Novo, V.: Optimality conditions in differentiable vector optimization via second-order tangent sets. Appl. Math. Optim. 49, 123–144 (2004)
15. Penot, J.P.: Second order conditions for optimization problems with constraints. SIAM J. Control Optim. 37, 303–318 (1999)
16. Gutiérrez, C., Jiménez, B., Novo, V.: On second order Fritz John type optimality conditions in nonsmooth multiobjective programming. Math. Program. Ser. B 123, 199–223 (2010)
17. Khanh, P.Q., Tuan, N.D.: Second order optimality conditions with the envelope-like effect in nonsmooth multiobjective programming I: l-stability and set-valued directional derivatives. J. Math. Anal. Appl. 403, 695–702 (2013)
18. Khanh, P.Q., Tuan, N.D.: Second order optimality conditions with the envelope-like effect in nonsmooth multiobjective programming II: optimality conditions. J. Math. Anal. Appl. 403, 703–714 (2013)
19. Khanh, P.Q., Tuan, N.D.: Second-order optimality conditions with the envelope-like effect for nonsmooth vector optimization in infinite dimensions. Nonlinear Anal. 77, 130–148 (2013)
20. Khanh, P.Q., Tung, N.M.: First and second-order optimality conditions without differentiability in multivalued vector optimization. Positivity, OnlineFirst (2015). doi:10.1007/s11117-015-0330-z
21. Khan, A.A., Tammer, C.: Second-order optimality conditions in set-valued optimization via asymptotic derivatives. Optimization 62, 743–758 (2013)
22. Studniarski, M.: Necessary and sufficient conditions for isolated local minima of nonsmooth functions. SIAM J. Control Optim. 24, 1044–1049 (1986)
23. Jiménez, B.: Strict efficiency in vector optimization. J. Math. Anal. Appl. 265, 264–284 (2002)
24. Flores-Bazán, F., Jiménez, B.: Strict efficiency in set-valued optimization. SIAM J. Control Optim. 48, 881–908 (2009)
25. Aubin, J.P., Frankowska, H.: Set-Valued Analysis. Birkhäuser, Boston (1990)
26. Cominetti, R.: Metric regularity, tangent sets, and second-order optimality conditions. Appl. Math. Optim. 21, 265–287 (1990)
27. Khanh, P.Q., Tuan, N.D.: Optimality conditions for nonsmooth multiobjective optimization using Hadamard directional derivatives. J. Optim. Theory Appl. 133, 341–357 (2007)
28. Jahn, J.: Introduction to the Theory of Nonlinear Optimization, 2nd edn. Springer, Berlin (1996)
29. Taa, A.: Second order conditions for nonsmooth multiobjective optimization problems with inclusion constraints. J. Global Optim. 50, 271–291 (2011)
30. Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings. Springer, Berlin (2009)
31. Robinson, S.M.: Regularity and stability for convex multivalued functions. Math. Oper. Res. 1, 130–143 (1976)
32. Ursescu, C.: Multifunctions with closed convex graph. Czechoslovak Math. J. 25, 438–441 (1975)
33. Penot, J.P.: Differentiability of relations and differential stability of perturbed optimization problems. SIAM J. Control Optim. 22, 529–551 (1984)
