On Higher-Order Sensitivity Analysis in Nonsmooth Vector Optimization

H. T. H. Diem · P. Q. Khanh · L. T. Tung

Abstract We propose the notion of a higher-order radial-contingent derivative of a set-valued map, develop some calculus rules, and apply them directly to obtain optimality conditions for several particular optimization problems. Then, we employ this derivative together with contingent-type derivatives to analyze sensitivity for nonsmooth vector optimization. Properties of higher-order contingent-type derivatives of the perturbation and weak perturbation maps of a parameterized optimization problem are obtained.

Keywords Sensitivity · Higher-order radial-contingent derivative · Higher-order contingent-type derivative · Set-valued vector optimization · Perturbation map · Weak perturbation map

2010 Mathematics Subject Classifications: 90C31, 49J52, 49J53

H. T. H. Diem, Department of Mathematics, College of Cantho, Cantho, Vietnam; e-mail: hthdiem@gmail.com
P. Q. Khanh (corresponding author), Department of Mathematics, International University of Hochiminh City, Linh Trung, Thu Duc, Hochiminh City, Vietnam; e-mail: pqkhanh@hcmiu.edu.vn
L. T. Tung, Department of Mathematics, College of Sciences, Cantho University, Cantho, Vietnam; e-mail: lttung@ctu.edu.vn

1 Introduction

When an optimization problem is perturbed, sensitivity analysis provides quantitative information about its solution maps. For sensitivity results in classical smooth optimization problems, the reader is referred to the book [1] by Fiacco. For nonsmooth optimization, the first related works are [2, 3], where Tanino studied the behavior of solution maps, called perturbation or weak perturbation maps, in terms of contingent derivatives. The TP-derivative was proposed in [4] and used to weaken some assumptions in [2]. Behaviors of many kinds of efficient points were investigated in [5]. The papers [3, 6, 7] studied the behavior of perturbation maps in nonsmooth convex problems.
Important results on sensitivity analysis were obtained by Levy and Rockafellar for generalized equations, a general model including optimization/minimization problems, in [8, 9], using the proto-derivative notion introduced by Rockafellar in [10]. (Recall that the proto-derivative of a map is the contingent derivative, provided it coincides with the adjacent derivative.) Some developments were obtained in [11, 12]. Levy and Mordukhovich investigated sensitivity in terms of coderivatives in [13, 14], while the generalized Clarke epiderivative was the tool for analyzing sensitivity in [15]. All the above-mentioned works dealt only with first-order sensitivity analysis. For higher-order considerations, we observe only references [16, 17]. In [16], the (higher-order) lower Studniarski derivative (defined in [18]) of perturbation maps in vector optimization was considered. In [17], variational sets, introduced recently in [19, 20, 21] together with calculus rules and applications in establishing higher-order optimality conditions, were employed to deal with sensitivity of perturbation and weak perturbation maps of vector optimization. Since higher-order considerations for sensitivity, as for optimality conditions and many other topics in optimization, are of great importance, we aim to deal with this subject in the present paper. Our tools of generalized derivatives are different from those of [16, 17]. First, we propose the notion of a higher-order radial-contingent derivative and develop some calculus rules. This kind of derivative of set-valued maps combines the ideas of the well-known (higher-order) contingent derivative and the radial derivatives, which were developed and successfully used recently in establishing optimality conditions in [22, 23]. This combination makes the radial-contingent derivative bigger than the contingent-type derivative (as set-valued maps) and hence leads to better results in research on optimality conditions and sensitivity analysis.
Furthermore, unlike the radial derivative, which captures global properties of a map, the radial-contingent derivative reflects the local nature of a map and is more suitable in such research. We apply this kind of derivative in a way similar to the TP-derivative employed in [4], but now for higher-order considerations. While the radial-contingent derivative appears mainly in our assumptions, the conclusions of our results are in terms of contingent-type derivatives. This derivative is different from the well-known (higher-order) contingent derivative and has appeared in the literature also under the name "upper Studniarski derivative".

The plan of this paper is as follows. In Sect. 2, some definitions and preliminary facts are collected for our use in the sequel. In Sect. 3, we define the higher-order radial-contingent derivative, develop its calculus rules, and apply them directly to establishing optimality conditions for various kinds of solutions to some particular vector optimization problems, for illustrative purposes. Section 4 consists of relations between contingent-type derivatives of a set-valued map and its profile map (defined at the beginning of Sect. 2), and also relations between sets of various kinds of efficient points of these derivatives. In Sect. 5, we discuss relations between contingent-type derivatives of the perturbation and weak perturbation maps and the feasible-set map in a general vector optimization problem. The short Sect. 6 contains some concluding remarks.

2 Preliminaries

In this paper, if not otherwise stated, let X, Y and Z be normed spaces, and C ⊆ Y a closed convex cone. U(x_0) is used for the set of neighborhoods of x_0. R, R_+, and N stand for the set of the real numbers, nonnegative real numbers, and natural numbers, respectively (shortly, resp). For M ⊆ X, int M, cl M, bd M denote its interior, closure and boundary, resp. A convex set B ⊆ Y is called a base of C iff 0 ∉ cl B and C = {tb | t ∈ R_+, b ∈ B}.
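To make the notion of a base concrete: for C = R^2_+, the standard simplex B = {(s, 1 − s) : s ∈ [0, 1]} is a compact base. The following sketch (a numerical aid, not part of the paper; the sample points are invented) checks the defining properties on a few cone elements: every b ∈ B stays away from 0, and every nonzero c ∈ C decomposes as c = t·b with t ∈ R_+ and b ∈ B.

```python
# B = {(s, 1-s) : s in [0,1]} as a base of C = R_+^2:
# every b in B satisfies b1 + b2 = 1, so 0 is not in cl B, and each
# nonzero c in C decomposes as c = t*b with t = c1 + c2 and b = c/t in B.

def decompose(c):
    # returns (t, b) with c = t*b and b in B; only for c != 0
    t = c[0] + c[1]
    b = (c[0] / t, c[1] / t)
    return t, b

samples = [(1.0, 0.0), (0.0, 2.0), (3.0, 4.0), (0.5, 0.5)]
checks = []
for c in samples:
    t, b = decompose(c)
    checks.append(
        t >= 0
        and 0.0 <= b[0] <= 1.0
        and abs(b[0] + b[1] - 1.0) < 1e-12   # b lies on the simplex, so b != 0
        and abs(t * b[0] - c[0]) < 1e-12
        and abs(t * b[1] - c[1]) < 1e-12
    )
```

The uniqueness of the decomposition for this B is what makes scalings such as t_n := ‖c_n‖^{1/m}, used repeatedly below, well behaved.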
Clearly C has a compact base B if and only if C ∩ bd B is compact. For H : X → 2^Y, the domain, graph, and epigraph of H are defined by, resp,

dom H := {x ∈ X | H(x) ≠ ∅}, gr H := {(x, y) ∈ X × Y | y ∈ H(x)}, epi H := {(x, y) ∈ X × Y | y ∈ H(x) + C}.

The profile map of H is H + C (defined by (H + C)(x) = H(x) + C).

Recall the following concepts of optimality/efficiency in vector optimization, for a_0 ∈ A ⊆ Y.

(i) a_0 is called a local (Pareto) minimal/efficient point of A (with respect to, shortly wrt, C), denoted by a_0 ∈ Min_C A, iff there exists U ∈ U(a_0) such that (A ∩ U − a_0) ∩ ((−C) \ C) = ∅.

(ii) Supposing that int C ≠ ∅, a_0 is said to be a local weak minimal/efficient point of A, denoted by a_0 ∈ WMin_C A, iff there exists U ∈ U(a_0) such that (A ∩ U − a_0) ∩ (−int C) = ∅.

(iii) Assuming that C is pointed, a_0 is termed a Henig-proper minimal/efficient point of A, denoted by a_0 ∈ He_C A, iff there exist a convex cone K ⊊ Y with C \ {0} ⊆ int K and U ∈ U(a_0) such that (A ∩ U − a_0) ∩ (−K) = {0}.

(iv) Let Q ⊆ Y be a nonempty open cone different from Y. a_0 is called a local Q-minimal/efficient point of A (see [24]), denoted by a_0 ∈ QMin_C A, iff there exists U ∈ U(a_0) such that (A ∩ U − a_0) ∩ (−Q) = ∅.

If U = Y, the word "local" is omitted, i.e., we have the corresponding global notions. Note that the notion of Q-minimal solutions contains as special cases many kinds of solutions in vector optimization; see [24, 25]. We mention the concept of Henig-proper efficiency above only as an example among many other definitions of properness in vector optimization. For comprehensive expositions, including comparisons of these notions, see [26, 27, 28].

Recall now the two kinds of higher-order derivatives which we are most concerned with in the sequel. Let F : X → 2^Y, u ∈ X, m ∈ N, and (x_0, y_0) ∈ gr F.

(i) ([18]) The mth-order contingent-type derivative of F at (x_0, y_0) is defined by

D^m F(x_0, y_0)(u) := {v ∈ Y | ∃ t_n ↓ 0, ∃ (u_n, v_n) → (u, v), y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)}.
Setting (x_n, y_n) := (x_0 + t_n u_n, y_0 + t_n^m v_n) and γ_n = t_n^{-1}, we have

D^m F(x_0, y_0)(u) = {v ∈ Y | ∃ γ_n > 0, ∃ (x_n, y_n) ∈ gr F : (x_n, y_n) → (x_0, y_0), (γ_n(x_n − x_0), γ_n^m(y_n − y_0)) → (u, v)}.

(ii) ([25]) The mth-order radial derivative of F at (x_0, y_0) is D_R^m F(x_0, y_0) defined by

D_R^m F(x_0, y_0)(u) := {v ∈ Y | ∃ t_n > 0, ∃ (u_n, v_n) → (u, v), y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)}.

Setting (x_n, y_n) := (x_0 + t_n u_n, y_0 + t_n^m v_n) and γ_n = t_n^{-1}, we have

D_R^m F(x_0, y_0)(u) = {v ∈ Y | ∃ γ_n > 0, ∃ (x_n, y_n) ∈ gr F, (γ_n(x_n − x_0), γ_n^m(y_n − y_0)) → (u, v)}.

Note that D^m F(x_0, y_0) is known also as the upper Studniarski derivative (see, e.g., [16]). Since D^1 F(x_0, y_0) is just the well-known contingent derivative, we choose the term "contingent-type" to reflect the similarity to the well-known mth-order contingent derivative of F at (x_0, y_0) wrt (u_1, v_1), ..., (u_{m−1}, v_{m−1}) ∈ X × Y, defined as (see [29], Chapter 5)

D^m F(x_0, y_0, u_1, v_1, ..., u_{m−1}, v_{m−1})(u) := {v ∈ Y | ∃ t_n → 0^+, ∃ (u_n, v_n) → (u, v), y_0 + t_n v_1 + ··· + t_n^{m−1} v_{m−1} + t_n^m v_n ∈ F(x_0 + t_n u_1 + ··· + t_n^{m−1} u_{m−1} + t_n^m u_n)}.

Of course, for m = 1 this derivative coincides with D^1 F(x_0, y_0). An important geometric feature of the mth-order contingent derivative of F at (x_0, y_0) ∈ gr F is that its graph is the mth-order contingent set of gr F at (x_0, y_0) (see [29], Chapter 5). For the sake of simplicity, recall only for the first order that the contingent cone of A ⊆ X at x_0 ∈ cl A is

T_A(x_0) := {u ∈ X | ∃ t_n ↓ 0, ∃ u_n → u, x_0 + t_n u_n ∈ A}.

Then, gr D^1 F(x_0, y_0) = T_{gr F}(x_0, y_0).

In [25], D_R^m F(x_0, y_0) is called the mth-order outer radial derivative. There are the corresponding lower/inner objects obtained by replacing "∃ t_n, ∃ u_n" by "∀ t_n, ∀ u_n". Since we consider only the upper/outer objects, we omit these adjectives.

3 Higher-order radial-contingent derivatives

Now, we propose an object intermediate between D^m F(x_0, y_0) and D_R^m F(x_0, y_0), as follows.
Definition 3.1 The mth-order radial-contingent derivative of F at (x_0, y_0) is D_S^m F(x_0, y_0) defined by

D_S^m F(x_0, y_0)(u) := {v ∈ Y | ∃ t_n > 0, ∃ (u_n, v_n) → (u, v) : t_n u_n → 0, y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)}.

Setting (x_n, y_n) := (x_0 + t_n u_n, y_0 + t_n^m v_n) and γ_n = t_n^{-1}, we have

D_S^m F(x_0, y_0)(u) = {v ∈ Y | ∃ γ_n > 0, ∃ (x_n, y_n) ∈ gr F : x_n → x_0, (γ_n(x_n − x_0), γ_n^m(y_n − y_0)) → (u, v)}.

Note that D_S^1 F(x_0, y_0) was introduced in [4] and called the TP-derivative. To have some comparisons, we propose a higher-order derivative corresponding to the adjacent derivative (see [29], Chapter 5), in the same way as D^m F(x_0, y_0) corresponds to D^1 F(x_0, y_0), as follows. The mth-order adjacent-type derivative of F at (x_0, y_0) is D^{bm} F(x_0, y_0) defined by

D^{bm} F(x_0, y_0)(u) := {v ∈ Y | ∀ t_n ↓ 0, ∃ (u_n, v_n) → (u, v), y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)}.

Clearly, D^{bm} F(x_0, y_0)(u) ⊆ D^m F(x_0, y_0)(u). This inclusion may be strict, as for F : R → 2^R defined by F(x) = {x^{3/2}}, since

D^2 F(0, 0)(u) = R_+ if u = 0, and ∅ if u ≠ 0;  D^{b2} F(0, 0)(u) = {0} if u = 0, and ∅ if u ≠ 0.

The mth-order strong adjacent-type derivative of F at (x_0, y_0) is D^{lm} F(x_0, y_0) defined by

D^{lm} F(x_0, y_0)(u) := {v ∈ Y | ∀ t_n ↓ 0, ∀ u_n → u, ∃ v_n → v, y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)}.

Clearly, D^{lm} F(x_0, y_0)(u) ⊆ D^{bm} F(x_0, y_0)(u). As for the preceding strict inclusion, this inclusion may be strict. The proof of the following properties is immediate.

Proposition 3.1 Let F : X → 2^Y, u ∈ X, m ∈ N, and (x_0, y_0) ∈ gr F.
(i) (0, 0) ∈ gr D_S^m F(x_0, y_0); if (u, v) ∈ gr D_S^m F(x_0, y_0) then, for any λ ≥ 0, (λu, λ^m v) ∈ gr D_S^m F(x_0, y_0);
(ii) dom D_S^{m+1} F(x_0, y_0) ⊆ dom D_S^m F(x_0, y_0);
(iii) D^{lm} F(x_0, y_0)(u) ⊆ D^{bm} F(x_0, y_0)(u) ⊆ D^m F(x_0, y_0)(u) ⊆ D_S^m F(x_0, y_0)(u) ⊆ D_R^m F(x_0, y_0)(u); if u is nonzero, then D_S^m F(x_0, y_0)(u) = D^m F(x_0, y_0)(u).

The first two inclusions in Proposition 3.1 (iii) are shown above to be possibly strict.
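The gap between D^m and D_S^m at u = 0 in Proposition 3.1 (iii) comes precisely from the sequences t_n that D_S^m allows to stay away from 0. The following sketch (a numerical aid using our own toy map, not one from the paper) exhibits the defining sequences showing v = 7 ∈ D_S^1 F(0, 0)(0) for F(x) = {0} if x ≤ 0 and {1} if x > 0, while for D^1 the forced t_n ↓ 0 would require v_n = 1/t_n → ∞ to reach the value 1, so D^1 F(0, 0)(0) = {0}.

```python
# Toy map (illustrative, not from the paper): F(x) = {0} if x <= 0, {1} if x > 0.
def F(x):
    return {0.0} if x <= 0 else {1.0}

def in_F(y, x, tol=1e-12):
    return any(abs(y - z) <= tol for z in F(x))

v = 7.0
t = 1.0 / v                       # CONSTANT t_n: admissible for D_S^1, not for D^1
# with u_n = 1/n -> 0 we have t_n u_n = t/n -> 0 and y_0 + t_n v_n = t * v = 1,
# which lies in F(t_n u_n) for every n, so v = 7 belongs to D_S^1 F(0,0)(0)
radial_ok = all(in_F(t * v, t * (1.0 / n)) for n in range(1, 200))
witness_value = t * v             # the attained value 1.0 in F(x_0 + t_n u_n)
```

The same constant-t_n device produces the sets R_+ and R_− that appear at u = 0 in several of the examples below.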
The following example shows that so are the other two.

Example 3.1 Let X = Y = R, (x_0, y_0) = (0, 0), and F(x) = {0} if x ≤ 0, and {1, −x^2} if x > 0. Then, we have

D^2 F(x_0, y_0)(u) = {0} if u ≤ 0, and {−u^2} if u > 0;
D_S^2 F(x_0, y_0)(u) = {0} if u < 0, R_+ if u = 0, and {−u^2} if u > 0;
D_R^2 F(x_0, y_0)(u) = {0} if u < 0, and R_+ ∪ {−u^2} if u ≥ 0.

Hence, D^2 F(x_0, y_0)(0) ⊊ D_S^2 F(x_0, y_0)(0) and D_S^2 F(x_0, y_0)(u) ⊊ D_R^2 F(x_0, y_0)(u) for all u > 0.

Now we discuss the possibility of having equalities in Proposition 3.1 (iii), except for the last derivative D_R^m F(x_0, y_0)(u), which has a global character (unlike the local character of the others). Consider the special case where F = f, a single-valued map. Since the above-mentioned higher-order derivatives (except D^m F(x_0, y_0, u_1, v_1, ..., u_{m−1}, v_{m−1})) do not include the intermediate powers from 2 to m − 1, they are not compared with the Fréchet derivative. We state the corresponding modification of this classical object as follows: for f : X → Y and x_0 ∈ X, d^m f(x_0) is a map from X to L(X, L(X, ..., L(X, Y ))...) (m times L), where L(X, Y) denotes the space of the bounded linear maps from X to Y and d^m f(x_0)(x, x, ..., x) = (...(d^m f(x_0)x)x...)x (m times x), such that

lim_{‖h‖→0} ‖f(x_0 + h) − f(x_0) − d^m f(x_0)(h, h, ..., h)‖ / ‖h‖^m = 0,

or, equivalently, f(x_0 + h) = f(x_0) + d^m f(x_0)(h, h, ..., h) + o(‖h‖^m).

Proposition 3.2 For f : X → Y and x_0, u ∈ X, if there exists d^m f(x_0), then

{d^m f(x_0)(u, u, ..., u)} = D^{lm} f(x_0, f(x_0))(u) = D^{bm} f(x_0, f(x_0))(u) = D^m f(x_0, f(x_0))(u) = D_S^m f(x_0, f(x_0))(u).

Proof By the similarity, we consider only the case m = 2. Assume that d^2 f(x_0) exists. Then, for all u ∈ X, we have the following characterizations:

lim_{t→0^+, x→u} ‖f(x_0 + tx) − f(x_0) − d^2 f(x_0)(tx, tx)‖ / ‖tx‖^2 = 0
⇔ lim_{t→0^+, x→u} ‖ (f(x_0 + tx) − f(x_0))/(t^2 ‖x‖^2) − d^2 f(x_0)(x, x)/‖x‖^2 ‖ = 0
⇔ lim_{t→0^+, x→u} (f(x_0 + tx) − f(x_0))/t^2 = d^2 f(x_0)(u, u).
So, for all t_n → 0^+ and x_n → u, v_n := t_n^{−2}(f(x_0 + t_n x_n) − f(x_0)) → d^2 f(x_0)(u, u), i.e., d^2 f(x_0)(u, u) ∈ D^{l2} f(x_0, f(x_0))(u). It remains to show that D_S^2 f(x_0, f(x_0))(u) ⊆ {d^2 f(x_0)(u, u)}. Let v ∈ D_S^2 f(x_0, f(x_0))(u). Then there exist t_n > 0 and (u_n, v_n) → (u, v) such that t_n u_n → 0 and f(x_0) + t_n^2 v_n = f(x_0 + t_n u_n). Setting h_n = t_n u_n, one has

v = lim_{n→∞} [ (f(x_0 + h_n) − f(x_0) − d^2 f(x_0)(h_n, h_n))/‖h_n‖^2 · ‖u_n‖^2 + d^2 f(x_0)(u_n, u_n) ] = d^2 f(x_0)(u, u).

We will see later that, though D_S^m F(x_0, y_0) is different from D^m F(x_0, y_0) only at the origin, it plays a significant role in addressing optimality conditions and sensitivity analysis. Furthermore, among the above-mentioned generalized derivatives, only D_R^m F has a global character. All the others have a local character, since t_n ↓ 0 or t_n u_n → 0 appears in the definitions. For some calculus rules of the derivative D_S^m F, we need the following notion.

Definition 3.2 Let F : X → 2^Y, (x_0, y_0) ∈ gr F, u ∈ X, and m ∈ N. If

D_S^m F(x_0, y_0)(u) = {v ∈ Y | ∀ t_n > 0, ∀ u_n → u with t_n u_n → 0, ∃ v_n → v, y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)},

and the set on the right-hand side is nonempty, then D_S^m F(x_0, y_0) is called an mth-order radial-semi-derivative of F at (x_0, y_0) in the direction u.

We choose the term "semi-derivative" following the idea of Penot for semi-differentiability (of order one) in [30]. Note further that in this paper we need to assume this property only when we are concerned with some calculus rules (we do not need it when we apply D_S^m F without using these rules). This property clearly holds if the left-hand side of the equality in Definition 3.2 is a singleton.

Proposition 3.3 Let F_1, F_2 : X → 2^Y, x_0 ∈ int(dom F_1) ∩ dom F_2, u ∈ X, and y_i ∈ F_i(x_0) for i = 1, 2. Suppose F_1 has an mth-order radial-semi-derivative D_S^m F_1(x_0, y_1) at (x_0, y_1) in the direction u. Then,

D_S^m F_1(x_0, y_1)(u) + D_S^m F_2(x_0, y_2)(u) ⊆ D_S^m (F_1 + F_2)(x_0, y_1 + y_2)(u).

Proof Let v_i ∈ D_S^m F_i(x_0, y_i)(u) for i = 1, 2.
Because v_2 ∈ D_S^m F_2(x_0, y_2)(u), there exist t_n > 0, u_n → u, and v_n^2 → v_2 such that t_n u_n → 0 and y_2 + t_n^m v_n^2 ∈ F_2(x_0 + t_n u_n). Since D_S^m F_1(x_0, y_1) is an mth-order radial-semi-derivative in the direction u, with t_n and u_n above, there exists v_n^1 → v_1 such that y_1 + t_n^m v_n^1 ∈ F_1(x_0 + t_n u_n). Therefore, v_1 + v_2 ∈ D_S^m (F_1 + F_2)(x_0, y_1 + y_2)(u), since

(y_1 + y_2) + t_n^m (v_n^1 + v_n^2) ∈ (F_1 + F_2)(x_0 + t_n u_n).

We cannot reduce the condition x_0 ∈ int(dom F_1) ∩ dom F_2 to x_0 ∈ dom F_1 ∩ dom F_2, as illustrated by the following.

Example 3.2 Let X = Y = R, x_0 = y_1 = y_2 = 0, and

F_1(x) = ∅ if x < 0, and R_+ if x ≥ 0;
F_2(x) = R_− if x < 0, {0} if x = 0, and ∅ if x > 0.

Direct computations give

D_S^1 F_1(0, 0)(u) = ∅ if u < 0, and R_+ if u ≥ 0;
D_S^1 F_2(0, 0)(u) = R_− if u ≤ 0, and ∅ if u > 0.

Then, x_0 belongs to dom F_1 ∩ dom F_2, but not to int(dom F_1) ∩ dom F_2. We see that F_1 has a radial-semi-derivative of order 1 at (0, 0) in any direction u. Now we check that the inclusion of Proposition 3.3 is violated for u = 0. We have D_S^1 F_1(0, 0)(0) = R_+, D_S^1 F_2(0, 0)(0) = R_−, D_S^1 (F_1 + F_2)(0, 0)(0) = R_+, and hence

D_S^1 F_1(0, 0)(0) + D_S^1 F_2(0, 0)(0) ⊄ D_S^1 (F_1 + F_2)(0, 0)(0).

Proposition 3.4 Let F : X → 2^Y, G : Y → 2^Z with Im F ⊆ dom G, (x_0, y_0) ∈ gr F, (y_0, z_0) ∈ gr G, and u ∈ X.
(i) Suppose G has an mth-order radial-semi-derivative D_S^m G(y_0, z_0) in any direction in D_S^1 F(x_0, y_0)(u). Then,
D_S^m G(y_0, z_0)(D_S^1 F(x_0, y_0)(u)) ⊆ D_S^m (G ∘ F)(x_0, z_0)(u).
(ii) Suppose G has a radial-semi-derivative D_S^1 G(y_0, z_0) in any direction in D_S^m F(x_0, y_0)(u). Then,
D_S^1 G(y_0, z_0)(D_S^m F(x_0, y_0)(u)) ⊆ D_S^m (G ∘ F)(x_0, z_0)(u).

Proof By the similarity, we prove only (i). Let u ∈ X, v_1 ∈ D_S^1 F(x_0, y_0)(u) and v_2 ∈ D_S^m G(y_0, z_0)(v_1). There exist t_n > 0, u_n → u, and v_n^1 → v_1 such that t_n u_n → 0 and, for all n, y_0 + t_n v_n^1 ∈ F(x_0 + t_n u_n).
Since D_S^m G(y_0, z_0) is an mth-order radial-semi-derivative in the direction v_1, with t_n and v_n^1 above, there exists v_n^2 → v_2 such that z_0 + t_n^m v_n^2 ∈ G(y_0 + t_n v_n^1). So, v_2 ∈ D_S^m (G ∘ F)(x_0, z_0)(u), since z_0 + t_n^m v_n^2 ∈ G(y_0 + t_n v_n^1) ⊆ (G ∘ F)(x_0 + t_n u_n).

The following properties are immediate from the definition.

Proposition 3.5 Let F : X → 2^Y, (x_0, y_0) ∈ gr F, λ > 0 and β ∈ R. Then, for all u ∈ X,
(i) D_S^m (βF)(x_0, βy_0)(u) = β D_S^m F(x_0, y_0)(u);
(ii) D_S^m F(x_0, y_0)(λu) = λ^m D_S^m F(x_0, y_0)(u).

Now we apply mth-order radial-contingent derivatives to establish necessary optimality conditions for Q-minimal solutions of some particular optimization problems. Consider first the following unconstrained problem, for F : X → 2^Y:

(P) min F(x), x ∈ X.

For a vector optimization problem, from the concepts of optimality/efficiency recalled in Sect. 2, we define in the usual and natural way the corresponding solution notions. For instance, (x_0, y_0) ∈ gr F is called a local Q-minimal solution of (P) iff there exists U ∈ U(x_0) such that (F(U) − y_0) ∩ (−Q) = ∅.

Proposition 3.6 Let X, Y, and Q be as before, F : X → 2^Y, and (x_0, y_0) ∈ gr F. If (x_0, y_0) is a local Q-minimal solution of (P), then, for all m ∈ N,

D_S^m F(x_0, y_0)(X) ∩ (−Q) = ∅.

Proof Suppose to the contrary that there exist u ∈ X and v ∈ D_S^m F(x_0, y_0)(u) ∩ (−Q). Then, there exist sequences t_n > 0 and (u_n, v_n) → (u, v) such that t_n u_n → 0 and y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n). Since the cone Q is open, t_n^m v_n ∈ −Q for large n. Therefore, for such n, t_n^m v_n ∈ (F(x_0 + t_n u_n) − y_0) ∩ (−Q), a contradiction.

Proposition 3.6 is applicable, while some recent existing results are not, in the following.

Example 3.3 Let X = Y = R, (x_0, y_0) = (0, 0), Q = int R_+, C = R_+, and F(x) = {0} if x ≤ 0, and {−1, x^2} if x > 0.
Then, we have D^1 F(x_0, y_0)(u) ≡ {0},

D^2 F(x_0, y_0)(u) = {0} if u ≤ 0, and {u^2} if u > 0;
D_S^2 F(x_0, y_0)(u) = {0} if u < 0, R_− if u = 0, and {u^2} if u > 0.

Since D^1 F(x_0, y_0)(u) ∩ (−int C) = ∅ for all u ∈ X, we cannot use Theorem 2.1 in [22] to reject the candidate (x_0, y_0) for a weak solution. As D^2 F(x_0, y_0)(u) ∩ (−int C) = ∅ for all u ∈ X, the second-order contingent-type derivative cannot be used either. But D_S^2 F(x_0, y_0)(0) ∩ (−int C) ≠ ∅. So, Proposition 3.6 rejects (x_0, y_0).

The next example indicates the necessity of higher-order considerations.

Example 3.4 Let X = Y = R, Q = int R_+, C = R_+, (x_0, y_0) = (0, 0), u = 0, and

F(x) = {0} if x = 0; {|x|} if x = −1/n, n ∈ N; {−|x|} if x = 1/n, n ∈ N; ∅ otherwise.

Then, D_S^1 F(x_0, y_0)(u) = {0} and D_S^2 F(x_0, y_0)(u) = R. Because D_S^1 F(x_0, y_0)(u) ∩ (−int C) = ∅, Theorem 3.1 in [31], with a first-order condition, gives nothing. Since D_S^2 F(x_0, y_0)(u) ∩ (−int C) ≠ ∅, (x_0, y_0) is not a weak solution, due to Proposition 3.6.

Proposition 3.7 Assume that X is finite dimensional, C has a compact base B, F : X → 2^Y, and (x_0, y_0) ∈ gr F. If, for at least one m ≥ 1,
(i) D_S^m F(x_0, y_0)(0) ∩ (−C) = {0};
(ii) D_S^m F(x_0, y_0)(u) ∩ (−C) = ∅ for all nonzero u,
then (x_0, y_0) is a local minimal solution of (P).

Proof Suppose to the contrary that there exist x_n → x_0 and y_n ∈ F(x_n) such that y_n − y_0 ∈ −C \ {0}. There are r_n > 0 and b_n ∈ B such that y_n − y_0 = −r_n b_n and b_n → b for some b ∈ B. For t_n := r_n^{1/m}, we have y_0 + t_n^m(−b_n) ∈ F(x_0 + t_n((x_n − x_0)/t_n)). If (for a subsequence) t_n → t with t = +∞ or 0 < t < +∞, then (x_n − x_0)/t_n → 0 and hence −b ∈ D_S^m F(x_0, y_0)(0), a contradiction. Let t = 0. For s_n := ‖x_n − x_0‖, if s_n/t_n → +∞, then

y_0 + s_n^m[(t_n/s_n)^m(−b_n)] = y_0 + r_n(−b_n) ∈ F(x_0 + s_n (x_n − x_0)/s_n).

Since X is finite dimensional, there exists u ∈ X \ {0} such that (x_n − x_0)/s_n → u. Hence, 0 ∈ D_S^m F(x_0, y_0)(u), contradicting (ii).
If {s_n/t_n} has a convergent subsequence, say s_n/t_n → α ≥ 0, then

y_0 + t_n^m(−b_n) ∈ F(x_n) = F(x_0 + t_n[((x_n − x_0)/s_n)(s_n/t_n)]).

Since ((x_n − x_0)/s_n)(s_n/t_n) → αu, we get −b ∈ D_S^m F(x_0, y_0)(αu), which contradicts (i) if α = 0, or (ii) if α ≠ 0.

The next two examples explain advantages of Proposition 3.7 over recent existing results.

Example 3.5 Let X = Y = R, C = R_+, (x_0, y_0) = (0, 1), and

F(x) = [1, +∞[ if x = 0; R_+ if x = 1; ∅ otherwise.

Then, for all u ∈ X,

D_S^1 F(x_0, y_0)(u) = R_+ if u = 0, and ∅ if u ≠ 0.

It follows from Proposition 3.7 that (x_0, y_0) is a local minimal solution. Note that

D_R^1 F(x_0, y_0)(u) ∩ (−C \ {0}) = [−1, +∞[ ∩ (−C \ {0}) ≠ ∅.

Hence, following Proposition 2.9 in [25], using radial derivatives, (x_0, y_0) is not a global minimal solution. But from that we do not know whether (x_0, y_0) is a local minimal solution or not. Observe also that, unlike radial derivatives, which are suitable for discussing global solutions, radial-contingent derivatives are applied for considering local solutions.

Example 3.6 Let X, Y, C, and (x_0, y_0) be as in Example 3.5. Let

F(x) = {1 + 1/n^3} if x = 1/n, n ∈ N; {1} if x = 0; ∅ otherwise.

Then, for u ∈ X,

D_S^3 F(x_0, y_0)(u) = {u^3} if u ≥ 0, and ∅ if u < 0.

Proposition 3.7 asserts that (x_0, y_0) is a local minimal solution. Since

D_S^1 F(x_0, y_0)(u) = {0} if u ≥ 0, and ∅ if u < 0,

nothing about (x_0, y_0) follows from Theorem 4.1 in [31], which uses first-order considerations.

Applying the above chain rule for mth-order radial-contingent derivatives, we easily establish necessary optimality conditions for local Q-minimal solutions of the following problem:

(P1) min F(x′) subject to x ∈ X and x′ ∈ G(x),

where F : X → 2^Y and G : X → 2^X. This problem can be restated as the unconstrained problem min (F ∘ G)(x) s.t. x ∈ X.

Proposition 3.8 Let Im G ⊆ dom F, (x_0, z_0) ∈ gr G, and (z_0, y_0) ∈ gr F. Assume that (x_0, y_0) is a local Q-minimal solution of (P1) and u is any point in X.
(i) If F has an mth-order radial-semi-derivative D_S^m F(z_0, y_0) in any direction in D_S^1 G(x_0, z_0)(u), then
D_S^m F(z_0, y_0)(D_S^1 G(x_0, z_0)(u)) ∩ (−Q) = ∅.
(ii) If F has a radial-semi-derivative D_S^1 F(z_0, y_0) in any direction in D_S^m G(x_0, z_0)(u), then
D_S^1 F(z_0, y_0)(D_S^m G(x_0, z_0)(u)) ∩ (−Q) = ∅.

Proof By the similarity, we prove only (i). From Proposition 3.6, for u ∈ X we have D_S^m (F ∘ G)(x_0, y_0)(u) ∩ (−Q) = ∅. Proposition 3.4 (i) then implies that D_S^m F(z_0, y_0)(D_S^1 G(x_0, z_0)(u)) ∩ (−Q) = ∅.

To compare Proposition 3.8 with a result in [32], we recall the definition of the contingent epiderivative. A single-valued map EDF(x_0, y_0) : X → Y satisfying epi(EDF(x_0, y_0)) = T_{epi F}(x_0, y_0) is said to be the contingent epiderivative of F at (x_0, y_0) ∈ gr F.

Example 3.7 Let X = Y = R, Q = int R_+, C = R_+, G(x) = {−|x|}, and

F(x) = R_− if x ≤ 0, and ∅ if x > 0.

Since G is single-valued, we try to make use of Proposition 5.2 of [32]. By a direct computation, we have DG(0, G(0))(u) = {−|u|} for all u ∈ X, and T_{epi F}(G(0), 0) = R_− × R. Hence, the contingent epiderivative EDF(G(0), 0)(u) does not exist for any u ∈ X, and the mentioned Proposition 5.2 of [32] cannot be applied. However, F has an mth-order radial-semi-derivative at (G(0), 0) in all directions in D_S^1 G(0, G(0))(0) = {0}, and

D_S^m F(G(0), 0)[D_S^1 G(0, G(0))(0)] = R_−,

which meets −int C. Therefore, Proposition 3.8 above rejects the candidate (0, 0).

To illustrate the sum rule, we consider the following problem:

(P2) min F(x) subject to g(x) ∈ −C,

where g : X → Y. Define M := {x ∈ X | g(x) ∈ −C} (the feasible set) and G : X → 2^Y by

G(x) = {0} if x ∈ M, and {g(x)} otherwise.

Consider the following unconstrained optimization problem, for an arbitrary positive s:

(PC) min (F + sG)(x).

In the particular case where Y = R and F is single-valued, (PC) is used to approximate (P2) in penalty methods (see [33]). Then, usually, s is large or tends to infinity.
Think of a simple one-dimensional case: f(x) = x and g(x) = −x + 2. Then, x* = 2 is a solution of (P2) and also of (PC) for large s, e.g., s = 1000. But, for s = 1/2, the solution of (PC) is not close to 2. (PC) has also been studied independently of (P2). Optimality conditions for this general problem (PC) were obtained in [32] by using sum rules and scalar product rules for contingent epiderivatives. In [21], problem (PC) was investigated by using variational sets. Now, we apply Propositions 3.3, 3.5, and 3.6 for mth-order radial-contingent derivatives to get the following necessary condition for local Q-minimal solutions of (PC). Here, s can be any positive number.

Proposition 3.9 Let dom F ⊆ dom G, x_0 ∈ M, y_0 ∈ F(x_0), u ∈ X, and let either F or G have an mth-order radial-semi-derivative at (x_0, y_0) or (x_0, 0), resp, in the direction u. If (x_0, y_0) is a local Q-minimal solution of (PC), then

(D_S^m F(x_0, y_0)(u) + s D_S^m G(x_0, 0)(u)) ∩ (−Q) = ∅.

Proof By Proposition 3.6, one gets D_S^m (F + sG)(x_0, y_0)(u) ∩ (−Q) = ∅. According to Proposition 3.5, s D_S^m G(x_0, 0)(u) = D_S^m (sG)(x_0, 0)(u). Then, Proposition 3.3 completes the proof:

D_S^m F(x_0, y_0)(u) + s D_S^m G(x_0, 0)(u) ⊆ D_S^m (F + sG)(x_0, y_0 + 0)(u).

The next example indicates a case where Proposition 3.9 is more advantageous than earlier existing results.

Example 3.8 Let X = Y = R, Q = int R_+, C = R_+, g(x) = x^4 − 2x^3, and

F(x) = R_− if x ≤ 0, and ∅ if x > 0.

Then, M = [0, 2] and G(x) = {max{0, x^4 − 2x^3}}. Furthermore, since T_{epi F}(0, 0) = R_− × R and T_{epi G}(0, 0) = {(x, y) | y ≥ 0}, the contingent epiderivative EDF(0, 0)(u) does not exist for any u ∈ X. Hence, Proposition 5.1 in [32] cannot be employed. But F has a radial-semi-derivative of order 1 at (0, 0) in any direction, D_S^1 F(0, 0)(0) = R_−, and {0} ⊆ D_S^1 G(0, 0)(0) ⊆ R_+. So, (D_S^1 F(0, 0)(0) + s D_S^1 G(0, 0)(0)) ∩ (−int C) ≠ ∅. In view of Proposition 3.9, (x_0, y_0) is not a local weak solution of (PC). This fact can be checked directly too.
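The one-dimensional penalty illustration above (f(x) = x, g(x) = −x + 2) can be checked numerically by minimizing the penalized objective φ_s(x) = x + s·max(0, 2 − x) over a grid. For s > 1 the left slope 1 − s is negative, so the minimizer is the constrained solution x* = 2; for s = 1/2 the objective is 1 + x/2, unbounded below, so the grid argmin lands on the left endpoint of the grid. A sketch, illustrative only (the grid and the values of s are our choices):

```python
# Penalized objective for (PC) with f(x) = x, g(x) = -x + 2, C = R_+:
# phi_s(x) = x + s * max(0, 2 - x).
def phi(x, s):
    return x + s * max(0.0, 2.0 - x)

grid = [i / 100.0 for i in range(-1000, 1001)]   # grid on [-10, 10], step 0.01

argmin_large = min(grid, key=lambda x: phi(x, 1000.0))  # large penalty weight
argmin_small = min(grid, key=lambda x: phi(x, 0.5))     # small penalty weight
```

With s = 1000 the argmin is exactly 2.0, while with s = 1/2 it is the grid endpoint −10.0, far from the constrained solution; this is why s is taken large (or sent to infinity) in penalty methods.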
4 Properties of higher-order contingent-type derivatives

In this section, we discuss relations between higher-order contingent-type derivatives of a set-valued map and those of its profile map. Such relations for various kinds of efficient points of these derivatives are also investigated. We first propose a relaxed compactness notion as follows.

Definition 4.1 For u ∈ X, F : X → 2^Y is called mth-order u-directionally contingent compact at (x_0, y_0) ∈ gr F iff, for any t_n ↓ 0 and (u_n, v_n) ∈ X × Y such that u_n → u and y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n) for all n, there exists a convergent subsequence of {v_n}.

A simple sufficient condition ensuring this relaxed compactness is the following.

Proposition 4.1 Let F : X → 2^Y, (x_0, y_0) ∈ gr F, and let Y be finite dimensional (or let F take its values in a finite-dimensional subset). If D_S^m F(x_0, y_0)(0) = {0}, then F is mth-order u-directionally contingent compact at (x_0, y_0) for all u ∈ X.

Proof Let u_n → u, t_n → 0^+, and y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n). It is sufficient to show the boundedness of {v_n}. Suppose lim_{n→∞} ‖v_n‖ = +∞. Set y_n := v_n/‖v_n‖, x_n := u_n/‖v_n‖^{1/m}, and s_n := t_n ‖v_n‖^{1/m}. Then, s_n^m y_n = t_n^m v_n, s_n x_n = t_n u_n, x_n → 0, and s_n x_n → 0. There exists y with ‖y‖ = 1 such that y_n → y (for a subsequence). Therefore, y ∈ D_S^m F(x_0, y_0)(0), a contradiction.

Note that the assumption that D_S^m F(x_0, y_0)(0) = {0} is relatively natural and not very stringent. For the simple case of f being single-valued and differentiable in the sense given before Proposition 3.2, we have D_S^m f(x_0, f(x_0))(0) = D^m f(x_0, f(x_0))(0) = {d^m f(x_0)(0, ..., 0)} = {0} (however, D_S^m F(x_0, y_0)(0) = {0} does not imply this differentiability). Now we begin with one of the relations mentioned at the beginning of the section.

Proposition 4.2 Let F : X → 2^Y and (x_0, y_0) ∈ gr F.
(i) For any u ∈ X, D^m F(x_0, y_0)(u) + C ⊆ D^m (F + C)(x_0, y_0)(u).
(ii) For u ∈ X, if F is mth-order u-directionally contingent compact at (x_0, y_0), then the inclusion in (i) becomes an equality.
(iii) If C has a compact base and D_S^m F(x_0, y_0)(0) ∩ (−C) = {0}, then we have the equality in (i) for all u ∈ X.

Proof (i) Let z = v + c for some v ∈ D^m F(x_0, y_0)(u) and c ∈ C. Then, there exist t_n → 0^+ and (u_n, v_n) → (u, v) such that y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n) for all n. Setting v̄_n := v_n + c, one has v̄_n → v + c and y_0 + t_n^m v̄_n ∈ (F + C)(x_0 + t_n u_n). So, z = v + c ∈ D^m (F + C)(x_0, y_0)(u).

(ii) Let v ∈ D^m (F + C)(x_0, y_0)(u). If (u, v) = (0, 0), we have 0 ∈ D^m F(x_0, y_0)(0) + C. For (u, v) ≠ (0, 0), there exist t_n → 0^+ and (u_n, v_n) → (u, v) such that y_0 + t_n^m v_n ∈ (F + C)(x_0 + t_n u_n). Hence, there exists c_n ∈ C such that y_0 + t_n^m (v_n − c_n/t_n^m) ∈ F(x_0 + t_n u_n). By the u-directional contingent compactness, we can assume that v_n − c_n/t_n^m → v̄ ∈ Y. Then, c_n/t_n^m → c̄ := v − v̄ ∈ C and v ∈ D^m F(x_0, y_0)(u) + C.

(iii) Let u ∈ X and v ∈ D^m (F + C)(x_0, y_0)(u) be arbitrary. As in (ii), we need to consider only (u, v) ≠ (0, 0). There exist t_n → 0^+, (u_n, v_n) → (u, v), and c_n ∈ C such that y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n) + c_n for all n. If there exists n_0 such that c_n = 0 for all n ≥ n_0, then v ∈ D^m F(x_0, y_0)(u) + 0 ⊆ D^m F(x_0, y_0)(u) + C. Now assume that c_n ≠ 0 and c_n/‖c_n‖ → c ∈ C with norm one. There are only two cases, for s_n := ‖c_n‖^{1/m}.

Case 1: s_n/t_n → +∞. Then, s_n[(t_n/s_n)u_n] = t_n u_n → 0. Since

y_0 + s_n^m[(t_n/s_n)^m v_n − c_n/s_n^m] ∈ F(x_0 + s_n[(t_n/s_n)u_n]),

(t_n/s_n)^m v_n − c_n/s_n^m → −c, and (t_n/s_n)u_n → 0, one has −c ∈ D_S^m F(x_0, y_0)(0), which is impossible.

Case 2: {s_n/t_n} is bounded; assume s_n/t_n → α ≥ 0. Then, since

y_0 + t_n^m[v_n − (s_n/t_n)^m (c_n/s_n^m)] ∈ F(x_0 + t_n u_n),

v_n − (s_n/t_n)^m(c_n/s_n^m) → v − α^m c, and u_n → u, one gets v − α^m c ∈ D^m F(x_0, y_0)(u), and hence v ∈ D^m F(x_0, y_0)(u) + C.
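The inclusion in Proposition 4.2 (i) can be watched along concrete sequences. The sketch below (a numerical aid, not part of the paper) uses the single-valued map F(x) = {(x^4, x^2)} with C = R^2_+: along t_n = 1/n and u_n = u, the sequence v_n := ((t_n u)^4/t_n^2, u^2) → (0, u^2) witnesses (0, u^2) ∈ D^2 F(0, (0,0))(u), and adding a fixed c ∈ C to v_n keeps y_0 + t_n^2(v_n + c) in (F + C)(t_n u), witnessing (0, u^2) + c ∈ D^2(F + C)(0, (0,0))(u).

```python
# F(x) = {(x^4, x^2)}, C = R_+^2, (x_0, y_0) = (0, (0, 0)).
def F(x):
    return (x ** 4, x ** 2)

def in_F_plus_C(y, x, tol=1e-12):
    # membership in (F + C)(x): y >= F(x) componentwise
    fx = F(x)
    return y[0] >= fx[0] - tol and y[1] >= fx[1] - tol

u, c = 2.0, (3.0, 1.0)            # a direction and a cone element (our choices)
ok = True
for n in range(1, 200):
    t = 1.0 / n
    vF = ((t * u) ** 4 / t ** 2, u * u)           # exact: t^2 * vF = F(t * u)
    y = (t * t * (vF[0] + c[0]), t * t * (vF[1] + c[1]))
    ok = ok and in_F_plus_C(y, t * u)             # y_0 + t^2 (vF + c) in (F + C)(t u)
v_last = ((1.0 / 199 * u) ** 4 * 199 ** 2, u * u)  # v_n at n = 199, close to (0, u^2)
```

Nothing in this construction needs compactness; that hypothesis only enters when passing in the opposite direction, from sequences in (F + C) back to F, as in parts (ii) and (iii).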
The following example shows that the condition D_S^m F(x_0, y_0)(0) ∩ (−C) = {0} in Proposition 4.2 (iii) is essential.

Example 4.1 Let X = Y = R, (x_0, y_0) = (0, 0), C = R_+, and

F(x) = [−x^3, +∞[ ∪ {−1} if x < 0, and [−x^3, +∞[ if x ≥ 0.

Then, we can check that D_S^2 F(x_0, y_0)(0) = R and, for u < 0,

D^2 F(x_0, y_0)(u) = R_+, D^2 (F + C)(x_0, y_0)(u) = R.

Hence, D_S^2 F(x_0, y_0)(0) ∩ (−C) ≠ {0} and, for u < 0,

D^2 F(x_0, y_0)(u) + C ≠ D^2 (F + C)(x_0, y_0)(u).

Proposition 4.3 Let F : X → 2^Y, (x_0, y_0) ∈ gr F, and u ∈ X.
(i) If F is mth-order u-directionally contingent compact at (x_0, y_0), then:
(a) assuming that C is pointed, Min_C D^m F(x_0, y_0)(u) = Min_C D^m (F + C)(x_0, y_0)(u);
(b) He_C D^m F(x_0, y_0)(u) = He_C D^m (F + C)(x_0, y_0)(u);
(c) when int C ≠ ∅, WMin_C D^m F(x_0, y_0)(u) ⊆ WMin_C D^m (F + C)(x_0, y_0)(u);
(d) when int C ≠ ∅ and K̃ is a closed convex cone with K̃ ⊆ int C ∪ {0},
WMin_C D^m F(x_0, y_0)(u) = WMin_C D^m (F + K̃)(x_0, y_0)(u).
(ii) If C has a compact base and D_S^m F(x_0, y_0)(0) ∩ (−C) = {0}, then assertions (a)-(d) in (i) hold for all u ∈ X.

Proof (i) (a) We first prove the inclusion

Min_C D^m F(x_0, y_0)(u) ⊆ Min_C D^m (F + C)(x_0, y_0)(u).  (1)

Let z ∈ Min_C D^m F(x_0, y_0)(u). Then, z ∈ D^m F(x_0, y_0)(u) ⊆ D^m (F + C)(x_0, y_0)(u). Suppose to the contrary that z ∉ Min_C D^m (F + C)(x_0, y_0)(u). Then, there exists y ∈ D^m (F + C)(x_0, y_0)(u) such that z − y := c ∈ C \ {0}. Proposition 4.2 (ii) yields ȳ ∈ D^m F(x_0, y_0)(u) such that y − ȳ ∈ C. Hence, z − ȳ ∈ C \ {0}, a contradiction.

For the inclusion reverse to (1), let y ∈ Min_C D^m (F + C)(x_0, y_0)(u). Then, y ∈ D^m (F + C)(x_0, y_0)(u). By Proposition 4.2 (ii), there exists ȳ ∈ D^m F(x_0, y_0)(u) ⊆ D^m (F + C)(x_0, y_0)(u) such that y − ȳ := c ∈ C. We see that c = 0, because y ∈ Min_C D^m (F + C)(x_0, y_0)(u). Therefore, y ∈ Min_C D^m F(x_0, y_0)(u).

(b) We prove the inclusion

He_C D^m F(x_0, y_0)(u) ⊆ He_C D^m (F + C)(x_0, y_0)(u).  (2)

Let z ∈ He_C D^m F(x_0, y_0)(u).
Then, there exists a convex cone $K$ with $C \setminus \{0\} \subseteq \mathrm{int}K$ such that $z \in \mathrm{Min}_K D^m F(x_0, y_0)(u)$. Hence, $z \in D^m (F + C)(x_0, y_0)(u)$. We show that $z$ belongs to the right-hand side of (2) relative to the same cone $K$. Suppose there exists $y \in D^m (F + C)(x_0, y_0)(u)$ with $z - y := k \in K \setminus (-K)$. By Proposition 4.2 (ii), there exists $y' \in D^m F(x_0, y_0)(u)$ such that $y - y' := c \in C$, and we obtain a contradiction: $z - y' = k + c \in K \setminus (-K) + C \subseteq K \setminus (-K)$.
To prove the inclusion reverse to (2), let $y \in \mathrm{He}_C D^m (F + C)(x_0, y_0)(u)$ relative to $K$. Then, $y \in D^m (F + C)(x_0, y_0)(u)$. According to Proposition 4.2 (ii), there exists $y' \in D^m F(x_0, y_0)(u) \subseteq D^m (F + C)(x_0, y_0)(u)$ such that $y - y' := c \in C$. If $c \in C \setminus \{0\} \subseteq \mathrm{int}K$, then $y \notin \mathrm{He}_C D^m (F + C)(x_0, y_0)(u)$, a contradiction. So, $c = 0$ and $y \in \mathrm{He}_C D^m F(x_0, y_0)(u)$.
(c) Let $z \in \mathrm{WMin}_C D^m F(x_0, y_0)(u)$. Then, $z \in D^m (F + C)(x_0, y_0)(u)$. Suppose there exists $y \in D^m (F + C)(x_0, y_0)(u)$ such that $z - y := c \in \mathrm{int}C$. Proposition 4.2 (ii) yields $y' \in D^m F(x_0, y_0)(u)$ such that $y - y' := c' \in C$. Thus, $z - y' = c + c' \in \mathrm{int}C$, which is impossible.
(d) First, we prove that
$$\mathrm{WMin}_C D^m F(x_0, y_0)(u) \subseteq \mathrm{WMin}_C D^m (F + \tilde{K})(x_0, y_0)(u). \quad (3)$$
Let $z \in \mathrm{WMin}_C D^m F(x_0, y_0)(u)$. Then, $z \in D^m (F + \tilde{K})(x_0, y_0)(u)$. Suppose to the contrary that $z \notin \mathrm{WMin}_C D^m (F + \tilde{K})(x_0, y_0)(u)$. Then, there exists $y \in D^m (F + \tilde{K})(x_0, y_0)(u)$ such that $z - y \in \mathrm{int}C$. Proposition 4.2 (ii) gives
$$D^m F(x_0, y_0)(u) + \tilde{K} = D^m (F + \tilde{K})(x_0, y_0)(u).$$
Therefore, there exists $y' \in D^m F(x_0, y_0)(u)$ with $y - y' \in \tilde{K}$, and hence $z - y' \in \mathrm{int}C + \tilde{K} \subseteq \mathrm{int}C$, contradicting the fact that $z \in \mathrm{WMin}_C D^m F(x_0, y_0)(u)$.
To check the inclusion reverse to (3), let $z \in \mathrm{WMin}_C D^m (F + \tilde{K})(x_0, y_0)(u)$. Then, $z \in D^m F(x_0, y_0)(u) + \tilde{K}$. Suppose there exists $y \in D^m F(x_0, y_0)(u)$ such that $z - y \in \mathrm{int}C$. If $z \in D^m F(x_0, y_0)(u) \subseteq D^m (F + \tilde{K})(x_0, y_0)(u)$, then $z \notin \mathrm{WMin}_C D^m (F + \tilde{K})(x_0, y_0)(u)$, a contradiction.
If $z \notin D^m F(x_0, y_0)(u)$, then there exists $y' \in D^m F(x_0, y_0)(u) \subseteq D^m (F + \tilde{K})(x_0, y_0)(u)$ such that $z - y' \in \tilde{K} \setminus \{0\} \subseteq \mathrm{int}C$, because $z \in D^m F(x_0, y_0)(u) + \tilde{K}$. Therefore, $z \notin \mathrm{WMin}_C D^m (F + \tilde{K})(x_0, y_0)(u)$, again a contradiction.
(ii) For this case we argue similarly as in (i), but for all $u \in X$ at the same time, applying Proposition 4.2 (iii). For (d), notice additionally that from $\tilde{K} \subseteq \mathrm{int}C \cup \{0\}$ and $D_S^m F(x_0, y_0)(0) \cap (-C) = \{0\}$, it follows that $D_S^m F(x_0, y_0)(0) \cap (-\tilde{K}) = \{0\}$.

The following example gives a negative answer to the natural question: can one have the equality in Proposition 4.3 (i)(d) and (ii)(d) with $\tilde{K}$ replaced by $C$?

Example 4.2 Let $X = \mathbb{R}$, $Y = \mathbb{R}^2$, $(x_0, y_0) = (0, (0, 0))$, $C = \mathbb{R}^2_+$, and $F(x) = \{(x^4, x^2)\}$. Then, we can check that, for any $u \in \mathbb{R}$,
$$D_S^2 F(x_0, y_0)(u) = D^2 F(x_0, y_0)(u) = \{(0, u^2)\}$$
and
$$D^2 (F + C)(x_0, y_0)(u) = \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 \geq u^2\}.$$
Hence, $\mathrm{WMin}_C D^2 F(x_0, y_0)(u) = \{(0, u^2)\}$ and
$$\mathrm{WMin}_C D^2 (F + C)(x_0, y_0)(u) = \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 = 0,\ y_2 \geq u^2\} \cup \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 = u^2\}.$$
So, $D_S^2 F(x_0, y_0)(0) \cap (-C) = \{0\}$ and $\mathrm{WMin}_C D^2 (F + C)(x_0, y_0)(u) \neq \mathrm{WMin}_C D^2 F(x_0, y_0)(u)$.

Remark 4.1 Proposition 4.3 will be applied in Section 5. We can prove that, in Proposition 4.3 (ii) (a) and (d), without the condition $D_S^m F(x_0, y_0)(0) \cap (-C) = \{0\}$, we still have the weaker conclusions, respectively,
$$\mathrm{Min}_C D^m (F + C)(x_0, y_0)(u) \subseteq D^m F(x_0, y_0)(u),$$
$$\mathrm{WMin}_C D^m (F + \tilde{K})(x_0, y_0)(u) \subseteq D^m F(x_0, y_0)(u).$$
Observe that, in the particular case where $Y = \mathbb{R}$ (scalar optimization), the weak minimum, minimum and proper minimum coincide, and it is easy to see that Proposition 4.3 becomes trivial.

5 Higher-order contingent-type derivatives of perturbation maps

In this section, we consider the following parameterized vector optimization problem:
$$\min{}_K\ f(x, u) = (f_1(x, u), f_2(x, u), \ldots, f_q(x, u)),\quad \text{s.t.}$$
$$x \in X(u) \subseteq \mathbb{R}^l,$$
where $x$ is an $l$-dimensional decision variable, $u$ is a $p$-dimensional parameter, $f_i$ is a real-valued objective function on $\mathbb{R}^l \times \mathbb{R}^p$ for $i = 1, 2, \ldots, q$, $X$ is a set-valued map from $\mathbb{R}^p$ to $\mathbb{R}^l$, which defines a feasible decision set, and $K$ is a nonempty pointed closed convex ordering cone in $\mathbb{R}^q$. Let $Y(u)$ be the value at $u$ of the feasible set map in the objective space, i.e.,
$$Y(u) := \{y \in \mathbb{R}^q \mid y = f(x, u) \text{ for some } x \in X(u)\}.$$
We define the following two solution maps $S$ and $W$:
$$S(u) := \mathrm{Min}_K Y(u), \qquad W(u) := \mathrm{WMin}_K Y(u).$$
These set-valued maps $S$ and $W$ are called the perturbation map and weak perturbation map, respectively, of the considered problem.

Definition 5.1 ([32]) For $u_0 \in \mathbb{R}^p$ and a closed convex cone $\tilde{K} \subseteq \mathbb{R}^q$, we say that
(i) $Y$ is $K$-dominated by $S$ near $u_0$ iff $Y(u) \subseteq S(u) + K$ for all $u$ in some $U \in \mathcal{U}(u_0)$;
(ii) $Y$ is $\tilde{K}$-dominated by $W$ near $u_0$ iff $Y(u) \subseteq W(u) + \tilde{K}$ for all $u$ in some $U \in \mathcal{U}(u_0)$.

Remark 5.1 Since $S(u) \subseteq Y(u)$, the $K$-dominatedness of $Y$ by $S$ near $u_0$ (relative to $U$) implies that, for all $u \in U$, $S(u) + K = Y(u) + K$. Hence, $D^m (S + K)(u_0, y_0)(u) = D^m (Y + K)(u_0, y_0)(u)$ for any $y_0 \in S(u_0)$, $u_0 \in U$, and $u \in \mathbb{R}^p$. A similar assertion is true for the $\tilde{K}$-dominatedness.

Proposition 5.1 (i) For $u$ near $u_0 \in \mathbb{R}^p$, assume that $Y$ is $m$th-order $u$-directionally contingent compact at $(u_0, y_0)$.
(a) If $Y$ is $K$-dominated by $S$ near $u_0$, then, for $u$ near $u_0$,
$$\mathrm{Min}_K D^m Y(u_0, y_0)(u) \subseteq D^m S(u_0, y_0)(u).$$
(b) If $\mathrm{int}K \neq \emptyset$, there is a closed convex cone $\tilde{K}$ satisfying $\tilde{K} \subseteq \mathrm{int}K \cup \{0\}$, and $Y$ is $\tilde{K}$-dominated by $W$ near $u_0$, then, for $u$ near $u_0$,
$$\mathrm{WMin}_K D^m Y(u_0, y_0)(u) \subseteq D^m W(u_0, y_0)(u).$$
(ii) Assume that $D_S^m Y(u_0, y_0)(0) \cap (-K) = \{0\}$. Then, assertions (a) and (b) of (i) hold for all $u \in \mathbb{R}^p$.

Proof We prove only assertion (ii)(b). The others can be proved similarly.
Observe that, being a pointed closed convex cone in $\mathbb{R}^q$, $K$ clearly has a compact base, and hence so does $\tilde{K}$. Because $D_S^m Y(u_0, y_0)(0) \cap (-K) = \{0\}$ and $0 \in D_S^m W(u_0, y_0)(0) \subseteq D_S^m Y(u_0, y_0)(0)$, one has $D_S^m W(u_0, y_0)(0) \cap (-\tilde{K}) = \{0\}$. Therefore, one has
$$\mathrm{WMin}_K D^m Y(u_0, y_0)(u) = \mathrm{WMin}_K D^m (Y + \tilde{K})(u_0, y_0)(u) = \mathrm{WMin}_K D^m (W + \tilde{K})(u_0, y_0)(u) = \mathrm{WMin}_K D^m W(u_0, y_0)(u) \subseteq D^m W(u_0, y_0)(u).$$
Here the first and third equalities are due to Proposition 4.3 (ii)(d), and the second one follows from Remark 5.1.

The essentialness of the $\tilde{K}$-dominatedness by $W$ near $u_0$ is justified as follows.

Example 5.1 Let $p = 1$, $q = 2$, $K = \mathbb{R}^2_+$, $(u_0, y_0) = (0, (0, 0))$, and let $Y : \mathbb{R} \to 2^{\mathbb{R}^2}$ be given by
$$Y(u) = \begin{cases} \{y \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 \geq u^4\}, & \text{if } u \leq 0,\\ \{y \in \mathbb{R}^2 \mid y_1 > 0,\ y_2 \geq 0\}, & \text{if } u > 0. \end{cases}$$
Then,
$$W(u) = \begin{cases} \{y \in \mathbb{R}^2 \mid y_1 = 0,\ y_2 \geq u^4\} \cup \{y \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 = u^4\}, & \text{if } u \leq 0,\\ \{y \in \mathbb{R}^2 \mid y_1 > 0,\ y_2 = 0\}, & \text{if } u > 0. \end{cases}$$
Therefore, $Y$ is not $\tilde{K}$-dominated by $W$ near $u_0$ for any closed convex cone $\tilde{K}$ satisfying $\tilde{K} \subseteq \mathrm{int}\mathbb{R}^2_+ \cup \{0\}$. We can check that, for any $u \in \mathbb{R}$,
$$D^2 Y(u_0, y_0)(u) = D_S^2 Y(u_0, y_0)(u) = \{y \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 \geq 0\},$$
$$D_S^2 Y(u_0, y_0)(0) \cap (-K) = \{(0, 0)\},$$
$$\mathrm{WMin}_K D^2 Y(u_0, y_0)(u) = \{y \in \mathbb{R}^2 \mid y_1 y_2 = 0,\ y_1 \geq 0,\ y_2 \geq 0\}.$$
On the other hand,
$$D^2 W(u_0, y_0)(u) = \begin{cases} \{y \in \mathbb{R}^2 \mid y_1 y_2 = 0,\ y_1 \geq 0,\ y_2 \geq 0\}, & \text{if } u \leq 0,\\ \{y \in \mathbb{R}^2 \mid y_1 > 0,\ y_2 = 0\}, & \text{if } u > 0. \end{cases}$$
Hence, for $u > 0$, $\mathrm{WMin}_K D^2 Y(u_0, y_0)(u) \not\subseteq D^2 W(u_0, y_0)(u)$.

The inclusion reverse to that in Proposition 5.1 (i)(a) and (ii)(a) does not hold, as indicated in the following.

Example 5.2 Let $p = 1$, $q = 2$, $U = \mathbb{R}$, $K = \mathbb{R}^2_+$, $(u_0, y_0) = (0, (0, 0))$, and
$$Y(u) = \begin{cases} \{y \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 \geq 0\}, & \text{if } u \leq 0,\\ \{y \in \mathbb{R}^2 \mid y_2 \geq -u (y_1)^{8/3},\ y_1 \geq 0\}, & \text{if } u > 0. \end{cases}$$
Then,
$$S(u) = \begin{cases} \{(0, 0)\}, & \text{if } u \leq 0,\\ \{y \in \mathbb{R}^2 \mid y_2 = -u (y_1)^{8/3},\ y_1 \geq 0\}, & \text{if } u > 0. \end{cases}$$
We can verify that, for any $u \in \mathbb{R}$, $Y$ is $K$-dominated by $S$ and
$$D^2 Y(u_0, y_0)(u) = \{y \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 \geq 0\} = D_S^2 Y(u_0, y_0)(0), \qquad \mathrm{Min}_K D^2 Y(u_0, y_0)(u) = \{(0, 0)\}.$$
Hence, $D_S^2 Y(u_0, y_0)(0) \cap (-K) = \{0\}$.
On the other hand,
$$D^2 S(u_0, y_0)(u) = \begin{cases} \{(0, 0)\}, & \text{if } u < 0,\\ \{y \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 = 0\}, & \text{if } u \geq 0. \end{cases}$$
Hence, for $u \geq 0$, $D^2 S(u_0, y_0)(u) \not\subseteq \mathrm{Min}_K D^2 Y(u_0, y_0)(u)$.

The next example proves the same fact for assertion (b) in Proposition 5.1 (i),(ii).

Example 5.3 Let $p = q = 1$, $U = \mathbb{R}$, $K = \mathbb{R}_+$, $(u_0, y_0) = (0, 0)$, and $Y(u) = \{y \in \mathbb{R} \mid y \geq |u|^{4/3}\}$. Then, $W(u) = \{y \in \mathbb{R} \mid y = |u|^{4/3}\}$ and
$$D^2 Y(u_0, y_0)(u) = \begin{cases} \mathbb{R}_+, & \text{if } u = 0,\\ \emptyset, & \text{if } u \neq 0, \end{cases} \qquad D_S^2 Y(u_0, y_0)(u) \equiv \mathbb{R}_+.$$
Hence, for $\tilde{K} = K$, $Y$ is $\tilde{K}$-dominated by $W$ for all $u \in \mathbb{R}$, and $D_S^2 Y(u_0, y_0)(0) \cap (-K) = \{0\}$. On the other hand,
$$D^2 W(u_0, y_0)(u) = \begin{cases} \mathbb{R}_+, & \text{if } u = 0,\\ \emptyset, & \text{if } u \neq 0. \end{cases}$$
Hence, for $u = 0$, $D^2 W(u_0, y_0)(0) \not\subseteq \mathrm{WMin}_K D^2 Y(u_0, y_0)(0) = \{0\}$.

In [16], the lower Studniarski derivative of perturbation maps is investigated. To see some relations, let us recall that, for $F : X \to 2^Y$ and $(x_0, y_0) \in \mathrm{gr}F$, the $m$th-order lower Studniarski derivative $\underline{D}^m F(x_0, y_0)$ of $F$ at $(x_0, y_0)$ is defined by
$$\underline{D}^m F(x_0, y_0)(u) := \{v \in Y \mid \forall t_n \to 0^+,\ \forall u_n \to u,\ \exists v_n \to v,\ y_0 + t_n^m v_n \in F(x_0 + t_n u_n)\}.$$
Since both the assumptions and conclusions in [16] are in terms of this derivative, the results there are not directly comparable to ours. However, because the domain and image of this derivative are smaller than those of our contingent-type derivative (known also as the upper Studniarski derivative), our results have advantages in some cases, as illustrated in the following.

Example 5.4 Let $p = q = 1$, $U = \mathbb{R}$, $K = \mathbb{R}_+$, $(u_0, y_0) = (0, 0)$, and $Y(u) = \{y \in \mathbb{R} \mid y \geq |u|^{4/3}\}$. Then, $W(u) = \{y \in \mathbb{R} \mid y = |u|^{4/3}\}$ and
$$D^2 Y(u_0, y_0)(u) = D^2 W(u_0, y_0)(u) = \begin{cases} \mathbb{R}_+, & \text{if } u = 0,\\ \emptyset, & \text{if } u \neq 0. \end{cases}$$
Hence, for $u = 0$, $\mathrm{WMin}_K D^2 Y(u_0, y_0)(0) \subseteq D^2 W(u_0, y_0)(0)$. This conclusion can also be obtained by applying Proposition 5.1. However, for all $u \in \mathbb{R}$, $\underline{D}^2 Y(u_0, y_0)(u) = \underline{D}^2 W(u_0, y_0)(u) = \emptyset$, and hence Theorem 4.1 in [16] cannot be employed.
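The contrast in Example 5.4 between the contingent-type (upper Studniarski) derivative and the lower Studniarski derivative can be seen numerically: for the upper derivative, one favourable sequence $u_n \to 0$ suffices, while the lower derivative must cope with every sequence. A rough sketch, our own illustration with hypothetical helper names:

```python
# Numerical sketch of Example 5.4 (our own illustration, not from the paper):
# Y(u) = [|u|^{4/3}, +inf), y0 = 0, m = 2, direction u = 0.
# quotient(t, u') = (inf Y(u0 + t*u') - y0) / t^2.

def quotient(t, u_prime, m=2):
    return abs(t * u_prime) ** (4.0 / 3.0) / t**m

ts = [10.0**-k for k in range(1, 8)]

# Favourable choice u_n = t_n: the quotient tends to 0, so every v >= 0 is
# reachable and the contingent-type derivative D^2 Y(u0, y0)(0) equals R_+.
favourable = [quotient(t, t) for t in ts]
assert favourable[-1] < 1e-3

# Adversarial choice u_n = t_n^{1/4}: the quotient blows up along this
# sequence, so no v works for ALL sequences and the lower Studniarski
# derivative at the direction 0 is empty.
adversarial = [quotient(t, t**0.25) for t in ts]
assert adversarial[-1] > 1e2
```

The blow-up along the adversarial sequence is exactly why $\underline{D}^2 Y(u_0, y_0)(u) = \emptyset$ while $D^2 Y(u_0, y_0)(0) = \mathbb{R}_+$ in Example 5.4.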
Similar examples can be found to show that, even in the special case of order one, our results can be applied in cases where proto-derivatives are empty and the results of [8, 9, 11, 12] are out of use.

To obtain the inclusion reverse to that in assertion (b) of Proposition 5.1 (i),(ii), we need the following definition. If $\underline{D}^m F(x_0, y_0)(u) = D^m F(x_0, y_0)(u)$ for any $u \in \mathrm{dom}(D^m F(x_0, y_0))$, then $F$ is said to have an $m$th-order proto-contingent-type derivative at $(x_0, y_0)$.

Proposition 5.2 If $Y$ has an $m$th-order proto-contingent-type derivative at $(u_0, y_0)$, then, for any $u \in U$,
$$D^m W(u_0, y_0)(u) \subseteq \mathrm{WMin}_K D^m Y(u_0, y_0)(u).$$

Proof Let $y \in D^m W(u_0, y_0)(u)$. Then, there exist $t_n \to 0^+$ and $(u_n, y_n) \to (u, y)$ such that $y_0 + t_n^m y_n \in W(u_0 + t_n u_n) = \mathrm{WMin}_K Y(u_0 + t_n u_n)$ for all $n$. Suppose to the contrary that there exists $y' \in D^m Y(u_0, y_0)(u)$ such that $y - y' \in \mathrm{int}K$. Since $Y$ has an $m$th-order proto-contingent-type derivative at $(u_0, y_0)$, for the mentioned sequences $\{t_n\}$ and $\{u_n\}$, there exists $y'_n \to y'$ such that $y_0 + t_n^m y'_n \in Y(u_0 + t_n u_n)$. For large $n$, we have
$$(y_0 + t_n^m y_n) - (y_0 + t_n^m y'_n) = t_n^m (y_n - y'_n) \in \mathrm{int}K.$$
This contradicts the fact that $y_0 + t_n^m y_n \in \mathrm{WMin}_K Y(u_0 + t_n u_n)$.

The following example asserts that the assumption that $Y$ has the $m$th-order proto-contingent-type derivative cannot be dropped.

Example 5.5 Let $p = q = 1$, $U = \mathbb{R}$, $K = \mathbb{R}_+$, $(u_0, y_0) = (0, 0)$, and
$$Y(u) = \begin{cases} \{-|u|^{4/3}\}, & \text{if } u \leq 0,\\ \{-1, u^2\}, & \text{if } u > 0. \end{cases}$$
Then,
$$W(u) = \begin{cases} \{-|u|^{4/3}\}, & \text{if } u \leq 0,\\ \{-1\}, & \text{if } u > 0. \end{cases}$$
We easily check that $D^2 Y(u_0, y_0)(0) = \mathbb{R}_- \neq \underline{D}^2 Y(u_0, y_0)(0) = \{0\}$ and $D^2 W(u_0, y_0)(0) = \mathbb{R}_- \not\subseteq \mathrm{WMin}_K D^2 Y(u_0, y_0)(0) = \emptyset$.

The perturbation map does not enjoy the property given in Proposition 5.2, as indicated by the following.

Example 5.6 Let $p = 1$, $q = 2$, $U = \mathbb{R}$, $K = \mathbb{R}^2_+$, $(u_0, y_0) = (0, (0, 0))$, and $Y(u) = \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 \geq -y_1^2\}$ for all $u$. Then, $W(u) = \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 = 0,\ y_2 \geq 0\} \cup \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 = -y_1^2\}$ for all $u$.
We have
$$D^2 Y(u_0, y_0)(u) = \underline{D}^2 Y(u_0, y_0)(u) = \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 \geq 0\},$$
$$D^2 W(u_0, y_0)(u) = \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 = 0,\ y_2 \geq 0\} \cup \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 = 0\}.$$
Hence, for all $u$, $D^2 W(u_0, y_0)(u) = \mathrm{WMin}_K D^2 Y(u_0, y_0)(u)$. On the other hand, $S(u) = \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 = -y_1^2\}$. Then, $D^2 S(u_0, y_0)(u) = \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 \geq 0,\ y_2 = 0\}$. Thus, for all $u$, $D^2 S(u_0, y_0)(u) \not\subseteq \mathrm{Min}_K D^2 Y(u_0, y_0)(u) = \{(0, 0)\}$.

However, we have the following close relations.

Proposition 5.3 Assume that
(a) $Y$ has an $m$th-order proto-contingent-type derivative at $(u_0, y_0)$;
(b) $\mathrm{int}K \neq \emptyset$, and there exist a neighborhood $U$ of $u_0$ and a closed convex cone $\tilde{K}$ with $\tilde{K} \subseteq \mathrm{int}K \cup \{0\}$ such that $Y$ is $\tilde{K}$-dominated by $S$ on $U$.
If either $Y$ is $m$th-order $u$-directionally contingent compact at $(u_0, y_0)$ for $u \in U$, or $D_S^m Y(u_0, y_0)(0) \cap (-K) = \{0\}$, then, for such $u$,
$$D^m S(u_0, y_0)(u) = D^m W(u_0, y_0)(u) = \mathrm{WMin}_K D^m Y(u_0, y_0)(u).$$

Proof In view of Propositions 5.1 and 5.2, we need to verify only $\mathrm{WMin}_K D^m Y(u_0, y_0)(u) \subseteq D^m S(u_0, y_0)(u)$. By Proposition 4.3 and Remark 5.1,
$$\mathrm{WMin}_K D^m Y(u_0, y_0)(u) = \mathrm{WMin}_K D^m (Y + \tilde{K})(u_0, y_0)(u) = \mathrm{WMin}_K D^m (S + \tilde{K})(u_0, y_0)(u) = \mathrm{WMin}_K D^m S(u_0, y_0)(u) \subseteq D^m S(u_0, y_0)(u).$$

6 Conclusions

This paper is devoted to higher-order sensitivity analysis for nonsmooth vector optimization. Unlike the earlier works [16, 17] on the same subject, the results here are expressed in terms of the (higher-order) radial-contingent and contingent-type derivatives. The former derivative is introduced here, combining the ideas of the contingent and radial derivatives. Though being different from the contingent-type derivative only at the origin, it is suitable for stating relaxed assumptions. The conclusions are expressed in terms of the contingent-type derivative. This kind of derivative is employed here for the first time in sensitivity analysis in optimization.
Relations between the set of minimum or weak minimum points of the contingent-type derivative of a feasible set map $Y(\cdot)$ and this derivative of the set of minimum or weak minimum points, respectively, of $Y(\cdot)$ are established. We provide examples to ensure that our assumptions are essential and that our results have advantages over those of [16, 17] (the only contributions to higher-order sensitivity analysis in optimization so far), since they can be effectively applied in some cases where the latter cannot. To illustrate the new notion of the higher-order radial-contingent derivative, besides the results in the main stream of sensitivity analysis, calculus rules for this derivative and applications to optimality conditions are also discussed in the paper.

For possible developments of this work, we think that the results of Sect. 5 can be concretized for the most important case where the constraint set $X(u)$ is determined by equalities and inequalities. Furthermore, one can try to use other generalized derivatives, since many have been applied successfully in studies of optimality conditions.

Acknowledgements This research was supported by the Vietnam National University Hochiminh City (VNU-HCM) under the grant number B2013-28-01. A part of the work of the second author was completed during his stay as a visiting professor at the Vietnam Institute for Advanced Study in Mathematics (VIASM), whose hospitality is gratefully acknowledged. The authors are indebted to the anonymous referees for many valuable detailed remarks which have helped to improve the paper significantly.

References

1. Fiacco, A.V.: Introduction to Sensitivity and Stability Analysis in Nonlinear Programming. Academic Press, New York (1983)
2. Tanino, T.: Sensitivity analysis in multiobjective optimization. J. Optim. Theory Appl. 56, 479-499 (1988)
3. Tanino, T.: Stability and sensitivity analysis in convex vector optimization. SIAM J. Control Optim. 26, 521-536 (1988)
4.
Shi, D.S.: Contingent derivative of the perturbation map in multiobjective optimization. J. Optim. Theory Appl. 70, 385-396 (1991)
5. Kuk, H., Tanino, T., Tanaka, M.: Sensitivity analysis in vector optimization. J. Optim. Theory Appl. 89, 713-730 (1996)
6. Shi, D.S.: Sensitivity analysis in convex vector optimization. J. Optim. Theory Appl. 77, 145-159 (1993)
7. Kuk, H., Tanino, T., Tanaka, M.: Sensitivity analysis in parameterized convex vector optimization. J. Math. Anal. Appl. 202, 511-522 (1996)
8. Levy, A.B., Rockafellar, R.T.: Sensitivity analysis of solutions to generalized equations. Trans. Amer. Math. Soc. 345, 661-671 (1994)
9. Levy, A.B.: Lipschitzian multifunctions and a Lipschitzian inverse mapping theorem. Math. Oper. Res. 26, 105-118 (2001)
10. Rockafellar, R.T.: Proto-differentiability of set-valued mappings and its applications in optimization. Ann. Inst. H. Poincaré Anal. Non Linéaire 6, 449-482 (1989)
11. Huy, N.Q., Lee, G.M.: Sensitivity of solutions to a parametric generalized equation. Set-Valued Anal. 16, 805-820 (2008)
12. Lee, G.M., Huy, N.Q.: On proto-differentiability of generalized perturbation maps. J. Math. Anal. Appl. 324, 1297-1309 (2006)
13. Mordukhovich, B.S.: Coderivative analysis of variational systems. J. Global Optim. 28, 347-362 (2004)
14. Levy, A.B., Mordukhovich, B.S.: Coderivatives in parametric optimization. Math. Program. Ser. A 99, 311-327 (2004)
15. Chuong, T.D., Yao, J.C.: Generalized Clarke epiderivatives of parametric vector optimization problems. J. Optim. Theory Appl. 146, 77-94 (2010)
16. Sun, X.K., Li, S.J.: Lower Studniarski derivative of the perturbation map in parameterized vector optimization. Optim. Lett. 5, 601-614 (2011)
17. Anh, L.N.H., Khanh, P.Q.: Variational sets of perturbation maps and applications to sensitivity analysis for constrained vector optimization. J. Optim. Theory Appl., Online first, doi:10.1007/s10957-012-0257-5
18.
Studniarski, M.: Necessary and sufficient conditions for isolated local minima of nonsmooth functions. SIAM J. Control Optim. 25, 1044-1049 (1986)
19. Khanh, P.Q., Tuan, N.D.: Variational sets of multivalued mappings and a unified study of optimality conditions. J. Optim. Theory Appl. 139, 45-67 (2008)
20. Khanh, P.Q., Tuan, N.D.: Higher-order variational sets and higher-order optimality conditions for proper efficiency in set-valued nonsmooth vector optimization. J. Optim. Theory Appl. 139, 243-261 (2008)
21. Anh, N.L.H., Khanh, P.Q., Tung, L.T.: Variational sets: calculus and applications to nonsmooth vector optimization. Nonlinear Anal. TMA 74, 2358-2379 (2011)
22. Luc, D.T.: Contingent derivatives of set-valued maps and applications to vector optimization. Math. Program. 50, 99-111 (1991)
23. Anh, L.N.H., Khanh, P.Q.: Higher-order optimality conditions in set-valued optimization using radial sets and radial derivatives. J. Global Optim., Online first, doi:10.1007/s10898-012-9861-z
24. Ha, T.D.X.: Optimality conditions for several types of efficient solutions of set-valued optimization problems. In: Pardalos, P., Rassias, Th.M., Khan, A.A. (eds.) Nonlinear Analysis and Variational Problems, pp. 305-324. Springer, Berlin (2009)
25. Anh, N.L.H., Khanh, P.Q., Tung, L.T.: Higher-order radial derivatives and optimality conditions in nonsmooth vector optimization. Nonlinear Anal. TMA 74, 7365-7379 (2011)
26. Khanh, P.Q.: Proper solutions of vector optimization problems. J. Optim. Theory Appl. 74, 105-130 (1992)
27. Guerraggio, A., Molho, E., Zaffaroni, A.: On the notion of proper efficiency in vector optimization. J. Optim. Theory Appl. 82, 1-21 (1994)
28. Makarov, E.K., Rachkovski, N.N.: Unified representation of proper efficiency by means of dilating cones. J. Optim. Theory Appl. 101, 141-165 (1999)
29. Aubin, J.P., Frankowska, H.: Set-Valued Analysis. Birkhäuser, Berlin (1990)
30.
Penot, J.-P.: Differentiability of relations and differential stability of perturbed optimization problems. SIAM J. Control Optim. 22, 529-551 (1984)
31. Taa, A.: Necessary and sufficient conditions for multiobjective optimization problems. Optimization 36, 97-104 (1996)
32. Jahn, J., Khan, A.A.: Some calculus rules for contingent epiderivatives. Optimization 52, 113-125 (2003)
33. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis, 3rd edn. Springer, Berlin (2009)
