VIETNAM NATIONAL UNIVERSITY - HCMC
UNIVERSITY OF SCIENCE

Le Thanh Tung

ON GENERALIZED DERIVATIVES, OPTIMALITY CONDITIONS AND UNIQUENESS OF SOLUTIONS IN NONSMOOTH OPTIMIZATION

PhD THESIS IN MATHEMATICS

Major: Mathematical Optimization
Codes: 62 46 20 01

Referee 1: Assoc. Prof. Dr. Nguyen Dinh Huy
Referee 2: Assoc. Prof. Dr. Pham Hoang Quan
Referee 3: Dr. Duong Dang Xuan Thanh
Independent Referee 1: Prof. D.Sc. Vu Ngoc Phat
Independent Referee 2: Assoc. Prof. Dr. Do Van Luu

SCIENTIFIC SUPERVISOR: Prof. D.Sc. Phan Quoc Khanh

Ho Chi Minh City - 2012

Confirmation

I confirm that all the results of this thesis come from my own work under the supervision of Professor Phan Quoc Khanh, with the help of many professors and collaborators, especially Professor Dinh The Luc. They have never been published by other authors.

Ho Chi Minh City, 2012. The author, Le Thanh Tung

Acknowledgement

For the completion of this thesis I am indebted to many people who have significantly contributed to it. First of all, I am deeply grateful to Professor Phan Quoc Khanh, my supervisor, for his kind and valuable guidance, encouragement and help during my research. My warmest thanks are addressed to Professor Dinh The Luc, my French cotutelle supervisor, for his useful direction in research methods as well as his enthusiastic help during my stay at the Université d'Avignon, France. I am grateful to the referees for their valuable remarks, which helped to improve the previous version of the thesis. I would like to thank the University of Science of Ho Chi Minh City and Can Tho University for providing favorable conditions and facilities for my study. I am grateful to the Perdiguier scholarship of the Université d'Avignon et des Pays de Vaucluse, the FORMATH Vietnam fund and the NAFOSTED fund for supporting my research and my three-month stay in Avignon. My thanks are devoted to all members of the Laboratoire d'Analyse Non Linéaire et Géométrie, Université d'Avignon et des Pays de Vaucluse, especially Professor M. Volle, for their hospitality during my stay in Avignon. I would like to address my thanks to my colleagues from the seminar of the Section of Optimization and System Theory, headed by Professor Phan Quoc Khanh, especially Nguyen Le Hoang Anh, who has collaborated with me on our two joint papers. The last but not least thanks are devoted to my teachers and colleagues from the Department of Mathematics, College of Science, Can Tho University, and to my family and my friends, who always encourage me during my research.

Foreword

Nonsmooth analysis has been intensively developed for more than half a century. One of its major purposes is application to nonsmooth optimization. Various generalized derivatives have been introduced to replace the classical Fréchet and Gâteaux derivatives, to meet the continually increasing diversity of practical problems. For comprehensive books, the reader is referred to Clarke (1983), Aubin and Frankowska (1990), Rockafellar and Wets (1998) and Mordukhovich (2006). We can observe a domination in use of the Clarke derivative (1973), the Aubin contingent derivative (1981) and the Mordukhovich coderivative and limiting subdifferential (1976). However, in particular problems, many other generalized derivatives sometimes have advantages. For instance, variational sets, proposed by Khanh and Tuan (2008), are defined as follows. Let $X$ and $Y$ be real normed spaces, $F : X \to 2^Y$, $(x_0, y_0) \in \mathrm{gr}F$ and $v_1, \dots, v_{m-1} \in Y$. The variational sets of type 1 and type 2 are defined as
$$V^m(F, x_0, y_0, v_1, \dots, v_{m-1}) = \mathop{\mathrm{Limsup}}_{x \xrightarrow{F} x_0,\ t \to 0^+} \frac{F(x) - y_0 - t v_1 - \dots - t^{m-1} v_{m-1}}{t^m},$$
$$W^m(F, x_0, y_0, v_1, \dots, v_{m-1}) = \mathop{\mathrm{Limsup}}_{x \xrightarrow{F} x_0,\ t \to 0^+} \frac{\mathrm{cone}_+(F(x) - y_0) - v_1 - \dots - t^{m-2} v_{m-1}}{t^{m-1}}.$$
These subsets of the image space are larger than the images of subsets of the pre-image space under many known generalized derivatives, as shown in the following.
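As a rough numerical illustration (not part of the thesis) of what such a limit set collects, the following sketch samples difference quotients for the first-order variational set of type 1 of the simple set-valued map $F(x) = [|x|, +\infty)$ at $(0,0)$, for which $V^1(F,0,0) = [0, +\infty)$. The map $F$, the sampling ranges and the random seed are arbitrary illustrative choices.

```python
import numpy as np

# Crude sampling of V^1(F, 0, 0) for F(x) = [|x|, +infinity):
# collect quotients (y - y0)/t with y in F(x), x -> 0, t -> 0+.
rng = np.random.default_rng(1)
quotients = []
for _ in range(50000):
    t = 10.0 ** rng.uniform(-6, -2)        # t -> 0+
    x = rng.uniform(-3, 3) * t**2          # x -> 0, so |x|/t -> 0
    y = abs(x) + rng.uniform(0, 5) * t     # a point y in F(x)
    quotients.append(y / t)                # difference quotient (y - 0)/t

q = np.array(quotients)
print("smallest sampled quotient:", round(q.min(), 4))   # ~0, never negative
print("quotients spread over [0, 5]:", np.histogram(q, bins=5, range=(0, 5))[0])
```

The sampled quotients stay nonnegative and fill the range chosen for the sampling, consistent with $V^1(F,0,0) = [0,+\infty)$.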
Let $S \subseteq X$ and $x, u_1, u_2, \dots, u_{m-1} \in X$, $m \ge 1$. The $m$th-order contingent set of $S$ at $(x, u_1, u_2, \dots, u_{m-1})$ is
$$T^m_S(x, u_1, \dots, u_{m-1}) = \mathop{\mathrm{Limsup}}_{t \to 0^+} \frac{S - x - t u_1 - \dots - t^{m-1} u_{m-1}}{t^m}.$$
The $m$th-order adjacent set of $S$ at $(x, u_1, \dots, u_{m-1})$ is
$$T^{\flat m}_S(x, u_1, \dots, u_{m-1}) = \mathop{\mathrm{Liminf}}_{t \to 0^+} \frac{S - x - t u_1 - \dots - t^{m-1} u_{m-1}}{t^m}.$$
The $m$th-order Clarke (or circatangent) set of $S$ at $(x, u_1, \dots, u_{m-1})$ is
$$T^{cm}_S(x, u_1, \dots, u_{m-1}) = \mathop{\mathrm{Liminf}}_{t \to 0^+,\ z \xrightarrow{S} x} \frac{S - z - t u_1 - \dots - t^{m-1} u_{m-1}}{t^m}.$$
Based on these sets, some generalized derivatives were proposed as follows. Let $F : X \to 2^Y$, $(x_0, y_0) \in \mathrm{gr}F$ and $(u_1, v_1), \dots, (u_{m-1}, v_{m-1}) \in X \times Y$. The $m$th-order contingent derivative, Aubin and Frankowska (1981), of $F$ at $(x_0, y_0)$ with respect to (wrt) $(u_1, v_1), \dots, (u_{m-1}, v_{m-1})$ has the value at $x \in X$
$$D^m F(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x) = \{y \in Y : (x, y) \in T^m_{\mathrm{gr}F}(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})\}.$$
The $m$th-order adjacent derivative, Aubin and Frankowska (1981), of $F$ at $(x_0, y_0)$ wrt $(u_1, v_1), \dots, (u_{m-1}, v_{m-1})$ has the value at $x \in X$
$$D^{\flat m} F(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x) = \{y \in Y : (x, y) \in T^{\flat m}_{\mathrm{gr}F}(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})\}.$$
The $m$th-order Clarke derivative, Khanh and Tuan (2008), of $F$ at $(x_0, y_0)$ wrt $(u_1, v_1), \dots, (u_{m-1}, v_{m-1})$ has the value at $x \in X$
$$D^{cm} F(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x) = \{y \in Y : (x, y) \in T^{cm}_{\mathrm{gr}F}(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})\}.$$
Then we have the following comparison, for all $x \in X$:
$$D^{cm} F(x_0, y_0, u_1, v_1, \dots)(x) \subseteq D^{\flat m} F(x_0, y_0, u_1, v_1, \dots)(x) \subseteq D^m F(x_0, y_0, u_1, v_1, \dots)(x) \subseteq V^m(F, x_0, y_0, v_1, \dots, v_{m-1}).$$
The $m$th-order contingent epiderivative, Jahn and Rauh (1997) for $m = 1$ and Jahn et al. (2005) for $m = 2$, of $F$ at $(x_0, y_0)$ wrt $(u_1, v_1), \dots, (u_{m-1}, v_{m-1})$ is the single-valued mapping $D^m_e F(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})$ whose epigraph is
$$\mathrm{epi}\, D^m_e F(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1}) = T^m_{\mathrm{epi}F}(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1}).$$
Then, for all $x \in X$,
$$D^m_e F(x_0, y_0, u_1, v_1, \dots)(x) \subseteq D^m F_+(x_0, y_0, u_1, v_1, \dots)(x) \subseteq V^m(F_+, x_0, y_0, v_1, \dots, v_{m-1}),$$
where $C \subseteq Y$ is an ordering cone and $F_+(x) = F(x) + C$. The $m$th-order generalized contingent epiderivative, Li and Chen (2006), Chen and Jahn (1998) for $m = 1$ and Jahn et al. (2005) for $m = 2$, of $F$ at $(x_0, y_0)$ wrt $(u_1, v_1), \dots, (u_{m-1}, v_{m-1})$ has the value
$$D^m_g F(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x) = \mathrm{Min}_C\{y \in Y : y \in D^m F_+(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x)\}.$$
Here, $\mathrm{Min}_C\{\cdot\}$ denotes the set of efficient points of the set in braces wrt $C$. The $m$th-order generalized Clarke epiderivative, Lalitha and Arora (2008) for $m = 1$, of $F$ at $(x_0, y_0)$ wrt $(u_1, v_1), \dots, (u_{m-1}, v_{m-1})$ has the value
$$D^{gcm} F(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x) = \mathrm{Min}_C\{y \in Y : y \in D^{cm} F_+(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x)\}.$$
The $m$th-order weak contingent epiderivative, Chen et al. (2009), of $F$ at $(x_0, y_0)$ wrt $(u_1, v_1), \dots, (u_{m-1}, v_{m-1})$ has the value
$$D^m_w F(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x) = \mathrm{WMin}_C\{y \in Y : y \in D^m F_+(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x)\}.$$
Here, $\mathrm{WMin}_C\{\cdot\}$ denotes the set of weakly efficient points of the set in braces wrt $C$.
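The distinction between $\mathrm{Min}_C$ and $\mathrm{WMin}_C$ can be illustrated on a finite set. The following small sketch (not from the thesis) computes both for an arbitrary finite set $A \subseteq \mathbb{R}^2$ with the ordering cone $C = \mathbb{R}^2_+$; the set $A$ is an invented example.

```python
def min_C(A):
    """Efficient (Pareto-minimal) points of A wrt C = R^2_+."""
    return [y for y in A
            if not any(all(ai <= yi for ai, yi in zip(a, y)) and a != y for a in A)]

def wmin_C(A):
    """Weakly efficient points of A: no a in A with a < y componentwise."""
    return [y for y in A
            if not any(all(ai < yi for ai, yi in zip(a, y)) for a in A)]

A = [(0, 2), (1, 1), (2, 0), (1, 0), (2, 2), (1, 2)]
print("Min_C :", min_C(A))    # [(0, 2), (1, 0)]
print("WMin_C:", wmin_C(A))   # additionally (1, 1), (2, 0), (1, 2)
```

Every efficient point is weakly efficient; the converse may fail, e.g. $(1,2)$ above is weakly efficient but not efficient, since $(1,0) \le (1,2)$ componentwise.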
The $m$th-order weak Clarke epiderivative, Lalitha and Arora (2008) for $m = 1$, of $F$ at $(x_0, y_0)$ wrt $(u_1, v_1), \dots, (u_{m-1}, v_{m-1})$ has the value
$$D^{wcm} F(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x) = \mathrm{WMin}_C\{y \in Y : y \in D^{cm} F_+(x_0, y_0, u_1, v_1, \dots, u_{m-1}, v_{m-1})(x)\}.$$
Then, for all $x \in X$, we have the following comparisons:
$$D^m_g F(x_0, y_0, u_1, v_1, \dots)(x) \subseteq D^m_w F(x_0, y_0, u_1, v_1, \dots)(x) \subseteq D^m F_+(x_0, y_0, u_1, v_1, \dots)(x) \subseteq V^m(F_+, x_0, y_0, v_1, \dots, v_{m-1}),$$
and
$$D^{gcm} F(x_0, y_0, u_1, v_1, \dots)(x) \subseteq D^{wcm} F(x_0, y_0, u_1, v_1, \dots)(x) \subseteq D^{cm} F_+(x_0, y_0, u_1, v_1, \dots)(x) \subseteq V^m(F_+, x_0, y_0, v_1, \dots, v_{m-1}).$$

As illustrated above, the variational sets are bigger than the corresponding sets defined by the mentioned derivatives, and hence the necessary conditions obtained by separation arguments are stronger than many known ones. Of course, sufficient optimality conditions based on separations of bigger sets may be weaker. But, using variational sets, we can establish sufficient conditions which have almost no gap with the corresponding necessary ones. The second advantage of the variational sets is that we can define these sets of any order to get higher-order optimality conditions. This feature is significant, since many important and powerful generalized derivatives can be defined only for the first and second orders, and the higher-order optimality conditions available in the literature are much fewer than the first and second-order ones. The third strong point of the variational sets is that almost no assumptions need to be imposed for their being well defined and nonempty, or for establishing optimality conditions.

Another clear example is approximations, introduced by Jourani and Thibault (1993), defined as follows. A set $A_f(x_0) \subseteq L(X, Y)$ is said to be a first-order approximation of $f : X \to Y$ at $x_0 \in X$ if there exists a neighborhood $U$ of $x_0$ such that, for all $x \in U$,
$$f(x) - f(x_0) \in A_f(x_0)(x - x_0) + o(\|x - x_0\|).$$
This kind of generalized derivative contains a major part of the known notions of derivatives, as illustrated now. If $f$ is Fréchet differentiable at $x_0$, then $\{f'(x_0)\}$ is a first-order approximation of $f$ at $x_0$. Let $X = \mathbb{R}^n$, $Y = \mathbb{R}^m$ and let $f$ be a mapping of class $C^{0,1}$, i.e., locally Lipschitz. The Clarke generalized Jacobian, Clarke (1983), of $f$ at $x_0 \in \mathbb{R}^n$, denoted by $\partial_C f(x_0)$, is defined by
$$\partial_C f(x_0) = \mathrm{co}\{\lim f'(x_i) : x_i \to x_0,\ f'(x_i) \text{ exists}\}.$$
Then $\partial_C f(x_0)$ is a first-order approximation of $f$ at $x_0$. Let $f : \mathbb{R}^n \to \mathbb{R}^m$ be continuous. A closed subset $\partial f(x_0) \subseteq L(\mathbb{R}^n, \mathbb{R}^m)$ is called an approximate Jacobian, Jeyakumar and Luc (1998), of $f$ at $x_0 \in \mathbb{R}^n$ if, for each $v \in \mathbb{R}^m$ and $u \in \mathbb{R}^n$,
$$(vf)^+(x_0, u) \le \sup_{M \in \partial f(x_0)} \langle v, Mu \rangle,$$
where $(\cdot)^+$ denotes the upper Dini directional derivative of a scalar function, i.e.,
$$(vf)^+(x_0, u) = \limsup_{t \downarrow 0} \frac{\langle v, f(x_0 + tu) - f(x_0) \rangle}{t}.$$
An approximate Jacobian $\partial f(x_0)$ is termed a Fréchet approximate Jacobian, Luc (2001), of $f$ at $x_0$ if there is a neighbourhood $U$ of $x_0$ such that, for each $x \in U$,
$$f(x) - f(x_0) \in \partial f(x_0)(x - x_0) + o(\|x - x_0\|).$$
It is obvious that any Fréchet approximate Jacobian is a first-order approximation. If an approximate Jacobian $\partial f(x_0)$ is upper semicontinuous at $x_0$, then $\mathrm{cl\,co}\,\partial f(x_0)$ is a first-order approximation.
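A quick numerical sanity check (not from the thesis) of the approximation property for the standard nonsmooth example $f(x) = |x|$ at $x_0 = 0$, with $A_f(0) = \partial_C f(0) = [-1,1]$; the discretization of $[-1,1]$ and the sample points are arbitrary choices.

```python
import numpy as np

# Check f(x) - f(x0) in A_f(x0)(x - x0) + o(|x - x0|) for f = |.| at x0 = 0.
f, x0 = np.abs, 0.0
A = np.linspace(-1.0, 1.0, 2001)           # discretization of the Clarke Jacobian [-1, 1]

for x in [0.1, -0.01, 0.001, -1e-5]:
    values = A * (x - x0)                  # the set A_f(x0)(x - x0)
    residual = np.min(np.abs(f(x) - f(x0) - values))
    print(f"x = {x:>8}: dist / |x - x0| = {residual / abs(x - x0):.2e}")
# The ratio stays at 0: here the o(|x - x0|) term is even exactly zero.
```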
Furthermore, it is advantageous that even infinitely discontinuous maps may have approximations, as shown by the following example. Let $X = Y = \mathbb{R}$, $x_0 = 0$ and
$$f(x) = \begin{cases} -\dfrac{1}{x}, & \text{if } x > 0, \\[1mm] -x, & \text{if } x \le 0. \end{cases}$$
Then $f$ is infinitely discontinuous at $x_0$, but it admits $A_f(x_0) = (-\infty, \alpha)$, for any $-1 < \alpha < 0$, as an approximation.
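The same kind of numerical check as above can be run for this discontinuous map; this is only an illustrative sketch, with $\alpha = -0.5$ picked arbitrarily from the admissible range.

```python
# Check f(x) - f(0) in A_f(0) * x + o(|x|) for the infinitely discontinuous f above,
# with A_f(0) = (-infinity, alpha).
def f(x):
    return -1.0 / x if x > 0 else -x

alpha = -0.5

def dist_to_Ax(value, x):
    """Distance from `value` to the set {m * x : m < alpha}."""
    if x > 0:                    # the set is the interval (-infinity, alpha * x)
        return max(0.0, value - alpha * x)
    if x < 0:                    # multiplying by x < 0 flips it to (alpha * x, +infinity)
        return max(0.0, alpha * x - value)
    return abs(value)            # x = 0: the set reduces to {0}

for x in [0.5, 0.01, -0.01, -1e-4, 1e-6]:
    d = dist_to_Ax(f(x) - f(0.0), x)
    print(f"x = {x:>8}: dist / |x| = {d / abs(x):.2e}")
# All ratios are 0, even though f blows up to -infinity as x -> 0+.
```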
In this thesis, three subjects concerning generalized derivatives and their applications in nonsmooth optimization are discussed. The first one is calculus rules and their applications for three generalized derivatives: higher-order variational sets in Chapter 1, higher-order radial derivatives in Chapter 2 and approximations in Chapter 3. In Chapter 1, we establish elements of calculus for variational sets to ensure that they can be used in practice. Firstly, we establish union rules and intersection rules for the two types of variational sets. To get sum rules and Descartes product rules, we propose a definition of proto-variational sets as follows. Let $F : X \to 2^Y$, $(x_0, y_0) \in \mathrm{gr}F$ and $v_1, \dots, v_{m-1} \in Y$. If the upper limit defining $V^m(F, x_0, y_0, v_1, \dots, v_{m-1})$ is a full limit, i.e., the upper limit coincides with the lower limit, then this set is called a proto-variational set of order $m$ of type 1 of $F$ at $(x_0, y_0)$. If the similar coincidence occurs for $W^m$, we say that this set is a proto-variational set of order $m$ of type 2 of $F$ at $(x_0, y_0)$. Then, applying these definitions, sum rules and Descartes product rules are obtained. Next, some …

[...]

… If at least one of $\{M_i\}$ and $\{N_i\}$ is unbounded, say $\{N_i\}$, dividing (20) by $\|N_i\|\,\|x_i - x_0\|$ and passing to the limit, we get
$$N(M(v)) \in \mathrm{cone}\big([T(K, g(x_0))]^*_C - \varphi_2(f(x_0), g(x_0))\big).$$
Consequently, with some $t > 0$, we have
$$\varphi_2(f(x_0), g(x_0)) + N(M(tv)) \in [T(K, g(x_0))]^*_C. \qquad (22)$$
By (21) and (22), the proof is complete. $\Box$

Note that using $\hat f$ and $\hat g$ makes Theorem 3.1 slightly stronger. Of course, we can use $f$ and $g$ if determining their approximations is easier. Theorem 3.1 can be used even in many particular cases considered by known results, when those results are not applicable due to their restrictive assumptions, as we will see in the examples in Section 4.

Now we pass to the weak equilibrium problem (WEP). We need the following modifications of the notions used above for (SEP). For $S \subseteq X$ and a convex cone $C \subseteq Z$, the weak vector polar cone of $S$ wrt $C$ is
$$S^w_C := \{M \in L(X, Z) : Mx \notin -\mathrm{int}C,\ \forall x \in S\}.$$
Let $f : X \to Y$, $\varphi : Y \times X \to Z$ and $x_0 \in S$. The weak $\varphi$-critical cone of $f$ on $S$ at $x_0$ is defined as
$$C^\varphi_w(S, x_0) := \{v \in T(S, x_0) : \varphi_2(f(x_0), x_0)(v) \notin \mathrm{int}C \cup (-\mathrm{int}C)\}.$$

Theorem 3.2. For (WEP), let $x_0 \in \Omega$, let $\hat f$ be continuous at $x_0$, $g(H) \supseteq K$, and let $A_{\hat f}(x_0)$ and $A_{\hat g}(x_0)$ be approximations of $\hat f$ and $\hat g$, respectively, where $A_{\hat g}(x_0)$ is bounded. Let the following assumptions be satisfied.
(i) For each $y$ in a neighbourhood of $f(x_0)$, the map $\varphi(y, \cdot)$ has first and second Fréchet derivatives, denoted by $\varphi_2$ and $\varphi_{22}$, which are jointly continuous (in both variables) at $(f(x_0), g(x_0))$.
(ii) $\varphi_2(\cdot, g(x_0))$ and $\varphi_{22}(\cdot, g(x_0))$ admit approximations at $f(x_0)$, denoted by $(A_\varphi)_1[\varphi_2(f(x_0), g(x_0))]$ and $(A_\varphi)_1[\varphi_{22}(f(x_0), g(x_0))]$, respectively, where $(A_\varphi)_1[\varphi_{22}(f(x_0), g(x_0))]$ is bounded.
If $x_0$ is a solution of (WEP), then each of the following conditions is sufficient for its local uniqueness:
(a) for every $M \in \mathrm{cl}A_{\hat f}(x_0) \cup (A_{\hat f}(x_0)_\infty \setminus \{0\})$, $G \in \mathrm{cl}A_{\hat g}(x_0)$ and $N \in (A_\varphi)_1[\varphi_2(f(x_0), g(x_0))] \cup ((A_\varphi)_1[\varphi_2(f(x_0), g(x_0))]_\infty \setminus \{0\})$, one has $[N(M(v))](G(v)) \in \mathrm{int}C$ for all $v \in T(H, x_0) \setminus \{0\}$ with $G(v) \in C^\varphi_w(K, g(x_0))$;
(b) $K$ is polyhedral and condition (a) is satisfied for all $v \in T(H, x_0) \setminus \{0\}$ with $G(v) \in C^\varphi_w(K, g(x_0))$ and
$$\varphi_2(f(x_0), g(x_0)) + N(M(v)) \in [T(K, g(x_0))]^w_C \cup \big([T(K, g(x_0))]^w_C - \varphi_{22}(f(x_0), g(x_0))G(v)\big).$$

Proof. The proof is similar to that of Theorem 3.1, and we only highlight the main steps and changes.
(a) From an assumed-by-contradiction sequence $\{x_i\}$ of solutions to (WEP) converging to $x_0$, we get some $v \in T(H, x_0)$ and $G \in \mathrm{cl}A_{\hat g}(x_0)$ such that $G(v) \in T(K, g(x_0))$. Instead of (4), we have
$$\varphi_2(f(x_0), g(x_0)) \in [T(K, g(x_0))]^w_C \quad \text{and} \quad \varphi_2(f(x_i), g(x_i)) \in [T(K, g(x_i))]^w_C.$$
Applying the mean value theorem, likewise as in (8) and (9), we obtain
$$\begin{pmatrix} (\varphi_1)_2(f(x_0), \alpha_i^1) \\ (\varphi_2)_2(f(x_0), \alpha_i^2) \\ \cdots \\ (\varphi_l)_2(f(x_0), \alpha_i^l) \end{pmatrix}\big(g(x_i) - g(x_0)\big) \notin -\mathrm{int}C,
\qquad
\begin{pmatrix} (\varphi_1)_2(f(x_i), \beta_i^1) \\ (\varphi_2)_2(f(x_i), \beta_i^2) \\ \cdots \\ (\varphi_l)_2(f(x_i), \beta_i^l) \end{pmatrix}\big(g(x_i) - g(x_0)\big) \notin \mathrm{int}C.$$
By the argument with passing to limits, we arrive at (the counterpart of (10)) $G(v) \in C^\varphi_w(K, g(x_0))$. To see that, for the above obtained $v$ and $G(v)$ and for some $M \in \mathrm{cl}A_{\hat f}(x_0) \cup (A_{\hat f}(x_0)_\infty \setminus \{0\})$ and $N \in \mathrm{cl}(A_\varphi)_1[\varphi_2(f(x_0), g(x_0))] \cup ((A_\varphi)_1[\varphi_2(f(x_0), g(x_0))]_\infty \setminus \{0\})$, one has $[N(M(v))](G(v)) \notin \mathrm{int}C$, we employ the Taylor expansions of $\varphi(f(x_0), \cdot)$ and $\varphi(f(x_i), \cdot)$ at $g(x_0)$ and pass to the limits in the three cases, depending on whether the obtained approximations are bounded or not.
(b) When $K$ is polyhedral, the reasoning is almost the same as for Theorem 3.1, with clear modifications. $\Box$

4 Special Cases and Examples

As equilibrium problems encompass many optimization-related problems, we can derive for them consequences of the results obtained in Section 3. In this section, we concretize our problems to only several important particular cases as examples, and discuss some illustrative examples to see the advantages of the obtained results. If $l = 1$, $C = \mathbb{R}_+$ and $\varphi(y, x) = \langle y, x \rangle$, our two problems (SEP) and (WEP) come down to the (scalar) generalized variational inequality (GVI): find $x_0 \in \Omega$ such that, for all $x \in \Omega$,
$$\langle f(x_0), g(x) - g(x_0) \rangle \ge 0.$$
More specifically, when $g$ is the identity, (GVI) becomes the classical (Stampacchia) variational inequality, denoted by (VI).
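To make the meaning of (VI) concrete, here is a small brute-force sketch (not part of the thesis): it scans a grid of candidates $x_0$ in $K = [0,1]$ and keeps those satisfying $\langle f(x_0), x - x_0\rangle \ge 0$ for every grid point $x$. The map $f(x) = 2x - 1$ is an arbitrary smooth choice, for which the unique solution is $x_0 = 1/2$.

```python
import numpy as np

# Brute-force search for solutions of (VI) on K = [0, 1]:
# x0 solves (VI) iff f(x0) * (x - x0) >= 0 for every x in K.
def f(x):
    return 2.0 * x - 1.0

K = np.linspace(0.0, 1.0, 2001)    # fine grid on K
tol = 1e-9
solutions = [x0 for x0 in K if np.all(f(x0) * (K - x0) >= -tol)]
print("VI solutions found on the grid:", solutions)   # -> [0.5]
```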
Corollary 4.1. Let $x_0 \in \Omega$, let $\hat f$ be continuous at $x_0$, $g(H) \supseteq K$, and let $A_{\hat f}(x_0)$ and $A_{\hat g}(x_0)$ be approximations of $\hat f$ and $\hat g$, respectively, where $A_{\hat g}(x_0)$ is bounded. If $x_0$ is a solution of (GVI), then each of the following conditions is sufficient for the local uniqueness of $x_0$:
(a) for every $M \in \mathrm{cl}A_{\hat f}(x_0) \cup (A_{\hat f}(x_0)_\infty \setminus \{0\})$ and $G \in \mathrm{cl}A_{\hat g}(x_0)$, one has $\langle M(v), G(v) \rangle > 0$ for all $v \in T(H, x_0) \setminus \{0\}$ with
$$G(v) \in C_{(f,g)}(K, g(x_0)) := \{u \in T(K, g(x_0)) : \langle f(x_0), u \rangle = 0\};$$
(b) $K$ is polyhedral and condition (a) is satisfied for all $v \in T(H, x_0) \setminus \{0\}$ with $G(v) \in C_{(f,g)}(K, g(x_0))$ such that $f(x_0) + M(v) \in [T(K, g(x_0))]^*$.

Proof. Since $\varphi(y, x) = \langle y, x \rangle$ for all $x, y \in \mathbb{R}^n$, one has $\varphi_2(y, x)(\cdot) = \langle y, \cdot \rangle$ and $\varphi_{22}(y, x)(\cdot, \cdot) = 0$. The Fréchet derivatives of $\varphi_2(y, x)(\cdot)$ and $\varphi_{22}(y, x)(\cdot, \cdot)$ are approximations. Hence,
$$(A_\varphi)_1[\varphi_2(y, x)](\cdot, \cdot) = \{\langle \cdot, \cdot \rangle\} = \mathrm{cl}(A_\varphi)_1[\varphi_2(y, x)] \cup \big((A_\varphi)_1[\varphi_2(y, x)]_\infty \setminus \{0\}\big)(\cdot, \cdot), \qquad (A_\varphi)_1[\varphi_{22}(y, x)](\cdot, \cdot, \cdot) = \{0\}.$$
The conclusion is implied directly by Theorem 3.1. $\Box$

We illustrate the use of Theorem 3.1 in the following example.

Example 4.1. Consider problem (SEP) with $n = 2$, $m = 1$, $l = 2$, $H = \mathbb{R}^2$, $K = [0, 1] \times [0, 1]$, $C = \mathbb{R}^2_+$,
$$\varphi(y, x) = \big((y^2 + y)(x_1^2 + x_2^2 + x_1 + x_2),\ y^2 + y\big), \qquad f(x) = \begin{cases} \sqrt{x_1} + x_2, & \text{if } x_1 \ge 0, \\[1mm] \dfrac{1}{x_1} + x_2, & \text{if } x_1 < 0, \end{cases} \qquad g(x) = (x_1, x_2).$$
Then $\hat f$ is continuous, though $f$ is not. Direct verifications yield that we can take, for an arbitrary positive $\alpha$, $A_{\hat f}(x_0) = \{(\beta, 1) : \beta > \alpha\}$ and $A_{\hat g}(x_0) = \{I\}$, $I$ being the identity matrix of order 2. Hence $\mathrm{cl}A_{\hat f}(x_0) = \{(\beta, 1) : \beta \ge \alpha\}$, $A_{\hat f}(x_0)_\infty = \{(\gamma, 0) : \gamma \ge 0\}$ and $\mathrm{cl}A_{\hat g}(x_0) = \{I\}$. For $e = (1, 1) \in \mathbb{R}^2$ one gets, for all $u, v \in \mathbb{R}^2$,
$$\varphi_2(y, x)(v) = (y^2 + y)(\langle 2x + e, v \rangle, 0), \qquad \varphi_{22}(y, x)(v)(u) = (y^2 + y)(\langle v, u \rangle, 0).$$
Therefore $\varphi_2(0, (0, 0)) = (0, 0)$, $C^\varphi(K, x_0) = T(K, x_0) = \mathbb{R}^2_+$, and $[T(K, x_0)]^*_C$ consists of the matrices $\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ with $a_{ij} \ge 0$, $i, j = 1, 2$. Also by direct calculations, one has $\varphi_{21}(y, x)(s)(u) = (2y + 1)s(\langle 2x + e, u \rangle, 0)$ for every $s \in \mathbb{R}$ and $u \in \mathbb{R}^2$. Consequently, $\varphi_{21}(0, (0, 0))(s)(u) = s(\langle e, u \rangle, 0)$ and $(A_\varphi)_1[\varphi_2(f(x_0), g(x_0))] = \{\varphi_{21}(0, (0, 0))\}$. On the other hand, $(A_\varphi)_1[\varphi_{22}(f(x_0), g(x_0))] = \{A\}$, where $A(s)(u)(v) = (2s + 1)(\langle u, v \rangle, 0)$. Hence $A_{\hat g}(x_0)$ and $(A_\varphi)_1[\varphi_{22}(f(x_0), g(x_0))]$ are bounded.

The conditions in Theorem 3.1 (b) are satisfied. Indeed, we can check that $\varphi_2(f(x_0), g(x_0)) = \varphi_{22}(f(x_0), g(x_0)) = 0$. Consider arbitrary $G \in \mathrm{cl}A_{\hat g}(x_0)$, $M \in \mathrm{cl}A_{\hat f}(x_0) \cup (A_{\hat f}(x_0)_\infty \setminus \{0\}) = \{(\beta, 1) : \beta \ge \alpha\} \cup \{(\gamma, 0) : \gamma > 0\}$ and $N \in (A_\varphi)_1[\varphi_2(f(x_0), g(x_0))] \cup ((A_\varphi)_1[\varphi_2(f(x_0), g(x_0))]_\infty \setminus \{0\})$. Let $v \in T(H, x_0) \setminus \{0\}$ with $G(v) = v \in C^\varphi(K, x_0) \setminus \{(0, 0)\} = \mathbb{R}^2_+ \setminus \{(0, 0)\}$ and $\varphi_2(f(x_0), g(x_0)) + N(M(v)) = N(M(v)) \in [T(K, (0, 0))]^*_C$. One gets $N(M(v))(G(v)) = N(M(v))(v)$. If $M = (\beta, 1)$ with $\beta \ge \alpha > 0$, then $N(M(v))(v) = (\beta v_1 + v_2)(\langle e, v \rangle, 0) = (\beta v_1 + v_2)(v_1 + v_2, 0) \notin -C$. If $M = (\gamma, 0)$ with $\gamma > 0$, then $N(M(v))(v) = \gamma v_1(\langle e, v \rangle, 0) = \gamma v_1(v_1 + v_2, 0) \notin -C$. According to Theorem 3.1, $x_0 = (0, 0)$ is a locally unique solution of (SEP). (Checking directly, we can also see that $x_0 = (0, 0)$ is a locally unique solution.)

However, since $f$ is not continuous at $x_0$, we cannot apply the earlier results in [25] to this situation.
Now, we apply Corollary 4.1 to a case of (VI).

Example 4.2. Let us consider (VI) with $m = n = 1$, $K = [0, 1]$ and
$$f(x) = \begin{cases} \sqrt{x}, & \text{if } x \ge 0, \\[1mm] \dfrac{1}{x}, & \text{if } x < 0. \end{cases}$$
Then $f$ is infinitely discontinuous at $0$. Hence, the results in [25] cannot be employed. But $\hat f : [0, 1] \to \mathbb{R}$ is continuous, $A_{\hat f}(x_0) = \,]\alpha, +\infty[$ for any $\alpha > 0$, and the recession cone is $A_{\hat f}(x_0)_\infty = [0, +\infty[$. The critical cone is $C_f(K, 0) = T(K, 0) = \mathbb{R}_+$. Moreover, every element of $\mathrm{cl}A_{\hat f}(x_0) \cup (A_{\hat f}(x_0)_\infty \setminus \{0\})$ is strictly positive on $C_f(K, 0) \setminus \{0\}$. Therefore, by Corollary 4.1, we conclude that $x_0 = 0$ is a locally unique solution of (VI).
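The same brute-force check as in the earlier (VI) sketch can be run on the data of Example 4.2; this is only a numerical illustration, and the grid size is an arbitrary choice.

```python
import numpy as np

# Example 4.2 data: f(x) = sqrt(x) on K = [0, 1] (the restriction f^ of f to K).
# Any x0 > 0 fails the test f(x0) * (x - x0) >= 0 at x = 0, so only x0 = 0 survives.
K = np.linspace(0.0, 1.0, 2001)
tol = 1e-9
solutions = [x0 for x0 in K if np.all(np.sqrt(x0) * (K - x0) >= -tol)]
print("VI solutions found on the grid:", solutions)   # -> [0.0]
```

This agrees with the conclusion of Corollary 4.1 that $x_0 = 0$ is locally unique.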
Turning now to a result for weak problems as an example, we have the following immediate consequence of Theorem 3.2 for the classical weak vector equilibrium problem, i.e., the case with $n = m$, $H = K$, $f = g$ being the identity and $\varphi(x, x) = 0$ for all $x \in \mathbb{R}^n$.

Corollary 4.2. Consider the classical weak vector equilibrium problem. Let the following assumptions be fulfilled.
(i) For each $y$ in a neighbourhood of $x_0$, the map $\varphi(y, \cdot)$ has first and second Fréchet derivatives, denoted by $\varphi_2$ and $\varphi_{22}$, which are jointly continuous (in both variables) at $(x_0, x_0)$.
(ii) $\varphi_2(\cdot, x_0)$ and $\varphi_{22}(\cdot, x_0)$ have approximations at $x_0$, denoted by $(A_\varphi)_1[\varphi_2(x_0, x_0)]$ and $(A_\varphi)_1[\varphi_{22}(x_0, x_0)]$, respectively, with the latter being bounded.
If $x_0$ is a solution, then each of the following conditions is sufficient for its local uniqueness:
(a) for every $N \in \mathrm{cl}(A_\varphi)_1[\varphi_2(x_0, x_0)] \cup ((A_\varphi)_1[\varphi_2(x_0, x_0)]_\infty \setminus \{0\})$, one has $[N(v)](v) \in \mathrm{int}C$ for all $v \in C^\varphi_1(K, x_0) \setminus \{0\}$;
(b) $K$ is polyhedral and, for every $N \in \mathrm{cl}(A_\varphi)_1[\varphi_2(x_0, x_0)] \cup ((A_\varphi)_1[\varphi_2(x_0, x_0)]_\infty \setminus \{0\})$, one has $[N(v)](v) \in \mathrm{int}C$ for all $v \in C^\varphi_1(K, x_0) \setminus \{0\}$ with
$$\varphi_2(x_0, x_0) + N(v) \in [T(K, x_0)]^w_C \cup \big([T(K, x_0)]^w_C - \varphi_{22}(x_0, x_0)\big).$$

We illustrate this corollary by the following example.

Example 4.3. Consider the classical weak vector equilibrium problem with $n = m = 1$, $l = 2$, $C = \mathbb{R}^2_+$, $K = [0, 1]$ and
$$\varphi(y, x) = \big(\sqrt{y}\,(x^3 + x - y^3 - y),\ \sqrt{y}\,(x^3 + x - y^3 - y)\big), \quad \text{for } x, y \in \mathbb{R}.$$
Observe that $0$ is a solution. Let $\alpha_1$ and $\alpha_2$ be positive and fixed. One has $(A_\varphi)_1[\varphi_2(0, 0)] = \{(\beta_1, \beta_2) : \beta_i > \alpha_i,\ i = 1, 2\}$, $((A_\varphi)_1[\varphi_2(0, 0)])_\infty = \{(\gamma_1, \gamma_2) : \gamma_i \ge 0,\ i = 1, 2\}$, $(A_\varphi)_1[\varphi_{22}(0, 0)] = \{0\}$, $T(K, 0) = C^\varphi_1(K, 0) = [0, \infty[$ and $[T(K, 0)]^w_C = \{(a, b) : a \ge 0 \text{ or } b \ge 0\}$. It is not hard to check that all the assumptions of Corollary 4.2 (b) are satisfied at $x_0 = 0$. Checking directly, we see that the problem has only two locally unique solutions in $K$.

Conclusion

Observing that, for equilibrium problems, there have been no contributions to the uniqueness of solutions for vector problems, we consider this topic for a strong and a weak vector equilibrium problem. Moreover, our major tool is the approximation notion, which is a kind of generalized derivative. This notion has an important advantage: even a map with an infinite discontinuity at a point can admit an approximation at this point. Hence, when applied to particular cases, our sufficient conditions for the local uniqueness of solutions in terms of approximations improve recent existing results. Note also that, though this kind of generalized derivative has been employed in studies of some topics like metric regularity, optimality conditions, etc., it is used for the first time in investigating the uniqueness of solutions in this paper.

References

1. Bianchi, M., Schaible, S.: Equilibrium problems under generalized convexity and generalized monotonicity. J. Global Optim. 30, 124-134 (2004)
2. Hai, N.X., Khanh, P.Q.: Existence of solutions to general quasi-equilibrium problems and applications. J. Optim. Theory Appl. 133, 317-327 (2007)
3. Li, S.J., Teo, K.L., Yang, X.Q.: On generalized vector quasi-equilibrium problems. Pacific J. Optim. 3, 301-307 (2007)
4. Hai, N.X., Khanh, P.Q., Quan, N.H.: On the existence of solutions to quasivariational inclusion problems. J. Global Optim. 45, 565-581 (2009)
5. Anh, L.Q., Khanh, P.Q.: Semicontinuity of the solution sets to parametric quasiequilibrium problems. J. Math. Anal. Appl. 294, 699-711 (2004)
6. Li, S.J., Li, X.B., Teo, K.L.: The Hölder continuity of solutions to generalized vector equilibrium problems. European J. Oper. Res. 199, 334-338 (2009)
7. Anh, L.Q., Khanh, P.Q.: Uniqueness and Hölder continuity of the solution to multivalued equilibrium problems in metric spaces. J. Global Optim. 37, 449-465 (2007)
8. Tuan, L.A., Lee, G.M., Sach, P.H.: Upper semicontinuity results for the solution mapping of a mixed parametric generalized vector quasiequilibrium problem with moving cones. J. Global Optim. 47, 639-660 (2010)
9. Lignola, M.B., Morgan, J.: α-well-posedness for Nash equilibria and for optimization problems with Nash equilibrium constraints. J. Global Optim. 36, 439-459 (2006)
10. Fang, Y.P., Hu, R., Huang, N.J.: Well-posedness for equilibrium problems and for optimization problems with equilibrium constraints. Comput. Math. Appl. 55, 89-100 (2008)
11. Anh, L.Q., Khanh, P.Q., Van, D.T.M., Yao, J.C.: Well-posedness for vector quasiequilibria. Taiwanese J. Math. 13, 713-737 (2009)
12. Ceng, L.C., Petruşel, A., Yao, J.C.: Iterative approaches to solving equilibrium problems and fixed-point problems of infinitely many nonexpansive mappings. J. Optim. Theory Appl. 143, 37-58 (2009)
13. Peng, J.W., Yao, J.C.: Some new extragradient-like methods for generalized equilibrium problems, fixed point problems and variational inequality problems. Optim. Methods Softw. 25, 677-698 (2010)
14. Martinez-Legaz, J.E., Sosa, W.: Duality for equilibrium problems. J. Global Optim. 35, 311-319 (2006)
15. Khanh, P.Q., Tung, N.M.: Optimality and duality for nonsmooth set-valued vector equilibrium problems. Submitted for publication
16. Mordukhovich, B.S.: Multiobjective optimization with equilibrium constraints. Math. Prog. B 117, 331-354 (2009)
17. Mordukhovich, B.S.: Characterizations of linear suboptimality for mathematical programs with equilibrium constraints. Math. Prog. B 120, 261-283 (2009)
18. Khanh, P.Q., Tung, L.T.: First and second-order optimality conditions using approximations for vector equilibrium problems with constraints. Submitted for publication
19. Cottle, R.W.: Nonlinear programs with positively bounded Jacobians. SIAM J. Appl. Math. 14, 147-158 (1966)
20. Cottle, R.W., Stone, R.E.: On the uniqueness of solutions to linear complementarity problems. Math. Prog. 27, 191-213 (1983)
21. Kyparisis, J.: Uniqueness and differentiability of solutions of parametric nonlinear complementarity problems. Math. Prog. 36, 105-113 (1986)
22. Tawhid, M.A.: On the local uniqueness of solutions of variational inequalities under H-differentiability. J. Optim. Theory Appl. 113, 149-164 (2002)
23. Luc, D.T.: Fréchet approximate Jacobian and local uniqueness of solutions in variational inequalities. J. Math. Anal. Appl. 268, 629-646 (2002)
24. Luc, D.T., Noor, M.A.: Local uniqueness of solutions of general variational inequalities. J. Optim. Theory Appl. 117, 103-119 (2003)
25. Khanh, P.Q., Luc, D.T., Tuan, N.D.: Local uniqueness of solutions for equilibrium problems. Adv. Ineq. Var. 15, 127-145 (2006)
26. Jourani, A., Thibault, L.: Approximations and metric regularity in mathematical programming in Banach spaces. Math. Oper. Res. 18, 390-400 (1993)
27. Khanh, P.Q., Tuan, N.D.: First and second-order optimality conditions using approximations for nonsmooth vector optimization in Banach spaces. J. Optim. Theory Appl. 136, 238-265 (2006)
28. Khanh, P.Q., Tuan, N.D.: First and second-order approximations as derivatives of mappings in optimality conditions for nonsmooth vector optimization. Appl. Math. Optim. 58, 147-166 (2008)
29. Khanh, P.Q., Tuan, N.D.: Optimality conditions without continuity in multivalued optimization using approximations as generalized derivatives. In: Recent Contrib. Nonconvex Optim. (S.K. Mishra, ed.), Springer, 47-61 (2011)
30. Anh, L.Q., Khanh, P.Q.: On the Hölder continuity of solutions to parametric multivalued vector equilibrium problems. J. Math. Anal. Appl. 321, 308-315 (2006)
31. Gowda, M.S., Ravindran, G.: Algebraic univalence theorems for nonsmooth functions. J. Math. Anal. Appl. 252, 917-935 (2000)
32. Jeyakumar, V., Luc, D.T.: Nonsmooth Vector Functions and Continuous Optimization. Springer (2008)

Conclusion

In this thesis, three subjects in nonsmooth optimization are discussed. The first one is calculus rules for three generalized derivatives: higher-order variational sets (defined by Khanh and Tuan in 2008) in Chapter 1, higher-order radial derivatives (proposed in this thesis) in Chapter 2, and approximations (introduced by Jourani and Thibault in 1993) in Chapter 3. The second subject is optimality conditions for nonsmooth optimization problems. Optimality conditions for some types of solutions of vector constrained minimization, using higher-order variational sets and higher-order radial derivatives and their calculus, were presented in Chapters 1 and 2. Using first and second-order approximations, optimality conditions for solutions of vector equilibrium problems with constraints and of vector fractional programming were established in Chapters 3 and 4. The third topic is establishing sufficient conditions for the uniqueness of solutions to nonsmooth vector equilibrium problems in terms of approximations used as derivatives, in Chapter 5. We observe several possible directions for developing our results in future research. The first is considering optimality conditions for set-valued equilibrium problems, using variational sets and set-valued first and second-order approximations. The second is using set-valued derivatives to investigate quantitative stability of solutions of parametrized optimization problems. The last is that variational sets can be generalized by using radial and asymptotic properties.

LIST OF THE PAPERS RELATED TO THE THESIS

Journals
[1] Anh N.L.H., Khanh P.Q., Tung L.T. (2011), Variational sets: calculus and applications to nonsmooth vector optimization, Nonlinear Analysis 74, pp. 2358-2379.
[2] Anh N.L.H., Khanh P.Q., Tung L.T. (2011), Higher-order radial derivatives and optimality conditions in nonsmooth vector optimization, Nonlinear Analysis 74, pp. 7365-7379.
[3] Khanh P.Q., Tung L.T., First and second-order optimality conditions using approximations for vector equilibrium problems with constraints, submitted to Journal of Global Optimization.
[4] Khanh P.Q., Tung L.T., First and second-order optimality conditions for multiobjective fractional programming, submitted to Journal of Optimization Theory and Applications.
[5] Khanh P.Q., Tung L.T., Local uniqueness of solutions to vector equilibrium problems using approximations, submitted to Journal of Optimization Theory and Applications.
[6] Khanh P.Q., Tung L.T., Higher-order sensitivity analysis in nonsmooth vector optimization, submitted to Journal of Optimization Theory and Applications.

Conferences
[1] Khanh P.Q., Tung L.T., Variational sets and optimality conditions for firm and Benson efficiency in set-valued nonsmooth vector optimization, 7th Workshop on Optimization and Scientific Computing, Ba Vi, Ha Noi, April 22-24, 2009.
[2] Khanh P.Q., Tung L.T., Local uniqueness of solutions to vector equilibrium problems using approximations, CIMPA-UNESCO-VIETNAM School "Variational Inequalities and Related Problems", Ha Noi, May 10-21, 2010.
[3] Khanh P.Q., Tung L.T., First and second-order optimality conditions using approximations for vector equilibrium problems with constraints, 8th Vietnam-Korea Workshop on Mathematical Optimization Theory and Applications, Da Lat, December 8-11, 2011.

[...]

(fragment of the table of contents)
… 2 Higher-order radial derivatives 27
3 Optimality conditions 35
References 38
Chapter 3. First and second-order optimality conditions using approximations for vector equilibrium problems with constraints 40
1 Introduction 41
2 Preliminaries 42
3 First-order optimality conditions 46
3.1 Necessary optimality conditions 46
3.2 Sufficient optimality conditions 49
4 Second-order optimality conditions 54
4.1 The first-order …

… mappings, which were recently introduced in Khanh and Tuan (2008) [1,2] to replace generalized derivatives in establishing optimality conditions in nonsmooth optimization. Most of the usual calculus rules, from chain and sum rules to rules for unions, intersections, products and other operations on mappings, are established. Direct applications in stability and optimality conditions for various vector optimization …

(fragment of the table of contents)
3 Calculus of variational sets 5
3.1 Algebraic and set operations 5
3.2 Compositions 9
3.3 More calculus 18
4 Applications 21
4.1 Variational sets of solution maps to variational inequalities 21
4.2 Optimality conditions for weak solutions to vector optimization 22
References 23
Chapter 2. Higher-order radial derivatives and optimality conditions in nonsmooth vector optimization 24
1 Introduction and preliminaries …

… $f$ is continuous at $x_0$, for any $m \ge 1$. The second subject, discussed in the thesis, is optimality conditions for nonsmooth optimization problems. Optimality conditions for some types of solutions to special vector constrained minimization problems, using calculus rules of higher-order variational sets and higher-order radial derivatives, have been obtained in Chapters 1 and 2. We also investigate in Chapter … optimality conditions for many kinds of solutions to nonsmooth vector equilibrium problems with functional constraints. Firstly, using first approximations as generalized derivatives, we establish necessary conditions for local weakly efficient solutions of nonsmooth vector equilibrium problems with functional constraints, defined as follows. If $\mathrm{int}C \neq \emptyset$, a vector $x_0 \in \Omega$ is said to be a local weak solution of …

… 3, 4 and 5, the approximation notion, as a kind of generalized derivative, is used to get optimality conditions for (VP) and (FP) and sufficient conditions for local uniqueness of solutions for (SEP) and (WEP). These results have the important advantage that even a map with an infinite discontinuity at a point can be handled and, in some cases, these results can be used in finite dimensional …

… $x_0$, then second-order approximations $(A_{F_{x_0}}(x_0), B_{F_{x_0}}(x_0))$ and $(A_g(x_0), B_g(x_0))$ of $F_{x_0}$ and $g$ are used to obtain second-order optimality conditions. In the optimality conditions, we have to impose the assumption that the first and second-order approximations are asymptotically pointwise compact, defined as follows. Let $L(X, Y)$ stand for the space of the continuous linear mappings from $X$ into $Y$, and $B(X, X, \dots$

[...]

… during the last three years.

References (fragment)
[Allali and Amahroq 1997] K. Allali, T. Amahroq, Second-Order Approximations and Primal and Dual Necessary Optimality Conditions, Optimization 40 (1997), pp. 229-246.
[Anh and Khanh 2011] N.L.H. Anh, P.Q. Khanh, Optimality conditions in set-valued optimization using radial sets and derivatives, submitted for publication.
[Aubin 1981] J.-P. Aubin, Contingent derivatives of …

… particular cases, our sufficient conditions for the local uniqueness of solutions in terms of approximations improve recent existing results. Note also that, though this kind of generalized derivative has been employed in studies of some topics like metric regularity, optimality conditions, etc., it is used for the first time in investigating the uniqueness of solutions, in Chapter 5. As equilibrium …