Karush-Kuhn-Tucker necessary conditions for local Pareto minima of constrained multiobjective programming problems


DO VAN LUU
Department of Mathematics and Informatics, Thang Long University

Abstract. Under a constraint qualification of Abadie type, we establish necessary efficiency conditions described by inconsistent inequalities, Karush-Kuhn-Tucker necessary conditions and strong Karush-Kuhn-Tucker necessary conditions for local Pareto minima of nonsmooth multiobjective programming problems involving inequality, equality and set constraints in Banach spaces, in terms of convexificators. Note that the constraint functions of the problem under consideration are not necessarily continuous, except for the inactive constraints.

1 Introduction

Karush-Kuhn-Tucker conditions for nonsmooth multiobjective programming problems are a significant topic in optimization. If the Lagrange multipliers corresponding to all the components of the objective function are positive, the conditions are called strong Karush-Kuhn-Tucker conditions. Lagrange multiplier rules in terms of various subdifferentials for nonsmooth optimization problems have been studied by many authors (see, e.g., [4], [6-16], and the references therein). The notion of a convex and compact convexificator was first introduced by Demyanov [2]; it generalizes the upper convex and lower concave approximations of [3]. The notions of nonconvex closed convexificator and approximate Jacobian were introduced by Jeyakumar and Luc in [6] and [7], respectively; they provide good calculus rules for establishing necessary optimality conditions in nonsmooth optimization. The notion of convexificator generalizes several known subdifferentials, such as those of Clarke [1], Michel-Penot [15] and Mordukhovich [16].

In this paper, under constraint qualifications of Abadie type, we establish necessary efficiency conditions described by inconsistent inequalities, Karush-Kuhn-Tucker necessary conditions and strong Karush-Kuhn-Tucker necessary conditions for local Pareto minima of nonsmooth multiobjective programming problems involving inequality, equality and set constraints in Banach spaces, in terms of convexificators.

2 Notions and definitions

Let X be a Banach space, X^* the topological dual of X, and f an extended-real-valued function defined on X. The lower (resp. upper) Dini directional derivative of f at x ∈ X in a direction v ∈ X is defined by

\[ f^-(x; v) = \liminf_{t \downarrow 0} \frac{f(x + tv) - f(x)}{t} \qquad \Big(\text{resp. } f^+(x; v) = \limsup_{t \downarrow 0} \frac{f(x + tv) - f(x)}{t}\Big). \]

In case f^+(x; v) = f^-(x; v), we denote their common value by f'(x; v), which is called the Dini derivative of f at x in the direction v. The function f is Dini differentiable at x if its Dini derivative at x exists in all directions.

Recall [6] that the function f is said to have an upper (resp. lower) convexificator ∂^*f(x) (resp. ∂_*f(x)) at x if ∂^*f(x) (resp. ∂_*f(x)) ⊆ X^* is weak^* closed and

\[ f^-(x; v) \le \sup_{\xi \in \partial^* f(x)} \langle \xi, v \rangle \quad (\forall v \in X) \qquad \Big(\text{resp. } f^+(x; v) \ge \inf_{\xi \in \partial_* f(x)} \langle \xi, v \rangle \quad (\forall v \in X)\Big). \]

A weak^* closed set ∂^*f(x) ⊆ X^* is said to be a convexificator of f at x if it is both an upper and a lower convexificator of f at x. Note that upper and lower convexificators are not unique.
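To make the Dini derivatives and the upper-convexificator inequality concrete, the following minimal Python sketch (an illustration added here, not part of the paper) approximates f^-(0; v) and f^+(0; v) for the nonsmooth function f(x) = |x| by difference quotients along a shrinking sequence t_n ↓ 0, and checks the inequality f^-(0; v) ≤ sup_{ξ ∈ ∂^*f(0)} ⟨ξ, v⟩ for ∂^*f(0) = [-1, 1], the Clarke subdifferential of |·| at 0, which is a convexificator there; the step sizes and the grid of directions are arbitrary choices.

```python
import numpy as np

def dini_derivatives(f, x, v, n_terms=60):
    """Approximate (f^-(x; v), f^+(x; v)) by difference quotients along t_n = 2^-k."""
    ts = 2.0 ** (-np.arange(10, 10 + n_terms))
    quotients = np.array([(f(x + t * v) - f(x)) / t for t in ts])
    tail = quotients[n_terms // 2:]            # late terms approximate liminf/limsup
    return tail.min(), tail.max()

f = abs                                        # f(x) = |x| is nonsmooth at 0
upper_convexificator = (1.0, -1.0)             # endpoints of [-1, 1], the Clarke subdifferential

for v in (-2.0, -0.5, 0.0, 0.5, 2.0):
    lower, upper = dini_derivatives(f, 0.0, v)
    support = max(xi * v for xi in upper_convexificator)   # sup_{xi in [-1,1]} <xi, v> = |v|
    assert lower <= support + 1e-9             # upper-convexificator inequality f^-(0; v) <= sup
    print(f"v = {v:+.1f}:  f^-(0; v) ≈ {lower:+.4f},  f^+(0; v) ≈ {upper:+.4f},  sup = {support:+.4f}")
```

For |·| the two Dini derivatives coincide with |v| and the upper-convexificator inequality holds with equality, which the printed values reflect.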
For a locally Lipschitz function f, the Clarke subdifferential and the Michel-Penot subdifferential are convexificators of f at x (see [6]). The function f is said to have an upper semi-regular convexificator ∂^*f(x) at x if ∂^*f(x) ⊆ X^* is weak^* closed and

\[ f^+(x; v) \le \sup_{\xi \in \partial^* f(x)} \langle \xi, v \rangle \quad (\forall v \in X). \tag{1} \]

Following [6], if equality holds in (1), then ∂^*f(x) is called an upper regular convexificator. For a function that is locally Lipschitz and regular in the sense of Clarke [1], the Clarke subdifferential is an upper regular convexificator (see [4]).

Example 2.1. Let f : R → R be defined by

\[ f(x) := \begin{cases} x, & \text{if } x \in \mathbb{Q} \cap [0, +\infty), \\ x^4 - 4x^3 + 4x^2, & \text{if } x \in \mathbb{Q} \cap (-\infty, 0], \\ 0, & \text{otherwise}, \end{cases} \]

where Q denotes the set of rationals. Then

\[ f^+(0; v) = \begin{cases} v, & \text{if } v \ge 0, \\ 0, & \text{if } v < 0, \end{cases} \qquad\qquad f^-(0; v) = 0 \quad (\forall v \in \mathbb{R}). \]

The set {0, 1} is an upper semi-regular convexificator of f at x = 0, hence also an upper convexificator of f at 0. The set {0} is a lower convexificator of f at 0.
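The claims of Example 2.1 can be checked numerically. Since floating-point arithmetic cannot distinguish rational from irrational points, the sketch below (an illustration added here, not part of the paper) evaluates the two branch formulas of f separately — the "rational" branch r(x) = x for x ≥ 0 and r(x) = x^4 - 4x^3 + 4x^2 for x ≤ 0, and the "irrational" branch, which is identically 0 — and combines their difference quotients to approximate f^+(0; v) and f^-(0; v).

```python
import numpy as np

def rational_branch(x):
    """Branch formula of f on the rationals."""
    return x if x >= 0 else x**4 - 4 * x**3 + 4 * x**2

def dini_bounds(v, n_terms=50):
    """Approximate (f^-(0; v), f^+(0; v)) by combining both branches' difference quotients."""
    ts = 2.0 ** (-np.arange(10, 10 + n_terms))
    quotients = []
    for t in ts:
        quotients.append(rational_branch(t * v) / t)   # sequences along which t*v is rational
        quotients.append(0.0)                          # sequences along which t*v is irrational
    tail = np.array(quotients[len(quotients) // 2:])
    return tail.min(), tail.max()

for v in (-2.0, -1.0, 0.0, 1.0, 2.0):
    lower, upper = dini_bounds(v)
    print(f"v = {v:+.1f}:  f^-(0; v) ≈ {lower:+.4f},  f^+(0; v) ≈ {upper:+.4f},"
          f"  expected (0, {max(v, 0.0):+.4f})")
    # Defining inequality (1) for the upper semi-regular convexificator {0, 1}:
    assert upper <= max(0.0, 1.0 * v) + 1e-6
```

The printed values agree with f^+(0; v) = max(v, 0) and f^-(0; v) = 0, and the final assertion checks the defining inequality (1) for the convexificator {0, 1}.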
Recall [1] that the contingent cone to a set C ⊆ X at a point x̄ ∈ C is defined as

\[ K(C; \bar{x}) = \{ v \in X : \exists\, v_n \to v,\ \exists\, t_n \downarrow 0 \text{ such that } \bar{x} + t_n v_n \in C,\ \forall n \}. \]

The cone of sequential linear directions (or sequential radial cone) to C at x̄ ∈ C is

\[ Z(C; \bar{x}) = \{ v \in X : \exists\, t_n \downarrow 0 \text{ such that } \bar{x} + t_n v \in C,\ \forall n \}. \]

Note that both cones are nonempty and Z(C; x̄) ⊆ K(C; x̄).

3 Karush-Kuhn-Tucker necessary conditions for local Pareto minima

Let f, g, h be mappings from a Banach space X into R^m, R^n, R^ℓ, respectively, and let C be a subset of X. Then f, g, h are of the form f = (f_1, ..., f_m), g = (g_1, ..., g_n), h = (h_1, ..., h_ℓ), where f_1, ..., f_m, g_1, ..., g_n, h_1, ..., h_ℓ are extended-real-valued functions defined on X. For simplicity, we set I = {1, ..., n}, J = {1, ..., m} and L = {1, ..., ℓ}. We shall be concerned with the following multiobjective programming problem:

\[ \text{(MP)} \qquad \min f(x) \quad \text{s.t.} \quad x \in M := \{ x \in C : g_i(x) \le 0,\ i \in I;\ h_j(x) = 0,\ j \in L \}. \]

For x̄ ∈ M we set I(x̄) = {i ∈ I : g_i(x̄) = 0}, the set of active inequality constraints at x̄.

Recall that a point x̄ ∈ M is said to be a local Pareto minimizer of Problem (MP) if there exists a number δ > 0 such that there is no x ∈ M ∩ B(x̄; δ) satisfying

\[ f_k(x) \le f_k(\bar{x}) \quad (\forall k \in J), \qquad f_s(x) < f_s(\bar{x}) \ \text{ for at least one } s \in J, \]

where B(x̄; δ) stands for the open ball of radius δ around x̄.

For x̄ ∈ X and s ∈ J, we set

\[ Q_s(\bar{x}) = \{ x \in C : f_k(x) \le f_k(\bar{x})\ (\forall k \in J, k \ne s),\ g_i(x) \le 0\ (\forall i \in I(\bar{x})),\ h_j(x) = 0\ (\forall j \in L) \}. \]

If h_j is Dini differentiable at x̄ for all j ∈ L, we put

\[ C(Q_s(\bar{x}); \bar{x}) = \{ v \in Z(C; \bar{x}) : f_k^-(\bar{x}; v) \le 0\ (\forall k \in J, k \ne s),\ g_i^-(\bar{x}; v) \le 0\ (\forall i \in I(\bar{x})),\ h_j'(\bar{x}; v) = 0\ (\forall j \in L) \}. \]

The relationship between Z(Q_s(x̄); x̄) and C(Q_s(x̄); x̄) is shown in the following result.

Theorem 3.1. Let x̄ ∈ M and let h_j be Dini differentiable at x̄ for all j ∈ L. Then, for every s ∈ J,

\[ Z(Q_s(\bar{x}); \bar{x}) \subseteq C(Q_s(\bar{x}); \bar{x}). \]

Proof. For v ∈ Z(Q_s(x̄); x̄), there exists t_n ↓ 0 such that x̄ + t_n v ∈ Q_s(x̄). Hence x̄ + t_n v ∈ C and

\[ f_k(\bar{x} + t_n v) \le f_k(\bar{x}) \ (\forall k \in J, k \ne s), \quad g_i(\bar{x} + t_n v) \le 0 = g_i(\bar{x}) \ (\forall i \in I(\bar{x})), \quad h_j(\bar{x} + t_n v) = 0 \ (\forall j \in L). \]

It follows that v ∈ Z(C; x̄) and

\[ f_k^-(\bar{x}; v) \le \liminf_{n \to +\infty} \frac{f_k(\bar{x} + t_n v) - f_k(\bar{x})}{t_n} \le 0 \quad (\forall k \in J, k \ne s), \]
\[ g_i^-(\bar{x}; v) \le \liminf_{n \to +\infty} \frac{g_i(\bar{x} + t_n v) - g_i(\bar{x})}{t_n} \le 0 \quad (\forall i \in I(\bar{x})), \]
\[ h_j'(\bar{x}; v) = \lim_{n \to +\infty} \frac{h_j(\bar{x} + t_n v) - h_j(\bar{x})}{t_n} = 0 \quad (\forall j \in L). \]

Consequently, v ∈ C(Q_s(x̄); x̄), and the result follows. □

To derive Karush-Kuhn-Tucker necessary conditions for local Pareto minima of Problem (MP), we introduce the following constraint qualification of Abadie type:

\[ \text{(CQ}_s\text{)} \qquad C(Q_s(\bar{x}); \bar{x}) \subseteq Z(Q_s(\bar{x}); \bar{x}). \]

A necessary condition for local Pareto minima of (MP) can be stated as follows.

Theorem 3.2. Let x̄ be a local Pareto minimum of (MP), and let T be an arbitrary nonempty closed convex subcone of Z(C; x̄) with vertex at the origin. Assume that the constraint qualification (CQ_s) holds for some s ∈ J, the function h_j is Dini differentiable at x̄ for all j ∈ L, and the function g_i is continuous at x̄ for all i ∉ I(x̄). Then the following system has no solution v ∈ T:

\[ f_s^+(\bar{x}; v) < 0, \tag{2} \]
\[ f_k^-(\bar{x}; v) \le 0 \quad (\forall k \in J, k \ne s), \tag{3} \]
\[ g_i^-(\bar{x}; v) \le 0 \quad (\forall i \in I(\bar{x})), \tag{4} \]
\[ h_j'(\bar{x}; v) = 0 \quad (\forall j \in L). \tag{5} \]

Proof. Contrary to the conclusion, suppose that there exists v_0 ∈ T satisfying (2)-(5). Then v_0 ∈ C(Q_s(x̄); x̄), and hence v_0 ∈ Z(Q_s(x̄); x̄), since (CQ_s) holds. Therefore, there exists a sequence t_n ↓ 0 such that x̄ + t_n v_0 ∈ Q_s(x̄), and so x̄ + t_n v_0 ∈ C and

\[ f_k(\bar{x} + t_n v_0) \le f_k(\bar{x}) \ (\forall k \in J, k \ne s), \quad g_i(\bar{x} + t_n v_0) \le 0 \ (\forall i \in I(\bar{x})), \quad h_j(\bar{x} + t_n v_0) = 0 \ (\forall j \in L). \]

For i ∉ I(x̄) one has g_i(x̄) < 0. In view of the continuity of g_i (i ∉ I(x̄)), there is a natural number N such that for all n ≥ N, g_i(x̄ + t_n v_0) ≤ 0 (∀i ∉ I(x̄)). Hence, for all n ≥ N, g_i(x̄ + t_n v_0) ≤ 0 (∀i ∈ I), and so x̄ + t_n v_0 ∈ M. Since x̄ is a local Pareto minimum of (MP), it follows that for all n ≥ N, f_s(x̄ + t_n v_0) ≥ f_s(x̄), which leads to

\[ f_s^+(\bar{x}; v_0) \ge \limsup_{n \to +\infty} \frac{f_s(\bar{x} + t_n v_0) - f_s(\bar{x})}{t_n} \ge 0. \]

This contradicts (2), and the proof is complete. □
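Theorem 3.2 can be visualized on a small smooth example (chosen here purely for illustration; it is not taken from the paper). Take X = C = R², f_1(x) = x_1, f_2(x) = x_2 and the single constraint g(x) = 1 - x_1 - x_2 ≤ 0, with no equality constraints. Every feasible point on the line x_1 + x_2 = 1 is a local Pareto minimum; take x̄ = (0.5, 0.5), so I(x̄) = {1}, Z(C; x̄) = R², and we may take T = R². Since the data are smooth, all Dini derivatives reduce to ⟨gradient, v⟩, and the sketch below checks by random sampling that the system (2)-(5) with s = 1 admits no solution.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(100_000, 2))          # random directions v in R^2 (here T = R^2)

grad_f1 = np.array([1.0, 0.0])             # objective singled out in (2), s = 1
grad_f2 = np.array([0.0, 1.0])
grad_g  = np.array([-1.0, -1.0])           # gradient of g(x) = 1 - x1 - x2, active at x_bar

solutions = (V @ grad_f1 < 0) & (V @ grad_f2 <= 0) & (V @ grad_g <= 0)
print("directions satisfying (2)-(5):", int(solutions.sum()))   # expected: 0 by Theorem 3.2
```

Indeed, v_1 < 0 and v_2 ≤ 0 force v_1 + v_2 < 0, which is incompatible with ⟨∇g(x̄), v⟩ = -v_1 - v_2 ≤ 0.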
To derive necessary conditions for efficiency of Problem (MP), we introduce the following assumption.

Assumption 3.1. There exists an index s ∈ J such that the function f_s admits a nonempty bounded upper semi-regular convexificator ∂^*f_s(x̄) at x̄; for every k ∈ J, k ≠ s, and every i ∈ I(x̄), the functions f_k and g_i admit upper convexificators ∂^*f_k(x̄) and ∂^*g_i(x̄) at x̄, respectively; all the functions g_i (i ∉ I(x̄)) are continuous at x̄; all the functions h_j (j ∈ L) are Gâteaux differentiable at x̄ with Gâteaux derivatives ∇_G h_j(x̄).

For s ∈ J and a nonempty closed convex subcone T of Z(C; x̄), we set

\[ H_T^s(\bar{x}) = \Big\{ \mathrm{co}\,\partial^* f_s(\bar{x}) + \sum_{k \in J, k \ne s} \lambda_k \,\mathrm{co}\,\partial^* f_k(\bar{x}) + \sum_{i \in I(\bar{x})} \mu_i \,\mathrm{co}\,\partial^* g_i(\bar{x}) + \sum_{j \in L} \gamma_j \nabla_G h_j(\bar{x}) + T^0 : \lambda_k \ge 0\ (\forall k \in J, k \ne s),\ \mu_i \ge 0\ (\forall i \in I(\bar{x})),\ \gamma_j \in \mathbb{R}\ (\forall j \in L) \Big\}, \]

where co indicates the convex hull and T^0 the polar cone of T.

We are now in a position to formulate a Karush-Kuhn-Tucker necessary condition for a local Pareto minimum of Problem (MP).

Theorem 3.3. Let x̄ be a local Pareto minimum of Problem (MP). Assume that Assumption 3.1 is fulfilled and the constraint qualification (CQ_s) holds for some s ∈ J. Suppose, in addition, that the set H_T^s(x̄) is weak^* closed for some nonempty closed convex subcone T ⊆ Z(C; x̄) with vertex at the origin. Then there exist λ_k ≥ 0 (∀k ∈ J, k ≠ s), µ_i ≥ 0 (∀i ∈ I(x̄)), γ_j ∈ R (∀j ∈ L) such that

\[ 0 \in \mathrm{co}\,\partial^* f_s(\bar{x}) + \sum_{k \in J, k \ne s} \lambda_k \,\mathrm{co}\,\partial^* f_k(\bar{x}) + \sum_{i \in I(\bar{x})} \mu_i \,\mathrm{co}\,\partial^* g_i(\bar{x}) + \sum_{j \in L} \gamma_j \nabla_G h_j(\bar{x}) + T^0. \tag{6} \]

Proof. We invoke Theorem 3.2 to deduce that the system (2)-(5) has no solution v ∈ T, where (5) now takes the form ⟨∇_G h_j(x̄), v⟩ = 0 (∀j ∈ L). Hence, by Assumption 3.1, the following system also has no solution v ∈ T:

\[ \sup_{\xi_s \in \mathrm{co}\,\partial^* f_s(\bar{x})} \langle \xi_s, v \rangle < 0, \tag{7} \]
\[ \sup_{\xi_k \in \mathrm{co}\,\partial^* f_k(\bar{x})} \langle \xi_k, v \rangle \le 0 \quad (\forall k \in J, k \ne s), \tag{8} \]
\[ \sup_{\zeta_i \in \mathrm{co}\,\partial^* g_i(\bar{x})} \langle \zeta_i, v \rangle \le 0 \quad (\forall i \in I(\bar{x})), \tag{9} \]
\[ \langle \nabla_G h_j(\bar{x}), v \rangle = 0 \quad (\forall j \in L). \tag{10} \]

Let us show that

\[ 0 \in H_T^s(\bar{x}). \tag{11} \]

Assume the contrary, that 0 ∉ H_T^s(x̄). Since H_T^s(x̄) is convex and weak^* closed, a separation theorem for a weak^* closed convex set and a point outside it (see [5], Theorem 3.4) yields the existence of v_0 ∈ X, v_0 ≠ 0, such that

\[ \langle \zeta, v_0 \rangle < 0 \quad (\forall \zeta \in H_T^s(\bar{x})). \tag{12} \]

This implies that

\[ \langle \xi_s, v_0 \rangle + \sum_{k \in J, k \ne s} \lambda_k \langle \xi_k, v_0 \rangle + \sum_{i \in I(\bar{x})} \mu_i \langle \zeta_i, v_0 \rangle + \sum_{j \in L} \gamma_j \langle \nabla_G h_j(\bar{x}), v_0 \rangle + \langle \eta, v_0 \rangle < 0 \tag{13} \]

for all ξ_s ∈ co ∂^*f_s(x̄), λ_k ≥ 0, ξ_k ∈ co ∂^*f_k(x̄) (k ∈ J, k ≠ s), µ_i ≥ 0, ζ_i ∈ co ∂^*g_i(x̄) (i ∈ I(x̄)), γ_j ∈ R (j ∈ L), η ∈ T^0. Taking λ_k = 0 (∀k ∈ J, k ≠ s), µ_i = 0 (∀i ∈ I(x̄)), γ_j = 0 (∀j ∈ L) and η = 0, in view of the boundedness of ∂^*f_s(x̄) it follows from (13) that

\[ \sup_{\xi_s \in \mathrm{co}\,\partial^* f_s(\bar{x})} \langle \xi_s, v_0 \rangle < 0. \tag{14} \]

Let us show that

\[ \sup_{\xi_k \in \mathrm{co}\,\partial^* f_k(\bar{x})} \langle \xi_k, v_0 \rangle \le 0 \quad (\forall k \in J, k \ne s). \tag{15} \]

If this were not so, there would exist k_0 ∈ J, k_0 ≠ s, such that

\[ \sup_{\xi_{k_0} \in \mathrm{co}\,\partial^* f_{k_0}(\bar{x})} \langle \xi_{k_0}, v_0 \rangle > 0. \]

Then, for λ_k = 0 (∀k ∈ J, k ≠ k_0, s), µ_i = 0 (∀i ∈ I(x̄)), γ_j = 0 (∀j ∈ L), η = 0 and ξ_s ∈ ∂^*f_s(x̄), by taking λ_{k_0} large enough we get

\[ \langle \xi_s, v_0 \rangle + \lambda_{k_0} \sup_{\xi_{k_0} \in \mathrm{co}\,\partial^* f_{k_0}(\bar{x})} \langle \xi_{k_0}, v_0 \rangle > 0, \]

since |⟨ξ_s, v_0⟩| < +∞. But it follows from (13) that

\[ \langle \xi_s, v_0 \rangle + \lambda_{k_0} \sup_{\xi_{k_0} \in \mathrm{co}\,\partial^* f_{k_0}(\bar{x})} \langle \xi_{k_0}, v_0 \rangle \le 0. \]

Thus we arrive at a contradiction, and (15) follows. Analogously, we obtain

\[ \sup_{\zeta_i \in \mathrm{co}\,\partial^* g_i(\bar{x})} \langle \zeta_i, v_0 \rangle \le 0 \quad (\forall i \in I(\bar{x})). \tag{16} \]

Moreover,

\[ \langle \nabla_G h_j(\bar{x}), v_0 \rangle = 0 \quad (\forall j \in L). \tag{17} \]

Indeed, suppose this were false, that is, ⟨∇_G h_{j_0}(x̄), v_0⟩ ≠ 0 for some j_0 ∈ L. Then, for λ_k = 0 (∀k ∈ J, k ≠ s), µ_i = 0 (∀i ∈ I(x̄)), η = 0 and ξ_s ∈ ∂^*f_s(x̄), in view of the boundedness of ∂^*f_s(x̄), by letting γ_{j_0} be sufficiently large if ⟨∇_G h_{j_0}(x̄), v_0⟩ > 0, and γ_{j_0} < 0 with sufficiently large absolute value if ⟨∇_G h_{j_0}(x̄), v_0⟩ < 0, we arrive at a contradiction with (13); hence (17) holds.

It can also be seen that v_0 ∈ T. Indeed, if this were not so, there would exist η_0 ∈ T^0 such that ⟨η_0, v_0⟩ > 0. Taking λ_k = 0 (∀k ∈ J, k ≠ s), µ_i = 0 (∀i ∈ I(x̄)), γ_j = 0 (∀j ∈ L) and η = αη_0 with α > 0 sufficiently large (note that αη_0 ∈ T^0), we again arrive at a contradiction with (13). Therefore ⟨η, v_0⟩ ≤ 0 (∀η ∈ T^0). Since T is closed and convex, it is also weakly closed, and hence v_0 ∈ T^{00} = T.

It follows from (14)-(17) that the system (7)-(10) has the solution v_0 ∈ T: a contradiction. Consequently, (11) holds, and so there exist λ_k ≥ 0 (∀k ∈ J, k ≠ s), µ_i ≥ 0 (∀i ∈ I(x̄)), γ_j ∈ R (∀j ∈ L) such that the inclusion (6) holds. The proof is complete. □

Remark 3.1. In Theorem 3.3, ∂^*f_k(x̄) (k ∈ J, k ≠ s) and ∂^*g_i(x̄) (i ∈ I(x̄)) may be unbounded, and the functions f_k (k ∈ J, k ≠ s), g_i (i ∈ I(x̄)), h_j (j ∈ L) need not be continuous.
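In the smooth, finite-dimensional setting the inclusion (6) takes a familiar Karush-Kuhn-Tucker form. For the illustrative example used after Theorem 3.2 (again, an assumption of this sketch rather than an example from the paper), each convexificator is the singleton {∇f_k(x̄)} or {∇g(x̄)}, C = X = R², T = Z(C; x̄) = R² and T^0 = {0}, so (6) with s = 1 reads 0 = ∇f_1(x̄) + λ_2 ∇f_2(x̄) + µ ∇g(x̄) with λ_2, µ ≥ 0. The sketch below solves this small linear system numerically.

```python
import numpy as np

grad_f1 = np.array([1.0, 0.0])
grad_f2 = np.array([0.0, 1.0])
grad_g  = np.array([-1.0, -1.0])

# Unknowns (lambda_2, mu): the columns of A are the vectors they multiply in (6).
A = np.column_stack([grad_f2, grad_g])
multipliers, *_ = np.linalg.lstsq(A, -grad_f1, rcond=None)
lambda_2, mu = multipliers

residual = grad_f1 + lambda_2 * grad_f2 + mu * grad_g
print(f"lambda_2 = {lambda_2:.3f}, mu = {mu:.3f}, residual = {residual}")
assert lambda_2 >= -1e-9 and mu >= -1e-9 and np.allclose(residual, 0.0)
```

The computed multipliers λ_2 = µ = 1 confirm that x̄ = (0.5, 0.5) satisfies the Karush-Kuhn-Tucker condition of Theorem 3.3 for this example.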
4 Strong Karush-Kuhn-Tucker necessary conditions

To derive strong Karush-Kuhn-Tucker necessary conditions for efficiency of Problem (MP), with positive Lagrange multipliers corresponding to all the components of the objective, we introduce the following assumption.

Assumption 4.1. For every k ∈ J, the function f_k admits a nonempty bounded upper semi-regular convexificator ∂^*f_k(x̄) at x̄; for every i ∈ I(x̄), the function g_i admits an upper convexificator ∂^*g_i(x̄) at x̄; all the functions g_i (i ∉ I(x̄)) are continuous at x̄; all the functions h_j (j ∈ L) are Gâteaux differentiable at x̄.

In what follows we give a strong Karush-Kuhn-Tucker necessary condition for a local Pareto minimum in which the Lagrange multipliers corresponding to all the components of the objective function are positive.

Theorem 4.1. Let x̄ be a local Pareto minimum of Problem (MP). Assume that Assumption 4.1 is fulfilled, the constraint qualification (CQ_s) holds for all s ∈ J, and the set H_T^s(x̄) is weak^* closed for some nonempty closed convex subcone T of Z(C; x̄) with vertex at the origin and for all s ∈ J. Then there exist λ_k > 0 (∀k ∈ J), µ_i ≥ 0 (∀i ∈ I(x̄)), γ_j ∈ R (∀j ∈ L) such that

\[ 0 \in \sum_{k \in J} \lambda_k \,\mathrm{co}\,\partial^* f_k(\bar{x}) + \sum_{i \in I(\bar{x})} \mu_i \,\mathrm{co}\,\partial^* g_i(\bar{x}) + \sum_{j \in L} \gamma_j \nabla_G h_j(\bar{x}) + T^0. \]

Proof. It is easy to see that Assumption 4.1 implies Assumption 3.1 for every s ∈ J. We invoke Theorem 3.3 to deduce that for every s ∈ J there exist λ_k^(s) ≥ 0 (∀k ∈ J, k ≠ s), µ_i^(s) ≥ 0 (∀i ∈ I(x̄)) and γ_j^(s) ∈ R (∀j ∈ L) such that

\[ 0 \in \mathrm{co}\,\partial^* f_s(\bar{x}) + \sum_{k \in J, k \ne s} \lambda_k^{(s)} \,\mathrm{co}\,\partial^* f_k(\bar{x}) + \sum_{i \in I(\bar{x})} \mu_i^{(s)} \,\mathrm{co}\,\partial^* g_i(\bar{x}) + \sum_{j \in L} \gamma_j^{(s)} \nabla_G h_j(\bar{x}) + T^0. \tag{18} \]

Taking s = 1, ..., m in (18) and adding up both sides of the resulting inclusions, we arrive at

\[ 0 \in \sum_{k \in J} \lambda_k \,\mathrm{co}\,\partial^* f_k(\bar{x}) + \sum_{i \in I(\bar{x})} \mu_i \,\mathrm{co}\,\partial^* g_i(\bar{x}) + \sum_{j \in L} \gamma_j \nabla_G h_j(\bar{x}) + T^0, \]

where

\[ \lambda_k = 1 + \sum_{s \in J, s \ne k} \lambda_k^{(s)} > 0 \ (\forall k \in J), \qquad \mu_i = \sum_{s \in J} \mu_i^{(s)} \ge 0 \ (\forall i \in I(\bar{x})), \qquad \gamma_j = \sum_{s \in J} \gamma_j^{(s)} \in \mathbb{R} \ (\forall j \in L), \]

as was to be shown. □

Remark 4.1. In Theorem 4.1, ∂^*g_i(x̄) (i ∈ I(x̄)) may be unbounded, and the functions g_i (i ∈ I(x̄)), h_j (j ∈ L) need not be continuous.
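The proof of Theorem 4.1 aggregates the per-objective conclusions of Theorem 3.3. The sketch below (using the same illustrative smooth example and hypothetical multiplier values as in the previous sketches, not data from the paper) carries out the summation of (18) over s ∈ J and checks that the aggregated multipliers λ_k = 1 + Σ_{s≠k} λ_k^(s) are strictly positive while the combined inclusion still holds.

```python
import numpy as np

grads_f = {1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}
grad_g = np.array([-1.0, -1.0])

# Multipliers obtained from (18) for each s (for this example both systems give 1.0).
lam = {1: {2: 1.0}, 2: {1: 1.0}}   # lam[s][k] = lambda_k^(s), k != s
mu  = {1: 1.0, 2: 1.0}             # mu[s] = mu^(s) for the single active constraint

lambda_total = {k: 1.0 + sum(lam[s][k] for s in lam if s != k) for k in grads_f}
mu_total = sum(mu.values())

combined = sum(lambda_total[k] * grads_f[k] for k in grads_f) + mu_total * grad_g
print("lambda =", lambda_total, " mu =", mu_total, " combined =", combined)
assert all(l > 0 for l in lambda_total.values()) and np.allclose(combined, 0.0)
```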
References

[1] Clarke, F.H., Optimization and Nonsmooth Analysis, Wiley Interscience, New York, 1983.
[2] Demyanov, V.F., Convexification and concavification of a positively homogeneous function by the same family of linear functions, Università di Pisa, Report 3, 208, 802 (1994).
[3] Demyanov, V.F. and Rubinov, A.M., Constructive Nonsmooth Analysis, Verlag Peter Lang, Frankfurt, 1995.
[4] Dutta, J. and Chandra, S., Convexifactors, generalized convexity and vector optimization, Optimization 53 (2004), 77-94.
[5] Girsanov, I.V., Lectures on Mathematical Theory of Extremum Problems, Springer-Verlag, Berlin-Heidelberg, 1972.
[6] Jeyakumar, V. and Luc, D.T., Nonsmooth calculus, minimality, and monotonicity of convexificators, J. Optim. Theory Appl. 101 (1999), 599-621.
[7] Jeyakumar, V. and Luc, D.T., Approximate Jacobian matrices for nonsmooth continuous maps and C^1-optimization, SIAM J. Control Optim. 36 (1998), 1815-1832.
[8] Jeyakumar, V., Luc, D.T., and Schaible, S., Characterizations of generalized monotone nonsmooth continuous maps using approximate Jacobians, J. Convex Anal. 5 (1998), 119-132.
[9] Luc, D.T., A multiplier rule for multiobjective programming problems with continuous data, SIAM J. Optim. 13 (2002), 168-178.
[10] Luu, D.V., Convexificators and necessary conditions for efficiency, Optimization 63 (2014), 321-335.
[11] Luu, D.V., Necessary and sufficient conditions for efficiency via convexificators, J. Optim. Theory Appl. 160 (2014), 510-526.
[12] Luu, D.V. and Hang, D.D., On optimality conditions for vector variational inequalities, J. Math. Anal. Appl. 412 (2014), 792-804.
[13] Luu, D.V., Necessary conditions for efficiency in terms of the Michel-Penot subdifferentials, Optimization 61 (2012), 1099-1117.
[14] Luu, D.V. and Hung, N.M., On alternative theorems and necessary conditions for efficiency, Optimization 58 (2009), 49-6.
[15] Michel, P. and Penot, J.-P., Calcul sous-différentiel pour des fonctions lipschitziennes et non lipschitziennes, C. R. Acad. Sci. Paris Sér. I Math. 12 (1984), 269-272.
[16] Mordukhovich, B.S. and Shao, Y., On nonconvex subdifferential calculus in Banach spaces, J. Convex Anal. 2 (1995), 211-228.
