
DSpace at VNU: One step from DC optimization to DC mixed variational inequalities


This article was downloaded by: [Universidad Autonoma de Barcelona] On: 23 October 2014, At: 01:29.
Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Optimization: A Journal of Mathematical Programming and Operations Research. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gopt20

To cite this article: Le Dung Muu & Tran Dinh Quoc (2010) One step from DC optimization to DC mixed variational inequalities, Optimization: A Journal of Mathematical Programming and Operations Research, 59:1, 63-76, DOI: 10.1080/02331930903500282
To link to this article: http://dx.doi.org/10.1080/02331930903500282

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions

Optimization, Vol. 59, No. 1, January 2010, 63-76

One step from DC optimization to DC mixed variational inequalities

Le Dung Muu(a)* and Tran Dinh Quoc(b)

(a) Hanoi Institute of Mathematics, VAST, 18 Hoang Quoc Viet road, Cau Giay district, Hanoi, Vietnam; (b) Hanoi University of Science, 334 Nguyen Trai road, Thanh Xuan district, Hanoi, Vietnam

(Received 10 April 2008; final version received 15 March 2009; published online 12 February 2010)
*Corresponding author. Email: ldmuu@math.ac.vn

We apply the proximal point method to mixed variational inequalities by using DC decompositions of the cost function. An estimate for the iterative sequence is given and then used to prove the convergence of the obtained sequence to a stationary point. A linear convergence rate is achieved when the cost function is strongly convex. For the nonconvex case, global algorithms are proposed to search for a global equilibrium point. A Cournot-Nash oligopolistic market model with concave cost function, which motivates our consideration, is presented.

Keywords: mixed variational inequality; splitting proximal point method; DC decomposition; local and global equilibria; Cournot-Nash model

1. Introduction

Let ∅ ≠ C ⊆ R^n be a closed convex set, F a mapping from R^n to R^n and φ a real-valued (not necessarily convex) function defined on R^n. We consider the following mixed variational inequality problem (MVIP):

  Find x* ∈ C such that F(x*)^T(y − x*) + φ(y) − φ(x*) ≥ 0 for all y ∈ C.  (1)

We call such a point x* a global solution, in contrast to a local solution, which is a point x* ∈ C satisfying

  F(x*)^T(y − x*) + φ(y) − φ(x*) ≥ 0 for all y ∈ C ∩ U,  (2)

where U is an open neighbourhood of x*. These points are sometimes referred to as local and global equilibrium points, respectively. Note that when φ is convex on C, every local solution is a global one; when φ is not convex, a local solution may fail to be global.

MVIPs of the form (1) are extensively studied in the literature. Results on existence, stability and solution approaches when φ is convex are obtained in many research papers (see, e.g., [2,5,7,8,10,12,19] and the references quoted therein). However, when φ is nonconvex, these results may no longer hold.

The proximal point method was first introduced by Martinet [9] for variational inequalities and then extended by Rockafellar [18] to finding a zero of a maximal monotone operator. Sun et al. [20] applied the proximal point method to DC optimization. It is observed that, with a suitable DC decomposition, the DC algorithm introduced by Pham [13] for DC optimization problems becomes a proximal point algorithm. Recently, DC optimization has been successfully applied to many practical problems (see [1,14-16] and the references therein). Our work is motivated by the well-known Cournot-Nash oligopolistic market model, together with the observation that the cost function is not always linear or convex, but can become concave as the amount of production increases.

In this article, we further apply the proximal point method to mixed variational inequality (1) by using a DC decomposition of the cost function φ. The DC decomposition φ = g − h allows us to develop a splitting proximal point algorithm to find a stationary point of (1).
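The defining inequality (1) can be checked numerically on a toy instance: x* solves (1) exactly when the minimum of F(x*)^T(y − x*) + φ(y) − φ(x*) over y ∈ C is zero. The instance below (C = [−1, 1], F(x) = x, φ(y) = |y|) is an illustrative assumption, not data from the paper:

```python
import numpy as np

# Toy MVIP on C = [-1, 1] with F(x) = x and the convex cost phi(y) = |y|.
# These choices are illustrative only.
F = lambda x: x
phi = lambda y: abs(y)

def gap(x, grid):
    """Approximate min_y { F(x)(y - x) + phi(y) - phi(x) : y in C } on a grid of C.
    The point x satisfies (1) iff this minimum equals zero."""
    return min(F(x) * (y - x) + phi(y) - phi(x) for y in grid)

grid = np.linspace(-1.0, 1.0, 2001)
print(gap(0.0, grid))   # ~ 0: x* = 0 is a global solution of this toy instance
print(gap(0.5, grid))   # ~ -0.75: x = 0.5 is not a solution
```

The grid check is of course only a sanity test in one dimension; the paper's algorithms replace it with the proximal and outer-approximation machinery developed below.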
This splitting algorithm is useful when g is a convex function such that the resulting convex subproblem is easy to minimize, since the resolvent is defined by using only the subgradient of g.

The rest of the article is organized as follows. In Section 2, a splitting proximal point method for mixed variational inequalities with DC cost function is proposed. An estimate for the iterative sequence is given and then used to prove convergence to a stationary point when the cost function happens to be convex; a linear convergence rate is achieved when the cost function is strongly convex. In order to handle nonconvex cases, global algorithms are presented in Section 3 to search for a global solution. We close with the Cournot-Nash oligopolistic market model, which gives evidence for our consideration.

2. The proximal point method for DC mixed variational inequalities

In this section we first investigate properties of local and global solutions to MVIP (1) when φ is a DC function. Next, we extend the proximal point method to find a stationary point of this problem. Finally, we prove convergence results when φ happens to be convex and F is strongly monotone.

2.1. Conditions for equilibrium

As usual, for problem (1) we call C the feasible domain, F the cost operator and φ the cost function. Since C is closed and convex, it is easy to see that any local solution to (1) is a global one provided that φ is convex on C. Motivated by this fact, we call problem (1) a convex mixed variational inequality when φ is convex, in contrast to nonconvex mixed variational inequalities, where the cost function is not convex.

Let us denote

  N_C := { (x, U) : x ∈ C, U is a neighbourhood of x },

and define the mapping S : N_C → 2^C and the function m : N_C → R by

  S(x, U) := argmin { F(x)^T(y − x) + φ(y) : y ∈ C ∩ U },  (3)
  m(x, U) := min { F(x)^T(y − x) + φ(y) − φ(x) : y ∈ C ∩ U },  (4)

respectively. As usual, we refer to m(x, U) as a local gap function for problem (1). The following proposition gives necessary and sufficient conditions for a point to be a (local or global) solution to (1).

PROPOSITION 2.1. Suppose that S(x, U) ≠ ∅ for every (x, U) ∈ N_C. Then the following statements are equivalent:
(a) x* is a local solution to (1);
(b) x* ∈ C and x* ∈ S(x*, U);
(c) x* ∈ C and m(x*, U) = 0.

Proof. We first prove that (a) is equivalent to (b). Suppose x* ∈ C and x* ∈ S(x*, U). Then

  0 = F(x*)^T(x* − x*) + φ(x*) − φ(x*) ≤ F(x*)^T(y − x*) + φ(y) − φ(x*) for all y ∈ C ∩ U.

Hence x* is a local solution to problem (1). Conversely, if

  F(x*)^T(y − x*) + φ(y) − φ(x*) ≥ 0 for all y ∈ C ∩ U,  (5)

then it is clear that x* ∈ S(x*, U). Observe that m(x, U) ≤ 0 for every x ∈ C ∩ U. Thus x* ∈ C ∩ U and m(x*, U) = 0 if and only if F(x*)^T(y − x*) + φ(y) − φ(x*) ≥ 0 for all y ∈ C ∩ U, which means that (a) and (c) are equivalent. ∎

Clearly, if U in Proposition 2.1 contains C, then x* is a global solution to (1). Unlike convex mixed variational inequalities (which include convex optimization and variational inequalities), a DC mixed variational inequality may have no solution even when all the functions involved are continuous and the feasible domain is compact. For example, if we take C := [−1, 1] ⊆ R, F(x) = x and φ(x) = −x² (a concave function), then problem (1) has no solution. Conditions for the existence of solutions of MVIPs lacking convexity have been considered in some recent papers (see, e.g., [7,12]). In this paper, however, we focus only on solution approaches to (1).

We denote by Γ_C the set of proper, lower semicontinuous, subdifferentiable convex functions on C and suppose that both convex functions g and h belong to Γ_C. Moreover, since φ(x) = (g(x) + g₁(x)) − (h(x) + g₁(x)) for an arbitrary function g₁ ∈ Γ_C, we may assume that both g and h are strongly convex on C. By Proposition 2.1, x is a solution to (1) if and only if x solves the optimization problem

  min { F(x)^T(y − x) + g(y) − h(y) : y ∈ C }.

Motivated by this fact, we can borrow the concept of a stationary point from optimization for MVIP (1).

Definition 2.2. A point x ∈ C is called a stationary point of problem (1) if

  0 ∈ F(x) + ∂g(x) − ∂h(x) + N_C(x),  (6)

where

  N_C(x) := { w : w^T(y − x) ≤ 0 for all y ∈ C }

denotes the (outward) normal cone of C at x ∈ C, and ∂g(x) and ∂h(x) are the subdifferentials of g and h at x, respectively.

Since N_C(x) is a cone, for every c > 0 the inclusion (6) is equivalent to

  0 ∈ c( F(x) + ∂g(x) − ∂h(x) ) + N_C(x).  (7)

Let us define g₁(x) := g(x) + δ_C(x), where δ_C is the indicator function of C. Then, applying the well-known Moreau-Rockafellar theorem, we have ∂g₁(x) = ∂g(x) + ∂δ_C(x). Thus, by definition, x is a stationary point if and only if

  0 ∈ c( F(x) + ∂g(x) − ∂h(x) ) + ∂δ_C(x),

where c > 0 is referred to as a regularization parameter in the algorithm described below. From Proposition 2.1 it follows, as in optimization, that every local solution to problem (1) is a stationary point. Since both g and δ_C are proper, convex and closed, so is the function g₁; thus ∂g₁(x) = ∂g(x) + N_C(x) for every x ∈ C.

PROPOSITION 2.3. A necessary and sufficient condition for x to be a stationary point of problem (1) is that

  x ∈ (I + c ∂g₁)^{−1}( x − cF(x) + c ∂h(x) ),  (8)

where c > 0 and I stands for the identity mapping.

Proof. Since g₁ is proper, closed and convex, (I + c ∂g₁)^{−1} is single-valued and defined everywhere [18]. Hence x satisfies (8) if and only if x − cF(x) + cv(x) ∈ (I + c ∂g₁)(x) for some v(x) ∈ ∂h(x). Since N_C(x) is a cone and ∂g₁(x) = ∂g(x) + ∂δ_C(x) = ∂g(x) + N_C(x), the inclusion x − cF(x) + cv(x) ∈ (I + c ∂g₁)(x) is equivalent to 0 ∈ F(x) + ∂g(x) − ∂h(x) + N_C(x), which proves (8). ∎

2.2. The algorithm and its convergence

If we denote the right-hand side of (8) by Z(x), then inclusion (8) becomes x ∈ Z(x). Proposition 2.3 thus says that finding a stationary point of (1) amounts to finding a fixed point of the splitting proximal point mapping Z. Following the framework of proximal point methods, we construct an iterative sequence as follows. Take an arbitrary x⁰ ∈ C and set k := 0. For each k = 0, 1, ..., given x^k, compute x^{k+1} by

  x^{k+1} = (I + c_k ∂g₁)^{−1}( x^k − c_k F(x^k) + c_k v(x^k) ),  where v(x^k) ∈ ∂h(x^k).  (9)

If we set y^k := x^k − c_k F(x^k) + c_k v(x^k), then finding x^{k+1} reduces to solving the strongly convex programming problem

  min { g(x) + (1/(2c_k)) ‖x − y^k‖² : x ∈ C }.  (10)

Indeed, by the well-known optimality condition for convex programming, x^{k+1} is the optimal solution to the convex problem (10) if and only if

  0 ∈ ∂g(x^{k+1}) + (1/c_k)(x^{k+1} − y^k) + N_C(x^{k+1}).

Note that when F ≡ 0 and h ≡ 0 the process (9) becomes the well-known proximal point algorithm for convex programming problems, whereas when F ≡ 0 it is the proximal method for DC optimization [14,18,20].
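As a concrete illustration of the step (9)-(10), here is a one-dimensional sketch under illustrative assumptions (C = [0, 2], F(x) = x − 1, and the DC cost φ = g − h with g(x) = x² strongly convex and h(x) = 2x smooth; none of this data comes from the paper). For g(x) = x² the subproblem (10) has a closed-form minimizer, which in one dimension is then simply clipped onto C:

```python
import numpy as np

# Illustrative data: C = [0, 2], F(x) = x - 1, g(x) = x^2, h(x) = 2x.
c = 0.5                      # regularization parameter c_k (kept constant here)
F = lambda x: x - 1.0
dh = lambda x: 2.0           # h'(x), i.e. v(x) in (9)

def prox_step(x):
    # y^k = x^k - c F(x^k) + c v(x^k), then x^{k+1} = argmin_{x in C} g(x) + |x - y|^2/(2c).
    y = x - c * F(x) + c * dh(x)
    # For g(x) = x^2 the unconstrained minimizer is y/(1 + 2c); clip it onto C = [0, 2].
    return float(np.clip(y / (1.0 + 2.0 * c), 0.0, 2.0))

x = 0.0
for _ in range(60):
    x = prox_step(x)
print(x)   # prints 1.0: the stationary point, where F(x) + g'(x) - h'(x) = 3x - 3 = 0
```

The fixed-point behaviour mirrors Proposition 2.3: x* = 1 satisfies x* = prox_step(x*), and the iteration contracts towards it.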
In order to prove the convergence of the proximal point method defined by (9), we recall the following well-known definitions (see, e.g., [7,17,21]). For a mapping Φ : C → 2^{R^n}:

- Φ is said to be monotone on C if (u − v)^T(x − y) ≥ 0 for all x, y ∈ C, u ∈ Φ(x), v ∈ Φ(y);
- Φ is said to be maximal monotone if its graph is not properly contained in the graph of another monotone mapping;
- Φ is said to be strongly monotone with modulus β > 0 (shortly, β-strongly monotone) on C if (u − v)^T(x − y) ≥ β‖x − y‖² for all x, y ∈ C, u ∈ Φ(x), v ∈ Φ(y);
- Φ is said to be cocoercive with modulus δ > 0 (shortly, δ-cocoercive) on C if

  (u − v)^T(x − y) ≥ δ‖u − v‖² for all x, y ∈ C, u ∈ Φ(x), v ∈ Φ(y).  (11)

Clearly, if Φ is single-valued and δ-cocoercive, then it is (1/δ)-Lipschitz.

PROPOSITION 2.4. Suppose that the set S* of stationary points of problem (1) is nonempty, that F is δ-cocoercive, g is strongly convex on C with modulus η > 0, and h is L-Lipschitz differentiable on C. Then, for each x* ∈ S*, we have

  λ_k ‖x^k − x*‖² − μ_k ‖x^{k+1} − x*‖² ≥ c_k(2δ − c_k) ‖F(x^k) − F(x*)‖²,  (12)

where λ_k = 1 + c_k L t, μ_k = 1 + 2c_k η − c_k L/t and t > 0.

Proof. First note that, since g₁ is proper, closed and convex, the mapping (I + c_k ∂g₁)^{−1} is single-valued and defined everywhere for every c_k > 0 [18]; thus the sequence {x^k} constructed by (9) is well defined. It follows from (9) that

  x^{k+1} = x^k − c_k F(x^k) + c_k v^k − c_k z^{k+1},  (13)

where v^k = ∇h(x^k) and z^{k+1} ∈ ∂g₁(x^{k+1}) = ∂g(x^{k+1}) + N_C(x^{k+1}). For simplicity of notation we write F^k for F(x^k) and F* for F(x*). By definition, if x* is a stationary point of MVIP (1), then 0 = z* + F* − v*, where z* ∈ ∂g₁(x*) and v* = ∇h(x*). The cocoercivity of F on C implies

  (F^k − F*)^T(x^k − x* − c_k F^k + c_k F*) ≥ D_k,  where D_k := (δ − c_k)‖F^k − F*‖².

Since F* = v* − z* and x^{k+1} = x^k − c_k F^k + c_k v^k − c_k z^{k+1}, we obtain from the last inequality that

  (F^k − F*)^T(x^{k+1} − x*) − c_k (F^k − F*)^T(v^k − v* − z^{k+1} + z*) ≥ D_k.  (14)

On the other hand, since g is strongly convex with modulus η > 0, ∂g is strongly monotone with modulus η, which implies that ∂g₁ = ∂g + N_C is also strongly monotone with modulus η. Thus, from z^{k+1} ∈ ∂g₁(x^{k+1}), we can write

  (x^{k+1} − x*)^T(z^{k+1} − z*) − η‖x^{k+1} − x*‖² ≥ 0.  (15)

Adding (14) and (15), using z* + F* = v* and (13), and writing x̂^k := x^k − x*, x̂^{k+1} := x^{k+1} − x*, v̂^k := v^k − v*, ẑ^{k+1} := z^{k+1} − z* and F̂^k := F^k − F*, we obtain after multiplying by 2c_k:

  2c_k (x̂^{k+1})^T v̂^k − 2(x̂^{k+1})^T(x̂^{k+1} − x̂^k) − 2c_k² (F̂^k)^T(v̂^k − ẑ^{k+1}) − 2c_k η ‖x̂^{k+1}‖² − 2c_k D_k ≥ 0.  (16)

From (13), we have

  ‖x̂^{k+1} − x̂^k‖² = c_k² ‖F̂^k‖² + c_k² ‖ẑ^{k+1} − v̂^k‖² − 2c_k² (F̂^k)^T(v̂^k − ẑ^{k+1}).

Then, using the identity

  −2(x̂^{k+1})^T(x̂^{k+1} − x̂^k) = −‖x̂^{k+1} − x̂^k‖² − ‖x̂^{k+1}‖² + ‖x̂^k‖²,

we obtain from (16) that

  2c_k (x̂^{k+1})^T v̂^k − (1 + 2c_k η)‖x̂^{k+1}‖² + ‖x̂^k‖² − c_k² ‖F̂^k‖² − c_k² ‖v̂^k − ẑ^{k+1}‖² − 2c_k D_k ≥ 0.  (17)

Since ∇h is L-Lipschitz continuous, we have ‖v̂^k‖ ≤ L‖x̂^k‖, and it is easy to show by the Cauchy-Schwarz inequality that

  2(x̂^{k+1})^T v̂^k ≤ 2‖x̂^{k+1}‖ ‖v̂^k‖ ≤ 2L‖x̂^{k+1}‖ ‖x̂^k‖ ≤ L( t‖x̂^k‖² + (1/t)‖x̂^{k+1}‖² ) for all t > 0.

Replacing 2(x̂^{k+1})^T v̂^k by L(t‖x̂^k‖² + (1/t)‖x̂^{k+1}‖²) in (17) and using the definition of D_k, we obtain

  λ_k ‖x̂^k‖² − μ_k ‖x̂^{k+1}‖² ≥ c_k(2δ − c_k)‖F̂^k‖² + c_k² ‖v̂^k − ẑ^{k+1}‖²,  (18)

where λ_k = 1 + c_k L t, μ_k = 1 + 2c_k η − c_k L/t and t > 0. The proposition is proved. ∎

The following corollary proves the convergence of the proximal point sequence {x^k} by using estimate (12).

COROLLARY 2.5. Under the assumptions of Proposition 2.4, suppose further that η ≥ L. Then the sequence {x^k} generated by (9) converges to a stationary point of problem (1). Moreover, if either η > L or F is strongly monotone, then {x^k} converges linearly to a stationary point of (1).

Proof. Suppose η ≥ L, and let m and M be two real numbers such that 0 < m ≤ c_k ≤ M < 2δ. Choosing t = 1, it follows from (18) that

  ‖x^k − x*‖² − ‖x^{k+1} − x*‖² ≥ ( m(2δ − M)/(1 + ML) ) ‖F(x^k) − F(x*)‖² ≥ 0.

Hence {x^k} is bounded and the sequence {‖x^k − x*‖²} is convergent, since it is nonincreasing and bounded below by 0. Moreover, this inequality implies lim_{k→∞} F(x^k) = F(x*). Note that, by (13), we have

  lim_{k→∞} (x^{k+1} − x^k)/c_k = lim_{k→∞} (v^k − z^{k+1} − F^k) = lim_{k→∞} (v^k − z^{k+1}) − F* = 0,

which, by the fact −F* = z* − v*, yields lim_{k→∞} (v^k − v* − z^{k+1} + z*) = 0.

Let x^∞ be a limit point of the bounded sequence {x^k} and let {x^k : k ∈ K} be a subsequence converging to x^∞. Since F is cocoercive, it is continuous; thus lim_{k→∞} F(x^k) = F(x*) implies F(x^∞) = F(x*). By the assumption that ∇h is L-Lipschitz, we have ‖v^k − v*‖ ≤ L‖x^k − x*‖, so {v^k} is bounded too. Taking a further subsequence if necessary, we may assume that {v^k : k ∈ K} converges to v^∞; using the continuity of ∇h, we have v^∞ = ∇h(x^∞).

Now we show that v^∞ − F(x^∞) ∈ ∂g₁(x^∞). To this end, let z ∈ ∂g₁(x). It follows from the strong monotonicity of ∂g₁ that

  η‖x^{k+1} − x‖² ≤ (z^{k+1} − z)^T(x^{k+1} − x).

Note that, from lim_{k→∞}(v^k − v* − z^{k+1} + z*) = 0, we have z^{k+1} − v^k → z* − v* = −F(x*). Then, taking the limit over k ∈ K in the last inequality, we obtain

  (v^∞ − F(x*) − z)^T(x^∞ − x) ≥ η‖x^∞ − x‖² ≥ 0,
which, by the maximal monotonicity of ∂g₁ [17], implies that v^∞ − F(x*) ∈ ∂g₁(x^∞). Since F(x^∞) = F(x*), we have v^∞ − F(x^∞) ∈ ∂g₁(x^∞). As v^∞ = ∇h(x^∞), it follows that 0 ∈ ∂g₁(x^∞) + F(x^∞) − ∇h(x^∞), which means that x^∞ is a stationary point of problem (1). Substituting x^∞ for x* in (12) and observing that {‖x^k − x^∞‖} is convergent, we conclude that the whole sequence {x^k} converges to x^∞, because it has a subsequence converging to x^∞.

Next, it follows from (12) that λ_k ‖x^k − x*‖² ≥ μ_k ‖x^{k+1} − x*‖². Thus, if L < η, then r := sup_k √(λ_k/μ_k) < 1, and ‖x^{k+1} − x*‖ ≤ r‖x^k − x*‖, which shows that the sequence {x^k} converges linearly to x*.

If F is strongly monotone with modulus β > 0, then

  ‖F(x^k) − F(x*)‖ ‖x^k − x*‖ ≥ (F(x^k) − F(x*))^T(x^k − x*) ≥ β‖x^k − x*‖².

Consequently, ‖F(x^k) − F(x*)‖² ≥ β²‖x^k − x*‖². Substituting this inequality into (12) and rearranging, we get

  [ λ_k − β² c_k(2δ − c_k) ] ‖x^k − x*‖² ≥ μ_k ‖x^{k+1} − x*‖².

Using the assumption 0 < m ≤ c_k ≤ M < 2δ and taking t = 1, we obtain from the last inequality that

  [ 1 + c_k L − β² c_k(2δ − c_k) ] ‖x^k − x*‖² ≥ (1 + 2c_k η − c_k L) ‖x^{k+1} − x*‖².

Since η + (β²/2)(2δ − M) > L, we have 1 + c_k L − β² c_k(2δ − c_k) < 1 + 2c_k η − c_k L for every k, and it is easy to see that {x^k} converges linearly to x*. ∎

3. Global solution methods

In this section, we propose solution methods for finding a global solution of MVIP (1), where the cost function φ may not be convex but is a DC function. The first method uses the convex envelope of the cost function to convert a nonconvex MVIP into a convex one. The second method is devoted to the case where the cost function φ is concave; in this case a global solution is attained at an extreme point of the feasible domain. This fact suggests that outer approximation techniques, which have been widely used in global optimization, can be applied to nonconvex mixed variational inequalities.

First, we recall that the convex envelope of a function ψ on a convex set C is a convex function conv ψ on C satisfying the following conditions:
(i) conv ψ(x) ≤ ψ(x) for every x ∈ C;
(ii) if l is convex on C and l(x) ≤ ψ(x) for all x ∈ C, then l(x) ≤ conv ψ(x) for all x ∈ C.

We need the following lemma.

LEMMA 3.1 [6]. Let ψ := l + ψ₁ with l an affine function, and suppose that C is a polyhedral convex set. Then the convex envelope of ψ on C is l + conv ψ₁, where conv ψ₁ denotes the convex envelope of ψ₁ on C.

Using Lemma 3.1 we can prove the following proposition, which states that problem (1) is equivalent to a convex MVIP whenever it admits a solution.

PROPOSITION 3.2. Suppose that problem (1) is solvable. Then a point x, for which conv φ(x) = φ(x), is a global solution to problem (1) if and only if it is a solution of the following convex MVIP:

  Find x ∈ C such that F(x)^T(y − x) + conv φ(y) − conv φ(x) ≥ 0 for all y ∈ C.  (19)

Proof. For simplicity of notation, let us denote the bifunction of problem (1) by f, i.e.

  f(x, y) := F(x)^T(y − x) + φ(y) − φ(x),

and the bifunction of (19) by f̃, i.e.

  f̃(x, y) := F(x)^T(y − x) + conv φ(y) − conv φ(x).

It follows from our assumption that

  f̃(x, y) = F(x)^T(y − x) + conv φ(y) − φ(x).

Since, for each fixed x, the function F(x)^T(· − x) is affine, by Lemma 3.1 f̃(x, ·) is the convex envelope of f(x, ·) on C. Suppose that x* is a global solution to (1). Then f(x*, y) ≥ 0 for every y ∈ C. In virtue of Proposition 2.1, we have

  m(x*) = min { f(x*, y) : y ∈ C } = 0.

Thus m(x*) ≤ f(x*, y) for every y ∈ C. Since the constant function m(x*) (with respect to the variable y) is convex, we have f̃(x*, y) ≥ m(x*) = 0 for every y ∈ C, which means that x* is a global solution to problem (19). Conversely, if x* is a solution to (19) then, again by Proposition 2.1, one has

  0 = m̃(x*) := min_{y ∈ C} f̃(x*, y).

Since f̃(x*, y) ≤ f(x*, y) and m(x*) = min_{y ∈ C} f(x*, y), it follows that 0 = m̃(x*) ≤ m(x*). This inequality, together with m(x*) ≤ 0, implies m(x*) = 0. Again by Proposition 2.1, we conclude that x* is a global solution to (1). ∎

Proposition 3.2 suggests that instead of solving the nonconvex MVIP (1) we can solve the convex MVIP (19). However, computing the convex envelope of a function on a convex set is, in general, difficult except in special cases. As a particular case, we consider an important special class: φ = g − h with g affine and −h concave. In this case, the convex envelope of f(x, ·) is

  conv f(x, y) = F(x)^T(y − x) + g(y) + conv(−h)(y) − ( g(x) − h(x) ).

Since −h is concave, its convex envelope on a polyhedral convex set can be computed with reasonable effort, even explicitly when, for instance, the polyhedron is a simplex [6].

Another global solution approach to problem (1), where φ is a concave function, is outer approximation, which has been widely used in global optimization. This approach is based
upon the fact that, in our specific case, a global solution to problem (1) is attained at an extreme point of the feasible set C. In fact, by Proposition 2.1, x is a global solution to MVIP (1) if and only if it is a solution to the optimization problem

  min_y { F(x)^T(y − x) + φ(y) − φ(x) : y ∈ C }.  (20)

Since φ is concave, this mathematical programme attains its optimal solution at an extreme point of C whenever it admits a solution. Thus, for each fixed x, problem (20) has a solution at an extreme point of C.

Suppose that C is a compact convex set. Then we can describe an outer approximation procedure for globally solving MVIP (1). As in global optimization, the procedure starts from a simple polyhedral convex set S₀ which contains the feasible domain C, and a nested sequence {S_k} of polyhedral convex sets is constructed such that

  S₀ ⊇ S₁ ⊇ ··· ⊇ S_k ⊇ ··· ⊇ C.

At each iteration k, we solve the following relaxed problem:

  Find v^k ∈ S_k such that F(v^k)^T(y − v^k) + φ(y) − φ(v^k) ≥ 0 for all y ∈ S_k.  (21)

If it happens that v^k ∈ C, we are done. Otherwise, we continue the process by constructing a new polyhedron S_{k+1} that contains C but does not contain v^k, and solve the new relaxed problem. Namely, we can describe the algorithm in detail as follows.

Algorithm 1

Initialization. Take a simple polyhedral convex set S₀ (for example, a simplex) containing C. Let V(S₀) denote the set of vertices of S₀.

Iteration k (k = 0, 1, ...). At the beginning of iteration k, we have a polyhedral convex set S_k whose vertex set V_k is known. For each v ∈ V_k, solve the following optimization problem:

  m_{S_k}(v) := min { F(v)^T(y − v) + φ(y) − φ(v) : y ∈ V_k }.  (22)

Let v^k ∈ V_k be such that m_{S_k}(v^k) := max_{v ∈ V_k} m_{S_k}(v). If v^k ∈ C, terminate: v^k is a global solution to (1). Otherwise, construct a cutting hyperplane l_k(y) such that l_k(v^k) > 0 and l_k(y) ≤ 0 for every y ∈ C, and define

  S_{k+1} := { y ∈ S_k : l_k(y) ≤ 0 }.  (23)

Compute V_{k+1}, the set of vertices of S_{k+1}. Increase k by 1 and repeat.

Convergence of Algorithm 1 depends upon the construction of the cutting hyperplane. For example, when C is defined by C = { y : c(y) ≤ 0 } with c a closed convex, subdifferentiable function, one can determine a cutting hyperplane by taking

  l_k(y) := c(v^k) + (p^k)^T(y − v^k),  (24)

where p^k ∈ ∂c(v^k). In the case where C has an interior point, we can use the cutting plane determined in the following lemma.

LEMMA 3.3 [6]. Let {v^k} ⊆ R^n \ C be a bounded sequence, v⁰ ∈ int C, y^k ∈ [v⁰, v^k] \ int C, p^k ∈ ∂c(y^k) and α_k ≤ c(y^k). If, for every k, the affine function l_k(x) := (p^k)^T(x − y^k) + α_k satisfies

  l_k(v^k) ≥ 0,  l_k(x) ≤ 0 for all x ∈ C,

then every cluster point of the sequence {v^k} belongs to C.

The following theorem shows the convergence of the outer approximation algorithm.

THEOREM 3.4. Suppose that problem (1) is solvable. Suppose, in addition, that F and φ are continuous on S₀ and the cutting hyperplane used in the outer approximation algorithm is given as in Lemma 3.3. Then, any cluster point of the sequence {v^k} is a global solution to problem (1).
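The cut (24) is the computational core of the outer approximation scheme. A minimal sketch, assuming the illustrative feasible set C = { y : c(y) ≤ 0 } with c(y) = ‖y‖² − 1 (the unit disk, not an example from the paper): given an infeasible point v, the affine function l(y) = c(v) + p^T(y − v) with p ∈ ∂c(v) is positive at v but nonpositive on all of C, so the constraint l(y) ≤ 0 cuts v off while keeping C.

```python
import numpy as np

# Illustrative set: C = {y : c(y) <= 0} with c(y) = ||y||^2 - 1 (the unit disk in R^2).
cfun = lambda y: float(y @ y) - 1.0
grad = lambda y: 2.0 * y                    # gradient (a subgradient) of c at y

def cut(v):
    """Return the affine cutting function l_k of (24) built at the infeasible point v."""
    p = grad(v)
    return lambda y: cfun(v) + p @ (y - v)

v = np.array([1.5, 0.5])                    # a relaxed-problem solution v^k outside C
l = cut(v)

# l is affine, so its maximum over the disk is attained on the boundary circle.
boundary = [np.array([np.cos(t), np.sin(t)]) for t in np.linspace(0.0, 2.0 * np.pi, 400)]
print(l(v) > 0, max(l(y) for y in boundary) < 0)   # → True True: v is cut off, C is kept
```

The validity of the cut is just the subgradient inequality: for y ∈ C, c(v) + p^T(y − v) ≤ c(y) ≤ 0.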
We omit the proof of Theorem 3.4, since it can be carried out in a similar way to the proofs of outer approximation schemes in global optimization (see, e.g., [6]). We note, as is well known in global optimization, that the number of newly generated vertices of a polyhedron obtained by adding a cutting hyperplane to a given polyhedron may grow very quickly in high-dimensional spaces. So this outer approximation method is expected to work well only for problems of moderate dimension.

4. A Cournot-Nash oligopolistic equilibrium model

As an example of DC mixed variational inequalities, we consider in this section a Cournot-Nash oligopolistic market equilibrium model. In this model, it is assumed that there are n firms producing a common homogeneous commodity and that the price p_i of firm i depends on the total quantity σ := Σ_{i=1}^n x_i of the commodity. Let h_i(x_i) denote the cost of firm i when its production level is x_i. Suppose that the profit of firm i is given by

  f_i(x_1, ..., x_n) = x_i p_i( Σ_{j=1}^n x_j ) − h_i(x_i)   (i = 1, ..., n),  (25)

where the cost function h_i of firm i is assumed to depend only on its production level. Let C_i ⊆ R (i = 1, ..., n) denote the strategy set of firm i. Each firm seeks to maximize its own profit by choosing the corresponding production level, under the presumption that the production of the other firms is a parametric input. In this context, a Nash equilibrium is a production pattern in which no firm can increase its profit by changing its controlled variables; under this equilibrium concept, each firm determines its best response given the other firms' actions. Mathematically, a point x* = (x*_1, ..., x*_n) ∈ C := C_1 × ··· × C_n is said to be a Nash equilibrium point if

  f_i(x*_1, ..., x*_{i−1}, y_i, x*_{i+1}, ..., x*_n) ≤ f_i(x*_1, ..., x*_n) for all y_i ∈ C_i, i = 1, ..., n.  (26)

When h_i is affine, this market problem can be formulated as a special Nash equilibrium problem in n-person non-cooperative game theory, which in turn is a strongly monotone variational inequality (see, e.g., [7]).

Let

  Ψ(x, y) := − Σ_{i=1}^n f_i(x_1, ..., x_{i−1}, y_i, x_{i+1}, ..., x_n),  (27)

and

  Φ(x, y) := Ψ(x, y) − Ψ(x, x).  (28)

Then it has been proved [7] that the problem of finding an equilibrium point of this model can be formulated as the following equilibrium problem in the sense of Blum and Oettli [3] (see also [11]):

  Find x* ∈ C such that Φ(x*, y) ≥ 0 for all y ∈ C.  (EP)

In classical Cournot-Nash models [4,7], the price and the cost functions of each firm are assumed to be affine, of the forms

  p_i(σ) ≡ p(σ) = α₀ − β₀ σ,  α₀ ≥ 0, β₀ > 0,  with σ = Σ_{i=1}^n x_i,
  h_i(x_i) = μ_i x_i + ξ_i,  μ_i ≥ 0, ξ_i ≥ 0  (i = 1, ..., n).

In this case, using (25)-(28), it is easy to check that

  Φ(x, y) = (Ãx + μ − α)^T(y − x) + y^T A y − x^T A x,

where A = β₀ I, Ã = β₀(E − I) (with I the identity matrix and E the matrix of all ones), α = (α₀, ..., α₀)^T and μ = (μ₁, ..., μ_n)^T. Then the problem of finding a Nash equilibrium point can be formulated as the mixed variational inequality: find x ∈ C such that

  (Ãx + μ − α)^T(y − x) + y^T A y − x^T A x ≥ 0 for all y ∈ C.  (29)

Let Q := 2A + Ã. Since β₀ > 0, it is easy to see from the definition of Q that this matrix is symmetric and positive definite. Mixed variational inequality (29) can be reformulated equivalently as the strongly convex quadratic programming problem

  (QP)  min { (1/2) x^T Q x + (μ − α)^T x : x ∈ C }.

Hence, this problem has a unique optimal solution, which is also the unique equilibrium point of the classical oligopolistic market equilibrium model.

Oligopolistic market equilibrium models in which the profit functions f_i (i = 1, ..., n) of each firm are assumed to be differentiable and convex with respect to its own production level x_i, the other production levels being fixed, are studied in [4]; this convex model is formulated equivalently as a monotone variational inequality.

The assumption that the cost depends linearly on the quantity of the commodity is, in general, not practical, since usually the cost per unit decreases when the quantity of the commodity exceeds a certain amount. Taking this fact into account, in the sequel we consider oligopolistic market equilibrium models with concave cost functions.

Cournot-Nash oligopolistic equilibrium market models with piecewise concave cost functions have been considered recently in [11], where the authors developed a global method for obtaining a global equilibrium point. In that algorithm, the strategy box C is divided into subboxes, and on each of them the cost function is affine.
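Returning to the classical affine case above, the equilibrium can be computed directly from (QP). A numerical sketch with made-up data (2 firms, α₀ = 10, β₀ = 1, μ = (2, 4), box strategy sets C = [0, 5]²), solved by projected gradient on the box:

```python
import numpy as np

# Made-up instance of the classical affine model. Assuming, as reconstructed above,
# A = beta0*I and A~ = beta0*(E - I), so that Q = 2A + A~ = beta0*(I + E).
n, alpha0, beta0 = 2, 10.0, 1.0
mu = np.array([2.0, 4.0])
Q = beta0 * (np.eye(n) + np.ones((n, n)))
b = mu - alpha0 * np.ones(n)               # linear term mu - alpha of (QP)

x = np.zeros(n)
step = 0.3                                 # < 2/lambda_max(Q), so the iteration converges
for _ in range(500):
    # projected-gradient step for (QP) over the box C = [0, 5]^2
    x = np.clip(x - step * (Q @ x + b), 0.0, 5.0)
print(x)   # ≈ [3.3333, 1.3333]: the unique equilibrium production levels
```

For this instance the unconstrained minimizer (10/3, 4/3) lies inside the box, so it is the equilibrium; the projection only matters when a firm's capacity constraint binds.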
affine Thus the proposed algorithm works well when n is relatively small, but it becomes expensive when n gets larger With a similar way as in [11], in this model we suppose that the cost functions hi(Á) (i ¼ P1, , n) are concave (not necessarily piecewise) and that the price function p njẳ1 xj ị can change from firm to firm Namely, the price has the following form: ! n n X X pi ị :ẳ pi xj ẳ i À i xj , i ! 0, i i ẳ 1, , nị: 30ị j¼1 j¼1 In this case, we denote by T ¼ ð 1 , 2 , , n Þ, 0 ÁÁÁ 0 ÁÁÁ B :¼ 4ÁÁÁ ÁÁÁ ÁÁÁ ÁÁÁ 0 ÁÁÁ 7 7, ÁÁÁ5 1 ÁÁÁ 2 Á Á Á B~ :¼ 4ÁÁÁ ÁÁÁ ÁÁÁ ÁÁÁ ÁÁÁ5 n n n n and hxị :ẳ n X hi xi ị i¼1 with hi (i ¼ 1, , n) being concave functions Obviously, B is a symmetric positive definite matrix Then, we can formulate problem (EP) as follows: Find xà C such that: f ðxà , yÞ :ẳ Fx ịT y x ị ỵ yÞ À ’ðxÃ Þ ! for all y C, ~ and (x) :ẳ xTBx ỵ h(x) Since B is symmetric positive definite where Fxị :ẳ Bx and h is concave, this problem is of the form (1) Acknowledgement This work is supported in part by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) References [1] L.T.H An and T Pham Dinh, The DC (difference of convex functions) and DCA revisited with DC models of real world nonconvex optimization problems, Ann Oper Res 133 (2005), pp 23–47 Downloaded by [Universidad Autonoma de Barcelona] at 01:29 23 October 2014 76 L.D Muu and T.D Quoc [2] P.N Anh, L.D Muu, V.H Nguyen, and J.J Strodiot, On the contraction and nonexpansiveness properties of the marginal mappings in generalized variational inequalities involving co-coercive operators, in Generalized Convexity and Monotonicity, A Eberhard, N Hadjisavvas and D.T Luc, eds., Springer-Verlag, New York, 2005, Chapter 5, pp 89–111 [3] E Blum and W Oettli, From optimization and variational inequality to equilibrium problems, Math Stud 63 (1994), pp 127–149 [4] F Facchinei and J.S Pang, Finite-Dimensional Variational Inequalities and Complementary Problems, Springer-Verlag, New York, 2003 
[5] M. Fukushima, Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems, Math. Program. 53 (1992), pp. 99–110.
[6] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, Springer-Verlag, New York, 2003.
[7] I.V. Konnov, Combined Relaxation Methods for Variational Inequalities, Springer-Verlag, Berlin, 2000.
[8] I.V. Konnov and S. Kum, Descent methods for mixed variational inequalities in Hilbert spaces, Nonlinear Anal.: Theory Methods Appl. 47 (2001), pp. 561–572.
[9] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Rev. Française Inform. Rech. Opér. (1970), pp. 154–159.
[10] L.D. Muu, An augmented penalty function method for solving a class of variational inequalities, USSR Comput. Math. Math. Phys. 12 (1986), pp. 1788–1796.
[11] L.D. Muu, V.H. Nguyen, and N.V. Quy, On Nash–Cournot oligopolistic market equilibrium problems with concave cost functions, J. Glob. Optim. 41 (2007), pp. 351–364.
[12] M.A. Noor, Iterative schemes for quasi-monotone mixed variational inequalities, Optimization 50 (2001), pp. 29–44.
[13] D.T. Pham, Algorithms for solving a class of nonconvex optimization problems: Methods of subgradients, in Fermat Days 85: Mathematics for Optimization, J.-B. Hiriart-Urruty, ed., Elsevier Science, North-Holland, 1986, pp. 249–270.
[14] D.T. Pham and L.T.H. An, Convex analysis approach to DC programming: Theory, algorithms and applications, Acta Math. Vietnam. 22 (1997), pp. 289–355.
[15] D.T. Pham and L.T.H. An, A DC optimization algorithm for solving the trust-region subproblem, SIAM J. Optim. (1998), pp. 476–505.
[16] D.T. Pham, L.T.H. An, and F. Akoa, Combining DCA and interior point techniques for large-scale nonconvex quadratic programming, Optim. Methods Softw. 23 (2008), pp. 609–629.
[17] R.T. Rockafellar, On the maximality of sums of nonlinear monotone operators, Trans. Amer. Math. Soc. 149 (1970), pp. 75–87.
[18] R.T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim. 14 (1976), pp. 877–898.
[19] G. Salmon, J.J. Strodiot, and V.H. Nguyen, A bundle method for solving variational inequalities, SIAM J. Optim. 14 (2004), pp. 869–893.
[20] W.-Y. Sun, R.J.B. Sampaio, and M.A.B. Candido, Proximal point algorithm for minimization of DC function, J. Comput. Math. 21 (2003), pp. 451–462.
[21] D. Zhu and P. Marcotte, Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities, SIAM J. Optim. (1996), pp. 714–726.
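As a concrete illustration of the classical-model reduction to the strongly convex program (QP), the sketch below uses small hypothetical data (the values of n, beta, alpha_0, mu and the box C = [0, 5]^n are illustrative, not taken from the paper). It builds Q = 2A + A~, solves (QP), and then checks that the resulting point is a Nash equilibrium by verifying that no firm can improve its profit by deviating unilaterally:

```python
import numpy as np

# Hypothetical data: n = 3 firms, common price p(s) = a0 - b*s and
# linear costs h_i(x_i) = mu_i * x_i, over the strategy box C = [0, 5]^n.
n = 3
b, a0 = 1.0, 10.0                      # beta > 0, alpha_0
mu = np.array([1.0, 2.0, 3.0])

I, E = np.eye(n), np.ones((n, n))
A = b * I                              # A  = beta * I
A_til = b * (E - I)                    # A~ = beta * (E - I)
Q = 2 * A + A_til                      # Q = 2A + A~, symmetric positive definite

# (QP): minimize 0.5 x'Qx + (mu - a0*e)'x over C.  The unconstrained
# minimizer solves Qx = a0*e - mu; here it lies in the interior of C,
# so it is also the optimal solution of (QP).
x = np.linalg.solve(Q, a0 * np.ones(n) - mu)
assert (x >= 0).all() and (x <= 5).all()
print(x)                               # -> [3. 2. 1.]

# Sanity check: at x, no firm can improve its profit by a unilateral
# deviation over a fine grid of feasible production levels.
grid = np.linspace(0.0, 5.0, 501)
for i in range(n):
    others = x.sum() - x[i]
    profit = grid * (a0 - b * (others + grid)) - mu[i] * grid
    assert abs(grid[np.argmax(profit)] - x[i]) < 1e-9
```

The interior-point shortcut above is valid only because the unconstrained minimizer happens to lie inside C; in general (QP) must be solved with the box constraint active.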

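The concave-cost formulation of (EP) can be sanity-checked numerically. The sketch below uses hypothetical data (firm-specific alpha_i, beta_i and concave costs h_i(t) = c_i * sqrt(t); none of these numbers come from the paper) to build B, B~, F and phi, and verifies that f(x, y) = F(x)'(y - x) + phi(y) - phi(x) coincides with the sum of the firms' unilateral profit differences, in line with the derivation of (EP):

```python
import numpy as np

# Hypothetical data: n = 3 firms with firm-specific prices
# p_i(s) = alpha_i - beta_i * s and concave costs h_i(t) = c_i * sqrt(t).
n = 3
alpha = np.array([10.0, 12.0, 11.0])
beta = np.array([1.0, 1.5, 2.0])       # beta_i > 0
c = np.array([2.0, 1.0, 3.0])

B = np.diag(beta)                      # diagonal, symmetric positive definite
B_til = beta[:, None] * (1 - np.eye(n))  # beta_i off the diagonal, 0 on it

F = lambda x: B_til @ x - alpha        # affine part of the bifunction
h = lambda x: np.sum(c * np.sqrt(x))   # concave cost term
phi = lambda x: x @ B @ x + h(x)       # DC function: convex quadratic + concave h

def f(x, y):
    """Bifunction f(x, y) = F(x)'(y - x) + phi(y) - phi(x)."""
    return F(x) @ (y - x) + phi(y) - phi(x)

def profit(i, x):
    """Profit of firm i at the joint production level x."""
    return x[i] * (alpha[i] - beta[i] * x.sum()) - c[i] * np.sqrt(x[i])

# Check: f(x, y) equals the Nikaido-Isoda-type sum of unilateral
# profit differences sum_i [f_i(x) - f_i(x with x_i replaced by y_i)].
x = np.array([1.0, 2.0, 0.5])
y = np.array([2.5, 0.4, 1.0])
direct = 0.0
for i in range(n):
    xi = x.copy()
    xi[i] = y[i]
    direct += profit(i, x) - profit(i, xi)
assert np.isclose(f(x, y), direct)
```

Note that phi is indeed a DC function here: the quadratic term x'Bx is convex (B is positive definite) while h is concave, which is exactly the structure required by problem (1).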