Extragradient algorithms extended to equilibrium problems

D. Quoc Tran (a), M. Le Dung (b,*) and Van Hien Nguyen (c)

(a) National University of Hanoi, Vietnam; (b) Hanoi Institute of Mathematics, Vietnam; (c) University of Namur, Belgium

Optimization: A Journal of Mathematical Programming and Operations Research, Vol. 57, No. 6, December 2008, pp. 749-776. Published online: 08 Nov 2008. DOI: 10.1080/02331930601122876 (http://dx.doi.org/10.1080/02331930601122876). ISSN 0233-1934 print / ISSN 1029-4945 online. (C) 2008 Taylor & Francis.

*Corresponding author. Email: ldmuu@math.ac.vn
This article is dedicated to the memory of W. Oettli.

(Received September 2005; final version received 17 October 2006)

Abstract. We make use of the auxiliary problem principle to develop iterative algorithms for solving equilibrium problems. The first one is an extension of the extragradient algorithm to equilibrium problems. In this algorithm the equilibrium bifunction is not required to satisfy any monotonicity property, but it must satisfy a certain Lipschitz-type condition. To avoid this requirement we propose linesearch procedures, commonly used in variational inequalities, to obtain projection-type algorithms for solving equilibrium problems. Applications to mixed variational inequalities are discussed. A special class of equilibrium problems is investigated and some preliminary computational results are reported.

Keywords: equilibrium problem; extragradient method; variational inequality; linesearch; auxiliary problem principle

Mathematics Subject Classifications 2000: 65K10; 90C25
1. Introduction and the problem statement

Let K be a nonempty closed convex subset of the n-dimensional Euclidean space $\mathbb{R}^n$ and let $f : K \times K \to \mathbb{R} \cup \{+\infty\}$. Consider the following equilibrium problem in the sense of Blum and Oettli [6]:

Find $x^* \in K$ such that $f(x^*, y) \ge 0$ for all $y \in K$,   (PEP)

where $f(x, x) = 0$ for every $x \in K$. As usual, we call a bifunction satisfying this property an equilibrium bifunction on K.

Equilibrium problems have been considered by several authors (see e.g. [6,12,13,21,22] and the references therein). It is well known (see e.g. [13,21,23]) that various classes of mathematical programming problems, variational inequalities, fixed-point problems, Nash equilibria in noncooperative game theory and minimax problems can be formulated in the form of (PEP).

The proximal point method was first introduced by Martinet in [16] for solving variational inequalities and then extended by Rockafellar [28] to the problem of finding a zero of a maximal monotone operator. Moudafi [20] further extended the proximal point method to monotone equilibrium problems. Konnov [14] used the proximal point method for solving Problem (PEP) with f being a weakly monotone equilibrium bifunction. Another strategy is to use, as for variational inequality problems, a gap function in order to convert an equilibrium problem into an optimization problem [14,18]; in general, the transformed mathematical programming problem is not convex.

The auxiliary problem principle, first introduced by Cohen in [7] for solving optimization problems and then extended to variational inequalities in [8], has become a useful tool for analyzing and developing efficient algorithms for various classes of mathematical programming and variational inequality problems (see e.g. [1,2,7-9,11,24,29] and the references cited therein). Recently, Mastroeni [17] further extended the auxiliary problem principle to equilibrium problems involving strongly monotone equilibrium bifunctions that satisfy a Lipschitz-type condition. Noor [25] used the auxiliary problem principle to develop iterative methods for problems where the equilibrium bifunctions are supposed to be partially relaxed strongly monotone. As in the proximal point method, the subproblems to be solved in these methods are strongly monotone equilibrium problems. In a recent article, Nguyen et al. [31] developed a bundle method for problems where the equilibrium bifunctions satisfy a certain cocoercivity condition. A continuous extragradient method is proposed in [3] for solving equilibrium problems with skew bifunctions.

It is well known that algorithms based upon the auxiliary problem principle are, in general, not convergent for monotone variational inequalities, which are special cases of the monotone equilibrium problem (PEP). To overcome this drawback, the extragradient method, first introduced by Korpelevich [15] for finding saddle points, is used to solve monotone, and even pseudomonotone, variational inequalities [9,23,24].

In this article, we use the auxiliary problem principle to extend the extragradient method to equilibrium problems. In this way, we obtain extragradient algorithms for solving Problem (PEP). Convergence of the proposed algorithms does not require f to satisfy any type of monotonicity, but f must satisfy a certain Lipschitz-type condition as introduced in [17]. In order to avoid this requirement, we use a linesearch technique to obtain convergent algorithms for solving (PEP).

The rest of the article is organized as follows. In the next section, we give fixed-point formulations of Problem (PEP). We then use these formulations in the third section to describe an extragradient algorithm for (PEP). Section four is devoted to linesearch algorithms and their convergence results, which avoid the aforementioned Lipschitz-type condition. In section five, we discuss applications of the proposed algorithms to mixed multivalued variational inequalities. The last section contains some preliminary computational results and experiments.
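The reductions to the form (PEP) cited above from [13,21,23] can be written out explicitly. Two standard instances, stated here only for illustration with symbols $\varphi$ and $F$ that do not appear in this excerpt, are the following. For an optimization problem $\min_{x \in K} \varphi(x)$, take $f(x, y) := \varphi(y) - \varphi(x)$; then $f(x, x) = 0$ and $x^*$ minimizes $\varphi$ over K exactly when $f(x^*, y) = \varphi(y) - \varphi(x^*) \ge 0$ for all $y \in K$. For the variational inequality of finding $x^* \in K$ with $\langle F(x^*), y - x^* \rangle \ge 0$ for all $y \in K$, take $f(x, y) := \langle F(x), y - x \rangle$; again $f(x, x) = 0$ and the two solution sets coincide by definition.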
2. Fixed-point formulations

First we recall some well-known definitions on monotonicity that we need in the sequel.

Definition 2.1. Let M and K be nonempty convex sets in $\mathbb{R}^n$ with $M \subseteq K$, and let $f : K \times K \to \mathbb{R} \cup \{+\infty\}$. The bifunction f is said to be
(a) strongly monotone on M with constant $\tau > 0$ if for each pair $x, y \in M$ we have $f(x, y) + f(y, x) \le -\tau \|x - y\|^2$;
(b) strictly monotone on M if for all distinct $x, y \in M$ we have $f(x, y) + f(y, x) < 0$;
(c) monotone on M if for each pair $x, y \in M$ we have $f(x, y) + f(y, x) \le 0$;
(d) pseudomonotone on M if for each pair $x, y \in M$, $f(x, y) \ge 0$ implies $f(y, x) \le 0$.

From the definition above we obviously have the implications (a) $\Rightarrow$ (b) $\Rightarrow$ (c) $\Rightarrow$ (d).
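To make Definition 2.1 concrete, the short numerical check below samples random pairs for the affine bifunction $f(x, y) = \langle Ax + b, y - x \rangle$ with A symmetric positive definite, for which $f(x, y) + f(y, x) = -(x - y)^{\top} A (x - y) \le -\lambda_{\min}(A)\|x - y\|^2$, i.e. strong monotonicity with $\tau = \lambda_{\min}(A)$. The matrix, the vector and the sample size are illustrative choices, not data from the paper.

```python
# Empirical check of Definition 2.1(a) for f(x, y) = <A x + b, y - x>.
# For symmetric positive definite A, f(x, y) + f(y, x) = -(x - y)^T A (x - y),
# so f is strongly monotone with tau = lambda_min(A) (hence also strictly
# monotone, monotone and pseudomonotone).  A, b and the sample size are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [1.0, 2.0]])        # symmetric positive definite
b = np.array([1.0, -1.0])
tau = np.linalg.eigvalsh(A).min()             # strong-monotonicity constant

def f(x, y):
    return (A @ x + b) @ (y - x)

violations = 0
for _ in range(10_000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    if f(x, y) + f(y, x) > -tau * np.linalg.norm(x - y) ** 2 + 1e-10:
        violations += 1
print("violations of the strong-monotonicity inequality:", violations)   # expect 0
```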
Following [14], associated with (PEP) we consider the dual problem

Find $x^* \in K$ such that $f(y, x^*) \le 0$ for all $y \in K$.   (DEP)

For each $x \in K$, let

$L_f(x) := \{ y \in K : f(x, y) \le 0 \}$.

Clearly, $x^*$ is a solution to (DEP) if and only if $x^* \in \bigcap_{x \in K} L_f(x)$. We will denote by $K^*$ and $K^d$ the solution sets of (PEP) and (DEP), respectively. Conditions under which (PEP) and (DEP) have solutions can be found, for example, in [6,12,13,30] and the references therein. Since $K^d = \bigcap_{x \in K} L_f(x)$, the solution set $K^d$ is closed and convex whenever $f(x, \cdot)$ is closed and convex on K. In general, $K^*$ may not be convex. However, if f is closed and convex on K with respect to the second variable and hemicontinuous with respect to the first variable, then $K^*$ is convex and $K^d \subseteq K^*$. Moreover, if f is pseudomonotone on K, then $K^* = K^d$ (see [14,21]). In what follows, we suppose that $K^d \ne \emptyset$.

The following lemma gives a fixed-point formulation for (PEP).

Lemma 2.1 ([17,23]). Let $f : K \times K \to \mathbb{R} \cup \{+\infty\}$ be an equilibrium bifunction. Then the following statements are equivalent:
(i) $x^*$ is a solution to (PEP);
(ii) $x^*$ is a solution to the problem

$\min_{y \in K} f(x^*, y)$.   (2.1)

The main drawback of the fixed-point formulation given by Lemma 2.1 is that Problem (2.1), in general, may not have a solution, and if it does, the solution may not be unique. To avoid this situation, it is very helpful to use another auxiliary equilibrium problem that is equivalent to (PEP).

Let $L : K \times K \to \mathbb{R}$ be a nonnegative differentiable convex bifunction on K with respect to the second argument y (for each fixed $x \in K$) such that
(i) $L(x, x) = 0$ for all $x \in K$,
(ii) $\nabla_2 L(x, x) = 0$ for all $x \in K$,
where, as usual, $\nabla_2 L(x, x)$ denotes the gradient of the function $L(x, \cdot)$ at x. An important example of such a function is $L(x, y) := \frac{1}{2}\|y - x\|^2$. We consider the auxiliary equilibrium problem defined as

Find $x^* \in K$ such that $\rho f(x^*, y) + L(x^*, y) \ge 0$ for all $y \in K$,   (AuPEP)

where $\rho > 0$ is a regularization parameter. Applying Lemma 2.1 to the equilibrium bifunction $\rho f + L$, we see that $x^*$ solves (AuPEP) if and only if $x^*$ is a minimizer of the convex program

$\min_{y \in K} \{ \rho f(x^*, y) + L(x^*, y) \}$.   (2.2)

Equivalence between (PEP) and (AuPEP) is stated in the following lemma.

Lemma 2.2 ([17,23]). Let $f : K \times K \to \mathbb{R} \cup \{+\infty\}$ be an equilibrium bifunction and let $x^* \in K$. Suppose that $f(x^*, \cdot) : K \to \mathbb{R}$ is convex and subdifferentiable on K. Let $L : K \times K \to \mathbb{R}_+$ be a differentiable convex function on K with respect to the second argument y such that
(i) $L(x^*, x^*) = 0$,
(ii) $\nabla_2 L(x^*, x^*) = 0$.
Then $x^* \in K$ is a solution to (PEP) if and only if $x^*$ is a solution to (AuPEP).

We omit the proof for this nondifferentiable case because it is similar to the one given in [17,23] for the differentiable case.

3. An extragradient algorithm for EP

As we have mentioned, if $f(x, \cdot)$ is closed and convex on K and $f(\cdot, y)$ is upper hemicontinuous on K, then the solution set of (DEP) is contained in that of (PEP). In the following algorithm, as in [17], we use the auxiliary bifunction given by

$L(x, y) := G(y) - G(x) - \langle \nabla G(x), y - x \rangle$,   (3.1)

where $G : \mathbb{R}^n \to \mathbb{R}$ is a strongly convex (with modulus $\beta > 0$) and continuously differentiable function, for example $G(x) = \frac{1}{2}\|x\|^2$. Since G is strongly convex on the closed convex set K, the problem

$\min_{y \in K} \{ \rho f(x, y) + G(y) - G(x) - \langle \nabla G(x), y - x \rangle \}$   (Cx)

always admits a unique solution. Lemma 2.2 gives a fixed-point formulation for Problem (PEP) that suggests an iterative method for solving (PEP) by setting $x^{k+1} = s(x^k)$, where $s(x^k)$ is the unique solution of the strongly convex problem (Cx^k). Unfortunately, it is well known (see also [9]) that, for monotone variational inequality problems, which are special cases of the monotone equilibrium problem (PEP), the sequence $\{x^k\}$ may not be convergent. This fact suggested extending the extragradient method, introduced by Korpelevich [15] first for finding saddle points, to monotone variational inequalities [9,23].
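To see what the subproblem (Cx) looks like in the simplest setting, take $G(x) = \frac{1}{2}\|x\|^2$ (so $\beta = 1$) and the variational-inequality bifunction $f(x, y) = \langle F(x), y - x \rangle$; the symbol F is used only for this illustration. Completing the square gives

$\rho \langle F(x), y - x \rangle + \frac{1}{2}\|y - x\|^2 = \frac{1}{2}\|y - (x - \rho F(x))\|^2 - \frac{\rho^2}{2}\|F(x)\|^2$,

so the unique solution of (Cx) is the projected gradient step $\Pi_K(x - \rho F(x))$, where $\Pi_K$ is the Euclidean projection onto K. This is precisely the building block of the double-projection scheme recalled next.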
For the single-valued variational inequality problem given as

Find $x^* \in K$ such that $\langle F(x^*), x - x^* \rangle \ge 0$ for all $x \in K$,   (VIP)

the extragradient (or double-projection) method constructs two sequences $\{x^k\}$ and $\{y^k\}$ by setting

$y^k := \Pi_K(x^k - \rho F(x^k))$ and $x^{k+1} := \Pi_K(x^k - \rho F(y^k))$,

where $\rho > 0$ and $\Pi_K$ denotes the Euclidean projection onto K. Now we further extend the extragradient method to the equilibrium problem (PEP).

Throughout the rest of the article, we suppose that the function $f(x, \cdot)$ is closed, convex and subdifferentiable on K for each $x \in K$. Under this assumption, the subproblems to be solved in the algorithms below are convex programs with strongly convex objective functions. In Algorithm 1, which we are going to describe, the regularization parameter $\rho$ must satisfy some condition in order to obtain convergence (see the convergence Theorem 3.2).

Algorithm 1

Step 0. Take $x^0 \in K$ and $\rho > 0$, and set $k := 0$.

Step 1. Solve the strongly convex program

$\min_{y \in K} \{ \rho f(x^k, y) + G(y) - G(x^k) - \langle \nabla G(x^k), y - x^k \rangle \}$   (3.2)

to obtain its unique optimal solution $y^k$. If $y^k = x^k$, then stop: $x^k$ is a solution to (PEP). Otherwise, go to Step 2.

Step 2. Solve the strongly convex program

$\min_{y \in K} \{ \rho f(y^k, y) + G(y) - G(x^k) - \langle \nabla G(x^k), y - x^k \rangle \}$   (3.3)

to obtain its unique solution $x^{k+1}$.

Step 3. Set $k := k + 1$ and go back to Step 1.

The following lemma shows that, if Algorithm 1 terminates after a finite number of iterations, then a solution to (PEP) has already been found.

Lemma 3.1. If the algorithm terminates at some iterate $x^k$, then $x^k$ is a solution of (PEP).

Proof. If $y^k = x^k$, then, by the fact that $f(x, x) = 0$, we have

$\rho f(x^k, y^k) + G(y^k) - G(x^k) - \langle \nabla G(x^k), y^k - x^k \rangle = 0$.

Since $y^k = x^k$ is the solution of (3.2), we have

$0 = \rho f(x^k, y^k) + G(y^k) - G(x^k) - \langle \nabla G(x^k), y^k - x^k \rangle \le \rho f(x^k, y) + G(y) - G(x^k) - \langle \nabla G(x^k), y - x^k \rangle$ for all $y \in K$.

Thus, by Lemma 2.2, $x^k$ is a solution to (PEP). $\square$
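The following is a minimal numerical sketch of Algorithm 1, assuming $G(x) = \frac{1}{2}\|x\|^2$ (so each subproblem is $\min_{y \in K} \{\rho f(x, y) + \frac{1}{2}\|y - x^k\|^2\}$) and a box-shaped K, with the strongly convex subproblems handed to a generic bound-constrained solver. The affine test bifunction, the box, $\rho$ and the tolerance are illustrative choices, not data from the paper's experiments.

```python
# Minimal sketch of Algorithm 1 with G(x) = 0.5*||x||^2, so each subproblem is
#   min_{y in K} rho*f(center, y) + 0.5*||y - x^k||^2
# over a box K, solved here by a generic bound-constrained solver.
# The bifunction f, the box K, rho and the tolerance are illustrative choices.
import numpy as np
from scipy.optimize import minimize

def solve_subproblem(f, center, anchor, rho, bounds):
    """Return argmin_{y in K} rho*f(center, y) + 0.5*||y - anchor||^2, K a box."""
    obj = lambda y: rho * f(center, y) + 0.5 * np.sum((y - anchor) ** 2)
    return minimize(obj, x0=np.array(anchor, dtype=float),
                    bounds=bounds, method="L-BFGS-B").x

def extragradient_ep(f, x0, rho, bounds, tol=1e-8, max_iter=500):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        y = solve_subproblem(f, x, x, rho, bounds)   # Step 1: y^k from (3.2)
        if np.linalg.norm(y - x) <= tol:             # stopping test of Step 1
            break
        x = solve_subproblem(f, y, x, rho, bounds)   # Step 2: x^{k+1} from (3.3)
    return x

# Illustrative monotone instance: f(x, y) = <A x + b, y - x>, the bifunction of an
# affine variational inequality, with A positive definite and K = [0, 2]^2.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, 1.0])
f = lambda x, y: (A @ x + b) @ (y - x)
print(extragradient_ep(f, x0=[2.0, 2.0], rho=0.3, bounds=[(0.0, 2.0), (0.0, 2.0)]))
```

For this affine bifunction, the Lipschitz-type condition (3.4) of Theorem 3.2 below can be checked directly with $c_1 = c_2 = \|A\|/2$, and the value $\rho = 0.3$ used above stays below the resulting bound $\beta/(2c_1)$ with $\beta = 1$ for $G(x) = \frac{1}{2}\|x\|^2$.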
The following theorem establishes the convergence of the algorithm.

Theorem 3.2. Suppose that
(i) G is strongly convex with modulus $\beta > 0$ and continuously differentiable on an open set $\Omega$ containing K;
(ii) there exist two constants $c_1 > 0$ and $c_2 > 0$ such that

$f(x, y) + f(y, z) \ge f(x, z) - c_1 \|y - x\|^2 - c_2 \|z - y\|^2$ for all $x, y, z \in K$.   (3.4)

Then:
(a) for every $x^* \in K^d$ it holds that

$l(x^k) - l(x^{k+1}) \ge (\beta/2 - \rho c_1) \|y^k - x^k\|^2 + (\beta/2 - \rho c_2) \|x^{k+1} - y^k\|^2$ for all $k \ge 0$,

where $l(x) := G(x^*) - G(x) - \langle \nabla G(x), x^* - x \rangle$;
(b) if, in addition, $0 < \rho < \min\{\beta/(2c_1), \beta/(2c_2)\}$, then every cluster point of the sequence $\{x^k\}$ is a solution to (PEP); moreover, if $K^d = K^*$, then the whole sequence $\{x^k\}$ converges to a solution of (PEP).

Proof. Since G is strongly convex with modulus $\beta > 0$, for every $x, y \in K$ one has

$G(y) - G(x) - \langle \nabla G(x), y - x \rangle \ge \frac{\beta}{2} \|y - x\|^2$.   (3.11)

Applying (3.11) first with $(x^{k+1}, y^k)$ and then with $(y^k, x^k)$, we obtain from (3.10) that

$l(x^k) - l(x^{k+1}) \ge (\beta/2 - \rho c_1) \|y^k - x^k\|^2 + (\beta/2 - \rho c_2) \|x^{k+1} - y^k\|^2$ for all $k \ge 0$,   (3.12)

which proves (a).

Now we prove (b). By the assumption $0 < \rho < \min\{\beta/(2c_1), \beta/(2c_2)\}$, we have $\beta/2 - \rho c_1 > 0$ and $\beta/2 - \rho c_2 > 0$. Thus, from inequality (3.12), we deduce that

$l(x^k) - l(x^{k+1}) \ge (\beta/2 - \rho c_1) \|y^k - x^k\|^2 \ge 0$ for all $k$.   (3.13)

Thus $\{l(x^k)\}_{k \ge 0}$ is a nonincreasing sequence. Since it is bounded below by 0, it converges to some $l^* \ge 0$. Passing to the limit as $k \to \infty$, it is easy to see from (3.13) that

$\lim_{k \to \infty} \|y^k - x^k\| = 0$.   (3.14)

Note that, since G is $\beta$-strongly convex, by the definition of $l(x^k)$ we can write

$\frac{\beta}{2} \|x^* - x^k\|^2 \le l(x^k)$ for all $k$.

Thus, since $\{l(x^k)\}$ is convergent, the sequence $\{x^k\}_{k \ge 0}$ is bounded, so it has at least one cluster point. Let $\bar{x} \in K$ be any cluster point and let $\{x^{k_i}\}_{i \ge 0}$ be a subsequence such that $\lim_{i \to \infty} x^{k_i} = \bar{x}$. Then it follows from (3.14) that $\lim_{i \to \infty} y^{k_i} = \bar{x}$. Again, by Step 1 of the algorithm, we have

$\rho f(x^{k_i}, y) + G(y) - G(x^{k_i}) - \langle \nabla G(x^{k_i}), y - x^{k_i} \rangle \ge \rho f(x^{k_i}, y^{k_i}) + G(y^{k_i}) - G(x^{k_i}) - \langle \nabla G(x^{k_i}), y^{k_i} - x^{k_i} \rangle$ for all $y \in K$.

Since f is lower semicontinuous on $K \times K$, $f(\cdot, y)$ is upper semicontinuous on K and $f(x, x) = 0$, letting $i \to \infty$ we obtain from the last inequality that

$\rho f(\bar{x}, y) + G(y) - G(\bar{x}) - \langle \nabla G(\bar{x}), y - \bar{x} \rangle \ge 0$ for all $y \in K$,

which shows that $\bar{x}$ is a solution of the problem (AuPEP) corresponding to $L(x, y) = G(y) - G(x) - \langle \nabla G(x), y - x \rangle$. Then, by Lemma 2.2, $\bar{x}$ is a solution to (PEP).

Suppose now that $K^d = K^*$. We claim that the whole sequence $\{x^k\}_{k \ge 0}$ converges to $\bar{x}$. Indeed, using the definition of $l(x^k)$ with $x^* = \bar{x} \in K^d$, we have $l(\bar{x}) = 0$. Thus, as G is $\beta$-strongly convex, we can write

$l(x^k) - l(\bar{x}) = G(\bar{x}) - G(x^k) - \langle \nabla G(x^k), \bar{x} - x^k \rangle \ge \frac{\beta}{2} \|x^k - \bar{x}\|^2$ for all $k \ge 0$.   (3.15)

On the other hand, since $\{l(x^k)\}_{k \ge 0}$ is nonincreasing and $l(x^{k_i}) \to l(\bar{x})$, we must have $l(x^k) \to l(\bar{x})$ as $k \to \infty$. Thus, by (3.15), $\lim_{k \to \infty} x^k = \bar{x} \in K^*$. $\square$

Remark 3.1. Condition (3.4) does not necessarily imply that f is continuous. In fact, if $f(x, y) := \varphi(y) - \varphi(x)$, then (3.4) clearly holds true for any $c_1 \ge 0$, $c_2 \ge 0$ and any function $\varphi$.

4. Linesearch algorithms

Algorithm 1 requires that f satisfy the Lipschitz-type condition (3.4), which in some cases is not known to hold. In order to avoid this requirement, in this section we modify Algorithm 1 by using a linesearch. The linesearch technique has been used widely in descent methods for mathematical programming problems as well as for variational inequalities [9,13]. First, we begin with the following definition.

Definition 4.1 ([13]). Let K be a nonempty closed set in $\mathbb{R}^n$. A mapping $P : \mathbb{R}^n \to \mathbb{R}^n$ is said to be
(i) feasible with respect to K if $P(x) \in K$ for all $x \in \mathbb{R}^n$;
(ii) quasi-nonexpansive with respect to K if, for every $x \in \mathbb{R}^n$,

$\|P(x) - y\| \le \|x - y\|$ for all $y \in K$.   (4.1)

Note that if $\Pi_K$ is the Euclidean projection onto K, then $\Pi_K$ is a feasible quasi-nonexpansive mapping. We denote by $\mathcal{F}(K)$ the class of feasible quasi-nonexpansive mappings with respect to K.

Next, we choose a sequence $\{\sigma_k\}_{k \ge 0}$ such that

$\sigma_k \in (0, 2)$ for all $k = 0, 1, 2, \ldots$ and $\liminf_{k \to \infty} \sigma_k (2 - \sigma_k) > 0$.   (4.2)

The algorithm then can be described as follows.

Algorithm 2

Data: $x^0 \in K$, $\eta \in (0, 1)$, $\delta \in (0, 1)$ and $\rho > 0$.

Step 0. Set $k := 0$.

Step 1. Solve the following strongly convex optimization problem:

$\min_{y \in K} \left\{ f(x^k, y) + \frac{1}{\rho} \left[ G(y) - G(x^k) - \langle \nabla G(x^k), y - x^k \rangle \right] \right\}$   (4.3)
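Referring back to Definition 4.1, the short check below verifies numerically that the Euclidean projection onto a box (componentwise clipping) is feasible and satisfies the quasi-nonexpansiveness inequality (4.1), as the text notes for $\Pi_K$ in general. The box, the dimension and the sample count are arbitrary choices made for this illustration.

```python
# Numeric sanity check of Definition 4.1 for the Euclidean projection onto a box
# K = [lo, hi]^n, implemented by componentwise clipping.  The box, the dimension
# and the number of samples are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
lo, hi, n = -1.0, 1.0, 5

def proj_box(x):
    return np.clip(x, lo, hi)                 # always lands in K: feasibility

ok = True
for _ in range(1000):
    x = 5.0 * rng.normal(size=n)              # arbitrary point of R^n
    y = rng.uniform(lo, hi, size=n)           # arbitrary point of K
    px = proj_box(x)
    ok &= bool(np.all((px >= lo) & (px <= hi)))                      # feasible w.r.t. K
    ok &= np.linalg.norm(px - y) <= np.linalg.norm(x - y) + 1e-12    # inequality (4.1)
print("feasible and quasi-nonexpansive on all samples:", ok)
```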
