DSpace at VNU: Dual extragradient algorithms extended to equilibrium problems


J Glob Optim (2012) 52:139–159
DOI 10.1007/s10898-011-9693-2

Dual extragradient algorithms extended to equilibrium problems

Tran D. Quoc · Pham N. Anh · Le D. Muu

Received: June 2010 / Accepted: February 2011 / Published online: 19 February 2011
© Springer Science+Business Media, LLC 2011

Abstract  In this paper we propose two iterative schemes for solving equilibrium problems, called dual extragradient algorithms. In contrast with the primal extragradient methods in Quoc et al. (Optimization 57(6):749–776, 2008), which require solving two general strongly convex programs at each iteration, the dual extragradient algorithms proposed in this paper only need to solve, at each iteration, one general strongly convex program, one projection problem and one subgradient calculation. Moreover, we provide the worst-case complexity bounds of these algorithms, which have not yet been established for the primal extragradient methods. An application to Nash-Cournot equilibrium models of electricity markets is presented and implemented to examine the performance of the proposed algorithms.

Keywords  Dual extragradient algorithm · Equilibrium problem · Gap function · Complexity · Nash-Cournot equilibria

T. D. Quoc: Hanoi University of Science, Hanoi, Vietnam (e-mail: quoc.trandinh@esat.kuleuven.be)
Present address: T. D. Quoc, Department of Electrical Engineering (ESAT/SCD) and OPTEC, K.U. Leuven, Leuven, Belgium
P. N. Anh: Posts and Telecommunications Institute of Technology, Hanoi, Vietnam (e-mail: anhpn@ptit.edu.vn)
L. D. Muu: Institute of Mathematics, Hanoi, Vietnam (e-mail: ldmuu@math.ac.vn)

1 Introduction

In recent years, equilibrium problems (EP) have become an attractive field for many researchers, both in theory and in applications (see, e.g., [1,7–10,12,15,18,19,21,22,27–29,33,34] and the references quoted therein). It is well known that equilibrium problems include many important problems in nonlinear analysis and optimization, such as the Nash equilibrium problem, variational inequalities, complementarity problems, (vector) optimization problems, fixed point problems, saddle point problems and game theory [1,3,9,10,26]. Furthermore, they provide a rather general and suitable format for the formulation and investigation of various complex problems arising in economics, physics, transportation and network models (see, e.g., [7,10]). The typical form of an equilibrium problem is formulated by means of Ky Fan's inequality and is given as [1]:

    Find x* ∈ C such that f(x*, y) ≥ 0 for all y ∈ C,        (PEP)

where C is a nonempty closed convex subset of R^{n_x} and f : C × C → R is a bifunction such that f(x, x) = 0 for all x ∈ C. Problems of the form (PEP) are referred to as primal equilibrium problems. Associated with problem (PEP), the dual form is stated as:

    Find y* ∈ C such that f(x, y*) ≤ 0 for all x ∈ C.        (DEP)

Let S*_p and S*_d denote the solution sets of (PEP) and (DEP), respectively. Conditions for the nonemptiness of S*_p and S*_d and their characterizations can be found in many research papers and monographs (see, e.g., [10,12,14]).

Methods for solving problems (PEP)-(DEP) have been studied extensively. They can be roughly categorized into three popular approaches. The first direction uses gap functions. Instead of solving problem (PEP) directly, the methods based on gap functions convert the original problem into a suitable optimization problem, to which local optimization methods are then usually applied. Gap function-based methods frequently appear in optimization and applied mathematics; they were used for variational inequalities by Zhu and Marcotte [36], and Mastroeni [19] further exploited them for equilibrium problems. The second approach is based on the auxiliary problem principle: problem (PEP) is reformulated equivalently as an auxiliary problem, which is usually easier to solve than the original one. This principle was first introduced by Cohen [4] for optimization problems and then applied to variational inequalities in [5]. Mastroeni [18] further extended the auxiliary problem principle to equilibrium problems of the form (PEP) involving a strongly monotone bifunction and satisfying a certain Lipschitz-type condition. The third approach is the proximal point method. Proximal point methods were first investigated by Martinet [17] for solving variational inequalities and then deeply studied by Rockafellar [31] for finding a zero point of a maximal monotone operator. Recently, many researchers have exploited this method for equilibrium problems (see, e.g., [20,22]).

One of the methods for solving equilibrium problems based on the auxiliary problem principle was recently proposed in [8]; it is called a proximal-like method. The authors of [29] further extended this method and investigated its convergence under different assumptions. The methods in [29] are also called extragradient methods due to the results of Korpelevich [13]. The extragradient method for solving problem (PEP) generates two iterative sequences {x^k} and {y^k} as:

    y^k     := argmin{ ρ f(x^k, y) + G(x^k, y) : y ∈ C },
    x^{k+1} := argmin{ ρ f(y^k, y) + G(x^k, y) : y ∈ C },        (1)

where x^0 ∈ C is given, ρ > 0 is a regularization parameter, and G(x, y) is a Bregman distance function (see, e.g., [7,25]). Under mild conditions, the sequences {x^k} and {y^k} generated by scheme (1) converge simultaneously to a solution of problem (PEP).
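To make the per-iteration cost of scheme (1) concrete, the following is a minimal sketch, not the authors' code: it assumes the Euclidean Bregman distance G(x, y) = ½‖y − x‖², a user-supplied bifunction f that is convex in its second argument, and a box feasible set, and it solves the two strongly convex subproblems numerically with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def primal_extragradient(f, bounds, x0, rho=0.1, iters=50):
    """Primal extragradient scheme (1) with G(x, y) = 0.5*||y - x||^2.

    f      : bifunction f(x, y) -> float, convex in y for each fixed x
    bounds : list of (lo, hi) pairs describing the box C
    x0     : starting point in C
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # First strongly convex program: y^k
        obj1 = lambda y: rho * f(x, y) + 0.5 * np.sum((y - x) ** 2)
        y = minimize(obj1, x, bounds=bounds, method="L-BFGS-B").x
        # Second strongly convex program: x^{k+1} (prox centre is still x^k)
        obj2 = lambda z: rho * f(y, z) + 0.5 * np.sum((z - x) ** 2)
        x_next = minimize(obj2, y, bounds=bounds, method="L-BFGS-B").x
        if np.linalg.norm(x_next - x) < 1e-8:
            return x_next
        x = x_next
    return x

# Toy monotone example: f(x, y) = (A x + a)^T (y - x) with A positive semidefinite
A = np.array([[2.0, 1.0], [1.0, 2.0]])
a = np.array([-1.0, -1.0])
f = lambda x, y: (A @ x + a) @ (y - x)
sol = primal_extragradient(f, bounds=[(0.0, 5.0), (0.0, 5.0)], x0=[1.0, 1.0])
print(sol)
```

The point of the sketch is that both subproblems are general convex programs over C; the dual extragradient algorithms developed in this paper replace one of them by a projection onto C and a subgradient evaluation.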
The primal extragradient method (1) was further investigated in [33] in combination with interior proximal point methods. Recently, Nesterov [24] introduced a dual extrapolation method (also called the dual extragradient method in [32]) for solving monotone variational inequalities. Instead of working in the primal space, this method performs its main step in the dual space. Motivated by this work, and in comparison with the primal extragradient methods in [8,29], in this paper we extend the dual extrapolation method to convex monotone equilibrium problems. Note that in the primal extragradient method (1), two general convex programs need to be solved at each iteration. In contrast to the primal methods, the dual extragradient algorithms developed here only require (i) solving one general convex program, (ii) computing one projection onto a convex set, and (iii) calculating one subgradient of a convex function. In practice, if the feasible set C is simple (e.g., a box, ball or polytope), then the projection problem in (ii) is usually cheap to compute. Moreover, if the bifunction f is convex and differentiable with respect to the second argument, then problem (iii) collapses to the calculation of the gradient vector ∇₂f(x, ·) of f(x, ·) at x.
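As an illustration of why step (ii) is inexpensive for simple sets, the Euclidean projection onto a box or a ball has a closed form; the small sketch below is an illustration, not part of the paper.

```python
import numpy as np

def project_box(x, lower, upper):
    # Componentwise clipping gives the Euclidean projection onto a box.
    return np.minimum(np.maximum(x, lower), upper)

def project_ball(x, centre, radius):
    # Projection onto the closed ball B_R(centre): rescale only if x lies outside.
    d = x - centre
    norm_d = np.linalg.norm(d)
    if norm_d <= radius:
        return x.copy()
    return centre + radius * d / norm_d

x = np.array([3.0, -2.0, 0.5])
print(project_box(x, lower=-1.0, upper=1.0))             # [ 1. -1.  0.5]
print(project_ball(x, centre=np.zeros(3), radius=1.0))   # x scaled back onto the unit sphere
```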
The methods proposed in this paper look quite similar to the ergodic iteration scheme in [2,23], among many other averaging schemes in fixed point theory and nonlinear analysis. However, the methods developed here use the new information computed at the current iteration in a different way: the new information w^k enters the next iteration with the same weight, thanks to the guidance of the Lipschitz constants (see Algorithm 1 below). In contrast, in the method of [2] the new information w^k is not thoroughly exploited: the subgradient −w^k is used to compute the next iterate with a decreasing weight (t_k > 0 such that Σ_k t_k = ∞ and Σ_k t_k² ‖w^k‖² < ∞). Such averaging schemes are usually very sensitive to calculation errors and lead to slow convergence in practice.

The main contribution of this paper is twofold: algorithms and convergence theory. We provide two algorithms for solving (PEP)-(DEP) and prove their convergence. The worst-case complexity bounds of these algorithms are also estimated; this has not yet been done for the primal extragradient methods. An application to Nash-Cournot oligopolistic equilibrium models of electricity markets is presented. This problem is not monotone, so the algorithms developed here cannot be applied to it directly. However, we will show that it can be reformulated equivalently as a monotone equilibrium problem by means of the auxiliary problem principle.

The rest of this paper is organized as follows. In Sect. 2 a restricted dual gap function of (DEP) is defined and its properties are considered; then a scheme to compute the dual extragradient step is provided and its properties are investigated. The dual extragradient algorithms are presented in detail in Sect. 3, where their convergence is proved and the complexity bounds are estimated. An application to Nash-Cournot equilibrium models of electricity markets is presented and implemented in the last section.

Notation. Throughout this paper we use ‖·‖ for the Euclidean norm. The notation ":=" means "to be defined as". For a given real number x, [x] denotes the largest integer number which is less than or equal to x. A function f : C ⊆ R^{n_x} → R is said to be strongly convex on C with parameter ρ > 0 if f(·) − (ρ/2)‖·‖² is convex on C. The notation ∂f denotes the classical subdifferential of a convex function f, and ∂₂f is the subdifferential of f with respect to the second argument. If f is differentiable with respect to the second argument, then ∇₂f(x, ·) denotes the gradient vector of f(x, ·).

2 Dual extragradient scheme

Let X be a subset of R^{n_x} and let f : X × X → R ∪ {+∞} be a bifunction such that f(x, x) = 0 for all x ∈ X. We first recall the following well-known definitions, which will be used in the sequel (see [1,19,29]).

Definition 1  A bifunction f is said to be
a) strongly monotone on X with parameter ρ > 0 if f(x, y) + f(y, x) ≤ −ρ ‖x − y‖² for all x and y in X;
b) monotone on X if f(x, y) + f(y, x) ≤ 0 for all x and y in X;
c) pseudomonotone on X if f(x, y) ≥ 0 implies f(y, x) ≤ 0 for all x and y in X.

It is obvious from these definitions that (a) ⇒ (b) ⇒ (c). The following concept is familiar in nonlinear analysis. A multivalued mapping F : X ⊆ R^{n_x} → 2^{R^{n_x}} is said to be Lipschitz continuous on X with Lipschitz constant L > 0 if

    dist(F(x), F(y)) ≤ L ‖x − y‖  for all x, y ∈ X,        (2)

where dist(A, B) is the Hausdorff distance between the two sets A and B. The multivalued mapping F is said to be uniformly bounded on X if there exists M > 0 such that sup{dist(0, F(x)) | x ∈ X} ≤ M.

Throughout this paper, we will use the following assumptions:
A.1 The set of interior points int(C) of C is nonempty.
A.2 f(·, y) is upper semi-continuous on C for all y in C, and f(x, ·) is proper, closed, convex and subdifferentiable on C for all x in C.
A.3 f is monotone on C.

Note that if f(x, ·) is convex for all x ∈ C then S*_d ⊆ S*_p. If f is pseudomonotone on C then S*_p ⊆ S*_d. Therefore, under Assumptions A.1–A.3 one has S*_p ≡ S*_d (see, e.g., [10]). Now, let us
recall the dual gap function of problem (PEP) defined as follows [19]: g(x) := sup{ f (y, x) | y ∈ C} (3) Under Assumption A.2, to compute one value of g, a general optimization problem needs to be solved When f (·, x) is concave for all x ∈ C, it becomes a convex problem The following lemma shows that (3) is indeed a gap function of (DEP) whose proof can be found, for instance, in [11,19] Lemma The function g defined by (3) is a gap function of (DEP), i.e.: a) g(x) ≥ for all x ∈ C; 123 J Glob Optim (2012) 52:139–159 b) 143 x ∗ ∈ C and g(x ∗ ) = if and only if x ∗ is a solution of (DEP) If f is pseudomonotone then x ∗ is a solution of (DEP) if and only if it solves (PEP) Under Assumptions A.1–A.3, the gap function g may not be well-defined due to the fact that problem sup{ f (y, x) | y ∈ C} may not be solvable Instead of using gap function g, we consider a restricted dual gap function g R defined as follows Definition Suppose that x¯ ∈ int(C) is fixed and R > is given The restricted dual gap function of problem (DEP) is defined as: g R (x) := sup { f (y, x) | y ∈ C, y − x¯ ≤ R} (4) Let us denote by B R (x) ¯ := {y ∈ Rn x | y − x¯ ≤ R} the closed ball in Rn x of radius R centered at x, ¯ and by C R (x) ¯ := C ∩ B R (x) ¯ Then the characterizations of the restricted dual gap function g R are indicated in the following lemma Lemma Suppose that Assumptions A.1–A.3 hold Then: a) b) c) The function g R defined by (4) is well-defined and convex on C If x ∗ ∈ C R (x) ¯ is a solution of (DEP) then g R (x ∗ ) = If there exists x˜ ∈ C such that g R (x) ˜ = 0, x˜ − x¯ < R and f is pseudomonotone then x˜ is a solution of (DEP) (and, therefore, a solution of (PEP)) Proof Since f (·, x) is upper semi-continuous on C for all x ∈ C and B R (x) ¯ is bounded, the supremum in (4) attains Hence, g R is well-defined Moreover, since f (x, ·) is convex for all x ∈ C and g R is the supremum of a family of convex functions depending on parameter x, then g R is convex (see [30]) The statement a) is proved Now, we prove b) Since f (x, x) = for all x ∈ C, it immediately follows from the definition of g R that g R (x) ≥ for all x ∈ C Let x ∗ ∈ B R (x) ¯ be a solution of (DEP), we have f (y, x ∗ ) ≤ for all y ∈ C and, particularly, f (y, x ∗ ) ≤ for all x ∈ C ∩ B R (x) ¯ ≡ C R (x) ¯ Hence, g R (x ∗ ) = sup{ f (y, x ∗ ) | y ∈ C ∩ B R (x)} ¯ ≤ However, since g R (x) ≥ for all x ∈ C, we conclude that g R (x ∗ ) = By the definition of g R , it is obvious that g R is a gap function of (DEP) restricted to C ∩ B R (x) ¯ Therefore, if g(x) ˜ = for some x˜ ∈ C and x˜ − x¯ < R then x˜ is a solution of (DEP) restricted to C ∩ B R (x) ¯ On the other hand, since f is pseudomonotone, x˜ is also a solution of (PEP) restricted to C ∩ B R (x) ¯ Furthermore, since x˜ ∈ int(B R (x)), ¯ for any y ∈ C, we can choose t > sufficiently small such that yt := x˜ + t (y − x) ˜ ∈ B R (x) ¯ and ≤ f (x, ˜ yt ) = f (x, ˜ t y + (1 − t)x) ˜ ≤ t f (x, ˜ y) + (1 − t) f (x, ˜ x) ˜ = t f (x, ˜ y) Here, the middle inequality follows from the convexity of f (x, ˜ ·) and the last equality happens because f (x, x) = Since t > 0, dividing this inequality by t > 0, we conclude that x˜ is a solution of (PEP) on C Finally, since f is pseudomonotone, x˜ is also a solution of (DEP) The lemma is proved For a given nonempty, closed, convex set C ⊆ Rn x and an arbitrary point x ∈ Rn x , let us denote by dC (x) the Euclidean distance from x to C and by πC (x) the point attained this distance, i.e dC (x) := y − x , and πC (x) := argmin y − x y∈C y∈C (5) 123 144 J Glob 
Optim (2012) 52:139–159 It is well-known that πC is a nonexpansive and co-coercive operator on C [7] For any x, y ∈ Rn x and β > 0, we define the following function: Q β (x, y) := y − β dC2 x + y β (6) In this paper, the function Q β plays a role as a Lyapunov function in the investigation of the convergence of the algorithms [18,25] Lemma For given x, y ∈ Rn x , the function dC and the mapping πC defined by (5) satisfy: [πC (x) − x]T [v − πC (x)] ≥ 0, ∀v ∈ C dC2 (x + y) ≥ dC2 (x) + dC2 (πC (x) + (7) y) − 2y [πC (x) − x] T (8) Consequently, the function Q β defined by (6) possesses the following properties: a) b) For x, y ∈ Rn x , Q β (x, y) ≤ y If x ∈ C then Q β (x, y) ≥ β x − πC (x + for any y ∈ Rn x For x, y, z ∈ Rn x , it holds that y , z + 2βz T β Q β (x, y + z) ≤ Q β (x, y) + Q β πC x + πC x + β y) y −x β (9) Proof The inequality (7) is a well-known property of the projection mapping πC (see, e.g [7,10]) Now, we prove the inequality (8) For any v ∈ C, from (7) we have v − (x + y) = v − [πC (x) + y] + [πC (x) − x] = v − [πC (x) + y] + {v − [πC (x) + y]}T [πC (x) − x] + πC (x) − x = v − [πC (x) + y] 2 + 2[πC (x) − x]T [v − πC (x)] −2y [πC (x) − x] + πC (x) − x T ≥ v − [πC (x) + y] 2 − 2y T [πC (x) − x] + πC (x) − x By the definition of dC (·) and noting that dC2 (x) respect to v ∈ C in both sides of (10) we get = πC (x) − x (10) , taking the minimum with dC2 (x + y) ≥ dC2 (πC (x) + y) + dC2 (x) − 2y T [πC (x) − x] This inequality is indeed (8) The inequality Q β (x, y) ≤ y directly follows from the definition (6) of Q β Furthermore, if we denote by πCk := πC (x + β1 y) then, from (6), we have Q β (x, y) = y − β dC2 x + y − πCk β = β2 x+ = β2 x − πC x + y β = y − [x − πCk )] y β 1 y −x− y β β πCk − x − y β − β πC x + − β2 −2 x + y − πCk β T x − πCk (11) Since x ∈ C, applying (7) with πCk instead of πC (x) and v = x, it follows from (11) that Q β (x, y) ≥ β x − πC (x + β1 y) which proves the second part of a) 123 J Glob Optim (2012) 52:139–159 145 To prove (9), we substitute x by x + dC2 x + − zT β β βz y, and y by into (8) to obtain 1 1 (y + z) ≥ dC2 πC x + y + z + dC2 x + y β β β β 1 πC x + y − x + y β β Then subtracting the identity y + z = y + z + 2y T z to the last inequality after multiplying by β and using the definition of Q β , we get (9) Q β (x, y) then the function q(β) is Remark For any x, y ∈ Rn x , if we define q(β) := 2β nonincreasing with respect to β > 0, i.e q(β1 ) ≤ q(β2 ) for all β1 ≥ β2 > y − β2 v−x− β1 y = y T (v−x)− β2 v−x , Indeed, consider the function ψ(v, β) := 2β which is convex with respect to (v, β) Since q(β) = minv∈C ψ(v, β), it is convex (see [30]) On the other hand, q (β) := − πC (x + β1 y) − x ≤ Thus q is nonincreasing For a given integer number n ≥ 0, suppose that {x k }nk=0 is a finite sequence of arbitrary points in C and {λk }nk=0 ⊆ (0, +∞) is a finite sequence of positive numbers Let us define n Sn := λk , x¯ n := k=0 Sn n λk x k , (12) k=0 and ¯ , r R (w, x) := max w T (y − x) | y ∈ C R (x) (13) for given w ∈ Rn x and x ∈ C Clearly, the point x¯ n is a convex combination of {x k }nk=0 with given coefficients { λSnk }nk=0 Using the definition of g R and the convexity of f (x, ·), it is easy to show that g R (x¯ n ) = max f (y, x¯ n )|y ∈ C R (x) ¯ = max Sn ≤ max = max Sn f y, Sn n λk x k |y ∈ C R (x) ¯ k=0 n λk f y, x k |y ∈ C R (x) ¯ k=0 n λk f y, x k |y ∈ C R (x) ¯ := k=0 Sn n R The following lemma provides an upper estimation for the quantity (14) n R Lemma a) The function r R define by (13) satisfies ¯ ≤ r R (w, x) β R2 ¯ w) + Q β (x, 2β (15) 
b) Suppose that Assumptions A.1–A.3 hold and w k ∈ −∂2 f (x k , x k ) Then the quantity n defined by (14) satisfies: R n n R λk w k ≤ k=0 where s n := T x¯ − x k + β R2 ¯ sn + Q β x, , 2β (16) n k k=0 λk w 123 146 J Glob Optim (2012) 52:139–159 Proof Let us define L(x, ρ) := w T (y − x) ¯ + ρ(R − y − x¯ ) as the Lagrange function of the minimizing problem in (13) Using duality theory in convex optimization, for some β > 0, we have r R (w, x) ¯ = max w T (y − x) ¯ | y ∈ C, y − x¯ ≤ R2 = max w T (y − x) ¯ + ρ R − y − x¯ y∈C ρ≥0 = max w T (y − x) ¯ − ρ ≥0 = ρ ≥0 ≤ max 2ρ y∈C β R2 + 2β w w 2 ρ y − x¯ 2 + − ρ y − x¯ − w ρ y∈C − β y − x¯ − y∈C w β ρ R 2 + ρ R Thus the inequality (15) follows from this estimation by using the definition (6) of Q β From (14), by the monotonicity of f , we have: n n R n = max | y ∈ C R (x) ¯ ≤ max − λk f y, x k k=0 λk f x k , y | y ∈ C R (x) ¯ k=0 (17) T y − x k for all y ∈ C R (x) ¯ ⊆ C Since w k ∈ −∂2 f x k , x k , we have − f x k , y ≤ w k Multiplying this inequality by λk > and then summing up from k = to k = n we get n n − λk w k λk f x k , y ≤ k=0 k=0 n λk w k = T T y − xk n ¯ + (y − x) k=0 λk w k x¯ − x k k=0 n = sn T T ¯ + (y − x) λk w k T x¯ − x k (18) k=0 Combining (15), (17) and (18) we obtain n n R λk w k ≤ max (s n )T (y − x) ¯ | y ∈ C R (x) ¯ + T x¯ − x k k=0 n λk w k = r R s n , x¯ + T x¯ − x k k=0 n λk w k ≤ k=0 T x¯ − x k + β ¯ sn + R2, Q β x, 2β (19) which proves (16) For a given tolerance ε ≥ 0, we say that x¯ n is an ε-solution of (PEP) if g R (x¯ n ) ≤ ε and x¯ n − x¯ < R Note that if ε = then an ε-solution x¯ n is indeed a solution of (PEP) due to Lemma 123 J Glob Optim (2012) 52:139–159 147 Remark If we denote by rn := n λk (w k )T (x¯ − x k ) then g R (x¯ n ) ≤ Sn k=0 rn + r R (s n , x) ¯ Hence, x¯ n is an ε-solution of (PEP), it requires that rn + r R (s n , x) ¯ ≤ εSn (s n , x) ¯ (20) x¯ n ≤ εSn then is an ε-solution of (PEP) This condition can be Therefore, if rn + r R used as a stopping criterion of the algorithms described in the next section The main idea of designing our algorithms is to construct a sequence {x¯ n } such that the sequence {g R (x¯ n )} of the restricted dual gap function g R tends to as n → ∞ By virtue of Lemma 2, we can check whether or not x¯ n being an ε-solution of (PEP) Let s −1 := The dual extragradient step (u k , x k , s k , w k ) at iteration k(k ≥ 0) is computed as follows: ⎧ k−1 k ⎪ , ⎪ ⎨ u := πC x¯ + β s k k (21) x := argmin f u , y + βρ2 k y − u k | y ∈ C , ⎪ ⎪ ⎩ k k k−1 + ρk w , s := s where ρk > and β > are given parameters, and w k ∈ −∂2 f (x k , x k ) Lemma The sequence {(u k , x k , s k , w k )} generated by scheme (21) satisfies: 2β wk ρk T x¯ − x k + Q β x, ¯ s k ≤ Q β x, ¯ s k−1 − β x k − u k + 2β k ξ + wk ρk βρk wk + ξ k As a consequence, one has + Proof Applying the inequality (9) with x = x, ¯ y = s k−1 and z = = πC (x¯ + wk + ξ k ρk2 k ρk w (23) and noting that we get Q β (x, ¯ s k ) = Q β x, ¯ s k−1 + k w ρk ¯ s k−1 ) + Q πC x¯ + ≤ Q β (x, k−1 − x¯ s β 2β wk = Q(x, ¯ s k−1 ) + Q u k , w k + ρk ρk + (22) 2β k T (w ) (x¯ − x k ) + Q β (x, ¯ s k ) ≤ Q β (x, ¯ s k−1 ) − β x k − u k ρk k−1 ), βs − β πCk − x k πCk − x k , where ξ k ∈ ∂2 f (u k , x k ) and πCk := πC x k + uk 2β k T (w ) ρk k−1 s , wk β ρk πC x¯ + T u k − x¯ (24) Since f (u k , ·) is subdifferentiable, using the first order necessary condition for optimality in convex optimization, it follows from the second line of (21) that ξ k + βρk x k − u k T v − x k ≥ 0, ∀v ∈ C, (25) for some ξ k ∈ ∂2 f (u k , x k ) This inequality implies that 
x k = πC u k − k ξ βρk (26) 123 148 J Glob Optim (2012) 52:139–159 Now, applying again (9) with x = u k , y = − ρ1k ξ k and z = we obtain Qβ uk , k w ρk k ρk (ξ + w k ) and then using (26) k k ξ k + wk ξ + Q β πC u k − ξ , ρk βρk ρk T k 2β k ξ + wk πC u k − ξ − uk + ρk βρk ≤ Qβ uk , − = Qβ uk , − + k ξ + Q β x k , (ξ k + w k ) ρk ρk 2β k ξ + wk ρk T x k − uk (27) Then, from the definition of Q β and (26) we have Qβ uk , − k ξ ρk = k ξ ρk2 = k ξ ρk2 − β dC2 u k − − β πC u k − = −β x k − u k If we denote by πCk := πC x k + Qβ x k , k βρk (w − k k ξ − uk − ξ βρk βρk 2β k ξ ρk T x k − uk (28) + ξ k ) then 1 (w k + ξ k ) = w k + ξ k ρk ρk = k ξ βρk wk + ξ k ρk2 wk + ξ k βρk − β dC2 x k + − β πCk − x k + = −β πCk − x k + 2β wk + ξ k ρk T wk + ξ k βρk πCk − x k (29) Combining (24), (27), (28) and (29), we get ¯ s k ≤ Q β x, ¯ s k−1 − β x k − u k Q β x, + 2β k ξ + wk ρk −β πCk − x k T + ¯ s k−1 − β = Q β x, + 2β wk ρk T x k − uk 2β k T k (ξ ) x − u k ρk 2β k T k + (w ) u − x¯ ρk − 2β wk + ξ k ρk x k − uk x k − x¯ + T πCk − x k + πCk − x k 2β wk + ξ k ρk T πCk − x k , which proves (22) The inequality (23) follows from (24), (27), (28) and the fact that Q β x k , ρ1k w k + ξ k ≤ ρ2 wk + ξ k 123 (by the statement a) of Lemma 3) J Glob Optim (2012) 52:139–159 149 Note that the inequality (23) can be estimated in another form: 2β wk ρk T x¯ − x k + Q β x, ¯ s k ≤ Q β x, ¯ s k−1 − β x k − u k − β πCk − x k Indeed, let us denote by y k := x k + βρk the last term of (22) It is obvious that + wk + ξ k ρk2 2 (30) 2β k k T πk − xk C ρk ξ + w 2β y k − x k Substituting this w k + ξ k and by A := 2β ρk wk + ξ k = T T relation into the last term of (22) we get A = 2β y k − x k πck − x k = 2β y k − x k k k k k k k πC (y ) − πC (x ) ≤ 2β y − x = ρ w + ξ Here, the last inequality follows k from the nonexpansiveness of the projection mapping πC Lemma Under the assumptions of Lemma and for a given x k ∈ C: a) If ∂2 f (·, x k ) is Lipschitz continuous on C with a Lipschitz constant L k then L2 2β k T ¯ s k ≤ Q β x, ¯ s k−1 − β − 2k (w ) x¯ −x k + Q β x, ρk ρk x k − uk ¯ s k−1 , ≤ Q β x, b) (31) provided that βρk ≥ L k If ∂2 f (·, x k ) is uniformly bounded by Mk > on C then 2β wk ρk T x¯ −x k + Q β x, ¯ s k ≤ Q β x, ¯ s k−1 −β x k −u k ¯ s k−1 + ≤ Q β x, 4Mk2 ρk2 + 4Mk2 ρk2 (32) Proof Since ξ k ∈ ∂2 f (u k , x k ), w k ∈ −∂2 f (x k , x k ) and ∂2 f (·, x k ) is Lipschitz continuous on C with a Lipschitz constant L k , it implies that w k + ξ k ≤ dist ∂2 f u k , x k , ∂2 f x k , x k ≤ L k x k − uk (33) Substituting (33) into (23) we get L2 2β k T (w ) (x¯ − x k ) + Q β (x, ¯ s k ) ≤ Q β (x, ¯ s k−1 ) − β − 2k ρk ρk x k − uk Since ρk2 β ≥ L 2k , the last inequality implies (31) To prove (b), we note that w k + ξ k ≤ dist ∂2 f x k , x k , ∂2 f u k , x k ≤ dist ∂2 f (u k , x k ), + dist ∂2 f (x k , x k ), ≤ 2Mk Substituting this estimation into (23) we obtain (32) 123 150 J Glob Optim (2012) 52:139–159 If f is differentiable with respect to the second argument for all x ∈ C then the Lipschitz condition of ∂2 f (·, x) collapses to the Lipschitz continuity of ∇2 f (·, x) for fixed x ∈ C In the context of variational inequalities, the function f is defined by f (x, y) := F(x)T (y − x) which is always differentiable with respect to y Then the conclusions of Lemma hold for this case if F is Lipschitz continuous with a Lipschitz constant L (resp., uniformly bounded by M) on C Dual extragradient algorithms and their convergence In this section, for simplicity of discussion, we assume that the subdifferential mapping ∂2 f (·, x k 
) is Lipschitz continuous with the same Lipschitz constant L > (resp., uniformly bounded by the same constant M) for all k ≥ in Lemma 6, i.e L k ≡ L > (resp., Mk ≡ M > 0) for all k ≥ In practice, it is not necessary to fix the constants L k and Mk , they can be varied at each iteration However, in this case, we need to control the parameter ρk in the scheme (21) to ensure the convergence of the algorithms The dual extragradient algorithm is presented in detail as follows Algorithm Initialization: Fix an arbitrary point x¯ ∈ int(C) and choose β ≥ L Find an initial point x ∈ C Set s −1 := and r−1 := Iterations: For each k = 0, 1, 2, , n, execute the following steps: Step 1: Compute the projection point u k as: u k := πC x¯ + k−1 s , β (34) where πC is the Euclidean projection mapping onto C Step 2: Solve the strongly convex program: f (u k , x) + β x − uk 2 |x ∈C (35) to obtain the unique solution x k Step 3: Calculate an arbitrary vector w k ∈ −∂2 f (x k , x k ), and then update the dual step s k := s k−1 + w k Step 4: Compute rk := rk−1 + (w k )T (x¯ − x k ) and r R (s k , x) ¯ If rk + r R (s k , x) ¯ ≤ (k + 1)ε for a given tolerance ε > then: terminate Otherwise, increase k by and go back to Step Output: Compute the final output x¯ n as: x¯ n := (n + 1) n xk (36) k=0 The main tasks of Algorithm include: (i) computing a projection point (34), (ii) solving a strongly convex subproblem (35), and (iii) calculating a subgradient vector −w k If the feasible set C has a simple structure such as box, simplex, ellipsoid or polytope then problem (i) can be explicitly solved If f (x, ·) is differentiable then problem (iii) collapses to calculate the gradient vector ∇2 f (x, ·) of f (x, ·) The number of iterations n in Algorithm is chosen such that n ≤ n ε , where n ε is the maximum number of iterations for the worst case determined in Theorem below 123 J Glob Optim (2012) 52:139–159 151 The convergence of Algorithm is stated in the following theorem Theorem Suppose that Assumptions A.1–A.3 are satisfied and {(u k , x k , s k , w k )}nk=0 is a sequence generated by Algorithm Suppose further that ∂2 f (·, x k )(k ≥ 0) are Lipschitz continuous on C with the same Lipschitz constant L > Then the final output x¯ n computed by (36) satisfies: g R (x¯ n ) ≤ β R2 2(n + 1) (37) Consequently, the sequence {g R (x¯ n )}n≥0 converges to and the number of iterations attained β R2 2ε an ε-solution of (PEP) is at most n ε := Proof Since the calculations at Steps 1, and of Algorithm are indeed of the scheme (21) with λk = for all k ≥ Thus Sn defined by (12) satisfies Sn = n + Moreover, since s −1 = 0, it follows from Step of Algorithm that n sn = wk (38) k=0 From (14), (38) and Lemma 4, we have (n + 1)g R x¯ n = Sn g R x¯ n ≤ n R n ≤ (w k )(x¯ − x k ) + k=0 β R2 Q β x, ¯ sn + 2β (39) On the other hand, it follows from (31) of Lemma with ρk = that n an := (w k )T (x¯ − x k ) + k=0 β R2 Q β x, ¯ sn + 2β n−1 (w k )T (x¯ − x k ) + (w n )T x¯ − x n + = k=0 n−1 (w k )T (x¯ − x k ) + ≤ k=0 β R2 ¯ sn + Q β x, 2β β R2 ¯ s n−1 + Q β x, 2β ≡ an−1 (40) Furthermore, since a−1 ≡ 2β Q β x, ¯ s −1 + into (39), by induction, we get β R2 β R2 , substituting this relation and (40) β R2 , (41) = (n + 1)g R x¯ n ≤ βR The remaining statements of Theorem follow which is equivalent to g R (x¯ n ) ≤ 2(n+1) immediately from the estimation (37) Now, we consider the case that the Lipschitz condition (2) is not satisfied, but the mappings ∂2 f (·, x k ), (k ≥ 0), are uniformly bounded by the same constant M > In this case, Algorithm is modified 
to obtain a new variant described below 123 152 J Glob Optim (2012) 52:139–159 Algorithm Initialization: Fix an arbitrary point x¯ ∈ int(C) and set β0 := 2M R Take an initial point x ∈ C Set s −1 := and r−1 := Iterations: For each k = 0, 1, 2, , n, execute the following steps: Step 1: Compute the projection point u k as: u k := πC x¯ + k−1 s βk (42) Step 2: Solve the strongly convex program: f (u k , x) + βk x − uk 2 |x ∈C (43) to obtain the unique solution x k Step 3: Calculate an arbitrary vector w k ∈ −∂2 f (x k , x k ), and then update s k := s k−1 + w k Step 4: Compute rk := rk−1 + (w k )T (x¯ − x k ) and r R (s k , x) ¯ If rk + r R s k , x¯ ≤ (k + 1)ε for a given tolerance ε > then: terminate Otherwise, compute a new step size as βk := 2M √ k + 1, R (44) increase k by and go back to Step Output: Compute the final output x¯ n as (36) To compute the quantity r R (s k , x) ¯ at Step of Algorithms and 2, a convex program with linear objective function needs to be solved The solution of this problem lies at the boundary of its feasible set C R (x) ¯ However, solving this problem is usually expensive if C is complex In practice, instead of measuring r R , we can use the condition x¯ n+1 − x¯ n ≤ ε to terminate Algorithms and This condition ensures that the approximation x¯ n is not significantly improved in the next iterations The following theorem shows the convergence of Algorithm Theorem Suppose that Assumptions A.1–A.3 are satisfied and the sequence {(u k , x k , s k , w k )} is generated by Algorithm Suppose further that the mappings ∂2 f (·, x k )(k ≥ 0) are uniformly bounded by the same constant M > Then the final output x¯ n computed by (36) satisfies g R (x¯ n ) ≤ 2M R √ n+1 (45) Consequently, the sequence {g R (x¯ n )}n≥0 converges to and the number of iterations attained an ε-solution of (PEP) is at most n ε := 4M R ε2 Proof It is sufficient to prove the main estimation (45) The remaining conclusions of this theorem immediately follow from (45) Similar to Algorithm 1, the calculations at Steps 1, and of Algorithm are indeed of the scheme (21) with λk = for all k Therefore, Sn defined by (12) satisfies Sn = n + Since s −1 = 0, it follows from Step of Algorithm that n sn = wk k=0 123 (46) J Glob Optim (2012) 52:139–159 153 On the other hand, from (14), (46) and Lemma 4, we have (n + 1)g R x¯ n = Sn g R x¯ n ≤ n R n ≤ (w k )T (x¯ − x k ) + k=0 βn R Q βn x, ¯ sn + 2βn (47) Now, let us define bn := nk=0 (w k )(x¯ − x k ) + 2β1n Q βn (x, ¯ s n ), applying the inequality (32) with ρk = 1, and Remark with the fact that βn−1 < βn , we deduce bn − bn−1 = w n T ≤ wn ≤ x¯ − x n + T x¯ − x n 1 ¯ sn − x, ¯ s n−1 Q βn x, Qβ 2βn 2βn−1 n−1 + Q βn x, ¯ s n − Q βn (x, ¯ s n−1 ) 2βn MR 2M = √ βn n+1 (48) Moreover, it follows from the definition of bn and Lemma that b0 ≡ w T x¯ − x + 1 2M ¯ s0 ≤ ¯ s −1 + Q β0 x, Q β0 x, = M R (49) 2β0 2β0 β0 From (48) and (49), by induction, we have n bn ≤ M R k=0 √ βn R ≤ MR n + ≡ √ k+1 (50) Substituting (50) into (47) we get √ (n + 1)g R (x¯ n ) ≤ βn R = 2M R n + 1, which implies that g R (x¯ n ) ≤ (51) 2M R √ n+1 Remark Theorem shows that the worst-case complexity bound of Algorithm is O( n1 ), where n is the number of iterations, while, in Algorithm 2, this quantity is O( √1n ) according to Theorem Numerical results In this section we apply Algorithms and to solve an equilibrium problem arising from Nash-Cournot oligopolistic equilibrium models of electricity markets This problem has been investigated in many research papers (see, e.g [6]) Instead of using a 
quadratic cost function as in [6], the cost function in our example is slightly modified It is still convex but nonsmooth [6,22] Thus the resulting equilibrium problem can not be transformed into a variational inequality problem Consider a Nash-Cournot oligopolistic equilibrium model arising in electricity markets with n c (n c = 3) generating companies and each company i (i = 1, 2, 3) (com.#) may possess several generating units n ic (gen.#), as shown in Table The quantities x and x c are the power generation of a unit and a company, respectively Assuming that the electricity demand is a strictly decreasing function of the price p, the demand function in an interval of time during a day of study considered standard can 123 154 J Glob Optim (2012) 52:139–159 Table The lower and upper bounds of the power generation of the generating units and companies (n g = 6) Table The parameters of the generating unit cost functions c j ( j = 1, , 6) g g gen.# xmin xmax c xmin 1 80 80 2 80 130 50 130 55 125 30 125 40 125 com.# gen.# αˆ j [$ MW2 h] βˆ j [$/MWh] γˆ j [$/h] α˜ j [$/MWh] β˜ j c xmax γ˜ j 0.0400 2.00 0.00 2.0000 1.0000 25.0000 0.0350 1.75 0.00 1.7500 1.0000 28.5714 0.1250 1.00 0.00 1.0000 1.0000 8.0000 0.0116 3.25 0.00 3.2500 1.0000 86.2069 0.0500 3.00 0.00 3.0000 1.0000 20.0000 0.0500 3.00 0.00 3.0000 1.0000 20.0000 ( p) + σ p, where P ( p) is the total power demand level be expressed as Pload ( p) = Pload load expected for a selected time interval, and σ represents the elasticity of the demand with respect nc to price Let us denote by n g := i=1 n ic the number of generating units of all companies and n c I = I := {1, 2, , n g } Ii the index set of all generating units of the company i, where ∪i=1 i and Ii ∩ I j = ∅(i, j = 1, , n c , i = j) Similar to [6], in this example, we assume that Pload ( p) = 189.2 − 0.5 p which can be expressed inversely as p = 378.4 − 2Pload ( p), where g Pload ( p) := nj=1 x j − Ploss , Ploss represents the transmission losses throughout the system (Ploss is assumed to be zero in this example) The cost of a generating unit j is given as c j (x j ) := max cˆ j (x j ), c˜ j (x j ) , (52) where αˆ j x + βˆ j x j + γˆ j , and j β˜ j −1/β˜ j ˜ ˜ γ˜ j (x j )(β j +1)/β j , c˜ j (x j ) := α˜ j x j + β˜ j + cˆ j (x j ) := (53) and the parameters αˆ j , βˆ j , γˆ j , α˜ j , β˜ j and γ˜ j , ( j = 1, , n g ), are given in Table The cost function c j of each generating unit j does not depend on other units The profit made by company i that owns n ic generating units is ⎛ f i (x) := p c j (x j ) = ⎝378.4 − xj − j∈Ii j∈Ii g ng l=1 g ⎞ xl ⎠ xj − j∈Ii c j (x j ) (54) j∈Ii subject to the constraints (xmin ) j ≤ x j ≤ (xmax ) j ( j = 1, , n g ), where x := (x1 , , xn g )T 123 J Glob Optim (2012) 52:139–159 155 For each i = 1, , n c , let us define ⎛ ⎡ ⎞⎤ ϕi (x, y) := ⎣378.4−2 ⎝ y j ⎠⎦ xj+ j ∈I / i i∈Ii yj − j∈Ii c j (y j ), (55) j∈Ii nc and f (x, y) := [ϕi (x, x) − ϕi (x, y)] , (56) i=1 Then the oligopolistic equilibrium model of electricity markets [16] can be reformulated as an equilibrium problem of the form (PEP): Find x ∗ ∈ C g such that f (x ∗ , y) ≥ for all y ∈ C g , (57) , j = 1, , n g (58) where C g is the feasible set defined by g g C g := x g ∈ Rn | xmin j g ≤ x j ≤ xmax j Let us introduce two vectors q i := (q1i , , qni g )T with q ij = if j ∈ Ii otherwise, and q¯ i := (q¯1i , , q¯ni g )T with q¯ ij := − q ij ( j = 1, , n g ), and then define nc A := q¯ i q i T i=1 nc , B := qi qi T i=1 nc a := −378.4 nc q , and c(x) := i=1 i=1 j∈Ii (59) ng c j (x j ) = i c j (x j ), 
j=1 then the bifunction f defined by (56) can be expressed as f (x, y) = [(A + B)x + By + a]T (y − x) + c(y) − c(x) (60) Note that since c is nonsmooth and convex and B is symmetric positive semidefinite, f (x, ·) is nonsmooth and convex for all x ∈ C g Moreover, f is continuous on C g × C g The function c is subdifferentiable and its subdifferential at x is given by ∂c(x) = (∂c1 (x1 ), , ∂cn g (xn g ))T , where ⎧ ⎪ αˆ j x j + βˆ j , if cˆ j (x j ) > c˜ j (x j ), ⎪ ⎪ ⎪ ⎪ ˜j 1/ β ⎨ x if cˆ j (x j ) = c˜ j (x j ), j = 1, · · · , n g αˆ j x j + βˆ j , α˜ j + γ˜ jj ∂c j (x j ) = ⎪ ⎪ ˜ ⎪ ⎪ x 1/β j ⎪ ⎩ α˜ j + γ˜ j , if cˆ j (x j ) < c˜ j (x j ), j Since f (x, y)+ f (y, x) = −(y − x)T A(y − x) and A is not positive semidefinite, the bifunction f is not monotone Thus we can not directly apply Algorithms or to solve problem (57) However, the following lemma shows that (57) can be reformulated equivalently to a monotone equilibrium problem 123 156 J Glob Optim (2012) 52:139–159 Lemma Suppose that the cost functions c j ( j = 1, , n g ) is defined by (52) with the parameters are given in Table Then the equilibrium problem (57) can be reformulated equivalently to an equilibrium problem of a monotone bifunction f given by: f (x, y) := [A1 x + B1 y + a]T (y − x) + c(y) − c(x), (61) where A1 := A + 23 B and B1 := 21 B Proof From the formula (53), and the parameters given in Table 2, it is easy to check that the functions cˆ j and c˜ j (i = 1, , n g ) are convex (by computing explicitly their second derivatives) Since c j is defined by (52), it is also convex [30] On the other hand, it follows from (60) that f (x, y) = A+ B x + By + a 2 T (y − x) + (y − x)T B(y − x) + c(y) − c(x) If we define f as in (61) then f (x, y) = f (x, y) + 21 (y − x)T B(y − x) Since B1 = 21 B which is symmetric positive semidefinite and c is convex, the bifunction f is still convex with respect to the second argument for all x Now, if we define h(x, y) := 21 (y − x)T B(y − x) then, by the symmetric positive semidefiniteness of B, h(x, y) is convex and differentiable with respect to the second argument for all x Moreover, it is obvious that h(x, y) is nonnegative, h(x, x) = and ∇2 h(x, x) = for all x and y Applying Proposition 2.1 in [18], it implies that the equilibrium problem: Find x ∗ ∈ C g such that: f (x ∗ , y) ≥ for all y ∈ C g is equivalent to an auxiliary equilibrium problem: Find x ∗ ∈ C g such that: f (x ∗ , y) + h(x ∗ , y) ≥ for all y ∈ C g However, since f (x, y) + h(x, y) = f (x, y), we conclude that (57) is equivalent to an equilibrium problem of the bifunction f defined by (61) Finally, it is necessary to show that f is monotone Indeed, from the definition of f we have f (x, y) + f (y, x) = −(y − x)T (A + B)(y − x) ≤ Here, we use the fact that A + B is positive semidefinite by virtue of (59) Note that matrices A1 and B1 in Lemma are not unique There are many possible choices of A1 and B1 such that f defined by (61) is still monotone Now, we need to estimate the constants M and L indicated in Theorems and that will be used in our implementation Since c is subdifferentiable on C g , from the definition of f , we can explicitly compute ∂2 f (x, y) = (A + B)x + By + a + ∂c(y) Hence, ∂2 f (u k , x k ) − ∂2 f (x k , x k ) = (A + B)(u k − x k ) ≤ A+B x k − u k , ∀u k , x k ∈ C g This inequality shows that ∂2 f (·, x k ) is uniformly Lipschitz continuous with the same Lipschitz constant L := A + B on C g Moreover, since C g is bounded and ∂c(x) is given by (61), ∂2 f (·, x k ) is bounded on C g by a positive constant M 
defined as ¯ M := ( A + B + B )Mx + max {max { ξ | ξ ∈ ∂c(x)} | x ∈ C R (x)} 123 (62) J Glob Optim (2012) 52:139–159 g 157 g g g Here, Mx := 21 ( xmax +xmin + xmax −xmin ) is the diameter of the feasible set C g Besides, g ¯ := {x ∈ Rn x x ≤ R} with R := xmax − it is easy to check that the box C g ⊆ B R (x) g g g xmin ∞ and x¯ := (xmin + xmax ) With these choices of R and x, ¯ we have C R (x) ¯ ≡ C g These constants will be used in the implementation of Algorithms and It remains to express Algorithms and for this specific application Three main steps of Algorithms and are specified as follows g g Projection: u k := argmin x − x¯ − β1k s k | xmin ≤ x ≤ xmax This problem is indeed a convex quadratic program Convex program: Problem (35) (resp., (43)) is reduced to the following convex program: T g g x Hk x + h kT x + c(x) | xmin ≤ x ≤ xmax , (63) where Hk := B + βk I and h k := [(A + B) − βk I ]u k + a Subgradient: Problem of calculating vector w k ∈ −∂2 f (x k , x k ) is solved explicitly by taking: w k := −(A + 2B)x k − a − ξ k , (64) where ξ k ∈ ∂c(x k ) defined by (61) The dual step s k is updated by s k+1 := s k + w k The parameter √βk is chosen by βk = A + B in Algorithm for all k, and is computed by βk := 2M k + in Algorithm To terminate Algorithms and we use the stopping R criterion x¯ n+1 − x¯ n ≤ ε with a tolerance ε = 10−4 for both algorithms We implement Algorithms and in Matlab 7.11.0 (R2010b) running on a PC Desktop Intel(R) Core(TM)2 Quad CPU Q6600 with 2.4 GHz, 3Gb RAM We use quadprog, a built-in Matlab solver for convex quadratic programming, to solve the projection problem at Step and Ipopt package (a C++ open source software at http://www.coin-or.org/Ipopt/ [35]) to solve the convex program at Step Note that the convex programs (35) and (43) are nonsmooth However, Ipopt still works well for these problems We solve the problem which corresponds to the first model in [6], where three companies (n c = 3) are considered The first company F1 possesses one generating unit {1}, the second company F2 has two generating units {2, 3} and the third one, F3 , has three generating units {4, 5, 6} The computational results of this model are reported in Table Here, the notation in this table includes: co#, gu.# and Prof stand for the coalition, generating unit and profit, respectively; x j and xic are the total power of the generating unit j and the company i, respectively Table The total power and profit made by three companies Companies Algorithm Algorithm xic xic co# gu.# x j Prof.[$/h] x j ExtraGrad[29] Prof.[$/h] x j xic Prof.[$/h] F1 46.6296 46.6296 4397.51 46.0871 46.0871 4334.75 F2 32.1091 31.6670 32.1468 15.0358 47.1449 4480.97 15.9337 47.6007 4511.42 15.0011 47.1479 4477.99 24.8218 22.6606 25.1062 10.8560 10.1875 10.8537 11.1277 46.8056 4395.13 14.1755 47.0236 4402.89 10.8545 46.8143 4392.72 F3 46.6524 46.6524 4396.42 123 158 J Glob Optim (2012) 52:139–159 Table The performance information of three algorithms Algorithm iter error 4416 Algorithm cputime[s] iter error 9.9972e-05 164.20 6850 ExtraGrad[29] cputime[s] iter error 9.9992e-05 241.22 2101 cputime[s] 9.9939e-05 185.17 To compare, we also implement Algorithm of the extragradient methods in [29] for this example The computational results are also shown in Table with the stopping criterion x n+1 − x n ≤ 10−4 The results reported by three algorithms are almost similar to each other The performance of three algorithms are reported in Table 4, where iter indicates the number of iterations, error is the norm x¯ n+1 − x¯ n (or x 
n+1 − x n ) and cputime is the CPU time in second From Table we see that the number of iterations of Algorithms and is much smaller than the number of iterations n ε in the worst case According to Theorems and 2, these numbers are 96 × 106 and 4.389 × 1018 for Algorithms and 2, respectively, in this example (if β is chosen by β = L in Algorithm 1) The number n ε crucially depends on the estimations of the constants L , M, Mx , the radius R and the center point x ¯ In this example, these constants are estimated quite roughly which lead to very large values of n ε The computational time of the ExtraGrad algorithm is greater than of Algorithm even though the ExtraGrad algorithm requires fewer iterations This happens because there are two general convex programs need to be solved at each iteration in the ExtraGrad algorithm instead of one as in Algorithm Acknowledgments The authors would like to thank the anonymous referees and the editor for their comments and suggestions that helped to improve the presentation of the paper This research was supported in part by NAFOSTED, Vietnam, Research Council KUL: CoE EF/05/006 Optimization in Engineering(OPTEC), GOA AMBioRICS, IOF-SCORES4CHEM, several PhD/postdoc & fellow grants; the Flemish Government via FWO: PhD/postdoc grants, projects G.0452.04, G.0499.04, G.0211.05, G.0226.06, G.0321.06, G.0302.07, G.0320.08 (convex MPC), G.0558.08 (Robust MHE), G.0557.08, G.0588.09, research communities (ICCoS, ANMMM, MLDM) and via IWT: PhD Grants, McKnow-E, Eureka-Flite+EU: ERNSI; FP7-HD-MPC (Collaborative Project STREP-grantnr 223854), Contract Research: AMINAL, and Helmholtz Gemeinschaft: viCERP; Austria: ACCM, and the Belgian Federal Science Policy Office: IUAP P6/04 (DYSCO, Dynamical systems, control and optimization, 2007–2011) References Blum, E., Oettli, W.: From optimization and variational inequality to equilibrium problems Math Student 63, 127–149 (1994) Bruck, R.E.: On the weak convergence of an Ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space J Math Anal Appl 61, 159–164 (1977) Chinchuluun, A., Pardalos, P.M., Migdalas, A., Pitsoulis, L (eds.): Pareto Optimality, Game Theory and Equilibria Springer, Berlin (2008) Cohen, G.: Auxiliary problem principle and decomposition of optimization problems J Optim Theory Appl 32, 277–305 (1980) Cohen, G.: Auxiliary principle extended to variational inequalities J Optim Theory Appl 59, 325–333 (1988) Contreras, J., Klusch, M., Krawczyk, J.B.: Numerical solutions to Nash-Cournot equilibria in coupled constraint electricity markets IEEE Trans Power Syst 19(1), 195–206 (2004) Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol I II Springer, New York (2003) 123 J Glob Optim (2012) 52:139–159 159 Flam, S.D., Antipin, A.S.: Equilibrium programming using proximal-like algorithms Math Program 78, 29–41 (1997) Giannessi, F., Maugeri, A., Pardalos, P.M (eds.): Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models Kluwer, Dordrecht (2004) 10 Konnov, I.V.: Combined relaxation methods for variational inequalities Springer-Verlag, Berlin (2001) 11 Konnov, I.V., Kum, S.: Descent methods for mixed variational inequalities in Hilbert spaces Nonlinear Anal 47, 561–572 (2001) 12 Konnov, I.V.: Generalized convexity and related topics In: Konnov, I.V., Luc, D.T., Rubinov, (eds.) 
A.M Combined Relaxation Methods for Generalized Monotone Variational Inequalities, pp 3–331 Springer, Berlin (2007) 13 Korpelevich, G.M.: Extragradient method for finding saddle points and other problems Matecon 12, 747– 756 (1976) 14 Lalitha, C.S.: A note on duality of generalized equilibrium problem Optim Lett 4(1), 57–66 (2010) 15 Li, S.J., Zhao, P.: A method of duality for mixed vector equilibrium problem Optim Lett 4(1), 85– 96 (2010) 16 Maiorano, A., Song, Y.H., Trovato, M.: Dynamics of noncollusive oligopolistic electricity markets In: Proceedings IEEE Power Engineering Society Winter Meeting, pp 838–844, Singapore Jan (2000) 17 Martinet, B.: Régularisation d’inéquations variationelles par approximations successives Revue Franỗaise dAutomatique Et dInformatique Recherche Opộrationnelle 4, 154–159 (1970) 18 Mastroeni, G.: On auxiliary principle for equilibrium problems Publicatione del Dipartimento di Mathematica DellUniversita di Pisa 3, 1244–1258 (2000) 19 Mastroeni, G.: Gap function for equilibrium problems J Global Optim 27(4), 411–426 (2003) 20 Moudafi, A.: Proximal point algorithm extended to equilibrium problem J Nat Geom 15, 91–100 (1999) 21 Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria Nonlinear Anal 18(12), 1159–1166 (1992) 22 Muu, L.D., Quoc, T.D.: Regularization algorithms for solving monotone Ky Fan inequalities with application to a Nash-Cournot equilibrium model J Optim Theory Appl 142(1), 185–204 (2009) 23 Nemirovskii, A.S.: Effective iterative methods for solving equations with monotone operators Ekon Matem Met (Matecon) 17, 344–359 (1981) 24 Nesterov, Y.: Dual extrapolation and its applications to solving variational inequalities and related problems Math Program Ser B 109(2–3), 319–344 (2007) 25 Nguyen, V.H.: Lecture Notes on Equilibrium Problems CIUF-CUD Summer School on Optimization and Applied Mathematics Nha Trang, Vietnam (2002) 26 Panicucci, B., Pappalardo, M., Passacantando, M.: On solving generalized Nash equilibrium problems via optimization Optim Lett 3(3), 419–435 (2009) 27 Pardalos, P.M, Rassias, T.M., Khan, A.A (eds.): Nonlinear Analysis and Variational Problems Springer, Berlin (2010) 28 Quoc, T.D., Muu, L.D.: Implementable quadratic regularization methods for solving pseudomonotone equilibrium problems East West J Math 6(2), 101–123 (2004) 29 Quoc, T.D., Muu, L.D., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems Optimization 57(6), 749–776 (2008) 30 Rockafellar, R.T.: Convex Analysis Princeton University Press, Princeton (1970) 31 Rockafellar, R.T.: Monotone operators and the proximal point algorithm SIAM J Control Optim 14, 877– 898 (1976) 32 Taskar, B., Lacoste-Julien, S., Jordan, M.I.: Structured prediction, dual extragradient and Bregman projections J Mach Learn Res 7, 1627–1653 (2006) 33 Van, N.T.T., Strodiot, J.J., Nguyen, V.H.: The interior proximal extragradient method for solving equilibrium problems J Global Optim 44(2), 175–192 (2009) 34 Van, N.T.T., Strodiot, J.J., Nguyen, V.H.: A bundle method for solving equilibrium problems Math Program 116, 529–552 (2009) 35 Wachter, A., Biegler, L.T.: On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming Math Program 106(1), 25–57 (2006) 36 Zhu, D.L., Marcotte, P.: An extended descent framework for variational inequalities J Optim Theory Appl 80, 349–366 (1994) 123 ... 
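For readers who wish to experiment with the method of Sect. 3, the following is a minimal Python sketch of the dual extragradient step (21) as organized in Algorithm 1. It is an illustration under simplifying assumptions, not the authors' Matlab implementation: the feasible set C is a box, f is assumed differentiable in its second argument (so the subgradient in Step 3 reduces to a gradient), λ_k = ρ_k = 1, and the practical stopping test ‖x̄^{n+1} − x̄^n‖ ≤ ε used in the paper's experiments replaces the gap-based test of Step 4.

```python
import numpy as np
from scipy.optimize import minimize

def dual_extragradient(f, grad2_f, bounds, x_bar, beta, iters=200, tol=1e-6):
    """Sketch of Algorithm 1 in the Euclidean setting (lambda_k = rho_k = 1).

    f       : bifunction f(x, y) -> float, convex in y for each fixed x
    grad2_f : gradient of y -> f(x, y) at y (an element of d2 f(x, y))
    bounds  : list of (lo, hi) pairs describing the box C
    x_bar   : fixed centre point in int(C)
    beta    : parameter chosen with beta >= L (Lipschitz constant)
    """
    lower = np.array([b[0] for b in bounds], dtype=float)
    upper = np.array([b[1] for b in bounds], dtype=float)
    project = lambda z: np.minimum(np.maximum(z, lower), upper)  # projection onto the box C

    x_bar = np.asarray(x_bar, dtype=float)
    s = np.zeros_like(x_bar)                 # s^{-1} := 0
    x_sum = np.zeros_like(x_bar)
    x_avg = x_bar.copy()
    x_avg_prev = None

    for k in range(iters):
        u = project(x_bar + s / beta)        # Step 1: u^k := pi_C(x_bar + s^{k-1}/beta)
        obj = lambda y: f(u, y) + 0.5 * beta * np.sum((y - u) ** 2)
        x = minimize(obj, u, bounds=bounds, method="L-BFGS-B").x   # Step 2: strongly convex program (35)
        w = -grad2_f(x, x)                   # Step 3: w^k in -d2 f(x^k, x^k)
        s = s + w                            # dual update: s^k := s^{k-1} + w^k
        x_sum += x
        x_avg = x_sum / (k + 1)              # running average of x^0, ..., x^k, cf. (36)
        # Practical stopping test: the averaged iterate no longer moves significantly.
        if x_avg_prev is not None and np.linalg.norm(x_avg - x_avg_prev) <= tol:
            break
        x_avg_prev = x_avg
    return x_avg

# Toy affine variational inequality: f(x, y) = (A x + a)^T (y - x), grad2_f(x, y) = A x + a.
A = np.array([[2.0, 1.0], [1.0, 2.0]]); a = np.array([-1.0, -1.0])
sol = dual_extragradient(lambda x, y: (A @ x + a) @ (y - x),
                         lambda x, y: A @ x + a,
                         bounds=[(0.0, 5.0), (0.0, 5.0)],
                         x_bar=np.array([1.0, 1.0]),
                         beta=np.linalg.norm(A, 2))
print(sol)  # approaches the solution of the VI over the box
```

With f(x, y) := F(x)ᵀ(y − x) this reduces to the dual extrapolation method for variational inequalities mentioned in Sect. 1, and β should then be at least the Lipschitz constant of F.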
