Comput Optim Appl (2012) 51:709–728
DOI 10.1007/s10589-010-9360-4

Iterative methods for solving monotone equilibrium problems via dual gap functions

Tran Dinh Quoc · Le Dung Muu

Received: 27 March 2010 / Published online: 14 October 2010
© Springer Science+Business Media, LLC 2010

This paper is supported in part by NAFOSTED, Vietnam.
T.D. Quoc, Hanoi University of Science, Hanoi, Vietnam. Present address: Department of Electrical Engineering (ESAT/SCD) and OPTEC, K.U. Leuven, Leuven, Belgium. E-mail: quoc.trandinh@esat.kuleuven.be
L.D. Muu, Institute of Mathematics, Hanoi, Vietnam. E-mail: ldmuu@math.ac.vn

Abstract This paper proposes an iterative method for solving strongly monotone equilibrium problems by using gap functions combined with double projection-type mappings. Global convergence of the proposed algorithm is proved and its complexity is estimated. This algorithm is then coupled with the proximal point method to generate a new algorithm for solving monotone equilibrium problems. A class of linear equilibrium problems is investigated and numerical examples are implemented to verify our algorithms.

Keywords Gap function · Double projection-type method · Monotone equilibrium problem · Proximal point method · Global convergence · Complexity

1 Introduction

Let C be a nonempty closed convex subset of Rⁿ, and let f : C × C → R ∪ {+∞} be a bifunction such that f(x, x) = 0 for all x ∈ C. We are interested in the following problem:

  Find x* ∈ C such that f(x*, y) ≥ 0 for all y ∈ C.  (PEP)

Problem (PEP) is known as an equilibrium problem in the sense of Blum and Oettli [1] (see also [12]). This problem is also referred to as Ky Fan's inequality due to his results in this field [13]. Associated with the primal form (PEP), its dual form is defined as follows:

  Find x* ∈ C such that f(y, x*) ≤ 0 for all y ∈ C.  (DEP)

Let us denote by S*p and S*d the solution sets of Problems (PEP) and (DEP), respectively. The nonemptiness of S*p and S*d, their structure, and the relations between these sets have been studied in the literature (see, e.g., [4]).

Problem (PEP) on one hand covers many practical problems in optimization and nonlinear analysis, such as optimization problems, variational inequalities, complementarity problems, fixed point problems and Nash equilibrium models [1, 13, 16, 18, 19]. On the other hand, it arises from many practical problems in economics, transportation, mechanics and engineering. Theory, methods and applications of equilibrium problems have been widely studied by many researchers.

In recent years, methods for solving Problems (PEP)–(DEP) have been studied extensively. The first solution approach is based on the auxiliary problem principle. This principle was first introduced for optimization problems by Cohen [2] and then extended to variational inequalities in [3]. Mastroeni [10] further applied the auxiliary problem principle to equilibrium problems of the form (PEP) involving strongly monotone bifunctions satisfying a certain Lipschitz-type condition. Noor [17] used this principle to develop iterative algorithms for solving (PEP), where the bifunction f was supposed to be partially relaxed strongly monotone. One of the most popular methods is the proximal point method. This method was first introduced by Martinet [8] for variational inequalities and then extended by Rockafellar [21] for finding a zero point of a maximal monotone operator. Moudafi [11] and Konnov [4] further extended the proximal point method to Problem (PEP) with monotone and weakly monotone bifunctions, respectively.
Other solution methods that have been well developed in mathematical programming and variational inequalities, such as gap function-based, extragradient and bundle methods [5, 14–16], have recently been extended to equilibrium problems [13, 19, 22].

In this paper, we first extend the method proposed in [15] for strongly monotone variational inequalities to strongly monotone equilibrium problems that satisfy a certain Lipschitz-type condition. The global convergence of the proposed algorithm is investigated and its complexity is estimated. We prove that the rate of convergence of the algorithm is linear. It is noticeable that the global contraction rate is better than that of the projection method proposed in [9, 13] when the condition number of (PEP) is greater than (2 + √5)^(1/2), which is often the case in practice. However, as a compensation, in each iteration of the algorithm two convex programming problems need to be solved instead of one as in the projection method. The obtained algorithm is then coupled with the inexact proximal point method [11] to obtain a new variant of the proximal point algorithm for solving the monotone (not necessarily strongly monotone) problem (PEP). A class of linear equilibrium problems is also considered as a special case of (PEP), and two equilibrium models are implemented to verify the proposed algorithms.

The rest of this paper is organized as follows. In Sect. 2 we propose an algorithm (Algorithm 1) for solving strongly monotone equilibrium problems; the convergence of this algorithm is proved and its complexity is estimated. In Sect. 3 we present a combination of Algorithm 1 and the proximal point method [11] and show the convergence of the resulting algorithm. In the last section, a class of linear equilibrium problems is investigated and two numerical examples are implemented.

2 Algorithm for strongly monotone equilibrium problems

Recently, Nesterov and Scrimali [15] proposed an iterative method for solving strongly monotone variational inequalities that satisfy a Lipschitz condition. This method is known as an extrapolation of the extragradient algorithm, or a double projection-type method. In this section, we extend the idea in [15] to strongly monotone equilibrium problems that satisfy a certain Lipschitz-type condition. We prove the global convergence of the proposed algorithm and estimate its complexity. Before presenting the algorithmic scheme, we recall the following well-known definitions that will be used in the sequel.

Definition 1 [4, 16] Let X ⊆ Rⁿ and let f : X × X → R ∪ {+∞} be a bifunction. Then f is said to be

(i) strongly monotone on X with parameter τ > 0 if for all x and y in X it holds that f(x, y) + f(y, x) ≤ −τ‖y − x‖²;
(ii) monotone on X if for all x and y in X we have f(x, y) + f(y, x) ≤ 0;
(iii) pseudo-monotone on X if f(x, y) ≥ 0 implies f(y, x) ≤ 0 for all x, y in X;
(iv) Lipschitz-type continuous on X if there exists a constant L > 0 such that

  f(x, y) + f(y, z) ≥ f(x, z) − L‖y − x‖‖z − y‖,  ∀x, y, z ∈ C.  (1)

It is obvious from Definition 1 that (i) ⇒ (ii) ⇒ (iii).

Remark 1 Note that the Lipschitz-type condition (1) implies the Lipschitz-type condition in the sense of Mastroeni [9]:

  f(x, y) + f(y, z) ≥ f(x, z) − c₁‖y − x‖² − c₂‖z − y‖²,  ∀x, y, z ∈ C,  (2)

where c₁, c₂ > 0 are two given constants. Indeed, applying the Cauchy–Schwarz inequality we have L‖y − x‖‖z − y‖ ≤ (L/(2q))‖y − x‖² + (Lq/2)‖z − y‖² for an arbitrary q > 0. Thus if we set c₁ := L/(2q) and c₂ := Lq/2, then condition (1) implies (2). Condition (1) can therefore be considered as a variant of the Lipschitz-type condition (2).
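For a concrete bifunction, the constants appearing in Definition 1 can be sanity-checked numerically. The sketch below is illustrative only and is not part of the paper: it samples random triples and verifies the strong-monotonicity inequality (i) and the Lipschitz-type inequality (1) for the bilinear bifunction f(x, y) = (Ax + r)ᵀ(y − x), for which τ = λ_min((A + Aᵀ)/2) and L = ‖A‖ can be taken; the matrix A and vector r are arbitrary test data.

```python
# Illustrative sanity check (not from the paper): verify Definition 1(i) and the
# Lipschitz-type condition (1) on random samples for f(x, y) = (A x + r)^T (y - x).
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = A @ A.T + np.eye(n)          # make the symmetric part positive definite
r = rng.standard_normal(n)

def f(x, y):
    return (A @ x + r) @ (y - x)

tau = np.linalg.eigvalsh((A + A.T) / 2).min()   # strong monotonicity parameter
L = np.linalg.norm(A, 2)                        # Lipschitz-type constant

for _ in range(10000):
    x, y, z = rng.standard_normal((3, n))
    # Definition 1(i): f(x,y) + f(y,x) <= -tau * ||y - x||^2
    assert f(x, y) + f(y, x) <= -tau * np.dot(y - x, y - x) + 1e-9
    # Condition (1): f(x,y) + f(y,z) >= f(x,z) - L * ||y - x|| * ||z - y||
    lhs = f(x, y) + f(y, z) - f(x, z)
    assert lhs >= -L * np.linalg.norm(y - x) * np.linalg.norm(z - y) - 1e-9
print("both inequalities hold on all samples")
```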
If f(x, y) = F(x)ᵀ(y − x), which means that (PEP) collapses to a variational inequality problem, then f satisfies (1) whenever F is Lipschitz continuous with Lipschitz constant L > 0. Indeed, since F is Lipschitz continuous, using the Cauchy–Schwarz inequality we have

  f(x, y) + f(y, z) − f(x, z) = (F(y) − F(x))ᵀ(z − y) ≥ −‖F(y) − F(x)‖‖z − y‖ ≥ −L‖y − x‖‖z − y‖.

If f(x, x) = 0 for all x ∈ C, then by substituting z = x into (1) we obtain f(x, y) + f(y, x) ≥ −L‖y − x‖². This inequality implies that if f is strongly monotone with parameter τ > 0, then necessarily τ ≤ L.

Let us denote by dom g the domain of a convex function g and by ri(X) the set of relative interior points of a convex set X. Throughout this section, we assume that C ⊆ ri(dom f(x, ·)) for all x ∈ C, and that the following assumptions hold.

Assumption 1 The function f(·, y) is upper semicontinuous on C with respect to the first argument for all y ∈ C, and f(x, ·) is proper, closed and convex on C with respect to the second argument for all x ∈ C.

Assumption 2 The function f is strongly monotone on C with parameter τ > 0 (see Definition 1(i)).

Note that, by the assumption C ⊆ ri(dom f(x, ·)), f(x, ·) is subdifferentiable on C with respect to the second variable for all x in C (see [20, Theorem 23.4]).

Lemma 1 Under Assumptions 1–2, problems (PEP) and (DEP) have the same unique solution.

Proof The nonemptiness and uniqueness of S*p has been proved (see, e.g., [6]). Since f(x, ·) is convex and subdifferentiable on C, it follows from a proposition in [6] that S*d ⊆ S*p. Moreover, since f is strongly monotone, it is pseudo-monotone on C; applying the same proposition in [6] again, we have S*p ⊆ S*d. Hence S*d = S*p. □

Definition 2 [10] A function g : C → R is called a gap function of Problem (DEP) if (i) g(x) ≥ 0 for all x ∈ C, and (ii) g(x*) = 0 if and only if x* solves (DEP).

Let us define the following function:

  g(x) := sup { f(y, x) + (τ/2)‖y − x‖² | y ∈ C },  (3)

where τ > 0 is the strong monotonicity parameter of f. We refer to this function as a dual gap function of (PEP). We recall that a function h : X → R, where X is a convex set in Rⁿ, is said to be strongly convex with parameter γ > 0 if h(·) − (γ/2)‖·‖² is convex on X. The function h is said to be strongly concave with parameter γ if −h is strongly convex with parameter γ.
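For small instances, the value of the dual gap function (3) can be estimated directly by solving the inner maximization numerically. The sketch below is illustrative and not part of the paper; it assumes a box-shaped C, a bilinear strongly monotone bifunction, and a general-purpose local solver started from several points.

```python
# Illustrative sketch (not from the paper): evaluate the dual gap function (3),
#   g(x) = sup_{y in C} [ f(y, x) + tau/2 * ||y - x||^2 ],
# for a small strongly monotone bilinear bifunction and a box C = [lo, up]^n.
import numpy as np
from scipy.optimize import minimize

n = 3
P = np.array([[3.0, 0.2, 0.0], [0.2, 3.5, 0.1], [0.0, 0.1, 3.2]])
Q = np.eye(n)                           # f(x, y) = (P x + Q y + r)^T (y - x)
r = np.array([1.0, -2.0, 0.5])
tau = np.linalg.eigvalsh(P - Q).min()   # strong monotonicity parameter (P - Q is PD here)
lo, up = -5.0 * np.ones(n), 5.0 * np.ones(n)

def f(x, y):
    return (P @ x + Q @ y + r) @ (y - x)

def dual_gap(x):
    # For this data the inner objective is concave in y, so a local solver finds the
    # global maximum; in general (3) may require a global method, hence several starts.
    obj = lambda y: -(f(y, x) + 0.5 * tau * np.dot(y - x, y - x))
    best = -np.inf
    for y0 in [lo, up, np.zeros(n), np.asarray(x, dtype=float)]:
        res = minimize(obj, y0, bounds=list(zip(lo, up)))
        best = max(best, -res.fun)
    return best

x = np.array([1.0, 1.0, -1.0])
print("g(x) =", dual_gap(x))            # g(x) >= 0, and g(x*) = 0 at the solution
```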
The next lemma shows that g is well defined and is a gap function for (DEP) on ri C.

Lemma 2 Suppose that Assumptions 1–2 are satisfied. Then the function g given by (3) is well defined and strongly convex with parameter τ. Moreover, it is a gap function for both (DEP) and (PEP).

Proof Since f is strongly monotone with parameter τ > 0, for all y ∈ C we have

  f_x(y) := f(y, x) + (τ/2)‖y − x‖² ≤ −f(x, y) − τ‖y − x‖² + (τ/2)‖y − x‖²
      ≤ sup { −f(x, y) − (τ/2)‖y − x‖² | y ∈ C } =: u(x).  (4)

Since f(x, ·) is convex with respect to the second argument for all x ∈ C, the function f(x, ·) + (τ/2)‖· − x‖² is strongly convex with parameter τ > 0. Thus the supremum in the second line of (4) is attained, i.e., (i) the function u(x) defined in this line is well defined. On the other hand, (ii) f_x(x) = 0 ≤ u(x). It follows from (i) and (ii) that the level set L_{u(x)}(f_x) := {y ∈ C | f_x(y) ≤ u(x)} of f_x is nonempty and bounded in Rⁿ. Moreover, since f(·, y) is upper semicontinuous on C for all y ∈ C, g is well defined.

Note that the function f_y(x) := f(y, x) + (τ/2)‖x − y‖² is strongly convex in x with parameter τ. As a consequence, the function g(x) = sup{f_y(x) | y ∈ C}, defined as the supremum of a family of strongly convex functions with parameter τ, is strongly convex with parameter τ [20].

Since f(x, x) = 0, we have g(x) = sup{f(y, x) + (τ/2)‖y − x‖² | y ∈ C} ≥ f(x, x) + (τ/2)‖x − x‖² = 0. If x̄ ∈ C is such that g(x̄) = 0, then f(y, x̄) + (τ/2)‖y − x̄‖² ≤ g(x̄) = 0 for all y ∈ C. This inequality implies that f(y, x̄) ≤ −(τ/2)‖y − x̄‖² ≤ 0, i.e., x̄ is a solution to (DEP). Consequently, x̄ is also a solution to (PEP) by virtue of Lemma 1. □

Remark 2 Let x* be a solution to (PEP). Then, from the definition of g, we have g(x) ≥ g(x*) = 0 for all x ∈ C. Since g is strongly convex with parameter τ, it follows that x* is the unique global solution to min_{x∈C} g(x). Moreover, one has

  g(x) ≥ (τ/2)‖x − x*‖²,  ∀x ∈ C.  (5)

Let {x^i}_{i≥0} be a sequence in C and {λ_i}_{i≥0} be a sequence in (0, +∞). Let us define

  S_k := Σ_{i=0}^{k} λ_i  and  x̄^k := (1/S_k) Σ_{i=0}^{k} λ_i x^i,  (6)

  Δ_k := max { Σ_{i=0}^{k} λ_i [−f(x^i, y) − (τ/2)‖y − x^i‖²] | y ∈ C }.  (7)

Lemma 3 Under Assumptions 1–2, the quantity Δ_k defined by (7) satisfies

  g(x̄^k) ≤ Δ_k / S_k.  (8)

Proof From the definition of x̄^k, S_k and Δ_k, and using the strong monotonicity of f and the convexity of f(y, ·), we have

  g(x̄^k) = sup { f(y, x̄^k) + (τ/2)‖y − x̄^k‖² | y ∈ C }
     = sup { f(y, (1/S_k) Σ_i λ_i x^i) + (τ/2)‖y − (1/S_k) Σ_i λ_i x^i‖² | y ∈ C }
     ≤ sup { (1/S_k) Σ_i λ_i [f(y, x^i) + (τ/2)‖y − x^i‖²] | y ∈ C }
     ≤ (1/S_k) max { Σ_i λ_i [−f(x^i, y) − (τ/2)‖y − x^i‖²] | y ∈ C } = Δ_k / S_k.

The lemma is proved. □

Based on Lemma 3, the algorithm will be designed to control the sequence {Δ_k}_{k≥0} so that its growth can be compared with the sum S_k. For a given ρ > 0, we define the following functions:

  φ_x^ρ(y) := −f(x, y) − (ρ/2)‖y − x‖²,  (9)

  ψ_k(y) := Σ_{i=0}^{k} λ_i φ_{x^i}^τ(y).  (10)

As usual, we say that a concave function h is subdifferentiable on C if −h is subdifferentiable on C. Note that, since f(x^k, ·) is convex and subdifferentiable on C, the function φ_x^ρ is strongly concave with parameter ρ and subdifferentiable on C. As a consequence, ψ_k is strongly concave with parameter τS_k and subdifferentiable on C.

For a given x⁰ ∈ C, consider the two sequences {u^k}_{k≥0} and {x^k}_{k≥0} generated by the following scheme:

  u^k := argmax_{y∈C} ψ_k(y),  (11)
  x^{k+1} := argmax_{y∈C} φ_{u^k}^ρ(y).  (12)

The next theorem provides a key property used to prove the convergence of the algorithm that will be described later.

Theorem 1 Suppose that Assumptions 1–2 are satisfied and that the sequence {(u^k, x^k)}_{k≥0} is generated by (11) and (12). Suppose further that f satisfies the Lipschitz-type condition (1) with a Lipschitz constant L > 0 and that the parameter ρ in (12) is chosen such that ρ ≥ (1/2)(√(4L² + τ²) − τ). Then the sequence {Δ_k}_{k≥0} defined by (7) satisfies

  Δ_{k+1} ≤ Δ_k − λ_{k+1} [ρ/2 − L²/(2(ρ + τ))] ‖x^{k+1} − u^k‖² ≤ Δ_k,  (13)

provided that λ_{k+1} ≤ (τ/ρ) S_k.

Proof Note that ψ_k defined by (10) is strongly concave with parameter τS_k and subdifferentiable on C. Using the optimality condition for (11) we get

  ψ_k(y) ≤ ψ_k(u^k) − (τS_k/2)‖y − u^k‖²,  ∀y ∈ C.  (14)

On the other hand, it follows from definition (10) that

  ψ_{k+1}(y) = ψ_k(y) + λ_{k+1} φ_{x^{k+1}}^τ(y).  (15)

Using definition (7) of Δ_k, we have Δ_k = max_{y∈C} ψ_k(y). Combining this relation, (14) and (15), we obtain

  Δ_{k+1} = max_{y∈C} ψ_{k+1}(y)
     = max { ψ_k(y) + λ_{k+1} φ_{x^{k+1}}^τ(y) | y ∈ C }
     ≤ ψ_k(u^k) + max { λ_{k+1} φ_{x^{k+1}}^τ(y) − (τS_k/2)‖y − u^k‖² | y ∈ C }
     = Δ_k + max { λ_{k+1}[−f(x^{k+1}, y) − (τ/2)‖y − x^{k+1}‖²] − (τS_k/2)‖y − u^k‖² | y ∈ C }.  (16)

Now, since x^{k+1} is a solution to (12), using the optimality condition for this problem we have

  [ξ_{k+1} + ρ(x^{k+1} − u^k)]ᵀ(y − x^{k+1}) ≥ 0,  ∀y ∈ C,  (17)

where ξ_{k+1} ∈ ∂f(u^k, x^{k+1}), the subdifferential of f(u^k, ·) at x^{k+1}.
Since φ_{u^k}^ρ is strongly concave with parameter ρ > 0, it follows from (17) that

  −φ_{u^k}^ρ(x^{k+1}) + (ρ/2)‖y − x^{k+1}‖² ≤ −φ_{u^k}^ρ(y),  ∀y ∈ C.

This inequality is equivalent to

  f(u^k, x^{k+1}) + (ρ/2)‖x^{k+1} − u^k‖² + (ρ/2)‖y − x^{k+1}‖² − f(u^k, y) − (ρ/2)‖y − u^k‖² ≤ 0,  ∀y ∈ C.  (18)

From (18) and using the Lipschitz-type condition (1) with x = u^k, y = x^{k+1} and z = y, we obtain

  f(x^{k+1}, y) + (τ/2)‖y − x^{k+1}‖²
   ≥ [f(u^k, x^{k+1}) + f(x^{k+1}, y) − f(u^k, y)] + (ρ/2)‖x^{k+1} − u^k‖² + ((ρ+τ)/2)‖y − x^{k+1}‖² − (ρ/2)‖y − u^k‖²
   ≥ (ρ/2)‖x^{k+1} − u^k‖² − L‖u^k − x^{k+1}‖‖y − x^{k+1}‖ + ((ρ+τ)/2)‖y − x^{k+1}‖² − (ρ/2)‖y − u^k‖²
   = [ρ/2 − L²/(2(ρ+τ))]‖x^{k+1} − u^k‖² + ((ρ+τ)/2)[‖y − x^{k+1}‖ − (L/(ρ+τ))‖u^k − x^{k+1}‖]² − (ρ/2)‖y − u^k‖²
   ≥ [ρ/2 − L²/(2(ρ+τ))]‖x^{k+1} − u^k‖² − (ρ/2)‖y − u^k‖².  (19)

Substituting (19) into (16), and noting that λ_{k+1} ≤ (τ/ρ)S_k, we get

  Δ_{k+1} ≤ Δ_k + max { −λ_{k+1}[ρ/2 − L²/(2(ρ+τ))]‖x^{k+1} − u^k‖² + (λ_{k+1}ρ/2 − τS_k/2)‖y − u^k‖² | y ∈ C }
     ≤ Δ_k − λ_{k+1}[ρ/2 − L²/(2(ρ+τ))]‖x^{k+1} − u^k‖²,  (20)

which proves the first inequality of (13). From the choice of ρ and since λ_{k+1} > 0, we have λ_{k+1}[ρ/2 − L²/(2(ρ+τ))] ≥ 0. Thus the second inequality of (13) holds. □

We now turn scheme (11)–(12) into an algorithm. For simplicity of the discussion, we assume that the constants τ and L are known in advance. Otherwise, global strategies such as line search procedures should be used to estimate ρ. By Theorem 1, the parameter ρ ≥ (1/2)(√(4L² + τ²) − τ) > 0 can be chosen in advance and λ_{k+1} ≤ (τ/ρ)S_k at each iteration. The algorithm is described in detail as follows.

Algorithm 1
Initialization: Given a tolerance ε > 0, choose λ₀ := 1, ρ ≥ (1/2)(√(4L² + τ²) − τ) and ω ∈ (0, 1]. Set α := ρ/τ and take an initial point x⁰ ∈ C.
Iteration k (k = 0, 1, …, k_ε): Perform the three steps below.
 Step 1: Solve the first strongly convex programming problem
   min { Σ_{i=0}^{k} λ_i [f(x^i, y) + (τ/2)‖y − x^i‖²] | y ∈ C }  (21)
 to obtain the unique solution u^k.
 Step 2: Solve the second strongly convex programming problem
   min { f(u^k, y) + (ρ/2)‖y − u^k‖² | y ∈ C }  (22)
 to obtain the unique solution x^{k+1}.
 Step 3: Update λ_{k+1} := (ω/α) S_k and go back to Step 1.
Output: Compute the final output

   x̄^k := (1/S_k) Σ_{i=0}^{k} λ_i x^i.  (23)

The main tasks of Algorithm 1 are to solve the two strongly convex programs (21) and (22). Note that the objective function q_k(y) := Σ_{i=0}^{k} λ_i [f(x^i, y) + (τ/2)‖y − x^i‖²] of (21) can be computed recursively, i.e., q_{k+1}(y) = q_k(y) + λ_{k+1}[f(x^{k+1}, y) + (τ/2)‖y − x^{k+1}‖²]. Thus the method for solving this problem can conveniently exploit the computational information of the previous steps at the current step. It remains to determine the maximum number of iterations k_ε in Algorithm 1; Remark 4 below provides an estimate for this constant.
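A compact numerical sketch of Algorithm 1 is given below. It is illustrative only and is not the authors' implementation: it assumes a box-shaped feasible set C and hands the two strongly convex subproblems (21) and (22) to a general-purpose local solver, whereas the paper solves them as quadratic programs in the linear case of Sect. 4. The bifunction f, the constants tau and L, and the box bounds are inputs assumed by the sketch.

```python
# Illustrative sketch of Algorithm 1 (not the authors' implementation).
# Assumes: f(x, y) strongly monotone with parameter tau, Lipschitz-type constant L,
# and C a simple box so that the subproblems (21)-(22) can be handed to a local solver.
import numpy as np
from scipy.optimize import minimize

def algorithm1(f, tau, L, lo, up, x0, omega=1.0, max_iter=50):
    rho = 0.5 * (np.sqrt(4 * L**2 + tau**2) - tau)   # admissible rho from Theorem 1
    alpha = rho / tau
    bounds = list(zip(lo, up))
    lam, S = [1.0], 1.0                              # lambda_0 = 1, S_0 = 1
    xs = [np.asarray(x0, dtype=float)]
    for k in range(max_iter):
        # Step 1: u^k = argmin sum_i lam_i [ f(x^i, y) + tau/2 ||y - x^i||^2 ]   (21)
        q = lambda y: sum(l * (f(x, y) + 0.5 * tau * np.dot(y - x, y - x))
                          for l, x in zip(lam, xs))
        u = minimize(q, xs[-1], bounds=bounds).x
        # Step 2: x^{k+1} = argmin f(u^k, y) + rho/2 ||y - u^k||^2               (22)
        p = lambda y: f(u, y) + 0.5 * rho * np.dot(y - u, y - u)
        xs.append(minimize(p, u, bounds=bounds).x)
        # Step 3: lambda_{k+1} = (omega/alpha) S_k
        lam.append((omega / alpha) * S)
        S += lam[-1]
    # Output (23): weighted average of the iterates
    return sum(l * x for l, x in zip(lam, xs)) / S
```

For polyhedral C and the bilinear f of Sect. 4, both subproblems are convex quadratic programs, so in practice a QP solver would replace the generic call above.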
Theorem 2 Suppose that the assumptions of Theorem 1 are satisfied and that the parameters ρ and λ_k are computed as in Algorithm 1. Let x* be the solution of (PEP). Then the output sequence {x̄^k}_{k≥0} generated by Algorithm 1 satisfies

  (τ/2)‖x̄^{k+1} − x*‖² ≤ g(x̄^{k+1}) ≤ [g(x⁰) + ((L² − τ²)/(2τ))‖x⁰ − x*‖²] exp(−ωk/(α + ω))
      ≤ (L²/τ²) g(x⁰) exp(−ωk/(α + ω)).  (24)

As a consequence, this sequence converges linearly to the unique solution x* of (PEP).

Proof The first inequality of (24) follows immediately from (5) with x = x̄^{k+1} ∈ C. We prove the middle inequality. It is obvious that S₀ = λ₀ = 1. Applying the updating rule for λ_{k+1} at Step 3 of Algorithm 1, we have

  S_{k+1} = S_k + λ_{k+1} = (1 + ω/α) S_k = (1 + ω/α)^{k+1} S₀ = (1 + ω/α)^{k+1}.  (25)

From Lemma 3, Theorem 1 and (25) we have

  g(x̄^{k+1}) ≤ Δ_{k+1}/S_{k+1} ≤ Δ₀/S_{k+1} = Δ₀ (α/(α + ω))^{k+1} ≤ Δ₀ exp(−ωk/(α + ω)).  (26)

To estimate Δ₀, on one hand we note that

  Δ₀ = max_{y∈C} ψ₀(y) = max_{y∈C} φ_{x⁰}^τ(y) = max { −f(x⁰, y) − (τ/2)‖y − x⁰‖² | y ∈ C }.  (27)

On the other hand, using the Lipschitz-type condition (1) and noting that x* is a solution to (PEP), i.e., f(x*, y) ≥ 0 for all y ∈ C, we have, for all y ∈ C,

  −f(x⁰, y) − (τ/2)‖y − x⁰‖² ≤ −f(x*, y) + f(x*, x⁰) + L‖x⁰ − x*‖‖y − x⁰‖ − (τ/2)‖y − x⁰‖²
     ≤ f(x*, x⁰) + L‖x⁰ − x*‖‖y − x⁰‖ − (τ/2)‖y − x⁰‖²
     ≤ f(x*, x⁰) + (L²/(2τ))‖x⁰ − x*‖²
     ≤ g(x⁰) − (τ/2)‖x⁰ − x*‖² + (L²/(2τ))‖x⁰ − x*‖²
     = g(x⁰) + ((L² − τ²)/(2τ))‖x⁰ − x*‖².  (28)

Here, the fourth inequality in (28) follows from the fact that

  g(x⁰) = sup { f(y, x⁰) + (τ/2)‖y − x⁰‖² | y ∈ C } ≥ f(x*, x⁰) + (τ/2)‖x⁰ − x*‖².

Substituting (28) into (27), we get

  Δ₀ ≤ g(x⁰) + ((L² − τ²)/(2τ))‖x⁰ − x*‖².  (29)

Combining (26) and (29), we obtain the middle inequality in (24). It follows from (5) that g(x⁰) ≥ (τ/2)‖x⁰ − x*‖². Hence g(x⁰) + ((L² − τ²)/(2τ))‖x⁰ − x*‖² ≤ (L²/τ²) g(x⁰). Combining this inequality with the middle inequality of (24), we obtain the last one. The final statement of the theorem follows immediately from (24). □

Remark 3 From the proof of Theorem 2, the contraction rate of Algorithm 1 is r := 1 − ω/(ω + α) (see (26)). The optimal contraction rate is r_min = [1 − 2/(√(4L²/τ² + 1) + 1)]^(1/2), attained with ρ_min = (1/2)(√(4L² + τ²) − τ) and ω_min = 1. These optimal parameters will be used in Sect. 3 and in the numerical examples of Sect. 4.

2.1 Comparison with the projection method

Let us recall the projection method for solving strongly monotone equilibrium problems introduced in [9]. This algorithm generates a sequence {x^k}_{k≥0} starting from x⁰ ∈ C by computing

  x^{k+1} := argmin { f(x^k, y) + (σ/2)‖y − x^k‖² | y ∈ C },  (30)

where σ > 0 is a regularization parameter. The convergence of this method is proved in [9, 13], and the authors of [13] showed that the rate of convergence is linear. It is not difficult to prove that if the Lipschitz-type condition (1) is used in the projection method instead of (2), then the optimal contraction rate of this method is r^p_min := 1 − (τ/L)². If we define κ := L/τ, the condition number of the strongly monotone equilibrium problem (PEP), then we can easily verify the following:

– If κ ∈ [1, (2 + √5)^(1/2)) (where (2 + √5)^(1/2) ≈ 2.058171027), then r^p_min < r_min and the projection method (30) converges faster than Algorithm 1.
– If κ > (2 + √5)^(1/2), then Algorithm 1 converges faster than the projection method (30).

Therefore, theoretically, if problem (PEP) has a large condition number, Algorithm 1 works better than the projection method (30). However, as a compensation, in each iteration of Algorithm 1 two strongly convex programs need to be solved instead of one as in the projection method (30).

Remark 4 It follows from Theorem 2 that, for a given tolerance ε > 0, an ε-solution, i.e., ‖x̄^k − x*‖ ≤ ε, is obtained after

  k_ε := [(α + 1) log(2L² g(x⁰)/(τ³ ε²))] + 2  (31)

iterations, where [x] denotes the largest integer less than or equal to x. Hence, the complexity of Algorithm 1 is O(log(g(x⁰)/ε²)) (depending on the initial point x⁰). Note that computing k_ε as in (31) requires the value g(x⁰), i.e., a possibly nonconvex program needs to be solved. Instead of using k_ε, we can compute another estimate k̄_ε using (26) and (27) and use this constant to terminate Algorithm 1. Indeed, with ω = 1, ‖x̄^k − x*‖² ≤ (2/τ) g(x̄^k) ≤ (2/τ) Δ₀ exp(−(k − 1)/(α + 1)) ≤ ε² holds whenever k ≥ (α + 1) log(2Δ₀/(τ ε²)) + 1. Note that Δ₀ = max_{y∈C} ψ₀(y) = ψ₀(u⁰), which is computed at the first iteration of Algorithm 1. Thus we can take k̄_ε := [(α + 1) log(2ψ₀(u⁰)/(τ ε²))] + 1.
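The quantities appearing in Remarks 3 and 4 are cheap to evaluate once τ and L are known. The snippet below is illustrative and not from the paper: it computes ρ_min, α and the stopping index k̄_ε; the value Δ₀ = ψ₀(u⁰) is assumed to have been obtained at the first iteration, and a placeholder value is used here.

```python
# Illustrative helper (not from the paper): parameters of Algorithm 1 and the
# stopping index of Remark 4, given tau, L, a tolerance eps and Delta0 = psi_0(u^0).
import math

def algorithm1_parameters(tau, L):
    rho = 0.5 * (math.sqrt(4 * L**2 + tau**2) - tau)   # rho_min of Remark 3
    alpha = rho / tau
    return rho, alpha

def stopping_index(alpha, tau, eps, Delta0):
    # k_bar_eps = [ (alpha + 1) * log( 2*Delta0 / (tau*eps^2) ) ] + 1   (Remark 4)
    return math.floor((alpha + 1) * math.log(2 * Delta0 / (tau * eps**2))) + 1

rho, alpha = algorithm1_parameters(tau=0.5, L=2.6)          # data of Example 1
print(rho, alpha)                                            # rho is about 2.362
print(stopping_index(alpha, tau=0.5, eps=1e-4, Delta0=1.0))  # Delta0 = 1.0 is a placeholder
```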
3 Application to the proximal point method

This section presents an application of Algorithm 1 to the proximal point method proposed by Moudafi in [11] for solving monotone equilibrium problems. In each iteration of the proximal point method for (PEP), a strongly monotone equilibrium subproblem needs to be solved up to a given tolerance. This task can be done by applying Algorithm 1. Since the inexact proximal algorithm allows us to solve the equilibrium subproblems inexactly, Algorithm 1 applied to such a subproblem can be terminated after a finite number of iterations determined in advance.

The inexact proximal point method for solving the monotone equilibrium problem (PEP) generates a sequence {x^k}_{k≥0} as follows. Choose two positive sequences {ε_k}_{k≥0} ⊂ [0, 1) and {c_k}_{k≥0} such that Σ_{k=0}^{∞} ε_k < +∞ and 0 < c ≤ c_k < +∞. Take an arbitrary point x⁰ ∈ C. For k = 0, 1, …, with given x^k, c_k and ε_k, solve the following strongly monotone equilibrium subproblem:

  Find x^{k+1} ∈ C such that c_k f(x^{k+1}, y) + (x^{k+1} − x^k)ᵀ(y − x^{k+1}) ≥ −ε_k,  ∀y ∈ C.  (EP_k)

Let us define

  f_k(x, y) := c_k f(x, y) + (x − x^k)ᵀ(y − x).  (32)

It is obvious that if f satisfies Assumption 1 of Sect. 2, then f_k still satisfies this assumption. The following lemma provides conditions on f under which f_k is strongly monotone and still satisfies the Lipschitz-type condition (1).

Lemma 4 Suppose that f is monotone on C and satisfies the Lipschitz-type condition (1) with a Lipschitz constant L > 0. Then, for any c_k > 0, it holds that:
(i) f_k is strongly monotone with parameter τ_k = 1;
(ii) f_k satisfies the Lipschitz-type condition (1) with the Lipschitz constant L_k = c_k L + 1.

Proof Statement (i) has been proved in [13, Lemma 5.1]. It remains to prove (ii). Let h_k(x, y) := (x − x^k)ᵀ(y − x); then f_k(x, y) = c_k f(x, y) + h_k(x, y), and using the Cauchy–Schwarz inequality we have

  h_k(x, y) + h_k(y, z) − h_k(x, z) = (y − x)ᵀ(z − y) ≥ −‖y − x‖‖z − y‖.  (33)

Since f satisfies the Lipschitz-type condition (1), combining this assumption, (33) and definition (32) of f_k, we get

  f_k(x, y) + f_k(y, z) − f_k(x, z) ≥ −(c_k L + 1)‖y − x‖‖z − y‖,

which proves (ii). □

Lemma 4 shows that the equilibrium subproblem (EP_k) satisfies the conditions of Algorithm 1. By coupling this algorithm with the inexact proximal point method, we obtain a variant of the proximal point method for solving monotone equilibrium problems. Let us define

  g_k(x) := max { f_k(y, x) + (1/2)‖y − x‖² | y ∈ C },  (34)
  φ_k^{ρ_k}(x; y) := −f_k(x, y) − (ρ_k/2)‖y − x‖²,  (35)
  ψ_k^i(y) := Σ_{j=0}^{i} λ_{k,j} φ_k^{1}(x^{k,j}; y),  (36)

where ρ_k := (1/2)(√(1 + 4(c_k L + 1)²) − 1). To solve subproblem (EP_k) with a given tolerance ε_k > 0, Algorithm 1 is run for k̄_{ε_k} iterations, where

  k̄_{ε_k} := [(ρ_k + 1) log(2ψ_k^0(u^{k,0})/ε_k²)] + 1.  (37)

The new variant of the inexact proximal point algorithm is now presented as follows.
Algorithm 2
Initialization: Choose two positive sequences {c_k}_{k≥0} ⊂ [c, +∞), where c > 0, and {ε_k}_{k≥0} ⊂ (0, 1) such that Σ_{k=0}^{∞} ε_k < +∞. Take an initial point x⁰ ∈ C and set k := 0.
Iteration k (outer loop): For a given x^k, compute k̄_{ε_k} by (37) and take x^{k,0} := x^k.
 Inner loop: For i = 0, 1, …, k̄_{ε_k}, execute the three steps below.
  Step 1: Solve the first strongly convex programming subproblem max_{y∈C} ψ_k^i(y) to get the unique solution u^{k,i}.
  Step 2: Solve the second strongly convex programming subproblem max_{y∈C} φ_k^{ρ_k}(u^{k,i}; y) to get the unique solution x^{k,i+1}.
  Step 3: Update λ_{k,i+1} := (1/ρ_k) S_{k,i}, where S_{k,i} := Σ_{j=0}^{i} λ_{k,j}, and go back to Step 1.
 Output at outer iteration k: Compute

   x^{k+1} := (1/S_{k,k̄_{ε_k}}) Σ_{j=0}^{k̄_{ε_k}} λ_{k,j} x^{k,j}.  (38)

 Increase k by 1 and go back to the outer loop.

The following theorem shows the convergence of Algorithm 2; its proof can be done similarly to that of Theorem 5.1 in [13], and we omit the details here. Recall that a bifunction f : C × C → R is said to be hemicontinuous on C × C if for any u, v ∈ C × C, f is continuous on the line segment [u, v] connecting u and v.

Theorem 3 Suppose that f(x, ·) is proper, closed and convex on C. Suppose further that f is hemicontinuous on C × C, is monotone, and satisfies the Lipschitz-type condition (1) with a Lipschitz constant L > 0 on C. Then the sequence {x^k} generated by Algorithm 2 converges to a solution of (PEP). Moreover, the following estimate holds:

  ‖x^{k+1} − x*‖² ≤ ‖x^k − x*‖² − ‖x^{k+1} − x^k‖² + δ_k,  ∀k ≥ 0,  (39)

where δ_k := 6M(ε_{k−1} + ε_k) + ε_{k−1}² + 2ε_{k−1}ε_k, with M > 0 a constant.

It follows from (39) that

  ‖x^k − x*‖² ≤ ‖x⁰ − x*‖² − Σ_{i=0}^{k−1} ‖x^{i+1} − x^i‖² + Σ_{i=0}^{k−1} δ_i ≤ ‖x⁰ − x*‖² + Σ_{i=0}^{k−1} δ_i.  (40)

This estimate shows that the convergence of {x^k} depends crucially on the starting point x⁰. Note that the constant M in Theorem 3 depends on the initial point x⁰; for instance, M = dist(x⁰, S*p), where dist(x, S) is the distance from a point x to a set S. If C is bounded, then we can roughly choose M := diam(C), the diameter of C.
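A minimal sketch of the outer loop of Algorithm 2 is given below. It is illustrative and not the authors' code: it reuses the algorithm1() sketch above as the inner solver for the regularized subproblem (EP_k) defined through f_k in (32), assumes C is a box, and replaces the quantity 2ψ_k^0(u^{k,0})/ε_k² in (37) by the crude stand-in 2/ε_k².

```python
# Illustrative outer loop of Algorithm 2 (not the authors' code), reusing algorithm1()
# from the earlier sketch as the inner solver; C is again assumed to be a box [lo, up].
import math
import numpy as np

def algorithm2(f, L, lo, up, x0, c=1.0, outer_iters=20):
    x = np.asarray(x0, dtype=float)
    for k in range(outer_iters):
        ck, eps_k = c, 1.0 / (k + 2)**2            # summable tolerance sequence
        # f_k(x, y) = c_k f(x, y) + (x - x^k)^T (y - x): tau_k = 1, L_k = c_k L + 1 (Lemma 4)
        xk = x.copy()
        fk = lambda u, v: ck * f(u, v) + (u - xk) @ (v - u)
        Lk, tau_k = ck * L + 1.0, 1.0
        rho_k = 0.5 * (math.sqrt(1.0 + 4.0 * Lk**2) - 1.0)
        # inner iteration count in the spirit of (37); 2/eps_k^2 is a crude stand-in
        # for 2*psi_k^0(u^{k,0})/eps_k^2, which would be available after the first inner step
        inner_iters = max(1, math.floor((rho_k + 1) * math.log(2.0 / eps_k**2)) + 1)
        x = algorithm1(fk, tau_k, Lk, lo, up, xk, max_iter=inner_iters)
    return x
```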
4 Application to linear equilibrium problems

4.1 Linear equilibrium problems

Suppose that C is a polytope in Rⁿ defined as

  C := { x ∈ Rⁿ | Ax ≤ b, l ≤ x ≤ u },  (41)

where A ∈ R^{m×n}, b ∈ R^m, and l, u ∈ Rⁿ are the lower and upper bounds on x, respectively. Let f : C × C → R be the bifunction defined by

  f(x, y) := (Px + Qy + r)ᵀ(y − x),  (42)

where P and Q are two n × n symmetric matrices and r ∈ Rⁿ. We shall refer to the problem of the form (PEP) with C and f defined by (41) and (42), respectively, as a linear equilibrium problem (LEP). It is well known that this problem covers many linear problems in optimization such as linear programs, linear variational inequalities and linear complementarity problems.

Since f is quadratic in each argument, the following properties follow immediately from the definition of f (see, e.g., [19]):

(i) The domain of f is Rⁿ × Rⁿ, and f is continuously differentiable with respect to both arguments x and y on its domain. Moreover, ∂_y f(x, y) = ∇_y f(x, y) = Px + Q(2y − x) + r.
(ii) If Q is positive semidefinite (resp., positive definite), then f is convex (resp., strongly convex with parameter μ = 2λ_min(Q), where λ_min(Q) is the smallest eigenvalue of Q) with respect to the second argument.
(iii) If P − Q is positive semidefinite (resp., positive definite), then f is monotone (resp., strongly monotone with parameter τ = λ_min(P − Q), the smallest eigenvalue of P − Q).
(iv) f is Lipschitz-type continuous on its domain with Lipschitz constant L = ‖P − Q‖. Indeed, from the definition of f it is easy to show that f(x, y) + f(y, z) − f(x, z) = (y − x)ᵀ(P − Q)(z − y). Using the Cauchy–Schwarz inequality, we obtain f(x, y) + f(y, z) − f(x, z) ≥ −‖P − Q‖‖x − y‖‖y − z‖, which means that f is Lipschitz-type continuous with Lipschitz constant L = ‖P − Q‖ (Definition 1(iv)).

Suppose that W is an n × n symmetric matrix, and consider the following auxiliary equilibrium problem:

  Find x* ∈ C such that (Px* + Qy + r)ᵀ(y − x*) + (y − x*)ᵀ W (y − x*) ≥ 0 for all y ∈ C.  (43)

If W is positive definite, then this problem is called an auxiliary equilibrium problem corresponding to (LEP) [9]. However, the following lemma shows that this problem is equivalent to (LEP) even if W is not positive definite.

Lemma 5 Suppose that Q is symmetric positive definite and the matrix W is chosen such that Q + W is symmetric positive definite. Then the auxiliary equilibrium problem (43) is equivalent to (LEP).

Proof We start the proof by showing that x* is a solution to (LEP) if and only if x* solves min{(Px* + Qy + r)ᵀ(y − x*) | y ∈ C}. Indeed, if x* is a solution to (LEP), then f(x*, y) = (Px* + Qy + r)ᵀ(y − x*) ≥ 0 = f(x*, x*) for all y ∈ C. This means that x* is a solution to min{f(x*, y) | y ∈ C} = min{(Px* + Qy + r)ᵀ(y − x*) | y ∈ C}. Conversely, if x* is a solution to min{(Px* + Qy + r)ᵀ(y − x*) | y ∈ C}, then (Px* + Qy + r)ᵀ(y − x*) ≥ (Px* + Qx* + r)ᵀ(x* − x*) = 0 for all y ∈ C. Hence x* is a solution to (LEP).

Now, since C is a polytope and Q is symmetric positive definite, the minimization problem min{(Px* + Qy + r)ᵀ(y − x*) | y ∈ C} is a strongly convex quadratic program. The necessary and sufficient optimality condition for x* to solve this problem is

  [(P + Q)x* + r]ᵀ(y − x*) ≥ 0,  ∀y ∈ C.  (44)

On the other hand, if we define f₁(x, y) := (Px + Qy + r)ᵀ(y − x) + (y − x)ᵀW(y − x), then ∇²_y f₁(x, y) = 2(Q + W). Since Q + W is symmetric positive definite, f₁ is strongly convex with respect to y. Thus the necessary and sufficient condition for x* to be a solution to (43) is

  [(P + Q)x* + r + 2W(x* − x*)]ᵀ(y − x*) ≥ 0,  ∀y ∈ C,  (45)

which coincides with (44). The lemma is proved. □

Lemma 6 Suppose that Q and P + Q are symmetric positive definite. Then (LEP) can be reformulated equivalently as a linear equilibrium problem with a bifunction f₁(x, y) = (P₁x + Q₁y + r₁)ᵀ(y − x) such that f₁ is strongly monotone and strongly convex with respect to the second argument.

Proof From the definition (42) of f we have

  f(x, y) = (Px + Qy + r)ᵀ(y − x) = [(P + W)x + (Q − W)y + r]ᵀ(y − x) + (y − x)ᵀW(y − x),  (46)

for any n × n symmetric matrix W. Since Q is symmetric positive definite, we can choose W symmetric positive definite such that Σ := Q − W is still positive definite. Consider the matrix (P + W) − (Q − W); one has (P + W) − (Q − W) = P + Q − 2Σ. Since P + Q is positive definite, we can choose W sufficiently close to Q such that P + Q − 2Σ is still positive definite. Now let us define P₁ := P + W, Q₁ := Q − W and r₁ := r. It follows from (42) that f(x, y) = (P₁x + Q₁y + r₁)ᵀ(y − x) + (y − x)ᵀW(y − x) =: f₁(x, y) + (y − x)ᵀW(y − x). Since Q₁ and W are symmetric positive definite, applying Lemma 5, we conclude that (LEP) is equivalent to a linear equilibrium problem with the bifunction f₁. By the choices of P₁, Q₁ and r₁, it is easy to show that f₁ is strongly convex with respect to y and strongly monotone due to property (iii). □
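Lemma 6 leaves some freedom in the choice of W. The sketch below is illustrative only and is not the authors' choice (in Example 2 the paper uses W = 0.1 × diag(c₂₁, c₂₂, c₂₃)); it takes W := (1 − θ)Q for a small θ ∈ (0, 1), so that Q₁ = θQ and P₁ − Q₁ = P + (1 − 2θ)Q remain positive definite whenever Q and P + Q are positive definite and θ is small enough.

```python
# Illustrative choice of W in Lemma 6 (an assumption of this sketch, not the paper's choice).
# With W = (1 - theta)*Q one gets Q1 = theta*Q and P1 - Q1 = P + (1 - 2*theta)*Q.
import numpy as np

def reformulate(P, Q, r, theta=0.1):
    W = (1.0 - theta) * Q
    P1, Q1, r1 = P + W, Q - W, r
    tau = np.linalg.eigvalsh(P1 - Q1).min()     # strong monotonicity parameter of f1
    L = np.linalg.norm(P1 - Q1, 2)              # Lipschitz-type constant of f1
    assert tau > 0 and np.linalg.eigvalsh(Q1).min() > 0, "theta too large for this data"
    return P1, Q1, r1, tau, L
```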
In order to implement Algorithm 1 for solving (LEP), it remains to consider in detail the two convex programming problems (21) and (22) at Step 1 and Step 2 of this algorithm. From the definition of φ_x^ρ, we have

  φ_x^ρ(y) = −f(x, y) − (ρ/2)‖y − x‖²
      = −(Px + Qy + r)ᵀ(y − x) − (ρ/2)‖y − x‖²
      = −(1/2) yᵀ(2Q + ρI)y − [(P − Q − ρI)x + r]ᵀ y − xᵀ((ρ/2)I − P)x + rᵀx.

Let us denote H_ρ := 2Q + ρI, h_ρ(x) := (P − Q − ρI)x + r and α_ρ(x) := xᵀ((ρ/2)I − P)x − rᵀx. Then the function φ_x^ρ defined by (9) can be represented as a quadratic form:

  φ_x^ρ(y) = −(1/2) yᵀ H_ρ y − h_ρ(x)ᵀ y − α_ρ(x).  (47)

The function ψ_k in (10) is then expressed as

  ψ_k(y) = Σ_{i=0}^{k} λ_i φ_{x^i}^τ(y) = −(1/2) yᵀ(S_k H_τ)y − [Σ_{i=0}^{k} λ_i h_τ(x^i)]ᵀ y − Σ_{i=0}^{k} λ_i α_τ(x^i).  (48)

The parameters of Algorithm 1 are computed as ρ := (1/2)(√(4L² + τ²) − τ) and α := ρ/τ according to Remark 3. If we denote q := τ/ρ, then, from Step 3 of Algorithm 1 (with ω = 1), it is easy to show that S_k = (1 + q)^k S₀ = (1 + q)^k, λ₀ = 1 and λ_{k+1} = q(1 + q)^k for all k ≥ 0. The two main steps (Step 1 and Step 2) of Algorithm 1 now become:

1. Solve the first convex quadratic program

  min { (1/2) yᵀ H_k y + h_kᵀ y | Ay ≤ b, l ≤ y ≤ u }  (49)

to obtain the solution u^k, where H_k := (1 + q)^k H_τ and h_k := Σ_{i=0}^{k} λ_i h_τ(x^i), with λ₀ = 1 and λ_i = q(1 + q)^{i−1} for i ≥ 1.

2. Solve the second convex quadratic program

  min { (1/2) yᵀ H_ρ y + h_ρ(u^k)ᵀ y | Ay ≤ b, l ≤ y ≤ u }  (50)

to obtain the solution x^{k+1}.

4.2 Numerical examples

In this subsection, we apply Algorithm 1 to solve the linear equilibrium problem (LEP). The first example is taken from [19], with f strongly monotone. The second one is a river basin pollution game presented in [7]. All the numerical examples in this subsection are implemented in C++ and run on a desktop PC with an Intel(R) Core(TM)2 Quad CPU Q6600 at 2.4 GHz. To solve the two convex quadratic programming problems (49) and (50) we use the qpOASES package (an open-source C++ code using online active set strategies for solving quadratic programs), which is available at http://www.kuleuven.be/optec/software/qpOASES.

Example 1 Consider the linear equilibrium problem (LEP), where C is the polytope C := {x ∈ R⁵ | Σ_{i=1}^{5} x_i ≥ −1, −5 ≤ x_i ≤ 5, i = 1, …, 5}, P and Q are the two 5 × 5 symmetric matrices given in [19], and r = (1, −2, −1, 2, −1)ᵀ. Since the matrices Q and P − Q are symmetric positive definite, we can apply Algorithm 1 directly to this problem. The strong monotonicity parameter of f is τ = 0.5 and the Lipschitz constant of f is L = 2.6, which implies ρ = 2.36199. We choose the starting point x⁰ = (1, 3, 1, 1, 2)ᵀ as in [19] and the tolerance ε = 10⁻⁴. The algorithm is terminated if either the number of iterations exceeds the maximum number k_max (computed as in Remark 4) or ‖x̄^{k+1} − x̄^k‖ ≤ ε. The obtained solution is x̄⁴⁵ = (−0.725086, 0.803304, 0.720107, −0.866487, 0.200145)ᵀ, which closely approximates the solution reported in [19]. The convergence behavior of Algorithm 1 is illustrated on the left of Fig. 1, which intuitively shows the linear rate of convergence of this algorithm indicated in Theorem 2.

Fig. 1 Convergence behavior of Algorithm 1 for Example 1 (left) and Example 2 (right)
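The QP data (47)–(50) used in these experiments are straightforward to assemble. The following sketch is illustrative only (the paper's implementation is in C++ with qpOASES): it builds H_ρ and h_ρ(u^k) and solves the second subproblem (50) with a generic solver, using placeholder problem data rather than the matrices of Example 1.

```python
# Illustrative assembly of the QP (50) for a linear equilibrium problem (not the
# authors' C++/qpOASES code). P, Q, r, A, b, l, u below are placeholder data.
import numpy as np
from scipy.optimize import minimize

n = 3
P = np.diag([3.0, 3.5, 3.2]); Q = np.eye(n); r = np.array([1.0, -2.0, 0.5])
A = np.ones((1, n)); b = np.array([10.0])              # A y <= b
l, u = -5.0 * np.ones(n), 5.0 * np.ones(n)

tau = np.linalg.eigvalsh(P - Q).min()
L = np.linalg.norm(P - Q, 2)
rho = 0.5 * (np.sqrt(4 * L**2 + tau**2) - tau)

H_rho = 2.0 * Q + rho * np.eye(n)                      # H_rho in (47)
h_rho = lambda x: (P - Q - rho * np.eye(n)) @ x + r    # h_rho(x) in (47)

def solve_qp(H, h):
    # min 0.5 y^T H y + h^T y  s.t.  A y <= b,  l <= y <= u      (cf. (49)-(50))
    obj = lambda y: 0.5 * y @ H @ y + h @ y
    jac = lambda y: H @ y + h
    cons = [{"type": "ineq", "fun": lambda y: b - A @ y}]
    res = minimize(obj, np.zeros(n), jac=jac, bounds=list(zip(l, u)),
                   constraints=cons, method="SLSQP")
    return res.x

u_k = np.array([0.5, -0.5, 1.0])                       # placeholder u^k from Step 1
x_next = solve_qp(H_rho, h_rho(u_k))                   # Step 2 / problem (50)
print(x_next)
```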
Example 2 (River basin pollution game [7]) In this example, we consider three players j = 1, 2, 3 located along a river. Each agent is engaged in an economic activity (paper pulp production) at a chosen level x_j, but the players must meet environmental conditions set by a local authority. Pollutants may be expelled into the river, where they disperse. Two monitoring stations l = 1, 2 are located along the river, at which the local authority has set maximum pollutant concentration levels. The revenue and the expenditure of player j are R_j(x) = [d₁ − d₂(x₁ + x₂ + x₃)]x_j and F_j(x) = (c_{1j} + c_{2j}x_j)x_j, respectively, where the parameters are d₁ = 3.0, d₂ = 0.01, c_{1j} = 0.1, 0.12, 0.15 and c_{2j} = 0.01, 0.05, 0.01 for j = 1, 2, 3, respectively. The profit of player j is

  φ_j(x) = R_j(x) − F_j(x) = [d₁ − d₂(x₁ + x₂ + x₃)]x_j − (c_{1j} + c_{2j}x_j)x_j.

The constraint on emission imposed by the local authority at location l is

  q_l(x) = Σ_{j=1}^{3} e_j u_{lj} x_j ≤ K,  l = 1, 2,

where K = 100, e_j = 0.5, 0.25, 0.75, u_{1j} = 6.5, 5.0, 5.5 and u_{2j} = 4.583, 6.25, 3.75 for j = 1, 2, 3, respectively. The level x_j is nonnegative for j = 1, 2, 3. The players try to maximize their profits φ_j(x) subject to the conditions q_l(x) ≤ K (l = 1, 2) and x ≥ 0.

This problem can be reformulated as (LEP), where C is given by C := {x ∈ R³ | q_l(x) ≤ K, l = 1, 2, x ≥ 0}, with the matrices P, Q being

  P := [ d₂ + c₂₁  d₂        d₂
         d₂        d₂ + c₂₂  d₂
         d₂        d₂        d₂ + c₂₃ ],

  Q := [ d₂ + c₂₁  0         0
         0         d₂ + c₂₂  0
         0         0         d₂ + c₂₃ ],

and the vector r = (c₁₁ − d₁, c₁₂ − d₁, c₁₃ − d₁)ᵀ. Note that since P − Q is not positive definite, this linear equilibrium problem is not strongly monotone. However, since P + Q and Q are positive definite, the assumptions of Lemma 6 are satisfied; applying this lemma, we can choose a matrix W such that the problem is equivalent to a strongly monotone (LEP). In our implementation, we choose W = 0.1 × diag(c₂₁, c₂₂, c₂₃). For this choice, the corresponding parameters in Algorithm 1 are τ = 0.019 and L = 0.055, which imply ρ = 0.0463144. We perform Algorithm 1 with the starting point x⁰ = (0, 0, 0)ᵀ and the tolerance ε = 10⁻⁴; the obtained solution is x̄⁴⁷ = (21.1445, 16.0279, 2.72618)ᵀ after 47 iterations. This solution is identical to the result in [7]. The convergence behavior of Algorithm 1 is also plotted on the right of Fig. 1.
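For completeness, the data of Example 2 translate into the (LEP) format (41)–(42) as sketched below. The snippet is illustrative only; the constants are those listed above, the matrices P, Q, r follow the formulas stated in the text, and the two monitoring-station constraints q_l(x) ≤ K are collected in a single constraint matrix while x ≥ 0 is handled through bounds.

```python
# Illustrative data assembly for Example 2 in the (LEP) format (41)-(42).
import numpy as np

d1, d2 = 3.0, 0.01
c1 = np.array([0.10, 0.12, 0.15])
c2 = np.array([0.01, 0.05, 0.01])
e  = np.array([0.50, 0.25, 0.75])
u1 = np.array([6.5, 5.0, 5.5])
u2 = np.array([4.583, 6.25, 3.75])
K  = 100.0

# f(x, y) = (P x + Q y + r)^T (y - x) with the matrices stated in the text
Q = np.diag(d2 + c2)
P = np.diag(c2) + d2 * np.ones((3, 3))
r = c1 - d1

# C = { x >= 0 : A x <= b } with one row per monitoring station
A = np.vstack([e * u1, e * u2])
b = np.array([K, K])
lo, up = np.zeros(3), np.full(3, np.inf)

print(P, Q, r, A, b, sep="\n")
```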
Acknowledgement The authors would like to thank the anonymous referees for their comments and suggestions that helped to improve the paper.

References

1. Blum, E., Oettli, W.: From optimization and variational inequality to equilibrium problems. Math. Stud. 63(1–4), 123–145 (1994)
2. Cohen, G.: Auxiliary problem principle and decomposition of optimization problems. J. Optim. Theory Appl. 32(3), 277–305 (1980)
3. Cohen, G.: Auxiliary problem principle extended to variational inequalities. J. Optim. Theory Appl. 59(2), 325–333 (1988)
4. Konnov, I.V.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
5. Konnov, I.V.: Application of the proximal point method to nonmonotone equilibrium problems. J. Optim. Theory Appl. 119(3), 317–333 (2003)
6. Konnov, I.V.: Combined relaxation methods for generalized monotone variational inequalities. In: Konnov, I.V., Luc, D.T., Rubinov, A.M. (eds.) Generalized Convexity and Related Topics, pp. 3–31. Springer, Berlin (2007)
7. Krawczyk, J.B., Uryasev, S.: Relaxation algorithms to find Nash equilibria with economic applications. Environ. Model. Assess. 5(1), 63–73 (2000)
8. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. Rech. Opér. 4, 154–159 (1970)
9. Mastroeni, G.: On auxiliary principle for equilibrium problems. In: Daniele, P., Giannessi, F., Maugeri, A. (eds.) Equilibrium Problems and Variational Models, pp. 289–298. Kluwer, Dordrecht (2003)
10. Mastroeni, G.: Gap functions for equilibrium problems. J. Glob. Optim. 27(4), 411–426 (2004)
11. Moudafi, A.: Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 15(1–2), 91–100 (1999)
12. Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 18(12), 1159–1166 (1992)
13. Muu, L.D., Quoc, T.D.: Regularization algorithms for solving monotone Ky Fan inequalities with application to a Nash–Cournot equilibrium model. J. Optim. Theory Appl. 142(1), 185–204 (2009)
14. Nesterov, Y.: Dual extrapolation and its applications to solving variational inequalities and related problems. Math. Program., Ser. B 109(2–3), 319–344 (2007)
15. Nesterov, Y., Scrimali, L.: Solving strongly monotone variational and quasi-variational inequalities. CORE Discussion Paper #107, pp. 1–15 (2006)
16. Nguyen, V.H.: Lecture notes on equilibrium problems. CIUF-CUD Summer School on Optimization and Applied Mathematics, Nha Trang, Vietnam (2002)
17. Noor, M.A.: Auxiliary principle technique for equilibrium problems. J. Optim. Theory Appl. 122(2), 371–386 (2004)
18. Quoc, T.D., Muu, L.D.: Implementable quadratic regularization methods for solving pseudomonotone equilibrium problems. East-West J. Math. 6(2), 101–123 (2004)
19. Quoc, T.D., Muu, L.D., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749–776 (2008)
20. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
21. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
22. Van, N.T.T., Strodiot, J.J., Nguyen, V.H.: A bundle method for solving equilibrium problems. Math. Program. 116(1–2), 529–552 (2009)
