Comput Optim Appl
DOI 10.1007/s10589-016-9857-6

Modified hybrid projection methods for finding common solutions to variational inequality problems

Dang Van Hieu · Pham Ky Anh · Le Dung Muu

Received: 29 February 2016
© Springer Science+Business Media New York 2016

Abstract  In this paper we propose several modified hybrid projection methods for solving common solutions to variational inequality problems involving monotone and Lipschitz continuous operators. Based on differently constructed half-spaces, the proposed methods reduce the number of projections onto feasible sets as well as the number of operator values that need to be computed. Strong convergence theorems are established under standard assumptions imposed on the operators. An extension of the proposed algorithm to a system of generalized equilibrium problems is considered, and numerical experiments are also presented.

Keywords  Variational inequality · Equilibrium problem · Generalized equilibrium problem · Gradient method · Extragradient method

Mathematics Subject Classification  65Y05 · 65K15 · 68W10 · 47H05 · 47H10

Corresponding author: Le Dung Muu (ldmuu@math.ac.vn) · Dang Van Hieu (dv.hieu83@gmail.com) · Pham Ky Anh (anhpk@vnu.edu.vn)
Department of Mathematics, Vietnam National University, Hanoi, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam
Institute of Mathematics, VAST, Hanoi, 18 Hoang Quoc Viet, Hanoi, Vietnam

1 Introduction

Let $\{K_i\}_{i=1}^N$ be nonempty closed convex subsets of a real Hilbert space $H$ such that $K=\bigcap_{i=1}^N K_i\neq\emptyset$. We consider the following problem of finding common solutions to variational inequality problems (CSVIP), introduced in [11–13].

Problem 1  Find $x^*\in K=\bigcap_{i=1}^N K_i$ such that
\[
\langle A_i(x^*),x-x^*\rangle\ge 0,\quad\forall x\in K_i,\ i=1,\dots,N, \tag{1}
\]
where $\{A_i\}_{i=1}^N:K_i\to H$ are given operators.

In what follows, we assume that each operator $A_i$ satisfies the following assumptions:

(G1) $A_i$ is monotone on $K_i$;
(G2) $A_i$ is $L$-Lipschitz continuous on $K_i$;
(G3) the solution set $F$ of Problem 1 is nonempty.

If $N=1$ then CSVIP (1) becomes the classical variational inequality problem (VIP) [17,20,26,27]: Find $x^*\in K$ such that
\[
\langle A(x^*),x-x^*\rangle\ge 0,\quad\forall x\in K, \tag{2}
\]
where $A:K\to H$ is a monotone and $L$-Lipschitz continuous operator and $K$ is a nonempty closed convex subset of $H$. Let us denote the solution set of VIP (2) by $VI(A,K)$.

Problem 1 is a generalization of many other problems, including convex feasibility problems, common fixed point problems, common minimizer problems, common saddle-point problems, hierarchical variational inequality problems, variational inequality problems over the intersection of convex sets, etc.; see [3–5,7,12,21,22].

In this paper we focus on projection methods, which together with regularization methods are fundamental tools for solving VIPs with monotone and Lipschitz continuous mappings. The extragradient method was first introduced by Korpelevich [28] in 1976 for the saddle point problem and then was extended to VIPs. It was proved in [28] that, in a finite dimensional space, the sequence $\{x_n\}$ defined by
\[
y_n=P_K(x_n-\lambda A(x_n)),\qquad x_{n+1}=P_K(x_n-\lambda A(y_n)), \tag{3}
\]
where $\lambda\in(0,\tfrac1L)$, converges to some point in $VI(A,K)$. However, in infinite dimensional Hilbert spaces, the extragradient method only converges weakly. In recent years, the extragradient method has received a lot of attention; see, for example, [10,14,15,24,30,31] and the references therein.
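For illustration only, the following is a minimal Python/NumPy sketch of the extragradient iteration (3) for a finite dimensional VIP. The operator `A`, the projector `proj_K`, the step size and the stopping rule below are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

def extragradient(A, proj_K, x0, lam, tol=1e-6, max_iter=1000):
    """Extragradient iteration (3): y_n = P_K(x_n - lam*A(x_n)),
    x_{n+1} = P_K(x_n - lam*A(y_n)), with lam in (0, 1/L)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_K(x - lam * A(x))        # predictor step
        x_new = proj_K(x - lam * A(y))    # corrector step
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Example data (illustrative): A(x) = M x + q over K = R^2_+
M = np.array([[2.0, 1.0], [-1.0, 2.0]])   # monotone, since M + M^T is positive semidefinite
q = np.array([-1.0, -1.0])
A = lambda x: M @ x + q
proj_K = lambda x: np.maximum(x, 0.0)     # projection onto the nonnegative orthant
L = np.linalg.norm(M, 2)                  # Lipschitz constant of A
x_approx = extragradient(A, proj_K, np.zeros(2), lam=0.5 / L)
```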
Nadezhkina and Takahashi [32] introduced the following hybrid extragradient method:
\[
\begin{cases}
y_n=P_K(x_n-\lambda A(x_n)),\\
z_n=P_K(x_n-\lambda A(y_n)),\\
C_n=\{z\in K:\|z-z_n\|\le\|z-x_n\|\},\\
Q_n=\{z\in K:\langle x_0-x_n,z-x_n\rangle\le 0\},\\
x_{n+1}=P_{C_n\cap Q_n}(x_0),
\end{cases} \tag{4}
\]
where $\lambda\in(0,\tfrac1L)$. They proved that the sequence $\{x_n\}$ generated by (4) converges strongly to $P_{VI(A,K)}(x_0)$.

For solving CSVIP (1) with $N>1$, Censor et al. [12] proposed a strongly convergent hybrid algorithm (CGRS's method) in which all operators $A_i$, $i=1,\dots,N$, are multi-valued mappings from $H$ to $H$. For the sake of simplicity, we recall this algorithm when the mappings $A_i$, $i=1,\dots,N$, are single-valued, $\lambda_n^i=\lambda$ and $\gamma_n^i=\tfrac12$ (see Algorithm 3.1 in [12] for more details), as follows.

Algorithm 1.1 (CGRS's method)
\[
\begin{cases}
y_n^i=P_{K_i}(x_n-\lambda A_i(x_n)),\quad i=1,\dots,N,\\
z_n^i=P_{K_i}(x_n-\lambda A_i(y_n^i)),\quad i=1,\dots,N,\\
C_n^i=\{z\in H:\|z-z_n^i\|\le\|z-x_n\|\},\\
Q_n=\{z\in H:\langle x_0-x_n,z-x_n\rangle\le 0\},\\
x_{n+1}=P_{C_n\cap Q_n}(x_0),
\end{cases} \tag{5}
\]
where $C_n=\bigcap_{i=1}^N C_n^i$. As observed in [10], Algorithm 1.1 requires $2N$ projections onto the feasible sets $K_i$ and $2N$ values of $A_i$ per iteration. These might be costly when the feasible sets $K_i$ and the operators $A_i$ have complex structures, as in large-scale VIPs arising from optimal control of systems governed by partial differential equations [29].

In this paper, motivated and inspired by the results of Censor et al. [12] and Malitsky and Semenov [31], we introduce the following hybrid algorithm for solving CSVIP (1).

Algorithm 1.2 (Modified hybrid projection method for CSVIPs)
Initialization: Choose $x_0=x_1\in H$, $y_0^i\in K_i$, $y_1^i=P_{K_i}(x_0-\lambda A_i(y_0^i))$. Set $C_0=Q_0=H$. The parameters $\lambda,k$ satisfy the conditions
\[
0<\lambda<\frac{1}{2L},\qquad k>\frac{1}{1-2\lambda L}.
\]
Iterative step: for $n\ge 1$, compute
\[
\begin{cases}
y_{n+1}^i=P_{K_i}(x_n-\lambda A_i(y_n^i)),\quad i=1,\dots,N,\\[2pt]
\epsilon_n^i=k\|x_n-x_{n-1}\|^2+\lambda L\|y_n^i-y_{n-1}^i\|^2-\bigl(1-\tfrac1k-\lambda L\bigr)\|y_{n+1}^i-y_n^i\|^2,\\[2pt]
C_n^i=\{v\in H:\|y_{n+1}^i-v\|^2\le\|x_n-v\|^2+\epsilon_n^i\},\\[2pt]
Q_n=\{v\in H:\langle v-x_n,x_0-x_n\rangle\le 0\},\\[2pt]
x_{n+1}=P_{C_n\cap Q_n}(x_0),
\end{cases}
\]
where $C_n=\bigcap_{i=1}^N C_n^i$.

[Fig. 1  Iterative step of Algorithm 1.1 (CGRS's method) for $N=2$. The number of projections onto the feasible sets $K_i$ and of computed values of $A_i$ is 4.]

Algorithm 1.2 needs only $N$ projections onto the feasible sets $K_i$ and $N$ values of $A_i$ per iteration. Thus, based on slightly different half-spaces $C_n^i$, as suggested in [31], we can halve the number of computations required by Algorithm 1.1. Besides, Algorithm 1.1 requires the monotonicity and Lipschitz continuity of the operators $A_i$ on the whole space $H$, while in Algorithm 1.2 these properties are assumed to hold only on the feasible sets $K_i$. Note that $C_n^i$ and $Q_n$ are either closed half-spaces or the whole space. Therefore, the projection $x_{n+1}=P_{C_n\cap Q_n}(x_0)$ can be found by Haugazeau's method [8, Corollary 29.8] or by any available method of convex quadratic programming [9, Chapter 8]; a computational sketch of one iterative step is given at the end of this section. Some numerical experiments comparing Algorithm 1.2 with Algorithm 1.1 are performed. Figures 1 and 2 illustrate the iterative steps of Algorithms 1.1 and 1.2 for the case $N=2$, respectively.

This paper is organized as follows: In Sect. 2 we recall some definitions and preliminary results used in the paper. Section 3 deals with the convergence analysis of Algorithm 1.2 and its modification. Section 4 presents an extension of Algorithm 1.2 to generalized equilibrium problems. In Sect. 5, we perform some numerical experiments to illustrate the proposed algorithms in comparison with Algorithm 1.1.
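To make the iterative step concrete, here is a minimal finite dimensional sketch in Python/NumPy (not from the paper). It uses the fact that $C_n^i=\{v:\|y_{n+1}^i-v\|^2\le\|x_n-v\|^2+\epsilon_n^i\}$ is the half-space $2\langle x_n-y_{n+1}^i,v\rangle\le\|x_n\|^2-\|y_{n+1}^i\|^2+\epsilon_n^i$, and likewise $Q_n$ is the half-space $\langle x_0-x_n,v\rangle\le\langle x_0-x_n,x_n\rangle$, so each set can be stored as a pair $(a,b)$ describing $\{v:\langle a,v\rangle\le b\}$. The projection onto the intersection is computed here by Dykstra's algorithm; this choice, the helper names and the function signatures are our own (the paper suggests Haugazeau's method or a QP solver).

```python
import numpy as np

def proj_halfspace(v, a, b):
    """Projection onto the half-space {z : <a, z> <= b} (whole space if a = 0)."""
    viol = np.dot(a, v) - b
    if viol <= 0 or not np.any(a):
        return v
    return v - (viol / np.dot(a, a)) * a

def proj_intersection(x0, halfspaces, n_sweeps=500):
    """Dykstra's algorithm: projection of x0 onto an intersection of half-spaces."""
    x = np.array(x0, dtype=float)
    p = [np.zeros_like(x) for _ in halfspaces]    # Dykstra correction terms
    for _ in range(n_sweeps):
        for j, (a, b) in enumerate(halfspaces):
            y = proj_halfspace(x + p[j], a, b)
            p[j] = x + p[j] - y
            x = y
    return x

def algorithm_1_2_step(x_prev, x_cur, y_prev, y_cur, A_list, proj_K_list, x0, lam, L, k):
    """One iterative step of Algorithm 1.2 (sketch): returns (x_{n+1}, [y_{n+1}^i])."""
    halfspaces, y_next = [], []
    for i, (A_i, proj_Ki) in enumerate(zip(A_list, proj_K_list)):
        y_new = proj_Ki(x_cur - lam * A_i(y_cur[i]))                  # y_{n+1}^i
        eps = (k * np.sum((x_cur - x_prev) ** 2)
               + lam * L * np.sum((y_cur[i] - y_prev[i]) ** 2)
               - (1 - 1 / k - lam * L) * np.sum((y_new - y_cur[i]) ** 2))
        # C_n^i rewritten as the linear inequality <a, v> <= b
        a = 2 * (x_cur - y_new)
        b = np.dot(x_cur, x_cur) - np.dot(y_new, y_new) + eps
        halfspaces.append((a, b))
        y_next.append(y_new)
    # Q_n rewritten as a half-space
    halfspaces.append((x0 - x_cur, np.dot(x0 - x_cur, x_cur)))
    x_next = proj_intersection(x0, halfspaces)                        # x_{n+1} = P_{C_n ∩ Q_n}(x_0)
    return x_next, y_next
```

Since $F\subset C_n\cap Q_n$ under the standing assumptions, the intersection is nonempty and Dykstra's iteration converges to the required projection.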
2 Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. We begin with some concepts of monotonicity of an operator; see [1,25]. An operator $A:C\to H$ is said to be

(i) strongly monotone on $C$ if there exists a constant $\eta>0$ such that $\langle A(x)-A(y),x-y\rangle\ge\eta\|x-y\|^2$ for all $x,y\in C$;
(ii) monotone on $C$ if $\langle A(x)-A(y),x-y\rangle\ge 0$ for all $x,y\in C$;
(iii) $\alpha$-inverse strongly monotone on $C$ if there exists a positive constant $\alpha$ such that $\langle A(x)-A(y),x-y\rangle\ge\alpha\|A(x)-A(y)\|^2$ for all $x,y\in C$;
(iv) maximal monotone if it is monotone and its graph $G(A):=\{(x,A(x)):x\in C\}$ is not a proper subset of the graph of any other monotone mapping;
(v) $L$-Lipschitz continuous on $C$ if there exists a positive constant $L$ such that $\|A(x)-A(y)\|\le L\|x-y\|$ for all $x,y\in C$.

[Fig. 2  Iterative step of Algorithm 1.2 (the proposed algorithm) for $N=2$. The number of projections onto the feasible sets $K_i$ and of computed values of $A_i$ is 2.]

We have the following result.

Lemma 2.1 [37] Let $C$ be a nonempty closed convex subset of a Hilbert space $H$ and let $A$ be a monotone, hemicontinuous mapping of $C$ into $H$. Then
\[
VI(A,C)=\{u\in C:\langle v-u,A(v)\rangle\ge 0,\ \forall v\in C\}.
\]

Remark 2.1 Lemma 2.1 ensures that the solution set of VIP (2) is closed and convex.

For every $x\in H$, the metric projection $P_Cx$ of $x$ onto $C$ is defined by
\[
P_Cx=\arg\min\{\|y-x\|:y\in C\}.
\]
Since $C$ is a nonempty closed convex subset of $H$, $P_Cx$ exists and is unique. It is well known that the metric projection $P_C:H\to C$ has the following characterizations (a numerical illustration is given at the end of this section).

Lemma 2.2 [1,18] Let $P_C:H\to C$ be the metric projection from $H$ onto a nonempty closed convex subset $C$ of $H$. Then
(i) $P_C$ is 1-inverse strongly monotone on $H$, i.e., for all $x,y\in H$,
\[
\langle P_Cx-P_Cy,x-y\rangle\ge\|P_Cx-P_Cy\|^2;
\]
(ii) for all $y\in H$ and $x\in C$,
\[
\|x-P_Cy\|^2+\|P_Cy-y\|^2\le\|x-y\|^2; \tag{6}
\]
(iii) $z=P_Cx$ if and only if
\[
\langle x-z,z-y\rangle\ge 0,\quad\forall y\in C. \tag{7}
\]

The normal cone $N_C$ of $C$ at a point $x\in C$ is defined by
\[
N_C(x)=\{w\in H:\langle w,x-y\rangle\ge 0,\ \forall y\in C\}.
\]
The following lemmas will be used for proving the convergence theorems in Sect. 3.

Lemma 2.3 [36] Let $C$ be a nonempty closed convex subset of a Hilbert space $H$ and let $A$ be a monotone and hemicontinuous mapping of $C$ into $H$ with $D(A)=C$. Let $Q$ be the mapping defined by
\[
Q(x)=\begin{cases} Ax+N_C(x) & \text{if } x\in C,\\ \emptyset & \text{if } x\notin C.\end{cases}
\]
Then $Q$ is maximal monotone and $Q^{-1}(0)=VI(A,C)$.

Lemma 2.4 [31] Let $\{a_n\},\{b_n\},\{c_n\}$ be nonnegative real sequences and $\alpha,\beta\in\mathbb{R}$ such that for all $n\ge 0$ the following inequality holds:
\[
a_n\le b_n+\beta c_n-\alpha c_{n+1}.
\]
If $\sum_{n=0}^\infty b_n<+\infty$ and $\alpha>\beta\ge 0$, then $\lim_{n\to\infty}a_n=0$.
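As a quick numerical illustration of Lemma 2.2 (not part of the original text), consider the closed unit ball $C=B[0,1]$, for which the projection has the closed form $P_Cx=x/\max(1,\|x\|)$. The snippet below checks inequality (6) and the characterization (7) on random points; the tolerances are arbitrary.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Metric projection onto the closed ball B[0, radius]."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=5) * 3.0          # arbitrary point of H = R^5
    y = proj_ball(rng.normal(size=5))     # arbitrary point of C = B[0,1]
    z = proj_ball(x)
    # Lemma 2.2(iii): <x - z, z - y> >= 0 for all y in C
    assert np.dot(x - z, z - y) >= -1e-12
    # Lemma 2.2(ii): ||y - z||^2 + ||z - x||^2 <= ||y - x||^2
    assert np.dot(y - z, y - z) + np.dot(z - x, z - x) <= np.dot(y - x, y - x) + 1e-12
```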
3 Convergence analysis

In this section, we prove the convergence of Algorithm 1.2 and propose a modification of it.

Theorem 3.1 Let $K_i$, $i=1,\dots,N$, be closed convex subsets of a real Hilbert space $H$ such that $K=\bigcap_{i=1}^NK_i\ne\emptyset$. Suppose that the operators $\{A_i\}_{i=1}^N:K_i\to H$ satisfy the conditions (G1)–(G3). Then the sequences $\{x_n\}$, $\{y_n^i\}$ generated by Algorithm 1.2 converge strongly to $P_F(x_0)$.

Proof We divide the proof of Theorem 3.1 into four steps.

Claim 1. The following estimate holds:
\[
\|y_{n+1}^i-x^*\|^2\le\|x_n-x^*\|^2+\epsilon_n^i \tag{8}
\]
for each $x^*\in F$ and $i=1,\dots,N$.

Indeed, by $y_{n+1}^i=P_{K_i}(x_n-\lambda A_i(y_n^i))$ and Lemma 2.2(ii), we have
\[
\begin{aligned}
\|y_{n+1}^i-x^*\|^2&\le\|x_n-\lambda A_i(y_n^i)-x^*\|^2-\|x_n-\lambda A_i(y_n^i)-y_{n+1}^i\|^2\\
&=\|x_n-x^*\|^2-\|x_n-y_{n+1}^i\|^2-2\lambda\langle A_i(y_n^i),y_{n+1}^i-x^*\rangle.
\end{aligned} \tag{9}
\]
The second term in the right-hand side of (9) can be estimated as follows:
\[
\begin{aligned}
\|x_n-y_{n+1}^i\|^2&=\|x_n-x_{n-1}\|^2+2\langle x_n-x_{n-1},x_{n-1}-y_{n+1}^i\rangle+\|x_{n-1}-y_{n+1}^i\|^2\\
&=\|x_n-x_{n-1}\|^2+2\langle x_n-x_{n-1},x_{n-1}-y_{n+1}^i\rangle+\|x_{n-1}-y_n^i\|^2\\
&\quad+2\langle x_{n-1}-y_n^i,y_n^i-y_{n+1}^i\rangle+\|y_n^i-y_{n+1}^i\|^2.
\end{aligned}\tag{10}
\]
By the triangle inequality, the Cauchy–Schwarz inequality and the inequality of arithmetic and geometric means, we get
\[
\begin{aligned}
2\langle x_n-x_{n-1},x_{n-1}-y_{n+1}^i\rangle&\ge-2\|x_n-x_{n-1}\|\,\|x_{n-1}-y_n^i\|-2\|x_n-x_{n-1}\|\,\|y_n^i-y_{n+1}^i\|\\
&\ge-\|x_n-x_{n-1}\|^2-\|x_{n-1}-y_n^i\|^2-k\|x_n-x_{n-1}\|^2-\frac1k\|y_n^i-y_{n+1}^i\|^2.
\end{aligned}
\]
Thus,
\[
\|x_n-x_{n-1}\|^2+2\langle x_n-x_{n-1},x_{n-1}-y_{n+1}^i\rangle+\|x_{n-1}-y_n^i\|^2\ge-k\|x_n-x_{n-1}\|^2-\frac1k\|y_n^i-y_{n+1}^i\|^2. \tag{11}
\]
From the relations (10) and (11), we conclude
\[
\|x_n-y_{n+1}^i\|^2\ge-k\|x_n-x_{n-1}\|^2+\Bigl(1-\frac1k\Bigr)\|y_n^i-y_{n+1}^i\|^2+2\langle x_{n-1}-y_n^i,y_n^i-y_{n+1}^i\rangle. \tag{12}
\]
Next, we estimate the third term in the right-hand side of (9). From $x^*\in VI(A_i,K_i)$ and Lemma 2.1, we obtain $\langle A_i(y_n^i),y_n^i-x^*\rangle\ge 0$. Thus, by $\lambda>0$, the Lipschitz continuity of $A_i$, the Cauchy–Schwarz inequality and the Cauchy inequality, we find
\[
\begin{aligned}
2\lambda\langle A_i(y_n^i),y_{n+1}^i-x^*\rangle&=2\lambda\langle A_i(y_n^i),y_{n+1}^i-y_n^i\rangle+2\lambda\langle A_i(y_n^i),y_n^i-x^*\rangle\\
&\ge2\lambda\langle A_i(y_n^i),y_{n+1}^i-y_n^i\rangle\\
&=2\lambda\langle A_i(y_n^i)-A_i(y_{n-1}^i),y_{n+1}^i-y_n^i\rangle+2\lambda\langle A_i(y_{n-1}^i),y_{n+1}^i-y_n^i\rangle\\
&\ge-2\lambda L\|y_n^i-y_{n-1}^i\|\,\|y_{n+1}^i-y_n^i\|+2\lambda\langle A_i(y_{n-1}^i),y_{n+1}^i-y_n^i\rangle\\
&\ge-\lambda L\|y_n^i-y_{n-1}^i\|^2-\lambda L\|y_{n+1}^i-y_n^i\|^2+2\lambda\langle A_i(y_{n-1}^i),y_{n+1}^i-y_n^i\rangle.
\end{aligned}
\]
This together with (12) implies that
\[
\begin{aligned}
\|x_n-y_{n+1}^i\|^2+2\lambda\langle A_i(y_n^i),y_{n+1}^i-x^*\rangle&\ge-k\|x_n-x_{n-1}\|^2+\Bigl(1-\frac1k-\lambda L\Bigr)\|y_n^i-y_{n+1}^i\|^2\\
&\quad-\lambda L\|y_n^i-y_{n-1}^i\|^2+2\langle x_{n-1}-\lambda A_i(y_{n-1}^i)-y_n^i,y_n^i-y_{n+1}^i\rangle\\
&=-\epsilon_n^i+2\langle x_{n-1}-\lambda A_i(y_{n-1}^i)-y_n^i,y_n^i-y_{n+1}^i\rangle.
\end{aligned}\tag{13}
\]
Since $y_n^i=P_{K_i}(x_{n-1}-\lambda A_i(y_{n-1}^i))$ and $y_{n+1}^i\in K_i$, Lemma 2.2(iii) gives
\[
\langle x_{n-1}-\lambda A_i(y_{n-1}^i)-y_n^i,y_n^i-y_{n+1}^i\rangle\ge 0.
\]
Thus, from (13) we get
\[
\|x_n-y_{n+1}^i\|^2+2\lambda\langle A_i(y_n^i),y_{n+1}^i-x^*\rangle\ge-\epsilon_n^i. \tag{14}
\]
From (9) and (14), we obtain the desired inequality (8).
Claim 2. The sets $F$, $C_n$, $Q_n$ are closed and convex, $F\subset C_n\cap Q_n$ for all $n\ge0$, and
\[
\lim_{n\to\infty}\|x_{n+1}-x_n\|=\lim_{n\to\infty}\|y_n^i-x_n\|=\lim_{n\to\infty}\|y_{n+1}^i-y_n^i\|=0,\quad\forall i=1,\dots,N. \tag{15}
\]
From their definitions, $C_n^i$ and $Q_n$ are closed half-spaces or the whole space $H$; hence they are closed convex subsets for all $n\ge0$. Claim 1 and the definition of $C_n^i$ ensure that $F\subset C_n^i$ for all $n\ge1$ and $i=1,\dots,N$. It is clear that $F\subset C_0\cap Q_0$. Assume that $F\subset C_n\cap Q_n$ for some $n\ge0$. From $x_{n+1}=P_{C_n\cap Q_n}(x_0)$ and Lemma 2.2(iii), we see that $\langle z-x_{n+1},x_0-x_{n+1}\rangle\le0$ for all $z\in C_n\cap Q_n$; in particular, this is also true for $z\in F\subset C_n\cap Q_n$. By the definition of $Q_{n+1}$, $F\subset Q_{n+1}$. Hence, $F\subset C_{n+1}\cap Q_{n+1}$. Thus, by induction, $F\subset C_n\cap Q_n$ for all $n\ge0$.

From the definition of $Q_n$ and Lemma 2.2(iii), $x_n=P_{Q_n}(x_0)$. It follows from Lemma 2.2(ii) that
\[
\|z-x_n\|^2\le\|z-x_0\|^2-\|x_n-x_0\|^2,\quad\forall z\in Q_n. \tag{16}
\]
Substituting $z=x^\dagger:=P_F(x_0)\in Q_n$ into inequality (16), one has
\[
\|x^\dagger-x_0\|^2-\|x_n-x_0\|^2\ge\|x^\dagger-x_n\|^2\ge0, \tag{17}
\]
which implies that the sequence $\{\|x_n-x_0\|\}$, and therefore $\{x_n\}$, is bounded. Substituting $z=x_{n+1}\in Q_n$ into inequality (16), one also gets
\[
0\le\|x_{n+1}-x_n\|^2\le\|x_{n+1}-x_0\|^2-\|x_n-x_0\|^2. \tag{18}
\]
This implies that $\{\|x_n-x_0\|\}$ is non-decreasing; thus the limit of $\{\|x_n-x_0\|\}$ exists. From the relation (18), it follows that
\[
\sum_{n=1}^K\|x_{n+1}-x_n\|^2\le\|x_{K+1}-x_0\|^2-\|x_1-x_0\|^2,\quad\forall K\ge1.
\]
Passing to the limit in the last inequality as $K\to\infty$, we obtain
\[
\sum_{n=1}^\infty\|x_{n+1}-x_n\|^2<+\infty, \tag{19}
\]
hence
\[
\lim_{n\to\infty}\|x_{n+1}-x_n\|=0. \tag{20}
\]
Since $x_{n+1}\in C_n:=\bigcap_{i=1}^NC_n^i$, from the definition of $C_n^i$ we find
\[
\|y_{n+1}^i-x_{n+1}\|^2\le\|x_n-x_{n+1}\|^2+\epsilon_n^i. \tag{21}
\]
Set $a_n^i=\|y_{n+1}^i-x_{n+1}\|^2$, $b_n=\|x_n-x_{n+1}\|^2+k\|x_n-x_{n-1}\|^2$, $c_n^i=\|y_n^i-y_{n-1}^i\|^2$, $\beta=\lambda L$, and $\alpha=1-\frac1k-\lambda L$. Taking into account the definition of $\epsilon_n^i$,
\[
\|x_n-x_{n+1}\|^2+\epsilon_n^i=b_n+\beta c_n^i-\alpha c_{n+1}^i,
\]
and using relation (21), we come to the inequalities
\[
a_n^i\le b_n+\beta c_n^i-\alpha c_{n+1}^i \tag{22}
\]
for each fixed $i\in\{1,\dots,N\}$. From the hypotheses on $\lambda$, $k$ and relation (19), we see that $\alpha>\beta\ge0$ and $\sum_{n=1}^\infty b_n<+\infty$. Relation (22) and Lemma 2.4 ensure that $a_n^i\to0$, or
\[
\lim_{n\to\infty}\|y_{n+1}^i-x_{n+1}\|=0,\quad\forall i=1,\dots,N. \tag{23}
\]
This together with relation (20) and the inequality $\|y_{n+1}^i-y_n^i\|\le\|y_{n+1}^i-x_{n+1}\|+\|x_{n+1}-x_n\|+\|x_n-y_n^i\|$ implies that
\[
\lim_{n\to\infty}\|y_{n+1}^i-y_n^i\|=0. \tag{24}
\]
Moreover, by (23), the sequence $\{y_n^i\}$ is bounded because of the boundedness of $\{x_n\}$.

Claim 3. If $p$ is any weak cluster point of $\{x_n\}$, then $p\in F$.
For each $i=1,\dots,N$, set
\[
Q_i(x)=\begin{cases}A_ix+N_{K_i}(x)&\text{if }x\in K_i,\\ \emptyset&\text{if }x\notin K_i,\end{cases}
\]
where $N_{K_i}(\cdot)$ is the normal cone of $K_i$. Since $A_i$ is monotone and Lipschitz continuous, from Lemma 2.3 we see that $Q_i$ is maximal monotone and $Q_i^{-1}(0)=VI(A_i,K_i)$. For each pair $(x,y)$ in the graph of $Q_i$, i.e., $(x,y)\in G(Q_i)$, one has $y-A_i(x)\in N_{K_i}(x)$. By the definition of $N_{K_i}(x)$,
\[
\langle x-z,y-A_i(x)\rangle\ge0,\quad\forall z\in K_i.
\]
Substituting $z=y_{n+1}^i\in K_i$ into the last inequality, one gets
\[
\langle x-y_{n+1}^i,y\rangle\ge\langle x-y_{n+1}^i,A_i(x)\rangle. \tag{25}
\]
By $y_{n+1}^i=P_{K_i}(x_n-\lambda A_i(y_n^i))$ and Lemma 2.2(iii), we obtain
\[
\langle x-y_{n+1}^i,y_{n+1}^i-x_n+\lambda A_i(y_n^i)\rangle\ge0,
\]
which implies that
\[
\langle x-y_{n+1}^i,A_i(y_n^i)\rangle\ge\Bigl\langle x-y_{n+1}^i,\frac{x_n-y_{n+1}^i}{\lambda}\Bigr\rangle. \tag{26}
\]
The relations (25), (26) and the monotonicity of $A_i$ lead to
\[
\begin{aligned}
\langle x-y_{n+1}^i,y\rangle&\ge\langle x-y_{n+1}^i,A_i(x)\rangle\\
&=\langle x-y_{n+1}^i,A_i(x)-A_i(y_{n+1}^i)\rangle+\langle x-y_{n+1}^i,A_i(y_{n+1}^i)-A_i(y_n^i)\rangle+\langle x-y_{n+1}^i,A_i(y_n^i)\rangle\\
&\ge\langle x-y_{n+1}^i,A_i(y_{n+1}^i)-A_i(y_n^i)\rangle+\Bigl\langle x-y_{n+1}^i,\frac{x_n-y_{n+1}^i}{\lambda}\Bigr\rangle.
\end{aligned}\tag{27}
\]
From the Lipschitz continuity of $A_i$ and $\|y_{n+1}^i-y_n^i\|\to0$, one gets
\[
\lim_{n\to\infty}\|A_i(y_{n+1}^i)-A_i(y_n^i)\|=0. \tag{28}
\]
Assume that there exists a subsequence of $\{x_n\}$ converging weakly to $p$. Without loss of generality, we can write $x_n\rightharpoonup p$ as $n\to\infty$. Since $\|x_n-y_{n+1}^i\|\to0$, $y_n^i\rightharpoonup p$ as $n\to\infty$. Passing to the limit in (27) as $n\to\infty$ and employing relation (28) and the boundedness of $\{y_n^i\}$, we obtain $\langle x-p,y\rangle\ge0$ for all $(x,y)\in G(Q_i)$. Thus, from the maximal monotonicity of $Q_i$ and Lemma 2.3, one has $p\in Q_i^{-1}(0)=VI(A_i,K_i)$ for all $1\le i\le N$. Hence, $p\in F$.

Claim 4. The sequences $\{x_n\}$ and $\{y_n^i\}$ converge strongly to $x^\dagger:=P_F(x_0)$.
From (17) we obtain $\|x_n-x_0\|\le\|x^\dagger-x_0\|$ for all $n\ge0$. This together with the weak lower semicontinuity of the norm $\|\cdot\|$ implies that
\[
\|p-x_0\|\le\liminf_{n\to\infty}\|x_n-x_0\|\le\limsup_{n\to\infty}\|x_n-x_0\|\le\|x^\dagger-x_0\|.
\]
By the definition of $x^\dagger$, $p=x^\dagger$ and $\lim_{n\to\infty}\|x_n-x_0\|=\|x^\dagger-x_0\|$. Finally, since $x_n-x_0\rightharpoonup x^\dagger-x_0$ and $\|x_n-x_0\|\to\|x^\dagger-x_0\|$, the Kadec–Klee property of $H$ ensures that $x_n-x_0\to x^\dagger-x_0$, or $x_n\to x^\dagger=P_F(x_0)$ as $n\to\infty$. By the uniqueness of $x^\dagger$, the whole sequence $\{x_n\}$ converges strongly to $x^\dagger$. From Claim 2, we conclude that $\{y_n^i\}$ also converges strongly to $x^\dagger=P_F(x_0)$. The proof of Theorem 3.1 is complete.

Next, we propose a modification of Algorithm 1.2.

Algorithm 3.3 (A modified hybrid shrinking projection method for CSVIPs)
Initialization: Choose $x_0=x_1\in H$, $y_0^i\in K_i$, $y_1^i=P_{K_i}(x_0-\lambda A_i(y_0^i))$ and set $C_1^i=H$. The parameters $\lambda,k$ satisfy the conditions
\[
0<\lambda<\frac{1}{2L},\qquad k>\frac{1}{1-2\lambda L}.
\]
Iterative step: for $n\ge1$, compute
\[
\begin{cases}
y_{n+1}^i=P_{K_i}(x_n-\lambda A_i(y_n^i)),\quad i=1,\dots,N,\\[2pt]
C_{n+1}^i=\{v\in C_n^i:\|y_{n+1}^i-v\|^2\le\|x_n-v\|^2+\epsilon_n^i\},\\[2pt]
x_{n+1}=P_{C_{n+1}}(x_0),
\end{cases}
\]
where $C_{n+1}=\bigcap_{i=1}^NC_{n+1}^i$ and $\epsilon_n^i$ is defined as in Algorithm 1.2.
Remark 3.1 By induction, one can show that $C_n$ is the intersection of finitely many closed half-spaces. In fact, the number of half-spaces increases precisely by $N$ after each iterative step. However, for our tested problems, Algorithm 3.3 converges more quickly than Algorithms 1.1 and 1.2 due to the shrinking property of the sequence $\{C_n\}$.

Theorem 3.2 The conclusion of Theorem 3.1 remains true for Algorithm 3.3.

Proof By similar arguments as in Claim 1 of Theorem 3.1, we obtain
\[
\|y_{n+1}^i-x^*\|^2\le\|x_n-x^*\|^2+\epsilon_n^i,\quad\forall x^*\in F,\ \forall i=1,\dots,N. \tag{29}
\]
It is clear that $F\subset C_1=\bigcap_{i=1}^NC_1^i$. Assume that $F\subset C_n$ for some $n\ge1$. From the definition of $C_n$, $F\subset C_n^i$. This together with the definition of $C_{n+1}^i$ and relation (29) implies that $F\subset C_{n+1}^i$. Thus, $F\subset\bigcap_{i=1}^NC_{n+1}^i=C_{n+1}$. By induction, $F\subset C_n$ for all $n\ge1$. From $x_n=P_{C_n}(x_0)$ and Lemma 2.2(ii), we have
\[
\|u-x_n\|^2+\|x_n-x_0\|^2\le\|u-x_0\|^2,\quad\forall u\in C_n. \tag{30}
\]
Substituting $u=x^\dagger:=P_F(x_0)\in C_n$ into inequality (30), we obtain $\|x^\dagger-x_n\|^2+\|x_n-x_0\|^2\le\|x^\dagger-x_0\|^2$. Thus $\|x_n-x_0\|^2\le\|x^\dagger-x_0\|^2$; hence the sequence $\{\|x_n-x_0\|^2\}$ is bounded. Again, substituting $u=x_{n+1}\in C_{n+1}\subset C_n$ into (30), we find
\[
\|x_{n+1}-x_n\|^2+\|x_n-x_0\|^2\le\|x_{n+1}-x_0\|^2, \tag{31}
\]
which implies $\|x_n-x_0\|^2\le\|x_{n+1}-x_0\|^2$, i.e., the sequence $\{\|x_n-x_0\|^2\}$ is non-decreasing. Hence, the limit of the sequence $\{\|x_n-x_0\|^2\}$ exists. From relation (31) we have $\|x_{n+1}-x_n\|^2\le\|x_{n+1}-x_0\|^2-\|x_n-x_0\|^2$. Thus
\[
\sum_{n=1}^J\|x_{n+1}-x_n\|^2\le\|x_{J+1}-x_0\|^2-\|x_1-x_0\|^2,\quad\forall J\ge1,
\]
which implies that $\sum_{n=1}^\infty\|x_{n+1}-x_n\|^2<+\infty$. The rest of the proof of Theorem 3.2 is similar to that of Theorem 3.1.

4 An extension to finitely many generalized equilibrium problems

Let $K_i$, $i=1,\dots,N$, be nonempty closed convex subsets of a real Hilbert space $H$ such that $K:=\bigcap_{i=1}^NK_i\ne\emptyset$. Let $f_i:K_i\times K_i\to\mathbb{R}$ be bifunctions such that $f_i(x,x)=0$ for all $x\in K_i$, and let $A_i:K_i\to H$ be operators. In this section, we consider the following problem of finding common solutions to generalized equilibrium problems (CSGEP) [23,33,34].

Problem 2 Find $x^*\in K$ such that
\[
f_i(x^*,y)+\langle A_i(x^*),y-x^*\rangle\ge0,\quad\forall y\in K_i,\ i=1,\dots,N.
\]
If $N=1$ then Problem 2 becomes the following generalized equilibrium problem [16,38]: Find $x^*\in K$ such that
\[
f(x^*,y)+\langle A(x^*),y-x^*\rangle\ge0,\quad\forall y\in K, \tag{32}
\]
where $f:K\times K\to\mathbb{R}$ is a bifunction and $A:K\to H$ is an operator. Let us denote the solution set of (32) by $GEP(f,A)$. Some methods for CSGEPs can be found in [23,33–35]. Almost all existing methods require a strict assumption on the strong (or inverse-strong) monotonicity of $A_i$. In this section, we assume only that $A_i$ is monotone and Lipschitz continuous.

We recall that a bifunction $f:K\times K\to\mathbb{R}$ is called
(i) monotone if $f(x,y)+f(y,x)\le0$ for all $x,y\in K$;
(ii) $n$-cyclically monotone (see [2]) if for each cycle $x_1,x_2,\dots,x_n,x_{n+1}=x_1\in K$,
\[
\sum_{i=1}^nf(x_i,x_{i+1})\le0. \tag{33}
\]
An example of a bifunction $f:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ satisfying the $n$-cyclic monotonicity is $f(x,y)=x(y-x)$. Some other cyclically monotone bifunctions can be found in [6].
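To see the $n$-cyclic monotonicity of the model bifunction $f(x,y)=x(y-x)$ concretely, note that for any cycle $x_1,\dots,x_n,x_{n+1}=x_1$ one has $\sum_{i=1}^n x_i(x_{i+1}-x_i)=\sum_i x_ix_{i+1}-\sum_i x_i^2\le0$, since $x_ix_{i+1}\le\tfrac12(x_i^2+x_{i+1}^2)$ summed over the cycle gives $\sum_i x_ix_{i+1}\le\sum_i x_i^2$. The short check below (illustrative only, not from the paper) verifies (33) numerically on random cycles.

```python
import numpy as np

f = lambda x, y: x * (y - x)          # model bifunction from the text

rng = np.random.default_rng(1)
for _ in range(1000):
    n = int(rng.integers(2, 8))
    cycle = rng.normal(size=n) * 5.0
    closed = np.append(cycle, cycle[0])               # x_{n+1} = x_1
    total = sum(f(closed[i], closed[i + 1]) for i in range(n))
    assert total <= 1e-9                              # cyclic monotonicity (33)
```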
For solving Problem 2, we assume that the operators $A_i$ satisfy the conditions (G1)–(G2) and the bifunctions $f_i$ satisfy the following conditions:

(A1) $f_i(x,x)=0$ for all $x\in K_i$;
(A2) $f_i$ is 3-cyclically monotone;
(A3) for all $x,y,z\in K_i$, $\limsup_{t\to0^+}f_i(tz+(1-t)x,y)\le f_i(x,y)$;
(A4) for each $x\in K_i$, the function $f_i(x,\cdot)$ is convex and lower semicontinuous.

Note that Assumptions (A1) and (A2) imply that $f_i$ is monotone on $K_i$. Indeed, from (A2) we have
\[
f_i(x,y)+f_i(y,z)+f_i(z,x)\le0,\quad\forall x,y,z\in K_i.
\]
In particular, substituting $z=x$ into the last inequality and using assumption (A1), one has $f_i(x,y)+f_i(y,x)\le0$ for all $x,y\in K_i$. Thus $f_i$ is monotone on $K_i$.

The following results concern a bifunction $f:C\times C\to\mathbb{R}$.

Lemma 4.5 [16] Let $C$ be a nonempty closed convex subset of a Hilbert space $H$, let $f$ be a bifunction from $C\times C$ to $\mathbb{R}$ satisfying the conditions (A1)–(A4), and let $r>0$, $x\in H$. Then there exists $z\in C$ such that
\[
f(z,y)+\frac1r\langle y-z,z-x\rangle\ge0,\quad\forall y\in C.
\]

Lemma 4.6 [16] Let $C$ be a nonempty closed convex subset of a Hilbert space $H$ and let $f$ be a bifunction from $C\times C$ to $\mathbb{R}$ satisfying the conditions (A1)–(A4). For all $r>0$ and $x\in H$, define the mapping
\[
T_r^fx=\Bigl\{z\in C:f(z,y)+\frac1r\langle y-z,z-x\rangle\ge0,\ \forall y\in C\Bigr\}.
\]
Then the following hold:
(C1) $T_r^f$ is single-valued;
(C2) $T_r^f$ is firmly nonexpansive, i.e., for all $x,y\in H$, $\|T_r^fx-T_r^fy\|^2\le\langle T_r^fx-T_r^fy,x-y\rangle$;
(C3) $Fix(T_r^f)=EP(f,C)$, where $Fix(T_r^f)$ denotes the fixed point set of $T_r^f$;
(C4) $EP(f,C)$ is closed and convex.
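Lemma 4.6 defines the resolvent $T_r^f$ only implicitly. As a concrete special case (our own hypothetical example, not from the paper), take $C=\mathbb{R}^m$ and the affine bifunction $f(x,y)=\langle Mx+q,\,y-x\rangle$ with a symmetric positive semidefinite matrix $M$, for which (A1)–(A4) hold. Then $z=T_r^fx$ is characterized by $\langle Mz+q,\,y-z\rangle+\tfrac1r\langle y-z,\,z-x\rangle\ge0$ for all $y\in\mathbb{R}^m$, which forces $Mz+q+\tfrac1r(z-x)=0$, i.e., $(I+rM)z=x-rq$. A minimal sketch under these assumptions:

```python
import numpy as np

def resolvent_affine(x, M, q, r):
    """T_r^f x for f(u, v) = <Mu + q, v - u> on C = R^m (M symmetric PSD):
    solves (I + r M) z = x - r q."""
    m = len(x)
    return np.linalg.solve(np.eye(m) + r * M, x - r * q)

# sanity check of the defining inequality f(z, y) + (1/r)<y - z, z - x> >= 0
rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
M = B.T @ B                      # symmetric positive semidefinite
q = rng.normal(size=4)
x = rng.normal(size=4)
r = 0.5
z = resolvent_affine(x, M, q, r)
for _ in range(100):
    y = rng.normal(size=4)
    val = np.dot(M @ z + q, y - z) + np.dot(y - z, z - x) / r
    assert val >= -1e-8          # should be (numerically) zero for every y
```

For such affine bifunctions this closed form would replace the abstract resolvent step $y_{n+1}^i=T_r^{f_i}(x_n-rA_i(y_n^i))$ of the algorithm below; for a general $f_i$, a convex subproblem has to be solved at every iteration.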
Next, we extend Algorithm 1.2 to the following algorithm for solving CSGEPs.

Algorithm 4.4 (An extension of the modified hybrid projection method for CSGEPs)
Initialization: Choose $x_0=x_1\in H$, $y_0^i\in K_i$, $y_1^i=T_r^{f_i}(x_0-rA_i(y_0^i))$. Set $C_0=Q_0=H$. The parameters $r,k$ satisfy the conditions
\[
0<r<\frac{1}{2L},\qquad k>\frac{1}{1-2rL}.
\]
Iterative step: for $n\ge1$, compute
\[
\begin{cases}
y_{n+1}^i=T_r^{f_i}(x_n-rA_i(y_n^i)),\quad i=1,\dots,N,\\[2pt]
\epsilon_n^i=k\|x_n-x_{n-1}\|^2+rL\|y_n^i-y_{n-1}^i\|^2-\bigl(1-\tfrac1k-rL\bigr)\|y_{n+1}^i-y_n^i\|^2,\\[2pt]
C_n^i=\{v\in H:\|y_{n+1}^i-v\|^2\le\|x_n-v\|^2+\epsilon_n^i\},\\[2pt]
Q_n=\{v\in H:\langle v-x_n,x_0-x_n\rangle\le0\},\\[2pt]
x_{n+1}=P_{C_n\cap Q_n}(x_0),
\end{cases}
\]
where $C_n=\bigcap_{i=1}^NC_n^i$.

We have the following result.

Theorem 4.3 Let $K_i$, $i=1,\dots,N$, be closed convex subsets of a real Hilbert space $H$ such that $K=\bigcap_{i=1}^NK_i\ne\emptyset$. Suppose that $\{f_i\}_{i=1}^N:K_i\times K_i\to\mathbb{R}$ are bifunctions satisfying the conditions (A1)–(A4) and $\{A_i\}_{i=1}^N:K_i\to H$ is a finite family of operators satisfying the conditions (G1)–(G2). Suppose in addition that the solution set $F=\bigcap_{i=1}^NGEP(f_i,A_i)$ of Problem 2 is nonempty. Then the sequences $\{x_n\}$, $\{y_n^i\}$ generated by Algorithm 4.4 converge strongly to $P_F(x_0)$.

Proof Claim 1. The following estimate holds:
\[
\|y_{n+1}^i-x^*\|^2\le\|x_n-x^*\|^2+\epsilon_n^i,\quad\forall x^*\in F,\ \forall i=1,\dots,N. \tag{34}
\]
From the definitions of $y_{n+1}^i$ and $T_r^{f_i}$ we have
\[
f_i(y_{n+1}^i,y)+\frac1r\langle y-y_{n+1}^i,y_{n+1}^i-(x_n-rA_i(y_n^i))\rangle\ge0,\quad\forall y\in K_i.
\]
Thus,
\[
2rf_i(y_{n+1}^i,y)\ge2\langle y-y_{n+1}^i,(x_n-rA_i(y_n^i))-y_{n+1}^i\rangle
=\|y-y_{n+1}^i\|^2+\|(x_n-rA_i(y_n^i))-y_{n+1}^i\|^2-\|(x_n-rA_i(y_n^i))-y\|^2,
\]
which implies that
\[
\begin{aligned}
\|y_{n+1}^i-y\|^2&\le\|(x_n-rA_i(y_n^i))-y\|^2-\|(x_n-rA_i(y_n^i))-y_{n+1}^i\|^2+2rf_i(y_{n+1}^i,y)\\
&=\|x_n-y\|^2-\|x_n-y_{n+1}^i\|^2-2r\langle A_i(y_n^i),y_{n+1}^i-y\rangle+2rf_i(y_{n+1}^i,y).
\end{aligned}
\]
Substituting $y=x^*$ into the last inequality, we obtain
\[
\|y_{n+1}^i-x^*\|^2\le\|x_n-x^*\|^2-\|x_n-y_{n+1}^i\|^2-2r\langle A_i(y_n^i),y_{n+1}^i-x^*\rangle+2rf_i(y_{n+1}^i,x^*). \tag{35}
\]
By arguing similarly to the proofs of (10)–(12), we get
\[
\|x_n-y_{n+1}^i\|^2\ge-k\|x_n-x_{n-1}\|^2+\Bigl(1-\frac1k\Bigr)\|y_n^i-y_{n+1}^i\|^2+2\langle x_{n-1}-y_n^i,y_n^i-y_{n+1}^i\rangle. \tag{36}
\]
Since $x^*\in GEP(f_i,A_i)$ and $y_n^i\in K_i$,
\[
f_i(x^*,y_n^i)+\langle A_i(x^*),y_n^i-x^*\rangle\ge0. \tag{37}
\]
From the monotonicity of $A_i$, we find
\[
\langle A_i(y_n^i)-A_i(x^*),y_n^i-x^*\rangle\ge0. \tag{38}
\]
Adding both sides of (37) and (38), we have $f_i(x^*,y_n^i)+\langle A_i(y_n^i),y_n^i-x^*\rangle\ge0$, which together with the hypothesis $r>0$ implies
\[
2r\langle A_i(y_n^i),y_n^i-x^*\rangle\ge-2rf_i(x^*,y_n^i).
\]
Thus, by the Lipschitz continuity of $A_i$, we get
\[
\begin{aligned}
2r\langle A_i(y_n^i),y_{n+1}^i-x^*\rangle&=2r\langle A_i(y_n^i),y_{n+1}^i-y_n^i\rangle+2r\langle A_i(y_n^i),y_n^i-x^*\rangle\\
&\ge2r\langle A_i(y_n^i)-A_i(y_{n-1}^i),y_{n+1}^i-y_n^i\rangle+2r\langle A_i(y_{n-1}^i),y_{n+1}^i-y_n^i\rangle-2rf_i(x^*,y_n^i)\\
&\ge-2rL\|y_n^i-y_{n-1}^i\|\,\|y_{n+1}^i-y_n^i\|+2r\langle A_i(y_{n-1}^i),y_{n+1}^i-y_n^i\rangle-2rf_i(x^*,y_n^i)\\
&\ge-rL\|y_n^i-y_{n-1}^i\|^2-rL\|y_{n+1}^i-y_n^i\|^2+2r\langle A_i(y_{n-1}^i),y_{n+1}^i-y_n^i\rangle-2rf_i(x^*,y_n^i).
\end{aligned}
\]
This together with (36) implies that
\[
\begin{aligned}
\|x_n-y_{n+1}^i\|^2+2r\langle A_i(y_n^i),y_{n+1}^i-x^*\rangle&\ge-k\|x_n-x_{n-1}\|^2+\Bigl(1-\frac1k-rL\Bigr)\|y_n^i-y_{n+1}^i\|^2-rL\|y_n^i-y_{n-1}^i\|^2\\
&\quad+2\langle x_{n-1}-rA_i(y_{n-1}^i)-y_n^i,y_n^i-y_{n+1}^i\rangle-2rf_i(x^*,y_n^i)\\
&=-\epsilon_n^i+2\langle x_{n-1}-rA_i(y_{n-1}^i)-y_n^i,y_n^i-y_{n+1}^i\rangle-2rf_i(x^*,y_n^i).
\end{aligned}\tag{39}
\]
From the definitions of $y_n^i$ and $T_r^{f_i}$ we have
\[
f_i(y_n^i,y)+\frac1r\langle y-y_n^i,y_n^i-(x_{n-1}-rA_i(y_{n-1}^i))\rangle\ge0,\quad\forall y\in K_i.
\]
Substituting $y=y_{n+1}^i\in K_i$ into the last inequality, we get
\[
2\langle y_n^i-y_{n+1}^i,(x_{n-1}-rA_i(y_{n-1}^i))-y_n^i\rangle\ge-2rf_i(y_n^i,y_{n+1}^i).
\]
Thus, it follows from (39) that
\[
\|x_n-y_{n+1}^i\|^2+2r\langle A_i(y_n^i),y_{n+1}^i-x^*\rangle\ge-\epsilon_n^i-2rf_i(y_n^i,y_{n+1}^i)-2rf_i(x^*,y_n^i). \tag{40}
\]
From (35) and (40), we obtain
\[
\|y_{n+1}^i-x^*\|^2\le\|x_n-x^*\|^2+\epsilon_n^i+2r\bigl[f_i(y_{n+1}^i,x^*)+f_i(x^*,y_n^i)+f_i(y_n^i,y_{n+1}^i)\bigr],
\]
which implies that $\|y_{n+1}^i-x^*\|^2\le\|x_n-x^*\|^2+\epsilon_n^i$, due to the 3-cyclic monotonicity of $f_i$.

Claim 2. The sets $F$, $C_n$, $Q_n$ are closed and convex, $F\subset C_n\cap Q_n$ for all $n\ge0$, and
\[
\lim_{n\to\infty}\|x_{n+1}-x_n\|=\lim_{n\to\infty}\|y_n^i-x_n\|=\lim_{n\to\infty}\|y_{n+1}^i-y_n^i\|=0,\quad\forall i=1,\dots,N. \tag{41}
\]
Claim 2 is proved similarly to the proof of Claim 2 in Theorem 3.1.

Claim 3. If $p$ is any weak cluster point of $\{x_n\}$, then $p\in F$.
Without loss of generality, we assume that $x_n\rightharpoonup p$. Since $\|y_n^i-x_n\|\to0$, $y_n^i\rightharpoonup p$. From the fact $\{y_n^i\}\subset K_i$ and the weak closedness of the convex set $K_i$, we conclude that $p\in K_i$. It follows from (41) and the triangle inequality that
\[
\lim_{n\to\infty}\|y_{n+1}^i-x_n\|=\lim_{n\to\infty}\|y_{n+1}^i-y_n^i\|=0, \tag{42}
\]
which together with the $L$-Lipschitz continuity of $A_i$ and the hypothesis $r>0$ implies that
\[
\lim_{n\to\infty}\|A_iy_{n+1}^i-A_iy_n^i\|=0\quad\text{and}\quad\lim_{n\to\infty}\frac{\|y_{n+1}^i-x_n\|}{r}=0. \tag{43}
\]
From the definitions of $y_{n+1}^i$ and $T_r^{f_i}$ we have
\[
f_i(y_{n+1}^i,y)+\langle A_iy_n^i,y-y_{n+1}^i\rangle+\frac1r\langle y-y_{n+1}^i,y_{n+1}^i-x_n\rangle\ge0,\quad\forall y\in K_i.
\]
Due to the monotonicity of $f_i$ we find
\[
\langle A_iy_n^i,y-y_{n+1}^i\rangle+\frac1r\langle y-y_{n+1}^i,y_{n+1}^i-x_n\rangle\ge-f_i(y_{n+1}^i,y)\ge f_i(y,y_{n+1}^i),\quad\forall y\in K_i. \tag{44}
\]
For each $t\in(0,1]$ and $y\in K_i$, set $y_t=ty+(1-t)p$. It follows from the convexity of $K_i$ that $y_t\in K_i$. Thus, the monotonicity of $A_i$ and relation (44) yield
\[
\begin{aligned}
\langle y_t-y_{n+1}^i,A_iy_t\rangle&=\langle y_t-y_{n+1}^i,A_iy_t-A_iy_{n+1}^i\rangle+\langle y_t-y_{n+1}^i,A_iy_{n+1}^i-A_iy_n^i\rangle+\langle y_t-y_{n+1}^i,A_iy_n^i\rangle\\
&\ge\langle y_t-y_{n+1}^i,A_iy_{n+1}^i-A_iy_n^i\rangle+\langle y_t-y_{n+1}^i,A_iy_n^i\rangle\\
&\ge\langle y_t-y_{n+1}^i,A_iy_{n+1}^i-A_iy_n^i\rangle-\frac1r\langle y_t-y_{n+1}^i,y_{n+1}^i-x_n\rangle+f_i(y_t,y_{n+1}^i).
\end{aligned}
\]
Passing to the limit in the last inequality as $n\to\infty$, and using relation (43) and the hypothesis (A4), we obtain
\[
\langle y_t-p,A_iy_t\rangle\ge f_i(y_t,p). \tag{45}
\]
It follows from the assumptions (A1), (A4), relation (45) and the fact $y_t-p=t(y-p)$ that
\[
\begin{aligned}
0=f_i(y_t,y_t)&=f_i(y_t,ty+(1-t)p)\le tf_i(y_t,y)+(1-t)f_i(y_t,p)\\
&\le tf_i(y_t,y)+(1-t)\langle y_t-p,A_iy_t\rangle=tf_i(y_t,y)+(1-t)t\langle y-p,A_iy_t\rangle.
\end{aligned}
\]
Dividing both sides of the last inequality by $t>0$, we obtain
\[
f_i(y_t,y)+(1-t)\langle y-p,A_iy_t\rangle\ge0,\quad\forall y\in K_i.
\]
Passing to the limit in the last inequality as $t\to0^+$ and using the assumption (A3), we get
\[
f_i(p,y)+\langle y-p,A_ip\rangle\ge0,\quad\forall y\in K_i,\ i=1,\dots,N.
\]
Thus, $p\in F=\bigcap_{i=1}^NGEP(f_i,A_i)$. The proof of the strong convergence of the sequences $\{x_n\}$, $\{y_n^i\}$ to $x^\dagger=P_F(x_0)$ is similar to that of Claim 4 in Theorem 3.1. Theorem 4.3 is proved.

Remark 4.1 Algorithm 3.3 can be extended to CSGEPs in the same manner as Algorithm 4.4.

5 Numerical experiments

Example 1 Consider the operators $A_i(x)=M_ix+q_i$ (see [19]), where $M_i=B_iB_i^T+C_i+D_i$, $i=1,\dots,N$, $B_i$ is an $m\times m$ matrix, $C_i$ is an $m\times m$ skew-symmetric matrix, $D_i$ is an $m\times m$ diagonal matrix whose diagonal entries are nonnegative (so $M_i$ is positive semidefinite), and $q_i$ is a vector in $\mathbb{R}^m$. The feasible set $K_i=K\subset\mathbb{R}^m$ is the closed convex set defined by $K=\{x\in\mathbb{R}^m:Ax\le b\}$, where $A$ is an $l\times m$ matrix and $b$ is a nonnegative vector. It is clear that $A_i$ is monotone and Lipschitz continuous with the constant $L=\max\{\|M_i\|:i=1,\dots,N\}$. The initial data is listed in Table 1.

We see that $K$ is a polyhedral convex set. The sets $C_n^i$, $Q_n$ in Algorithms 1.1 and 1.2 are either half-spaces or the whole space $\mathbb{R}^m$; thus $C_n\cap Q_n$ is also a polyhedral convex set which, in general, is the intersection of $N+1$ half-spaces. The set $C_n$ in Algorithm 3.3 is the intersection of $nN$ half-spaces. All projections onto half-spaces are explicitly defined. All projections onto polyhedral convex sets are effectively performed by Haugazeau's method [8, Corollary 29.8] with error TOL. In this example, we choose $q_i=0$; thus, the solution set is $F=\{0\}$. We compare the execution time (CPU in seconds) and the number of iterations (Iter.) for Algorithms 1.1, 1.2 and 3.3. The numerical results are shown in Table 2.

Table 1  The initial data

  The starting points $x_0,x_1$:        $x_0=x_1=(1,1,\dots,1)^T\in\mathbb{R}^m$
  The starting points $y_0^i,y_1^i$:    $y_0^i=0$, $y_1^i=P_{K_i}(x_0-\lambda A_i(y_0^i))$
  The tolerance TOL:                    $\|x_n-x^\dagger\|\le\mathrm{TOL}=0.001$
  The number of subproblems $N$:        10
  The parameters $\lambda,k$:           $\lambda=\frac{1}{4L}$, $k=3$
  The feasible sets $K_i$:              $K_i=K=\{x\in\mathbb{R}^m:Ax\le b\}$
  The size $l\times m$ of matrix $A$:   $l=20$ and $m=2,5,10,20$
  The matrices $A,b,B_i,C_i,D_i$:       generated randomly

Table 2  Numerical results for Algorithms 3.3, 1.2 and 1.1

  m    Iter. (Alg. 3.3 / Alg. 1.2 / Alg. 1.1)    CPU in s (Alg. 3.3 / Alg. 1.2 / Alg. 1.1)
  2        112 / 510 / 476                           7.60 / 11.50 / 12.15
  5        185 / 687 / 611                          11.15 / 16.34 / 20.43
  10       217 / 721 / 675                          16.53 / 19.21 / 33.27
  20       204 / 845 / 754                          17.01 / 20.56 / 38.97
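The test data of Example 1 can be generated along the following lines (a sketch; the exact random distributions used in the paper are not specified, so uniform entries are an assumption of ours).

```python
import numpy as np

def make_example1_data(N=10, m=10, l=20, seed=0):
    """Random data for Example 1: A_i(x) = M_i x + q_i with
    M_i = B_i B_i^T + C_i + D_i and K = {x : A x <= b}."""
    rng = np.random.default_rng(seed)
    ops, norms = [], []
    for _ in range(N):
        B = rng.uniform(-1, 1, size=(m, m))
        S = rng.uniform(-1, 1, size=(m, m))
        C = S - S.T                              # skew-symmetric matrix
        D = np.diag(rng.uniform(0, 1, size=m))   # nonnegative diagonal matrix
        M = B @ B.T + C + D                      # positive semidefinite, so A_i is monotone
        q = np.zeros(m)                          # q_i = 0, hence F = {0}
        ops.append(lambda x, M=M, q=q: M @ x + q)
        norms.append(np.linalg.norm(M, 2))
    L = max(norms)                               # common Lipschitz constant
    A = rng.uniform(-1, 1, size=(l, m))
    b = rng.uniform(0, 1, size=l)                # nonnegative, so 0 lies in K
    lam, k = 1 / (4 * L), 3                      # parameter choices from Table 1
    return ops, L, A, b, lam, k
```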
Example 2 Let $H$ be the function space $L^2[0,1]$ and let $K_i$ be the unit ball $B[0,1]\subset H$. In this example, we consider the operators $A_i:K_i\to H$ defined by
\[
A_i(x)(t)=\int_0^1\bigl[x(t)-F_i(t,s)f_i(x(s))\bigr]\,ds+g_i(t),\quad x\in K,\ t\in[0,1],\ i=1,2, \tag{46}
\]
where
\[
F_1(t,s)=\frac{2tse^{t+s}}{e\sqrt{e^2-1}},\quad f_1(x)=\cos x,\quad g_1(t)=\frac{2te^t}{e\sqrt{e^2-1}},
\]
\[
F_2(t,s)=\frac{\sqrt{21}}{7}(t+s),\quad f_2(x)=\exp(-x^2),\quad g_2(t)=\frac{\sqrt{21}}{7}t+\frac{\sqrt{21}}{14}.
\]
Setting $S_i(x)(t)=\int_0^1F_i(t,s)f_i(x(s))\,ds-g_i(t)$, we obtain $A_i(x)=x-S_i(x)$. We see that $S_i$ is Fréchet differentiable and $\|S_i'(x)h\|\le\|h\|$ for all $x\in B[0,1]$, $h\in L^2[0,1]$; see [39, p. 168]. Thus, a straightforward computation implies that $A_i$ is monotone and $2$-Lipschitz continuous. Moreover, the solution set of the CSVIP for the operators $A_i$ on $B[0,1]$ is $F=\{0\}$. We choose $\lambda=\frac{1}{4L}$, $k=3$ and a fixed starting point $x_0$. The stopping criterion is $\|x_n-x^\dagger\|\le\mathrm{TOL}=10^{-p}$, $p=3,4$.

From Algorithm 1.2 and $K_i=B[0,1]$, we obtain
\[
y_{n+1}^i=\begin{cases}x_n-\lambda A_i(y_n^i)&\text{if }\|x_n-\lambda A_i(y_n^i)\|\le1,\\[4pt]
\dfrac{x_n-\lambda A_i(y_n^i)}{\|x_n-\lambda A_i(y_n^i)\|}&\text{otherwise}.\end{cases} \tag{47}
\]
Also, from Algorithm 1.2 we need to find the projection $x_{n+1}=P_{C_n^1\cap C_n^2\cap Q_n}(x_0)$. Note that the sets $C_n^1$, $C_n^2$, $Q_n$ are half-spaces; thus the metric projections onto them are explicitly defined. To obtain the next iterate $x_{n+1}$ we use Haugazeau's method [8, Corollary 29.8] with error TOL. Similarly, we can find the approximations $y_n^i$, $z_n^i$, $x_{n+1}$ in Algorithm 1.1 (CGRS's method). It is not easy to implement Algorithm 3.3 in this example because of the complicated structure of the set $C_{n+1}$. Thus, in this case, we compare the execution time (CPU in seconds) and the number of iterative steps (Iter.) for Algorithm 1.1 and Algorithm 1.2 with different given tolerances (TOL). All integrals in (46), (47) and elsewhere are computed by the trapezoidal rule with the stepsize $\tau=0.001$ (Table 3). All programs are written in Matlab 7.0 and performed on a PC Desktop Intel(R) Core(TM) i5-3210M CPU @ 2.50 GHz, RAM 2.00 GB.

Table 3  Numerical results for Algorithms 1.2 and 1.1

  TOL        Iter. (Alg. 1.2 / Alg. 1.1)    CPU in s (Alg. 1.2 / Alg. 1.1)
  $10^{-3}$       87 / 85                        83.11 / 138.69
  $10^{-4}$      141 / 137                      107.23 / 175.82

From the numerical results above, we see that our proposed algorithms are competitive with the extragradient method. In particular, our modified hybrid projection method is significantly less time consuming than the extragradient one when the number of subproblems is large and the operators $A_i$ have complex structures.

Acknowledgments The authors would like to thank the Associate Editor and the anonymous referees for their valuable comments and suggestions, which helped us very much in improving the original version of this paper. The work of the second and third authors is supported by VIASM.

References

1. Alber, Y., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, Dordrecht (2006)
2. Alizadeh, M.H., Bianchi, M., Hadjisavvas, N., Pini, R.: On cyclic and n-cyclic monotonicity of bifunctions. J. Glob. Optim. 60, 599–616 (2014)
3. Anh, P.K., Buong, N., Hieu, D.V.: Parallel methods for regularizing systems of equations involving accretive operators. Appl. Anal. 93, 2136–2157 (2014)
4. Anh, P.K., Hieu, D.V.: Parallel and sequential hybrid methods for a finite family of asymptotically quasi φ-nonexpansive mappings. J. Appl. Math. Comput. 48, 241–263 (2015)
5. Anh, P.K., Hieu, D.V.: Parallel hybrid methods for variational inequalities, equilibrium problems and common fixed point problems. Vietnam J. Math. 44(2), 351–374 (2016)
6. Bartz, S., Bauschke, H.H., Borwein, J.M., Reich, S., Wang, X.: Fitzpatrick functions, cyclic monotonicity and Rockafellar's antiderivative. Nonlinear Anal. 66, 1198–1223 (2007)
7. Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)
8. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
9. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
10. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
11. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
12. Censor, Y., Gibali, A., Reich, S., Sabach, S.: Common solutions to variational inequalities. Set-Valued Var. Anal. 20, 229–247 (2012)
13. Censor, Y., Gibali, A., Reich, S.: A von Neumann alternating method for finding common solutions to variational inequalities. Nonlinear Anal. 75, 4596–4603 (2012)
14. Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 26(4–5), 827–845 (2011)
15. Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 61, 1119–1132 (2012)
16. Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
17. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)
18. Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York and Basel (1984)
19. Harker, P.T., Pang, J.-S.: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265–284 (1990)
20. Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)
21. Hieu, D.V.: A parallel hybrid method for equilibrium problems, variational inequalities and nonexpansive mappings in Hilbert space. J. Korean Math. Soc. 52, 373–388 (2015)
22. Hieu, D.V., Muu, L.D., Anh, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms (2016). doi:10.1007/s11075-015-0092-5
23. Hieu, D.V.: Parallel hybrid methods for generalized equilibrium problems and asymptotically strictly pseudocontractive mappings. J. Appl. Math. Comput. (2016). doi:10.1007/s12190-015-0980-9
24. Hieu, D.V.: Parallel extragradient-proximal methods for split equilibrium problems. Math. Model. Anal. (2016). doi:10.3846/13926292.2016.1183527
25. Kassay, G., Reich, S., Sabach, S.: Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 21, 1319–1344 (2011)
26. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
27. Konnov, I.V.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2000)
28. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 12, 747–756 (1976)
29. Lions, J.L.: Optimal Control of Systems Governed by Partial Differential Equations. Springer, New York (1971)
30. Malitsky, Y.V.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 25, 502–520 (2015)
31. Malitsky, Y.V., Semenov, V.V.: A hybrid method without extrapolation step for solving variational inequality problems. J. Glob. Optim. 61, 193–202 (2015)
32. Nadezhkina, N., Takahashi, W.: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 16, 1230–1241 (2006)
33. Peng, J.W., Yao, J.C.: Some new iterative algorithms for generalized mixed equilibrium problems with strict pseudocontractions and monotone mappings. Taiwan. J. Math. 13, 1537–1582 (2009)
34. Peng, J.W., Yao, J.C.: Two extragradient methods for generalized mixed equilibrium problems, nonexpansive mappings and monotone mappings. Comput. Math. Appl. 58, 1287–1301 (2009)
35. Petrot, N., Wattanawitoon, K., Kumam, P.: A hybrid projection method for generalized mixed equilibrium problems and fixed point problems in Banach spaces. Nonlinear Anal. Hybrid Syst. 4, 631–643 (2010)
36. Rockafellar, R.T.: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75–88 (1970)
37. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
38. Takahashi, S., Takahashi, W.: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 69, 1025–1033 (2008)
39. Vilenkin, N.Y., Gorin, E.A., Kostyuchenko, A.G., Krasnosel'skii, M.A., Krein, S.G., Maslov, V.P., Mityagin, B.S., Petunin, Y., et al.: Functional Analysis. Wolters-Noordhoff, Groningen (1972)