A splitting algorithm for system of composite monotone inclusions


Dinh Dũng¹ and Bằng Công Vũ²

Dedicated to the 65th birthday of Professor Nguyen Khoa Son

¹ Information Technology Institute, Vietnam National University, 144 Xuan Thuy, Cau Giay, Hanoi, Vietnam. dinhzung@gmail.com
² Department of Mathematics, Vietnam National University, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam. bangvc@vnu.edu.vn

Abstract. We propose a splitting algorithm for solving a system of composite monotone inclusions formulated in terms of the extended set of solutions in real Hilbert spaces. The resulting algorithm is an extension of the algorithm in [4]. The weak convergence of the proposed algorithm is proved, and applications to minimization problems are demonstrated.

Keywords: coupled system, monotone inclusion, monotone operator, operator splitting, Lipschitzian operator, forward-backward-forward algorithm, composite operator, duality, primal-dual algorithm

Mathematics Subject Classifications (2010): 47H05, 49M29, 49M27, 90C25

1 Introduction

Let $\mathcal{H}$ be a real Hilbert space and let $A\colon\mathcal{H}\to 2^{\mathcal{H}}$ be a set-valued operator. The domain and the graph of $A$ are respectively defined by $\operatorname{dom} A=\{x\in\mathcal{H}\mid Ax\neq\varnothing\}$ and $\operatorname{gra} A=\{(x,u)\in\mathcal{H}\times\mathcal{H}\mid u\in Ax\}$. We denote by $\operatorname{zer} A=\{x\in\mathcal{H}\mid 0\in Ax\}$ the set of zeros of $A$ and by $\operatorname{ran} A=\{u\in\mathcal{H}\mid(\exists x\in\mathcal{H})\ u\in Ax\}$ the range of $A$. The inverse of $A$ is $A^{-1}\colon\mathcal{H}\to 2^{\mathcal{H}}\colon u\mapsto\{x\in\mathcal{H}\mid u\in Ax\}$. Moreover, $A$ is monotone if
\[
(\forall(x,y)\in\mathcal{H}\times\mathcal{H})(\forall(u,v)\in Ax\times Ay)\quad \langle x-y\mid u-v\rangle\ge 0, \tag{1.1}
\]
and maximally monotone if it is monotone and there exists no monotone operator $B\colon\mathcal{H}\to 2^{\mathcal{H}}$ such that $\operatorname{gra} B$ properly contains $\operatorname{gra} A$.

A basic problem in monotone operator theory is to find a zero of the sum of two maximally monotone operators $A$ and $B$ acting on a real Hilbert space $\mathcal{H}$, that is,
\[
\text{find } x\in\mathcal{H}\ \text{ such that }\ 0\in Ax+Bx. \tag{1.2}
\]
Suppose that problem (1.2) has at least one solution $x$. Then there exists $v\in Bx$ such that $-v\in Ax$. The set of all such pairs $(x,v)$ defines the extended set of solutions to problem (1.2) [20],
\[
E(A,B)=\{(x,v)\mid v\in Bx,\ -v\in Ax\}. \tag{1.3}
\]
Conversely, if $E(A,B)$ is non-empty and $(x,v)\in E(A,B)$, then the set of solutions to (1.2) is also non-empty, since $x$ solves (1.2) and $v$ solves its dual problem [2], i.e.,
\[
0\in B^{-1}v-A^{-1}(-v). \tag{1.4}
\]
It is remarkable that three fundamental methods, namely the Douglas-Rachford splitting method, the forward-backward splitting method and the forward-backward-forward splitting method, converge weakly to points in $E(A,B)$ [22, Theorem 1], [14], [23].

We next consider a more general problem in which one of the operators has a linearly composite structure. In this case, problem (1.2) becomes [11, Eq. (1.2)]
\[
0\in Ax+(L^*\circ B\circ L)x, \tag{1.5}
\]
where $B$ acts on a real Hilbert space $\mathcal{G}$ and $L$ is a bounded linear operator from $\mathcal{H}$ to $\mathcal{G}$. It is shown in [11, Proposition 2.8(iii)(iv)] that whenever the set of solutions to (1.5) is non-empty, the extended set of solutions
\[
E(A,B,L)=\{(x,v)\mid -L^*v\in Ax,\ Lx\in B^{-1}v\} \tag{1.6}
\]
is non-empty and, for every $(x,v)\in E(A,B,L)$, $v$ is a solution to the dual problem of (1.5) [11, Eq. (1.3)],
\[
0\in B^{-1}v-L\circ A^{-1}\circ(-L^*)v. \tag{1.7}
\]
The algorithm proposed in [11, Eq. (3.1)] for solving the pair (1.5) and (1.7) converges weakly to a point in $E(A,B,L)$ [11, Theorem 3.1].
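To make the forward-backward-forward scheme just mentioned concrete, here is a minimal numerical sketch (our illustration, not from the paper) of Tseng's iteration for $0\in Ax+Bx$ on a toy LASSO instance, with $A=\partial(\lambda\|\cdot\|_1)$ (resolvent = soft-thresholding) and $B=\nabla\tfrac12\|Mx-b\|^2$, which is monotone and $\|M\|^2$-Lipschitzian; all names below are illustrative.

```python
# Minimal sketch of Tseng's forward-backward-forward iteration for 0 in Ax + Bx.
# Here A = d(lam*||.||_1) and B = grad of 0.5*||Mx - b||^2 (a LASSO instance).
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1

def soft(x, t):                      # J_{gamma A}: soft-thresholding by gamma*lam
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def B(x):                            # single-valued Lipschitzian monotone part
    return M.T @ (M @ x - b)

gamma = 0.9 / np.linalg.norm(M, 2) ** 2   # step size in (0, 1/Lipschitz constant)
x = np.zeros(100)
for n in range(2000):
    y = x - gamma * B(x)             # forward step
    p = soft(y, gamma * lam)         # backward (resolvent) step
    q = p - gamma * B(p)             # second forward step
    x = x - y + q                    # Tseng update: p + gamma*(Bx - Bp)

obj = 0.5 * np.linalg.norm(M @ x - b) ** 2 + lam * np.abs(x).sum()
print("objective:", obj)
```

The update $x_{n+1}=x_n-y_n+q_n=p_n+\gamma(Bx_n-Bp_n)$ is exactly the error-free backbone of the algorithm proposed in Section 2.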
Let us now consider monotone inclusions involving parallel-sum monotone operators. This type of inclusion was first introduced in [18, Problem 1.1] and then studied in [24] and [6]. A simple case is
\[
0\in Ax+L^*\circ(B\,\square\,D)\circ Lx+Cx, \tag{1.8}
\]
where $B$ and $D$ act on $\mathcal{G}$, $C$ acts on $\mathcal{H}$, and $\square$ denotes the parallel-sum operation defined by
\[
B\,\square\,D=(B^{-1}+D^{-1})^{-1}. \tag{1.9}
\]
Under the assumption that the set of solutions to (1.8) is non-empty, so is its extended set of solutions, defined by
\[
E(A,B,C,D,L)=\{(x,v)\mid -L^*v\in(A+C)x,\ Lx\in(B^{-1}+D^{-1})v\}. \tag{1.10}
\]
Furthermore, if $(x,v)\in E(A,B,C,D,L)$, then $x$ solves (1.8) and $v$ solves its dual problem, defined by
\[
0\in B^{-1}v-L\circ(A+C)^{-1}\circ(-L^*)v+D^{-1}v. \tag{1.11}
\]
Under suitable conditions on the operators, the algorithms in [18], [6] and [24] converge weakly to a point in $E(A,B,C,D,L)$.

We also note the more complex situation in which $B$ and $D$ in (1.8) admit linearly composite structures, introduced first in [4] and then in [7]; in this case (1.8) becomes
\[
0\in Ax+L^*\circ\bigl((M^*\circ B\circ M)\,\square\,(N^*\circ D\circ N)\bigr)\circ Lx+Cx, \tag{1.12}
\]
where $M$ and $N$ are bounded linear operators from $\mathcal{G}$ to real Hilbert spaces $\mathcal{Y}$ and $\mathcal{X}$, respectively, and $B$ and $D$ act on $\mathcal{Y}$ and $\mathcal{X}$, respectively. Under suitable conditions on the operators, simple calculations show that the algorithms proposed in [4] and [7] converge weakly to points in the extended set of solutions
\[
E(A,B,C,D,L,M,N)=\bigl\{(x,v)\mid -L^*v\in(A+C)x,\ Lx\in\bigl((M^*\circ B\circ M)^{-1}+(N^*\circ D\circ N)^{-1}\bigr)v\bigr\}. \tag{1.13}
\]
Furthermore, for each $(x,v)\in E(A,B,C,D,L,M,N)$, $v$ solves the dual problem of (1.12),
\[
0\in(M^*\circ B\circ M)^{-1}v-L\circ(A+C)^{-1}\circ(-L^*)v+(N^*\circ D\circ N)^{-1}v. \tag{1.14}
\]
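As a quick illustration of the parallel sum (1.9) (a standard observation added here, not taken from the original text), for positive multiples of the identity it reduces to the familiar "resistors in parallel" formula:
\[
B=\beta\,\mathrm{Id},\quad D=\delta\,\mathrm{Id}\quad(\beta,\delta>0)
\quad\Longrightarrow\quad
B\,\square\,D=(B^{-1}+D^{-1})^{-1}=\Bigl(\tfrac1\beta+\tfrac1\delta\Bigr)^{-1}\mathrm{Id}=\frac{\beta\delta}{\beta+\delta}\,\mathrm{Id}.
\]
This mirrors the infimal convolution of functions that reappears in Section 3.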
To sum up, the above analysis shows that each primal problem formulation mentioned so far has a dual problem admitting an explicit formulation, and the corresponding algorithm converges weakly to a point in the extended set of solutions. However, there is a class of inclusions for which such dual problems are no longer available, for instance when $A$ is univariate and $C$ is multivariate, as in [1, Problem 1.1]. It is therefore necessary to find a new way around this limitation. Observe that a problem posed in the form of (1.13) recovers both the primal and the dual problem; hence it is more convenient to formulate the problem in the form (1.13). This approach was first used in [25]. In this paper we extend it to the following problem, which unifies several recent primal-dual frameworks in the literature.

Problem 1.1 Let $m$ and $s$ be strictly positive integers. For every $i\in\{1,\ldots,m\}$, let $(\mathcal{H}_i,\langle\cdot\mid\cdot\rangle)$ be a real Hilbert space, let $z_i\in\mathcal{H}_i$, let $A_i\colon\mathcal{H}_i\to 2^{\mathcal{H}_i}$ be maximally monotone, and let $C_i\colon\mathcal{H}_1\times\cdots\times\mathcal{H}_m\to\mathcal{H}_i$ be such that, for some $\nu_0\in[0,+\infty[$ and for all $(x_i)_{1\le i\le m}$ and $(y_i)_{1\le i\le m}$ in $\mathcal{H}_1\times\cdots\times\mathcal{H}_m$,
\[
\begin{cases}
\displaystyle\sum_{i=1}^m\|C_i(x_1,\ldots,x_m)-C_i(y_1,\ldots,y_m)\|^2\le\nu_0^2\sum_{i=1}^m\|x_i-y_i\|^2,\\[1ex]
\displaystyle\sum_{i=1}^m\langle C_i(x_1,\ldots,x_m)-C_i(y_1,\ldots,y_m)\mid x_i-y_i\rangle\ge 0.
\end{cases} \tag{1.15}
\]
For every $k\in\{1,\ldots,s\}$, let $(\mathcal{G}_k,\langle\cdot\mid\cdot\rangle)$, $(\mathcal{Y}_k,\langle\cdot\mid\cdot\rangle)$ and $(\mathcal{X}_k,\langle\cdot\mid\cdot\rangle)$ be real Hilbert spaces, let $r_k\in\mathcal{G}_k$, let $B_k\colon\mathcal{Y}_k\to 2^{\mathcal{Y}_k}$ and $D_k\colon\mathcal{X}_k\to 2^{\mathcal{X}_k}$ be maximally monotone, let $M_k\colon\mathcal{G}_k\to\mathcal{Y}_k$ and $N_k\colon\mathcal{G}_k\to\mathcal{X}_k$ be bounded linear operators, and, for every $i\in\{1,\ldots,m\}$, let $L_{k,i}\colon\mathcal{H}_i\to\mathcal{G}_k$ be a bounded linear operator. The problem is to find $x_1\in\mathcal{H}_1,\ldots,x_m\in\mathcal{H}_m$ and $v_1\in\mathcal{G}_1,\ldots,v_s\in\mathcal{G}_s$ such that
\[
\begin{cases}
z_1-\displaystyle\sum_{k=1}^s L_{k,1}^*v_k\in A_1x_1+C_1(x_1,\ldots,x_m)\\
\quad\vdots\\
z_m-\displaystyle\sum_{k=1}^s L_{k,m}^*v_k\in A_mx_m+C_m(x_1,\ldots,x_m)\\
\displaystyle\sum_{i=1}^m L_{1,i}x_i-r_1\in(M_1^*\circ B_1\circ M_1)^{-1}v_1+(N_1^*\circ D_1\circ N_1)^{-1}v_1\\
\quad\vdots\\
\displaystyle\sum_{i=1}^m L_{s,i}x_i-r_s\in(M_s^*\circ B_s\circ M_s)^{-1}v_s+(N_s^*\circ D_s\circ N_s)^{-1}v_s.
\end{cases} \tag{1.16}
\]
We denote by $\Omega$ the set of solutions to (1.16).

Here are some connections to existing primal-dual problems in the literature.

(i) In Problem 1.1, set $m=1$ and $(\forall k\in\{1,\ldots,s\})\ L_{k,1}=\mathrm{Id}$. Then, by removing $v_1,\ldots,v_s$ from (1.16), we obtain the primal inclusion in [4, Eq. (1.7)]; by removing $x_1$ from (1.16), we obtain a dual inclusion which is weaker than the dual inclusion in [4, Eq. (1.8)].

(ii) In Problem 1.1, set $m=1$ and let $C_1$ be cocoercive (i.e., $C_1^{-1}$ is strongly monotone). Then, by removing $v_1,\ldots,v_s$ from (1.16), we obtain the primal inclusion in [7, Eq. (1.1)]; by removing $x_1$ from (1.16), we obtain a dual inclusion which is weaker than the dual inclusion in [7, Eq. (1.2)].

(iii) In Problem 1.1, set $(\forall k\in\{1,\ldots,s\})\ \mathcal{Y}_k=\mathcal{X}_k=\mathcal{G}_k$ and $M_k=N_k=\mathrm{Id}$, and let $(D_k^{-1})_{1\le k\le s}$ be single-valued. Then we obtain an instance of the system of inclusions in [25, Eq. (1.3)] in which the coupling terms are restricted to be cocoercive in the product space. Furthermore, if for every $i\in\{1,\ldots,m\}$ the operator $C_i$ is restricted to $\mathcal{H}_i$ and the operators $(D_k^{-1})_{1\le k\le s}$ are Lipschitzian, then by removing respectively $v_1,\ldots,v_s$ and $x_1,\ldots,x_m$, we obtain respectively the primal inclusion in [16, Eq. (1.2)] and the dual inclusion in [16, Eq. (1.3)].

(iv) In Problem 1.1, set $s=m$, $(\forall i\in\{1,\ldots,m\})\ z_i=0$ and $A_i=0$, and $(\forall k\in\{1,\ldots,s\})\ r_k=0$ and $L_{k,i}=0$ for $k\neq i$. Then we obtain the dual inclusion in [5, Eq. (1.2)], where $(D_k^{-1})_{1\le k\le s}$ are single-valued and Lipschitzian; moreover, by removing the variables $v_1,\ldots,v_s$, we obtain the primal inclusion in [5, Eq. (1.2)].

In the present paper, we develop the splitting technique in [4] and, based on the convergence result for the algorithm proposed in [16], we propose a splitting algorithm for solving Problem 1.1 and prove its convergence in Section 2. We provide some application examples in the last section.

Notations. (See [3].) The scalar products and the norms of all Hilbert spaces used in this paper are denoted by $\langle\cdot\mid\cdot\rangle$ and $\|\cdot\|$, respectively. We denote by $\mathcal{B}(\mathcal{H},\mathcal{G})$ the space of all bounded linear operators from $\mathcal{H}$ to $\mathcal{G}$. The symbols $\rightharpoonup$ and $\to$ denote weak and strong convergence, respectively. The resolvent of $A$ is
\[
J_A=(\mathrm{Id}+A)^{-1}, \tag{1.17}
\]
where $\mathrm{Id}$ denotes the identity operator on $\mathcal{H}$. We say that $A$ is uniformly monotone at $x\in\operatorname{dom}A$ if there exists an increasing function $\phi\colon[0,+\infty[\to[0,+\infty]$ vanishing only at $0$ such that
\[
(\forall u\in Ax)(\forall(y,v)\in\operatorname{gra}A)\quad\langle x-y\mid u-v\rangle\ge\phi(\|x-y\|). \tag{1.18}
\]
The class of all lower semicontinuous convex functions $f\colon\mathcal{H}\to\left]-\infty,+\infty\right]$ such that $\operatorname{dom}f=\{x\in\mathcal{H}\mid f(x)<+\infty\}\neq\varnothing$ is denoted by $\Gamma_0(\mathcal{H})$. Now let $f\in\Gamma_0(\mathcal{H})$. The conjugate of $f$ is the function $f^*\in\Gamma_0(\mathcal{H})$ defined by $f^*\colon u\mapsto\sup_{x\in\mathcal{H}}(\langle x\mid u\rangle-f(x))$, and the subdifferential of $f$ is the maximally monotone operator
\[
\partial f\colon\mathcal{H}\to 2^{\mathcal{H}}\colon x\mapsto\{u\in\mathcal{H}\mid(\forall y\in\mathcal{H})\ \langle y-x\mid u\rangle+f(x)\le f(y)\} \tag{1.19}
\]
with inverse given by
\[
(\partial f)^{-1}=\partial f^*. \tag{1.20}
\]
Moreover, the proximity operator of $f$ is
\[
\operatorname{prox}_f=J_{\partial f}\colon\mathcal{H}\to\mathcal{H}\colon x\mapsto\operatorname*{argmin}_{y\in\mathcal{H}}\ f(y)+\tfrac12\|x-y\|^2. \tag{1.21}
\]
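As a concrete illustration of (1.19)-(1.21) (added here, not part of the original text), for $f=\|\cdot\|_1$ the proximity operator is componentwise soft-thresholding, and one can verify numerically that its output $p$ satisfies the resolvent characterization $x-p\in\gamma\,\partial f(p)$:

```python
# Illustrative check: for f = ||.||_1, prox_{gamma f} is soft-thresholding,
# and the result p satisfies x - p in gamma * df(p) (cf. (1.19)-(1.21)).
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.7
x = rng.standard_normal(8)

p = np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)   # prox_{gamma ||.||_1}(x)

# Subgradient of |.| at p: sign(p_i) if p_i != 0, any value in [-1, 1] if p_i == 0.
u = (x - p) / gamma
ok = np.all(np.where(p != 0, np.isclose(u, np.sign(p)), np.abs(u) <= 1 + 1e-12))
print("x - p in gamma * d||.||_1(p):", ok)   # expected: True
```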
2 Algorithm and convergence

We now state the main result of the paper, in which we introduce our splitting algorithm, prove its convergence, and indicate the connections to existing work.

Theorem 2.1 In Problem 1.1, suppose that $\Omega\neq\varnothing$ and that
\[
\beta=\nu_0+\sqrt{\sum_{k=1}^s\sum_{i=1}^m\|N_kL_{k,i}\|^2+\max_{1\le k\le s}\bigl(\|N_k\|^2+\|M_k\|^2\bigr)}>0. \tag{2.1}
\]
For every $i\in\{1,\ldots,m\}$, let $(a_{1,1,n}^i)_{n\in\mathbb{N}}$, $(b_{1,1,n}^i)_{n\in\mathbb{N}}$, $(c_{1,1,n}^i)_{n\in\mathbb{N}}$ be absolutely summable sequences in $\mathcal{H}_i$; for every $k\in\{1,\ldots,s\}$, let $(a_{1,2,n}^k)_{n\in\mathbb{N}}$, $(c_{1,2,n}^k)_{n\in\mathbb{N}}$ be absolutely summable sequences in $\mathcal{G}_k$, let $(a_{2,1,n}^k)_{n\in\mathbb{N}}$, $(b_{2,1,n}^k)_{n\in\mathbb{N}}$, $(c_{2,1,n}^k)_{n\in\mathbb{N}}$ be absolutely summable sequences in $\mathcal{X}_k$, and let $(a_{2,2,n}^k)_{n\in\mathbb{N}}$, $(b_{2,2,n}^k)_{n\in\mathbb{N}}$, $(c_{2,2,n}^k)_{n\in\mathbb{N}}$ be absolutely summable sequences in $\mathcal{Y}_k$. For every $i\in\{1,\ldots,m\}$ and $k\in\{1,\ldots,s\}$, let $x_{1,0}^i\in\mathcal{H}_i$, $x_{2,0}^k\in\mathcal{G}_k$, $v_{1,0}^k\in\mathcal{X}_k$ and $v_{2,0}^k\in\mathcal{Y}_k$, let $\varepsilon\in\,]0,1/(\beta+1)[$, let $(\gamma_n)_{n\in\mathbb{N}}$ be a sequence in $[\varepsilon,(1-\varepsilon)/\beta]$, and set
\[
(\forall n\in\mathbb{N})\quad
\begin{array}{l}
\text{For } i=1,\ldots,m\\
\quad s_{1,1,n}^i=x_{1,n}^i-\gamma_n\bigl(C_i(x_{1,n}^1,\ldots,x_{1,n}^m)+\sum_{k=1}^s L_{k,i}^*N_k^*v_{1,n}^k\bigr)+a_{1,1,n}^i\\
\quad p_{1,1,n}^i=J_{\gamma_nA_i}(s_{1,1,n}^i+\gamma_nz_i)+b_{1,1,n}^i\\
\text{For } k=1,\ldots,s\\
\quad p_{1,2,n}^k=x_{2,n}^k+\gamma_n\bigl(N_k^*v_{1,n}^k-M_k^*v_{2,n}^k\bigr)+a_{1,2,n}^k\\
\quad s_{2,1,n}^k=v_{1,n}^k+\gamma_n\bigl(\sum_{i=1}^m N_kL_{k,i}x_{1,n}^i-N_kx_{2,n}^k\bigr)+a_{2,1,n}^k\\
\quad p_{2,1,n}^k=s_{2,1,n}^k-\gamma_n\bigl(N_kr_k+J_{\gamma_n^{-1}D_k}(\gamma_n^{-1}s_{2,1,n}^k-N_kr_k)\bigr)+b_{2,1,n}^k\\
\quad q_{2,1,n}^k=p_{2,1,n}^k+\gamma_n\bigl(N_k\sum_{i=1}^m L_{k,i}p_{1,1,n}^i-N_kp_{1,2,n}^k\bigr)+c_{2,1,n}^k\\
\quad v_{1,n+1}^k=v_{1,n}^k-s_{2,1,n}^k+q_{2,1,n}^k\\
\quad s_{2,2,n}^k=v_{2,n}^k+\gamma_nM_kx_{2,n}^k+a_{2,2,n}^k\\
\quad p_{2,2,n}^k=s_{2,2,n}^k-\gamma_nJ_{\gamma_n^{-1}B_k}(\gamma_n^{-1}s_{2,2,n}^k)+b_{2,2,n}^k\\
\quad q_{2,2,n}^k=p_{2,2,n}^k+\gamma_nM_kp_{1,2,n}^k+c_{2,2,n}^k\\
\quad v_{2,n+1}^k=v_{2,n}^k-s_{2,2,n}^k+q_{2,2,n}^k\\
\quad q_{1,2,n}^k=p_{1,2,n}^k+\gamma_n\bigl(N_k^*p_{2,1,n}^k-M_k^*p_{2,2,n}^k\bigr)+c_{1,2,n}^k\\
\quad x_{2,n+1}^k=x_{2,n}^k-p_{1,2,n}^k+q_{1,2,n}^k\\
\text{For } i=1,\ldots,m\\
\quad q_{1,1,n}^i=p_{1,1,n}^i-\gamma_n\bigl(C_i(p_{1,1,n}^1,\ldots,p_{1,1,n}^m)+\sum_{k=1}^s L_{k,i}^*N_k^*p_{2,1,n}^k\bigr)+c_{1,1,n}^i\\
\quad x_{1,n+1}^i=x_{1,n}^i-s_{1,1,n}^i+q_{1,1,n}^i.
\end{array} \tag{2.2}
\]
Then the following hold for every $i\in\{1,\ldots,m\}$ and every $k\in\{1,\ldots,s\}$.

(i) $\sum_{n\in\mathbb{N}}\|x_{1,n}^i-p_{1,1,n}^i\|^2<+\infty$ and $\sum_{n\in\mathbb{N}}\|x_{2,n}^k-p_{1,2,n}^k\|^2<+\infty$.

(ii) $\sum_{n\in\mathbb{N}}\|v_{1,n}^k-p_{2,1,n}^k\|^2<+\infty$ and $\sum_{n\in\mathbb{N}}\|v_{2,n}^k-p_{2,2,n}^k\|^2<+\infty$.

(iii) $x_{1,n}^i\rightharpoonup\overline{x}_{1,i}$, $x_{2,n}^k\rightharpoonup\overline{y}_k$, $v_{1,n}^k\rightharpoonup\overline{v}_{1,k}$, $v_{2,n}^k\rightharpoonup\overline{v}_{2,k}$, and, for every $(i,k)\in\{1,\ldots,m\}\times\{1,\ldots,s\}$,
\[
\begin{cases}
z_i-\displaystyle\sum_{k=1}^s L_{k,i}^*N_k^*\overline{v}_{1,k}\in A_i\overline{x}_{1,i}+C_i(\overline{x}_{1,1},\ldots,\overline{x}_{1,m})
\quad\text{and}\quad M_k^*\overline{v}_{2,k}=N_k^*\overline{v}_{1,k},\\[1ex]
N_k\Bigl(\displaystyle\sum_{i=1}^m L_{k,i}\overline{x}_{1,i}-r_k-\overline{y}_k\Bigr)\in D_k^{-1}\overline{v}_{1,k}
\quad\text{and}\quad M_k\overline{y}_k\in B_k^{-1}\overline{v}_{2,k},\\[1ex]
(\overline{x}_{1,1},\ldots,\overline{x}_{1,m},N_1^*\overline{v}_{1,1},\ldots,N_s^*\overline{v}_{1,s})\in\Omega.
\end{cases} \tag{2.3}
\]

(iv) Suppose that $A_j$ is uniformly monotone at $\overline{x}_{1,j}$ for some $j\in\{1,\ldots,m\}$. Then $x_{1,n}^j\to\overline{x}_{1,j}$.

(v) Suppose that the operator $(x_i)_{1\le i\le m}\mapsto(C_j((x_i)_{1\le i\le m}))_{1\le j\le m}$ is uniformly monotone at $(\overline{x}_{1,1},\ldots,\overline{x}_{1,m})$. Then $(\forall i\in\{1,\ldots,m\})\ x_{1,n}^i\to\overline{x}_{1,i}$.

(vi) Suppose that there exist $j\in\{1,\ldots,m\}$ and an increasing function $\phi_j\colon[0,+\infty[\to[0,+\infty]$ vanishing only at $0$ such that
\[
(\forall(x_i)_{1\le i\le m}\in\mathcal{H}_1\times\cdots\times\mathcal{H}_m)\quad
\sum_{i=1}^m\langle C_i(x_1,\ldots,x_m)-C_i(\overline{x}_{1,1},\ldots,\overline{x}_{1,m})\mid x_i-\overline{x}_{1,i}\rangle\ge\phi_j(\|x_j-\overline{x}_{1,j}\|). \tag{2.4}
\]
Then $x_{1,n}^j\to\overline{x}_{1,j}$.

(vii) Suppose that $D_j^{-1}$ is uniformly monotone at $\overline{v}_{1,j}$ for some $j\in\{1,\ldots,s\}$. Then $v_{1,n}^j\to\overline{v}_{1,j}$.

(viii) Suppose that $B_j^{-1}$ is uniformly monotone at $\overline{v}_{2,j}$ for some $j\in\{1,\ldots,s\}$. Then $v_{2,n}^j\to\overline{v}_{2,j}$.
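The steps defining $p_{2,1,n}^k$ and $p_{2,2,n}^k$ in (2.2) evaluate resolvents of the inverse operators $D_k^{-1}$ and $B_k^{-1}$ through the standard identity $J_{\gamma A^{-1}}x=x-\gamma J_{\gamma^{-1}A}(\gamma^{-1}x)$ (cf. [3]). The snippet below is a small numerical illustration of this identity (our addition, with illustrative names), using $A=\partial\|\cdot\|_1$, for which $J_{\gamma A^{-1}}$ is the projection onto the unit $\ell^\infty$-ball.

```python
# Numerical illustration of  J_{gamma*A^{-1}}(x) = x - gamma*J_{(1/gamma)*A}(x/gamma),
# the identity behind the D_k- and B_k-steps of (2.2).
# Here A = d||.||_1: J_{gamma*A} is soft-thresholding by gamma,
# and J_{gamma*A^{-1}} is the projection onto [-1, 1]^n.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(5)
x = 3.0 * rng.standard_normal(10)
gamma = 0.8

lhs = np.clip(x, -1.0, 1.0)                     # J_{gamma A^{-1}}: projection onto the l-inf ball
rhs = x - gamma * soft(x / gamma, 1.0 / gamma)  # right-hand side of the identity
print(np.allclose(lhs, rhs))                    # expected: True
```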
Proof. Let us introduce the Hilbert direct sums
\[
\boldsymbol{\mathcal{H}}=\mathcal{H}_1\oplus\cdots\oplus\mathcal{H}_m,\quad
\boldsymbol{\mathcal{G}}=\mathcal{G}_1\oplus\cdots\oplus\mathcal{G}_s,\quad
\boldsymbol{\mathcal{Y}}=\mathcal{Y}_1\oplus\cdots\oplus\mathcal{Y}_s,\quad
\boldsymbol{\mathcal{X}}=\mathcal{X}_1\oplus\cdots\oplus\mathcal{X}_s. \tag{2.5}
\]
We use boldface symbols to indicate elements of these spaces; their scalar products and norms are defined in the usual way. For example, in $\boldsymbol{\mathcal{H}}$,
\[
\langle\cdot\mid\cdot\rangle\colon(\boldsymbol{x},\boldsymbol{y})\mapsto\sum_{i=1}^m\langle x_i\mid y_i\rangle
\quad\text{and}\quad
\|\cdot\|\colon\boldsymbol{x}\mapsto\sqrt{\langle\boldsymbol{x}\mid\boldsymbol{x}\rangle}. \tag{2.6}
\]
Set
\[
\begin{cases}
\boldsymbol{A}\colon\boldsymbol{\mathcal{H}}\to 2^{\boldsymbol{\mathcal{H}}}\colon\boldsymbol{x}\mapsto{\textstyle\bigtimes_{i=1}^m}A_ix_i\\
\boldsymbol{C}\colon\boldsymbol{\mathcal{H}}\to\boldsymbol{\mathcal{H}}\colon\boldsymbol{x}\mapsto(C_i\boldsymbol{x})_{1\le i\le m}\\
\boldsymbol{L}\colon\boldsymbol{\mathcal{H}}\to\boldsymbol{\mathcal{G}}\colon\boldsymbol{x}\mapsto\bigl(\sum_{i=1}^m L_{k,i}x_i\bigr)_{1\le k\le s}\\
\boldsymbol{N}\colon\boldsymbol{\mathcal{G}}\to\boldsymbol{\mathcal{X}}\colon\boldsymbol{v}\mapsto(N_kv_k)_{1\le k\le s}\\
\boldsymbol{z}=(z_1,\ldots,z_m)
\end{cases}
\quad\text{and}\quad
\begin{cases}
\boldsymbol{B}\colon\boldsymbol{\mathcal{Y}}\to 2^{\boldsymbol{\mathcal{Y}}}\colon\boldsymbol{v}\mapsto{\textstyle\bigtimes_{k=1}^s}B_kv_k\\
\boldsymbol{D}\colon\boldsymbol{\mathcal{X}}\to 2^{\boldsymbol{\mathcal{X}}}\colon\boldsymbol{v}\mapsto{\textstyle\bigtimes_{k=1}^s}D_kv_k\\
\boldsymbol{M}\colon\boldsymbol{\mathcal{G}}\to\boldsymbol{\mathcal{Y}}\colon\boldsymbol{v}\mapsto(M_kv_k)_{1\le k\le s}\\
\boldsymbol{r}=(r_1,\ldots,r_s).
\end{cases} \tag{2.7}
\]
Then it follows from (1.15) that
\[
(\forall(\boldsymbol{x},\boldsymbol{y})\in\boldsymbol{\mathcal{H}}^2)\quad
\|\boldsymbol{C}\boldsymbol{x}-\boldsymbol{C}\boldsymbol{y}\|\le\nu_0\|\boldsymbol{x}-\boldsymbol{y}\|
\quad\text{and}\quad
\langle\boldsymbol{C}\boldsymbol{x}-\boldsymbol{C}\boldsymbol{y}\mid\boldsymbol{x}-\boldsymbol{y}\rangle\ge 0, \tag{2.8}
\]
which shows that $\boldsymbol{C}$ is $\nu_0$-Lipschitzian and monotone, hence maximally monotone [3, Corollary 20.25]. Moreover, it follows from [3, Proposition 20.23] that $\boldsymbol{A}$, $\boldsymbol{B}$ and $\boldsymbol{D}$ are maximally monotone. Furthermore,
\[
\boldsymbol{L}^*\colon\boldsymbol{\mathcal{G}}\to\boldsymbol{\mathcal{H}}\colon\boldsymbol{v}\mapsto\Bigl(\sum_{k=1}^s L_{k,i}^*v_k\Bigr)_{1\le i\le m},\quad
\boldsymbol{M}^*\colon\boldsymbol{\mathcal{Y}}\to\boldsymbol{\mathcal{G}}\colon\boldsymbol{v}\mapsto(M_k^*v_k)_{1\le k\le s},\quad
\boldsymbol{N}^*\colon\boldsymbol{\mathcal{X}}\to\boldsymbol{\mathcal{G}}\colon\boldsymbol{v}\mapsto(N_k^*v_k)_{1\le k\le s}. \tag{2.9}
\]
Using (2.7) and (2.9), we can rewrite the system of monotone inclusions (1.16) as a monotone inclusion in $\boldsymbol{\mathcal{K}}=\boldsymbol{\mathcal{H}}\oplus\boldsymbol{\mathcal{G}}$:
\[
\text{find }(\boldsymbol{x},\boldsymbol{v})\in\boldsymbol{\mathcal{K}}\text{ such that }
\begin{cases}
\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{v}\in(\boldsymbol{A}+\boldsymbol{C})\boldsymbol{x}\\
\boldsymbol{L}\boldsymbol{x}-\boldsymbol{r}\in\bigl((\boldsymbol{M}^*\circ\boldsymbol{B}\circ\boldsymbol{M})^{-1}+(\boldsymbol{N}^*\circ\boldsymbol{D}\circ\boldsymbol{N})^{-1}\bigr)\boldsymbol{v}.
\end{cases} \tag{2.10}
\]
It follows from (2.10) that there exists $\boldsymbol{y}\in\boldsymbol{\mathcal{G}}$ such that
\[
\begin{cases}
\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{v}\in(\boldsymbol{A}+\boldsymbol{C})\boldsymbol{x}\\
\boldsymbol{y}\in(\boldsymbol{M}^*\circ\boldsymbol{B}\circ\boldsymbol{M})^{-1}\boldsymbol{v}\\
\boldsymbol{L}\boldsymbol{x}-\boldsymbol{y}-\boldsymbol{r}\in(\boldsymbol{N}^*\circ\boldsymbol{D}\circ\boldsymbol{N})^{-1}\boldsymbol{v}
\end{cases}
\;\Longleftrightarrow\;
\begin{cases}
\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{v}\in(\boldsymbol{A}+\boldsymbol{C})\boldsymbol{x}\\
\boldsymbol{v}\in\boldsymbol{M}^*\circ\boldsymbol{B}\circ\boldsymbol{M}\,\boldsymbol{y}\\
\boldsymbol{v}\in\boldsymbol{N}^*\circ\boldsymbol{D}\circ\boldsymbol{N}(\boldsymbol{L}\boldsymbol{x}-\boldsymbol{y}-\boldsymbol{r}),
\end{cases} \tag{2.11}
\]
which implies that
\[
\begin{cases}
\boldsymbol{z}\in(\boldsymbol{A}+\boldsymbol{C})\boldsymbol{x}+\boldsymbol{L}^*\boldsymbol{N}^*\boldsymbol{D}(\boldsymbol{N}\boldsymbol{L}\boldsymbol{x}-\boldsymbol{N}\boldsymbol{y}-\boldsymbol{N}\boldsymbol{r})\\
0\in\boldsymbol{M}^*\circ\boldsymbol{B}\circ\boldsymbol{M}\,\boldsymbol{y}-\boldsymbol{N}^*\boldsymbol{D}(\boldsymbol{N}\boldsymbol{L}\boldsymbol{x}-\boldsymbol{N}\boldsymbol{y}-\boldsymbol{N}\boldsymbol{r}).
\end{cases} \tag{2.12}
\]
Since $\Omega\neq\varnothing$, problem (2.12) possesses at least one solution. Problem (2.12) is a special case of the primal problem in [16, Eq. (1.2)] with
\[
\begin{cases}
m=2,\ K=2,\\
\mathcal{H}_1=\boldsymbol{\mathcal{H}},\ \mathcal{G}_1=\boldsymbol{\mathcal{X}},\\
\mathcal{H}_2=\boldsymbol{\mathcal{G}},\ \mathcal{G}_2=\boldsymbol{\mathcal{Y}},\\
z_1=\boldsymbol{z},\ z_2=0,\\
r_1=\boldsymbol{N}\boldsymbol{r},\ r_2=0,
\end{cases}
\qquad
\begin{cases}
L_{1,1}=\boldsymbol{N}\boldsymbol{L},\ L_{1,2}=-\boldsymbol{N},\\
L_{2,1}=0,\ L_{2,2}=\boldsymbol{M},
\end{cases}
\qquad\text{and}\qquad
\begin{cases}
A_1=\boldsymbol{A},\ B_1=\boldsymbol{D},\ C_1=\boldsymbol{C},\ D_1^{-1}=0,\\
A_2=0,\ B_2=\boldsymbol{B},\ C_2=0,\ D_2^{-1}=0.
\end{cases} \tag{2.13}
\]
In view of [16, Eq. (1.4)], the dual problem of (2.12) is to find $\boldsymbol{v}_1\in\boldsymbol{\mathcal{X}}$ and $\boldsymbol{v}_2\in\boldsymbol{\mathcal{Y}}$ such that
\[
\begin{cases}
-\boldsymbol{N}\boldsymbol{r}\in-\boldsymbol{N}\boldsymbol{L}(\boldsymbol{A}+\boldsymbol{C})^{-1}(\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\boldsymbol{v}_1)+\boldsymbol{N}\{0\}^{-1}(\boldsymbol{N}^*\boldsymbol{v}_1-\boldsymbol{M}^*\boldsymbol{v}_2)+\boldsymbol{D}^{-1}\boldsymbol{v}_1\\
0\in-\boldsymbol{M}\{0\}^{-1}(\boldsymbol{N}^*\boldsymbol{v}_1-\boldsymbol{M}^*\boldsymbol{v}_2)+\boldsymbol{B}^{-1}\boldsymbol{v}_2,
\end{cases}
\]
where $\{0\}^{-1}$ denotes the inverse of the zero operator $x\mapsto\{0\}$. We next show that algorithm (2.2) is an application of the algorithm in [16, Eq. (2.4)] to (2.12). It follows from [3, Proposition 23.16] that the resolvents of $\boldsymbol{A}$, $\boldsymbol{D}$ and $\boldsymbol{B}$ split componentwise:
\[
(\forall\boldsymbol{x}\in\boldsymbol{\mathcal{H}})(\forall\gamma\in\,]0,+\infty[)\quad J_{\gamma\boldsymbol{A}}\boldsymbol{x}=(J_{\gamma A_i}x_i)_{1\le i\le m} \tag{2.14}
\]
and
\[
(\forall\gamma\in\,]0,+\infty[)(\forall\boldsymbol{v}\in\boldsymbol{\mathcal{X}})\quad J_{\gamma\boldsymbol{D}}\boldsymbol{v}=(J_{\gamma D_k}v_k)_{1\le k\le s}
\qquad\text{and}\qquad
(\forall\boldsymbol{v}\in\boldsymbol{\mathcal{Y}})\quad J_{\gamma\boldsymbol{B}}\boldsymbol{v}=(J_{\gamma B_k}v_k)_{1\le k\le s}. \tag{2.15}
\]
Let us set
\[
(\forall n\in\mathbb{N})\quad
\begin{cases}
\boldsymbol{a}_{1,1,n}=(a_{1,1,n}^1,\ldots,a_{1,1,n}^m)\\
\boldsymbol{b}_{1,1,n}=(b_{1,1,n}^1,\ldots,b_{1,1,n}^m)\\
\boldsymbol{c}_{1,1,n}=(c_{1,1,n}^1,\ldots,c_{1,1,n}^m)\\
\boldsymbol{a}_{1,2,n}=(a_{1,2,n}^1,\ldots,a_{1,2,n}^s)\\
\boldsymbol{c}_{1,2,n}=(c_{1,2,n}^1,\ldots,c_{1,2,n}^s)
\end{cases}
\quad\text{and}\quad
\begin{cases}
\boldsymbol{a}_{2,1,n}=(a_{2,1,n}^1,\ldots,a_{2,1,n}^s)\\
\boldsymbol{b}_{2,1,n}=(b_{2,1,n}^1,\ldots,b_{2,1,n}^s)\\
\boldsymbol{c}_{2,1,n}=(c_{2,1,n}^1,\ldots,c_{2,1,n}^s)\\
\boldsymbol{a}_{2,2,n}=(a_{2,2,n}^1,\ldots,a_{2,2,n}^s)\\
\boldsymbol{b}_{2,2,n}=(b_{2,2,n}^1,\ldots,b_{2,2,n}^s)\\
\boldsymbol{c}_{2,2,n}=(c_{2,2,n}^1,\ldots,c_{2,2,n}^s).
\end{cases} \tag{2.16}
\]
Then it follows from our assumptions that every sequence defined in (2.16) is absolutely summable. Now set
\[
(\forall n\in\mathbb{N})\quad
\boldsymbol{x}_{1,n}=(x_{1,n}^1,\ldots,x_{1,n}^m),\quad
\boldsymbol{x}_{2,n}=(x_{2,n}^1,\ldots,x_{2,n}^s),\quad
\boldsymbol{v}_{1,n}=(v_{1,n}^1,\ldots,v_{1,n}^s),\quad
\boldsymbol{v}_{2,n}=(v_{2,n}^1,\ldots,v_{2,n}^s), \tag{2.17}
\]
and set
\[
(\forall n\in\mathbb{N})\quad
\begin{cases}
\boldsymbol{s}_{1,1,n}=(s_{1,1,n}^1,\ldots,s_{1,1,n}^m)\\
\boldsymbol{p}_{1,1,n}=(p_{1,1,n}^1,\ldots,p_{1,1,n}^m)\\
\boldsymbol{q}_{1,1,n}=(q_{1,1,n}^1,\ldots,q_{1,1,n}^m)\\
\boldsymbol{p}_{1,2,n}=(p_{1,2,n}^1,\ldots,p_{1,2,n}^s)\\
\boldsymbol{q}_{1,2,n}=(q_{1,2,n}^1,\ldots,q_{1,2,n}^s)
\end{cases}
\quad\text{and}\quad
\begin{cases}
\boldsymbol{s}_{2,1,n}=(s_{2,1,n}^1,\ldots,s_{2,1,n}^s)\\
\boldsymbol{p}_{2,1,n}=(p_{2,1,n}^1,\ldots,p_{2,1,n}^s)\\
\boldsymbol{q}_{2,1,n}=(q_{2,1,n}^1,\ldots,q_{2,1,n}^s)\\
\boldsymbol{s}_{2,2,n}=(s_{2,2,n}^1,\ldots,s_{2,2,n}^s)\\
\boldsymbol{p}_{2,2,n}=(p_{2,2,n}^1,\ldots,p_{2,2,n}^s)\\
\boldsymbol{q}_{2,2,n}=(q_{2,2,n}^1,\ldots,q_{2,2,n}^s).
\end{cases} \tag{2.18}
\]
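In (2.13) the linear part of the product-space formulation is the block operator $(\boldsymbol{x},\boldsymbol{v})\mapsto(\boldsymbol{N}\boldsymbol{L}\boldsymbol{x}-\boldsymbol{N}\boldsymbol{v},\boldsymbol{M}\boldsymbol{v})$, whose norm is dominated by the square-rooted quantity in (2.1) as we have transcribed it. The following finite-dimensional experiment with random matrices standing in for the operators (our illustration, not part of the paper) checks that bound numerically.

```python
# Illustrative check that  ||(x,v) -> (NLx - Nv, Mv)||^2
#   <= sum_{k,i} ||N_k L_{k,i}||^2 + max_k (||N_k||^2 + ||M_k||^2),
# the quantity under the square root in our reading of (2.1).
import numpy as np

rng = np.random.default_rng(2)
m, s = 3, 2                       # number of primal / dual blocks
h = [4, 5, 3]                     # dim H_i
g = [6, 4]                        # dim G_k
xk = [5, 7]                       # dim X_k
yk = [3, 6]                       # dim Y_k

L = [[rng.standard_normal((g[k], h[i])) for i in range(m)] for k in range(s)]
N = [rng.standard_normal((xk[k], g[k])) for k in range(s)]
M = [rng.standard_normal((yk[k], g[k])) for k in range(s)]

# Dense block matrix of (x, v) |-> (NLx - Nv, Mv).
NL = np.block([[N[k] @ L[k][i] for i in range(m)] for k in range(s)])
Nd = np.block([[N[k] if j == k else np.zeros((xk[k], g[j])) for j in range(s)]
               for k in range(s)])
Md = np.block([[M[k] if j == k else np.zeros((yk[k], g[j])) for j in range(s)]
               for k in range(s)])
T = np.block([[NL, -Nd],
              [np.zeros((Md.shape[0], NL.shape[1])), Md]])

op2 = lambda A: np.linalg.norm(A, 2) ** 2     # squared spectral norm
bound = (sum(op2(N[k] @ L[k][i]) for k in range(s) for i in range(m))
         + max(op2(N[k]) + op2(M[k]) for k in range(s)))
print(op2(T) <= bound + 1e-10)    # expected: True
```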
Then, in view of (2.7), (2.9), (2.13), (2.14) and (2.15), algorithm (2.2) reduces to a special case of the algorithm in [16, Eq. (2.4)]. Moreover, it follows from (2.1) and (2.13) that the condition [16, Eq. (1.1)] is satisfied. Furthermore, the condition on the stepsizes $(\gamma_n)_{n\in\mathbb{N}}$ and, as shown above, all the conditions on the operators and on the error sequences are also satisfied. To sum up, all the conditions in [16, Problem 1.1] and [16, Theorem 2.4] are satisfied.

(i)&(ii): These conclusions follow from [16, Theorem 2.4(i)] and [16, Theorem 2.4(ii)], respectively.

(iii): It follows from [16, Theorem 2.4(iii)(c)] and [16, Theorem 2.4(iii)(d)] that $\boldsymbol{x}_{1,n}\rightharpoonup\overline{\boldsymbol{x}}_1$, $\boldsymbol{x}_{2,n}\rightharpoonup\overline{\boldsymbol{y}}$, $\boldsymbol{v}_{1,n}\rightharpoonup\overline{\boldsymbol{v}}_1$ and $\boldsymbol{v}_{2,n}\rightharpoonup\overline{\boldsymbol{v}}_2$. We next derive from [16, Theorem 2.4(iii)(a)] and [16, Theorem 2.4(iii)(b)] that, for every $i\in\{1,\ldots,m\}$ and $k\in\{1,\ldots,s\}$,
\[
z_i-\sum_{k=1}^s L_{k,i}^*N_k^*\overline{v}_{1,k}\in A_i\overline{x}_{1,i}+C_i(\overline{x}_{1,1},\ldots,\overline{x}_{1,m})
\quad\text{and}\quad
M_k^*\overline{v}_{2,k}=N_k^*\overline{v}_{1,k} \tag{2.19}
\]
and
\[
N_k\Bigl(\sum_{i=1}^m L_{k,i}\overline{x}_{1,i}-r_k-\overline{y}_k\Bigr)\in D_k^{-1}\overline{v}_{1,k}
\quad\text{and}\quad
M_k\overline{y}_k\in B_k^{-1}\overline{v}_{2,k}. \tag{2.20}
\]
We have
\[
(2.20)\;\Longleftrightarrow\;
\overline{v}_{1,k}\in D_k\Bigl(N_k\Bigl(\sum_{i=1}^m L_{k,i}\overline{x}_{1,i}-r_k-\overline{y}_k\Bigr)\Bigr)
\quad\text{and}\quad
\overline{v}_{2,k}\in B_k(M_k\overline{y}_k), \tag{2.21}
\]
which yields successively
\[
\begin{aligned}
&\Longrightarrow\;
N_k^*\overline{v}_{1,k}\in N_k^*D_k\Bigl(N_k\Bigl(\sum_{i=1}^m L_{k,i}\overline{x}_{1,i}-r_k-\overline{y}_k\Bigr)\Bigr)
\quad\text{and}\quad
M_k^*\overline{v}_{2,k}\in M_k^*B_k(M_k\overline{y}_k)\\
&\Longrightarrow\;
\sum_{i=1}^m L_{k,i}\overline{x}_{1,i}-r_k-\overline{y}_k\in(N_k^*\circ D_k\circ N_k)^{-1}(N_k^*\overline{v}_{1,k})
\quad\text{and}\quad
\overline{y}_k\in(M_k^*\circ B_k\circ M_k)^{-1}(M_k^*\overline{v}_{2,k})\\
&\Longrightarrow\;
\sum_{i=1}^m L_{k,i}\overline{x}_{1,i}-r_k\in(N_k^*\circ D_k\circ N_k)^{-1}(N_k^*\overline{v}_{1,k})+(M_k^*\circ B_k\circ M_k)^{-1}(N_k^*\overline{v}_{1,k}).
\end{aligned} \tag{2.22}
\]
Therefore, (2.19) and (2.22) show that $(\overline{x}_{1,1},\ldots,\overline{x}_{1,m},N_1^*\overline{v}_{1,1},\ldots,N_s^*\overline{v}_{1,s})$ is a solution to (1.16).

(iv): For every $n\in\mathbb{N}$, every $i\in\{1,\ldots,m\}$ and every $k\in\{1,\ldots,s\}$, define the error-free quantities
\[
\begin{cases}
\widetilde{s}_{1,1,n}^i=x_{1,n}^i-\gamma_n\bigl(C_i(x_{1,n}^1,\ldots,x_{1,n}^m)+\sum_{k=1}^s L_{k,i}^*N_k^*v_{1,n}^k\bigr)\\
\widetilde{p}_{1,1,n}^i=J_{\gamma_nA_i}(\widetilde{s}_{1,1,n}^i+\gamma_nz_i)\\
\widetilde{p}_{1,2,n}^k=x_{2,n}^k+\gamma_n\bigl(N_k^*v_{1,n}^k-M_k^*v_{2,n}^k\bigr)
\end{cases}
\quad\text{and}\quad
\begin{cases}
\widetilde{s}_{2,1,n}^k=v_{1,n}^k+\gamma_n\bigl(\sum_{i=1}^m N_kL_{k,i}x_{1,n}^i-N_kx_{2,n}^k\bigr)\\
\widetilde{p}_{2,1,n}^k=\widetilde{s}_{2,1,n}^k-\gamma_n\bigl(N_kr_k+J_{\gamma_n^{-1}D_k}(\gamma_n^{-1}\widetilde{s}_{2,1,n}^k-N_kr_k)\bigr)\\
\widetilde{s}_{2,2,n}^k=v_{2,n}^k+\gamma_nM_kx_{2,n}^k\\
\widetilde{p}_{2,2,n}^k=\widetilde{s}_{2,2,n}^k-\gamma_nJ_{\gamma_n^{-1}B_k}(\gamma_n^{-1}\widetilde{s}_{2,2,n}^k).
\end{cases} \tag{2.23}
\]
Since $(\forall i)\ a_{1,1,n}^i\to 0$ and $b_{1,1,n}^i\to 0$, $(\forall k)\ a_{1,2,n}^k\to 0$, $a_{2,1,n}^k\to 0$, $a_{2,2,n}^k\to 0$, $b_{2,1,n}^k\to 0$ and $b_{2,2,n}^k\to 0$, and since the resolvents of $(A_i)_{1\le i\le m}$, $(B_k^{-1})_{1\le k\le s}$ and $(D_k^{-1})_{1\le k\le s}$ are nonexpansive, we obtain
\[
(\forall i)\ \widetilde{p}_{1,1,n}^i-p_{1,1,n}^i\to 0,
\qquad
(\forall k)\ \widetilde{p}_{1,2,n}^k-p_{1,2,n}^k\to 0,\quad
\widetilde{p}_{2,1,n}^k-p_{2,1,n}^k\to 0,\quad
\widetilde{p}_{2,2,n}^k-p_{2,2,n}^k\to 0. \tag{2.24}
\]
In turn, by (i) and (ii), we obtain
\[
(\forall i)\quad\widetilde{p}_{1,1,n}^i-x_{1,n}^i\to 0,\ \ \widetilde{p}_{1,1,n}^i\rightharpoonup\overline{x}_{1,i},
\qquad
(\forall k)\quad\widetilde{p}_{1,2,n}^k-x_{2,n}^k\to 0,\ \ \widetilde{p}_{1,2,n}^k\rightharpoonup\overline{y}_k, \tag{2.25}
\]
and
\[
(\forall k)\quad\widetilde{p}_{2,1,n}^k-v_{1,n}^k\to 0,\ \ \widetilde{p}_{2,1,n}^k\rightharpoonup\overline{v}_{1,k},
\qquad
\widetilde{p}_{2,2,n}^k-v_{2,n}^k\to 0,\ \ \widetilde{p}_{2,2,n}^k\rightharpoonup\overline{v}_{2,k}. \tag{2.26}
\]
Set
\[
(\forall n\in\mathbb{N})\quad
\widetilde{\boldsymbol{p}}_{1,1,n}=(\widetilde{p}_{1,1,n}^1,\ldots,\widetilde{p}_{1,1,n}^m),\quad
\widetilde{\boldsymbol{p}}_{1,2,n}=(\widetilde{p}_{1,2,n}^1,\ldots,\widetilde{p}_{1,2,n}^s),\quad
\widetilde{\boldsymbol{p}}_{2,1,n}=(\widetilde{p}_{2,1,n}^1,\ldots,\widetilde{p}_{2,1,n}^s),\quad
\widetilde{\boldsymbol{p}}_{2,2,n}=(\widetilde{p}_{2,2,n}^1,\ldots,\widetilde{p}_{2,2,n}^s). \tag{2.27}
\]
Then it follows from (2.25) and (2.26) that
\[
\gamma_n^{-1}(\boldsymbol{x}_{1,n}-\widetilde{\boldsymbol{p}}_{1,1,n})\to 0,\quad
\gamma_n^{-1}(\boldsymbol{x}_{2,n}-\widetilde{\boldsymbol{p}}_{1,2,n})\to 0,\quad
\gamma_n^{-1}(\boldsymbol{v}_{1,n}-\widetilde{\boldsymbol{p}}_{2,1,n})\to 0,\quad
\gamma_n^{-1}(\boldsymbol{v}_{2,n}-\widetilde{\boldsymbol{p}}_{2,2,n})\to 0. \tag{2.28}
\]
Furthermore, we derive from (2.23) that, for every $i\in\{1,\ldots,m\}$ and $k\in\{1,\ldots,s\}$,
\[
(\forall n\in\mathbb{N})\quad
\begin{cases}
\gamma_n^{-1}(x_{1,n}^i-\widetilde{p}_{1,1,n}^i)-\displaystyle\sum_{k=1}^s L_{k,i}^*N_k^*v_{1,n}^k-C_i(x_{1,n}^1,\ldots,x_{1,n}^m)\in-z_i+A_i\widetilde{p}_{1,1,n}^i\\
\gamma_n^{-1}(\widetilde{s}_{2,2,n}^k-\widetilde{p}_{2,2,n}^k)\in B_k^{-1}\widetilde{p}_{2,2,n}^k\\
\gamma_n^{-1}(\widetilde{s}_{2,1,n}^k-\widetilde{p}_{2,1,n}^k)\in N_kr_k+D_k^{-1}\widetilde{p}_{2,1,n}^k.
\end{cases} \tag{2.29}
\]
Since $A_j$ is uniformly monotone at $\overline{x}_{1,j}$, using (2.29) and (2.19) there exists an increasing function $\phi_{A_j}\colon[0,+\infty[\to[0,+\infty]$ vanishing only at $0$ such that, for every $n\in\mathbb{N}$,
\[
\begin{aligned}
\phi_{A_j}(\|\widetilde{p}_{1,1,n}^j-\overline{x}_{1,j}\|)
&\le\Bigl\langle\widetilde{p}_{1,1,n}^j-\overline{x}_{1,j}\,\Big|\,\gamma_n^{-1}(x_{1,n}^j-\widetilde{p}_{1,1,n}^j)-\sum_{k=1}^s L_{k,j}^*N_k^*(v_{1,n}^k-\overline{v}_{1,k})-\bigl(C_j\boldsymbol{x}_{1,n}-C_j\overline{\boldsymbol{x}}_1\bigr)\Bigr\rangle\\
&=\bigl\langle\widetilde{p}_{1,1,n}^j-\overline{x}_{1,j}\mid\gamma_n^{-1}(x_{1,n}^j-\widetilde{p}_{1,1,n}^j)\bigr\rangle
-\Bigl\langle\widetilde{p}_{1,1,n}^j-\overline{x}_{1,j}\,\Big|\,\sum_{k=1}^s L_{k,j}^*N_k^*(v_{1,n}^k-\overline{v}_{1,k})\Bigr\rangle-\chi_{j,n},
\end{aligned} \tag{2.30}
\]
where we denote $(\forall n\in\mathbb{N})\ \chi_{j,n}=\langle\widetilde{p}_{1,1,n}^j-\overline{x}_{1,j}\mid C_j\boldsymbol{x}_{1,n}-C_j\overline{\boldsymbol{x}}_1\rangle$. Adding to (2.30) the analogous inequalities obtained for $i\neq j$ from the mere monotonicity of $A_i$, we get
\[
\begin{aligned}
\phi_{A_j}(\|\widetilde{p}_{1,1,n}^j-\overline{x}_{1,j}\|)
&\le\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\overline{\boldsymbol{x}}_1\mid\gamma_n^{-1}(\boldsymbol{x}_{1,n}-\widetilde{\boldsymbol{p}}_{1,1,n})\rangle
-\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\overline{\boldsymbol{x}}_1\mid\boldsymbol{L}^*\boldsymbol{N}^*(\boldsymbol{v}_{1,n}-\overline{\boldsymbol{v}}_1)\rangle-\chi_n\\
&=\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\overline{\boldsymbol{x}}_1\mid\gamma_n^{-1}(\boldsymbol{x}_{1,n}-\widetilde{\boldsymbol{p}}_{1,1,n})\rangle
-\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\boldsymbol{x}_{1,n}\mid\boldsymbol{L}^*\boldsymbol{N}^*(\boldsymbol{v}_{1,n}-\overline{\boldsymbol{v}}_1)\rangle\\
&\quad-\langle\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1\mid\boldsymbol{L}^*\boldsymbol{N}^*(\boldsymbol{v}_{1,n}-\overline{\boldsymbol{v}}_1)\rangle-\chi_n,
\end{aligned} \tag{2.31}
\]
where $\chi_n=\sum_{i=1}^m\chi_{i,n}=\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\overline{\boldsymbol{x}}_1\mid\boldsymbol{C}\boldsymbol{x}_{1,n}-\boldsymbol{C}\overline{\boldsymbol{x}}_1\rangle$. Since $(B_k^{-1})_{1\le k\le s}$ and $(D_k^{-1})_{1\le k\le s}$ are monotone, we derive from (2.20) and (2.29) that, for every $k\in\{1,\ldots,s\}$,
\[
\begin{cases}
0\le\Bigl\langle\widetilde{p}_{2,1,n}^k-\overline{v}_{1,k}\,\Big|\,\gamma_n^{-1}(v_{1,n}^k-\widetilde{p}_{2,1,n}^k)+\displaystyle\sum_{i=1}^m N_kL_{k,i}(x_{1,n}^i-\overline{x}_{1,i})-N_k(x_{2,n}^k-\overline{y}_k)\Bigr\rangle\\
0\le\bigl\langle\widetilde{p}_{2,2,n}^k-\overline{v}_{2,k}\mid\gamma_n^{-1}(v_{2,n}^k-\widetilde{p}_{2,2,n}^k)+M_k(x_{2,n}^k-\overline{y}_k)\bigr\rangle,
\end{cases} \tag{2.32}
\]
which implies that
\[
0\le\langle\widetilde{\boldsymbol{p}}_{2,2,n}-\overline{\boldsymbol{v}}_2\mid\gamma_n^{-1}(\boldsymbol{v}_{2,n}-\widetilde{\boldsymbol{p}}_{2,2,n})\rangle
+\langle\widetilde{\boldsymbol{p}}_{2,2,n}-\overline{\boldsymbol{v}}_2\mid\boldsymbol{M}(\boldsymbol{x}_{2,n}-\overline{\boldsymbol{y}})\rangle \tag{2.33}
\]
and
\[
0\le\langle\widetilde{\boldsymbol{p}}_{2,1,n}-\overline{\boldsymbol{v}}_1\mid\gamma_n^{-1}(\boldsymbol{v}_{1,n}-\widetilde{\boldsymbol{p}}_{2,1,n})\rangle
+\langle\boldsymbol{N}\boldsymbol{L}(\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1)\mid\widetilde{\boldsymbol{p}}_{2,1,n}-\overline{\boldsymbol{v}}_1\rangle
-\langle\widetilde{\boldsymbol{p}}_{2,1,n}-\overline{\boldsymbol{v}}_1\mid\boldsymbol{N}(\boldsymbol{x}_{2,n}-\overline{\boldsymbol{y}})\rangle. \tag{2.34}
\]
We expand $(\chi_n)_{n\in\mathbb{N}}$ as
\[
(\forall n\in\mathbb{N})\quad
\chi_n=\langle\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1\mid\boldsymbol{C}\boldsymbol{x}_{1,n}-\boldsymbol{C}\overline{\boldsymbol{x}}_1\rangle
+\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\boldsymbol{x}_{1,n}\mid\boldsymbol{C}\boldsymbol{x}_{1,n}-\boldsymbol{C}\overline{\boldsymbol{x}}_1\rangle
\ge\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\boldsymbol{x}_{1,n}\mid\boldsymbol{C}\boldsymbol{x}_{1,n}-\boldsymbol{C}\overline{\boldsymbol{x}}_1\rangle, \tag{2.35}
\]
where the last inequality follows from the monotonicity of $\boldsymbol{C}$. Now, adding (2.31), (2.33) and (2.34) and using $\boldsymbol{M}^*\overline{\boldsymbol{v}}_2=\boldsymbol{N}^*\overline{\boldsymbol{v}}_1$, we obtain
\[
\begin{aligned}
\phi_{A_j}(\|\widetilde{p}_{1,1,n}^j-\overline{x}_{1,j}\|)
&\le\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\overline{\boldsymbol{x}}_1\mid\gamma_n^{-1}(\boldsymbol{x}_{1,n}-\widetilde{\boldsymbol{p}}_{1,1,n})\rangle
-\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\boldsymbol{x}_{1,n}\mid\boldsymbol{L}^*\boldsymbol{N}^*(\boldsymbol{v}_{1,n}-\overline{\boldsymbol{v}}_1)\rangle\\
&\quad+\langle\widetilde{\boldsymbol{p}}_{2,2,n}-\overline{\boldsymbol{v}}_2\mid\gamma_n^{-1}(\boldsymbol{v}_{2,n}-\widetilde{\boldsymbol{p}}_{2,2,n})\rangle
+\langle\widetilde{\boldsymbol{p}}_{2,1,n}-\overline{\boldsymbol{v}}_1\mid\gamma_n^{-1}(\boldsymbol{v}_{1,n}-\widetilde{\boldsymbol{p}}_{2,1,n})\rangle\\
&\quad+\langle\boldsymbol{M}^*\widetilde{\boldsymbol{p}}_{2,2,n}-\boldsymbol{N}^*\widetilde{\boldsymbol{p}}_{2,1,n}\mid\boldsymbol{x}_{2,n}-\overline{\boldsymbol{y}}\rangle
+\langle\boldsymbol{N}\boldsymbol{L}(\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1)\mid\widetilde{\boldsymbol{p}}_{2,1,n}-\boldsymbol{v}_{1,n}\rangle-\chi_n.
\end{aligned} \tag{2.36}
\]
We next derive from (2.2) that
\[
(\forall k\in\{1,\ldots,s\})\quad
M_k^*p_{2,2,n}^k-N_k^*p_{2,1,n}^k=\gamma_n^{-1}\bigl(p_{1,2,n}^k-q_{1,2,n}^k+c_{1,2,n}^k\bigr), \tag{2.37}
\]
which, together with (2.24), (2.27), (2.28) and [11, Theorem 2.5(i)], implies that
\[
\boldsymbol{M}^*\widetilde{\boldsymbol{p}}_{2,2,n}-\boldsymbol{N}^*\widetilde{\boldsymbol{p}}_{2,1,n}\to 0. \tag{2.38}
\]
Furthermore, since $(\boldsymbol{x}_{1,n})_{n\in\mathbb{N}}$, $(\boldsymbol{x}_{2,n})_{n\in\mathbb{N}}$, $(\widetilde{\boldsymbol{p}}_{1,1,n})_{n\in\mathbb{N}}$, $(\widetilde{\boldsymbol{p}}_{2,1,n})_{n\in\mathbb{N}}$, $(\widetilde{\boldsymbol{p}}_{2,2,n})_{n\in\mathbb{N}}$ and $(\boldsymbol{v}_{1,n})_{n\in\mathbb{N}}$ converge weakly, they are bounded. Hence,
\[
\tau=\sup_{n\in\mathbb{N}}\max\bigl\{\|\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1\|,\|\boldsymbol{x}_{2,n}-\overline{\boldsymbol{y}}\|,\|\widetilde{\boldsymbol{p}}_{1,1,n}-\overline{\boldsymbol{x}}_1\|,\|\widetilde{\boldsymbol{p}}_{2,1,n}-\overline{\boldsymbol{v}}_1\|,\|\widetilde{\boldsymbol{p}}_{2,2,n}-\overline{\boldsymbol{v}}_2\|,\|\boldsymbol{v}_{1,n}-\overline{\boldsymbol{v}}_1\|\bigr\}<+\infty. \tag{2.39}
\]
Then, using the Cauchy-Schwarz inequality, the Lipschitz continuity of $\boldsymbol{C}$, (2.35), (2.38), (2.39) and (2.28), it follows from (2.36) that
\[
\begin{aligned}
\phi_{A_j}(\|\widetilde{p}_{1,1,n}^j-\overline{x}_{1,j}\|)
\le\ &\tau\Bigl(\|\gamma_n^{-1}(\boldsymbol{x}_{1,n}-\widetilde{\boldsymbol{p}}_{1,1,n})\|
+\|\boldsymbol{N}\boldsymbol{L}\|\bigl(\|\widetilde{\boldsymbol{p}}_{1,1,n}-\boldsymbol{x}_{1,n}\|+\|\widetilde{\boldsymbol{p}}_{2,1,n}-\boldsymbol{v}_{1,n}\|\bigr)\\
&\quad+\|\gamma_n^{-1}(\boldsymbol{v}_{2,n}-\widetilde{\boldsymbol{p}}_{2,2,n})\|
+\|\gamma_n^{-1}(\boldsymbol{v}_{1,n}-\widetilde{\boldsymbol{p}}_{2,1,n})\|
+\|\boldsymbol{M}^*\widetilde{\boldsymbol{p}}_{2,2,n}-\boldsymbol{N}^*\widetilde{\boldsymbol{p}}_{2,1,n}\|\Bigr)\\
&+\nu_0\tau\|\widetilde{\boldsymbol{p}}_{1,1,n}-\boldsymbol{x}_{1,n}\|\ \to\ 0;
\end{aligned} \tag{2.40}
\]
in turn, $\widetilde{p}_{1,1,n}^j\to\overline{x}_{1,j}$ and hence, by (2.25), $x_{1,n}^j\to\overline{x}_{1,j}$.

(v): Since $\boldsymbol{C}$ is uniformly monotone at $\overline{\boldsymbol{x}}_1$, there exists an increasing function $\phi_{\boldsymbol{C}}\colon[0,+\infty[\to[0,+\infty]$ vanishing only at $0$ such that
\[
(\forall n\in\mathbb{N})\quad\langle\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1\mid\boldsymbol{C}\boldsymbol{x}_{1,n}-\boldsymbol{C}\overline{\boldsymbol{x}}_1\rangle\ge\phi_{\boldsymbol{C}}(\|\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1\|), \tag{2.41}
\]
and hence (2.35) becomes
\[
(\forall n\in\mathbb{N})\quad\chi_n\ge\langle\widetilde{\boldsymbol{p}}_{1,1,n}-\boldsymbol{x}_{1,n}\mid\boldsymbol{C}\boldsymbol{x}_{1,n}-\boldsymbol{C}\overline{\boldsymbol{x}}_1\rangle+\phi_{\boldsymbol{C}}(\|\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1\|). \tag{2.42}
\]
Proceeding as in the proof of (iv), (2.40) becomes
\[
\phi_{\boldsymbol{C}}(\|\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1\|)\le\text{right-hand side of (2.40)}\to 0; \tag{2.43}
\]
in turn, $\boldsymbol{x}_{1,n}\to\overline{\boldsymbol{x}}_1$, or equivalently $(\forall i\in\{1,\ldots,m\})\ x_{1,n}^i\to\overline{x}_{1,i}$.

(vi): Using the same argument as in the proof of (v), we arrive at (2.43) with $\phi_{\boldsymbol{C}}(\|\boldsymbol{x}_{1,n}-\overline{\boldsymbol{x}}_1\|)$ replaced by $\phi_j(\|x_{1,n}^j-\overline{x}_{1,j}\|)$, and hence we obtain the conclusion.

(vii)&(viii): Use the same argument as in the proof of (v), this time exploiting the uniform monotonicity of $D_j^{-1}$ and $B_j^{-1}$ in (2.32).
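A convenient sufficient condition for the uniform monotonicity assumptions in (iv)-(viii) — a standard fact recalled here for convenience, not taken from the original text — is strong convexity: if $f\in\Gamma_0(\mathcal{H})$ is strongly convex with modulus $\alpha>0$, then $\partial f$ is uniformly (indeed strongly) monotone with $\phi(t)=\alpha t^2$, since
\[
(\forall(x,u)\in\operatorname{gra}\partial f)(\forall(y,v)\in\operatorname{gra}\partial f)\quad
\langle x-y\mid u-v\rangle\ \ge\ \alpha\|x-y\|^2.
\]
This is how, in Corollary 3.2 below, uniform convexity of $f_j$, $\varphi$, $\ell_j^*$ or $g_j^*$ translates into the hypotheses of Theorem 2.1.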
Remark 2.2 Here are some remarks.

(i) In the special case when $m=1$ and $(\forall k\in\{1,\ldots,s\})\ \mathcal{G}_k=\mathcal{H}_1$ and $L_{k,1}=\mathrm{Id}$, algorithm (2.2) reduces to the recent algorithm proposed in [4, Eq. (3.15)], where the convergence results are proved under the same conditions.

(ii) In the special case when $m=1$, an alternative algorithm proposed in [7] can be used to solve Problem 1.1.

(iii) In the case when $(\forall k\in\{1,\ldots,s\})(\forall i\in\{1,\ldots,m\})\ L_{k,i}=0$, algorithm (2.2) separates into two different algorithms which solve, respectively, the first $m$ inclusions and the last $s$ inclusions in (1.16) independently.

(iv) In the case when $(\forall k\in\{1,\ldots,s\})\ \mathcal{X}_k=\mathcal{Y}_k=\mathcal{G}_k$ and $N_k=M_k=\mathrm{Id}$, we obtain a new splitting method for solving coupled systems of monotone inclusions. An alternative method can be found in [25] for the case when $\boldsymbol{C}$ is restricted to be cocoercive and $(D_k)_{1\le k\le s}$ are strongly monotone.

(v) Condition (2.4) is satisfied, for example, when each $C_i$ is restricted to be univariate and monotone and $C_j$ is uniformly monotone.

3 Applications to minimization problems

The proposed algorithm has the structure of the forward-backward-forward splitting, as in [4, 11, 16, 18, 23]. Applications of this type of algorithm to specific problems in applied mathematics can be found in [3, 4, 10, 11, 16, 18, 19, 23] and the references therein. We provide an application to the following minimization problem, which extends [4, Problem 4.1] and [7, Problem 4.1]. We recall that the infimal convolution of two functions $f$ and $g$ from $\mathcal{H}$ to $\left]-\infty,+\infty\right]$ is
\[
f\,\square\,g\colon x\mapsto\inf_{y\in\mathcal{H}}\bigl(f(y)+g(x-y)\bigr). \tag{3.1}
\]

Problem 3.1 Let $m$ and $s$ be strictly positive integers. For every $i\in\{1,\ldots,m\}$, let $(\mathcal{H}_i,\langle\cdot\mid\cdot\rangle)$ be a real Hilbert space, let $z_i\in\mathcal{H}_i$, let $f_i\in\Gamma_0(\mathcal{H}_i)$, and let $\varphi\colon\mathcal{H}_1\times\cdots\times\mathcal{H}_m\to\mathbb{R}$ be a convex differentiable function with $\nu_0$-Lipschitz continuous gradient $\nabla\varphi=(\nabla_1\varphi,\ldots,\nabla_m\varphi)$, for some $\nu_0\in[0,+\infty[$. For every $k\in\{1,\ldots,s\}$, let $(\mathcal{G}_k,\langle\cdot\mid\cdot\rangle)$, $(\mathcal{Y}_k,\langle\cdot\mid\cdot\rangle)$ and $(\mathcal{X}_k,\langle\cdot\mid\cdot\rangle)$ be real Hilbert spaces, let $r_k\in\mathcal{G}_k$, let $g_k\in\Gamma_0(\mathcal{Y}_k)$, let $\ell_k\in\Gamma_0(\mathcal{X}_k)$, and let $M_k\colon\mathcal{G}_k\to\mathcal{Y}_k$ and $N_k\colon\mathcal{G}_k\to\mathcal{X}_k$ be bounded linear operators. For every $i\in\{1,\ldots,m\}$ and every $k\in\{1,\ldots,s\}$, let $L_{k,i}\colon\mathcal{H}_i\to\mathcal{G}_k$ be a bounded linear operator. The primal problem is to
\[
\underset{x_1\in\mathcal{H}_1,\ldots,x_m\in\mathcal{H}_m}{\text{minimize}}\;
\sum_{i=1}^m\bigl(f_i(x_i)-\langle x_i\mid z_i\rangle\bigr)
+\sum_{k=1}^s\bigl((\ell_k\circ N_k)\,\square\,(g_k\circ M_k)\bigr)\Bigl(\sum_{i=1}^m L_{k,i}x_i-r_k\Bigr)
+\varphi(x_1,\ldots,x_m), \tag{3.2}
\]
and the dual problem is to
\[
\underset{\substack{v_1=(v_{1,k})_{1\le k\le s}\in\mathcal{X}_1\times\cdots\times\mathcal{X}_s,\ v_2=(v_{2,k})_{1\le k\le s}\in\mathcal{Y}_1\times\cdots\times\mathcal{Y}_s\\(\forall k\in\{1,\ldots,s\})\ M_k^*v_{2,k}=N_k^*v_{1,k}}}{\text{minimize}}\;
\Bigl(\varphi^*\,\square\,{\textstyle\bigoplus_{i=1}^m}f_i^*\Bigr)\Bigl(\Bigl(z_i-\sum_{k=1}^s L_{k,i}^*N_k^*v_{1,k}\Bigr)_{1\le i\le m}\Bigr)
+\sum_{k=1}^s\bigl(\ell_k^*(v_{1,k})+g_k^*(v_{2,k})+\langle N_k^*v_{1,k}\mid r_k\rangle\bigr), \tag{3.3}
\]
where $\bigoplus_{i=1}^m f_i^*$ denotes the separable sum $(y_i)_{1\le i\le m}\mapsto\sum_{i=1}^m f_i^*(y_i)$.
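For intuition about the infimal convolution appearing in (3.1) and (3.2), a classical special case (an illustration we add here, not taken from the paper) is the Moreau envelope: convolving $f=|\cdot|$ with $y\mapsto\frac{1}{2\gamma}y^2$ gives the Huber function. The brute-force minimization below matches the closed form; all names are illustrative.

```python
# Illustrative check: the infimal convolution of f = |.| with
# g = (1/(2*gamma))*(.)^2, computed by brute force on a grid, coincides with
#   h(x) = x^2/(2*gamma)   if |x| <= gamma,
#          |x| - gamma/2    otherwise   (the Huber function).
import numpy as np

gamma = 0.5
xs = np.linspace(-3.0, 3.0, 13)        # evaluation points for f box g
ys = np.linspace(-10.0, 10.0, 200001)  # fine grid for the inner minimization

def infconv(x):
    return np.min(np.abs(ys) + (x - ys) ** 2 / (2.0 * gamma))

def huber(x):
    return x * x / (2.0 * gamma) if abs(x) <= gamma else abs(x) - gamma / 2.0

print(max(abs(infconv(x) - huber(x)) for x in xs))  # expected: close to 0
```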
Corollary 3.2 In Problem 3.1, suppose that (2.1) is satisfied and that there exists $\overline{x}=(\overline{x}_1,\ldots,\overline{x}_m)\in\mathcal{H}_1\times\cdots\times\mathcal{H}_m$ such that, for every $i\in\{1,\ldots,m\}$,
\[
z_i\in\partial f_i(\overline{x}_i)+\sum_{k=1}^s L_{k,i}^*\Bigl(\bigl(N_k^*\circ(\partial\ell_k)\circ N_k\bigr)\,\square\,\bigl(M_k^*\circ(\partial g_k)\circ M_k\bigr)\Bigr)\Bigl(\sum_{j=1}^m L_{k,j}\overline{x}_j-r_k\Bigr)+\nabla_i\varphi(\overline{x}). \tag{3.4}
\]
For every $i\in\{1,\ldots,m\}$, let $(a_{1,1,n}^i)_{n\in\mathbb{N}}$, $(b_{1,1,n}^i)_{n\in\mathbb{N}}$, $(c_{1,1,n}^i)_{n\in\mathbb{N}}$ be absolutely summable sequences in $\mathcal{H}_i$; for every $k\in\{1,\ldots,s\}$, let $(a_{1,2,n}^k)_{n\in\mathbb{N}}$, $(c_{1,2,n}^k)_{n\in\mathbb{N}}$ be absolutely summable sequences in $\mathcal{G}_k$, let $(a_{2,1,n}^k)_{n\in\mathbb{N}}$, $(b_{2,1,n}^k)_{n\in\mathbb{N}}$, $(c_{2,1,n}^k)_{n\in\mathbb{N}}$ be absolutely summable sequences in $\mathcal{X}_k$, and let $(a_{2,2,n}^k)_{n\in\mathbb{N}}$, $(b_{2,2,n}^k)_{n\in\mathbb{N}}$, $(c_{2,2,n}^k)_{n\in\mathbb{N}}$ be absolutely summable sequences in $\mathcal{Y}_k$. For every $i\in\{1,\ldots,m\}$ and $k\in\{1,\ldots,s\}$, let $x_{1,0}^i\in\mathcal{H}_i$, $x_{2,0}^k\in\mathcal{G}_k$, $v_{1,0}^k\in\mathcal{X}_k$ and $v_{2,0}^k\in\mathcal{Y}_k$, let $\varepsilon\in\,]0,1/(\beta+1)[$, let $(\gamma_n)_{n\in\mathbb{N}}$ be a sequence in $[\varepsilon,(1-\varepsilon)/\beta]$, and set
\[
(\forall n\in\mathbb{N})\quad
\begin{array}{l}
\text{For } i=1,\ldots,m\\
\quad s_{1,1,n}^i=x_{1,n}^i-\gamma_n\bigl(\nabla_i\varphi(x_{1,n}^1,\ldots,x_{1,n}^m)+\sum_{k=1}^s L_{k,i}^*N_k^*v_{1,n}^k\bigr)+a_{1,1,n}^i\\
\quad p_{1,1,n}^i=\operatorname{prox}_{\gamma_nf_i}(s_{1,1,n}^i+\gamma_nz_i)+b_{1,1,n}^i\\
\text{For } k=1,\ldots,s\\
\quad p_{1,2,n}^k=x_{2,n}^k+\gamma_n\bigl(N_k^*v_{1,n}^k-M_k^*v_{2,n}^k\bigr)+a_{1,2,n}^k\\
\quad s_{2,1,n}^k=v_{1,n}^k+\gamma_n\bigl(\sum_{i=1}^m N_kL_{k,i}x_{1,n}^i-N_kx_{2,n}^k\bigr)+a_{2,1,n}^k\\
\quad p_{2,1,n}^k=s_{2,1,n}^k-\gamma_n\bigl(N_kr_k+\operatorname{prox}_{\gamma_n^{-1}\ell_k}(\gamma_n^{-1}s_{2,1,n}^k-N_kr_k)\bigr)+b_{2,1,n}^k\\
\quad q_{2,1,n}^k=p_{2,1,n}^k+\gamma_n\bigl(N_k\sum_{i=1}^m L_{k,i}p_{1,1,n}^i-N_kp_{1,2,n}^k\bigr)+c_{2,1,n}^k\\
\quad v_{1,n+1}^k=v_{1,n}^k-s_{2,1,n}^k+q_{2,1,n}^k\\
\quad s_{2,2,n}^k=v_{2,n}^k+\gamma_nM_kx_{2,n}^k+a_{2,2,n}^k\\
\quad p_{2,2,n}^k=s_{2,2,n}^k-\gamma_n\operatorname{prox}_{\gamma_n^{-1}g_k}(\gamma_n^{-1}s_{2,2,n}^k)+b_{2,2,n}^k\\
\quad q_{2,2,n}^k=p_{2,2,n}^k+\gamma_nM_kp_{1,2,n}^k+c_{2,2,n}^k\\
\quad v_{2,n+1}^k=v_{2,n}^k-s_{2,2,n}^k+q_{2,2,n}^k\\
\quad q_{1,2,n}^k=p_{1,2,n}^k+\gamma_n\bigl(N_k^*p_{2,1,n}^k-M_k^*p_{2,2,n}^k\bigr)+c_{1,2,n}^k\\
\quad x_{2,n+1}^k=x_{2,n}^k-p_{1,2,n}^k+q_{1,2,n}^k\\
\text{For } i=1,\ldots,m\\
\quad q_{1,1,n}^i=p_{1,1,n}^i-\gamma_n\bigl(\nabla_i\varphi(p_{1,1,n}^1,\ldots,p_{1,1,n}^m)+\sum_{k=1}^s L_{k,i}^*N_k^*p_{2,1,n}^k\bigr)+c_{1,1,n}^i\\
\quad x_{1,n+1}^i=x_{1,n}^i-s_{1,1,n}^i+q_{1,1,n}^i.
\end{array} \tag{3.5}
\]
Then the following hold for every $i\in\{1,\ldots,m\}$ and every $k\in\{1,\ldots,s\}$.

(i) $\sum_{n\in\mathbb{N}}\|x_{1,n}^i-p_{1,1,n}^i\|^2<+\infty$ and $\sum_{n\in\mathbb{N}}\|x_{2,n}^k-p_{1,2,n}^k\|^2<+\infty$.

(ii) $\sum_{n\in\mathbb{N}}\|v_{1,n}^k-p_{2,1,n}^k\|^2<+\infty$ and $\sum_{n\in\mathbb{N}}\|v_{2,n}^k-p_{2,2,n}^k\|^2<+\infty$.

(iii) $x_{1,n}^i\rightharpoonup\overline{x}_{1,i}$, $v_{1,n}^k\rightharpoonup\overline{v}_{1,k}$, $v_{2,n}^k\rightharpoonup\overline{v}_{2,k}$; moreover, $(\overline{x}_{1,1},\ldots,\overline{x}_{1,m})$ solves (3.2) and $(\overline{v}_{1,1},\ldots,\overline{v}_{1,s},\overline{v}_{2,1},\ldots,\overline{v}_{2,s})$ solves (3.3).

(iv) Suppose that $f_j$ is uniformly convex at $\overline{x}_{1,j}$ for some $j\in\{1,\ldots,m\}$. Then $x_{1,n}^j\to\overline{x}_{1,j}$.

(v) Suppose that $\varphi$ is uniformly convex at $(\overline{x}_{1,1},\ldots,\overline{x}_{1,m})$. Then $(\forall i\in\{1,\ldots,m\})\ x_{1,n}^i\to\overline{x}_{1,i}$.

(vi) Suppose that $\ell_j^*$ is uniformly convex at $\overline{v}_{1,j}$ for some $j\in\{1,\ldots,s\}$. Then $v_{1,n}^j\to\overline{v}_{1,j}$.

(vii) Suppose that $g_j^*$ is uniformly convex at $\overline{v}_{2,j}$ for some $j\in\{1,\ldots,s\}$. Then $v_{2,n}^j\to\overline{v}_{2,j}$.
Proof. Set
\[
(\forall i\in\{1,\ldots,m\})\quad A_i=\partial f_i,\ \ C_i=\nabla_i\varphi,
\qquad
(\forall k\in\{1,\ldots,s\})\quad B_k=\partial g_k,\ \ D_k=\partial\ell_k. \tag{3.6}
\]
Then it follows from [3, Theorem 20.40] that $(A_i)_{1\le i\le m}$, $(B_k)_{1\le k\le s}$ and $(D_k)_{1\le k\le s}$ are maximally monotone. Moreover, $(C_1,\ldots,C_m)=\nabla\varphi$ is $\nu_0$-Lipschitzian. Therefore all the conditions on the operators in Problem 1.1 are satisfied. Let $\boldsymbol{\mathcal{H}}$, $\boldsymbol{\mathcal{G}}$, $\boldsymbol{\mathcal{X}}$ and $\boldsymbol{\mathcal{Y}}$ be defined as in the proof of Theorem 2.1, let $\boldsymbol{L}$, $\boldsymbol{M}$, $\boldsymbol{N}$, $\boldsymbol{z}$ and $\boldsymbol{r}$ be defined as in (2.7), and define
\[
\boldsymbol{f}\colon\boldsymbol{\mathcal{H}}\to\left]-\infty,+\infty\right]\colon\boldsymbol{x}\mapsto\sum_{i=1}^m f_i(x_i),\qquad
\boldsymbol{g}\colon\boldsymbol{\mathcal{Y}}\to\left]-\infty,+\infty\right]\colon\boldsymbol{v}\mapsto\sum_{k=1}^s g_k(v_k),\qquad
\boldsymbol{\ell}\colon\boldsymbol{\mathcal{X}}\to\left]-\infty,+\infty\right]\colon\boldsymbol{v}\mapsto\sum_{k=1}^s\ell_k(v_k). \tag{3.7}
\]
Observe that [3, Proposition 13.27]
\[
\boldsymbol{f}^*\colon\boldsymbol{y}\mapsto\sum_{i=1}^m f_i^*(y_i),\qquad
\boldsymbol{g}^*\colon\boldsymbol{v}\mapsto\sum_{k=1}^s g_k^*(v_k),\qquad
\boldsymbol{\ell}^*\colon\boldsymbol{v}\mapsto\sum_{k=1}^s\ell_k^*(v_k). \tag{3.8}
\]
We also have
\[
(\boldsymbol{\ell}\circ\boldsymbol{N})\,\square\,(\boldsymbol{g}\circ\boldsymbol{M})\colon\boldsymbol{v}\mapsto\sum_{k=1}^s\bigl((\ell_k\circ N_k)\,\square\,(g_k\circ M_k)\bigr)(v_k). \tag{3.9}
\]
Then the primal problem (3.2) becomes
\[
\underset{\boldsymbol{x}\in\boldsymbol{\mathcal{H}}}{\text{minimize}}\;
\boldsymbol{f}(\boldsymbol{x})-\langle\boldsymbol{x}\mid\boldsymbol{z}\rangle
+\bigl((\boldsymbol{\ell}\circ\boldsymbol{N})\,\square\,(\boldsymbol{g}\circ\boldsymbol{M})\bigr)(\boldsymbol{L}\boldsymbol{x}-\boldsymbol{r})
+\varphi(\boldsymbol{x}), \tag{3.10}
\]
and the dual problem (3.3) becomes
\[
\underset{\substack{\boldsymbol{v}_2\in\boldsymbol{\mathcal{Y}},\ \boldsymbol{v}_1\in\boldsymbol{\mathcal{X}}\\ \boldsymbol{M}^*\boldsymbol{v}_2=\boldsymbol{N}^*\boldsymbol{v}_1}}{\text{minimize}}\;
(\varphi^*\,\square\,\boldsymbol{f}^*)(\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\boldsymbol{v}_1)
+\boldsymbol{\ell}^*(\boldsymbol{v}_1)+\boldsymbol{g}^*(\boldsymbol{v}_2)+\langle\boldsymbol{N}^*\boldsymbol{v}_1\mid\boldsymbol{r}\rangle. \tag{3.11}
\]
Using the same argument as in [7, page 15], we have
\[
\inf_{\boldsymbol{x}\in\boldsymbol{\mathcal{H}}}\Bigl(\boldsymbol{f}(\boldsymbol{x})-\langle\boldsymbol{x}\mid\boldsymbol{z}\rangle+\bigl((\boldsymbol{\ell}\circ\boldsymbol{N})\,\square\,(\boldsymbol{g}\circ\boldsymbol{M})\bigr)(\boldsymbol{L}\boldsymbol{x}-\boldsymbol{r})+\varphi(\boldsymbol{x})\Bigr)
\ \ge
\sup_{\substack{\boldsymbol{v}_2\in\boldsymbol{\mathcal{Y}},\,\boldsymbol{v}_1\in\boldsymbol{\mathcal{X}}\\ \boldsymbol{M}^*\boldsymbol{v}_2=\boldsymbol{N}^*\boldsymbol{v}_1}}
\Bigl(-(\varphi^*\,\square\,\boldsymbol{f}^*)(\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\boldsymbol{v}_1)-\boldsymbol{\ell}^*(\boldsymbol{v}_1)-\boldsymbol{g}^*(\boldsymbol{v}_2)-\langle\boldsymbol{N}^*\boldsymbol{v}_1\mid\boldsymbol{r}\rangle\Bigr). \tag{3.12}
\]
Furthermore, condition (3.4) implies that the set of solutions to (1.16) is non-empty, and we derive from (1.21), (3.6) and [15, Lemma 2.10] that (3.5) reduces to a special case of (2.2). Moreover, every specific condition of Theorem 2.1 is satisfied. Therefore, by Theorem 2.1(iii), we have
\[
\begin{cases}
z_i-\displaystyle\sum_{k=1}^s L_{k,i}^*N_k^*\overline{v}_{1,k}\in\partial f_i(\overline{x}_{1,i})+\nabla_i\varphi(\overline{x}_{1,1},\ldots,\overline{x}_{1,m})
\quad\text{and}\quad M_k^*\overline{v}_{2,k}=N_k^*\overline{v}_{1,k},\\[1ex]
N_k\Bigl(\displaystyle\sum_{i=1}^m L_{k,i}\overline{x}_{1,i}-r_k-\overline{y}_k\Bigr)\in\partial\ell_k^*(\overline{v}_{1,k})
\quad\text{and}\quad M_k\overline{y}_k\in\partial g_k^*(\overline{v}_{2,k}),
\end{cases} \tag{3.13}
\]
which is equivalent to
\[
\begin{cases}
\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\overline{\boldsymbol{v}}_1\in\partial\boldsymbol{f}(\overline{\boldsymbol{x}}_1)+\nabla\varphi(\overline{\boldsymbol{x}}_1)
\quad\text{and}\quad\boldsymbol{M}^*\overline{\boldsymbol{v}}_2=\boldsymbol{N}^*\overline{\boldsymbol{v}}_1,\\
\boldsymbol{N}(\boldsymbol{L}\overline{\boldsymbol{x}}_1-\boldsymbol{r}-\overline{\boldsymbol{y}})\in\partial\boldsymbol{\ell}^*(\overline{\boldsymbol{v}}_1)
\quad\text{and}\quad\boldsymbol{M}\overline{\boldsymbol{y}}\in\partial\boldsymbol{g}^*(\overline{\boldsymbol{v}}_2).
\end{cases} \tag{3.14}
\]
We next prove that $\overline{\boldsymbol{x}}_1=(\overline{x}_{1,1},\ldots,\overline{x}_{1,m})\in\boldsymbol{\mathcal{H}}$ is a solution to the primal problem and that $(\overline{\boldsymbol{v}}_1,\overline{\boldsymbol{v}}_2)=(\overline{v}_{1,1},\ldots,\overline{v}_{1,s},\overline{v}_{2,1},\ldots,\overline{v}_{2,s})\in\boldsymbol{\mathcal{X}}\times\boldsymbol{\mathcal{Y}}$ is a solution to the dual problem. From (3.14) and the Fenchel-Young identity we have
\[
\begin{cases}
(\boldsymbol{f}+\varphi)(\overline{\boldsymbol{x}}_1)+(\boldsymbol{f}+\varphi)^*(\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\overline{\boldsymbol{v}}_1)=\langle\overline{\boldsymbol{x}}_1\mid\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\overline{\boldsymbol{v}}_1\rangle,\\
\boldsymbol{\ell}\bigl(\boldsymbol{N}(\boldsymbol{L}\overline{\boldsymbol{x}}_1-\boldsymbol{r}-\overline{\boldsymbol{y}})\bigr)+\boldsymbol{\ell}^*(\overline{\boldsymbol{v}}_1)=\langle\boldsymbol{N}(\boldsymbol{L}\overline{\boldsymbol{x}}_1-\boldsymbol{r}-\overline{\boldsymbol{y}})\mid\overline{\boldsymbol{v}}_1\rangle,\\
\boldsymbol{g}(\boldsymbol{M}\overline{\boldsymbol{y}})+\boldsymbol{g}^*(\overline{\boldsymbol{v}}_2)=\langle\boldsymbol{M}\overline{\boldsymbol{y}}\mid\overline{\boldsymbol{v}}_2\rangle,
\end{cases} \tag{3.15}
\]
which implies that
\[
\begin{aligned}
\boldsymbol{f}(\overline{\boldsymbol{x}}_1)-\langle\overline{\boldsymbol{x}}_1\mid\boldsymbol{z}\rangle
+\bigl((\boldsymbol{\ell}\circ\boldsymbol{N})\,\square\,(\boldsymbol{g}\circ\boldsymbol{M})\bigr)(\boldsymbol{L}\overline{\boldsymbol{x}}_1-\boldsymbol{r})+\varphi(\overline{\boldsymbol{x}}_1)
&\le\boldsymbol{f}(\overline{\boldsymbol{x}}_1)-\langle\overline{\boldsymbol{x}}_1\mid\boldsymbol{z}\rangle+\boldsymbol{g}(\boldsymbol{M}\overline{\boldsymbol{y}})+\boldsymbol{\ell}\bigl(\boldsymbol{N}(\boldsymbol{L}\overline{\boldsymbol{x}}_1-\boldsymbol{r}-\overline{\boldsymbol{y}})\bigr)+\varphi(\overline{\boldsymbol{x}}_1)\\
&\le-(\boldsymbol{f}+\varphi)^*(\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\overline{\boldsymbol{v}}_1)-\boldsymbol{\ell}^*(\overline{\boldsymbol{v}}_1)-\boldsymbol{g}^*(\overline{\boldsymbol{v}}_2)-\langle\boldsymbol{r}\mid\boldsymbol{N}^*\overline{\boldsymbol{v}}_1\rangle\\
&=-(\boldsymbol{f}^*\,\square\,\varphi^*)(\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\overline{\boldsymbol{v}}_1)-\boldsymbol{\ell}^*(\overline{\boldsymbol{v}}_1)-\boldsymbol{g}^*(\overline{\boldsymbol{v}}_2)-\langle\boldsymbol{r}\mid\boldsymbol{N}^*\overline{\boldsymbol{v}}_1\rangle.
\end{aligned} \tag{3.16}
\]
Combining this inequality with (3.12), we get
\[
\boldsymbol{f}(\overline{\boldsymbol{x}}_1)-\langle\overline{\boldsymbol{x}}_1\mid\boldsymbol{z}\rangle+\bigl((\boldsymbol{\ell}\circ\boldsymbol{N})\,\square\,(\boldsymbol{g}\circ\boldsymbol{M})\bigr)(\boldsymbol{L}\overline{\boldsymbol{x}}_1-\boldsymbol{r})+\varphi(\overline{\boldsymbol{x}}_1)
=\inf_{\boldsymbol{x}\in\boldsymbol{\mathcal{H}}}\Bigl(\boldsymbol{f}(\boldsymbol{x})-\langle\boldsymbol{x}\mid\boldsymbol{z}\rangle+\bigl((\boldsymbol{\ell}\circ\boldsymbol{N})\,\square\,(\boldsymbol{g}\circ\boldsymbol{M})\bigr)(\boldsymbol{L}\boldsymbol{x}-\boldsymbol{r})+\varphi(\boldsymbol{x})\Bigr) \tag{3.17}
\]
and
\[
(\boldsymbol{f}^*\,\square\,\varphi^*)(\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\overline{\boldsymbol{v}}_1)+\boldsymbol{\ell}^*(\overline{\boldsymbol{v}}_1)+\boldsymbol{g}^*(\overline{\boldsymbol{v}}_2)+\langle\boldsymbol{r}\mid\boldsymbol{N}^*\overline{\boldsymbol{v}}_1\rangle
=\min_{\substack{\boldsymbol{v}_2\in\boldsymbol{\mathcal{Y}},\,\boldsymbol{v}_1\in\boldsymbol{\mathcal{X}}\\ \boldsymbol{M}^*\boldsymbol{v}_2=\boldsymbol{N}^*\boldsymbol{v}_1}}
\Bigl((\varphi^*\,\square\,\boldsymbol{f}^*)(\boldsymbol{z}-\boldsymbol{L}^*\boldsymbol{N}^*\boldsymbol{v}_1)+\boldsymbol{\ell}^*(\boldsymbol{v}_1)+\boldsymbol{g}^*(\boldsymbol{v}_2)+\langle\boldsymbol{N}^*\boldsymbol{v}_1\mid\boldsymbol{r}\rangle\Bigr). \tag{3.18}
\]
Therefore, the conclusions follow from Theorem 2.1 and from the fact that the uniform convexity of a function in $\Gamma_0(\mathcal{H})$ at a point of the domain of its subdifferential implies the uniform monotonicity of its subdifferential at that point.

Remark 3.3 Here are some remarks.

(i) In the special case when $m=1$ and $(\forall k\in\{1,\ldots,s\})\ \mathcal{G}_k=\mathcal{H}_1$ and $L_{k,1}=\mathrm{Id}$, algorithm (3.5) reduces to [4, Eq. (4.20)]. In the case when $m>1$, one can apply algorithm (3.5) to multicomponent signal decomposition and recovery problems [8, 9], where the smooth multivariate function $\varphi$ models the smooth couplings and the first term in (3.2) models the non-smooth couplings.

(ii) Some sufficient conditions ensuring that (3.4) is satisfied can be found in [7, Proposition 4.2].

In the remainder of this section we provide some concrete examples in image restoration [12, 8, 9, 21] which can be formulated as special cases of problem (3.10).

Example 3.4 (Image decomposition) Let us consider the case where a noisy image $r\in\mathbb{R}^{K\times K}$ is decomposed into three parts,
\[
r=x_1+x_2+w, \tag{3.19}
\]
where $w$ is noise and the ideal image is $x=x_1+x_2$ = "the piecewise constant part" + "the piecewise smooth part". We propose to solve the following variational problem:
\[
\underset{x_1\in C_1,\ x_2\in C_2}{\text{minimize}}\;
\tfrac12\|r-x_1-x_2\|^2+\alpha\|\nabla x_1\|_{1,2}+\beta\|\nabla^2x_2\|_{1,4}, \tag{3.20}
\]
where $\nabla$ and $\nabla^2$ are respectively the first- and second-order discrete gradient operators (see [21, Section 2.1] for their closed-form expressions), and $C_1$ and $C_2$ are non-empty closed convex subsets of $\mathbb{R}^{K\times K}$ which model the prior information on the ideal solutions $x_1$ and $x_2$, respectively. The norms $\|\cdot\|_{1,2}$ and $\|\cdot\|_{1,4}$ are defined by
\[
\|\cdot\|_{1,2}\colon\mathbb{R}^{K\times K}\times\mathbb{R}^{K\times K}\to\mathbb{R}\colon
(x,y)\mapsto\sum_{1\le i,j\le K}\sqrt{|x(i,j)|^2+|y(i,j)|^2} \tag{3.21}
\]
and
\[
\|\cdot\|_{1,4}\colon(\mathbb{R}^{K\times K})^4\to\mathbb{R}\colon
(x,y,u,v)\mapsto\sum_{1\le i,j\le K}\sqrt{|x(i,j)|^2+|y(i,j)|^2+|u(i,j)|^2+|v(i,j)|^2}. \tag{3.22}
\]
Problem (3.20) is a special case of (3.10) with
\[
\begin{cases}
m=s=2,\quad N_1=N_2=M_1=M_2=\mathrm{Id},\\
L_{1,1}=\nabla,\ L_{1,2}=L_{2,1}=0,\ L_{2,2}=\nabla^2,\ r_1=r_2=0,\\
g_1=\alpha\|\cdot\|_{1,2},\ g_2=\beta\|\cdot\|_{1,4},\ \ell_1=\ell_2=\iota_{\{0\}},\\
f_1=\iota_{C_1},\ f_2=\iota_{C_2},\ z_1=z_2=0,\ \varphi\colon(x_1,x_2)\mapsto\tfrac12\|r-x_1-x_2\|^2.
\end{cases} \tag{3.23}
\]
We note that in the case $C_1=C_2=\mathbb{R}^{K\times K}$, problem (3.20) was proposed in [12, Eq. (30)].
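In Example 3.4, algorithm (3.5) needs in particular the proximity operator of $g_1=\alpha\|\cdot\|_{1,2}$, which acts by group soft-thresholding of the gradient pairs; the standard formula is sketched below with illustrative names (this code is our addition, not taken from the paper).

```python
# Group soft-thresholding: prox of tau*||.||_{1,2} applied to a pair of K x K
# arrays (the two components of a discrete gradient), cf. (3.21).
# Each pixel pair (x[i,j], y[i,j]) is shrunk toward 0 by tau in Euclidean norm.
import numpy as np

def prox_l12(x, y, tau):
    norm = np.sqrt(x ** 2 + y ** 2)
    scale = np.maximum(norm - tau, 0.0) / np.maximum(norm, 1e-16)  # avoid 0/0
    return scale * x, scale * y

# Quick optimality check: p minimizes tau*||.||_{1,2}(p) + 0.5*||p - q||^2,
# so its objective value must not exceed the value at q itself.
rng = np.random.default_rng(4)
qx, qy = rng.standard_normal((2, 32, 32))
px, py = prox_l12(qx, qy, 0.3)
obj = lambda ax, ay: (0.3 * np.sqrt(ax ** 2 + ay ** 2).sum()
                      + 0.5 * ((ax - qx) ** 2 + (ay - qy) ** 2).sum())
print(obj(px, py) <= obj(qx, qy))   # expected: True
```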
The next example is an application to the problem of recovering an ideal image from multiple observations [17, Eq. (3.4)].

Example 3.5 Let $p$, $K$ and $(q_i)_{1\le i\le p}$ be strictly positive integers, let $\mathcal{H}=\mathbb{R}^{K\times K}$ and, for every $i\in\{1,\ldots,p\}$, let $\mathcal{G}_i=\mathbb{R}^{q_i}$ and let $T_i\colon\mathcal{H}\to\mathcal{G}_i$ be a linear mapping. Consider the problem of recovering an ideal image $\overline{x}$ from the observations
\[
(\forall i\in\{1,\ldots,p\})\quad t_i=T_i\overline{x}+w_i, \tag{3.24}
\]
where each $w_i$ is a noise component. Let $(\alpha,\beta)\in[0,+\infty[^2$, let $(\omega_i)_{1\le i\le p}\in\,]0,+\infty[^p$, and let $C_1$ and $C_2$ be non-empty closed convex subsets of $\mathcal{H}$ modelling the prior information on the ideal image. We propose the following variational problem to recover $\overline{x}$:
\[
\underset{x\in C_1\cap C_2}{\text{minimize}}\;
\sum_{k=1}^p\frac{\omega_k}{2}\|t_k-T_kx\|^2
+\bigl((\alpha\|\cdot\|_{1,2}\circ\nabla)\,\square\,(\beta\|\cdot\|_{1,4}\circ\nabla^2)\bigr)(x). \tag{3.25}
\]
Problem (3.25) is a special case of the primal problem (3.2) with
\[
\begin{cases}
m=1,\ s=2,\ L_{1,1}=\mathrm{Id},\ L_{2,1}=\mathrm{Id},\ z_1=0,\ r_1=r_2=0,\\
f_1=\iota_{C_1},\quad N_1=\nabla,\ \ell_1=\alpha\|\cdot\|_{1,2},\quad M_1=\nabla^2,\ g_1=\beta\|\cdot\|_{1,4},\\
N_2=\mathrm{Id},\ \ell_2=\iota_{\{0\}},\quad M_2=\mathrm{Id},\ g_2=\iota_{C_2},\\
\varphi=\displaystyle\frac12\sum_{k=1}^p\omega_k\|t_k-T_k\cdot\|^2,\quad
\nu_0=\sum_{k=1}^p\omega_k\|T_k\|^2,\quad\|\nabla\|^2\le 8.
\end{cases} \tag{3.26}
\]
Using the same argument as in [4, Section 5.3], one can check that (3.4) is satisfied. In the following experiment we use $p=2$, $C_2=[0,1]^{K\times K}$, and $C_1$ defined by [13]
\[
C_1=\bigl\{x\in\mathbb{R}^{K\times K}\ \big|\ (\forall(i,j)\in\{1,\ldots,K/8\}^2)\ \ \widehat{x}(i,j)=\widehat{\overline{x}}(i,j)\bigr\}, \tag{3.27}
\]
where $\widehat{x}$ denotes the discrete Fourier transform of $x$. The operators $T_1$ and $T_2$ are convolution operators with uniform kernels of sizes $15\times 15$ and $17\times 17$, respectively. Furthermore, $\omega_1=\omega_2=0.5$, $\alpha=\beta=0.001$, and $w_1,w_2$ are white noises with zero mean. The results after $n=300$ iterations are reported in Table 1 and Figure 1, where the SNR between an image $y$ and the original image $\overline{y}$ is defined as $20\log_{10}(\|\overline{y}\|/\|y-\overline{y}\|)$.

Table 1: SNR (dB) after n = 300 iterations.
    Observation 1 : 21.870
    Observation 2 : 22.850
    Restoration   : 27.714

[Figure 1: Deblurring by algorithm (3.5); the panels show the original image, the two observations, and the restoration result.]

Acknowledgements

This work was funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 102.01-2014.02. Part of the research work of Dinh Dũng was carried out while the author was working as a research professor at the Vietnam Institute for Advanced Study in Mathematics (VIASM). He would like to thank the VIASM for providing a fruitful research environment and good working conditions.

References

[1] H. Attouch, L. M. Briceño-Arias, and P. L. Combettes, A parallel splitting method for coupled monotone inclusions, SIAM J. Control Optim., vol. 48, pp. 3246–3270, 2010.

[2] H. Attouch and M. Théra, A general duality principle for the sum of two operators, J. Convex Anal., vol. 3, pp. 1–24, 1996.

[3] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, New York, 2011.

[4] S. Becker and P. L. Combettes, An algorithm for splitting parallel sums of linearly composed monotone operators, with applications to signal recovery, J. Convex Nonlinear Anal., vol. 15, pp. 137–159, 2014.

[5] R. I. Boț, E. R. Csetnek, and E. Nagy, Solving systems of monotone inclusions via primal-dual splitting techniques, Taiwan. J. Math., vol. 17, pp. 1983–2009, 2013.

[6] R. I. Boț and C. Hendrich, A Douglas-Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators, SIAM J. Optim., vol. 23, pp. 2541–2565, 2013.

[7] R. I. Boț and C. Hendrich, An algorithm for solving monotone inclusions involving parallel sums of linearly composed maximally monotone operators, 2013. http://arxiv.org/abs/1306.3191

[8] L. M. Briceño-Arias and P. L. Combettes, Convex variational formulation with smooth coupling for multicomponent signal decomposition and recovery, Numer. Math. Theory Methods Appl., vol. 2, pp. 485–508, 2009.
[9] L. M. Briceño-Arias, P. L. Combettes, J.-C. Pesquet, and N. Pustelnik, Proximal algorithms for multicomponent image recovery problems, J. Math. Imaging Vision, vol. 41, pp. 3–22, 2011.

[10] L. M. Briceño-Arias and P. L. Combettes, Monotone operator methods for Nash equilibria in non-potential games, in: Computational and Analytical Mathematics (D. Bailey, H. H. Bauschke, P. Borwein, F. Garvan, M. Théra, J. Vanderwerff, and H. Wolkowicz, eds.), Springer, New York, 2013.

[11] L. M. Briceño-Arias and P. L. Combettes, A monotone+skew splitting model for composite monotone inclusions in duality, SIAM J. Optim., vol. 21, pp. 1230–1250, 2011.

[12] A. Chambolle and P.-L. Lions, Image recovery via total variation minimization and related problems, Numer. Math., vol. 76, pp. 167–188, 1997.

[13] P. L. Combettes, The convex feasibility problem in image recovery, in: Advances in Imaging and Electron Physics (P. Hawkes, ed.), vol. 95, pp. 155–270, Academic Press, New York, 1996.

[14] P. L. Combettes, Solving monotone inclusions via compositions of nonexpansive averaged operators, Optimization, vol. 53, pp. 475–504, 2004.

[15] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., vol. 4, pp. 1168–1200, 2005.

[16] P. L. Combettes, Systems of structured monotone inclusions: duality, algorithms, and applications, SIAM J. Optim., vol. 23, pp. 2420–2447, 2013.

[17] P. L. Combettes, Dinh Dũng, and B. C. Vũ, Proximity for sums of composite functions, J. Math. Anal. Appl., vol. 380, pp. 680–688, 2011.

[18] P. L. Combettes and J.-C. Pesquet, Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators, Set-Valued Var. Anal., vol. 20, pp. 307–330, 2012.

[19] A. Jezierska, E. Chouzenoux, J.-C. Pesquet, and H. Talbot, A primal-dual proximal splitting approach for restoring data corrupted with Poisson-Gaussian noise, in: Proc. Int. Conf. Acoust., Speech Signal Process., Kyoto, Japan, pp. 1085–1088, 2012.

[20] J. Eckstein and B. F. Svaiter, A family of projective splitting methods for the sum of two maximal monotone operators, Math. Program., vol. 111, pp. 173–199, 2008.

[21] K. Papafitsoros, C.-B. Schönlieb, and B. Sengul, Combined first and second order total variation inpainting using split Bregman, Image Processing On Line, 2013.

[22] B. F. Svaiter, On weak convergence of the Douglas-Rachford method, SIAM J. Control Optim., vol. 49, pp. 280–287, 2011.

[23] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., vol. 38, pp. 431–446, 2000.

[24] B. C. Vũ, A splitting algorithm for dual monotone inclusions involving cocoercive operators, Adv. Comput. Math., vol. 38, pp. 667–681, 2013.

[25] B. C. Vũ, A splitting algorithm for coupled system of primal-dual monotone inclusions, J. Optim. Theory Appl., to appear, 2013.