International Journal of Computer Mathematics, 2013
Vol. 90, No. 11, 2452–2461, http://dx.doi.org/10.1080/00207160.2013.782399

Parallel iteratively regularized Gauss–Newton method for systems of nonlinear ill-posed equations

Pham Ky Anh and Vu Tien Dzung*

Department of Mathematics, Vietnam National University, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

(Accepted author version posted online: 19 March 2013; published online: 15 April 2013)
(Received 21 May 2012; revised version received 16 January 2013; accepted 26 February 2013)

We propose a parallel version of the iteratively regularized Gauss–Newton method for solving a system of ill-posed equations. Under certain widely used assumptions, the convergence rate of the parallel method is established. Numerical experiments show that the parallel iteratively regularized Gauss–Newton method is computationally convenient for dealing with underdetermined systems of nonlinear equations on parallel computers, especially when the number of unknowns is much larger than the number of equations.

Keywords: ill-posed problem; IRGNM; parallel computation; componentwise source condition; underdetermined system

2010 AMS Subject Classifications: 47J06; 47J25; 65J15; 65Y05

1. Introduction

Many parameter identification problems lead to a system of operator equations
\[
F_i(x) = y_i, \quad 1 \le i \le N, \tag{1}
\]
where the $F_i$, $1 \le i \le N$, are possibly nonlinear operators mapping a Hilbert space $X$ of the unknown parameter $x$ into Hilbert spaces $Y_i$ of observations $y_i$. In the case of noisy data $y_i^\delta$ with $\|y_i^\delta - y_i\|_{Y_i} \le \delta$, we have the perturbed system
\[
F_i(x) = y_i^\delta, \quad 1 \le i \le N. \tag{2}
\]
Clearly, systems (1) and (2) can be rewritten as operator equations in the product space,
\[
F(x) = y \tag{3}
\]
and
\[
F(x) = y^\delta, \tag{4}
\]
where $F : X \to Y = Y_1 \times Y_2 \times \cdots \times Y_N$, $F(x) = (F_1(x), \ldots, F_N(x))$, $y = (y_1, \ldots, y_N)$ and $y^\delta = (y_1^\delta, \ldots, y_N^\delta)$. For $u = (u_1, \ldots, u_N) \in Y$ and $v = (v_1, \ldots, v_N) \in Y$, the inner product and the norm in $Y$ are defined as $\langle u, v \rangle = \sum_{i=1}^N \langle u_i, v_i \rangle_{Y_i}$ and $\|u\|_Y = (\sum_{i=1}^N \|u_i\|_{Y_i}^2)^{1/2}$, respectively.

*Corresponding author. Email: anhpk@vnu.edu.vn

© 2013 Taylor & Francis

One of the most efficient regularization methods for nonlinear ill-posed problems is the iteratively regularized Gauss–Newton method (IRGNM), proposed by Bakushinskii [4] in 1992. Convergence results for the IRGNM were obtained by Blaschke et al. [5], Hohage [10], Deuflhard et al. [7], Jin [11] and others; see [9,12,14]. Recently, an IRGN–Kaczmarz method has been introduced by Burger and Kaltenbacher [6]. The main idea of the latter method is to perform a cyclic IRGN iteration over the equations. However, when the number of equations $N$ is large, the Kaczmarz-like methods are costly on a single processor. In this note, we propose a parallel version of the IRGNM for the system of ill-posed operator equations (1). Other parallel methods for solving systems of ill-posed equations can be found in [1–3].

Suppose that the exact system (1) has a solution $x^\dagger$, which may not depend continuously on the right-hand side $y$. Suppose that the operators $F_i$, $1 \le i \le N$, are continuously differentiable in some set containing $x^\dagger$ and $x^0$, an initial approximation of $x^\dagger$. Let $x_n^\delta$ be the $n$th approximation of $x^\dagger$. According to the IRGNM, for a fixed number $n$, we linearize the Tikhonov functional
\[
J_n^\delta(x) := \|F(x) - y^\delta\|_Y^2 + \alpha_n \|x - x^0\|^2 = \sum_{i=1}^N \|F_i(x) - y_i^\delta\|_{Y_i}^2 + \alpha_n \|x - x^0\|^2
\]
about $x_n^\delta$ and consider the unconstrained optimization problem
\[
\Phi_n^\delta(\Delta x) := \sum_{i=1}^N \|F_i(x_n^\delta) - y_i^\delta + F_i'(x_n^\delta)\Delta x\|_{Y_i}^2 + \alpha_n \|x_n^\delta - x^0 + \Delta x\|^2 \to \min, \quad \Delta x \in X,
\]
where $F_i'(x_n^\delta)$ stands for the Fréchet derivative of $F_i(x)$ computed at $x_n^\delta$. Finding $\Delta x$ from the equation $\partial \Phi_n^\delta / \partial(\Delta x) = 0$, we determine the next approximation as $x_{n+1}^\delta = x_n^\delta + \Delta x$, or
\[
x_{n+1}^\delta = x_n^\delta - \Big(\sum_{i=1}^N F_i'(x_n^\delta)^* F_i'(x_n^\delta) + \alpha_n I\Big)^{-1} \Big(\sum_{i=1}^N F_i'(x_n^\delta)^* (F_i(x_n^\delta) - y_i^\delta) + \alpha_n (x_n^\delta - x^0)\Big). \tag{5}
\]
However, in some cases it is much more computationally convenient to apply the IRGNM to each subproblem (2) synchronously, i.e. to find in parallel
\[
x_{n+1,i}^\delta = x_n^\delta - (F_i'(x_n^\delta)^* F_i'(x_n^\delta) + \beta_n I)^{-1} (F_i'(x_n^\delta)^* (F_i(x_n^\delta) - y_i^\delta) + \beta_n (x_n^\delta - x_i^0)), \quad i = 1, \ldots, N, \tag{6}
\]
where $\beta_n := \alpha_n / N$, and then define the next approximation as an average of the intermediate approximations $x_{n+1,i}^\delta$, i.e.
\[
x_{n+1}^\delta = \frac{1}{N} \sum_{i=1}^N x_{n+1,i}^\delta. \tag{7}
\]
Although each step (6) consists of exactly one iterate of the IRGNM applied to subproblem (2), the convergence of the ordinary IRGNM (5) does not necessarily imply the convergence of the parallel iteratively regularized Gauss–Newton method (PIRGNM) (6), (7). In the next section, we study the stopping rule and the convergence of the PIRGNM (6), (7).

2. Convergence analysis

For the convenience of the reader, we collect some facts necessary for deriving an error estimate of the approximate solutions. We begin with a particular case of Lemma 2.4 of [5], whose proof is straightforward.

Lemma 2.1 Let $\{\gamma_n\}$ be a sequence of nonnegative numbers satisfying the relations
\[
\gamma_{n+1} \le a + b\gamma_n + c\gamma_n^2, \quad n \ge 0,
\]
for some $a, b, c > 0$. Let
\[
M_\pm := \frac{1 - b \pm \sqrt{(1-b)^2 - 4ac}}{2c}.
\]
If $b + 2\sqrt{ac} < 1$ and $\gamma_0 \le M_+$, then $\gamma_n \le l := \max\{\gamma_0, M_-\}$ for all $n \ge 0$.

Proof Clearly, $\gamma_0 \le l$. Suppose $\gamma_k \le l$; then $\gamma_{k+1} - l \le a + b\gamma_k + c\gamma_k^2 - l \le a + (b-1)l + cl^2 \le 0$ because $l \in [M_-, M_+]$, hence $\gamma_{k+1} \le l$. It follows by induction that $\gamma_n \le l$ for all $n \ge 0$.

Lemma 2.2 Let $A$ be a bounded linear operator on a Hilbert space $H$. Then, for every $\beta > 0$, the following estimates hold:
(i) $\beta^{1-\mu} \|(A^*A + \beta I)^{-1} (A^*A)^\mu v\| \le \mu^\mu (1-\mu)^{1-\mu} \|v\| \le \|v\|$, for any fixed $\mu \in (0, 1]$ and $v \in H$;
(ii) $\|(A^*A + \beta I)^{-1}\| \le 1/\beta$;
(iii) $\|(A^*A + \beta I)^{-1} A^*\| \le \frac{1}{2}\beta^{-1/2}$;
(iv) $\|A (A^*A + \beta I)^{-1} (A^*A)^{1/2}\| \le 1$.
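The spectral estimates of Lemma 2.2 can be checked numerically. The following sketch (our addition, not part of the paper; it assumes NumPy and a finite-dimensional random matrix in place of the abstract operator $A$) verifies (i)–(iv), computing each operator norm as a spectral norm.

```python
import numpy as np

# Numerical sanity check of estimates (i)-(iv) in Lemma 2.2 for a
# random finite-dimensional operator A (an illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
beta, mu = 0.3, 0.7

AtA = A.T @ A
R = np.linalg.inv(AtA + beta * np.eye(20))       # (A*A + beta I)^{-1}

# fractional powers of A*A via the eigendecomposition of the symmetric AtA
w, V = np.linalg.eigh(AtA)
w = np.clip(w, 0.0, None)
AtA_half = V @ np.diag(np.sqrt(w)) @ V.T          # (A*A)^{1/2}
AtA_mu = V @ np.diag(w ** mu) @ V.T               # (A*A)^{mu}

spec = lambda M: np.linalg.norm(M, 2)             # spectral (operator) norm

lhs_i   = beta ** (1 - mu) * spec(R @ AtA_mu)     # (i), operator-norm form
lhs_ii  = spec(R)                                 # (ii)
lhs_iii = spec(R @ A.T)                           # (iii)
lhs_iv  = spec(A @ R @ AtA_half)                  # (iv)

assert lhs_i   <= mu ** mu * (1 - mu) ** (1 - mu) + 1e-9
assert lhs_ii  <= 1.0 / beta + 1e-9
assert lhs_iii <= 0.5 / np.sqrt(beta) + 1e-9
assert lhs_iv  <= 1.0 + 1e-9
```

Each bound reduces, by the spectral calculus, to maximizing a scalar function of an eigenvalue $\lambda \ge 0$ of $A^*A$; for instance, (iii) follows from $\sqrt{\lambda}/(\lambda + \beta) \le 1/(2\sqrt{\beta})$.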
The proof of the estimates in Lemma 2.2 can be found in [12, pp. 72, 81, 82].

In what follows, the parameters $\alpha_n$ are chosen such that $\alpha_n > 0$, $\alpha_n \to 0$ and
\[
1 \le \frac{\alpha_n}{\alpha_{n+1}} \le \rho
\]
for some $\rho > 1$. Let $B_r(x^0)$ denote the closed ball centred at $x^0$ with radius $r > 0$ in $X$. Before stating a convergence theorem, we make some widely used assumptions.

Assumption 2.1 System (1) has an exact solution $x^\dagger \in B_r(x^0)$ and the $F_i$, $i = 1, 2, \ldots, N$, are continuously differentiable in $B_{2r}(x^0)$.

Assumption 2.2 The following componentwise source condition (cf. [6, p. 8]) holds:
\[
x^\dagger - x_i^0 = (F_i'(x^\dagger)^* F_i'(x^\dagger))^\mu v_i, \tag{8}
\]
where $0 < \mu \le 1$, $x_i^0 \in B_{2r}(x^0)$ and $v_i \in X$, $1 \le i \le N$. Moreover, suppose that
(i) if $0 < \mu \le \frac{1}{2}$, then the $F_i$, $i = 1, 2, \ldots, N$, satisfy the following condition (see [11,14]): for all $x, z \in B_{2r}(x^0)$ and all $v \in X$ there exists $h_i(x, z, v) \in X$ such that
\[
(F_i'(x) - F_i'(z))v = F_i'(z) h_i(x, z, v), \quad \|h_i(x, z, v)\| \le K_0 \|x - z\| \, \|v\|; \tag{9}
\]
(ii) if $\frac{1}{2} < \mu \le 1$, then the derivatives $F_i'$ are Lipschitz continuous, i.e.
\[
\|F_i'(x) - F_i'(\tilde{x})\| \le L \|x - \tilde{x}\|, \quad 1 \le i \le N, \tag{10}
\]
for all $x, \tilde{x} \in B_{2r}(x^0)$.

The assumption (8) is rather restrictive, and it requires the choice of appropriate initial guesses $x_i^0$, $i = 1, \ldots, N$. Further, since the vectors $v_i$ in Equation (8) do not occur in the iteration process (6) and (7), they need not be known explicitly.

Define the stopping index $N_\delta$ of the PIRGNM (6), (7) as the first number $n$ satisfying the condition $\eta \beta_n^{\mu + 1/2} \le \delta$, i.e.
\[
\eta \beta_{N_\delta}^{\mu + 1/2} \le \delta < \eta \beta_n^{\mu + 1/2}, \quad 0 \le n < N_\delta, \tag{11}
\]
where $\beta_n = \alpha_n / N$ and $\eta > 0$ is a fixed parameter. This stopping rule is an a priori one and has only a theoretical meaning, since it depends on $\mu$, which is often not available in practice. However, at present an a posteriori stopping rule for the PIRGNM has not been established.

The proposed parallel algorithm consists of the following steps:
(1) Choose an initial approximation $x_0^\delta$ and set $n := 0$.
(2) Compute in parallel the vectors $x_{n+1,i}^\delta$, $1 \le i \le N$, by Equation (6), where the given initial guesses $x_i^0$, $i = 1, \ldots, N$, are associated with the componentwise source condition (8).
(3) Define $x_{n+1}^\delta$ by Equation (7).
(4) If $n > N_\delta$, where $N_\delta$ is the stopping index defined by Equation (11), then stop; else put $n := n + 1$ and return to Step 2.

Theorem 2.3 Let Assumptions 2.1 and 2.2 hold and let the stopping index $n_* = N_\delta$ be chosen according to Equation (11). If $\sum_{i=1}^N \|v_i\|$ and $\eta$ are sufficiently small and $x_0^\delta = x^0$ is close enough to $x^\dagger$, then there holds the estimate
\[
\|x_{n_*}^\delta - x^\dagger\| = O(\delta^{2\mu/(2\mu+1)}). \tag{12}
\]

Proof We follow the techniques used in [5,6,11,12] to estimate the distance between $x_n^\delta$ and $x^\dagger$. Let $x_n^\delta \in B_r(x^\dagger)$ and denote $A_i := F_i'(x^\dagger)$, $A_{in} := F_i'(x_n^\delta)$, $e_n := x_n^\delta - x^\dagger$ and $e_{n+1}^i := x_{n+1,i}^\delta - x^\dagger$. From Equation (6), we get
\[
e_{n+1}^i = e_n - (A_{in}^* A_{in} + \beta_n I)^{-1} (A_{in}^* (F_i(x_n^\delta) - y_i^\delta) + \beta_n (x_n^\delta - x_i^0)),
\]
or
\[
e_{n+1}^i = (A_{in}^* A_{in} + \beta_n I)^{-1} [\beta_n (x_i^0 - x^\dagger) + A_{in}^* (y_i^\delta - y_i) - A_{in}^* (F_i(x_n^\delta) - y_i - A_{in} e_n)]. \tag{13}
\]
Depending on the value of $\mu$, we consider two cases.

Case 1. Let $\mu \in (\frac{1}{2}, 1]$. Using the source condition (8) and taking into account the identity
\[
(A_i^* A_i + \beta_n I)^{-1} - (A_{in}^* A_{in} + \beta_n I)^{-1} = -(A_{in}^* A_{in} + \beta_n I)^{-1} [(A_i^* - A_{in}^*) A_i + A_{in}^* (A_i - A_{in})] (A_i^* A_i + \beta_n I)^{-1},
\]
we can rewrite $e_{n+1}^i$ as
\[
\begin{aligned}
e_{n+1}^i ={}& -\beta_n (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i \\
&+ \beta_n (A_{in}^* A_{in} + \beta_n I)^{-1} [A_{in}^* (A_i - A_{in}) + (A_i^* - A_{in}^*) A_i] (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i \\
&- (A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (F_i(x_n^\delta) - y_i - A_{in} e_n) + (A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (y_i^\delta - y_i). \tag{14}
\end{aligned}
\]
According to Lemma 2.2, we have
\[
\omega_n^i(\mu) := \beta_n^{1-\mu} \|(A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\| \le \mu^\mu (1-\mu)^{1-\mu} \|v_i\| \le \|v_i\| \quad \text{for } \mu \in (0, 1],
\]
$\|(A_{in}^* A_{in} + \beta_n I)^{-1}\| \le 1/\beta_n$, $\|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^*\| \le \frac{1}{2}\beta_n^{-1/2}$, $\|A_i (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^{1/2}\| \le 1$ and $\|A_{in} - A_i\| = \|F_i'(x_n^\delta) - F_i'(x^\dagger)\| \le L \|e_n\|$, hence
\[
\|F_i(x_n^\delta) - y_i - A_{in} e_n\|_{Y_i} = \|F_i(x_n^\delta) - F_i(x^\dagger) - F_i'(x_n^\delta) e_n\|_{Y_i} \le \tfrac{1}{2} L \|e_n\|^2,
\]
and therefore
\[
\|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (F_i(x_n^\delta) - y_i - A_{in} e_n)\| \le \tfrac{1}{2} \beta_n^{-1/2} \big(\tfrac{1}{2} L \|e_n\|^2\big). \tag{15}
\]
Further,
\[
\begin{aligned}
T_1 :={}& \|\beta_n (A_{in}^* A_{in} + \beta_n I)^{-1} [A_{in}^* (A_i - A_{in}) + (A_i^* - A_{in}^*) A_i] (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\| \\
\le{}& \beta_n \|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^*\| \, \|A_i - A_{in}\| \, \|(A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\| \\
&+ \beta_n \|(A_{in}^* A_{in} + \beta_n I)^{-1}\| \, \|A_i^* - A_{in}^*\| \, \|A_i (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^{1/2}\| \, \|(A_i^* A_i)^{\mu - 1/2} v_i\|,
\end{aligned}
\]
thus
\[
T_1 \le L \|e_n\| \big(\tfrac{1}{2} \beta_n^{\mu - 1/2} \omega_n^i(\mu) + \|(A_i^* A_i)^{\mu - 1/2} v_i\|\big). \tag{16}
\]
Besides,
\[
\|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (y_i^\delta - y_i)\| \le \tfrac{1}{2} \beta_n^{-1/2} \delta. \tag{17}
\]
Finally,
\[
\|\beta_n (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\| = \beta_n^\mu \omega_n^i(\mu). \tag{18}
\]
Combining relations (13)–(18), we find
\[
\|e_{n+1}^i\| \le \beta_n^\mu \omega_n^i(\mu) + L \|e_n\| \big(\tfrac{1}{2} \beta_n^{\mu-1/2} \omega_n^i(\mu) + \|(A_i^* A_i)^{\mu-1/2} v_i\|\big) + \tfrac{1}{2} \beta_n^{-1/2} \big(\tfrac{1}{2} L \|e_n\|^2 + \delta\big).
\]
This, together with Equation (7), yields the estimate
\[
\|e_{n+1}\| \le \frac{1}{N} \sum_{i=1}^N \|e_{n+1}^i\| \le \frac{1}{N} \sum_{i=1}^N \beta_n^\mu \omega_n^i(\mu) + L \|e_n\| \frac{1}{N} \sum_{i=1}^N \Big(\frac{1}{2} \beta_n^{\mu-1/2} \omega_n^i(\mu) + \|(A_i^* A_i)^{\mu-1/2} v_i\|\Big) + \frac{1}{2} \beta_n^{-1/2} \Big(\frac{L}{2} \|e_n\|^2 + \delta\Big).
\]
Now introduce the sequence $\gamma_n := \|e_n\| / \beta_n^\mu$ and observe that the stopping rule (11) implies $\delta < \eta \beta_n^{\mu+1/2}$ for $0 \le n < N_\delta$. Dividing the last inequality by $\beta_{n+1}^\mu$ and using $\beta_n / \beta_{n+1} = \alpha_n / \alpha_{n+1} \le \rho$, $\beta_n \le \beta_0$, $\mu - \frac{1}{2} \ge 0$ and $\omega_n^i(\mu) \le \|v_i\|$, we arrive at
\[
\gamma_{n+1} \le a + b \gamma_n + c \gamma_n^2, \tag{19}
\]
where
\[
a = \rho^\mu \Big( \frac{1}{N} \sum_{i=1}^N \|v_i\| + \frac{\eta}{2} \Big), \quad
b = \rho^\mu L \Big( \frac{\beta_0^{\mu-1/2}}{2N} \sum_{i=1}^N \|v_i\| + \frac{1}{N} \sum_{i=1}^N \|(A_i^* A_i)^{\mu-1/2} v_i\| \Big), \quad
c = \frac{L}{4} \beta_0^{\mu-1/2} \rho^\mu.
\]
If $\sum_{i=1}^N \|v_i\|$ and $\eta$ are small enough, then $a$ and $b$ will be small, hence $b + 2\sqrt{ac} \le 1$ and
\[
2 a \beta_0^\mu \le r \big(1 - b + \sqrt{(1-b)^2 - 4ac}\big). \tag{20}
\]
Now if $x^0$ is sufficiently close to $x^\dagger$, then $\gamma_0 = \beta_0^{-\mu} \|x_0^\delta - x^\dagger\| = \beta_0^{-\mu} \|x^0 - x^\dagger\| \le M_+ := (1 - b + \sqrt{(1-b)^2 - 4ac})/(2c)$. Lemma 2.1 applied to the inequality (19) ensures that
\[
\gamma_n := \|e_n\| / \beta_n^\mu \le l := \max\{\gamma_0, M_-\} \quad \text{for } 0 \le n \le N_\delta,
\]
where $M_- = (1 - b - \sqrt{(1-b)^2 - 4ac})/(2c) = 2a/(1 - b + \sqrt{(1-b)^2 - 4ac})$. In particular, $\|x_{n+1}^\delta - x^\dagger\| = \|e_{n+1}\| = \gamma_{n+1} \beta_{n+1}^\mu \le l \beta_0^\mu$. Observe that $\gamma_0 \beta_0^\mu = \|x^0 - x^\dagger\| \le r$. From Equation (20), we find $M_- \beta_0^\mu = 2a\beta_0^\mu/(1 - b + \sqrt{(1-b)^2 - 4ac}) \le r$; therefore $l \beta_0^\mu \le r$, hence $x_{n+1}^\delta \in B_r(x^\dagger)$. Thus, for the case $\frac{1}{2} < \mu \le 1$, the estimate $\gamma_n \le l$ yields
\[
\|e_n\| \le l \beta_n^\mu = l \alpha_n^\mu / N^\mu = O(\alpha_n^\mu) \quad \text{for } 0 \le n \le n_* := N_\delta.
\]

Case 2. Let $\mu \in (0, \frac{1}{2}]$ and condition (9) hold. First observe that
\[
F_i(x_n^\delta) - y_i - F_i'(x_n^\delta)(x_n^\delta - x^\dagger) = \int_0^1 \big(F_i'(x^\dagger + t(x_n^\delta - x^\dagger)) - F_i'(x_n^\delta)\big)(x_n^\delta - x^\dagger) \, dt = \int_0^1 F_i'(x_n^\delta) h_t^i \, dt = F_i'(x_n^\delta) \int_0^1 h_t^i \, dt,
\]
where $h_t^i := h_i(x^\dagger + t(x_n^\delta - x^\dagger), x_n^\delta, x_n^\delta - x^\dagger)$ and $\|\int_0^1 h_t^i \, dt\| \le \frac{K_0}{2} \|x_n^\delta - x^\dagger\|^2$. From Equation (13), we find
\[
\|e_{n+1}^i\| \le \|\beta_n (A_{in}^* A_{in} + \beta_n I)^{-1} (x_i^0 - x^\dagger)\| + \|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* (y_i^\delta - y_i)\| + \|(A_{in}^* A_{in} + \beta_n I)^{-1} A_{in}^* A_{in}\| \, \frac{K_0}{2} \|x_n^\delta - x^\dagger\|^2,
\]
thus
\[
\|e_{n+1}^i\| \le \|\beta_n (A_{in}^* A_{in} + \beta_n I)^{-1} (x_i^0 - x^\dagger)\| + \frac{1}{2} \beta_n^{-1/2} \delta + \frac{K_0}{2} \|x_n^\delta - x^\dagger\|^2.
\]
This, together with the source condition (8) and the estimate
\[
\|\beta_n [(A_{in}^* A_{in} + \beta_n I)^{-1} - (A_i^* A_i + \beta_n I)^{-1}] (x_i^0 - x^\dagger)\| \le 2 K_0 \|x_n^\delta - x^\dagger\| \, \|(A_i^* A_i)^\mu v_i\|
\]
(see [11, Lemma 4.2, p. 1613]), gives
\[
\|e_{n+1}^i\| \le \|\beta_n (A_i^* A_i + \beta_n I)^{-1} (A_i^* A_i)^\mu v_i\| + 2 K_0 \|x_n^\delta - x^\dagger\| \, \|(A_i^* A_i)^\mu v_i\| + \frac{1}{2} \beta_n^{-1/2} \delta + \frac{K_0}{2} \|e_n\|^2
\le \beta_n^\mu \omega_n^i(\mu) + 2 K_0 \|e_n\| \, \|(A_i^* A_i)^\mu v_i\| + \frac{1}{2} \beta_n^{-1/2} \delta + \frac{K_0}{2} \|e_n\|^2.
\]
Setting $\gamma_n := \|e_n\| / \beta_n^\mu$ and using the stopping rule (11), which ensures $\delta < \eta \beta_n^{\mu+1/2}$ for $0 \le n < N_\delta$, we obtain from the last relations
\[
\gamma_{n+1} \le \frac{1}{N \beta_{n+1}^\mu} \sum_{i=1}^N \|e_{n+1}^i\| \le \rho^\mu \frac{1}{N} \sum_{i=1}^N \omega_n^i(\mu) + \frac{2 K_0 \rho^\mu}{N} \gamma_n \sum_{i=1}^N \|(A_i^* A_i)^\mu v_i\| + \frac{\eta}{2} \rho^\mu + \frac{K_0}{2} \rho^\mu \beta_n^\mu \gamma_n^2.
\]
Thus $\gamma_{n+1} \le a + b \gamma_n + c \gamma_n^2$, where now
\[
a = \rho^\mu \Big( \frac{1}{N} \sum_{i=1}^N \|v_i\| + \frac{\eta}{2} \Big), \quad b = \frac{2 K_0 \rho^\mu}{N} \sum_{i=1}^N \|(A_i^* A_i)^\mu v_i\|, \quad c = \frac{K_0}{2} \rho^\mu \beta_0^\mu.
\]
Again, if $\sum_{i=1}^N \|v_i\|$ and $\eta$ are sufficiently small and $x_0^\delta = x^0$ is close enough to $x^\dagger$, then, arguing similarly as in Case 1, we can show that $x_{n+1}^\delta \in B_r(x^\dagger)$ and $\|x_n^\delta - x^\dagger\| = O(\alpha_n^\mu)$ for $0 \le n \le N_\delta$.
Thus, in both cases for ≤ n ≤ Nδ , we have cγn2 , xnδ − x † = O(αnμ ) μ+1/2 Let n = n∗ := Nδ , then ηβn∗ = η(αn∗ /N)μ+1/2 ≤ δ, hence, αnμ∗ ≤ N μ (δ/η)μ/(μ+1/2) , therefore δ † 2μ/(2μ+1) ) Theorem 2.3 is proved xn∗ − x = O(δ Numerical experiments Underdetermined systems of equations arise in a variety of problems, such as, nonlinear complementarity problems, problems of finding interior points of polytopes, image processing, etc We consider a simultaneous underdetermined system of nonlinear equations Fi (x1 , , xm ) = yi , i = 1, , N, (21) where Fi : Rm → R and m N First we rewrite Equation (6) as δ xn+1,i = xi0 + (Fi (xnδ )∗ Fi (xnδ ) + βn I)−1 Fi (xnδ )∗ (yiδ − Fi (xnδ ) − Fi (xnδ )(xi0 − xnδ )) (22) Here, Fi (x) = (∂Fi /∂x1 , , ∂Fi /∂xm ); i = 1, , N are row vectors Further, noting that (Fi (xnδ )∗ Fi (xnδ ) + βn IX )−1 Fi (xnδ )∗ = Fi (xnδ )∗ (Fi (xnδ )Fi (xnδ )∗ + βn IYi )−1 , where IX and IYi are the identity operators on spaces X and Yi , respectively, we have δ xn+1,i = xi0 + Fi (xnδ )∗ (Fi (xnδ )Fi (xnδ )∗ + βn IYi )−1 (yiδ − Fi (xnδ ) − Fi (xnδ )(xi0 − xnδ )) (23) Taking into account that (Fi (xnδ )Fi (xnδ )∗ + βn IYi )−1 = Fi (xnδ + βn , we can rewrite formula (6) as F (x δ )T (yiδ − Fi (xnδ ) − Fi (xnδ )(xi0 − xnδ )) δ xn+1,i = xi0 + i n ; i = 1, , N, (24) Fi (xnδ ) + βn where the symbol T denotes transposition of a matrix or a vector and the Euclidean norm is used δ is defined by Equation (7) as before The next approximation xn+1 International Journal of Computer Mathematics Denoting F = (F1 , , FN )T ; yδ = (y1δ , , yNδ )T and observing that F (x)T F (x) = Fi (x)T Fi (x), by a similar argument as in Equation (23), we can reduce Equation (5) to Downloaded by [Dicle University] at 06:31 11 November 2014 δ = x + F (xnδ )T (F (xnδ )F (xnδ )T + αn I)−1 (yδ − F(xnδ ) − F (xnδ )(x − xnδ )) xn+1 2459 N i=1 (25) At each iteration step the IRGNM (5) requires to solve an m × m system of linear equations, which is time consuming when m is very large On the other 
hand, using formula (25), we need to solve a N × N system of linear equations, where N m Meanwhile, for the PIRGNM, all the δ components xn+1,i are computed by the explicit formula (24) in parallel, hence the algorithm (24), Equation (7) can give a satisfactory result within reasonable computing time For the sake of simplicity, we choose for our experiments m = 105 , N = 64, x † = (1, 0, , 0)T , x0δ = (0.5, 0, , 0)T , x0δ − x † = 0.5 and αn = 0.2 ∗ 64 ∗ (0.5)n In all the experiments, the matrix [F (x † )]T [F (x † )] will be singular, hence the Newton method and its parallel modification (see [8,15]) may not converge, therefore, the IRGNM should be used However, due to formula (25) at each step, the IRGNM requires to solve a 64 × 64 system of linear equations On the other hand, the application of the PIRGNM to Equation (21) leads to simple explicit formulae (24) All the numerical experiments will be performed on a LINUX cluster 1350 with eight computing nodes Each node contains two Intel Xeon dual core 3.2 GHz, 2GBRam All the programs are written in C We evaluate the accuracy of the IRGNM and PIRGNM using the relative error norm (REN), i.e REN := xnδ − x † / x † In our examples, x † = 1, hence REN = xnδ − x † The notations used in the tables are as follows: Tp : time of the parallel execution on p processors taken in seconds Ts : time of the sequential execution taken in seconds Sp = Ts /Tp : speed up Ep = Sp /p : efficiency of parallel computation by using p processors nmin : the first number n, where the REN of the corresponding method is less than a given tolerance Nδ : the stopping index defined by Equation (11) η : a fixed small positive parameter in stopping rule (11) For the first experiment, we consider the following system of equations Fk (x) := x T Ak x + bkT x = yk , where the matrices Ak are 2k − diagonal with the entries aij(k) = |i − j| ≤ k − 1, otherwise Further, let bk = (8, , 8, 0, , 0)T ; k = 1, , 64, where the component in the vector bk repeats 
exactly k times Finally, the right-hand sides yi = + χi , where the entries of χi are normally distributed random numbers with zero mean, scaled to yield the noise level δ In this case, the source condition (8) holds with μ = and the initial guesses are xk0 = (0.95, −0.05, , −0.05, 0, , 0)T , k = 1, , 64, where the entry −0.05 in xk occurs exactly k − times Moreover, all the derivatives Fi (x) are Lipschitz continuous Table gives the RENs of the PIRGNM and IRGNM as well as their execution times in sequential mode For solving systems of linear equations in IRGNM, we used the Cholesky method It shows that within a given tolerance, the PIRGNM is less time consuming than IRGNM Table finds stopping indices of the PIRGNM and verifies the conclusion of Theorem 2.3 that xnδ∗ − x † = O(δ 2/3 ), where n∗ = Nδ 2460 P.K Anh and V.T Dzung Table RENs and execution times in sequential mode with η = IRGNM δ REN nmin Ts nmin Ts e−6 1e−4 e−5 1e−6 1e−4 e−5 1e−6 11 15 18 11 15 18 27 36.92 44.25 27 36 44.25 12 12 6.21 12.43 18.65 6.21 12.43 18.65 e−7 Table Downloaded by [Dicle University] at 06:31 11 November 2014 PIRGNM m 105 Stopping indices of the PIRGNM with η = 0.02 δ Nδ REN REN/δ 2/3 e−5 e−6 e−7 e−8 e−9 10 12 1.68e−4 3.85e−5 6.37e−6 2.13e−6 7.4e−7 0.36 0.38 0.29 0.45 0.75 Finally, Table gives the efficiency and the speed up of the PIRGNM in parallel mode For our second experiment, we take F0 (x) = x12 + x22 + · · · + xm2 + 8x1 ; Fi (x) = m−i j=1 xj xj+i + i 10 j=1 xj + 9xi+1 ; i = 1, , 63 The right-hand sides y0 = + χ0 ; yi = 10 + χi , i = 1, , 63 and the entries of χi ; i ≥ are again normally distributed random numbers with zero mean, scaled to yield the noise level δ Clearly, in this case the source condition (8) is satisfied with an exponent μ = and initial guesses x00 = (0.5, 0, , 0)T ; xi0 = (0.5, −0.5, , −0.5, 0, , 0)T ; i = 1, , 63, where number −0.5 in xi0 repeats exactly i times Observe that in this example all the derivatives Fi (x) are Lipschitz continuous and the 
initial guesses $x_i^0$ need not be close to the exact solution $x^\dagger$. Tables 4 and 5 for the second experiment are similar to Tables 1 and 2, respectively.

Table 3. Efficiency and speed-up of the PIRGNM ($m = 100{,}000$).

  Processors    T_p      S_p     E_p
  1             18.65    1.00    1.00
  2             9.5      1.96    0.98
  4             5.6      3.3     0.82

Table 4. RENs and execution times in sequential mode with η = 0.4.

                       IRGNM              PIRGNM
  δ       REN       n_min    T_s       n_min    T_s
  1e-6    1e-4      11       24        9        0.71
          1e-5      15       33                 1.4
          1e-6      18       40                 2.08
  1e-7    1e-4      11       24        9        0.71
          1e-5      15       33                 1.4
          1e-6      18       40                 2.08

Table 5. Stopping indices of the PIRGNM with η = 0.02 ($m = 10^5$).

  δ       N_δ    REN       REN/δ^{2/3}
  1e-5    5      4.5e-5    0.096
  1e-6    8      1.1e-5    0.112
  1e-7    10     1.4e-6    0.065
  1e-8    12     3.5e-7    0.07
  1e-9    14     8.6e-8    0.08

4. Conclusion

In this article, a parallel version of the IRGNM for solving a system of nonlinear ill-posed operator equations has been proposed and its convergence established. Based on parallel computation, we can reduce the overall computational effort without imposing any extra conditions, beyond the widely used ones, on the nonlinearity of the operators (see [13]). Numerical experiments for underdetermined systems of nonlinear equations show the advantage of the proposed parallel method.

Acknowledgements

The authors are grateful to the anonymous referees and to Professor Qin Sheng, Editor-in-Chief of IJCM, for their comments, which substantially improved the quality of this paper. The authors express their sincere thanks to the Advanced Math Program of the Ministry of Education and Training, Vietnam, for sponsoring their working visit to the University of Washington, and to the Department of Applied Mathematics, University of Washington, for its hospitality. This work was partially supported by the Vietnam National Foundation for Science and Technology Development (NAFOSTED).

References

[1] P.K. Anh and C.V. Chung, Parallel iterative regularization methods for solving systems of ill-posed equations, Appl. Math. Comput. 212 (2009), pp. 542–550.
[2] P.K. Anh and C.V. Chung, Parallel regularized Newton method for nonlinear ill-posed equations, Numer. Algorithms 58 (2011), pp. 379–398.
[3] P.K. Anh and V.T. Dung, Parallel iterative regularization algorithms for large overdetermined linear systems, Int. J. Comput. Methods (2010), pp. 525–537.
[4] A.B. Bakushinskii, The problem of the convergence of the iteratively regularized Gauss-Newton method, Comput. Math. Math. Phys. 32 (1992), pp. 1353–1359.
[5] B. Blaschke, A. Neubauer, and O. Scherzer, On convergence rates for the iteratively regularized Gauss-Newton method, IMA J. Numer. Anal. 17 (1997), pp. 421–436.
[6] M. Burger and B. Kaltenbacher, Regularizing Newton-Kaczmarz methods for nonlinear ill-posed problems, SIAM J. Numer. Anal. 44 (2006), pp. 153–182.
[7] P. Deuflhard, H.W. Engl, and O. Scherzer, A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions, Inverse Problems 14 (1998), pp. 1081–1106.
[8] M.A. Diniz-Ehrhardt, J.M. Martinez, and S.A. Santos, Parallel projection methods and the resolution of ill-posed problems, Comput. Math. Appl. 27 (1994), pp. 11–24.
[9] H.W. Engl, K. Kunisch, and A. Neubauer, Convergence rates for Tikhonov regularization of nonlinear ill-posed problems, Inverse Problems 5 (1989), pp. 523–540.
[10] T. Hohage, Logarithmic convergence rates of the iteratively regularized Gauss-Newton method for an inverse potential and an inverse scattering problem, Inverse Problems 13 (1997), pp. 1279–1299.
[11] Q.N. Jin, On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems, Math. Comput. 69 (2000), pp. 1603–1623.
[12] B. Kaltenbacher, A. Neubauer, and O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-posed Problems, Walter de Gruyter, Berlin, New York, 2008.
[13] T. Lu, P. Neittaanmaki, and X.C. Tai, A parallel splitting up method for partial differential equations and its application to Navier-Stokes equations, RAIRO Math. Model. Numer. Anal. 26 (1992), pp. 673–708.
[14] O. Scherzer, H.W. Engl, and K. Kunisch, Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems, SIAM J. Numer. Anal. 30 (1993), pp. 1796–1838.
[15] G. Zilli and L. Bergamaschi, Parallel Newton methods for sparse systems of nonlinear equations, Rend. Circ. Mat. Palermo (2), Suppl. 58 (1999), pp. 247–257.
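As an end-to-end illustration of the explicit PIRGNM scheme (24), (7) applied to a quadratic system of the form used in the first experiment, the following sketch (our addition, not the authors' C implementation; NumPy assumed) runs the iteration at a much smaller scale than in the paper, with the parallel loop over the equations written serially and noise-free data for simplicity.

```python
import numpy as np

# Small-scale sketch of the PIRGNM update (24) with averaging (7),
# modelled on the first experiment F_k(x) = x^T A_k x + b_k^T x;
# m, N and the iteration count are scaled down for illustration.
m, N = 200, 8
x_true = np.zeros(m); x_true[0] = 1.0

def F(k, x):
    # x^T A_k x, with A_k the (2k-1)-diagonal 0/1 band matrix, plus b_k^T x
    s = x @ x                                  # main diagonal
    for d in range(1, k):                      # off-diagonals |i-j| = d
        s += 2.0 * (x[:-d] @ x[d:])
    return s + 8.0 * x[:k].sum()               # b_k = 8 in first k entries

def grad_F(k, x):                              # row vector F_k'(x) = 2 A_k x + b_k
    g = 2.0 * x
    for d in range(1, k):
        g[:-d] += 2.0 * x[d:]
        g[d:]  += 2.0 * x[:-d]
    g[:k] += 8.0
    return g

# noise-free data (delta -> 0), so y_k = F_k(x_true)
y = np.array([F(k, x_true) for k in range(1, N + 1)])

# componentwise initial guesses x_k^0, as in the first experiment
x0 = [np.zeros(m) for _ in range(N)]
for k in range(1, N + 1):
    x0[k-1][0] = 0.95
    x0[k-1][1:k] = -0.05

x = np.zeros(m); x[0] = 0.5                    # x_0^delta
for n in range(15):
    beta = 0.2 * 0.5 ** n                      # beta_n = alpha_n / N, alpha_n = 0.2 N (0.5)^n
    xs = []
    for k in range(1, N + 1):                  # would run in parallel, one equation per processor
        g = grad_F(k, x)
        resid = y[k-1] - F(k, x) - g @ (x0[k-1] - x)
        xs.append(x0[k-1] + g * (resid / (g @ g + beta)))   # update (24)
    x = sum(xs) / N                            # averaging step (7)

ren = np.linalg.norm(x - x_true)               # REN, since ||x_true|| = 1
print(ren)
```

Since the componentwise guesses satisfy $x^\dagger - x_k^0 \in \operatorname{span}(F_k'(x^\dagger)^T)$, the source condition (8) holds in this toy setup, and the relative error decreases with $\beta_n$ as the theory predicts.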