
A scaled three-term conjugate gradient method for unconstrained optimization


Arzuka et al. Journal of Inequalities and Applications (2016) 2016:325, DOI 10.1186/s13660-016-1239-1. RESEARCH, Open Access.

A scaled three-term conjugate gradient method for unconstrained optimization

Ibrahim Arzuka*, Mohd R. Abu Bakar and Wah June Leong
*Correspondence: arzukaibrahim@yahoo.com, Institute for Mathematical Research, Universiti Putra Malaysia, Serdang, Selangor 43400, Malaysia

Abstract
Conjugate gradient methods play an important role in many fields of application due to their simplicity, low memory requirements, and global convergence properties. In this paper, we propose an efficient three-term conjugate gradient method by utilizing the DFP update of the inverse Hessian approximation; the resulting search direction satisfies both the sufficient descent and the conjugacy conditions. The basic idea is that the DFP update is restarted with a multiple of the identity matrix at every iteration. An acceleration scheme is incorporated into the proposed method to enhance the reduction in function value. Numerical results from an implementation of the proposed method on some standard unconstrained optimization problems show that the method is promising and exhibits superior numerical performance in comparison with other well-known conjugate gradient methods.

Keywords: unconstrained optimization; nonlinear conjugate gradient method; quasi-Newton methods

© Arzuka et al. 2016. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided appropriate credit is given to the original authors and the source.

1 Introduction
In this paper we are interested in solving nonlinear large scale unconstrained optimization problems of the form

$$\min_{x \in \mathbb{R}^n} f(x),$$

where f : R^n -> R is an at least twice continuously differentiable function. A nonlinear conjugate gradient method is an iterative scheme that generates a sequence {x_k} of approximations to the solution of this problem via the recurrence

$$x_{k+1} = x_k + \alpha_k d_k, \qquad k = 0, 1, 2, \ldots,$$

where alpha_k > 0 is the steplength determined by a line search strategy that either minimizes the function or reduces it sufficiently along the search direction, and d_k is the search direction defined by

$$d_k = \begin{cases} -g_k, & k = 0,\\ -g_k + \beta_k d_{k-1}, & k \ge 1,\end{cases}$$

where g_k is the gradient of f at x_k and beta_k is a scalar known as the conjugate gradient parameter. For example, Fletcher and Reeves (FR) [1], Polak-Ribière-Polyak (PRP) [2], Liu and Storey (LS) [3], Hestenes and Stiefel (HS) [4], Dai and Yuan (DY) [5], and Fletcher's conjugate descent (CD) [6] use, respectively,

$$\beta_k^{FR} = \frac{g_k^T g_k}{g_{k-1}^T g_{k-1}}, \qquad \beta_k^{PRP} = \frac{g_k^T y_{k-1}}{g_{k-1}^T g_{k-1}}, \qquad \beta_k^{LS} = \frac{-g_k^T y_{k-1}}{d_{k-1}^T g_{k-1}},$$

$$\beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \qquad \beta_k^{DY} = \frac{g_k^T g_k}{d_{k-1}^T y_{k-1}}, \qquad \beta_k^{CD} = \frac{-g_k^T g_k}{d_{k-1}^T g_{k-1}},$$

where y_{k-1} = g_k - g_{k-1}. If the objective function is quadratic, these methods are equivalent under an exact line search; for a general nonlinear objective, different choices of beta_k lead to different practical performance. Over the years, after the practical convergence results of Al-Baali [7] and later of Gilbert and Nocedal [8], the attention of researchers has been on developing conjugate gradient methods that possess the sufficient descent condition

$$g_k^T d_k \le -c\|g_k\|^2$$

for some constant c > 0.
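The classical update parameters above are simple inner-product ratios, so they are easy to compute directly. The following is a minimal sketch of how they could be evaluated; it is written in Python/NumPy purely for illustration (the paper itself contains no code, and the authors' experiments reported later were run in MATLAB).

```python
# Sketch (not from the paper): the six classical CG update parameters.
import numpy as np

def cg_betas(g_prev, g, d_prev):
    """Return the FR, PRP, LS, HS, DY, and CD parameters for one iteration."""
    y_prev = g - g_prev                      # y_{k-1} = g_k - g_{k-1}
    return {
        "FR": (g @ g) / (g_prev @ g_prev),
        "PRP": (g @ y_prev) / (g_prev @ g_prev),
        "LS": -(g @ y_prev) / (d_prev @ g_prev),
        "HS": (g @ y_prev) / (d_prev @ y_prev),
        "DY": (g @ g) / (d_prev @ y_prev),
        "CD": -(g @ g) / (d_prev @ g_prev),
    }

rng = np.random.default_rng(0)
g_prev, g = rng.standard_normal(5), rng.standard_normal(5)
d_prev = -g_prev                              # first step is steepest descent
print(cg_betas(g_prev, g, d_prev))
```

With any of these parameters the next direction is d_k = -g_k + beta_k d_{k-1}; whether that direction satisfies the sufficient descent condition depends on the particular beta_k and on the line search used.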
For instance, the CG-DESCENT method of Hager and Zhang [9] possesses this property; it uses

$$\beta_k^{HZ} = \max\{\beta_k^N, \eta_k\}, \qquad \beta_k^N = \frac{1}{d_{k-1}^T y_{k-1}}\Big(y_{k-1} - 2 d_{k-1}\frac{\|y_{k-1}\|^2}{d_{k-1}^T y_{k-1}}\Big)^T g_k, \qquad \eta_k = \frac{-1}{\|d_{k-1}\|\min\{\eta, \|g_{k-1}\|\}},$$

which is based on a modification of the HS method. Another important class of conjugate gradient methods is the so-called three-term conjugate gradient methods, in which the search direction is a linear combination of g_k, s_k, and y_k,

$$d_k = -g_k - \tau_1 s_k + \tau_2 y_k,$$

where tau_1 and tau_2 are scalars. Among the three-term conjugate gradient methods in the literature are those proposed by Zhang et al. [10, 11], obtained from a descent modified PRP and a descent modified HS conjugate gradient method, namely

$$d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T y_k}{g_k^T g_k}\, d_k - \frac{g_{k+1}^T d_k}{g_k^T g_k}\, y_k$$

and

$$d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T y_k}{s_k^T y_k}\, s_k - \frac{g_{k+1}^T s_k}{s_k^T y_k}\, y_k,$$

where s_k = x_{k+1} - x_k. An attractive property of these methods is that at each iteration the search direction satisfies the descent condition g_k^T d_k = -c\|g_k\|^2 for some constant c > 0. In the same manner, Andrei [12] develops a three-term conjugate gradient method from the BFGS update of the inverse Hessian approximation restarted with the identity matrix at every iteration, with the search direction

$$d_{k+1} = -g_{k+1} + \Big[\frac{y_k^T g_{k+1}}{y_k^T s_k} - \Big(1 + \frac{\|y_k\|^2}{y_k^T s_k}\Big)\frac{s_k^T g_{k+1}}{y_k^T s_k}\Big] s_k + \frac{s_k^T g_{k+1}}{y_k^T s_k}\, y_k.$$

An interesting feature of this method is that both the sufficient descent and the conjugacy conditions are satisfied, and global convergence holds for uniformly convex functions. Motivated by the good performance of three-term conjugate gradient methods, we are interested in developing a three-term conjugate gradient method that satisfies the sufficient descent condition and the conjugacy condition and is globally convergent. The remaining part of this paper is structured as follows. Section 2 deals with the derivation of the proposed method. In Section 3 we present the global convergence properties. The numerical results and discussion are reported in Section 4. Finally, a concluding remark is given in the last section.

2 Conjugate gradient method via memoryless quasi-Newton method
In this section we describe the proposed method, which satisfies both the sufficient descent and the conjugacy conditions. Let us consider the DFP method, a quasi-Newton method belonging to the Broyden class [13]. The search direction in quasi-Newton methods is given by

$$d_k = -H_k g_k,$$

where H_k is the inverse Hessian approximation updated within the Broyden class. This class consists of several updating schemes, the most famous being the BFGS and the DFP; if H_k is updated by DFP, then

$$H_{k+1} = H_k + \frac{s_k s_k^T}{s_k^T y_k} - \frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k},$$

so that the secant equation H_{k+1} y_k = s_k is satisfied. This method is also known as a variable metric method, developed by Davidon [14] and Fletcher and Powell [15]. A remarkable property of the DFP method is that it is a conjugate direction method and one of the best quasi-Newton methods, combining the advantages of the Newton method and of the steepest descent method while avoiding their shortcomings. Memoryless quasi-Newton methods are another technique for solving the problem: at every step the inverse Hessian approximation is restarted as the identity matrix, so the search direction can be determined without storing any matrix.
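Since the proposed method is built on the DFP update, a small illustration may help. The sketch below (Python/NumPy, not the authors' code) applies one DFP update to an identity matrix using data from a convex quadratic and checks the secant equation H_{k+1} y_k = s_k.

```python
# Sketch of the DFP inverse-Hessian update and a secant-equation check.
import numpy as np

def dfp_update(H, s, y):
    """One DFP update of the inverse Hessian approximation H, given step s and gradient change y."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD test Hessian
s = rng.standard_normal(n)
y = A @ s                                                       # for a quadratic, y = A s
H_next = dfp_update(np.eye(n), s, y)
print(np.allclose(H_next @ y, s))                               # True: H_{k+1} y_k = s_k
```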
The memoryless approach was proposed by Shanno [17] and Perry [18]; the classical conjugate gradient methods PRP [2] and FR [1] can be seen as memoryless BFGS methods (see Shanno [17]). We propose our three-term conjugate gradient method by incorporating the DFP update of the inverse Hessian approximation within the framework of a memoryless quasi-Newton method, where at each iteration the inverse Hessian approximation is restarted as a multiple of the identity matrix with a positive scaling parameter:

$$Q_{k+1} = \mu_k I + \frac{s_k s_k^T}{s_k^T y_k} - \mu_k \frac{y_k y_k^T}{y_k^T y_k}.$$

The search direction is thus given by

$$d_{k+1} = -Q_{k+1} g_{k+1} = -\mu_k g_{k+1} - \frac{s_k^T g_{k+1}}{s_k^T y_k}\, s_k + \mu_k \frac{y_k^T g_{k+1}}{y_k^T y_k}\, y_k.$$

Various strategies can be considered for deriving the scaling parameter mu_k; we prefer the following choice, due to Wolkowicz [19]:

$$\mu_k = \frac{s_k^T s_k}{y_k^T s_k} - \sqrt{\Big(\frac{s_k^T s_k}{y_k^T s_k}\Big)^2 - \frac{s_k^T s_k}{y_k^T y_k}}.$$

The new search direction can then be written as

$$d_{k+1} = -\mu_k g_{k+1} - \varphi_1 s_k + \varphi_2 y_k, \qquad \varphi_1 = \frac{s_k^T g_{k+1}}{s_k^T y_k}, \qquad \varphi_2 = \mu_k \frac{y_k^T g_{k+1}}{y_k^T y_k}.$$

2.1 Algorithm (STCG)
We now present the algorithm of the proposed method. It has been reported that the line search in conjugate gradient methods performs many function evaluations in order to obtain a desirable steplength alpha_k, owing to poor scaling of the search direction (see Nocedal [20]). As a consequence, we incorporate the acceleration scheme proposed by Andrei [21] so as to reduce the number of function evaluations. Instead of x_{k+1} = x_k + alpha_k d_k, the new approximation to the minimizer is determined by

$$x_{k+1} = x_k + \vartheta_k \alpha_k d_k, \qquad \vartheta_k = -\frac{r_k}{q_k},$$

where r_k = alpha_k g_k^T d_k, q_k = -alpha_k (g_k - g_z)^T d_k = -alpha_k y_k^T d_k, g_z = grad f(z), and z = x_k + alpha_k d_k.

Algorithm 1 (STCG)
Step 0. Select an initial point x_0, compute f(x_0) and g(x_0), set d_0 = -g_0 and k = 0.
Step 1. Test the stopping criterion ||g_k|| <= epsilon; if it is satisfied, stop, otherwise go to Step 2.
Step 2. Determine the steplength alpha_k as follows. Given delta in (0, 1) and p_1, p_2 with 0 < p_1 < p_2 < 1:
  (i) set alpha = 1;
  (ii) test the Armijo-type relation f(x_k + alpha d_k) - f(x_k) <= alpha delta g_k^T d_k;
  (iii) if it is satisfied, set alpha_k = alpha and go to Step 3; otherwise choose a new alpha in [p_1 alpha, p_2 alpha] and return to (ii).
Step 3. Compute z = x_k + alpha_k d_k, g_z = grad f(z), and y_k = g_k - g_z.
Step 4. Compute r_k = alpha_k g_k^T d_k and q_k = -alpha_k y_k^T d_k.
Step 5. If q_k is nonzero, set vartheta_k = -r_k/q_k and x_{k+1} = x_k + vartheta_k alpha_k d_k; otherwise set x_{k+1} = x_k + alpha_k d_k.
Step 6. Compute the search direction d_{k+1} from the three-term formula above, with mu_k, varphi_1, and varphi_2 as defined.
Step 7. Set k := k + 1 and go to Step 1.
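Putting the pieces together, the following is a compact sketch of Algorithm 1. It is an illustrative Python/NumPy reading of the method, not the authors' MATLAB implementation; the tolerance, the iteration cap, the backtracking constants, and the simple halving rule for choosing the new trial step inside [p_1 alpha, p_2 alpha] are assumptions made for this example.

```python
# Illustrative sketch of the STCG algorithm (scaled three-term CG with acceleration).
import numpy as np

def stcg(f, grad, x0, eps=1e-6, max_iter=10_000, delta=1e-4, p1=0.1, p2=0.9):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                        # d_0 = -g_0
    for k in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break
        # Step 2: Armijo-type backtracking  f(x + a d) - f(x) <= delta * a * g^T d
        a, fx, gTd = 1.0, f(x), g @ d
        while f(x + a * d) - fx > delta * a * gTd:
            a *= 0.5 * (p1 + p2)                  # new trial step in [p1*a, p2*a]
        # Steps 3-5: acceleration scheme of Andrei [21]
        z = x + a * d
        gz = grad(z)
        r, q = a * gTd, -a * ((g - gz) @ d)
        x_new = x + (-r / q) * a * d if q != 0 else z
        # Step 6: scaled three-term direction built from s_k, y_k and mu_k (Wolkowicz)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        ss, sy, yy = s @ s, s @ y, y @ y
        mu = ss / sy - np.sqrt(max((ss / sy) ** 2 - ss / yy, 0.0))
        phi1 = (s @ g_new) / sy
        phi2 = mu * (y @ g_new) / yy
        d = -mu * g_new - phi1 * s + phi2 * y
        x, g = x_new, g_new
    return x, k

# Example: a strictly convex quadratic f(x) = 1/2 x^T A x - b^T x.
rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
b = rng.standard_normal(n)
x_star, iters = stcg(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(n))
print(iters, np.linalg.norm(A @ x_star - b))
```

One pleasant side effect worth noting: on a strictly convex quadratic the acceleration step reduces to x_k - (g_k^T d_k / d_k^T A d_k) d_k, which is the exact minimizer along d_k, so the example above converges quickly even though the backtracking itself is crude.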
3 Convergence analysis
In this section we analyze the global convergence of the proposed method, where we assume that g_k is nonzero for all k >= 0, since otherwise a stationary point has already been obtained. First of all, we show that the search direction satisfies the sufficient descent and the conjugacy conditions. The following assumption is needed.

Assumption 1. The objective function f is convex and the gradient g is Lipschitz continuous on the level set

$$K = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\},$$

that is, there exist positive constants psi_1, psi_2, and L such that

$$\|g(x) - g(y)\| \le L\|x - y\| \qquad \text{and} \qquad \psi_1\|z\|^2 \le z^T G(x) z \le \psi_2\|z\|^2$$

for all z in R^n and x, y in K, where G(x) is the Hessian matrix of f.

Under Assumption 1 we can easily deduce that

$$\psi_1\|s_k\|^2 \le s_k^T y_k \le \psi_2\|s_k\|^2,$$

since s_k^T y_k = s_k^T \bar{G}_k s_k with \bar{G}_k = \int_0^1 G(x_k + \lambda s_k)\, d\lambda. We begin by showing that the updating matrix Q_{k+1} is positive definite.

Lemma 1. Suppose that Assumption 1 holds; then the matrix Q_{k+1} is positive definite.

Proof. We first show that mu_k is well defined and bounded. By the Cauchy-Schwarz inequality,

$$\Big(\frac{s_k^T s_k}{y_k^T s_k}\Big)^2 - \frac{s_k^T s_k}{y_k^T y_k} = \frac{(s_k^T s_k)\big[(s_k^T s_k)(y_k^T y_k) - (y_k^T s_k)^2\big]}{(y_k^T s_k)^2 (y_k^T y_k)} \ge 0,$$

which implies that the scaling parameter mu_k is well defined. It follows that

$$0 < \mu_k \le \frac{s_k^T s_k}{y_k^T s_k} \le \frac{\|s_k\|^2}{\psi_1\|s_k\|^2} = \frac{1}{\psi_1}.$$

Since the scaling parameter is positive and bounded above, for any nonzero vector p in R^n we obtain

$$p^T Q_{k+1} p = \mu_k\,\frac{(p^T p)(y_k^T y_k) - (p^T y_k)^2}{y_k^T y_k} + \frac{(p^T s_k)^2}{s_k^T y_k}.$$

By the Cauchy-Schwarz inequality (p^T p)(y_k^T y_k) - (p^T y_k)^2 >= 0, and y_k^T s_k > 0; hence Q_{k+1} is positive definite for all k >= 0. Observe also that

$$\operatorname{tr}(Q_{k+1}) = n\mu_k + \frac{s_k^T s_k}{s_k^T y_k} - \mu_k = (n-1)\mu_k + \frac{s_k^T s_k}{s_k^T y_k} \le \frac{n-1}{\psi_1} + \frac{1}{\psi_1} = \frac{n}{\psi_1},$$

while tr(Q_{k+1}) >= s_k^T s_k / (s_k^T y_k) >= 1/psi_2 > 0, so tr(Q_{k+1}) is bounded. On the other hand, by the Sherman-Morrison formula (Q_{k+1}^{-1} is the memoryless matrix obtained from (1/mu_k) I by the direct DFP formula),

$$Q_{k+1}^{-1} = \frac{1}{\mu_k} I - \frac{1}{\mu_k}\,\frac{y_k s_k^T + s_k y_k^T}{s_k^T y_k} + \Big(1 + \frac{s_k^T s_k}{\mu_k\, s_k^T y_k}\Big)\frac{y_k y_k^T}{s_k^T y_k}.$$

Using the Lipschitz bound ||y_k|| <= L||s_k||, the inequality s_k^T y_k >= psi_1||s_k||^2, and the bounds on mu_k, a direct calculation shows that tr(Q_{k+1}^{-1}) is bounded above by a positive constant omega depending only on n, L, psi_1, and psi_2.

We now state the sufficient descent property of the proposed search direction.

Lemma 2. Suppose that Assumption 1 holds. Then the search direction d_{k+1} satisfies the sufficient descent condition g_{k+1}^T d_{k+1} <= -c||g_{k+1}||^2.

Proof. Since

$$-g_{k+1}^T d_{k+1} \ge \frac{\|g_{k+1}\|^2}{\operatorname{tr}(Q_{k+1}^{-1})}$$

(see, for example, Leong [22] and Babaie-Kafaki [23]), the bound on tr(Q_{k+1}^{-1}) gives -g_{k+1}^T d_{k+1} >= c||g_{k+1}||^2, that is,

$$g_{k+1}^T d_{k+1} \le -c\|g_{k+1}\|^2, \qquad c = \min\{1, 1/\omega\}.$$

Dai and Liao [24] extended the classical conjugacy condition y_k^T d_{k+1} = 0 to

$$y_k^T d_{k+1} = -t\,(s_k^T g_{k+1}), \qquad t \ge 0.$$

We can show that the proposed method also satisfies this condition.

Lemma 3. Suppose that Assumption 1 holds. Then the search direction d_{k+1} satisfies the Dai-Liao conjugacy condition.

Proof. From the three-term formula,

$$y_k^T d_{k+1} = -\mu_k y_k^T g_{k+1} - \frac{s_k^T g_{k+1}}{s_k^T y_k}\, y_k^T s_k + \mu_k \frac{y_k^T g_{k+1}}{y_k^T y_k}\, y_k^T y_k = -\mu_k y_k^T g_{k+1} - s_k^T g_{k+1} + \mu_k y_k^T g_{k+1} = -s_k^T g_{k+1},$$

so the condition holds with t = 1.

The following lemma gives the boundedness of the search direction.

Lemma 4. Suppose that Assumption 1 holds. Then there exists a constant P > 0 such that ||d_{k+1}|| <= P||g_{k+1}||.

Proof. A direct consequence of the boundedness of tr(Q_{k+1}) is

$$\|d_{k+1}\| = \|Q_{k+1} g_{k+1}\| \le \operatorname{tr}(Q_{k+1})\,\|g_{k+1}\| \le P\|g_{k+1}\|,$$

where P is the upper bound on tr(Q_{k+1}).
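The three properties proved above are easy to check numerically. The sketch below (Python/NumPy, illustrative only) builds Q_{k+1} for data drawn from a convex quadratic and verifies positive definiteness, descent, and the conjugacy identity y_k^T d_{k+1} = -s_k^T g_{k+1}.

```python
# Numerical check of positive definiteness, descent, and Dai-Liao conjugacy (t = 1).
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD Hessian of a convex quadratic
s = rng.standard_normal(n)                                      # step s_k
y = A @ s                                                       # gradient change y_k = A s_k
g_next = rng.standard_normal(n)                                 # an arbitrary gradient g_{k+1}

ss, sy, yy = s @ s, s @ y, y @ y
mu = ss / sy - np.sqrt((ss / sy) ** 2 - ss / yy)                # Wolkowicz scaling parameter
Q = mu * np.eye(n) + np.outer(s, s) / sy - mu * np.outer(y, y) / yy
d_next = -Q @ g_next

print(np.all(np.linalg.eigvalsh(Q) > 0))        # Q_{k+1} is positive definite
print(g_next @ d_next <= 0)                      # descent: g_{k+1}^T d_{k+1} < 0
print(np.isclose(y @ d_next, -(s @ g_next)))     # conjugacy: y_k^T d_{k+1} = -s_k^T g_{k+1}
```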
In order to establish the convergence result, we give the following lemma.

Lemma 5. Suppose that Assumption 1 holds. Then there exist positive constants gamma_1 and gamma_2 such that any steplength alpha_k generated by Step 2 of Algorithm 1 satisfies either

$$f(x_k + \alpha_k d_k) - f(x_k) \le -\gamma_1 \frac{(g_k^T d_k)^2}{\|d_k\|^2}$$

or

$$f(x_k + \alpha_k d_k) - f(x_k) \le \gamma_2\, g_k^T d_k.$$

Proof. If the Armijo-type condition is satisfied with alpha_k = 1, then f(x_k + alpha_k d_k) - f(x_k) <= delta g_k^T d_k, so the second inequality holds with gamma_2 = delta. Suppose instead that alpha_k < 1. Then there is a trial steplength alpha <= alpha_k/p_1 for which the condition failed, that is,

$$f(x_k + \alpha d_k) - f(x_k) > \delta\alpha\, g_k^T d_k.$$

By the mean-value theorem there exists tau_k in (0, 1) such that f(x_k + alpha d_k) - f(x_k) = alpha g(x_k + tau_k alpha d_k)^T d_k. Combining the two relations and using the Lipschitz continuity of g,

$$(\delta - 1)\alpha\, g_k^T d_k < \alpha\big(g(x_k + \tau_k\alpha d_k) - g_k\big)^T d_k \le L\alpha^2\|d_k\|^2,$$

which implies

$$\alpha \ge -\frac{(1-\delta)\, g_k^T d_k}{L\|d_k\|^2}, \qquad \text{hence} \qquad \alpha_k \ge p_1\alpha \ge -\frac{p_1(1-\delta)\, g_k^T d_k}{L\|d_k\|^2}.$$

Substituting this lower bound into the Armijo-type condition gives

$$f(x_k + \alpha_k d_k) - f(x_k) \le \delta\alpha_k\, g_k^T d_k \le -\frac{p_1\delta(1-\delta)}{L}\,\frac{(g_k^T d_k)^2}{\|d_k\|^2} = -\gamma_1\frac{(g_k^T d_k)^2}{\|d_k\|^2},$$

with gamma_1 = p_1 delta(1 - delta)/L.

Theorem 1. Suppose that Assumption 1 holds. Then Algorithm 1 generates a sequence of approximations {x_k} such that

$$\lim_{k\to\infty}\|g_k\| = 0.$$

Proof. As a direct consequence of Lemma 5, the sufficient descent property, and the boundedness of the search direction, we have either

$$f(x_k + \alpha_k d_k) - f(x_k) \le -\gamma_1\frac{(g_k^T d_k)^2}{\|d_k\|^2} \le -\frac{\gamma_1 c^2}{P^2}\|g_k\|^2$$

or

$$f(x_k + \alpha_k d_k) - f(x_k) \le \gamma_2\, g_k^T d_k \le -\gamma_2 c\|g_k\|^2.$$

Hence in either case there exists a positive constant gamma such that

$$f(x_k + \alpha_k d_k) - f(x_k) \le -\gamma\|g_k\|^2.$$

Since the steplength alpha_k generated by Algorithm 1 is bounded away from zero, the sequence {f(x_k)} is non-increasing, and by the boundedness of f on the level set,

$$0 = \lim_{k\to\infty}\big(f(x_{k+1}) - f(x_k)\big) \le -\gamma\lim_{k\to\infty}\|g_k\|^2,$$

and consequently lim_{k->infinity} ||g_k|| = 0.

4 Numerical results
In this section we present the results of a numerical experiment comparing the proposed method (STCG) with the CG-DESCENT (CG-DESC) [9], three-term Hestenes-Stiefel (TTHS) [11], three-term Polak-Ribière-Polyak (TTPRP) [10], and TTCG [12] methods. We evaluate the performance of these methods on the basis of the number of iterations and the number of function evaluations. Considering standard unconstrained optimization test problems from the collection of Andrei [25], we conducted ten numerical experiments for each test function, with the number of variables ranging from 70 to 45,000. The algorithms were implemented in Matlab on a PC with an Intel Core Duo processor. A run was terminated whenever ||g_k|| < epsilon or the method failed to converge within the maximum allowed number of iterations; the latter case is indicated by the symbol '-'. An Armijo-type line search suggested by Byrd and Nocedal [26] was used for all the methods under consideration. Table 1 in the Appendix reports, for each method, the number of iterations (NI) and the number of function evaluations (NF).
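As an illustration of this setup, the sketch below codes one test problem from Table 1, the extended Rosenbrock function, and minimizes it with SciPy's built-in nonlinear CG solver so that the iteration and function-evaluation counts can be read off. It is a Python stand-in for the authors' MATLAB runs, and the gradient tolerance and iteration cap used here are assumptions rather than the exact values of the paper.

```python
# One test problem from the Andrei collection: extended Rosenbrock, solved with SciPy CG.
import numpy as np
from scipy.optimize import minimize

def ext_rosenbrock(x):
    """Sum over consecutive pairs of 100*(x_even - x_odd^2)^2 + (1 - x_odd)^2."""
    xo, xe = x[0::2], x[1::2]
    return np.sum(100.0 * (xe - xo**2) ** 2 + (1.0 - xo) ** 2)

def ext_rosenbrock_grad(x):
    g = np.zeros_like(x)
    xo, xe = x[0::2], x[1::2]
    g[0::2] = -400.0 * xo * (xe - xo**2) - 2.0 * (1.0 - xo)
    g[1::2] = 200.0 * (xe - xo**2)
    return g

n = 180                                      # one of the dimensions used in Table 1
x0 = np.tile([-1.2, 1.0], n // 2)            # standard starting point
res = minimize(ext_rosenbrock, x0, jac=ext_rosenbrock_grad,
               method="CG", options={"gtol": 1e-6, "maxiter": 10_000})
print(res.nit, res.nfev)                     # iterations (NI) and function evaluations (NF)
```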
Over the set of test problems, TTPRP solves % of the problems, TTHS solves %, CG-DESCENT solves %, STCG solves %, and TTCG solves %. In terms of efficiency, TTPRP needs % and % more, on average, than STCG in the number of iterations and function evaluations, respectively; STCG needs % and % less, on average, than TTHS in the number of iterations and function evaluations, respectively; CG-DESCENT needs % and % more, on average, than STCG in the number of iterations and function evaluations, respectively; and STCG needs % and % less, on average, than TTCG in the number of iterations and function evaluations, respectively. In order to examine the performance of these methods further, we employ the performance profile of Dolan and Moré [27]. Figures 1 and 2 give the performance profile plots in terms of iterations and function evaluations; the top curve corresponds to the method with the highest number of wins, which indicates that the performance of the proposed method is highly encouraging and that it substantially outperforms each of the other methods considered.

Figure 1: Performance profiles based on iterations.
Figure 2: Performance profiles based on function evaluations.
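A performance profile of this kind plots, for each solver, the fraction of problems on which its cost (iterations or function evaluations) is within a factor tau of the best cost achieved by any solver on that problem. A minimal sketch, again in Python/NumPy and with hypothetical data rather than the values of Table 1:

```python
# Sketch of a Dolan-Moré performance profile computed from a cost table.
import numpy as np

def performance_profile(T, taus):
    """Fraction of problems each solver solves within a factor tau of the best solver."""
    best = T.min(axis=1, keepdims=True)          # best cost per problem
    ratios = T / best                             # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Hypothetical costs for 4 problems and 3 solvers (np.inf marks a failed run).
T = np.array([[30.0, 45.0, 28.0],
              [12.0, 10.0, np.inf],
              [100.0, 90.0, 80.0],
              [7.0, 7.0, 9.0]])
taus = np.linspace(1.0, 3.0, 5)
print(performance_profile(T, taus))              # one row per tau, one column per solver
```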
5 Conclusion
We have presented a new three-term conjugate gradient method for solving nonlinear large scale unconstrained optimization problems, obtained from a memoryless DFP quasi-Newton update of the inverse Hessian approximation restarted with a scaled identity matrix. A remarkable property of the proposed method is that both the sufficient descent and the conjugacy conditions are satisfied, and global convergence is established under mild assumptions. The numerical results show that the proposed method is promising and more efficient than the other methods considered.

Appendix
Table 1: Numerical results of TTPRP, TTHS, CG-DESCENT, STCG, and TTCG. For each test function (Extended BD1, Extended Rosenbrock, DENSCHNF, Extended Himmelblau, DQDRTIC, HIMMELH, Extended BD2, Extended Maratos, NONDIA, DENSCHNB, EG2, Raydan, ENGVAL1, HIMMELBG, Extended Tridiagonal, Extended Quadratic Penalty QP1, and several Diagonal functions) and each dimension n in {70, 180, 863, 1362, 6500, 11400, 17000, 33200, 42250, 45000}, the table lists the number of iterations (NI) and the number of function evaluations (NF) of each method, with '-' marking runs that failed to converge.
Competing interests
The authors declare that they have no competing interests with regard to the manuscript.

Authors' contributions
All authors participated in establishing the basic concepts and the convergence properties of the proposed method and in the experimental comparison of the proposed method with other existing methods.

Received: 30 May 2016. Accepted: November 2016.

References
1. Fletcher, R., Reeves, C.M.: Function minimization by conjugate gradients. Comput. J. 7(2), 149-154 (1964)
2. Polak, E., Ribière, G.: Note sur la convergence de méthodes de directions conjuguées. ESAIM: Math. Model. Numer. Anal. 3(R1), 35-43 (1969)
3. Liu, Y., Storey, C.: Efficient generalized conjugate gradient algorithms, part 1: theory. J. Optim. Theory Appl. 69(1), 129-137 (1991)
4. Hestenes, M.R.: The conjugate gradient method for solving linear systems. In: Proc. Symp. Appl. Math. VI, pp. 83-102. American Mathematical Society (1956)
5. Dai, Y.-H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177-182 (1999)
6. Fletcher, R.: Practical Methods of Optimization. John Wiley & Sons, New York (2013)
7. Al-Baali, M.: Descent property and global convergence of the Fletcher-Reeves method with inexact line search. IMA J. Numer. Anal. 5(1), 121-124 (1985)
8. Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2(1), 21-42 (1992)
9. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16(1), 170-192 (2005)
10. Zhang, L., Zhou, W., Li, D.-H.: A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 26(4), 629-640 (2006)
11. Zhang, L., Zhou, W., Li, D.: Some descent three-term conjugate gradient methods and their global convergence. Optim. Methods Softw. 22(4), 697-711 (2007)
12. Andrei, N.: On three-term conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 219(11), 6316-6327 (2013)
13. Broyden, C.: Quasi-Newton methods and their application to function minimisation. Math. Comput. 21, 368-381 (1967)
14. Davidon, W.C.: Variable metric method for minimization. SIAM J. Optim. 1(1), 1-17 (1991)
15. Fletcher, R., Powell, M.J.: A rapidly convergent descent method for minimization. Comput. J. 6(2), 163-168 (1963)
16. Goldfarb, D.: Extension of Davidon's variable metric method to maximization under linear inequality and equality constraints. SIAM J. Appl. Math. 17(4), 739-764 (1969)
17. Shanno, D.F.: Conjugate gradient methods with inexact searches. Math. Oper. Res. 3(3), 244-256 (1978)
18. Perry, J.M.: A class of conjugate gradient algorithms with a two step variable metric memory. Center for Mathematical Studies in Economics and Management Science, Northwestern University Press, Evanston (1977)
19. Wolkowicz, H.: Measures for symmetric rank-one updates. Math. Oper. Res. 19(4), 815-830 (1994)
20. Nocedal, J.: Conjugate gradient methods and nonlinear optimization. In: Linear and Nonlinear Conjugate Gradient-Related Methods, pp. 9-23 (1996)
21. Andrei, N.: Acceleration of conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 213(2), 361-369 (2009)
22. Leong, W.J., San Goh, B.: Convergence and stability of line search methods for unconstrained optimization. Acta Appl. Math. 127(1), 155-167 (2013)
23. Babaie-Kafaki, S.: A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization. 4OR 11(4), 361-374 (2013)
24. Dai, Y.-H., Liao, L.-Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43(1), 87-101 (2001)
25. Andrei, N.: An unconstrained optimization test functions collection. Adv. Model. Optim. 10(1), 147-161 (2008)
26. Byrd, R.H., Nocedal, J.: A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 26(3), 727-739 (1989)
27. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201-213 (2002)
