David G. Luenberger, Yinyu Ye - Linear and Nonlinear Programming (excerpt: Chapter 8, Basic Descent Methods)

[Fig. 8.2 Fibonacci search]

The solution to the Fibonacci difference equation

$$F_N = F_{N-1} + F_{N-2} \tag{3}$$

is of the form

$$F_N = A\tau_1^N + B\tau_2^N, \tag{4}$$

where $\tau_1$ and $\tau_2$ are roots of the characteristic equation $\tau^2 = \tau + 1$. Explicitly,

$$\tau_1 = \frac{1+\sqrt{5}}{2}, \qquad \tau_2 = \frac{1-\sqrt{5}}{2}.$$

(The number $\tau_1 \simeq 1.618$ is known as the golden section ratio and was considered by early Greeks to be the most aesthetic value for the ratio of two adjacent sides of a rectangle.) For large $N$ the first term on the right side of (4) dominates the second, and hence

$$\lim_{N\to\infty} \frac{F_{N-1}}{F_N} = \frac{1}{\tau_1} \simeq 0.618.$$

It follows from (1) that the interval of uncertainty at any point in the process has width

$$d_k = \left(\frac{1}{\tau_1}\right)^{k-1} d_1, \tag{5}$$

and from this it follows that

$$\frac{d_{k+1}}{d_k} = \frac{1}{\tau_1} = 0.618. \tag{6}$$

Therefore, we conclude that, with respect to the width of the uncertainty interval, the search by golden section converges linearly (see Section 7.8) to the overall minimum of the function $f$ with convergence ratio $1/\tau_1 = 0.618$.

8.2 LINE SEARCH BY CURVE FITTING

The Fibonacci search method has a certain amount of theoretical appeal, since it assumes only that the function being searched is unimodal, and with respect to this broad class of functions the method is, in some sense, optimal. In most problems, however, it can safely be assumed that the function being searched, as well as being unimodal, possesses a certain degree of smoothness, and one might therefore expect that more efficient search techniques exploiting this smoothness can be devised; and indeed they can. Techniques of this nature are usually based on curve fitting procedures, where a smooth curve is passed through the previously measured points in order to determine an estimate of the minimum point. A variety of such techniques can be devised depending on whether or not derivatives of the function as well as its values can be measured, how many previous points are used to determine the fit, and the criterion used to determine the fit. In this section a number of possibilities are outlined and analyzed. All of them have orders of convergence greater than unity.

Newton's Method

Suppose that the function $f$ of a single variable $x$ is to be minimized, and suppose that at a point $x_k$ where a measurement is made it is possible to evaluate the three numbers $f(x_k)$, $f'(x_k)$, $f''(x_k)$. It is then possible to construct a quadratic function $q$ which at $x_k$ agrees with $f$ up to second derivatives, that is,

$$q(x) = f(x_k) + f'(x_k)(x - x_k) + \tfrac{1}{2} f''(x_k)(x - x_k)^2. \tag{7}$$

We may then calculate an estimate $x_{k+1}$ of the minimum point of $f$ by finding the point where the derivative of $q$ vanishes. Thus setting

$$0 = q'(x_{k+1}) = f'(x_k) + f''(x_k)(x_{k+1} - x_k),$$

we find

$$x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}. \tag{8}$$

[Fig. 8.3 Newton's method for minimization]

This process, which is illustrated in Fig. 8.3, can then be repeated at $x_{k+1}$. We note immediately that the new point $x_{k+1}$ resulting from Newton's method does not depend on the value $f(x_k)$. The method can more simply be viewed as a technique for iteratively solving equations of the form $g(x) = 0$, where, when applied to minimization, we put $g(x) \equiv f'(x)$. In this notation Newton's method takes the form

$$x_{k+1} = x_k - \frac{g(x_k)}{g'(x_k)}. \tag{9}$$

This form is illustrated in Fig. 8.4.
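The update (8) is a one-line computation. The following minimal sketch applies it repeatedly; the function names, tolerance, and iteration cap are my own choices, not from the text. Here `df` and `ddf` supply the derivatives $f'$ and $f''$ that the method assumes can be evaluated.

```python
def newton_line_search(df, ddf, x0, tol=1e-10, max_iter=50):
    """Minimize f by applying Newton's method (9) to g(x) = f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / ddf(x)      # g(x_k) / g'(x_k), as in (9)
        x = x - step               # update (8)
        if abs(step) < tol:        # stop when the correction is negligible
            break
    return x

# Example: f(x) = x**4 - 3x has f'(x) = 4x**3 - 3, f''(x) = 12x**2,
# and a unique minimizer at (3/4)**(1/3) ~ 0.9086.
x_min = newton_line_search(lambda x: 4*x**3 - 3, lambda x: 12*x**2, x0=1.0)
```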
We now show that Newton's method has order two convergence.

Proposition. Let the function $g$ have a continuous second derivative, and let $x^*$ satisfy $g(x^*) = 0$, $g'(x^*) \neq 0$. Then, provided $x_0$ is sufficiently close to $x^*$, the sequence $\{x_k\}_{k=0}^{\infty}$ generated by Newton's method (9) converges to $x^*$ with an order of convergence at least two.

Proof. For points $x$ in a region near $x^*$ there is a $k_1$ such that $|g''(x)| < k_1$ and a $k_2$ such that $|g'(x)| > k_2$. Then, since $g(x^*) = 0$, we can write

$$x_{k+1} - x^* = x_k - x^* - \frac{g(x_k) - g(x^*)}{g'(x_k)} = -\frac{g(x_k) - g(x^*) + g'(x_k)(x^* - x_k)}{g'(x_k)}.$$

[Fig. 8.4 Newton's method for solving equations]

The term in brackets is, by Taylor's theorem, zero to first order. In fact, using the remainder term in a Taylor series expansion about $x_k$, we obtain

$$x_{k+1} - x^* = \frac{1}{2}\,\frac{g''(\xi)}{g'(x_k)}\,(x_k - x^*)^2$$

for some $\xi$ between $x^*$ and $x_k$. Thus in the region near $x^*$,

$$|x_{k+1} - x^*| \leq \frac{k_1}{2k_2}\,|x_k - x^*|^2.$$

We see that if $|x_k - x^*|\,k_1/(2k_2) < 1$, then $|x_{k+1} - x^*| < |x_k - x^*|$, and thus we conclude that if started close enough to the solution, the method will converge to $x^*$ with an order of convergence at least two.

Method of False Position

Newton's method for minimization is based on fitting a quadratic on the basis of information at a single point; by using more points, less information is required at each of them. Thus, using $f(x_k)$, $f'(x_k)$, $f'(x_{k-1})$, it is possible to fit the quadratic

$$q(x) = f(x_k) + f'(x_k)(x - x_k) + \frac{f'(x_{k-1}) - f'(x_k)}{x_{k-1} - x_k}\cdot\frac{(x - x_k)^2}{2},$$

which has the same corresponding values. An estimate $x_{k+1}$ can then be determined by finding the point where the derivative of $q$ vanishes; thus

$$x_{k+1} = x_k - f'(x_k)\left[\frac{x_{k-1} - x_k}{f'(x_{k-1}) - f'(x_k)}\right]. \tag{10}$$

(See Fig. 8.5.) Comparing this formula with Newton's method, we see again that the value $f(x_k)$ does not enter; hence, our fit could have been passed through either $f(x_k)$ or $f(x_{k-1})$.

[Fig. 8.5 False position for minimization]

Also, the formula can be regarded as an approximation to Newton's method where the second derivative is replaced by the difference of two first derivatives. Again, since this method does not depend on values of $f$ directly, it can be regarded as a method for solving $f'(x) \equiv g(x) = 0$. Viewed in this way the method, which is illustrated in Fig. 8.6, takes the form

$$x_{k+1} = x_k - g(x_k)\left[\frac{x_k - x_{k-1}}{g(x_k) - g(x_{k-1})}\right]. \tag{11}$$

We next investigate the order of convergence of the method of false position and discover that it is order $\tau_1 \simeq 1.618$, the golden mean.

Proposition. Let $g$ have a continuous second derivative and suppose $x^*$ is such that $g(x^*) = 0$, $g'(x^*) \neq 0$. Then for $x_0$ sufficiently close to $x^*$, the sequence $\{x_k\}_{k=0}^{\infty}$ generated by the method of false position (11) converges to $x^*$ with order $\tau_1 \simeq 1.618$.

[Fig. 8.6 False position for solving equations]
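Iteration (11) needs only values of $g = f'$, one new evaluation per step. Below is a hypothetical implementation (names and stopping rule are mine, and a production version would guard against $g(x_k) = g(x_{k-1})$).

```python
def false_position(g, x0, x1, tol=1e-10, max_iter=100):
    """Solve g(x) = 0 by the method of false position (11).

    With g = f', this locates a stationary point of f. Near a root
    with g'(x*) != 0 the error order is the golden mean, ~1.618.
    """
    g0, g1 = g(x0), g(x1)
    for _ in range(max_iter):
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)   # update (11)
        if abs(x2 - x1) < tol:
            return x2
        x0, g0 = x1, g1
        x1, g1 = x2, g(x2)
    return x1

# Same example as before: g(x) = f'(x) = 4x**3 - 3, root ~ 0.9086.
root = false_position(lambda x: 4*x**3 - 3, 0.5, 1.5)
```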
Proof. Introducing the notation

$$g[a, b] = \frac{g(b) - g(a)}{b - a}, \tag{12}$$

we have

$$x_{k+1} - x^* = x_k - x^* - g(x_k)\left[\frac{x_k - x_{k-1}}{g(x_k) - g(x_{k-1})}\right] = (x_k - x^*)\left[\frac{g[x_{k-1}, x_k] - g[x_k, x^*]}{g[x_{k-1}, x_k]}\right]. \tag{13}$$

Further, upon the introduction of the notation

$$g[a, b, c] = \frac{g[a, b] - g[b, c]}{a - c},$$

we may write (13) as

$$x_{k+1} - x^* = (x_k - x^*)(x_{k-1} - x^*)\left[\frac{g[x_{k-1}, x_k, x^*]}{g[x_{k-1}, x_k]}\right].$$

Now, by the mean value theorem with remainder, we have (see Exercise 2)

$$g[x_{k-1}, x_k] = g'(\xi_k) \tag{14}$$

and

$$g[x_{k-1}, x_k, x^*] = \tfrac{1}{2}\,g''(\eta_k), \tag{15}$$

where $\xi_k$ and $\eta_k$ are convex combinations of $x_k$, $x_{k-1}$ and of $x_k$, $x_{k-1}$, $x^*$, respectively. Thus

$$x_{k+1} - x^* = \frac{g''(\eta_k)}{2g'(\xi_k)}\,(x_k - x^*)(x_{k-1} - x^*). \tag{16}$$

It follows immediately that the process converges if it is started sufficiently close to $x^*$. To determine the order of convergence, we note that for large $k$ Eq. (16) becomes approximately

$$x_{k+1} - x^* = M(x_k - x^*)(x_{k-1} - x^*), \qquad M = \frac{g''(x^*)}{2g'(x^*)}.$$

Thus, defining $\varepsilon_k = x_k - x^*$, we have, in the limit,

$$\varepsilon_{k+1} = M\varepsilon_k\varepsilon_{k-1}. \tag{17}$$

Taking the logarithm of this equation we have, with $y_k = \log M\varepsilon_k$,

$$y_{k+1} = y_k + y_{k-1}, \tag{18}$$

which is the Fibonacci difference equation discussed in Section 7.1. A solution to this equation will satisfy $y_{k+1} - \tau_1 y_k \to 0$. Thus $\log M\varepsilon_{k+1} - \tau_1 \log M\varepsilon_k \to 0$, or

$$\log \frac{M\varepsilon_{k+1}}{(M\varepsilon_k)^{\tau_1}} \to 0,$$

and hence

$$\frac{\varepsilon_{k+1}}{\varepsilon_k^{\tau_1}} \to M^{\tau_1 - 1}.$$

Having derived the error formula (17) by direct analysis, it is now appropriate to point out a short-cut technique, based on symmetry and other considerations, that can sometimes be used in even more complicated situations. The right side of error formula (17) must be a polynomial in $\varepsilon_k$ and $\varepsilon_{k-1}$, since it is derived from approximations based on Taylor's theorem. Furthermore, it must be second order, since the method reduces to Newton's method when $x_k = x_{k-1}$. Also, it must go to zero if either $\varepsilon_k$ or $\varepsilon_{k-1}$ goes to zero, since the method clearly yields $\varepsilon_{k+1} = 0$ in that case. Finally, it must be symmetric in $\varepsilon_k$ and $\varepsilon_{k-1}$, since the order of points is irrelevant. The only formula satisfying these requirements is $\varepsilon_{k+1} = M\varepsilon_k\varepsilon_{k-1}$.

Cubic Fit

Given the points $x_{k-1}$ and $x_k$ together with the values $f(x_{k-1})$, $f'(x_{k-1})$, $f(x_k)$, $f'(x_k)$, it is possible to fit a cubic equation to these points having corresponding values. The next point $x_{k+1}$ can then be determined as the relative minimum point of this cubic. This leads to

$$x_{k+1} = x_k - (x_k - x_{k-1})\left[\frac{f'(x_k) + u_2 - u_1}{f'(x_k) - f'(x_{k-1}) + 2u_2}\right], \tag{19}$$

where

$$u_1 = f'(x_{k-1}) + f'(x_k) - 3\,\frac{f(x_{k-1}) - f(x_k)}{x_{k-1} - x_k},$$
$$u_2 = \left(u_1^2 - f'(x_{k-1})\,f'(x_k)\right)^{1/2},$$

which is easily implementable for computations.

It can be shown (see Exercise 3) that the order of convergence of the cubic fit method is 2.0. Thus, although the method is exact for cubic functions, suggesting that its order might be three, its order is actually only two.
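Formula (19) is, as the text says, easily implementable. The sketch below performs one cubic-fit step; the helper name and driver loop are my own, and the square root in $u_2$ is assumed real, which holds in particular when the two points bracket a minimum so that $f'(x_{k-1})f'(x_k) < 0$.

```python
import math

def cubic_fit_step(x_prev, x_curr, f, df):
    """One step of the cubic fit iteration (19).

    f, df : callables giving f(x) and f'(x).
    Returns the minimum point of the cubic that matches f and f'
    at both x_prev and x_curr.
    """
    fp, fc = f(x_prev), f(x_curr)
    dp, dc = df(x_prev), df(x_curr)
    u1 = dp + dc - 3.0 * (fp - fc) / (x_prev - x_curr)
    u2 = math.sqrt(u1 * u1 - dp * dc)   # assumed real (e.g. bracketed minimum)
    return x_curr - (x_curr - x_prev) * (dc + u2 - u1) / (dc - dp + 2.0 * u2)

# Example: a few cubic-fit steps on f(x) = x**4 - 3x (minimizer ~ 0.9086).
f, df = lambda x: x**4 - 3*x, lambda x: 4*x**3 - 3
x0, x1 = 0.5, 1.5
for _ in range(4):
    x0, x1 = x1, cubic_fit_step(x0, x1, f, df)
```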
Quadratic Fit

The scheme that is often most useful in line searching is that of fitting a quadratic through three given points. This has the advantage of not requiring any derivative information. Given $x_1$, $x_2$, $x_3$ and the corresponding values $f(x_1) = f_1$, $f(x_2) = f_2$, $f(x_3) = f_3$, we construct the quadratic passing through these points,

$$q(x) = \sum_{i=1}^{3} f_i\,\frac{\prod_{j \neq i}(x - x_j)}{\prod_{j \neq i}(x_i - x_j)}, \tag{20}$$

and determine a new point $x_4$ as the point where the derivative of $q$ vanishes. Thus

$$x_4 = \frac{1}{2}\,\frac{b_{23}f_1 + b_{31}f_2 + b_{12}f_3}{a_{23}f_1 + a_{31}f_2 + a_{12}f_3}, \tag{21}$$

where $a_{ij} = x_i - x_j$ and $b_{ij} = x_i^2 - x_j^2$.

Define the errors $\varepsilon_i = x^* - x_i$, $i = 1, 2, 3, 4$. The expression for $\varepsilon_4$ must be a polynomial in $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$. It must be second order (since it is a quadratic fit). It must go to zero if any two of the errors $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$ are zero. (The reader should check this.) Finally, it must be symmetric (since the order of points is irrelevant). It follows that near a minimum point $x^*$ of $f$, the errors are related approximately by

$$\varepsilon_4 = M(\varepsilon_1\varepsilon_2 + \varepsilon_2\varepsilon_3 + \varepsilon_1\varepsilon_3), \tag{22}$$

where $M$ depends on the values of the second and third derivatives of $f$ at $x^*$. If we assume that $\varepsilon_k \to 0$ with an order greater than unity, then for large $k$ the error is governed approximately by

$$\varepsilon_{k+2} = M\varepsilon_k\varepsilon_{k-1}.$$

Letting $y_k = \log M\varepsilon_k$ this becomes

$$y_{k+2} = y_k + y_{k-1},$$

with characteristic equation

$$\tau^3 - \tau - 1 = 0.$$

The largest root of this equation is $\tau \simeq 1.3$, which thus determines the rate of growth of $y_k$ and is the order of convergence of the quadratic fit method.

Approximate Methods

In practice, line searches are terminated before they have converged to the actual minimum point. In one method, for example, a fairly large value for $x_1$ is chosen and this value is successively reduced by a positive factor $\beta < 1$ until a sufficient decrease in the function value is obtained. Approximate methods and suitable stopping criteria are discussed in Section 8.5.

8.3 GLOBAL CONVERGENCE OF CURVE FITTING

Above, we analyzed the convergence of various curve fitting procedures in the neighborhood of the solution point. If, however, any of these procedures were applied in pure form to search a line for a minimum, there is the danger—alas, the most likely possibility—that the process would diverge or wander about meaninglessly. In other words, the process may never get close enough to the solution for our detailed local convergence analysis to be applicable. It is therefore important to artfully combine our knowledge of the local behavior with conditions guaranteeing global convergence to yield a workable and effective procedure. The key to guaranteeing global convergence is the Global Convergence Theorem of Chapter 7. Application of this theorem in turn hinges on the construction of a suitable descent function and minor modifications of a pure curve fitting algorithm. We offer below a particular blend of this kind of construction and analysis, taking as our departure point the quadratic fit procedure discussed in Section 8.2 above.

Let us assume that the function $f$ that we wish to minimize is strictly unimodal and has continuous second partial derivatives. We initiate our search procedure by searching along the line until we find three points $x_1$, $x_2$, $x_3$ with $x_1 < x_2 < x_3$ such that $f(x_1) \geq f(x_2) \leq f(x_3)$. In other words, the value at the middle of these three points is no greater than that at either end. Such a sequence of points can be determined in a number of ways—see Exercise 7. The main reason for using points having this pattern is that a quadratic fit to these points will have a minimum (rather than a maximum) and the minimum point will lie in the interval $[x_1, x_3]$. See Fig. 8.7. We modify the pure quadratic fit algorithm so that it always works with points in this basic three-point pattern. The point $x_4$ is calculated from the quadratic fit in the standard way and $f(x_4)$ is measured.
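Formula (21) in code, as a small hypothetical helper (the name is mine) that will be reused in the sketch after the convergence proof below. On an exactly quadratic $f$ the vertex is found in one step, as the example shows.

```python
def quadratic_fit_min(x1, x2, x3, f1, f2, f3):
    """Vertex of the quadratic through (x1,f1), (x2,f2), (x3,f3); formula (21)."""
    a23, a31, a12 = x2 - x3, x3 - x1, x1 - x2          # a_ij = x_i - x_j
    b23, b31, b12 = x2**2 - x3**2, x3**2 - x1**2, x1**2 - x2**2
    return 0.5 * (b23*f1 + b31*f2 + b12*f3) / (a23*f1 + a31*f2 + a12*f3)

# Example: f(x) = (x - 2)**2 is recovered exactly; the result is 2.0.
f = lambda x: (x - 2.0)**2
x4 = quadratic_fit_min(0.0, 1.0, 3.0, f(0.0), f(1.0), f(3.0))
```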
Assuming (as in the figure) that $x_2 < x_4 < x_3$, and accounting for the unimodal nature of $f$, there are but two possibilities:

1. $f(x_4) \leq f(x_2)$,
2. $f(x_2) < f(x_4) \leq f(x_3)$.

In either case a new three-point pattern, $\bar{x}_1$, $\bar{x}_2$, $\bar{x}_3$, involving $x_4$ and two of the old points, can be determined. In case (1) it is

$$(\bar{x}_1, \bar{x}_2, \bar{x}_3) = (x_2, x_4, x_3),$$

while in case (2) it is

$$(\bar{x}_1, \bar{x}_2, \bar{x}_3) = (x_1, x_2, x_4).$$

[Fig. 8.7 Three-point pattern]

We then use this three-point pattern to fit another quadratic and continue. The pure quadratic fit procedure determines the next point from the current point and the previous two points. In the modification above, the next point is determined from the current point and the two out of the three last points that form a three-point pattern with it. This simple modification leads to global convergence.

To prove convergence, we note that each three-point pattern can be thought of as defining a vector $\mathbf{x}$ in $E^3$. Corresponding to an $\mathbf{x} = (x_1, x_2, x_3)$ such that $x_1$, $x_2$, $x_3$ form a three-point pattern with respect to $f$, we define $A(\mathbf{x}) = (\bar{x}_1, \bar{x}_2, \bar{x}_3)$ as discussed above. For completeness we must consider the case where two or more of the $x_i$, $i = 1, 2, 3$, are equal, since this may occur. The appropriate definitions are simply limiting cases of the earlier ones. For example, if $x_1 = x_2$, then $x_1$, $x_2$, $x_3$ form a three-point pattern if $f(x_2) \leq f(x_3)$ and $f'(x_2) < 0$ (which is the limiting case of $f(x_2) < f(x_1)$). A quadratic is fit in this case by using the values at the two distinct points and the derivative at the duplicated point. In case $x_1 = x_2 = x_3$, $(x_1, x_2, x_3)$ forms a three-point pattern if $f'(x_2) = 0$ and $f''(x_2) \geq 0$. With these definitions, the map $A$ is well defined. It is also continuous, since curve fitting depends continuously on the data.

We next define the solution set $\Gamma \subset E^3$ as the set of points $\mathbf{x}^* = (x^*, x^*, x^*)$ where $f'(x^*) = 0$. Finally, we let $Z(\mathbf{x}) = f(x_1) + f(x_2) + f(x_3)$. It is easy to see that $Z$ is a descent function for $A$. After application of $A$ one of the values $f(x_1)$, $f(x_2)$, $f(x_3)$ will be replaced by $f(x_4)$, and, by construction and the assumption that $f$ is unimodal, it will replace a strictly larger value. Of course, at $\mathbf{x}^* = (x^*, x^*, x^*)$ we have $A(\mathbf{x}^*) = \mathbf{x}^*$ and hence $Z(A(\mathbf{x}^*)) = Z(\mathbf{x}^*)$. Since all points are contained in the initial interval, we have all the requirements for the Global Convergence Theorem. Thus the process converges to the solution.
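A sketch of the modified algorithm just described, reusing the hypothetical `quadratic_fit_min` helper from the earlier listing. The update rules are cases (1) and (2) from the text plus their mirror images for $x_1 < x_4 < x_2$; the names, tolerance, and safeguards are my own, and a production version would also handle the degenerate cases (for example $x_4$ coinciding with $x_2$, or duplicated points) discussed in the proof.

```python
def safeguarded_quadratic_search(f, x1, x2, x3, tol=1e-8, max_iter=100):
    """Quadratic fit restricted to three-point patterns (Section 8.3).

    Assumes f(x1) >= f(x2) <= f(x3) with x1 < x2 < x3 on entry.
    """
    f1, f2, f3 = f(x1), f(x2), f(x3)
    for _ in range(max_iter):
        x4 = quadratic_fit_min(x1, x2, x3, f1, f2, f3)
        f4 = f(x4)
        if x2 < x4 < x3:
            if f4 <= f2:                    # case (1): new pattern (x2, x4, x3)
                x1, x2, x3, f1, f2, f3 = x2, x4, x3, f2, f4, f3
            else:                           # case (2): new pattern (x1, x2, x4)
                x3, f3 = x4, f4
        elif x1 < x4 < x2:                  # mirror cases on the left
            if f4 <= f2:
                x1, x2, x3, f1, f2, f3 = x1, x4, x2, f1, f4, f2
            else:
                x1, f1 = x4, f4
        if x3 - x1 < tol:
            break
    return x2
```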
[...]

$$E(x_{k+1}) = \left[1 - \frac{(g_k^T g_k)^2}{(g_k^T Q g_k)(g_k^T Q^{-1} g_k)}\right] E(x_k). \tag{33}$$

The proof is by direct computation. We have, setting $y_k = x_k - x^*$,

$$\frac{E(x_k) - E(x_{k+1})}{E(x_k)} = \frac{2\alpha_k\,g_k^T Q y_k - \alpha_k^2\,g_k^T Q g_k}{y_k^T Q y_k}.$$

Using $g_k = Qy_k$ we have

$$\frac{E(x_k) - E(x_{k+1})}{E(x_k)} = \frac{\dfrac{2(g_k^T g_k)^2}{g_k^T Q g_k} - \dfrac{(g_k^T g_k)^2}{g_k^T Q g_k}}{g_k^T Q^{-1} g_k} = \frac{(g_k^T g_k)^2}{(g_k^T Q g_k)(g_k^T Q^{-1} g_k)}.$$

In order to obtain a bound on the rate of convergence, we need a bound on the right-hand side of (33). The [...] diagonal with diagonal $\lambda_1, \lambda_2, \ldots, \lambda_n$. In this coordinate system we have

$$\frac{(x^T x)^2}{(x^T Q x)(x^T Q^{-1} x)} = \frac{\left(\sum_{i=1}^n x_i^2\right)^2}{\left(\sum_{i=1}^n \lambda_i x_i^2\right)\left(\sum_{i=1}^n x_i^2/\lambda_i\right)},$$

which can be written as

$$\frac{(x^T x)^2}{(x^T Q x)(x^T Q^{-1} x)} = \frac{1/\sum_{i=1}^n \xi_i\lambda_i}{\sum_{i=1}^n \xi_i/\lambda_i} \equiv \frac{\phi(\xi)}{\psi(\xi)}, \qquad \text{where } \xi_i = \frac{x_i^2}{\sum_{i=1}^n x_i^2}.$$

We have converted the expression to the ratio of two functions involving convex combinations, one a combination of the $\lambda_i$'s, the other a combination of the $1/\lambda_i$'s. The situation is shown pictorially in Fig. 8.10. The curve in the figure represents the function $1/\lambda$. Since $\sum_{i=1}^n \xi_i\lambda_i$ is a point between $\lambda_1$ and $\lambda_n$, the value of $\phi$ is a point on the curve. On the other hand, the value of $\psi$ is a convex combination of points on the curve, and its value corresponds to a point in the shaded region. For the same vector both functions [...] achieved for some $\xi = (\xi_1, 0, \ldots, 0, \xi_n)$ with $\xi_1 + \xi_n = 1$. Using the relation

$$\frac{\xi_1}{\lambda_1} + \frac{\xi_n}{\lambda_n} = \frac{\lambda_1 + \lambda_n - (\xi_1\lambda_1 + \xi_n\lambda_n)}{\lambda_1\lambda_n},$$

an appropriate bound is

$$\frac{\phi(\xi)}{\psi(\xi)} \geq \min_{\lambda_1 \leq \lambda \leq \lambda_n} \frac{\lambda_1\lambda_n}{\lambda(\lambda_1 + \lambda_n - \lambda)}.$$

The minimum is achieved at $\lambda = (\lambda_1 + \lambda_n)/2$, yielding the bound $4\lambda_1\lambda_n/(\lambda_1 + \lambda_n)^2$.

[Fig. 8.10 Kantorovich inequality]

Combining the above two lemmas, we obtain the central result on the convergence of the method of steepest descent.

[...] is

$$\left(\frac{A-a}{A+a}\right)^2 = \left(\frac{r-1}{r+1}\right)^2,$$

which clearly shows that convergence is slowed as $r$ increases. The ratio $r = A/a$, which is the single number associated with the matrix $Q$ that characterizes convergence, is often called the condition number of the matrix.

Example. Let us take

$$Q = \begin{bmatrix} 0.78 & -0.02 & -0.12 & -0.14 \\ -0.02 & 0.86 & -0.04 & 0.06 \\ -0.12 & -0.04 & 0.72 & -0.08 \\ -0.14 & 0.06 & -0.08 & 0.74 \end{bmatrix}, \qquad b = (0.76,\ 0.08,\ 1.12,\ 0.68).$$

For this matrix [...]

Table 8.1 Solution to Example

Step k    f(x_k)
0          0
1         -2.1563625
2         -2.1744062
3         -2.1746440
4         -2.1746585
5         -2.1746595
6         -2.1746595

Solution point $x^* = (1.534965,\ 0.1220097,\ 1.975156,\ 1.412954)$.

[...] we assume that the Hessian matrix is bounded above and below as $aI \leq F(x) \leq AI$ (thus $f$ is strongly convex). We present three analyses:

1. Exact line search. Given a point $x_k$, we have for any [...] with respect to $\alpha$, the inequality will hold for the two minima. The minimum of the left-hand side is $f(x_{k+1})$. The minimum of the right-hand side occurs at $\alpha = 1/A$, yielding the result

$$f(x_k) - f(x_{k+1}) \geq \frac{1}{2A}\,|g(x_k)|^2,$$

where $|g(x_k)|^2 \equiv g(x_k)^T g(x_k)$. Subtracting the optimal value $f^* = f(x^*)$ from both sides produces

$$f(x_{k+1}) - f^* \leq f(x_k) - f^* - \frac{1}{2A}\,|g(x_k)|^2. \tag{37}$$

In a similar way, for any $x$ there holds $f(x) \geq f(x_k) + g(x_k)^T(x - x_k) + \ldots$

[...] rule with $\varepsilon < .5$ and $\eta > 1$. Note first that the inequality $t \geq t^2$ for $0 \leq t \leq 1$ implies, by a change of variable, that $\alpha - \tfrac{1}{2}\alpha^2 A \geq \alpha/2$ for $0 \leq \alpha \leq 1/A$. Then using (36) we have, for $\alpha < 1/A$,

$$f(x_k - \alpha g_k) \leq f(x_k) - \alpha|g_k|^2 + \tfrac{1}{2}\alpha^2 A\,|g_k|^2 \leq f(x_k) - \tfrac{\alpha}{2}|g_k|^2 \leq f(x_k) - \varepsilon\alpha|g_k|^2 < f(x_k), \tag{41}$$

since $\varepsilon < .5$. This means that the first part of the stopping criterion is satisfied for $\alpha < 1/A$. The second part of the stopping [...] first criterion, and thus the final $\alpha$ must satisfy $\alpha \geq 1/(\eta A)$. Therefore the inequality of the first part of the criterion implies

$$f(x_{k+1}) \leq f(x_k) - \frac{\varepsilon}{\eta A}\,|g(x_k)|^2.$$

Subtracting $f^*$ from both sides,

$$f(x_{k+1}) - f^* \leq f(x_k) - f^* - \frac{\varepsilon}{\eta A}\,|g(x_k)|^2.$$

Finally, using (39) we obtain

$$f(x_{k+1}) - f^* \leq \left(1 - \frac{2\varepsilon a}{\eta A}\right)(f(x_k) - f^*).$$

Clearly $2\varepsilon a/(\eta A) < 1$, and hence there is linear convergence. Notice that if in fact $\varepsilon$ is chosen very close to $.5$ and $\eta$ is chosen [...]

[...] number of $\bar{Q}$, and it is clear that the convergence rate for the proposed method will be worse than for steepest descent applied to the original function. We can go further and actually estimate how much slower the proposed method is likely to be. If $r$ is large, we have

$$\text{steepest descent rate} = \left(\frac{r-1}{r+1}\right)^2 \simeq \left(1 - \frac{1}{r}\right)^4, \qquad \text{proposed method rate} = \left(\frac{r^2-1}{r^2+1}\right)^2 \simeq \left(1 - \frac{1}{r^2}\right)^4.$$

Since $(1 - 1/r^2)^r \simeq 1 - 1/r$, it follows [...]
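The backtracking rule analyzed in the fragments above translates into a short loop. The sketch below is my own construction (names and defaults assumed, not from the text): it accepts the first step length satisfying the sufficient-decrease test $f(x_k - \alpha g_k) \leq f(x_k) - \varepsilon\alpha|g_k|^2$ with $\varepsilon < .5$, shrinking $\alpha$ by the factor $\eta > 1$ each time the test fails, and uses it inside a steepest descent loop. The data are the $Q$ and $b$ of the example, so the iterates approach the solution point quoted in Table 8.1.

```python
import numpy as np

def backtracking_steepest_descent(f, grad, x0, eps=0.3, eta=2.0,
                                  alpha0=1.0, tol=1e-8, max_iter=500):
    """Steepest descent with an Armijo-style backtracking line search.

    Accepts the first alpha in {alpha0, alpha0/eta, alpha0/eta**2, ...}
    satisfying f(x - alpha*g) <= f(x) - eps*alpha*|g|^2, with eps < .5
    and eta > 1 as in the analysis above.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gg = g @ g
        if np.sqrt(gg) < tol:
            break
        alpha = alpha0
        while f(x - alpha * g) > f(x) - eps * alpha * gg:
            alpha /= eta                     # backtrack
        x = x - alpha * g
    return x

# Example: the quadratic f(x) = 0.5 x^T Q x - b^T x from the text's example.
Q = np.array([[0.78, -0.02, -0.12, -0.14],
              [-0.02, 0.86, -0.04, 0.06],
              [-0.12, -0.04, 0.72, -0.08],
              [-0.14, 0.06, -0.08, 0.74]])
b = np.array([0.76, 0.08, 1.12, 0.68])
x_star = backtracking_steepest_descent(lambda x: 0.5 * x @ Q @ x - b @ x,
                                       lambda x: Q @ x - b,
                                       x0=np.zeros(4))
```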