Introduction to Optimum Design, Part 5


8.1.3 Convergence of Algorithms

The central idea behind numerical methods of optimization is to search for the optimum point in an iterative manner, generating a sequence of designs. The success of a method depends on the guarantee of convergence of this sequence to the optimum point. The property of convergence to a local optimum point irrespective of the starting point is called global convergence of the numerical method. It is desirable to employ such convergent numerical methods in practice since they are more reliable. For unconstrained problems, a convergent algorithm must reduce the cost function at each iteration until a minimum point is reached. Note that the algorithms converge only to a local minimum point, as opposed to a global minimum, since they use only local information about the cost function and its derivatives in the search process. Methods to search for global minima are described in Chapter 18.

8.1.4 Rate of Convergence

In practice, a numerical method may take a large number of iterations to reach the optimum point. Therefore, it is important to employ methods having a faster rate of convergence. The rate of convergence of an algorithm is usually measured by the number of iterations and function evaluations needed to obtain an acceptable solution; it is a measure of how fast the difference between the solution point and its estimates goes to zero. Faster algorithms usually use second-order information about the problem functions when calculating the search direction. They are known as Newton methods. Many algorithms approximate the second-order information using only first-order information; they are known as quasi-Newton methods and are described in Chapter 9.

8.2 Basic Ideas and Algorithms for Step Size Determination

Unconstrained numerical optimization methods are based on the iterative formula given in Eq. (8.1). As discussed earlier, the problem of obtaining the design change Δx is usually decomposed into two subproblems, as expressed in Eq. (8.3): (1) direction finding and (2) step size determination. We need to discuss numerical methods for solving both subproblems. In the following paragraphs, we first discuss the problem of step size determination. This is often called the one-dimensional search (or line search) problem. Such problems are simpler to solve, which is one reason for discussing them first. Following the one-dimensional minimization methods, two methods for finding a "desirable" search direction d in the design space are described in Sections 8.3 and 8.4.

8.2.1 Definition of One-Dimensional Minimization Subproblem

For an optimization problem with several variables, the direction finding problem must be solved first. Then a step size must be determined by searching for the minimum of the cost function along the search direction. This is always a one-dimensional minimization problem. To see how the line search is used in multidimensional problems, let us assume for the moment that a search direction d^(k) has been found. Then, in Eqs. (8.1) and (8.3), the scalar α_k is the only unknown. Since the best step size α_k is not yet known, we replace it by α in Eq. (8.3). Then, using Eqs. (8.1) and (8.3), the cost function f(x) is given as f(x^(k+1)) = f(x^(k) + α d^(k)). Since d^(k) is known, the right side becomes a function of the scalar parameter α only.
This process is summarized in the following equations:

Design update:

    x^(k+1) = x^(k) + α d^(k)                                                    (8.9a)

Cost function evaluation:

    f(x^(k+1)) = f(x^(k) + α d^(k)) = f̄(α)                                       (8.9b)

where f̄(α) is the new function with α as the only independent variable (in the sequel, we drop the overbar for functions of a single variable). Note that at α = 0, f(0) = f(x^(k)) from Eq. (8.9b), which is the current value of the cost function. It is important to understand this reduction of a function of n variables to a function of only one variable, since this fundamental step is used in almost all optimization methods. It is also important to understand the geometric significance of Eq. (8.9b). We shall elaborate on these ideas later.

If x^(k) is not a minimum point, then it is possible to find a descent direction d^(k) at the point and reduce the cost function further. Recall that a small move along d^(k) reduces the cost function. Therefore, using Eqs. (8.5) and (8.9b), the descent condition for the cost function can be expressed as the inequality

    f(α) < f(0)                                                                  (8.10)

Since f(α) is a function of a single variable, we can plot f(α) versus α. To satisfy Inequality (8.10), the curve of f(α) versus α must have a negative slope at the point α = 0. Such a curve is shown by the solid line in Fig. 8-3. If the search direction is one of descent, the graph of f(α) versus α cannot be the dashed curve in the figure, because any positive α would then cause the function f(α) to increase, violating Inequality (8.10); this would also contradict the fact that d^(k) is a direction of descent for the cost function. Therefore, the graph of f(α) versus α must be like the solid curve in Fig. 8-3 for all problems. In fact, the slope of the curve f(α) at α = 0 is calculated as f′(0) = c^(k) · d^(k), which is negative, as seen in Eq. (8.8).

[FIGURE 8-3 Graph of f(α) versus α.]

This discussion shows that if d^(k) is a descent direction, then α must always be a positive scalar in Eq. (8.8). Thus, the one-dimensional minimization problem is to find α_k = α such that f(α) is minimized.
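The reduction of f(x) to the single-variable function f(α) of Eq. (8.9b) is easy to express in code. The following sketch is an illustration added here, not part of the text; the quadratic cost function, the point x^(k), and the use of the steepest-descent direction are arbitrary choices made only to produce concrete numbers. It builds f(α) as a closure and checks the descent condition through the slope f′(0) = c · d.

```python
import numpy as np

def make_line_function(f, x, d):
    """Return the single-variable function f_line(alpha) = f(x + alpha*d) of Eq. (8.9b)."""
    return lambda alpha: f(x + alpha * d)

# Illustrative data (not from the text): a simple quadratic cost function and its gradient.
f = lambda x: x[0]**2 + 2.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])

x_k = np.array([1.0, 1.0])       # current design x^(k)
d_k = -grad(x_k)                 # steepest-descent direction, a valid descent direction

f_line = make_line_function(f, x_k, d_k)

slope_at_zero = grad(x_k) @ d_k  # f'(0) = c . d; negative for a descent direction
print(f_line(0.0), slope_at_zero)
```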
8.2.2 Analytical Method to Compute Step Size

If f(α) is a simple function, then we can use the analytical procedure (the necessary and sufficient conditions of Section 4.3) to determine α_k. The necessary condition is df(α_k)/dα = 0, and the sufficient condition is d²f(α_k)/dα² > 0. We shall illustrate the analytical line search procedure with Example 8.2. Note that differentiation of f(x^(k+1)) in Eq. (8.9b) with respect to α, using the chain rule of differentiation and setting it to zero, gives

    df(x^(k+1))/dα = ∂f(x^(k+1))/∂x · dx^(k+1)/dα = ∇f(x^(k+1)) · d^(k) = c^(k+1) · d^(k) = 0        (8.11)

Since the dot product of the two vectors in Eq. (8.11) is zero, the gradient of the cost function at the new point is orthogonal to the search direction at the kth iteration, i.e., c^(k+1) is normal to d^(k). The condition in Eq. (8.11) is important for two reasons: (1) it can be used directly to obtain an equation in terms of the step size α whose smallest root gives the exact step size, and (2) it can be used to check the accuracy of the step size in a numerical procedure, and it is therefore called the line search termination criterion. Many times numerical line search methods give only an approximate or inexact value of the step size along the search direction. The line search termination criterion is useful for determining the accuracy of such a step size, i.e., for checking whether c^(k+1) · d^(k) = 0.

EXAMPLE 8.2 Analytical Step Size Determination

Let the direction of change for the function

    f(x) = 3x1² + 2x1x2 + 2x2² + 7                                               (a)

at the point (1, 2) be given as (-1, -1). Compute the step size α_k to minimize f(x) in the given direction.

Solution. For the given point x^(k) = (1, 2), f(x^(k)) = 22 and d^(k) = (-1, -1). We first check to see if d^(k) is a direction of descent using Inequality (8.8). The gradient of the function at (1, 2) is c^(k) = (10, 10), and c^(k) · d^(k) = 10(-1) + 10(-1) = -20 < 0. Therefore, (-1, -1) is a direction of descent. The new point x^(k+1) using Eq. (8.9a) is given as

    x1^(k+1) = 1 - α,  x2^(k+1) = 2 - α                                          (b)

Substituting these equations into the cost function of Eq. (a), we get

    f(x^(k+1)) = 3(1 - α)² + 2(1 - α)(2 - α) + 2(2 - α)² + 7 = 7α² - 20α + 22 = f(α)        (c)

Therefore, along the given direction (-1, -1), f(x) becomes a function of the single variable α. Note from Eq. (c) that f(0) = 22, which is the cost function value at the current point, and that f′(0) = -20 < 0, which is the slope of f(α) at α = 0 (also recall that f′(0) = c^(k) · d^(k)). Now, using the necessary and sufficient conditions of optimality for f(α), we obtain

    df/dα = 14α - 20 = 0,  α_k = 10/7;  d²f/dα² = 14 > 0                          (d)

Therefore, α_k = 10/7 minimizes f(x) in the direction (-1, -1). The new point is

    x^(k+1) = (1, 2) + (10/7)(-1, -1) = (-3/7, 4/7)                              (e)

Substituting the new design (-3/7, 4/7) into the cost function f(x), we find the new value of the cost function as 54/7. This is a substantial reduction from the cost function value of 22 at the previous point. Note that Eq. (d) for the calculation of the step size α can also be obtained by directly using the condition given in Eq. (8.11). Using Eq. (b), the gradient of f at the new design point in terms of α is given as

    c^(k+1) = (6x1 + 2x2, 2x1 + 4x2) = (10 - 8α, 10 - 6α)                        (f)

Using the condition of Eq. (8.11), c^(k+1) · d^(k) = 0 gives 14α - 20 = 0, which is the same as Eq. (d).
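The algebra of Example 8.2 can be checked with a few lines of symbolic computation. The sketch below is an illustration added here, assuming SymPy is available; it reproduces f(α) of Eq. (c), the step size α_k = 10/7, and the new cost value 54/7.

```python
import sympy as sp

alpha = sp.symbols('alpha', nonnegative=True)
x1, x2 = sp.symbols('x1 x2')

f = 3*x1**2 + 2*x1*x2 + 2*x2**2 + 7            # cost function, Eq. (a)
x_new = {x1: 1 - alpha, x2: 2 - alpha}         # design update along d = (-1, -1), Eq. (b)

f_line = sp.expand(f.subs(x_new))              # Eq. (c): 7*alpha**2 - 20*alpha + 22
alpha_k = sp.solve(sp.diff(f_line, alpha), alpha)[0]   # necessary condition, Eq. (d)

print(f_line)                                  # 7*alpha**2 - 20*alpha + 22
print(alpha_k, sp.diff(f_line, alpha, 2))      # 10/7 and 14 > 0 (sufficiency)
print(sp.simplify(f_line.subs(alpha, alpha_k)))  # new cost value: 54/7
```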
8.2.3 Concepts Related to Numerical Methods to Compute Step Size

In Example 8.2, it was possible to simplify expressions and obtain an explicit form for the function f(α). Also, the functional form of f(α) was quite simple. Therefore, it was possible to use the necessary and sufficient conditions of optimality to find the minimum of f(α) and analytically calculate the step size α_k. For many problems, it is not possible to obtain an explicit expression for f(α). Moreover, even if the functional form of f(α) is known, it may be too complicated to lend itself to analytical solution. Therefore, a numerical method must be used to find α_k to minimize f(x) in the known direction d^(k).

The numerical line search process is itself iterative, requiring several iterations before a minimum point is reached. Many line search techniques are based on comparing function values at several points along the search direction. Usually, we must make some assumptions on the form of the line search function to compute the step size by numerical methods. For example, it must be assumed that a minimum exists and that it is unique in some interval of interest. A function with this property is called a unimodal function. Figure 8-4 shows the graph of such a function, which decreases continuously until the minimum point is reached. Comparing Figs. 8-3 and 8-4, we observe that f(α) is a unimodal function in some interval; therefore, it has a unique minimum there.

[FIGURE 8-4 Unimodal function f(α).]

Most one-dimensional search methods assume the line search function to be a unimodal function. This may appear to be a severe restriction on the methods; however, it is not. For functions that are not unimodal, we can think of locating only a local minimum point that is closest to the starting point, i.e., closest to α = 0. This is illustrated in Fig. 8-5, where the function f(α) is not unimodal for 0 ≤ α ≤ ᾱ₀. Points A, B, and C are all local minima. If we restrict α to lie between 0 and ᾱ, however, there is only one local minimum point A, because the function f(α) is unimodal for 0 ≤ α ≤ ᾱ. Thus, the assumption of unimodality is not as restrictive as it appears.

[FIGURE 8-5 Nonunimodal function f(α) for 0 ≤ α ≤ ᾱ₀ (unimodal for 0 ≤ α ≤ ᾱ).]

The line search problem then is to find α in an interval 0 ≤ α ≤ ᾱ at which the function f(α) has a global minimum. This statement of the problem, however, requires some modification. Since we are dealing with numerical methods, it is not possible to locate the exact minimum point α*. In fact, what we determine is the interval in which the minimum lies, i.e., some lower and upper limits α_l and α_u for α*. The interval (α_l, α_u) is called the interval of uncertainty and is designated as I = α_u - α_l. Most numerical methods iteratively reduce the interval of uncertainty until it satisfies a specified tolerance ε, i.e., I < ε. Once this stopping criterion is satisfied, α* is taken as 0.5(α_l + α_u). Methods based on the preceding philosophy are called interval reducing methods. In this chapter, we shall only present methods based on this idea. The basic procedure for these methods can be divided into two phases. In Phase I, the location of the minimum point is bracketed and the initial interval of uncertainty is established. In Phase II, the interval of uncertainty is refined by eliminating regions that cannot contain the minimum. This is done by computing and comparing function values in the interval of uncertainty. We shall describe the two phases for these methods in more detail in the following subsections.

It is important to note that the performance of most optimization methods depends heavily on the step size calculation procedure. Therefore, it is not surprising that numerous procedures have been developed and evaluated for step size calculation. In the sequel, we describe two rudimentary methods to give the student a flavor of the calculations needed to evaluate a step size. In Chapter 9, some more advanced methods based on the concept of an inaccurate line search are described and discussed.

8.2.4 Equal Interval Search

As mentioned earlier, the basic idea of any interval reducing method is to successively reduce the interval of uncertainty to a small acceptable value. To clearly discuss the ideas, we start with a very simple-minded approach called the equal interval search method. The idea is quite elementary, as illustrated in Fig. 8-6. In the interval 0 ≤ α ≤ ᾱ, the function f(α) is evaluated at several points using a uniform grid in Phase I. To do this, we select a small number δ and evaluate the function at the α values of δ, 2δ, 3δ, ..., qδ, (q + 1)δ, and so on, as shown in Fig. 8-6(A). We compare the values of the function at two successive points, say q and (q + 1). If the function at the point q is larger than that at the next point (q + 1), i.e., f(qδ) > f((q + 1)δ), the minimum point has not been surpassed yet. However, if the function has started to increase, i.e.,

    f(qδ) < f((q + 1)δ)                                                          (8.12)

then the minimum has been surpassed. Note that once Eq. (8.12) is satisfied for points q and (q + 1), the minimum can be between either the points (q - 1) and q or the points q and (q + 1). To account for both possibilities, we take the minimum to lie between the points (q - 1) and (q + 1). Thus, lower and upper limits for the interval of uncertainty are established as

    α_l = (q - 1)δ,  α_u = (q + 1)δ,  I = α_u - α_l = 2δ                          (8.13)

Establishment of the lower and upper limits on the minimum value of α indicates the end of Phase I. In Phase II, we restart the search process from the lower end of the interval of uncertainty, α = α_l, with some reduced value for the increment δ, say rδ, where r << 1. The preceding process of Phase I is then repeated from α = α_l with the reduced increment, and the minimum is again bracketed. Now the interval of uncertainty I is reduced to 2rδ. This is illustrated in Fig. 8-6(B). The value of the increment is further reduced, to say r²δ, and the process is repeated until the interval of uncertainty is reduced to an acceptable value ε. Note that the method is convergent for unimodal functions and can be easily coded into a computer program.

[FIGURE 8-6 Equal interval search process. (A) Phase I: Initial bracketing of minimum. (B) Phase II: Reducing the interval of uncertainty.]

The efficiency of a method such as the equal interval search depends on the number of function evaluations needed to achieve the desired accuracy. Clearly, this depends on the initial choice for the value of δ. If δ is very small, the process may take many function evaluations to initially bracket the minimum. An advantage of using a smaller δ, however, is that the interval of uncertainty at the end of Phase I is fairly small, so subsequent refinements of the interval of uncertainty require fewer function evaluations. It is usually advantageous to start with a larger value of δ and quickly bracket the minimum point; the process is then continued until the accuracy requirement is satisfied.
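A minimal implementation of the two-phase equal interval search might look as follows. This is a sketch added for illustration, not the book's code; it assumes f(α) is unimodal for α ≥ 0 and that the starting increment δ eventually steps past the minimum.

```python
def equal_interval_search(f, delta=0.1, eps=1e-4, r=0.1, max_steps=100000):
    """Two-phase equal interval search for the minimum of a unimodal f(alpha), alpha >= 0."""
    alpha_l = 0.0
    while True:
        # Phase I (and each later pass): march forward from alpha_l with step `delta`
        # until the function starts to increase, Eq. (8.12).
        a, fa = alpha_l, f(alpha_l)
        for _ in range(max_steps):
            a_next, f_next = a + delta, f(a + delta)
            if f_next > fa:              # minimum has been surpassed
                break
            a, fa = a_next, f_next
        # Interval of uncertainty, Eq. (8.13): one step before to one step after.
        alpha_l, alpha_u = max(a - delta, 0.0), a + delta
        if alpha_u - alpha_l < eps:      # stopping criterion I < eps
            return 0.5 * (alpha_l + alpha_u)
        delta *= r                       # Phase II: restart from alpha_l with reduced step

# Example: the line function of Example 8.2, f(alpha) = 7*alpha**2 - 20*alpha + 22.
print(equal_interval_search(lambda a: 7*a**2 - 20*a + 22))   # approximately 10/7 = 1.4286
```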
8.2.5 Alternate Equal Interval Search

A slightly different computational procedure can be followed to reduce the interval of uncertainty in Phase II once the minimum has been bracketed in Phase I. This procedure is a precursor to the more efficient golden section search presented in the next subsection. The procedure is to evaluate the function at two new points, say α_a and α_b, in the interval of uncertainty. The points α_a and α_b are located at distances of I/3 and 2I/3 from the lower limit α_l, respectively, where I = α_u - α_l. That is,

    α_a = α_l + I/3,  α_b = α_l + 2I/3 = α_u - I/3

This is shown in Fig. 8-7. Next, the function is evaluated at the two new points, and the values are designated f(α_a) and f(α_b). The following two conditions must then be checked:

1. If f(α_a) < f(α_b), then the minimum lies between α_l and α_b. The right one-third interval between α_b and α_u is discarded. The new limits for the interval of uncertainty are α′_l = α_l and α′_u = α_b (the prime on α is used to indicate revised limits for the interval of uncertainty). Therefore, the reduced interval of uncertainty is I′ = α′_u - α′_l = α_b - α_l. The procedure is repeated with the new limits.
2. If f(α_a) > f(α_b), then the minimum lies between α_a and α_u. The left one-third interval between α_l and α_a is discarded. The procedure is repeated with α′_l = α_a and α′_u = α_u (I′ = α′_u - α′_l).

[FIGURE 8-7 Alternate equal interval search process.]

With the preceding calculations, the interval of uncertainty is reduced to I′ = 2I/3 after every set of two function evaluations. The entire process is continued until the interval of uncertainty is reduced to an acceptable value.
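Phase II of this procedure can be sketched in a few lines. The code below is an illustration added here, not from the text; it assumes the minimum has already been bracketed in (α_l, α_u) and that f(α) is unimodal there. The two branches correspond to conditions 1 and 2 above.

```python
def alternate_equal_interval(f, alpha_l, alpha_u, eps=1e-4):
    """Shrink a bracketing interval (alpha_l, alpha_u) of a unimodal f to width < eps."""
    while alpha_u - alpha_l > eps:
        I = alpha_u - alpha_l
        alpha_a = alpha_l + I / 3.0         # point one third along the interval
        alpha_b = alpha_l + 2.0 * I / 3.0   # point two thirds along the interval
        if f(alpha_a) < f(alpha_b):
            alpha_u = alpha_b               # condition 1: discard the right one-third
        else:
            alpha_l = alpha_a               # condition 2: discard the left one-third
    return 0.5 * (alpha_l + alpha_u)

# Example: bracket [0, 3] for the line function of Example 8.2.
print(alternate_equal_interval(lambda a: 7*a**2 - 20*a + 22, 0.0, 3.0))   # approximately 10/7
```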
8.2.6 Golden Section Search

Golden section search is an improvement over the alternate equal interval search and is one of the better methods in the class of interval reducing methods. The basic idea of the method is still the same: evaluate the function at predetermined points, compare them to bracket the minimum in Phase I, and then converge on the minimum point in Phase II. The method, however, uses fewer function evaluations to reach the minimum point than other similar methods. The number of function evaluations is reduced during both phases, the initial bracketing phase as well as the interval reducing phase.

Initial Bracketing of Minimum—Phase I. In the equal interval methods, the initially selected increment δ is kept fixed to bracket the minimum initially. This can be an inefficient process if δ happens to be a small number. An alternate procedure is to vary the increment at each step, i.e., multiply it by a constant r > 1. This way the initial bracketing of the minimum is rapid; however, the length of the initial interval of uncertainty is increased. The golden section search is such a variable interval search method. In this method the value of r is not selected arbitrarily; it is selected as the golden ratio, which can be derived as 1.618 in several different ways. One derivation is based on the Fibonacci sequence, defined as

    F_0 = 1;  F_1 = 1;  F_n = F_(n-1) + F_(n-2),  n = 2, 3, ...                  (a)

Any number of the Fibonacci sequence for n > 1 is obtained by adding the previous two numbers, so the sequence is 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, .... The sequence has the property

    F_n / F_(n-1) → 1.618  as  n → ∞                                             (b)

That is, as n becomes large, the ratio between two successive numbers F_n and F_(n-1) in the Fibonacci sequence reaches a constant value of 1.618, or (1 + √5)/2. This golden ratio has many other interesting properties that will be exploited in the one-dimensional search procedure. One property is that 1/1.618 = 0.618.

Figure 8-8 illustrates the process of initially bracketing the minimum using a sequence of larger increments based on the golden ratio. In the figure, starting at q = 0, we evaluate f(α) at α = δ, where δ > 0 is a small number. We check to see if the value f(δ) is smaller than the value f(0). If it is, we then take an increment of 1.618δ in the step size (i.e., the increment is 1.618 times the previous increment δ). In this way we evaluate the function at the following points and compare them:

    q = 0:  α_0 = δ
    q = 1:  α_1 = δ + 1.618δ = 2.618δ = Σ_(j=0..1) δ(1.618)^j
    q = 2:  α_2 = 2.618δ + 1.618(1.618δ) = 5.236δ = Σ_(j=0..2) δ(1.618)^j
    q = 3:  α_3 = 5.236δ + δ(1.618)³ = 9.472δ = Σ_(j=0..3) δ(1.618)^j
    ...

In general, we continue to evaluate the function at the points

    α_q = Σ_(j=0..q) δ(1.618)^j;  q = 0, 1, 2, ...                               (8.14)

Let us assume that the function at α_(q-1) is smaller than that at the previous point α_(q-2) and the next point α_q, i.e.,

    f(α_(q-1)) < f(α_(q-2))  and  f(α_(q-1)) < f(α_q)                            (8.15)

Then the minimum point has been surpassed. Actually, the minimum point lies between the previous two intervals, i.e., between α_(q-2) and α_q, as in the equal interval search. Therefore, upper and lower limits on the interval of uncertainty are

    α_u = α_q = Σ_(j=0..q) δ(1.618)^j;  α_l = α_(q-2) = Σ_(j=0..q-2) δ(1.618)^j   (8.16)

Thus, the initial interval of uncertainty is calculated as

    I = α_u - α_l = δ(1.618)^(q-1) + δ(1.618)^q = δ(1.618)^(q-1)(1 + 1.618) = 2.618δ(1.618)^(q-1)      (8.17)

[FIGURE 8-8 Initial bracketing of the minimum point in the golden section method.]
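The golden ratio and the trial points of Eq. (8.14) are easy to reproduce; the small snippet below is an illustration added here, not part of the text.

```python
# The ratio of successive Fibonacci numbers approaches the golden ratio (1 + 5**0.5)/2 = 1.618...
fib = [1, 1]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])
print(fib[-1] / fib[-2], 1.0 / 1.618)      # about 1.618034 and 0.618

# Trial points of Eq. (8.14): alpha_q = sum of delta*1.618**j for j = 0..q  (here delta = 1).
delta = 1.0
alphas = [sum(delta * 1.618**j for j in range(q + 1)) for q in range(4)]
print(alphas)                              # [1.0, 2.618, 5.236..., 9.471...], as in Fig. 8-8
```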
Reduction of Interval of Uncertainty—Phase II. The next task is to reduce the interval of uncertainty by evaluating and comparing function values at some points in the established interval of uncertainty I. The method uses two function values within the interval I, just as in the alternate equal interval search of Fig. 8-7. However, the points α_a and α_b are not located at I/3 from either end of the interval of uncertainty. Instead, they are located at a distance of 0.382I (or 0.618I) from either end. The factor 0.382 is related to the golden ratio, as we shall see in the following.

To see how the factor 0.618 is determined, consider two points symmetrically located at a distance of tI from either end of the interval, as shown in Fig. 8-9(A). Comparing the function values at α_a and α_b, either the left (α_l, α_a) or the right (α_b, α_u) portion of the interval gets discarded, because the minimum cannot lie there. Let us assume that the right portion gets discarded, as shown in Fig. 8-9(B), so α′_l and α′_u are the new lower and upper bounds on the minimum. The new interval of uncertainty is I′ = tI. There is one point in the new interval at which the function value is already known. It is required that this point be located at a distance of tI′ from the left end; therefore, tI′ = (1 - t)I. Since I′ = tI, this gives the equation t² + t - 1 = 0. The positive root of this equation is

    t = (-1 + √5)/2 = 0.618

Thus the two points are located at a distance of 0.618I or 0.382I from either end of the interval.

[FIGURE 8-9 Golden section partition.]

The golden section search can be initiated once the initial interval of uncertainty is known. If the initial bracketing is done using the variable step increment (with a factor of 1.618, which is 1/0.618), then the function value at one of the points, α_(q-1), is already known; it turns out that α_(q-1) is automatically the point α_a. This can be seen by multiplying the initial interval I in Eq. (8.17) by 0.382. If the preceding procedure is not used to initially bracket the minimum, then the points α_a and α_b have to be calculated by the golden section procedure.

Algorithm for One-Dimensional Search by Golden Sections. Find α to minimize f(α).

Step 1. For a chosen small number δ, let q be the smallest integer to satisfy Eq. (8.15), where α_q, α_(q-1), and α_(q-2) are calculated from Eq. (8.14). The upper and lower bounds on α* (the optimum value for α) are given by Eq. (8.16).

Step 2. Compute f(α_b), where α_b = α_l + 0.618I (the interval of uncertainty is I = α_u - α_l). Note that, at the first iteration, α_a = α_l + 0.382I = α_(q-1), and so f(α_a) is already known.

Step 3. Compare f(α_a) and f(α_b), and go to (i), (ii), or (iii).
(i) If f(α_a) < f(α_b), then the minimum point α* lies between α_l and α_b, i.e., α_l ≤ α* ≤ α_b. The new limits for the reduced interval of uncertainty are α′_l = α_l and α′_u = α_b. Also, α′_b = α_a. Compute f(α′_a), where α′_a = α′_l + 0.382(α′_u - α′_l), and go to Step 4.
(ii) If f(α_a) > f(α_b), then the minimum point α* lies between α_a and α_u, i.e., α_a ≤ α* ≤ α_u. Similar to the procedure in Step 3(i), let α′_l = α_a and α′_u = α_u, so that α′_a = α_b. Compute f(α′_b), where α′_b = α′_l + 0.618(α′_u - α′_l), and go to Step 4.
(iii) If f(α_a) = f(α_b), let α_l = α_a and α_u = α_b and return to Step 2.

Step 4. If the new interval of uncertainty I′ = α′_u - α′_l is small enough to satisfy a stopping criterion (i.e., I′ < ε), let α* = (α′_u + α′_l)/2 and stop. Otherwise, delete the primes on α′_l, α′_a, and α′_b and return to Step 3.

Example 8.3 illustrates the golden section method for step size calculation.
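A compact implementation of Steps 1 through 4 might look as follows. This is a sketch added for illustration, not the book's program; it assumes f(α) is unimodal for α ≥ 0 and that f(δ) < f(0), so Phase I brackets the minimum. Phase I grows the increment by the factor 1.618 as in Eq. (8.14), and Phase II keeps one interior point from the previous iteration so that only one new function evaluation is needed per iteration.

```python
def golden_section_search(f, delta=0.1, eps=1e-6, max_expand=100):
    """Golden section search for the minimum of a unimodal f(alpha), alpha >= 0 (sketch)."""
    GR = 0.618  # golden section factor, (sqrt(5) - 1)/2

    # Phase I (Step 1): bracket the minimum with increments growing by 1.618, Eqs. (8.14)-(8.16).
    a_prev2, a_prev1 = 0.0, delta
    f_prev2, f_prev1 = f(a_prev2), f(a_prev1)
    step = delta
    a_new, f_new = a_prev1, f_prev1
    for _ in range(max_expand):
        step *= 1.618
        a_new = a_prev1 + step
        f_new = f(a_new)
        if f_prev1 < f_prev2 and f_prev1 < f_new:   # Eq. (8.15): minimum surpassed
            break
        a_prev2, f_prev2, a_prev1, f_prev1 = a_prev1, f_prev1, a_new, f_new
    alpha_l, alpha_u = a_prev2, a_new               # Eq. (8.16)

    # Phase II (Steps 2-4): shrink the interval, reusing one interior point each iteration.
    I = alpha_u - alpha_l
    alpha_a, alpha_b = alpha_l + (1 - GR) * I, alpha_l + GR * I
    f_a, f_b = f(alpha_a), f(alpha_b)
    while alpha_u - alpha_l > eps:                  # Step 4: stopping criterion
        if f_a < f_b:                               # Step 3(i): minimum in (alpha_l, alpha_b)
            alpha_u, alpha_b, f_b = alpha_b, alpha_a, f_a
            alpha_a = alpha_l + (1 - GR) * (alpha_u - alpha_l)
            f_a = f(alpha_a)
        else:                                       # Step 3(ii): minimum in (alpha_a, alpha_u)
            alpha_l, alpha_a, f_a = alpha_a, alpha_b, f_b
            alpha_b = alpha_l + GR * (alpha_u - alpha_l)
            f_b = f(alpha_b)
    return 0.5 * (alpha_l + alpha_u)

# Example: the line function of Example 8.2 has its minimum at 10/7 = 1.4286.
print(golden_section_search(lambda a: 7*a**2 - 20*a + 22))
```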
[...] Golden section search iterations for Example 8.3 (function values in brackets):

    α_l                  α_a                  α_b                  α_u                  I
    1.385438 [0.454824]  1.386031 [0.454823]  1.386398 [0.454823]  1.386991 [0.454824]  0.001553
    1.386031 [0.454823]  1.386398 [0.454823]  1.386624 [0.454823]  1.386991 [0.454823]  0.000960   (iteration 17)

    α* = 0.5(1.386398 + 1.386624) = 1.386511;  f(α*) = 0.454823

Note: The new calculation for each iteration is shown as boldfaced and shaded; the arrows indicate the direction of transfer of data.
[...]

    β_1 = [‖c^(1)‖ / ‖c^(0)‖]² = 0.015633

    d^(1) = -c^(1) + β_1 d^(0) = (4.500, 4.438, -4.828) + 0.015633 (-12, -40, -48) = (4.31241, 3.81268, -5.57838)

5. The step size in the direction d^(1) is calculated as α = 0.31566. The design is updated as

    x^(2) = x^(1) + α d^(1) = (0.0956, -2.348, 2.381) + α (4.31241, 3.81268, -5.57838) = (1.4566, -1.1447, 0.6205)      (g)

Calculating [...]

x*; T is a vector tangent to the curve C at the point x*; u is any unit vector; and c is the gradient vector at x*. According to the above property, vectors c and T are normal to each other, i.e., their dot product is zero, c · T = 0. Proof. To show this, [...]

[FIGURE 9-3 Gradient vector for the surface f(x) = constant at the point x*.]

sufficient conditions to solve for the optimum step length. In general, a numerical one-dimensional search will have to be used to calculate α. Using the analytic approach to solve for the optimum α, we get

    df(α)/dα = 0:  32α - 8 = 0, or α_0 = 0.25;  d²f(α)/dα² = 32 > 0

Therefore, the sufficiency condition for a minimum of f(α) is satisfied. 6. Updating the design (x^(0) + [...]

f(x) = 25x1² + x2² at the point x^(0) = (0.6, 4).

Solution. Figure 9-4 shows, in the x1-x2 plane, the contours of value 25 and 100 for the function f. The value of the function at (0.6, 4) is f(0.6, 4) = 25. The gradient of the function at (0.6, 4) is given as

    c = ∇f(0.6, 4) = (∂f/∂x1, ∂f/∂x2) = (50x1, 2x2) = (30, 8);  ‖c‖ = √(30·30 + 8·8) = 31.048

Therefore, a unit vector along the gradient is given as C = c/‖c‖ = (0.966235, 0.257663). Using the given function, a vector tangent to the curve at the point (0.6, 4) is given as t = (-4, 15). This vector is obtained by differentiating the equation for the curve 25x1² + x2² = 25 at [...]

[FIGURE 9-4 Contours of the function f = 25x1² + x2² for f = 25 and 100.]

Solve the problems using Solver: 8.78 Exercise 8.52; 8.79 Exercise 8.53; 8.80 Exercise 8.54; 8.81 Exercise 8.55; 8.82 Exercise 8.56; 8.83 Exercise 8.57; 8.84 Exercise 8.58; 8.85 Exercise 8.59; 8.86 Exercise 8.60; 8.87 Exercise 8.61.

9 More on Numerical Methods for Unconstrained Optimum Design. Upon completion of this chapter, you will be able to: use some alternate procedures for step [...]

[...] (1, 1)
8.53 f(x1, x2) = 12.096x1² + 21.504x2² - 1.7321x1 - x2; starting design (1, 1)
8.54 f(x1, x2) = 6.983x1² + 12.415x2² - x1; starting design (2, 1)
8.55 f(x1, x2) = 12.096x1² + 21.504x2² - x2; starting design (1, 2)
8.56 f(x1, x2) = 25x1² + 20x2² - 2x1 - x2; starting design (3, 1)
8.57 f(x1, x2, x3) = x1² + 2x2² + 2x3² + 2x1x2 + 2x2x3; starting design (1, 1, 1)
8.58 f(x1, [...]

TABLE 8-3 Optimum solution for Example 8.6 with the conjugate gradient method: f(x1, x2, x3) = x1² + 2x2² + 2x3² + 2x1x2 + 2x2x3
    Starting values of design variables: 2, 4, 10
    Optimum design variables: -6.4550E-10, -5.8410E-10, 1.3150E-10
    Optimum cost function value: 6.8520E-20
    Norm of the gradient at optimum: 3.0512E-05
    Number of iterations: 4
    Number of function evaluations: [...]

[...] Exercise 8.53; 8.69 Exercise 8.54; 8.70 Exercise 8.55; 8.71 Exercise 8.56; 8.72 Exercise 8.57; 8.73 Exercise 8.58; 8.74 Exercise 8.59; 8.75 Exercise 8.60; 8.76 Exercise 8.61. 8.77 Write a computer program to implement the conjugate gradient method (or modify the steepest descent program given in Appendix D). Solve Exercises 8.52 to 8.61 using your program.
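The conjugate gradient fragments and Table 8-3 above refer to the quadratic function f(x) = x1² + 2x2² + 2x3² + 2x1x2 + 2x2x3. A minimal sketch of a Fletcher-Reeves conjugate gradient iteration for this function is given below; it is an illustration added here, not the book's program. It uses the β update quoted in the fragment above, the starting point (2, 4, 10) from Table 8-3, and the exact step size α = -(c·d)/(dᵀAd) that is available for a quadratic instead of a numerical line search.

```python
import numpy as np

# Hessian of f(x) = x1^2 + 2*x2^2 + 2*x3^2 + 2*x1*x2 + 2*x2*x3, so that f(x) = 0.5 * x.A.x
A = np.array([[2.0, 2.0, 0.0],
              [2.0, 4.0, 2.0],
              [0.0, 2.0, 4.0]])

def grad(x):
    return A @ x                   # gradient c = A x for this quadratic

x = np.array([2.0, 4.0, 10.0])     # starting design from Table 8-3
c = grad(x)
d = -c                             # first direction: steepest descent
for _ in range(10):
    if np.linalg.norm(c) < 1e-10:
        break
    alpha = -(c @ d) / (d @ A @ d)         # exact line search step for a quadratic
    x = x + alpha * d                      # design update, Eq. (8.9a)
    c_new = grad(x)
    beta = (c_new @ c_new) / (c @ c)       # beta = [norm(c_new)/norm(c)]^2, as in the fragment
    d = -c_new + beta * d                  # conjugate direction update
    c = c_new

print(x, 0.5 * x @ A @ x)   # essentially the zero vector and a cost near zero, as in Table 8-3
```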
