
Design and Optimization of Thermal Systems, Episode 3, Part 3


Therefore, the first two numbers in the series are unity and the nth number is the sum of the preceding two numbers. The Fibonacci series may thus be written as

n:     0   1   2   3   4   5   6    7    8    9    10   ...
F_n:   1   1   2   3   5   8   13   21   34   55   89   ...

It can be seen from this series that the numbers increase rapidly as n increases. The fact that, for n ≥ 2, each number is the sum of the preceding two numbers is used advantageously to distribute the trial runs or experiments.

The method starts by choosing the total number of runs n. This choice is based on the reduction ratio, as discussed later. The initial range of values L_o is assumed to be given. The Fibonacci search then places the first two runs at a distance d_1 = (F_{n-2}/F_n)L_o from either end of the initial interval. For n = 5, this implies placing the runs at d_1 = (F_3/F_5)L_o = (3/8)L_o from the two ends of the range. The simulation of the system is carried out at these two values of the design variable and the corresponding objective function determined. The values obtained are used to eliminate regions from further consideration, as discussed earlier and shown in Figure 9.3. The remaining interval of width L is now considered and runs are carried out at a distance d_2 from each end of this interval, where d_2 = (F_{n-3}/F_{n-1})L. The location of one of these runs coincides with that of a previous run, due to the nature of the series, and only one additional simulation is needed for the second set of points. Again, regions are eliminated from further consideration and points for the next iteration are placed at a distance d_3 from the two ends of the new interval, where d_3 = (F_{n-4}/F_{n-2})L, L being the width of this interval. Thus, the region of uncertainty is reduced. This process is continued until the nth run is reached. This run is placed just to the right of an earlier simulation near the middle of the interval left, and thus the region is further halved to yield the final interval of uncertainty L_f.
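The placement rule above translates directly into code. The following is a minimal Python sketch for a unimodal maximum; the function and argument names are ours, not the book's, and the small offset eps implements the final halving run described in the text.

```python
def fibonacci_search(f, a, b, n, eps=1e-3):
    """Fibonacci elimination search for the maximum of a unimodal f on [a, b].

    n is the total number of runs, fixed in advance as the method requires;
    eps is the small offset used for the final (nth) run. Returns the final
    interval of uncertainty, of width (b - a)/F_n when eps is neglected.
    """
    F = [1, 1]                        # F_0 = F_1 = 1, F_k = F_{k-1} + F_{k-2}
    while len(F) <= n:
        F.append(F[-1] + F[-2])

    d = F[n - 2] / F[n] * (b - a)     # d_1 = (F_{n-2}/F_n) L_o
    x1, x2 = a + d, b - d             # runs 1 and 2
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 3):            # runs 3 .. n-1: one new simulation each
        if f1 < f2:                   # a maximum cannot lie in [a, x1]
            a, x1, f1 = x1, x2, f2    # the old x2 survives as the new x1
            x2 = a + b - x1           # its mirror image is the one new run
            f2 = f(x2)
        else:                         # a maximum cannot lie in [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = a + b - x2
            f1 = f(x1)
    # One more elimination leaves a single point xm near the middle; the nth
    # run, just to the right of xm, halves the remaining interval.
    if f1 < f2:
        a, xm, fm = x1, x2, f2
    else:
        b, xm, fm = x2, x1, f1
    return (xm, b) if f(xm + eps) > fm else (a, xm)

# Example 9.1 below: U(x) = 7 + 17x - 2x^2 on [0, 8] with n = 5 runs
print(fibonacci_search(lambda x: 7 + 17 * x - 2 * x * x, 0.0, 8.0, 5))  # (4.0, 5.0)
```

Tracing this routine for n = 5 on [0, 8] reproduces exactly the sequence of runs at x = 3, 5, 6, 4, and 4 + eps worked out in the example that follows.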
The following simple example illustrates this procedure.

Example 9.1
For a heating system, the objective function U(x) is the heat delivered per unit energy consumed. The independent variable x represents the temperature setting and has an initial range of 0 to 8. A maximum in U is desired to operate the system most efficiently. The objective function is given as U(x) = 7 + 17x − 2x². Obtain the optimum using the Fibonacci search method.

Solution
Let us choose the total number of runs as five. Then the first two runs are made at d_1 = (F_3/F_5)L_o = (3/8) × 8 = 3 from either end, i.e., at x = 3 and 5. The value at x = 5 is found to be larger than that at x = 3. Therefore, for a maximum in U(x), the region 0 ≤ x ≤ 3 is eliminated, leaving the domain from 3 to 8. The next two points are located at d_2 = (F_2/F_4)L = (2/5) × 5 = 2 from either end of the new interval of width L = 5. Thus, the two points are located at x = 6 and at x = 5. This latter location has already been simulated. The results from the run at x = 6 indicate that the objective function is smaller than that at x = 5. Therefore, the region beyond x = 6 is eliminated, leaving the domain from x = 3 to 6 for future consideration. The next two points are located at d_3 = (F_1/F_3)L = (1/3) × 3 = 1 from the two ends of the domain, i.e., at x = 5 (which is already available) and at x = 4. Thus, simulation is carried out at x = 4, and the objective function is found to be greater than that at x = 5. The region beyond x = 5 is eliminated, leaving the domain 3 ≤ x ≤ 5. The fifth and final run is now made at a point just to the right of x = 4 to determine if the function is increasing or decreasing. The value of the function is found to be higher at this point, indicating an increasing function with increasing x. Therefore, the region 3 ≤ x ≤ 4 is eliminated, giving 4 ≤ x ≤ 5 as the final region of uncertainty. If x = 4.5 is chosen as the setting for optimal U, the maximum heat delivered per unit energy consumed is obtained as 43.125. The value of U at x = 0 is 7 and that at x = 8 is 15. Therefore, substantial savings are obtained by optimizing. Figure 9.7 shows the various steps in the determination of the final interval of uncertainty.

FIGURE 9.7 Use of the Fibonacci method to reduce the interval of uncertainty in Example 9.1.

The initial range is reduced to one-eighth of its value in just five runs. Since F_n = 8 for n = 5, this also indicates that the reduction ratio is F_n, a statement that can be proved more rigorously by mathematics as well as by taking additional examples. Thus, this search method converges very rapidly to the optimum, and only a few runs are often adequate for obtaining the desired accuracy level.

9.2.4 GOLDEN SECTION AND OTHER SEARCH METHODS

The golden section search method is derived from the Fibonacci search and, though not as efficient, is often more convenient to use. It is based on the fact that the ratio of two successive Fibonacci numbers is approximately 0.618 for n > 8, i.e., F_{n-1}/F_n ≈ 0.618. This ratio has been known for a long time and was of interest to the ancient Greeks as an aesthetic and desirable ratio of lengths in their constructions. The ratio of the height to the base of the Great Pyramid is also 0.618. The reciprocal of this ratio is 1.618, which has also been used as a number with magical properties. The term for the method itself comes from Euclid, who called the ratio the golden mean and pointed out that a length divided in this ratio results in the same ratio between the smaller and larger segments (Vanderplaats, 1984; Dieter, 2000).

The golden section search uses the ratio 0.618 to locate the trial runs or experiments in the search for the optimum. The first two runs are located at 0.618 L_o from the two ends of the initial range. As before, an interval is eliminated by inspection of the values of the objective function obtained at these points. The new interval of length L is then considered and the next two runs are located at 0.618 L from the two ends of this interval. The result for one of the points is known from the previous calculations, and only one more simulation is needed. Again, an interval is eliminated and the domain in which the optimum lies is reduced. This procedure is continued until the optimum is located within an interval of desired uncertainty. The final run may be made close to a previous run, at a location close to the middle of the interval, in order to reduce the uncertainty by approximately half, as done earlier for the Fibonacci search. Therefore, the total number of runs n need not be decided a priori in this method. This allows us to employ additional runs near the optimum if the curve is very steep there, or to use fewer points if the curve is flat. In the Fibonacci search, we are committed to the total number of runs and cannot change it based on the characteristics of the optimum.
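A minimal Python sketch of this procedure follows. The loop runs until the interval shrinks below a chosen tolerance rather than to a preset n, which is exactly the flexibility noted above; the names and the tolerance argument are ours.

```python
GOLDEN = 0.618  # approximate ratio of successive Fibonacci numbers, F_{n-1}/F_n

def golden_section_search(f, a, b, tol=1e-4):
    """Golden section search for the maximum of a unimodal f on [a, b].

    Unlike the Fibonacci search, the number of runs is not fixed a priori:
    the loop simply continues until the interval is smaller than tol.
    """
    # First two runs at 0.618 L_o from the two ends of the range.
    x1 = b - GOLDEN * (b - a)
    x2 = a + GOLDEN * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                       # eliminate [a, x1]
            a, x1, f1 = x1, x2, f2        # the surviving run becomes the new x1
            x2 = a + GOLDEN * (b - a)     # one new simulation
            f2 = f(x2)
        else:                             # eliminate [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = b - GOLDEN * (b - a)
            f1 = f(x1)
    return a, b

# Same objective as Example 9.1; the interval closes on the optimum near x = 4.25
a, b = golden_section_search(lambda x: 7 + 17 * x - 2 * x * x, 0.0, 8.0)
print((a + b) / 2)
```

The reuse of one point per step works because 1 − 0.618 ≈ 0.618², the defining property of the golden ratio; the code recomputes the new point each step, so the small rounding in 0.618 does no harm.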
In the golden section search, the trial runs are always located at 0.618 L from the two ends of the interval of width L at a given search step. This makes it somewhat less efficient than the Fibonacci search, particularly for small values of n.

Similarly, other search strategies have been developed to extract the optimum design. Several of these are combinations of the various methods presented here. For instance, an exhaustive search may be used to determine if the function is unimodal and to determine the subinterval in which the global optimum lies. This may be followed by more efficient methods such as the Fibonacci search. An unsystematic search, though generally very inefficient, is nevertheless used in some cases because of its inherent simplicity and because the physical nature of the problem may guide the user to the narrow domain in which the optimum lies. In general, information available on the system is very valuable in the search for the optimum because it can be used to narrow the range, determine the acceptable level of uncertainty in the variables, and choose the most appropriate strategy.

9.2.5 COMPARISON OF DIFFERENT ELIMINATION METHODS

The reduction ratio R, defined in Equation (9.3), gives the ratio of the initial interval of uncertainty to the interval obtained after n runs. Therefore, it is a measure of the efficiency of the method. It can also be used to select the number of runs needed to obtain a desired uncertainty in locating the optimum. The reduction ratios for the various methods presented here for the optimization of a single-variable problem are given in Table 9.1. Here, the effect of the separation ε between pairs of runs on the reduction ratio is neglected. If ε is retained, the final interval can be shown to be

L_f = L_o/F_n + ε, for the Fibonacci search   (9.7)

L_f = L_o/2^{n/2} + ε(1 − 1/2^{n/2}), for the sequential dichotomous search   (9.8)

when the second point of each pair is always located to the right of the first point at a separation of ε (Stoecker, 1989). Thus, the reduction ratios given in Table 9.1 are obtained when ε is neglected. The corresponding results are also shown graphically in Figure 9.8.

FIGURE 9.8 Reduction ratio R as a function of the number of runs n for different elimination search methods (Fibonacci, sequential dichotomous, and uniform exhaustive).

TABLE 9.1
Reduction Ratios for Single-Variable Search Methods

                                            Reduction Ratio
Search Method             General Formula    n = 5    n = 12
Uniform exhaustive        (n + 1)/2          3        6.5
Uniform dichotomous       (n + 2)/2          3.5      7.0
Sequential dichotomous    2^{n/2}            5.66     64
Fibonacci                 F_n                8        233
Golden section            -                  6.86     199

It is clearly seen that the Fibonacci search is an extremely efficient method and is, therefore, widely used. It is particularly valuable in multivariable optimization problems, which are based on alternating single-variable searches, and in the optimization of large and complicated systems that require substantial computing time and effort for each simulation run. For small and relatively simple systems, the exhaustive search provides a convenient, though not very efficient, approach to optimization.
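The entries in Table 9.1 are easy to verify numerically. The short sketch below uses the formulas from the table; the golden-section formula (1/0.618)^(n−1) is our inference, since the table leaves that cell blank, but it reproduces the printed values 6.86 and 199.

```python
def fib(n):
    """F_n with F_0 = F_1 = 1, matching the series in Section 9.2.3."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

# Reduction ratios from Table 9.1 (separation eps neglected)
ratios = {
    "uniform exhaustive":     lambda n: (n + 1) / 2,
    "uniform dichotomous":    lambda n: (n + 2) / 2,
    "sequential dichotomous": lambda n: 2 ** (n / 2),
    "Fibonacci":              fib,
    "golden section":         lambda n: (1 / 0.618) ** (n - 1),  # inferred
}

for name, R in ratios.items():
    print(f"{name:24s}  n=5: {R(5):7.2f}   n=12: {R(12):8.2f}")
```

Running this prints 8 and 233 for the Fibonacci search, confirming its dominance over the other elimination schemes as n grows.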
Example 9.2
Formulate the optimization problem given in Example 8.6 and Example 5.3 in terms of the maximum temperature T_o as the independent variable and solve it by the uniform exhaustive search and Fibonacci search methods to reduce the interval of uncertainty to 0.1 of its initial value.

Solution
The initial interval of uncertainty in T_o is from 40°C to 100°C, i.e., 60°C. This is to be reduced to an interval of 6°C by the use of the two elimination methods. Using the reduction ratios given in Table 9.1, we have

(n + 1)/2 = 10, or n = 19, for the uniform exhaustive search method

and

F_n ≥ 10, or n = 6, for the Fibonacci method

The objective function is given by the equation

U = 35A + 208V = f(T_o)

and the dependence of A and V on T_o is given by the equations

A = 5833.3/[290.2 − 2(T_o − 20)]        V = 50/(T_o − 20)

Therefore, T_o may be varied over the given domain of 40°C to 100°C and the objective function determined using these equations. This problem thus illustrates the use of results from the model as one proceeds with the optimization. For complicated thermal systems, numerical simulation will generally be needed to obtain the desired results.

From the preceding calculation of the required value of n, we may choose n as 20 for the uniform exhaustive search, for convenience and to ensure that at least a tenfold reduction in the interval of uncertainty is achieved. The value of n is taken as 6 for the Fibonacci search since this gives a reduction ratio of 13. For the uniform exhaustive search, the width of each subinterval is 60/21, and 20 computations are carried out at uniformly distributed points. The point where the minimum value of U occurs, as well as the two points on either side of this point, yields the following results:

T_o      A        V       U
51.43    25.68    1.59    1229.75
54.29    26.34    1.46    1225.37
57.14    27.04    1.35    1226.46

Therefore, the minimum lies in the interval 51.43 to 57.14. If the value at the midpoint, T_o = 54.29°C, is chosen, the cost is 1225.37. These values are close to those obtained in Example 8.6 by using the Lagrange multiplier method.

The Fibonacci search method is more involved because decisions on eliminating regions have to be taken. Six runs are made, with 5/13, 3/8, 2/5, and 1/3 of the interval of uncertainty taken at successive steps to locate two points at equal distances from the boundaries. The first step requires two calculations and the next three require only one calculation each because points are repeated. The final calculation is taken at a point just to the right of a point near the middle of the interval of uncertainty left after five runs. The results obtained are summarized as

T_o      A        V       U          Action Taken
63.08    28.62    1.16    1243.01    -
76.92    33.11    0.88    1341.72    Eliminate region beyond 76.92
53.85    26.24    1.48    1225.67    Eliminate region beyond 63.08
49.23    25.19    1.71    1237.56    Eliminate region 40 to 49.23
58.46    27.38    1.30    1228.58    Eliminate region beyond 58.46
53.90    26.25    1.47    1225.62    Eliminate region 49.23 to 53.85

The last point is just to the right of 53.85, which is close to the middle of the region 49.23 to 58.46 left after five runs. Therefore, the final region of uncertainty is from 53.85 to 58.46, which has a width of 4.61°C. The optimum design may be taken as a point in this region. The results agree with the earlier results from the Lagrange multiplier and uniform exhaustive search methods. Therefore, only six runs are needed to reduce the interval of uncertainty to less than one-tenth of its initial value. The Fibonacci method is very efficient and is extensively used, though the programming is more involved than for the exhaustive search method.
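With the model equations above (the bracketed denominator in A is our reading of the garbled original, checked against all four tabulated A values), the exhaustive scan takes only a few lines of Python; the Fibonacci routine sketched earlier applies to the minimization by negating U.

```python
def area(To):
    """A(T_o) from the model equations above (reconstructed)."""
    return 5833.3 / (290.2 - 2.0 * (To - 20.0))

def volume(To):
    """V(T_o) from the model equations above."""
    return 50.0 / (To - 20.0)

def cost(To):
    """Objective function U = 35A + 208V."""
    return 35.0 * area(To) + 208.0 * volume(To)

# Uniform exhaustive search: 20 runs at uniformly spaced interior points,
# subinterval width 60/21, over the domain 40 C to 100 C.
points = [40.0 + (k + 1) * 60.0 / 21.0 for k in range(20)]
best = min(points, key=cost)
print(f"T_o = {best:.2f} C, U = {cost(best):.2f}")   # ~54.29 C, U ~ 1225

# The fibonacci_search sketch given earlier applies directly, maximizing -U:
#   lo, hi = fibonacci_search(lambda To: -cost(To), 40.0, 100.0, n=6)
# which reproduces the final interval 53.85 to 58.46 found in the example.
```

Stepping through the Fibonacci variant by hand gives runs at 63.08, 76.92, 53.85, 49.23, 58.46, and 53.85 + eps, exactly the sequence in the table above.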
9.3 UNCONSTRAINED SEARCH WITH MULTIPLE VARIABLES

Let us now consider the search for an optimal design when the system is governed by two or more independent variables. For ease of visualization and discussion, we will largely consider only two variables, later extending the techniques to the larger number of variables that arise in more complicated systems. However, the complexity of the problem rises sharply as the number of variables increases and, therefore, attention is generally directed at the most important variables, usually restricting these to two or three. In addition, many practical thermal systems can be well characterized in terms of two or three predominant variables. Examples of this include the length and diameter of a heat exchanger, fluid flow rate and evaporator temperature in a refrigeration system, dimensions of a cooling tower and the energy rejected by it, dimensions of a combustion chamber and the fuel flow rate, and so on.

In order to graphically depict the iterative approach to the optimum design, a convenient method is the use of contours, or lines of constant values of the objective function. Figure 9.9 shows a typical contour plot where each contour represents a particular value of the objective function and the maximum or minimum is indicated by the innermost contour. This plot is similar to the ones used in topography to represent different heights or elevations in mountains. The peak represents a maximum and the valley represents a minimum. Increasing height on the mountain is thus similar to advancing toward the center of the contour plot. Such a graphical representation works well for a two-variable problem since the plane of the figure is adequate to show the movement toward the peak or the valley. However, a three-dimensional representation is needed for three variables, with each contour replaced by a surface. This becomes quite involved for visualization, and the complexity increases with increasing number of variables. However, the extension of the mathematical treatment to a larger number of variables is straightforward and can be employed for more complicated problems.

The methods presented here for multivariable, unconstrained optimization are based on moving the calculation in the direction of increasing objective function for a maximum and in the direction of decreasing objective function for a minimum. Therefore, the procedure for determining a maximum is similar to climbing toward the peak of a mountain or hill, so these methods are known as hill-climbing techniques. The three methods discussed in detail here are lattice search, univariate search, and steepest ascent. Elimination methods, which reduce the interval of uncertainty by eliminating regions, may also be combined with these techniques, particularly with a univariate search, to obtain the optimum.

FIGURE 9.9 Lattice search method in a two-variable space.

9.3.1 LATTICE SEARCH

This search method is based on calculating the objective function U in the neighborhood of a chosen starting point and then moving this point to the location that has the largest value of U, if the search is for a maximum. Thus, the calculation moves in the direction of increasing value of the objective function for locating a maximum. The maximum is reached when the value at the central point is higher than the values at its neighboring points. Though the search for a maximum in U is considered here, a similar procedure may be followed for a minimum, moving the calculation in the direction of decreasing value of the objective function.
A grid lattice is superimposed on the design domain, as shown in Figure 9.9 in terms of the contour plots on a two-dimensional space. The starting point may be chosen based on available information on the location of the maximum; otherwise, a point away from the boundaries of the region may be selected, such as point 1 in the figure. The objective function is evaluated at all the neighboring points, 2–9. If the maximum value of the objective function turns out to be at point 9, then this point becomes the central point for the next set of calculations. Since the values at points 1, 2, 8, and 9 are known, only the values at the remaining five points, 10 through 14, are needed. Again, the trial point is moved to the location where the objective function is the largest. This process is continued until the maximum value appears at the central point itself.

Clearly, this is not a very efficient approach: it involves an exhaustive search in the neighborhood of a central point, which is gradually moved toward the optimum. However, it is more efficient than using an exhaustive search over the entire region, since only a portion of the region is involved in a lattice search and the previously calculated values are reused at each step. The efficiency of a lattice search, compared to an exhaustive search, is expected to be even higher for a larger number of variables and finer grids. It is also obvious that the convergence to the optimum depends on the grid. It is best to start with a coarse grid, employing only a few grid points across the region. Once the maximum is found with this grid, the grid may be refined and the previous maximum taken as the starting point. Further grid refinement may be used as the calculations approach the optimum. The method is fairly robust and versatile. It can even be used for discontinuous functions and for discrete values, as long as the objective function can be evaluated. The approach can easily be extended to a problem with more than two variables. However, the number of points in the neighborhood of the central point, including this point, rises sharply as the number of variables increases, being 3² for two, 3³ for three, 3⁴ for four variables, and so on. A sketch of the two-variable case follows.
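The sketch below is a minimal Python version of the two-variable lattice search for a maximum; names are illustrative, and the evaluation cache mirrors the reuse of previously calculated values described above. Refining the step and restarting from the point found, as the text recommends, is left to the caller.

```python
import itertools

def lattice_search(f, x0, y0, step):
    """Hill-climbing lattice search for a maximum of f(x, y).

    Moves the central point to the best of its 3 x 3 neighborhood until no
    neighbor is better; in practice one would then refine `step` and restart
    from the point found, as the text suggests.
    """
    cache = {}                        # previously calculated values are reused
    def val(p):
        if p not in cache:
            cache[p] = f(*p)
        return cache[p]

    center = (x0, y0)
    while True:
        neighbors = [(center[0] + i * step, center[1] + j * step)
                     for i, j in itertools.product((-1, 0, 1), repeat=2)]
        best = max(neighbors, key=val)
        if val(best) <= val(center):  # central point beats all its neighbors
            return center, val(center)
        center = best

# A smooth test hill with its peak at (2, 1)
print(lattice_search(lambda x, y: -(x - 2) ** 2 - (y - 1) ** 2, 0.0, 0.0, 0.5))
```

Each move costs at most five new evaluations of the nine neighborhood points, which is the economy noted in the text.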
9.3.2 UNIVARIATE SEARCH

A univariate search involves optimizing the objective function with respect to one variable at a time. Therefore, the multivariable problem is reduced to a series of single-variable optimization problems, with the process converging to the optimum as the variables are alternated. This procedure is shown graphically in Figure 9.10. A starting point is chosen based on available information on the system or as a point away from the boundaries of the region. First, one of the variables, say x, is held constant and the function is optimized with respect to the other variable y. Point A represents the optimum thus obtained. Then y is held constant at the value at point A and the function is optimized with respect to x to obtain the optimum given by point B. Again, x is held constant at the value at point B and y is varied to obtain the optimum, given by point C. This process is continued, alternating the variable that is changed while keeping the others constant, until the optimum is attained. This is indicated by the change in the objective function, from one step to the next, becoming less than a chosen convergence criterion or tolerance. Therefore, the two-variable problem is reduced to two single-variable problems applied alternately.

FIGURE 9.10 Various steps in the univariate search method.

The basic procedure can easily be extended to three or more independent variables. In solving the single-variable problem, the search methods presented earlier, such as the Fibonacci and golden section searches, may be used. This provides a very useful method for optimizing thermal systems, particularly those that have discrete values for the design variables and those that have to be simulated for each trial run. Efficient search methods, rather than exhaustive searches, are of interest in such cases. Calculus methods may also be used if continuous, differentiable functions are involved, as illustrated in the following example. There are certain circumstances where a univariate search may fail, such as those where ridges and very sharp changes occur in the objective function (Stoecker, 1989). However, by varying the starting point, the interval of search, and the method for the single-variable search, such difficulties can often be overcome.

Example 9.3
The objective function U, which represents the cost of a fan and duct system, is given in terms of the design variables x and y, where x represents the fan capacity and y the duct length, as

U = x²/6 + 4/(xy) + 3y

Both x and y are real and positive. Using the univariate search, obtain the optimum value of U and the corresponding values of x and y. Is this optimum a minimum or a maximum?

Solution
Calculus methods may be used for the two single-variable optimization problems that are obtained in the univariate search. If y is kept constant, the value of x at the optimum is given by

∂U/∂x = 2x/6 − 4/(x²y) = 0,  i.e.,  x = (12/y)^{1/3}

Similarly, if x is held constant, the value of y at the optimum is given by

∂U/∂y = −4/(xy²) + 3 = 0,  i.e.,  y = [4/(3x)]^{1/2}

Since the only information available on x and y is that these are real and greater than 0, let us choose x = y = 0.5 as the starting point. If a solution is not obtained, the starting point may be varied. First x is held constant and y is varied to obtain an optimum value of U. Then y is held constant and x is varied to obtain an optimum value of U. In both cases, the preceding equations are used.

[...]

The results obtained are tabulated as

x        y        U
0.5      1.633    9.840
1.944    1.633    6.788
1.944    0.828    5.598
2.438    0.828    5.456
2.438    0.740    5.428
2.532    0.740    5.423
2.532    0.726    5.423
2.548    0.726    5.422
2.548    0.723    5.422
2.550    0.723    5.422
2.550    0.723    5.422

For each step, one of the variables is held constant, as indicated, and the optimum is obtained in terms of the [...]

[...] is repeated. The results obtained are shown in the following table:

x_1      x_2      U         G          Next Move
2.0      2.061    32.303    0          Increment x_2
1.735    2.261    29.338    −18.294    Return to constraint
1.735    2.266    29.364    0          Increment x_2
1.524    2.466    27.614    −10.935    Return to constraint
1.524    2.470    27.637    0          Increment x_2
1.352    2.670    26.668    −6.275     Return to constraint
1.352    2.675    26.690    0          Increment x_2
1.211    2.875    26.243    3.205      Return to constraint
1.211    2.879    26.262    0          Increment x_2
1.093    3.079    26.174    −0.128     Return to constraint
1.093    3.083    26.192    0          [...]

[...] each point obtained by changing x by Δx and y by Δy, where Δy is obtained from the preceding relationship between x and y. The starting point is taken as x = y = 0.5. The results obtained for different values of Δx are

Δx       No. of Iterations    x       y        U
0.5      3                    2.0     0.699    5.625
0.1      20                   2.5     0.731    5.423
0.05     40                   2.5     0.731    5.423
0.01     205                  2.55    0.723    5.422
0.005    410                  2.55    0.722    5.422
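Returning to Example 9.3, the univariate iteration is short enough to script directly. The sketch below alternates the two closed-form single-variable optima derived above; the objective U = x²/6 + 4/(xy) + 3y is our reading of the garbled original, and it reproduces the tabulated values, e.g., U(0.5, 1.633) ≈ 9.840 and the converged point (2.550, 0.723, 5.422).

```python
def U(x, y):
    """Cost of the fan and duct system of Example 9.3 (reconstructed)."""
    return x * x / 6 + 4 / (x * y) + 3 * y

def univariate_search(x, y, tol=1e-6, max_steps=50):
    """Alternate the closed-form one-variable optima of Example 9.3."""
    u_new = u_old = U(x, y)
    for _ in range(max_steps):
        y = (4 / (3 * x)) ** 0.5       # dU/dy = 0 with x held constant
        x = (12 / y) ** (1 / 3)        # dU/dx = 0 with y held constant
        u_new = U(x, y)
        if abs(u_old - u_new) < tol:   # convergence criterion on U
            break
        u_old = u_new
    return x, y, u_new

print(univariate_search(0.5, 0.5))     # -> x ~ 2.551, y ~ 0.723, U ~ 5.422
```

Since both second derivatives are positive at the converged point, the optimum is a minimum, consistent with the steadily decreasing U values in the table.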
[...] different values of r, are given in the following table:

r        x       y       xy       U
0.3      2.15    3.86    8.30     28.55
0.5      2.33    4.20    9.79     31.86
1.0      2.39    4.58    10.96    34.32
10.0     2.46    4.84    11.90    36.29
100.0    2.48    4.83    11.99    36.48

Different subinterval sizes were used in the exhaustive search to obtain the desired accuracy in the results. It is seen that at small values of r, the constraint xy = 12 is not satisfied, and the optimum value [...]

[...] U and stopping at the minimum value. This becomes the new trial point and the process is repeated. The results obtained in terms of trial points, with the same starting point as the first approach, are

x        y        U
0.5      0.5      -
0.995    0.951    7.245
1.490    1.340    6.139
1.985    0.721    5.615
2.09     0.844    5.528
2.245    0.718    5.475
2.295    0.782    5.453
2.385    0.717    5.438
2.41     0.752    5.431
2.47     0.716    5.427
2.48     0.733    5.424
2.54     0.733    5.423
2.54     0.733    5.423

Again, convergence near the optimum is quite [...]

[...] direction of the greatest rate of change of U, the number of trial runs needed to reach the optimum is expected to be relatively small and the method to be very efficient. However, it does require the evaluation of gradients in [...]

FIGURE 9.11 Steepest ascent method, shown in terms of (a) the climb toward the peak of a hill and (b) in terms of constant [...]

[...] space. One of the variables is held constant and the two constraint equations are solved to determine the other two variables. Once on the constraints, the move is taken as tangential to both constraints. Therefore, the increments in the three variables are linked by the equations

ΔG_1 = (∂G_1/∂x_1)Δx_1 + (∂G_1/∂x_2)Δx_2 + (∂G_1/∂x_3)Δx_3 = 0    (9.26)

ΔG_2 = (∂G_2/∂x_1)Δx_1 + (∂G_2/∂x_2)Δx_2 + (∂G_2/∂x_3)Δx_3 = 0    (9.27)

where G_1(x_1, x_2, x_3) = 0 and G_2(x_1, x_2, x_3) = 0 are the two equality constraints. Therefore, if the increment in one of the variables, say x_1, is chosen, the other two, Δx_2 and Δx_3, may be calculated from the preceding equations. The change in the objective function U(x_1, x_2, x_3) is given by the equation

ΔU = (∂U/∂x_1)Δx_1 + (∂U/∂x_2)Δx_2 + (∂U/∂x_3)Δx_3    (9.28)

The step size Δx_1 is chosen, and the increments Δx_2 and Δx_3 are calculated from Equation (9.26) and [...]

[...] (9.19)

FIGURE 9.13 The penalty function method for an acceptable domain defined by inequality constraints.

where the maximum values in the ranges are used to satisfy the given inequalities. Figure 9.13 shows the penalty function for different values of the penalty parameter r. The feasible domain and the minimum [...]

[...] determination of the optimum is rarely needed in practical problems because the variables are generally adjusted for the final design on the basis of convenience and available standard system parts. In the preceding example, x_1* may be taken as 1.1 and x_2* as 3.1 for defining the optimum. This example illustrates the hemstitching procedure for finding the optimum of a constrained problem. The evaluation of the [...]
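The penalty-function idea sketched around Figure 9.13 can be illustrated generically. Since the objective and constraint of the excerpted example are not fully recoverable here, the code below is a self-contained sketch of the composite-function approach under our own toy problem: an equality constraint G = 0 is folded into the objective with a penalty parameter r, and the unconstrained minimum is found for increasing r (any unconstrained routine would do; a coarse exhaustive scan is used for simplicity).

```python
def constrained_min(f, g, r, xs, ys):
    """Minimize the composite V = f + r * g^2 over a grid of candidates.

    As r grows, the unconstrained minimum of V is pushed toward the
    feasible surface g = 0 (generic sketch, not the book's example).
    """
    V = lambda x, y: f(x, y) + r * g(x, y) ** 2
    return min(((x, y) for x in xs for y in ys), key=lambda p: V(*p))

# Toy problem: minimize f = (x - 3)^2 + (y - 2)^2 subject to x * y = 12
f = lambda x, y: (x - 3) ** 2 + (y - 2) ** 2
g = lambda x, y: x * y - 12            # equality constraint g = 0

grid = [0.02 * k for k in range(1, 301)]   # 0.02 .. 6.00 in both variables
for r in (0.3, 0.5, 1.0, 10.0, 100.0):
    x, y = constrained_min(f, g, r, grid, grid)
    print(f"r = {r:6.1f}:  x = {x:.2f}, y = {y:.2f}, xy = {x * y:.2f}")
```

As in the table above, the product xy falls short of the constraint value at small r and approaches it as r increases, which is exactly the behavior the text attributes to the penalty parameter.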
