Design and Optimization of Thermal Systems, Episode 3, Part 1

8 Lagrange Multipliers

8.1 INTRODUCTION TO CALCULUS METHODS

We are all quite familiar, from courses in mathematics, with the determination of the maximum or minimum of a function by the use of calculus. If the function is continuous and differentiable, its derivative becomes zero at the extremum. For a function y(x), this condition is written as

    dy/dx = 0    (8.1)

where x is the independent variable. The basis for this property may be explained in terms of the extrema shown in Figure 8.1. As the maximum at point A is approached, the value of the function y(x) increases, and just beyond this point it decreases, resulting in zero gradient at A. Similarly, the value of the function decreases up to the minimum at point B and increases beyond B, giving a zero slope at B. In order to determine whether the point is a maximum or a minimum, the second derivative is calculated. Since the slope goes from positive to negative, through zero, at the maximum, the second derivative is negative. Similarly, the slope increases at a minimum and, thus, the second derivative is positive. These conditions may be written as (Keisler, 1986)

    For a maximum:  d²y/dx² < 0    (8.2)
    For a minimum:  d²y/dx² > 0    (8.3)

These conditions apply for nonlinear functions y(x) and, therefore, calculus methods are useful for thermal systems, which are generally governed by nonlinear expressions. However, both the function and its derivative must be continuous for the preceding analysis to apply. Thus, by setting the gradient equal to zero, the locations of the extrema may be obtained, and the second derivative may then be used to determine the nature of each extremum. There are cases where both the first and the second derivatives are zero. This indicates an inflection point, as sketched in Figure 8.1(c), a saddle point, or a flat curve, as in a ridge or valley.

It must be noted that the conditions just mentioned indicate only a local extremum. There may be several such local extrema in the given domain. Since our interest lies in the overall maximum or minimum in the entire domain for optimizing the system, we would seek the global extremum, which is usually unique and represents the largest or smallest value of the objective function. The following simple example illustrates the use of the preceding procedure for optimization.

FIGURE 8.1 Sketches showing (a) a maximum, (b) a minimum, and (c) an inflection point in a function y(x) plotted against the independent variable x.

Example 8.1

Apply the calculus-based optimization technique just given to the minimization of cost C for hot rolling a given amount of metal. This cost is expressed in terms of the mass flow rate ṁ of the material as

    C = 3.5 ṁ^1.4 + 14.8 ṁ^-2.2

where the first term on the right-hand side represents equipment costs, which increase as the flow rate increases, and the second term represents the operating costs, which go down as ṁ increases.

Solution

The extremum is given by

    dC/dṁ = (3.5)(1.4) ṁ^0.4 - (14.8)(2.2) ṁ^-3.2 = 4.9 ṁ^0.4 - 32.56 ṁ^-3.2 = 0

Therefore,

    ṁ = (32.56/4.9)^(1/3.6) = 1.692

The second derivative is obtained as

    d²C/dṁ² = 1.96 ṁ^-0.6 + 104.19 ṁ^-4.2

which is positive because the flow rate ṁ is positive. This implies that the optimization technique has yielded a minimum of the objective function C, as desired. A quick numerical check of this result is sketched below.
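As a check of Example 8.1, the short script below (a minimal sketch, not part of the original text) evaluates the stationary point of C and its second derivative numerically; the variable m stands for the mass flow rate ṁ.

```python
# Verify Example 8.1: minimize C(m) = 3.5 m^1.4 + 14.8 m^-2.2
def C(m):
    return 3.5 * m**1.4 + 14.8 * m**-2.2

def dC(m):
    return 4.9 * m**0.4 - 32.56 * m**-3.2

def d2C(m):
    return 1.96 * m**-0.6 + 104.19 * m**-4.2

# Setting dC/dm = 0 gives m = (32.56 / 4.9)**(1 / 3.6)
m_opt = (32.56 / 4.9) ** (1.0 / 3.6)

print(f"m*  = {m_opt:.3f}")       # ~1.692
print(f"C*  = {C(m_opt):.3f}")    # ~11.96
print(f"dC  = {dC(m_opt):.2e}")   # ~0 at the stationary point
print(f"d2C = {d2C(m_opt):.3f}")  # positive, so the point is a minimum
```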
Therefore, the minimum cost is obtained at ṁ = 1.692, and the corresponding value of C is 11.962.

The preceding discussion and the simple example serve to illustrate the use of calculus for optimization of an unconstrained problem with a single independent variable. However, such simple problems are rarely encountered when dealing with the optimization of practical thermal systems. Usually, several independent variables are involved and constraints may have to be satisfied. This considerably complicates the application of calculus to extract the optimal solution. In addition, the use of calculus methods requires that any constraints in the problem must be equality constraints. This limitation is often circumvented by converting inequality constraints into equality ones, as outlined in Chapter 7. In many practical circumstances, the objective function is not readily available in the form of continuous and differentiable functions, such as the one given in Example 8.1. However, curve fitting of numerical and experimental data may be used in some cases to yield continuous expressions that characterize the given system and that can then be used to obtain the optimum.

Calculus methods, whenever applicable, provide a fast and convenient way to determine the optimum. They also indicate the basic considerations in optimization and the characteristics of the problem under consideration. In addition, some of the ideas and procedures used in these methods are employed in other techniques. Therefore, it is important to understand this optimization method and the basic concepts introduced by this approach. This chapter presents the Lagrange multiplier method, which is based on the differentiation of the objective function and the constraints. The physical interpretation of this approach is brought out, and the method is applied to both constrained and unconstrained optimization. The sensitivity of the optimum to changes in the constraints is discussed. Finally, the application of this method to thermal systems is considered.

8.2 THE LAGRANGE MULTIPLIER METHOD

This is the most important and useful method for optimization based on calculus. It can be used to optimize functions that depend on a number of independent variables and when functional constraints are involved. As such, it can be applied to a wide range of practical circumstances, provided the objective function and the constraints can be expressed as continuous and differentiable functions. In addition, only equality constraints can be considered in the optimization process.

8.2.1 BASIC APPROACH

The mathematical statement of the optimization problem was given in the preceding chapter as

    U(x1, x2, x3, ..., xn) → Optimum    (8.4)

subject to the constraints

    G1(x1, x2, x3, ..., xn) = 0
    G2(x1, x2, x3, ..., xn) = 0
    ...
    Gm(x1, x2, x3, ..., xn) = 0    (8.5)

where U is the objective function that is to be optimized and Gi = 0, with i varying from 1 to m, represents the m equality constraints. As mentioned earlier, if inequality constraints arise in the problem, these must be converted into equality constraints in order to apply this method. In addition, in several cases, inequality constraints simply define the acceptable domain and are not used in the optimization process. Nevertheless, the solution obtained is checked to ensure that these constraints are satisfied. A small sketch of how such a problem may be set up in code, including the conversion of an inequality constraint, is given below.
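The following sketch illustrates one way the standard form of Equations (8.4) and (8.5) might be laid out in code. The objective and constraint functions here are made up purely for illustration and are not from the text; the slack-variable conversion follows the general idea, referred to above, of recasting an inequality constraint as an equality.

```python
# Toy setup of the standard form (8.4)-(8.5): one objective U(x) and a list
# of equality constraints G_i(x) = 0.  All functions are hypothetical.
def U(x):                      # objective, x = [x1, x2]
    return 2.0 * x[0]**2 + 3.0 * x[1]**2

def G1(x):                     # equality constraint, G1 = 0
    return x[0] + x[1] - 4.0

# An inequality constraint such as x1 - 3 <= 0 can be recast as an equality
# by introducing a slack variable s:  x1 - 3 + s**2 = 0
def G2(x, s):
    return x[0] - 3.0 + s**2

constraints = [G1]             # only true equality constraints enter (8.5)
print(U([1.0, 3.0]), G1([1.0, 3.0]), G2([1.0, 3.0], 1.0))
```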
The method of Lagrange multipliers basically converts the preceding problem of finding the minimum or maximum into the solution of a system of algebraic equations, thus providing a convenient scheme to determine the optimum. The objective function and the constraints are combined into a new function Y, known as the Lagrange expression and defined as

    Y(x1, x2, ..., xn) = U(x1, x2, ..., xn) + λ1 G1(x1, x2, ..., xn)
                         + λ2 G2(x1, x2, ..., xn) + ... + λm Gm(x1, x2, ..., xn)    (8.6)

where the λ's are unknown parameters, known as Lagrange multipliers. Then, according to this method, the optimum occurs at the solution of the system of equations formed by the following equations:

    ∂Y/∂x1 = 0,  ∂Y/∂x2 = 0,  ...,  ∂Y/∂xn = 0    (8.7a)
    ∂Y/∂λ1 = 0,  ∂Y/∂λ2 = 0,  ...,  ∂Y/∂λm = 0    (8.7b)

When these differentiations are applied to the Lagrange expression, we find that the optimum is obtained by solving the following system of equations:

    ∂U/∂x1 + λ1 ∂G1/∂x1 + λ2 ∂G2/∂x1 + ... + λm ∂Gm/∂x1 = 0
    ∂U/∂x2 + λ1 ∂G1/∂x2 + λ2 ∂G2/∂x2 + ... + λm ∂Gm/∂x2 = 0
    ...
    ∂U/∂xn + λ1 ∂G1/∂xn + λ2 ∂G2/∂xn + ... + λm ∂Gm/∂xn = 0
    G1(x1, x2, x3, ..., xn) = 0
    G2(x1, x2, x3, ..., xn) = 0    (8.8)
    ...
    Gm(x1, x2, x3, ..., xn) = 0

If the objective function U and the constraints Gi are continuous and differentiable, a system of algebraic equations is obtained. Since there are m equations for the constraints and n additional equations are derived from the Lagrange expression, a total of m + n simultaneous equations are obtained. The unknowns are the m multipliers, corresponding to the m constraints, and the n independent variables. Therefore, this system may be solved by the methods outlined in Chapter 4 to obtain the values of the independent variables, which define the location of the optimum, as well as the multipliers. Analytical methods for solving a system of algebraic equations may be employed if linear equations are obtained and/or when the number of equations is small, typically up to around five. For nonlinear equations and for larger sets, numerical methods are generally more appropriate. The optimum value of the objective function is then determined by substituting the values obtained for the independent variables into the expression for U. The optimum is often represented by asterisks, i.e., x1*, x2*, ..., xn*, and U*.

8.2.2 PHYSICAL INTERPRETATION

In order to understand the physical reasoning behind the method of Lagrange multipliers, let us consider a problem with only two independent variables x and y and a single constraint G(x, y) = 0. Then the optimum is obtained by solving the equations

    ∂U/∂x + λ ∂G/∂x = 0
    ∂U/∂y + λ ∂G/∂y = 0    (8.9)
    G(x, y) = 0

The first two equations can be written in vector notation as

    ∇U + λ ∇G = 0    (8.10)

where ∇ is the gradient operator. The gradient of a scalar quantity F(x, y) is defined as

    ∇F = (∂F/∂x) i + (∂F/∂y) j    (8.11)

where i and j are unit vectors in the x and y directions, respectively. Therefore, ∇F is a vector with the two partial derivatives ∂F/∂x and ∂F/∂y as the two components in these directions. For example, if the temperature T in a region is given as a function of x and y, the rates of change of T in the two coordinate directions are the components of the gradient vector ∇T. A small symbolic sketch of the procedure in Equations (8.6) through (8.9) is given below.
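To make Equations (8.6) through (8.9) concrete, the sketch below builds the Lagrange expression for a small made-up problem (U = x² + y² with the single constraint G = x + y − 2 = 0; this problem is not from the text) and solves the resulting system symbolically.

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)

U = x**2 + y**2          # made-up objective function
G = x + y - 2            # made-up equality constraint, G = 0

# Lagrange expression, Equation (8.6) with m = 1:
Y = U + lam * G

# System (8.7a)-(8.7b): dY/dx = dY/dy = dY/dlam = 0
eqs = [sp.diff(Y, v) for v in (x, y, lam)]
sol = sp.solve(eqs, [x, y, lam], dict=True)

print(sol)                         # [{x: 1, y: 1, lam: -2}]
print("U* =", U.subs(sol[0]))      # optimum value U* = 2
```

The same pattern extends directly to several variables and several constraints: each additional constraint simply contributes one more multiplier and one more equation to the system.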
This vector is used effectively in heat conduction to represent the heat flux vector q, which is given as q = -k∇T from Fourier's law, k being the thermal conductivity. This heat flux vector is used to determine the rate of heat transfer in different coordinate directions (Gebhart, 1971).

Gradient Vector

Let us consider the gradient vector further in order to provide a graphical representation for the method of Lagrange multipliers. This discussion will also be useful in other optimization schemes that are based on the gradient vector. From the definition of ∇F and from calculus, the magnitude and direction of the gradient vector, as well as a unit vector n in its direction, may be calculated as

    |∇F| = [(∂F/∂x)² + (∂F/∂y)²]^1/2,    tan θ = (∂F/∂y)/(∂F/∂x)

    n = [(∂F/∂x) i + (∂F/∂y) j] / [(∂F/∂x)² + (∂F/∂y)²]^1/2    (8.12)

where |∇F| is the magnitude of the gradient vector and θ is its inclination with the x-axis. Let us now consider an F = constant curve in the x-y plane, as shown in Figure 8.2 for three values c1, c2, and c3 of this constant. Then, from the chain rule in calculus,

    dF = (∂F/∂x) dx + (∂F/∂y) dy    (8.13)

For F = constant, dF = 0. If this condition is used to represent movement along the curve, we get

    dx/dy = -(∂F/∂y)/(∂F/∂x)    (8.14)

Therefore, the tangent vector T shown in Figure 8.2 may be obtained from a differential element dT in which dx and dy are related by this expression. Therefore,

    dT = dx i + dy j = [-(∂F/∂y)/(∂F/∂x)] dy i + dy j    (8.15)

The unit vector t along the tangential direction may be obtained, as done previously for the gradient vector, by dividing the vector by its magnitude. Thus,

    t = [-(∂F/∂y) i + (∂F/∂x) j] / [(∂F/∂x)² + (∂F/∂y)²]^1/2    (8.16)

Thus it is seen that the two vectors n and t may be represented as

    n = c i + d j    and    t = -d i + c j    (8.17)

where c and d represent the respective components given in the preceding equations. The relationship given by Equation (8.17) applies for vectors that are normal to each other. This is shown graphically in Figure 8.3(a). Mathematically, if a dot product of two vectors that are perpendicular to each other is taken, the result should be zero. Applying the dot product to n and t, we get

    n · t = (c i + d j) · (-d i + c j) = -cd + cd = 0    (8.18)

since i and j are independent of each other. This confirms that the two vectors t and n are perpendicular. Therefore, the gradient vector ∇F is normal to the constant-F curve, as shown in Figure 8.3(b). This information is useful in understanding the basic characteristics of the Lagrange multiplier method and for developing other optimization techniques, as seen in later chapters.

FIGURE 8.2 Contours of constant F shown on an x-y plane for different values of the constant. Also shown is the tangent vector T, which is tangential to one such contour.

If three independent variables are considered, a surface is obtained for a constant value of F. Then, the gradient vector ∇F is normal to this surface. Similar considerations apply for a larger number of independent variables. The gradient ∇F may be written for n independent variables as

    ∇F = (∂F/∂x1) i1 + (∂F/∂x2) i2 + (∂F/∂x3) i3 + ... + (∂F/∂xn) in    (8.19)

where i1, i2, ..., in are unit vectors in the n directions representing the n independent variables x1, x2, ..., xn, respectively. A short numerical check of the orthogonality in Equation (8.18) is sketched below.
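The following check (a sketch using an arbitrarily chosen function, not from the text) evaluates the gradient, the unit normal n, and the unit tangent t of Equations (8.12) and (8.16) for F(x, y) = x² + y² at a sample point and confirms that n · t = 0.

```python
import math

def grad_F(x, y):
    # Analytical gradient of the sample function F(x, y) = x**2 + y**2
    return (2.0 * x, 2.0 * y)

Fx, Fy = grad_F(1.0, 2.0)                 # sample point (1, 2)
mag = math.hypot(Fx, Fy)                  # |grad F|, Equation (8.12)

n = (Fx / mag, Fy / mag)                  # unit normal along grad F
t = (-Fy / mag, Fx / mag)                 # unit tangent, Equation (8.16)

print("grad F =", (Fx, Fy), " |grad F| =", round(mag, 4))
print("n . t  =", n[0] * t[0] + n[1] * t[1])   # 0: n and t are perpendicular
```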
Therefore, these unit vectors are independent of each other. Though it is difficult to visualize the gradient vector for more than three independent variables, the mathematical treatment of the problem is the same as that given previously for two independent variables. Again, the n and t unit vectors may be determined and their dot product taken to show that n · t = 0, indicating that ∇F is perpendicular to the F = constant contours or surfaces. Because of this property, the gradient vector represents the direction in which the dependent variable F changes at the fastest rate, this rate being given by the magnitude of the gradient. In addition, the direction in which F increases is the same as the direction of the vector ∇F. These properties are useful in many optimization strategies, particularly in gradient-based search methods.

Lagrange Multiplier Method for Unconstrained Optimization

Let us first consider the unconstrained problem for two independent variables x and y. Then the Lagrange multiplier method yields the location of the optimum as the solution to the equation

    ∇U = (∂U/∂x) i + (∂U/∂y) j = 0    (8.20)

FIGURE 8.3 (a) Unit vectors t and n are perpendicular to each other; (b) gradient vector ∇F is normal to the F = constant contour.

Therefore, the gradient vector, which is normal to the constant-U contour, is zero, implying that the rate of change in U is zero as one moves away from the point where this equation is satisfied. This indicates a stationary point, or extremum, as shown qualitatively in Figure 8.4 for one or two independent variables. The point may be a minimum or a maximum. It may also be a saddle point, ridge, or valley (see Figure 8.1). Additional information is needed to determine the nature of the stationary point, as discussed later. Since Equation (8.20) is a vector equation, each component may be set equal to zero, giving rise to the following two equations:

    ∂U/∂x = 0    and    ∂U/∂y = 0    (8.21)

which may be solved to obtain x and y at the optimum, denoted as x* and y*. The optimal value U* is then calculated from the expression for U. The number of equations obtained is equal to the number of independent variables, and the optimum may be determined by solving these equations.

Lagrange Multiplier Method for Constrained Optimization

The optimum for a problem with a single constraint is obtained by solving the equations

    ∇U + λ ∇G = 0    and    G = 0    (8.22)

FIGURE 8.4 The minimum and the maximum in an unconstrained problem, as given by ∇U = 0.

[...]

... the optimization problem may be formulated as

    Q = hA ΔT → Minimize    (8.53)

with

    A = f1(L1, L2),    h = f2(L1, L2, ΔT),    f3(L1, L2, ΔT) = C    (8.54)

where A is the surface area; ΔT is the temperature difference from the ambient; L1 and L2 are dimensions, such as the diameter and height of the cylindrical shell of a heat exchanger; C is a constant; and f1, f2, and f3 are ...

... the location of the optimum is given by the solution of the equations

    ∂C/∂V = 2T² - 6T/V³ = 0    and    ∂C/∂T = 4TV + 3/V² - 2/T² = 0

Since both T and V are positive quantities, we have

    T = 3/V³    and    4V(3/V³) + 3/V² - 2/(3/V³)² = 0

These equations give V* = 1.6930 and T* = 0.6182. When these are substituted in the expression for C, we obtain C* = 5.1763. Now the second derivatives may be obtained to ascertain the nature of the critical ...
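The stationarity equations quoted in the fragment above can be checked numerically. In the sketch below, the cost function C itself lies in the elided part of the text, so only the two quoted equations are used; a standard root finder recovers the stated optimum.

```python
from scipy.optimize import fsolve

def stationarity(z):
    V, T = z
    # dC/dV = 0 and dC/dT = 0, as quoted in the excerpt
    return [2.0 * T**2 - 6.0 * T / V**3,
            4.0 * T * V + 3.0 / V**2 - 2.0 / T**2]

V_opt, T_opt = fsolve(stationarity, x0=[1.5, 0.5])
print(f"V* = {V_opt:.4f}, T* = {T_opt:.4f}")   # ~1.6930 and ~0.6182
```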
... solution of the resulting n + 1 equations. Consider, for instance, the simple optimization problem given by

    U = 2x1² + 5x2,    G = x1 x2 - 12 = 0    (8.46)

Then, the method of Lagrange multipliers yields the following equations:

    ∂U/∂x1 + λ ∂G/∂x1 = 0,    ∂U/∂x2 + λ ∂G/∂x2 = 0,    G = 0    (8.47)

These equations lead to

    4x1 + λ x2 = 0,    5 + λ x1 = 0,    x1 x2 = 12    (8.48)

Therefore,

    x1* = 2.466,    x2* = 4.866,    λ = -2.027,    U* = 36.493    (8.49)

It can be shown that if either x1 or ...

... variables and are given in most calculus textbooks, such as Keisler (1986) and Kaplan (2002), and in books on optimization, such as Fox (1971), Beightler et al. (1979), and Chong and Żak (2001). For the case of two independent variables, x1 and x2, with U(x1, x2) and its first two derivatives continuous, these conditions are given as: if ∂²U/∂x1² > 0, with S > 0, the stationary point is a minimum; if ∂²U/∂x1² < 0, with ...

... ascertain the nature of the critical point. Thus,

    ∂²C/∂V² = 18T/V⁴,    ∂²C/∂T² = 4V + 4/T³,    ∂²C/∂V∂T = 4T - 6/V³

Substituting the values of V and T at the stationary point, we calculate these three second derivatives as 1.3544, 23.7023, and 1.2364, respectively. This gives S = 30.57. Therefore, S > 0 and ∂²C/∂V² > 0, indicating that the minimum cost has been obtained.

8.3.3 CONVERSION OF CONSTRAINED TO UNCONSTRAINED PROBLEM

It is evident ...

... coefficient Sc = -λ = 2.027. This gives the effect of relaxing the constraint on the optimum value of U. For instance, if x1 x2 = 13, instead of 12, U* can be calculated to be 38.493, an increase of 2.0. The slight difference in the change in U* from the calculated value of Sc is the result of the nonlinear equations that make Sc a function of x1 and not a constant. The following example ...

... 2.1. Then, it can easily be shown that r* = 0.694 m and A* = 9.078 m². Therefore, ΔA/ΔV = (9.078 - 8.793)/0.1 = 2.85, which is close to the sensitivity coefficient Sc, which is given by -λ and is, thus, equal to 2.928 at the optimum point. Again, the slight difference between Sc and ΔA/ΔV is due to the dependence of λ on the variables.

8.5 APPLICABILITY TO THERMAL SYSTEMS ...

... Q = hA ΔT = [(2 + 10 L^1/2) ΔT^1/4 L^-1](L²)(ΔT) = (2L + 10 L^3/2) ΔT^5/4

since the surface area A is L² for a square. The problem may be treated as unconstrained by substituting ΔT in terms of L from the given constraint. Thus, ΔT = 5.6/L, and this is substituted in the preceding equation to yield

    Q = (2L + 10 L^3/2)(5.6 L^-1)^5/4 = 8.61 (2 L^-1/4 + 10 L^1/4)

For Q to be a minimum,

    dQ/dL = 8.61 (-2 L^-5/4 / 4 + 10 L^-3/4 / 4) = 0 ...

... convenient and efficient. In addition, they form part of several other optimization strategies and are useful in understanding the nature of the optimum. Therefore, effort is often made to obtain expressions that facilitate the use of calculus methods.

8.5.1 USE OF CURVE FITTING

Curve fitting is certainly the most useful method of representing the results from numerical and experimental modeling in the form of ...

... unconstrained problem. Differentiating A with respect to r and setting the derivative equal to zero to obtain the radius for the optimum, we get

    dA/dr = 4πr - 2V/r² = 0

Therefore,

    r* = (V/2π)^1/3,    h* = V/(π r*²) = (4V/π)^1/3

If V is taken as 2 m³,

    r* = 0.683 m,    h* = 1.366 m,    A* = 8.793 m²

The second derivative is calculated to determine the nature of the optimum. Thus,

    d²A/dr² = 4π + 4V/r³

Since r is positive, the second derivative is also ...

FIGURE: The gradient vectors ∇U and ∇G at the optimum, shown on a U = constant contour together with the constraint curve G = 0.
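The cylindrical-tank fragments above can be checked numerically. The short sketch below assumes, as the excerpt indicates, a closed cylinder of volume V whose surface area becomes A = 2πr² + 2V/r once h = V/(πr²) is substituted; it evaluates the optimum for V = 2 m³ and V = 2.1 m³ and compares ΔA/ΔV with the quoted sensitivity coefficient.

```python
import math

def optimum_cylinder(V):
    # Unconstrained form: A(r) = 2*pi*r**2 + 2*V/r, so dA/dr = 4*pi*r - 2*V/r**2 = 0
    r = (V / (2.0 * math.pi)) ** (1.0 / 3.0)
    h = V / (math.pi * r**2)
    A = 2.0 * math.pi * r**2 + 2.0 * V / r
    return r, h, A

r1, h1, A1 = optimum_cylinder(2.0)    # r* ~0.683 m, h* ~1.366 m, A* ~8.793 m^2
r2, h2, A2 = optimum_cylinder(2.1)    # r* ~0.694 m,              A* ~9.078 m^2

print(f"V = 2.0: r* = {r1:.3f}, h* = {h1:.3f}, A* = {A1:.3f}")
print(f"V = 2.1: r* = {r2:.3f}, h* = {h2:.3f}, A* = {A2:.3f}")
print(f"dA/dV ~ {(A2 - A1) / 0.1:.2f}")   # ~2.85, close to Sc = 2.928
```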
FIGURE: Contours of U = x + y (shown for values from 1.0 to 4.0) together with the constraint G = xy - 1 = 0; the constrained optimum occurs at U* = 2.

... These equations are solved analytically ...
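The figure residue above identifies the example U = x + y with the constraint G = xy − 1 = 0 and an optimum of U* = 2. The full worked solution lies in the elided pages, but the sketch below applies the Lagrange conditions of Equation (8.22) to that problem symbolically.

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)

U = x + y
G = x * y - 1

Y = U + lam * G                          # Lagrange expression
eqs = [sp.diff(Y, x), sp.diff(Y, y), G]  # grad U + lam*grad G = 0 and G = 0

for sol in sp.solve(eqs, [x, y, lam], dict=True):
    print(sol, " U =", U.subs(sol))
# Two stationary points: (1, 1) with U = 2 (the optimum shown in the figure)
# and (-1, -1) with U = -2.
```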
