
Introduction to Optimum Design, Part 6

1. The method should not be used as a black-box approach for engineering design problems. The selection of move limits is a trial-and-error process and is best carried out interactively. Move limits that are too restrictive may leave the LP subproblem with no solution; move limits that are too large can cause oscillations in the design point between iterations. Thus the performance of the method depends heavily on the selection of move limits.
2. The method may not converge to the precise minimum, since no descent function is defined and no line search is performed along the search direction to compute a step size. Thus progress toward the solution point cannot be monitored.
3. The method can cycle between two points if the optimum solution is not a vertex of the feasible set.
4. The method is quite simple both conceptually and numerically. Although it may not reach the precise optimum, it can be used to obtain improved designs in practice.

10.4 Quadratic Programming Subproblem

As observed in the previous section, SLP is a simple algorithm for solving general constrained optimization problems. However, the method has some limitations, the major one being a lack of robustness. To correct these drawbacks, a method is presented in the next section in which a quadratic programming (QP) subproblem is solved to determine a search direction; a step size is then calculated by minimizing a descent function along that direction. In this section, we shall define the QP subproblem and discuss a method for solving it.

10.4.1 Definition of QP Subproblem

To overcome some of the limitations of the SLP method, other methods have been developed to solve for design changes. Most of these methods still use the linear approximations of Eqs. (10.19) to (10.21) for the nonlinear optimization problem. However, the linear move limits of Eq. (10.24) are abandoned in favor of a step size calculation procedure. The move limits of Eq.
(10.24) play two roles in the solution process: (1) they make the linearized subproblem bounded, and (2) they give the design change without performing a line search. It turns out that these two roles can be achieved by defining and solving a slightly different subproblem to determine the search direction, and then performing a line search for the step size to calculate the design change. The linearized subproblem can be bounded if we require minimization of the length of the search direction in addition to minimization of the linearized cost function. This can be accomplished by combining the two objectives. Since the combined objective is a quadratic function of the search direction, the resulting subproblem is called a QP subproblem. The subproblem is defined as

minimize f̄ = cᵀd + (1/2) dᵀd    (10.25)

subject to the linearized constraints of Eqs. (10.20) and (10.21)

Nᵀd = e;  Aᵀd ≤ b    (10.26)

The factor of 1/2 with the second term in Eq. (10.25) is introduced to eliminate the factor of 2 during differentiation. Also, the square of the length of d is used instead of the length itself. Note that the QP subproblem is strictly convex, so its minimum (if one exists) is global and unique. It is also important to note that the cost function of Eq. (10.25) represents an equation of a hypersphere with center at -c. Example 10.6 demonstrates how to define a quadratic programming subproblem at a given point.

EXAMPLE 10.6 Definition of a QP Subproblem

Consider the constrained optimization problem:

minimize f(x) = 2x1^3 + 15x2^2 - 8x1x2 - 4x1    (a)

subject to the equality and inequality constraints

h(x) = x1^2 + x1x2 + 1.0 = 0;  g(x) = x1 - (1/4)x2^2 - 1.0 ≤ 0    (b)

Linearize the cost and constraint functions about the point (1, 1) and define the QP subproblem.

Solution. Figure 10-10 shows a graphical representation of the problem. Note that the constraints are already written in the normalized form.
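As a sanity check on the reconstructed problem data, the functions and gradients of Example 10.6 can be evaluated at the linearization point. The symbolic forms below are our reading of the garbled excerpt, so treat them as a sketch rather than the book's verbatim definitions:

```python
# Evaluate the Example 10.6 functions and analytic gradients at x = (1, 1).

def f(x1, x2):      return 2*x1**3 + 15*x2**2 - 8*x1*x2 - 4*x1
def h(x1, x2):      return x1**2 + x1*x2 + 1.0        # equality: h(x) = 0
def g(x1, x2):      return x1 - 0.25*x2**2 - 1.0      # inequality: g(x) <= 0

def grad_f(x1, x2): return (6*x1**2 - 8*x2 - 4.0, 30*x2 - 8*x1)
def grad_h(x1, x2): return (2*x1 + x2, x1)
def grad_g(x1, x2): return (1.0, -0.5*x2)

# Values quoted in the text: f = 5, h = 3 (violated), g = -0.25 (inactive)
print(f(1, 1), h(1, 1), g(1, 1))       # 5 3.0 -0.25
print(grad_f(1, 1))                     # (-6.0, 22)
print(grad_h(1, 1), grad_g(1, 1))       # (3, 1) (1.0, -0.5)
```

These values agree with the gradients c = (-6, 22), ∇h = (3, 1), and ∇g = (1, -0.5) used to build the subproblems below.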
The equality constraint is shown as h = 0, and the boundary of the inequality constraint as g = 0. The feasible region for the inequality constraint is identified, and several cost function contours are shown. Since the equality constraint must be satisfied, the optimum point must lie on one of the two curves h = 0. Two optimum solutions are identified:

Point A: x* = (-1, 2), f(x*) = 78
Point B: x* = (1, -2), f(x*) = 74

[FIGURE 10-10 Graphical representation of Example 10.6.]

The gradients of the cost and constraint functions are

∇f = (6x1^2 - 8x2 - 4, 30x2 - 8x1); ∇h = (2x1 + x2, x1); ∇g = (1, -x2/2)    (c)

The cost and constraint function values and their gradients at (1, 1) are

f(1, 1) = 5; h(1, 1) = 3 ≠ 0 (violation); g(1, 1) = -0.25 < 0 (inactive)
c = ∇f(1, 1) = (-6, 22); ∇h(1, 1) = (3, 1); ∇g(1, 1) = (1, -0.5)    (d)

Using move limits of 50 percent, the linear programming subproblem of Eqs. (10.19) to (10.21) is defined as:

minimize f̄ = -6d1 + 22d2    (e)

subject to

3d1 + d2 = -3; d1 - 0.5d2 ≤ 0.25; -0.5 ≤ d1 ≤ 0.5; -0.5 ≤ d2 ≤ 0.5    (f)

The QP subproblem of Eqs. (10.25) and (10.26) is defined as:

minimize f̄ = -6d1 + 22d2 + (1/2)(d1^2 + d2^2)    (g)

subject to

3d1 + d2 = -3; d1 - 0.5d2 ≤ 0.25    (h)

To compare the solutions, the preceding LP and QP subproblems are plotted in Figs. 10-11 and 10-12, respectively. In these figures the solution must satisfy the linearized equality constraint, so it must lie on the line C–D. The feasible region for the linearized inequality constraint is also shown; therefore the solution for the subproblem must lie on the segment G–C. It can be seen in Fig. 10-11 that with 50 percent move limits the linearized subproblem is infeasible: the move limits require the changes to lie in the square HIJK, which does not intersect the line G–C. If we relax the move limits to 100 percent, then point L gives the optimum solution: d1 = -2/3, d2 = -1.0, f̄ = -18. Thus we again see that the design change obtained from the linearized subproblem is affected by the move limits. With the QP subproblem, the constraint set remains the same but there is no need for move limits, as seen in Fig. 10-12. The cost function is quadratic in the variables.
The optimum solution is at point G: d1 = -0.5, d2 = -1.5, f̄ = -28.75. Note that the direction determined by the QP subproblem is unique, whereas with the LP subproblem it depends on the move limits. The two directions determined by the LP and QP subproblems are in general different.

[FIGURE 10-11 Solution of the linearized subproblem for Example 10.6 at the point (1, 1).]
[FIGURE 10-12 Solution of the quadratic programming subproblem for Example 10.6 at the point (1, 1).]

10.4.2 Solution of QP Subproblem

QP problems are encountered in many real-world applications. In addition, many general nonlinear programming algorithms require the solution of a quadratic programming subproblem at each design cycle. It is therefore extremely important to solve QP subproblems efficiently so that large-scale optimization problems can be treated. Thus it is not surprising that substantial research effort has been expended in developing and evaluating algorithms for solving QP problems (Gill et al., 1981; Luenberger, 1984). Many good programs have also been developed to solve such problems. In the next chapter, we shall describe a method for solving general QP problems that is a simple extension of the Simplex method of linear programming.
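The QP optimum of Example 10.6 at point G can be verified with a few lines of linear algebra, since Fig. 10-12 shows both linearized constraints active there. The sketch below uses NumPy; the multiplier back-calculation at the end is our own check, not a value quoted in the text:

```python
import numpy as np

# Active constraints at point G: 3 d1 + d2 = -3 and d1 - 0.5 d2 = 0.25
A = np.array([[3.0, 1.0],
              [1.0, -0.5]])
b = np.array([-3.0, 0.25])
d = np.linalg.solve(A, b)
print(d)                                   # approx [-0.5 -1.5]

# QP cost at d: f_bar = -6 d1 + 22 d2 + 0.5 (d1^2 + d2^2)
f_bar = -6*d[0] + 22*d[1] + 0.5*(d[0]**2 + d[1]**2)
print(f_bar)                               # approx -28.75, matching point G

# KKT stationarity: (d1 - 6) + 3v + u = 0 and (d2 + 22) + v - 0.5u = 0,
# solved for the equality multiplier v and inequality multiplier u.
v, u = np.linalg.solve(np.array([[3.0, 1.0], [1.0, -0.5]]),
                       -np.array([d[0] - 6.0, d[1] + 22.0]))
print(u >= 0)                              # True: the active inequality is valid
```

A nonnegative u confirms that treating the inequality as active was the correct KKT case.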
If the problem is simple, we can solve it using the KKT conditions of optimality given in Theorem 4.6. To aid the KKT solution process, we can use a graphical representation of the problem to identify the likely solution case and solve that case only. We present such a procedure in Example 10.7.

EXAMPLE 10.7 Solution of a QP Subproblem

Consider the problem of Example 10.2 linearized as: minimize f̄ = -d1 - d2 subject to (1/3)d1 + (1/3)d2 ≤ 2/3, -d1 ≤ 1, -d2 ≤ 1. Define the quadratic programming subproblem and solve it.

Solution. The linearized cost function is modified to a quadratic function as follows:

minimize f̄ = -(d1 + d2) + 0.5(d1^2 + d2^2)    (a)

subject to the same constraints. The cost function corresponds to the equation of a circle with center at (-c1, -c2), where ci are the components of the gradient of the linearized cost function; i.e., the center is at (1, 1). The graphical solution is shown in Fig. 10-13, where the triangle ABC represents the feasible set and the cost function contours are circles of different radii. The optimum solution is at point D, where d1 = 1 and d2 = 1. Note that the QP subproblem is strictly convex and thus has a unique solution. A numerical method must generally be used to solve such a subproblem.

[FIGURE 10-13 Solution of the quadratic programming subproblem for Example 10.7 at the point (1, 1).]

10.5 Constrained Steepest Descent Method

As noted at the beginning of this chapter, numerous methods have been proposed and evaluated for constrained optimization problems since 1960. Some methods perform well only for equality-constrained problems, whereas others perform well only for inequality-constrained problems. An overview of some of these methods is presented later in Chapter 11. In this section, we focus on a general method, called the constrained steepest descent method, that can treat equality as well as inequality constraints in its computational steps.
It also requires the inclusion of only a few critical constraints in the calculation of the search direction at each iteration; that is, the QP subproblem of Eqs. (10.25) and (10.26) may be defined using only the active and violated constraints. This can lead to efficient calculations for larger-scale engineering design problems, as explained in Chapter 11. The method has been proved to converge to a local minimum point starting from any point. It can be considered a model algorithm that illustrates how most optimization algorithms work, and it can be extended for more efficient calculations, as explained in Chapter 11. Here we explain the method and illustrate its calculations with simple numerical examples. A descent function and a step size determination procedure for the method are described, and a step-by-step procedure shows the kinds of calculations needed to implement the method numerically. It is important to understand these steps and calculations in order to use optimization software effectively and to diagnose errors when something goes wrong in an application.

Note that when there are no constraints, or no active ones, minimization of the quadratic function of Eq. (10.25) gives d = -c (using the necessary condition ∂f̄/∂d = 0). This is just the steepest descent direction of Section 8.3 for unconstrained problems.

Returning to Example 10.7: although a numerical method is generally needed to solve the QP subproblem, the present problem is simple enough to be solved by writing the KKT necessary conditions of Theorem 4.6 as follows:

L = -(d1 + d2) + 0.5(d1^2 + d2^2) + u1[(1/3)(d1 + d2) - 2/3 + s1^2] + u2(-d1 - 1 + s2^2) + u3(-d2 - 1 + s3^2)    (b)

∂L/∂d1 = -1 + d1 + (1/3)u1 - u2 = 0; ∂L/∂d2 = -1 + d2 + (1/3)u1 - u3 = 0    (c)

(1/3)(d1 + d2 - 2) + s1^2 = 0    (d)

-(d1 + 1) + s2^2 = 0; -(d2 + 1) + s3^2 = 0    (e)

u_i s_i = 0, u_i ≥ 0, s_i^2 ≥ 0; i = 1, 2, 3    (f)

where u1, u2, and u3 are the Lagrange multipliers for the three constraints and s1^2, s2^2, and s3^2 are the corresponding slack variables. Note that the switching conditions u_i s_i = 0 give eight solution cases. However, only one case can give the optimum solution.
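The eight switching cases of Example 10.7 can also be enumerated mechanically. The sketch below is our own illustration (using NumPy): it tries each candidate active set, solves the corresponding linear KKT system, and keeps the solution that is feasible with nonnegative multipliers:

```python
import itertools
import numpy as np

c = np.array([-1.0, -1.0])            # gradient of the linearized cost
A = np.array([[1/3, 1/3],              # g1: (d1 + d2)/3 <= 2/3
              [-1.0, 0.0],             # g2: -d1 <= 1
              [0.0, -1.0]])            # g3: -d2 <= 1
b = np.array([2/3, 1.0, 1.0])

solution = None
for k in range(4):
    for S in itertools.combinations(range(3), k):   # candidate active sets
        n = 2 + len(S)
        K = np.zeros((n, n)); rhs = np.zeros(n)
        K[:2, :2] = np.eye(2); rhs[:2] = -c          # stationarity: d + c + A_S^T u = 0
        for j, i in enumerate(S):
            K[:2, 2 + j] = A[i]
            K[2 + j, :2] = A[i]
            rhs[2 + j] = b[i]                        # active constraint: a_i . d = b_i
        try:
            sol = np.linalg.solve(K, rhs)
        except np.linalg.LinAlgError:
            continue                                 # inconsistent active set
        d, u = sol[:2], sol[2:]
        if np.all(A @ d <= b + 1e-9) and np.all(u >= -1e-9):
            solution = d                             # feasible with valid multipliers
print(solution)   # approx [1. 1.], i.e., point D with f_bar = -1 and u = (0, 0, 0)
```

Only the cases with d = (1, 1) survive the feasibility and sign checks, which reproduces the graphical solution.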
The graphical solution shows that only the first inequality is active at the optimum, giving the case s1 = 0, u2 = 0, u3 = 0. Solving this case, we get the direction vector d = (1, 1) with f̄ = -1 and u = (0, 0, 0), which agrees with the graphical solution.

When there are constraints, their effect must be included in calculating the search direction: the search direction must satisfy all the linearized constraints. Since it is a modification of the steepest descent direction that satisfies the constraints, it is called the constrained steepest descent direction. The steps of the resulting constrained steepest descent (CSD) algorithm will become clear once we define a suitable descent function and a related line search procedure to calculate the step size along the search direction. It is important to note that the CSD method presented in this section is the most introductory and simple interpretation of the more powerful sequential quadratic programming (SQP) methods. Not all features of these algorithms are discussed here, in order to keep the presentation of the key ideas simple and straightforward. It is noted, however, that the methods work equally well when initiated from feasible or infeasible points.

10.5.1 Descent Function

Recall that in unconstrained optimization methods the cost function is used as the descent function to monitor progress toward the optimum point. For constrained problems, the descent function is usually constructed by adding a penalty for constraint violations to the current value of the cost function. Many descent functions can be formulated from this idea. In this section, we shall describe one of them and show its use.
One of the properties of a descent function is that its value at the optimum point must be the same as that of the cost function. Also, it should admit a unit step size in the neighborhood of the optimum point. We shall introduce Pshenichny's descent function (also called the exact penalty function) because of its simplicity and its success in solving a large number of engineering design problems (Pshenichny and Danilin, 1982; Belegundu and Arora, 1984a,b). Other descent functions are discussed in Chapter 11. Pshenichny's descent function Φ at any point x is defined as

Φ(x) = f(x) + R·V(x)    (10.27)

where R > 0 is a positive number called the penalty parameter (initially specified by the user), V(x) ≥ 0 is either the maximum constraint violation among all the constraints or zero, and f(x) is the cost function value at x. As an example, the descent function at the point x(k) during the kth iteration is calculated as

Φ_k = f_k + R·V_k    (10.28)

where Φ_k and V_k are the values of Φ(x) and V(x) at x(k):

Φ_k = Φ(x(k)); V_k = V(x(k))    (10.29)

and R is the most current value of the penalty parameter. As explained later with examples, the penalty parameter may change during the iterative process. In fact, it must be ensured that R is greater than or equal to the sum of all the Lagrange multipliers of the QP subproblem at the point x(k). This necessary condition is given as

R ≥ r_k    (10.30)

where r_k is the sum of all the Lagrange multipliers at the kth iteration:

r_k = Σ_{i=1}^{p} |v_i(k)| + Σ_{i=1}^{m} u_i(k)    (10.31)

Since the Lagrange multiplier v_i(k) for an equality constraint is free in sign, its absolute value is used in Eq. (10.31); u_i(k) is the multiplier for the ith inequality constraint.
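In code, the penalty-parameter rule of Eqs. (10.30) and (10.31) is essentially a one-liner. A minimal sketch (the multiplier values are the ones quoted later in Example 10.9, and the function name is ours):

```python
def penalty_parameter(R_prev, v_eq, u_ineq):
    """Eq. (10.31): r_k = sum |v_i| + sum u_i; Eq. (10.30): keep R >= r_k."""
    r_k = sum(abs(v) for v in v_eq) + sum(u_ineq)
    return max(R_prev, r_k)

# Multipliers from Example 10.9: u = (4880, 19400, 0, 0); no equality constraints.
print(penalty_parameter(1.0, [], [4880.0, 19400.0, 0.0, 0.0]))   # 24280.0
```

Because max() is used, R can only grow during the iterations, which is what keeps the necessary condition satisfied once it holds.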
The parameter V_k ≥ 0 related to the maximum constraint violation at the kth iteration is determined from the calculated values of the constraint functions at the design point x(k) as

V_k = max{0; |h1|, |h2|, ..., |hp|; g1, g2, ..., gm}    (10.32)

Since an equality constraint is violated whenever it differs from zero, the absolute value is used with each h_i in Eq. (10.32). Note that V_k is always nonnegative, i.e., V_k ≥ 0; if all constraints are satisfied at x(k), then V_k = 0. Example 10.8 illustrates the calculation of the descent function.

EXAMPLE 10.8 Calculation of the Descent Function

A design problem is formulated as follows:

minimize f(x) = x1^2 + 320·x1·x2    (a)

subject to the four inequalities

g1 = x1/(60·x2) - 1 ≤ 0; g2 = 1 - x1(x1 - x2)/3600 ≤ 0; g3 = -x1 ≤ 0; g4 = -x2 ≤ 0    (b)

Taking the penalty parameter R as 10,000, calculate the value of the descent function at the point x(0) = (40, 0.5).

Solution. The cost and constraint functions at the given point x(0) = (40, 0.5) are evaluated as

f_0 = f(40, 0.5) = (40)^2 + 320(40)(0.5) = 8000    (c)
g1(40, 0.5) = 40/(60·0.5) - 1 = 0.333 (violation)    (d)
g2(40, 0.5) = 1 - 40(40 - 0.5)/3600 = 0.5611 (violation)    (e)
g3(40, 0.5) = -40 < 0 (inactive)    (f)
g4(40, 0.5) = -0.5 < 0 (inactive)    (g)

Thus the maximum constraint violation is determined from Eq. (10.32) as

V_0 = max{0; 0.333, 0.5611, -40, -0.5} = 0.5611    (h)

Using Eq. (10.28), the descent function is calculated as

Φ_0 = f_0 + R·V_0 = 8000 + (10,000)(0.5611) = 13,611    (i)

10.5.2 Step Size Determination

Before the constrained steepest descent algorithm can be stated, a step size determination procedure is needed. The step size determination problem is to calculate α_k for use in Eq. (10.4) that minimizes the descent function Φ of Eq. (10.27). In most practical implementations of the algorithm, an inaccurate line search that has worked fairly well is used to determine the step size.
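Returning to Example 10.8, its arithmetic can be verified with a short sketch (plain Python; our own illustration, not code from the book):

```python
def f(x1, x2):
    return x1**2 + 320.0*x1*x2

def g(x1, x2):   # the four inequality constraints of Example 10.8, each g_i <= 0
    return [x1/(60.0*x2) - 1.0,
            1.0 - x1*(x1 - x2)/3600.0,
            -x1,
            -x2]

def V(x1, x2):   # Eq. (10.32) for an inequality-only problem
    return max(0.0, *g(x1, x2))

R = 10000.0
x = (40.0, 0.5)
Phi = f(*x) + R*V(*x)                         # Eq. (10.28)
print(f(*x), round(V(*x), 4), round(Phi))     # 8000.0 0.5611 13611
```

The maximum violation comes from g2, and the descent function value matches the 13,611 quoted in the example.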
We shall describe that procedure and illustrate its use with examples in Chapter 11. In this section we assume that a step size along the search direction can be calculated using the golden section method described in Chapter 8. It is recognized, however, that this method can be inefficient; therefore an inaccurate line search is preferred in most constrained optimization methods.

In performing the line search for the minimum of the descent function Φ, we need notation for the trial design points and for the values of the descent, cost, and constraint functions. The following notation is used at iteration k:

α_j: jth trial step size
x_i(k,j): ith design variable value at the jth trial step size
f_k,j: cost function value at the jth trial point
Φ_k,j: descent function value at the jth trial point
V_k,j: maximum constraint function value at the jth trial point
R_k: penalty parameter value, kept fixed during the line search as long as the necessary condition of Eq. (10.30) is satisfied

Example 10.9 illustrates calculation of the descent function during the golden section search.

EXAMPLE 10.9 Calculation of the Descent Function for Golden Section Search

For the design problem defined in Example 10.8, the QP subproblem has been defined and solved at the starting point x(0) = (40, 0.5). The search direction is determined as d(0) = (25.6, 0.45), and the Lagrange multipliers for the constraints are u = (4880, 19,400, 0, 0). Let the initial value of the penalty parameter be R_0 = 1. Calculate the descent function value at the two points obtained during initial bracketing of the step size in the golden section search using δ = 0.1, and compare the descent function values.

Solution. Since we are evaluating the step size at the starting point, k = 0, and j is taken as 0, 1, and 2. Using the calculations given in Example 10.8 at the starting point, we have

f_0,0 = 8000; V_0,0 = 0.5611    (a)

To check the necessary condition of Eq.
(10.30) for the penalty parameter, we evaluate r_0 using Eq. (10.31):

r_0 = Σ u_i(0) = 4880 + 19,400 + 0 + 0 = 24,280    (b)

The necessary condition of Eq. (10.30) is satisfied if we select the penalty parameter as R = max(R_0, r_0):

R = max(1, 24,280) = 24,280    (c)

Thus the descent function value at the starting point is

Φ_0,0 = f_0,0 + R·V_0,0 = 8000 + (24,280)(0.5611) = 21,624    (d)

Now we calculate the descent function at the first trial step size α_1 = δ = 0.1. Updating the current design point in the search direction gives

x(0,1) = (40, 0.5) + 0.1(25.6, 0.45) = (42.56, 0.545)    (e)

The functions of the problem are calculated at x(0,1) as

f_0,1 = 9233.8; g1 = 0.3015; g2 = 0.5033; g3 = -42.56; g4 = -0.545    (f)

The constraint violation parameter is

V_0,1 = max{0; 0.3015, 0.5033, -42.56, -0.545} = 0.5033    (g)

Thus the descent function at the trial step size α_1 = 0.1 is (note that the value of the penalty parameter R is not changed during the step size calculation)

Φ_0,1 = f_0,1 + R·V_0,1 = 9233.8 + (24,280)(0.5033) = 21,454    (h)

Since Φ_0,1 < Φ_0,0 (21,454 < 21,624), we need to continue the initial bracketing process of the golden section search. Following that procedure, the next trial step size is α_2 = δ + 1.618δ = 0.1 + 1.618(0.1) = 0.2618, and the trial design point is

x(0,2) = (40, 0.5) + 0.2618(25.6, 0.45) = (46.70, 0.618)    (i)

Following the foregoing procedure, the various quantities are calculated as

f_0,2 = 11,416.3; g1 = 0.2594; g2 = 0.4022; g3 = -46.70; g4 = -0.618    (j)
V_0,2 = max{0; 0.2594, 0.4022, -46.70, -0.618} = 0.4022    (k)
Φ_0,2 = f_0,2 + R·V_0,2 = 11,416.3 + (24,280)(0.4022) = 21,182    (l)

Since Φ_0,2 < Φ_0,1 (21,182 < 21,454), the minimum of the descent function has not yet been surpassed, so the initial bracketing process continues. The next trial step size is

α_3 = δ + 1.618δ + 1.618^2·δ = 0.1 + 0.1618 + 0.2618 = 0.5236    (m)

Following the foregoing procedure, Φ_0,3 can be calculated and compared with Φ_0,2. Note that the value of the penalty parameter R is calculated at the beginning of the line search and then kept fixed during all subsequent step size calculations.
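The bracketing arithmetic of Examples 10.8 and 10.9 is easy to reproduce. In the sketch below, the small difference from the text's 21,182 at the second trial arises only because the book rounds intermediate values:

```python
def f(x1, x2):
    return x1**2 + 320.0*x1*x2

def V(x1, x2):                        # Eq. (10.32) with the Example 10.8 constraints
    g = [x1/(60.0*x2) - 1.0, 1.0 - x1*(x1 - x2)/3600.0, -x1, -x2]
    return max(0.0, *g)

x0, d = (40.0, 0.5), (25.6, 0.45)     # starting point and QP search direction
R = 24280.0                            # penalty parameter, fixed during the line search

def Phi(alpha):                        # descent function along the search direction
    x1, x2 = x0[0] + alpha*d[0], x0[1] + alpha*d[1]
    return f(x1, x2) + R*V(x1, x2)

delta = 0.1
alphas = [0.0, delta, delta + 1.618*delta]          # 0, 0.1, 0.2618
values = [Phi(a) for a in alphas]
print([round(v) for v in values])                   # [21624, 21454, 21178]
# Phi is still decreasing, so bracketing continues with the next trial step:
print(round(delta*(1 + 1.618 + 1.618**2), 4))       # 0.5236
```

In a full golden section search, this growth of the trial step by the factor 1.618 continues until the descent function starts increasing, which brackets the minimum.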
In summary, the quantities at the trial points are x(0,1) = (42.56, 0.545) with Φ_0,1 = 21,454 and x(0,2) = (46.70, 0.618) with Φ_0,2 = 21,182, compared with Φ_0,0 = 21,624 at the starting point.

[…] x1(1,0) = x1(0) + t0·d1(0) = 40 + (1.0)(25.6) = 65.6; x2(1,0) = x2(0) + t0·d2(0) = 0.5 + (1.0)(0.45) = 0.95    (i)

The cost and constraint functions at the trial design point are calculated as

f_1,0 = f(65.6, 0.95) = (65.6)^2 + 320(65.6)(0.95) = 24,246
g1(65.6, 0.95) = 65.6/(60·0.95) - 1 = 0.151 > 0 (violation)    (j)
g2(65.6, 0.95) = 1 - 65.6(65.6 - 0.95)/3600 = -0.1781 < 0 (inactive)
g3(65.6, 0.95) = -65.6 < 0 (inactive)
g4(65.6, […]

[…] 2.3 at the point (R, H) = (6, 15) cm
10.60 Exercise 2.4 at the point R = 2 cm, N = 100
10.61 Exercise 2.5 at the point (W, D) = (100, 100) m
10.62 Exercise 2.9 at the point (r, h) = (6, 16) cm
10.63 Exercise 2.10 at the point (b, h) = (5, 10) m
10.64 Exercise 2.11 at the point width = 5 m, depth = 5 m, and height = 5 m
10.65 Exercise 2.12 at the point D = 4 m and H = 8 m
10.66 Exercise 2.13 at the point […]

[…] process. Once Solver has found the solution, the design variable cells D3 to D6, the dependent variable cells C18 to C25, and the constraint function cells B31 to B36 and D31 to D36 are updated using the optimum values of the design variables. Solver also generates three reports in separate worksheets, "Answer," "Sensitivity," and "Limits" (as explained in Chapter 6). The Lagrange multipliers and constraint activity […]
[…] minimize f(x) = (x1 - 3)^2 + (x2 - 3)^2    (a)

subject to

x1 + x2 ≤ 4; x1 - 3x2 = 1; x1, x2 ≥ 0    (b)

Solution. The cost function for the problem can be expanded as f(x) = x1^2 - 6x1 + x2^2 - 6x2 + 18. We shall ignore the constant 18 in the cost function and minimize the following quadratic function expressed in the form of Eq. (11.2):

q(x) = [-6  -6][x1; x2] + 0.5 [x1  x2][2  0; 0  2][x1; x2]

[…] g1 = (1/18)(-4.5)^2 + (1/36)(-4.5) - 1.0 = 0 (active)    (d)
g2 = (1/100)[-(-4.5) + 60(-4.5)] = -2.655 < 0 (inactive)    (e)
g3 = (-4.5)/10 - 1.0 = -1.45 < 0 (inactive)    (f)
g4 = -(1/2)(-4.5) - 1.0 = 1.25 > 0 (violated)    (g)
g5 = (1/10)(-4.5) - 1.0 = -1.45 < 0 (inactive)    (h)
g6 = -(-4.5) = 4.5 > 0 (violated)    (i)

Therefore, we see that g1 is active (also ε-active); g4 and g6 are violated […]
[…] x1(1,1) = x1(0) + t1·d1(0) = 40 + 0.5(25.6) = 52.8; x2(1,1) = x2(0) + t1·d2(0) = 0.5 + 0.5(0.45) = 0.725    (n)

The cost and constraint functions at the new trial design point are calculated as

f_1,1 = f(52.8, 0.725) = (52.8)^2 + 320(52.8)(0.725) = 15,037
g1(52.8, 0.725) = 52.8/(60·0.725) - 1 = 0.2138 > 0 (violation)    (o)
g2(52.8, 0.725) = 1 - 52.8(52.8 - 0.725)/3600 = 0.2362 > 0 (violation)
g3(52.8, 0.725) = […]

[…] The KKT conditions are now reduced to finding X as a solution of the linear system in Eq. (11.16) subject to the constraints of Eqs. (11.11) to (11.13). In the new variables X_i, the complementary slackness conditions of Eqs. (11.11) and (11.12) reduce to

X_i X_{n+m+i} = 0; i = 1 to (n + m)    (11.19)

and the nonnegativity conditions of Eq. (11.13) reduce to

X_i ≥ 0; i = 1 to (2n + 2m + 2p)    (11.20)

11.2.4 […]

[…] chapter is restated as: find x = (x1, ..., xn), a design variable vector of dimension n, to minimize a cost function f = f(x) subject to the equality constraints h_i(x) = 0, i = 1 to p, and the inequality constraints g_i(x) ≤ 0, i = 1 to m.

11.1 Potential Constraint Strategy

To evaluate the search direction in numerical methods for constrained optimization, one needs to know the cost and constraint functions and their […]
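The potential constraint strategy fragment above starts from a classification of each inequality at the current point. A minimal sketch of such a classification follows; the ε = 0.1 threshold is our assumption (the book discusses how to choose it), and the g values are the ones quoted in the fragment at x = (-4.5, -4.5):

```python
def classify(g_values, eps=0.1):
    """Classify each inequality g_i <= 0 as violated, eps-active, or inactive."""
    status = []
    for gi in g_values:
        if gi > 0:
            status.append("violated")        # constraint is not satisfied
        elif gi >= -eps:
            status.append("eps-active")      # satisfied but close to its boundary
        else:
            status.append("inactive")        # safely satisfied
    return status

g = [0.0, -2.655, -1.45, 1.25, -1.45, 4.5]
print(classify(g))
# ['eps-active', 'inactive', 'inactive', 'violated', 'inactive', 'violated']
```

The violated and ε-active constraints (here g1, g4, g6) are the ones retained when defining the QP subproblem, which is what makes the strategy economical for large problems.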
