
Applied Numerical Methods Using MATLAB, Part 8


Usage of the MATLAB 6.x built-in function "fmincon()":

[xo,fo,...] = fmincon('ftn',x0,A,b,Aeq,beq,l,u,'nlcon',options,p1,p2,...)

Input arguments (at least the four arguments 'ftn', x0, A, and b are required):
- 'ftn': the objective function f(x) to be minimized, usually defined in an M-file, but it can also be defined as an inline function, which removes the need for the quotes ('').
- x0: an initial guess x0 of the solution.
- A, b: linear inequality constraints Ax ≤ b; to be given as [] if not applied.
- Aeq, beq: linear equality constraints Aeq*x = beq; to be given as [] if not applied.
- l, u: lower/upper bound vectors such that l ≤ x ≤ u; to be given as [] if not applied. Set l(i) = -inf / u(i) = inf if x(i) is not bounded below/above.
- 'nlcon': a nonlinear constraint function defined in an M-file, which is supposed to return two output arguments for a given x: the first is the LHS (vector) of the inequality constraints c(x) ≤ 0 and the second is the LHS (vector) of the equality constraints ceq(x) = 0; to be given as [] if not applied.
- options: used for setting the display parameter, the tolerances on xo and f(xo), and so on; to be given as [] if not applied. For details, type 'help optimset' into the MATLAB command window.
- p1, p2, ...: problem-dependent parameters passed on to the objective function f(x) and the nonlinear constraint functions c(x), ceq(x).

Output arguments:
- xo: the minimum point xo reached in the permissible region satisfying the constraints.
- fo: the minimized function value f(xo).

This routine solves the constrained minimization problem

  Min f(x)                                                          (7.3.4)
  s.t. Ax ≤ b, Aeq*x = beq, c(x) ≤ 0, ceq(x) = 0, and l ≤ x ≤ u     (7.3.5)

A part of its usage can be seen by typing 'help fmincon' into the MATLAB command window, as summarized in the box above. We make the MATLAB program "nm732_1.m", which uses the routine "fmincon()" to solve the problem presented in Example 7.3. Interested readers are welcome to run it and observe the result to check that it agrees with that of Example 7.3.

%nm732_1 to solve a constrained optimization problem by fmincon()
clear, clf
ftn = '((x(1) + 1.5)^2 + 5*(x(2) - 1.7)^2)*((x(1) - 1.4)^2 + .6*(x(2) - .5)^2)';
f722o = inline(ftn,'x');
x0 = [0 0.5] %initial guess
A = []; B = []; Aeq = []; Beq = []; %no linear constraints
l = -inf*ones(size(x0)); u = inf*ones(size(x0)); %no lower/upper bounds
options = optimset('LargeScale','off'); %just [] is OK
[xo_con,fo_con] = fmincon(f722o,x0,A,B,Aeq,Beq,l,u,'f722c',options)
[co,ceqo] = f722c(xo_con) %to see how the constraints are satisfied
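The nonlinear constraint routine 'f722c' called above is defined in a separate M-file for Example 7.3, which is not reproduced in this excerpt. As a sketch of the required interface only, with placeholder constraint expressions rather than the actual constraints of Example 7.3, such a file looks like:

function [c,ceq] = f722c(x)
% Sketch of a nonlinear constraint M-file for fmincon().
% fmincon() expects c(x) <= 0 (inequalities) and ceq(x) = 0 (equalities).
% The expressions below are placeholders, NOT the constraints of Example 7.3.
c = [x(1)^2 + x(2)^2 - 4]; %placeholder: keep x inside a circle of radius 2
ceq = [];                  %placeholder: no equality constraint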
There are two more MATLAB built-in routines to be introduced in this section. One is

[xo,fo] = fminimax('ftn',x0,A,b,Aeq,beq,l,u,'nlcon',options,p1,...)

which is focused on minimizing the maximum among the components of the vector/matrix-valued objective function f(x) = [f1(x) ... fN(x)]ᵀ subject to some constraints, as described below. Its usage is almost the same as that of "fmincon()".

  Min_x {Max_n {fn(x)}}                                             (7.3.6)
  s.t. Ax ≤ b, Aeq*x = beq, c(x) ≤ 0, ceq(x) = 0, and l ≤ x ≤ u     (7.3.7)

The other is the constrained linear least-squares (LLS) routine

[xo,fo] = lsqlin(C,d,A,b,Aeq,beq,l,u,x0,options,p1,...)

whose job is to solve the problem

  Min_x ||Cx − d||²                                                 (7.3.8)
  s.t. Ax ≤ b, Aeq*x = beq, and l ≤ x ≤ u                           (7.3.9)

In order to learn the usage and function of this routine, we make the MATLAB program "nm732_2.m", which uses both "fminimax()" and "lsqlin()" to find a second-degree polynomial approximating the function (7.3.3), and compares the results with that of applying the routine "lsqnonlin()", introduced in the previous section, for verification.

Figure 7.14 Approximation of the curve f(x) = 1/(1 + 8x²) by a second-degree polynomial function based on the minimax, least-squares, and Chebyshev methods.

From the plotting result depicted in Fig. 7.14, note the following.

- We attached no constraints to the "fminimax()" routine, so it yielded the approximate polynomial curve minimizing the maximum deviation from f(x).
- We attached no constraints to the constrained linear least-squares routine "lsqlin()" either, so it yielded the approximate polynomial curve minimizing the sum (integral) of squared deviations from f(x), which is the same as the (unconstrained) least-squares solution obtained by using the routine "lsqnonlin()".
- Another MATLAB built-in routine, "lsqnonneg()", gives us a nonnegative LS (NLS) solution to the problem (7.3.8).

%nm732_2: uses fminimax() for a vector-valued objective ftn f(x)
clear, clf
f = inline('1./(1 + 8*x.*x)','x');
f73221 = inline('abs(polyval(a,x) - fx)','a','x','fx');
f73222 = inline('polyval(a,x) - fx','a','x','fx');
N = 2; %the degree of the approximating polynomial
a0 = zeros(1,N + 1); %initial guess of polynomial coefficients
xx = -2 + [0:200]'/50; %intermediate points
fx = feval(f,xx); %and their function values f(xx)
ao_m = fminimax(f73221,a0,[],[],[],[],[],[],[],[],xx,fx) %fminimax solution
for n = 1:N + 1, C(:,n) = xx.^(N + 1 - n); end
ao_ll = lsqlin(C,fx) %linear LS to minimize (C*a - fx)^2 with no constraint
ao_ln = lsqnonlin(f73222,a0,[],[],[],xx,fx) %nonlinear LS
c2 = cheby(f,N,-2,2) %Chebyshev polynomial over [-2,2]
plot(xx,fx,':', xx,polyval(ao_m,xx),'m', xx,polyval(ao_ll,xx),'r')
hold on, plot(xx,polyval(ao_ln,xx),'b', xx,polyval(c2,xx),'--')
axis([-2 2 -0.4 1.1])
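The routine "cheby()" called above is one of the authors' own M-files from an earlier section, not a MATLAB built-in, and its listing does not appear in this excerpt. A minimal sketch with the same calling convention, under the assumption that the Chebyshev approximation is realized by interpolating f at the N + 1 Chebyshev nodes of [a,b], might read:

function c = cheby(f,N,a,b)
% Sketch (assumed behavior): degree-N polynomial approximation of f over
% [a,b] obtained by interpolation at the Chebyshev nodes; returns
% power-series coefficients c usable with polyval().
k = 0:N;
t = cos((2*k + 1)*pi/(2*(N + 1))); %Chebyshev nodes on [-1,1]
x = (b - a)/2*t + (a + b)/2;       %nodes mapped onto [a,b]
y = feval(f,x);                    %function values at the nodes
c = polyfit(x,y,N);                %the interpolating polynomial

Interpolating at the Chebyshev nodes keeps the maximum interpolation error small over [a,b], which is why this curve is plotted alongside the minimax solution in Fig. 7.14.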
7.3.3 Linear Programming (LP)

The linear programming (LP) scheme implemented by the MATLAB built-in routine

[xo,fo] = linprog(f,A,b,Aeq,beq,l,u,x0,options)

is designed to solve an LP problem, which is a constrained minimization problem of the following form.

  Min f(x) = fᵀx                                                    (7.3.10a)
  s.t. Ax ≤ b, Aeq*x = beq, and l ≤ x ≤ u                           (7.3.10b)

%nm733 to solve a Linear Programming problem
% Min f*x = -3*x(1) - 2*x(2) s.t. Ax <= b, Aeq*x = beq and l <= x <= u
x0 = [0 0]; %initial point
f = [-3 -2]; %the coefficient vector of the objective function
A = [3 4; 2 1]; b = [7; 3]; %the inequality constraints Ax <= b
Aeq = [-3 2]; beq = 2; %the equality constraint Aeq*x = beq
l = [0 0]; u = [10 10]; %lower/upper bounds l <= x <= u
[xo_lp,fo_lp] = linprog(f,A,b,Aeq,beq,l,u)
cons_satisfied = [A; Aeq]*xo_lp - [b; beq] %how the constraints are satisfied
f733o = inline('-3*x(1) - 2*x(2)','x');
[xo_con,fo_con] = fmincon(f733o,x0,A,b,Aeq,beq,l,u)

It produces the solution (column) vector xo and the minimized value of the objective function f(xo) as its first and second output arguments xo and fo, where the objective function and the constraints, excluding the constant term, are linear in the independent (decision) variables. It works for linear optimization problems such as (7.3.10) more efficiently than the general constrained optimization routine "fmincon()".

The usage of the routine "linprog()" is exemplified by the MATLAB program "nm733.m", which uses the routine to solve the LP problem described as

  Min f(x) = fᵀx = [−3 −2][x1 x2]ᵀ = −3x1 − 2x2                     (7.3.11a)
  s.t. −3x1 + 2x2 = 2, 3x1 + 4x2 ≤ 7, 2x1 + x2 ≤ 3,
       and l = [0 0]ᵀ ≤ x = [x1 x2]ᵀ ≤ [10 10]ᵀ = u                 (7.3.11b)

Figure 7.15 The objective function, constraints, and solutions of an LP problem.

The program also applies the general constrained minimization routine "fmincon()" to solve the same problem for cross-checking. Readers are welcome to run the program and see the results.

>> nm733
xo_lp = [0.3333 1.5000], fo_lp = -4.0000
cons_satisfied = -0.0000 % <= 0 (inequality)
                 -0.8333 % <= 0 (inequality)
                 -0.0000 % =  0 (equality)
xo_con = [0.3333 1.5000], fo_con = -4.0000

In this result, the solutions obtained by the two routines "linprog()" and "fmincon()" agree with each other and satisfy the inequality/equality constraints, as can be confirmed in Fig. 7.15.

Table 7.3 lists the names of the MATLAB built-in minimization routines in MATLAB versions 5.x and 6.x.

Table 7.3 The Names of MATLAB Built-In Minimization Routines in MATLAB 5.x/6.x

  Minimization method                       MATLAB 5.x   MATLAB 6.x
  Unconstrained, bracketing                 fmin         fminbnd
  Unconstrained, non-gradient-based         fmins        fminsearch
  Unconstrained, gradient-based             fminu        fminunc
  Constrained, linear (LP)                  lp           linprog
  Constrained, nonlinear                    constr       fmincon
  Nonlinear least squares                   leastsq      lsqnonlin
  Constrained linear least squares          conls        lsqlin
  Minimax                                   minimax      fminimax

PROBLEMS

7.1 Modification of the Golden Search Method
In fact, the golden search method explained in Section 7.1 requires only one function evaluation per iteration, since one point of the new interval coincides with a point of the previous interval, so that only one trial point needs to be updated. In spite of this fact, the MATLAB routine "opt_gs()" implementing the method performs the function evaluation twice per iteration. An improvement may be initiated by modifying the declaration to

[xo,fo] = opt_gs1(f,a,e,fe,r1,b,r,TolX,TolFun,k)

so that anyone could use the new routine as in the driver program "nm7p01.m" below, where the input argument list contains another point (e) as well as the new end point (b) of the next interval, its function value (fe), and a parameter (r1) specifying whether that point is the left one or the right one. Based on this idea, how would you revise the routine "opt_gs()" to cut down the number of function evaluations? One possible sketch of such a routine is given next, followed by the driver program.
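The following is one possible sketch of the revised routine, not the book's official solution: it carries the surviving interior point e (with its value fe) into the next call and evaluates the objective function only once per iteration, at relative position r1 inside the current interval [a,b]. The stopping test on |fc − fd| is an assumption of this sketch; the book's opt_gs() may use different termination logic.

function [xo,fo] = opt_gs1(f,a,e,fe,r1,b,r,TolX,TolFun,k)
% Sketch of a revised golden search routine with ONE function
% evaluation per iteration. [a,b]: current interval; e: the interior
% point carried over from the previous iteration, with value fe;
% r1 (1-r or r): relative position of the new trial point in [a,b].
xnew = a + r1*(b - a); %the only new trial point of this iteration
fnew = feval(f,xnew);
% label the two interior points c (left) and d (right)
if xnew < e, c = xnew; fc = fnew; d = e; fd = fe;
else c = e; fc = fe; d = xnew; fd = fnew;
end
if k <= 0 | abs(b - a) < TolX | abs(fc - fd) < TolFun %stop criteria
  if fc < fd, xo = c; fo = fc; else xo = d; fo = fd; end
elseif fc < fd %minimum lies in [a,d]; c becomes the carried-over point
  [xo,fo] = opt_gs1(f,a,c,fc,1 - r,d,r,TolX,TolFun,k - 1);
else           %minimum lies in [c,b]; d becomes the carried-over point
  [xo,fo] = opt_gs1(f,c,d,fd,r,b,r,TolX,TolFun,k - 1);
end

Called as in "nm7p01.m" below, this preserves the golden-ratio point layout because r² = 1 − r for r = (√5 − 1)/2, so the carried-over point always falls exactly on one of the two interior positions of the shrunken interval.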
%nm7p01.m to perform the revised golden search method
f701 = inline('x.*(x - 2)','x');
a = 0; b = 3; r = (sqrt(5) - 1)/2;
TolX = 1e-4; TolFun = 1e-4; MaxIter = 100;
h = b - a; rh = r*h;
c = b - rh; d = a + rh;
fc = f701(c); fd = f701(d);
if fc < fd
  [xo,fo] = opt_gs1(f701,a,c,fc,1 - r,d,r,TolX,TolFun,MaxIter)
else
  [xo,fo] = opt_gs1(f701,c,d,fd,r,b,r,TolX,TolFun,MaxIter)
end

7.2 Nelder–Mead, Steepest Descent, Newton, SA, GA and fminunc(), fminsearch()
Consider the two-variable objective function

  f(x) = x1⁴ − 12x1² − 4x1 + x2⁴ − 16x2² − 5x2
         − 20 cos(x1 − 2.5) cos(x2 − 2.9)                           (P7.2.1)

whose gradient vector function is

  g(x) = ∇f(x) = [4x1³ − 24x1 − 4 + 20 sin(x1 − 2.5) cos(x2 − 2.9);
                  4x2³ − 32x2 − 5 + 20 cos(x1 − 2.5) sin(x2 − 2.9)]  (P7.2.2)

You have the MATLAB functions f7p02() and g7p02() defining the objective function f(x) and its gradient function g(x). You also have a part of the MATLAB program which plots mesh/contour-type graphs of f(x). Note that this gradient function has nine zeros, as listed in Table P7.2.1.

Table P7.2.1 Extrema (Maxima/Minima) and Saddle Points of the Function (P7.2.1)

  Point                       Signs of ∂²f/∂xi²   Character
  (1) [ 0.6965 −0.1423]       −, −                M
  (2) [ 2.5463 −0.1896]
  (3) [ 2.5209  2.9027]       +, +                G
  (4) [−0.3865  2.9049]
  (5) [−2.6964  2.9031]
  (6) [−1.6926 −0.1183]
  (7) [−2.6573 −2.8219]       +, +                m
  (8) [−0.3227 −2.4257]
  (9) [ 2.5216 −2.8946]       +, +                m

(The blank entries are to be filled in as part of this problem.)

(a) From the graphs (including Fig. P7.2) which you get by running the (unfinished) program, determine the character of each of the nine points, that is, whether it is a local maximum (M), a local minimum (m), the global minimum (G), or a saddle point (S), which is a minimum with respect to one variable and a maximum with respect to another variable. Support your judgment by telling the signs of the second derivatives of f(x) with respect to x1 and x2.

Figure P7.2 The contour, extrema, and saddle points of the objective function (P7.2.1).

%nm7p02 to minimize an objective ftn f(x) by the Newton method
f = 'f7p02'; g = 'g7p02';
l = [-4 -4]; u = [4 4];
x1 = l(1):.25:u(1); x2 = l(2):.25:u(2);
[X1,X2] = meshgrid(x1,x2);
for m = 1:length(x1)
  for n = 1:length(x2), F(n,m) = feval(f,[x1(m) x2(n)]); end
end
figure(1), clf, mesh(X1,X2,F)
figure(2), clf
contour(x1,x2,F,[-125 -100 -75 -50 -40 -30 -25 -20 0 50])

function y = f7p02(x)
y = x(1)^4 - 12*x(1)^2 - 4*x(1) + x(2)^4 - 16*x(2)^2 - 5*x(2) ...
    - 20*cos(x(1) - 2.5)*cos(x(2) - 2.9);

function [df,d2f] = g7p02(x) %the 1st/2nd derivatives
df(1) = 4*x(1)^3 - 24*x(1) - 4 + 20*sin(x(1) - 2.5)*cos(x(2) - 2.9); %(P7.2.2)
df(2) = 4*x(2)^3 - 32*x(2) - 5 + 20*cos(x(1) - 2.5)*sin(x(2) - 2.9); %(P7.2.2)
d2f(1) = 12*x(1)^2 - 24 + 20*cos(x(1) - 2.5)*cos(x(2) - 2.9); %(P7.2.3)
d2f(2) = 12*x(2)^2 - 32 + 20*cos(x(1) - 2.5)*cos(x(2) - 2.9); %(P7.2.3)

  ∂²f/∂x1² = 12x1² − 24 + 20 cos(x1 − 2.5) cos(x2 − 2.9)
  ∂²f/∂x2² = 12x2² − 32 + 20 cos(x1 − 2.5) cos(x2 − 2.9)            (P7.2.3)
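For part (a), a short script along the following lines can tabulate the signs needed for the classification. This is a sketch: the coordinates are those listed in Table P7.2.1, and only the diagonal second derivatives returned by g7p02() are inspected, as the problem itself suggests (the cross derivative is ignored).

%sketch: classify the nine stationary points by the signs of d2f
pts = [ 0.6965 -0.1423;  2.5463 -0.1896;  2.5209  2.9027;
       -0.3865  2.9049; -2.6964  2.9031; -1.6926 -0.1183;
       -2.6573 -2.8219; -0.3227 -2.4257;  2.5216 -2.8946];
for k = 1:size(pts,1)
  [df,d2f] = g7p02(pts(k,:)); %diagonal 2nd derivatives (P7.2.3)
  if all(d2f > 0),     ch = 'minimum (m)';
  elseif all(d2f < 0), ch = 'maximum (M)';
  else                 ch = 'saddle point (S)';
  end
  fprintf('(%d) [%7.4f %7.4f]: d2f = [%+7.2f %+7.2f] -> %s\n', ...
          k, pts(k,1), pts(k,2), d2f(1), d2f(2), ch)
end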
(b) Apply the Nelder–Mead method, the steepest descent method, the Newton method, simulated annealing (SA), the genetic algorithm (GA), and the MATLAB built-in routines fminunc() and fminsearch() to minimize the objective function (P7.2.1), and fill in Table P7.2.2 with the number and character of the point reached by each method.

Table P7.2.2 Points Reached by Several Optimization Routines

For each initial point x0, fill in the point reached by each of the routines (Nelder–Mead, steepest descent, Newton, fminunc, fminsearch, SA, GA); the reached point/character entries below are given for reference.

  Initial point x0     Reached point/character
  (0, 0)               (5)/m
  (1, 0)               (3)/G
  (1, 1)               (9)/m
  (0, 1)               (3)/G
  (−1, 1)              (5)/m
  (−1, 0)              ≈(3)/G
  (−1, −1)             (3)/G
  (0, −1)              (9)/m
  (1, −1)              (9)/m
  (2, 2)               (3)/G
  (−2, −2)             (7)/m

(c) Overall, the point reached by each minimization algorithm depends on the starting point, that is, the initial value of the independent variable, as well as on the characteristics of the algorithm. Fill in the blanks in the following sentences.

Most algorithms succeed in finding the global minimum if they start from the initial point ( , ), ( , ), ( , ), or ( , ). An algorithm most probably goes to the closest local minimum (5) if launched from ( , ) or ( , ), and it may go to the closest local minimum (7) if launched from ( , ) or ( , ). If launched from ( , ), it may go to one of the two closest local minima (7) and (9), and if launched from ( , ), it most probably goes to the closest local minimum (9). But the global optimization techniques SA and GA seem to work well almost regardless of the starting point, although not always.

7.3 Minimization of an Objective Function Having Many Local Minima/Maxima
Consider the problem of minimizing the objective function

  Min f(x) = sin(1/x)/((x − 0.2)² + 0.1)                            (P7.3.1)

which is depicted in Fig. P7.3. The graph shows that this function has infinitely many local minima/maxima around x = 0 and the global minimum at about x = 0.2.

Figure P7.3 The graph of f(x) = sin(1/x)/((x − 0.2)² + 0.1), which has many local minima/maxima.

(a) Find the solution by using the MATLAB built-in routine "fminbnd()". Is it plausible?

(b) With nine different values of the initial guess x0 = 0.1, 0.2, ..., 0.9, use the four MATLAB routines "opt_Nelder()", "opt_steep()", "fminunc()", and "fminsearch()" to solve the problem. Among those 36 tryouts, how many times did you get the right solution?

(c) With the values of the parameters set to l = 0, u = 1, q = 1, εf = 10⁻⁹, kmax = 1000 and the initial guesses x0 = 0.1, 0.2, ..., 0.9, use the SA (simulated annealing) routine "sim_anl()" to solve the problem. You can test the performance of the routine, and your luck, by running it four times for the same problem and finding the probability of getting the right solution.

(d) With the values of the parameters set to l = 0, u = 1, Np = 30, Nb = 12, Pc = 0.5, Pm = 0.01, η = 1, kmax = 1000 and the initial guesses x0 = 0.1, 0.2, ..., 0.9, use the GA (genetic algorithm) routine "genetic()" to solve the problem. As in (c), you can run the routine four times for the same problem and find the probability of getting the right solution in order to test the performance of the routine and your luck.

7.4 Linear Programming Method
Consider the problem of maximizing a linear objective function

  Max f(x) = fᵀx = [3 2 −1][x1 x2 x3]ᵀ                              (P7.4.1a)

subject to the constraints

  3x1 − 2x2 = −2, −3x1 − 4x2 ≥ −7, −2x1 − x2 ≥ −3,
  and l = [0 0 0]ᵀ ≤ x = [x1 x2 x3]ᵀ ≤ [10 10 10]ᵀ = u              (P7.4.1b)

Jessica is puzzled by this problem because it is a maximization rather than a minimization. How would you suggest she solve it? Make a program that uses the MATLAB built-in routines "linprog()" and "fmincon()" to solve this problem and run it to get the solutions.
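A sketch of the standard remedy, not the book's solution program, reading the first row of (P7.4.1b) as an equality constraint as in (7.3.11): maximize fᵀx by minimizing −fᵀx, and convert each '≥' constraint into a '≤' constraint by negating both of its sides.

%sketch: recast the maximization (P7.4.1) as a minimization for linprog()
f = [3 2 -1];                         %objective coefficients to MAXIMIZE
Aeq = [3 -2 0]; beq = -2;             %3*x1 - 2*x2 = -2 (equality row)
A = [-3 -4 0; -2 -1 0]; b = [-7; -3]; %the '>=' rows of (P7.4.1b)
l = zeros(3,1); u = 10*ones(3,1);
% Max f'x <=> Min (-f)'x, and A*x >= b <=> (-A)*x <= -b
[xo,fo_min] = linprog(-f,-A,-b,Aeq,beq,l,u);
fo_max = -fo_min %the maximized value of the original objective

The same recasting works for "fmincon()", with the objective function defined to return −(3x1 + 2x2 − x3).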
7.5 Constrained Optimization and Penalty Method
Consider the problem of minimizing the nonlinear objective function

  Min_x f(x) = −3x1 − 2x2 + M(3x1 − 2x2 + 2)²                       (P7.5.1a)
  (M: a large positive number)

subject to the constraints

  3x1 + 4x2 ≤ 7, −2x1 − x2 ≥ −3,
  and l = [0 0]ᵀ ≤ x = [x1 x2]ᵀ ≤ [10 10]ᵀ = u                      (P7.5.1b)

(a) With the two values of the weighting factor M = 20 and 10,000 in the objective function (P7.5.1a), apply the MATLAB built-in routine "fmincon()" to find the solutions to the above constrained minimization problem. In order to do this job, you may have to pass the variable parameter M to the objective function (defined in an M-file), either through "fmincon()" or directly, by declaring the parameter as global both in the main program and in the M-file defining (P7.5.1a). In case you are going to have the parameter passed through "fmincon()" to the objective function, you should have the parameter included in the input argument list of the objective function, as

function f = f7p05M(x,M)
f = -3*x(1) - 2*x(2) + M*(3*x(1) - 2*x(2) + 2).^2;

Additionally, you should give empty matrices ([]) as the ninth input argument (for a nonlinear inequality/equality constraint function 'nonlcon') as well as the tenth one (for 'options'), and the value of M as the eleventh:

xo = fmincon('f7p05M',x0,A,b,[],[],l,u,[],[],M)

For reference, type 'help fmincon' into the MATLAB command window.

(b) Noting that the third (squared) term of the objective function (P7.5.1a) has its minimum value of zero for 3x1 − 2x2 + 2 = 0, and thus actually represents the penalty (Section 7.2.2) imposed for not satisfying the equality constraint

  3x1 − 2x2 + 2 = 0                                                 (P7.5.2)

tell which of the solutions obtained in (a) is more likely to satisfy this constraint, and support your answer by comparing the values of the left-hand side of this equality for the two solutions.
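For part (a), the following driver is a minimal sketch of the experiment, assuming MATLAB 7 or later so that an anonymous function can capture M directly; the book itself targets MATLAB 6.x, where the string-name and extra-argument mechanism shown above would be used instead.

%sketch of the penalty-weight experiment in part (a), assuming MATLAB 7+
A = [3 4; 2 1]; b = [7; 3]; %-2*x1 - x2 >= -3 rewritten as 2*x1 + x2 <= 3
l = [0; 0]; u = [10; 10]; x0 = [0; 0];
for M = [20 10000]
  fM = @(x) -3*x(1) - 2*x(2) + M*(3*x(1) - 2*x(2) + 2)^2; %(P7.5.1a)
  xo = fmincon(fM,x0,A,b,[],[],l,u);
  fprintf('M = %5d: xo = [%.4f %.4f], 3*x1 - 2*x2 + 2 = %.6f\n', ...
          M, xo(1), xo(2), 3*xo(1) - 2*xo(2) + 2)
end

Comparing the printed residuals 3x1 − 2x2 + 2 for the two values of M indicates which solution better satisfies (P7.5.2), as asked in part (b).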
