Ebook: Fundamental Numerical Methods and Data Analysis, Part 2


Part 2 of the book "Fundamental Numerical Methods and Data Analysis" contains: numerical solution of differential and integral equations; least squares, Fourier analysis, and related approximation norms; probability theory and statistics; and sampling distributions of moments, statistical tests, and procedures.

5  Numerical Solution of Differential and Integral Equations

The aspect of the calculus of Newton and Leibnitz that allowed the mathematical description of the physical world is the ability to incorporate derivatives and integrals into equations that relate various properties of the world to one another. Thus, much of the theory that describes the world in which we live is contained in what are known as differential and integral equations. Such equations appear not only in the physical sciences, but in biology, sociology, and all scientific disciplines that attempt to understand the world in which we live. Innumerable books and entire courses of study are devoted to the study of the solution of such equations, and most college majors in science and engineering require at least one such course of their students. These courses generally cover the analytic closed-form solution of such equations. But many of the equations that govern the physical world have no solution in closed form. Therefore, to find the answer to questions about the world in which we live, we must resort to solving these equations numerically. Again, the literature on this subject is voluminous, so we can only hope to provide a brief introduction to some of the basic methods widely employed in finding these solutions. Also, the subject is by no means closed, so the student should be on the lookout for new techniques that prove increasingly efficient and accurate.

5.1  The Numerical Integration of Differential Equations

When we speak of a differential equation, we simply mean any equation where the dependent variable appears as well as one or more of its derivatives. The highest derivative that is present determines the order of the differential equation, while the highest power of the dependent variable or its derivative appearing in the equation sets its degree. Theories which employ differential equations usually will not be limited to single equations, but may include sets of simultaneous equations representing the phenomena they describe. Thus, we must say something about the solutions of sets of such equations. Indeed, changing a high order differential equation into a system of first order differential equations is a standard approach to finding the solution to such equations. Basically, one simply replaces the higher order terms with new variables and includes the equations that define the new variables to form a set of first order simultaneous differential equations that replace the original equation. Thus a third order differential equation that had the form

    f'''(x) + αf''(x) + βf'(x) + γf(x) = g(x) ,                                (5.1.1)

could be replaced with a system of first order differential equations that looked like

    y'(x) + αz'(x) + βf'(x) + γf(x) = g(x)
    z'(x) = y(x)                                                               (5.1.2)
    f'(x) = z(x) .

This simplification means that we can limit our discussion to the solution of sets of first order differential equations with no loss of generality.
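To make the reduction in equation (5.1.2) concrete, here is a minimal Python sketch; the coefficient values, the forcing function g(x), the initial state, and the single forward-Euler update are illustrative assumptions, not part of the text.

    import numpy as np

    # Illustrative coefficients and forcing function for f''' + a*f'' + b*f' + c*f = g(x).
    a, b, c = 1.0, 2.0, 0.5
    def g(x):
        return np.sin(x)

    def derivatives(x, state):
        # state = (f, z, y) with z = f' and y = f'', as in equation (5.1.2)
        f, z, y = state
        return np.array([z, y, g(x) - a * y - b * z - c * f])

    # One forward-Euler step, simply to show how the first order system is advanced.
    x, state, h = 0.0, np.array([1.0, 0.0, 0.0]), 0.01   # assumed initial values
    state = state + h * derivatives(x, state)

Any of the one-step methods described below can then be applied component by component to such a system.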
One remembers from beginning calculus that the derivative of a constant is zero. This means that it is always possible to add a constant to the general solution of a first order differential equation unless some additional constraint is imposed on the problem. These are generally called the constants of integration. These constants will be present even if the equations are inhomogeneous, and in this respect differential equations differ significantly from functional algebraic equations. Thus, for a problem involving differential equations to be fully specified, the constants corresponding to the derivatives present must be given in advance.

The nature of the constants (i.e., the fact that their derivatives are zero) implies that there is some value of the independent variable for which the dependent variable has the value of the constant. Thus, constants of integration not only have a value, but they have a "place" where the solution has that value. If all the constants of integration are specified at the same place, they are called initial values and the problem of finding a solution is called an initial value problem. In addition, to find a numerical solution, the range of the independent variable for which the solution is desired must also be specified. This range must contain the initial value of the independent variable (i.e., that value of the independent variable corresponding to the location where the constants of integration are specified). On occasion, the constants of integration are specified at different locations. Such problems are known as boundary value problems and, as we shall see, these require a special approach. So let us begin our discussion of the numerical solution of ordinary differential equations by considering the solution of first order initial value differential equations.

The general approach to finding a solution to a differential equation (or a set of differential equations) is to begin the solution at the value of the independent variable for which the solution is equal to the initial values. One then proceeds in a step by step manner to change the independent variable and move across the required range. Most methods for doing this rely on the local polynomial approximation of the solution, and all the stability problems that were a concern for interpolation will be a concern for the numerical solution of differential equations. However, unlike interpolation, we are not limited in our choice of the values of the independent variable to where we can evaluate the dependent variable and its derivatives. Thus, the spacing between solution points will be a free parameter. We shall use this variable to control the process of finding the solution and estimating its error.

Since the solution is to be locally approximated by a polynomial, we will have constrained the solution and the values of the coefficients of the approximating polynomial. This would seem to imply that before we can take a new step in finding the solution, we must have prior information about the solution in order to provide those constraints. This "chicken or egg" aspect to solving differential equations would be removed if we could find a method that only depended on the solution at the previous step. Then we could start with the initial value(s) and generate the solution at as many additional values of the independent variable as we needed. Therefore let us begin by considering one-step methods.

a.  One-Step Methods of the Numerical Solution of Differential Equations

Probably the most conceptually simple method of numerically integrating differential equations is Picard's method. Consider the first order differential equation

    y'(x) = g(x,y) .                                                           (5.1.3)

Let us directly integrate this over the small but finite range h, so that

    ∫_{y0}^{y} dy = ∫_{x0}^{x0+h} g(x,y) dx ,                                  (5.1.4)

which becomes

    y(x) = y0 + ∫_{x0}^{x0+h} g(x,y) dx .                                      (5.1.5)

Now to evaluate the integral and obtain the solution, one must already know the answer in order to evaluate g(x,y). This can be done iteratively by turning equation (5.1.5) into a fixed-point iteration formula, so that

    y^(k)(x0+h) = y0 + ∫_{x0}^{x0+h} g[x, y^(k-1)(x)] dx ,
    y^(k-1)(x) = y^(k-1)(x0+h) .                                               (5.1.6)
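As a minimal sketch of the naive iteration in equation (5.1.6), the Python fragment below holds y at its previous end-point value across the interval and evaluates the integral with a simple trapezoid sum; the sample right-hand side, the step h, and the iteration count are illustrative assumptions.

    import numpy as np

    def picard_step(g, x0, y0, h, iterations=20):
        # Naive Picard iteration, equation (5.1.6): y is frozen at its previous
        # end-point value while the integral over [x0, x0+h] is evaluated numerically.
        y = y0
        xs = np.linspace(x0, x0 + h, 101)
        for _ in range(iterations):
            integrand = g(xs, y)                                   # y held constant over the interval
            y = y0 + np.sum((integrand[1:] + integrand[:-1]) * np.diff(xs) / 2.0)
        return y

    # Example: y' = y with y(0) = 1 and h = 0.1; the iterates approach the fixed point
    # 1/(1 - 0.1) ≈ 1.1111, to be compared with the exact value exp(0.1) ≈ 1.1052.
    print(picard_step(lambda x, y: np.full_like(x, y), 0.0, 1.0, 0.1))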
A more inspired choice of the iterative value for y^(k-1)(x) might be

    y^(k-1)(x) = ½[y0 + y^(k-1)(x0+h)] .                                       (5.1.7)

However, an even better approach would be to admit that the best polynomial fit to the solution that can be achieved for two points is a straight line, which can be written as

    y(x) = y0 + a(x-x0) = {[y^(k-1)(x0+h)](x-x0) + [y(x0)](x0+h-x)}/h .        (5.1.8)

While the right hand side of equation (5.1.8) can be used as the basis for a fixed point iteration scheme, the iteration process can be completely avoided by taking advantage of the functional form of g(x,y). The linear form of y can be substituted directly into g(x,y) to find the best value of a. The equation that constrains a is then simply

    ah = ∫_{x0}^{x0+h} g[x, (ax + y0)] dx .                                    (5.1.9)

This value of a may then be substituted directly into the center term of equation (5.1.8), which in turn is evaluated at x = x0+h. Even should it be impossible to evaluate the right hand side of equation (5.1.9) in closed form, any of the quadrature formulae of the previous chapter can be used to obtain a value for a directly. However, one should use a formula with a degree of precision consistent with the linear approximation of y.

To see how these various forms of Picard's method actually work, consider the differential equation

    y'(x) = xy ,                                                               (5.1.10)

subject to the initial conditions

    y(0) = 1 .                                                                 (5.1.11)

Direct integration yields the closed form solution

    y = e^(x²/2) .                                                             (5.1.12)

The rapidly varying nature of this solution will provide a formidable test of any integration scheme, particularly if the step size is large. But this is exactly what we want if we are to test the relative accuracy of different methods. In general, we can cast Picard's method as

    y(x) = 1 + ∫_0^x z y(z) dz ,                                               (5.1.13)

where equations (5.1.6) - (5.1.8) represent various methods of specifying the behavior of y(z) for purposes of evaluating the integrand. For purposes of demonstration, let us choose h = 1, which we know is unreasonably large. However, such a large choice will serve to demonstrate the relative accuracy of our various choices quite clearly. Further, let us obtain the solution at x = 1 and x = 2.

The naive choice of equation (5.1.6) yields an iteration formula of the form

    y^(k)(x0+h) = 1 + ∫_{x0}^{x0+h} z y^(k-1)(x0+h) dz = 1 + [h(x0+h/2)] y^(k-1)(x0+h) .     (5.1.14)

This may be iterated directly to yield the results in column (a) of Table 5.1, but the fixed point can be found directly by simply solving equation (5.1.14) for y^(∞)(x0+h) to get

    y^(∞)(x0+h) = (1 - hx0 - h²/2)⁻¹ .                                         (5.1.15)

For the first step, when x0 = 0, the limiting value for the solution is 2. However, as the solution proceeds, the iteration scheme clearly becomes unstable.

    Table 5.1
    Results for Picard's Method

    i      (a) y(1)    (b) y(1)    (c) y(1)    (d) yc(1)
    0      1.0         1.0
    1      1.5         1.5
    2      1.75        1.625
    3      1.875       1.6563
    4      1.938       1.6641
    5      1.969       1.6660
    ∞      2.000       5/3         7/4         1.6487

    i      (a) y(2)    (b) y(2)    (c) y(2)    (d) yc(2)
    0      4.0         1.6666
    1      7.0         3.0000
    2      11.5        4.5000
    3      18.25       5.6250
    4      28.375      6.4688
    5      43.56       7.1015
    ∞      ∞           9.0000      17.5        7.3891

Estimating the appropriate value of y(x) by averaging the values at the limits of the integral, as indicated by equation (5.1.7), tends to stabilize the procedure, yielding the iteration formula

    y^(k)(x0+h) = 1 + ½ ∫_{x0}^{x0+h} z[y(x0) + y^(k-1)(x0+h)] dz
                = 1 + [h(x0+h/2)][y(x0) + y^(k-1)(x0+h)]/2 ,                   (5.1.16)

the application of which is contained in column (b) of Table 5.1.
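The first-step entries of columns (a) and (b) of Table 5.1 can be checked with a few lines of Python. This is only a verification sketch of equations (5.1.14) and (5.1.16) for the values used in the text, x0 = 0 and h = 1; the number of printed iterates is an arbitrary choice.

    x0, h, y_x0 = 0.0, 1.0, 1.0
    factor = h * (x0 + h / 2)                       # the integral of z from x0 to x0+h

    ya = yb = y_x0
    for i in range(6):
        print(i, round(ya, 4), round(yb, 4))
        ya = 1.0 + factor * ya                      # naive iteration, eq. (5.1.14)
        yb = 1.0 + factor * (y_x0 + yb) / 2.0       # averaged iteration, eq. (5.1.16)

    # The fixed points are 1/(1 - h*x0 - h*h/2) = 2 for the naive scheme and 5/3 for the
    # averaged scheme, to be compared with the exact value exp(0.5) ≈ 1.6487.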
The limiting value of the iteration formula in equation (5.1.16) can also be found analytically to be

    y^(∞)(x0+h) = [1 + h(x0+h/2) y(x0)/2] / [1 - h(x0+h/2)/2] ,                (5.1.17)

which clearly demonstrates the stabilizing influence of the averaging process for this rapidly increasing solution.

Finally, we can investigate the impact of a linear approximation for y(x), as given by equation (5.1.8). Let us assume that the solution behaves linearly as suggested by the center term of equation (5.1.8). This can be substituted directly into the explicit form for the solution given by equation (5.1.13) and the value for the slope, a, obtained as in equation (5.1.9). This process yields

    a = y(x0)(x0+h/2)/[1 - (x0h/2) - (h²/3)] ,                                 (5.1.18)

which with the linear form for the solution gives the solution without iteration. The results are listed in Table 5.1 in column (c). It is tempting to think that a combination of the right hand side of equation (5.1.7) integrated in closed form in equation (5.1.13) would give a more exact answer than that obtained with the help of equation (5.1.18), but such is not the case. An iteration formula developed in such a manner can be iterated analytically, as was done with equations (5.1.15) and (5.1.17), to yield exactly the results in column (c) of Table 5.1. Thus the best one can hope for with a linear Picard's method is given by equation (5.1.8) with the slope, a, specified by equation (5.1.9).
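As a check on equation (5.1.18), the short Python fragment below reproduces the column (c) entries of Table 5.1; it is purely a verification sketch, and the two-step loop simply mirrors the h = 1 example used in the text.

    import math

    def linear_picard_step(x0, y0, h):
        # slope from equation (5.1.18), then y(x0+h) = y0 + a*h from equation (5.1.8)
        a = y0 * (x0 + h / 2) / (1 - x0 * h / 2 - h * h / 3)
        return y0 + a * h

    x, y, h = 0.0, 1.0, 1.0
    for step in range(2):
        y = linear_picard_step(x, y, h)
        x += h
        print(x, y, math.exp(x * x / 2))    # prints 1.75 and 17.5 against 1.6487 and 7.3891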
However, there is another approach to finding one-step methods. The differential equation (5.1.3) has a full family of solutions depending on the initial value (i.e., the solution at the beginning of the step). That family of solutions is restricted by the nature of g(x,y). The behavior of that family in the neighborhood of x = x0+h can shed some light on the nature of the solution at x = x0+h. This is the fundamental basis for one of the more successful and widely used one-step methods known as the Runge-Kutta method. The Runge-Kutta method is also one of the few methods in numerical analysis that does not rely directly on polynomial approximation, for, while it is certainly correct for polynomials, the basic method assumes that the solution can be represented by a Taylor series.

So let us begin our discussion of Runge-Kutta formulae by assuming that the solution can be represented by a finite Taylor series of the form

    y_{n+1} = y_n + h y'_n + (h²/2!) y''_n + ⋯ + (hᵏ/k!) y_n^(k) .             (5.1.19)

Now assume that the solution can also be represented by a function of the form

    y_{n+1} = y_n + h{α0 g(x_n,y_n) + α1 g[(x_n+µ1h),(y_n+b1h)]
              + α2 g[(x_n+µ2h),(y_n+b2h)] + ⋯ + αk g[(x_n+µkh),(y_n+bkh)]} .   (5.1.20)

This rather convoluted expression, while appearing to depend only on the value of y at the initial step (i.e., y_n), involves evaluating the function g(x,y) all about the solution point x_n, y_n (see Figure 5.1). By setting equations (5.1.19) and (5.1.20) equal to each other, we see that we can write the solution in the form

    y_{n+1} = y_n + α0 t0 + α1 t1 + ⋯ + αk tk ,                                (5.1.21)

where the ti can be expressed recursively by

    t0 = h g(x_n, y_n)
    t1 = h g[(x_n+µ1h), (y_n+λ1,0 t0)]
    t2 = h g[(x_n+µ2h), (y_n+λ2,0 t0 + λ2,1 t1)]                               (5.1.22)
      ⋮
    tk = h g[(x_n+µkh), (y_n+λk,0 t0 + λk,1 t1 + ⋯ + λk,k-1 tk-1)] .

Now we must determine k+1 values of α, k values of µ, and k×(k+1)/2 values of λi,j. But we only have k+1 terms of the Taylor series to act as constraints. Thus, the problem is hopelessly under-determined. This indeterminacy will give rise to entire families of Runge-Kutta formulae for any order k. In addition, the algebra required to eliminate as many of the unknowns as possible is quite formidable and not unique due to the under-determined nature of the problem. Thus we will content ourselves with dealing only with low order formulae which demonstrate the basic approach and nature of the problem.

Let us consider the lowest order that provides some insight into the general aspects of the Runge-Kutta method, that is, k = 1. With k = 1, equations (5.1.21) and (5.1.22) become

    y_{n+1} = y_n + α0 t0 + α1 t1
    t0 = h g(x_n, y_n)                                                         (5.1.23)
    t1 = h g[(x_n+µh), (y_n+λt0)] .

Here we have dropped the subscripts on λ as there will only be one of them. However, there are still four free parameters and we really only have three equations of constraint.

Figure 5.1 shows the solution space for the differential equation y' = g(x,y). Since the initial value is different for different solutions, the space surrounding the solution of choice can be viewed as being full of alternate solutions. The two dimensional Taylor expansion of the Runge-Kutta method explores this solution space to obtain a higher order value for the specific solution in just one step.
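The two-stage structure of equations (5.1.23) translates directly into code. The Python sketch below leaves α0, α1, µ, and λ as inputs, since the constraints those parameters must satisfy are only derived in the expansion that follows; the function and parameter names are the only assumptions.

    def rk_two_stage(g, x_n, y_n, h, a0, a1, mu, lam):
        # One step of the general two-stage formula, equations (5.1.23).
        t0 = h * g(x_n, y_n)
        t1 = h * g(x_n + mu * h, y_n + lam * t0)
        return y_n + a0 * t0 + a1 * t1

    # With a0 = a1 = 1/2 and mu = lam = 1 this reduces to the c = 1/2 formula
    # that appears below as equation (5.1.34).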
If we expand g(x,y) about x_n, y_n in a two dimensional Taylor series, we can write

    g[(x_n+µh),(y_n+λt0)] = g(x_n,y_n) + µh ∂g(x_n,y_n)/∂x + λt0 ∂g(x_n,y_n)/∂y
        + ½[µ²h² ∂²g(x_n,y_n)/∂x² + 2µhλt0 ∂²g(x_n,y_n)/∂x∂y + λ²t0² ∂²g(x_n,y_n)/∂y²] + ⋯ .   (5.1.24)

Making use of the third of equations (5.1.23), we can explicitly write t1 as

    t1 = h g(x_n,y_n) + h²[µ ∂g(x_n,y_n)/∂x + λ g(x_n,y_n) ∂g(x_n,y_n)/∂y]
        + (h³/2)[µ² ∂²g(x_n,y_n)/∂x² + 2µλ g(x_n,y_n) ∂²g(x_n,y_n)/∂x∂y + λ² g²(x_n,y_n) ∂²g(x_n,y_n)/∂y²] + ⋯ .   (5.1.25)

Direct substitution into the first of equations (5.1.23) gives

    y_{n+1} = y_n + h(α0+α1) g(x_n,y_n) + α1h²[µ ∂g(x_n,y_n)/∂x + λ g(x_n,y_n) ∂g(x_n,y_n)/∂y]
        + α1(h³/2)[µ² ∂²g(x_n,y_n)/∂x² + 2µλ g(x_n,y_n) ∂²g(x_n,y_n)/∂x∂y + λ² g²(x_n,y_n) ∂²g(x_n,y_n)/∂y²] .   (5.1.26)

We can also expand y' in a two dimensional Taylor series, making use of the original differential equation (5.1.3), to get

    y'   = g(x,y)
    y''  = ∂g(x,y)/∂x + [∂g(x,y)/∂y] y' = ∂g(x,y)/∂x + g(x,y) ∂g(x,y)/∂y
    y''' = ∂y''/∂x + [∂y''/∂y] y'                                              (5.1.27)
         = ∂²g(x,y)/∂x² + 2g(x,y) ∂²g(x,y)/∂x∂y + g²(x,y) ∂²g(x,y)/∂y²
           + [∂g(x,y)/∂y][∂g(x,y)/∂x + g(x,y) ∂g(x,y)/∂y] .

Substituting this into the standard form of the Taylor series as given by equation (5.1.19) yields

    y_{n+1} = y_n + h g(x,y) + (h²/2)[∂g(x,y)/∂x + g(x,y) ∂g(x,y)/∂y]
        + (h³/6){∂²g(x,y)/∂x² + 2g(x,y) ∂²g(x,y)/∂x∂y + g²(x,y) ∂²g(x,y)/∂y²
        + [∂g(x,y)/∂y][∂g(x,y)/∂x + g(x,y) ∂g(x,y)/∂y]} + ⋯ .                  (5.1.28)

Now by comparing this term by term with the expansion shown in equation (5.1.26), we can conclude that the free parameters α0, α1, µ, and λ must be constrained by

    (α0 + α1) = 1
    α1µ = ½                                                                    (5.1.29)
    α1λ = ½ .

As we suggested earlier, the formula is under-determined by one constraint. However, we may use the constraint equations as represented by equation (5.1.29) to express the free parameters in terms of a single constant c. Thus the parameters are

    α0 = 1 - c
    α1 = c                                                                     (5.1.30)
    µ = λ = 1/(2c) ,

and the approximation formula becomes

    y_{n+1} = y_n + h g(x,y) + (h²/2)[∂g(x,y)/∂x + g(x,y) ∂g(x,y)/∂y]
        + (h³/8c)[∂²g(x,y)/∂x² + 2g(x,y) ∂²g(x,y)/∂x∂y + g²(x,y) ∂²g(x,y)/∂y²] .   (5.1.31)

We can match the first two terms of the Taylor series with any choice of c. The error term will then be of order O(h³) and specifically has the form

    R_{n+1} = -(h³/24c){[3 - 4c] y'''_n - 3[∂g(x_n,y_n)/∂y] y''_n} .           (5.1.32)

Clearly the most effective choice of c will depend on the solution, so that there is no general "best" choice. However, a number of authors recommend c = ½ as a general purpose value.

If we increase the number of terms in the series, the under-determination of the constants gets rapidly worse. More and more parameters must be chosen arbitrarily. When these formulae are given, the arbitrariness has often been removed by fiat. Thus one may find various Runge-Kutta formulae of the same order. For example, a common such fourth order formula is

    y_{n+1} = y_n + (t0 + 2t1 + 2t2 + t3)/6
    t0 = h g(x_n, y_n)
    t1 = h g[(x_n + ½h), (y_n + ½t0)]                                          (5.1.33)
    t2 = h g[(x_n + ½h), (y_n + ½t1)]
    t3 = h g[(x_n + h), (y_n + t2)] .

Here the "best" choice for the under-determined parameters has already been made, largely on the basis of experience.

If we apply these formulae to our test differential equation (5.1.10), we must first specify which Runge-Kutta formula we plan to use. Let us try the second order (i.e., exact for quadratic polynomials) formula given by equation (5.1.23) with the choice of constants given by equation (5.1.29) when c = ½. The formula then becomes

    y_{n+1} = y_n + ½t0 + ½t1
    t0 = h g(x_n, y_n)                                                         (5.1.34)
    t1 = h g[(x_n + h), (y_n + t0)] .

So that we may readily compare to the first order Picard formula, we will take h = 1 and y(0) = 1. Then, taking g(x,y) from equation (5.1.10), we get for the first step that

    t0 = h x0 y0 = (1)(0)(1) = 0
    t1 = h(x0+h)(y0+t0) = (1)(0+1)(1+0) = 1                                    (5.1.35)
    y(x0+h) = y1 = 1 + (½)(0) + (½)(1) = 3/2 .

The second step yields

    t0 = h x1 y1 = (1)(1)(3/2) = 3/2
    t1 = h(x1+h)(y1+t0) = (1)(1+1)(1+3/2) = 5                                  (5.1.36)
    y(x0+2h) = y2 = (3/2) + (½)(3/2) + (½)(5) = 19/4 .

    Table 5.2
    Sample Runge-Kutta Solutions

    Second Order Solution
      Step 1:  h = 1/2: ti = [0, 9/32], [1/4, 45/64]              y1 = 1.6172
               h = 1:   ti = 0.0, 1.0                             y1 = 1.5
               δy1 = 0.1172     h'1 = 0.8532*
      Step 2:  h = 1/2: ti = [0.8086, 2.1984], [1.8193, 5.1296]   y2 = 6.5951
               h = 1:   ti = 1.5, 5.0                             y2 = 4.75
               δy2 = 1.8451     h'2 = 0.0635

    Fourth Order Solution
      Step 1:  ti = 0.00000, 0.50000, 0.62500, 1.62500            y1 = 1.64587, 1.65583
      Step 2:  ti = 1.64583, 3.70313, 5.24609, 13.78384           y2 = 7.20051   yc = 7.38906

    * This value assumes that δy0 = 0.1.
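A short Python sketch, offered only as a check under the same assumptions (h = 1, y(0) = 1), reproduces the hand calculation of equation (5.1.35) for the second order formula and the fourth order values of Table 5.2.

    import math

    def g(x, y):                  # the test equation (5.1.10): y' = x*y
        return x * y

    def rk2_step(x, y, h):        # equation (5.1.34), i.e. the c = 1/2 choice
        t0 = h * g(x, y)
        t1 = h * g(x + h, y + t0)
        return y + 0.5 * t0 + 0.5 * t1

    def rk4_step(x, y, h):        # the classical fourth order formula (5.1.33)
        t0 = h * g(x, y)
        t1 = h * g(x + 0.5 * h, y + 0.5 * t0)
        t2 = h * g(x + 0.5 * h, y + 0.5 * t1)
        t3 = h * g(x + h, y + t2)
        return y + (t0 + 2 * t1 + 2 * t2 + t3) / 6

    h = 1.0
    print(rk2_step(0.0, 1.0, h))                     # 1.5, as in equation (5.1.35)
    y1 = rk4_step(0.0, 1.0, h)                       # about 1.64583
    y2 = rk4_step(1.0, y1, h)                        # about 7.20051
    print(y1, y2, math.exp(0.5), math.exp(2.0))      # exact values are 1.6487 and 7.3891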
The statistical design of an experiment is extremely important when dealing with an array of factors or variables whose interaction is unpredictable from theoretical considerations. There are many pitfalls to be encountered in this area of study, which is why it has become the domain of specialists. However, there is no substitute for the insight and ingenuity of the researcher in identifying the variables to be investigated. Any statistical study is limited in practice by the sample size and the systematic and unknown effects that may plague the study. Only the knowledgeable researcher will be able to identify the possible areas of difficulty. Statistical analysis may be able to confirm those suspicions, but will rarely find them without the foresight of the investigator. Statistical analysis is a valuable tool of research, but it is not meant to be a substitute for wisdom and ingenuity. The user must also always be aware that it is easy to phrase statistical inference so that the resulting statement says more than is justified by the analysis. Always remember that one does not "prove" hypotheses by means of statistical analysis. At best one may reject a hypothesis or add confirmatory evidence to support it. But the sample population is not the parent population, and there is always the chance that the investigator has been unlucky.

Chapter 8  Exercises

1.  Show that the variance of the t-probability density distribution function given by equation (8.1.2) is indeed σ²_t as given by equation (8.1.3).

2.  Use equation (8.1.7) to find the variance, mode, and skewness of the χ²-distribution function. Compare your results to equation (8.1.8).

3.  Find the mean, mode, and variance of the F-distribution function given by equation (8.1.11).

4.  Show that the limiting relations given by equations (8.1.13) - (8.1.15) are indeed correct.

5.  Use the numerical quadrature methods discussed in an earlier chapter to evaluate the probability integral for the t-test given by equation (8.2.5) for values of p = .1, 0.1, 0.01 and N = 10, 30, 100. Obtain values for t_p and compare with the results you would obtain from equation (8.2.6).

6.  Use the numerical quadrature methods discussed in an earlier chapter to evaluate the probability integral for the χ²-test given by equation (8.2.8) for values of p = .1, 0.1, 0.01 and N = 10, 30, 100. Obtain values for χ²_p and compare with the results you would obtain from using the normal curve for the χ²-probability density distribution function.

7.  Use the numerical quadrature methods discussed in an earlier chapter to evaluate the probability integral for the F-test given by equation (8.2.9) for values of p = .1, 0.1, 0.01, N1 = 10, 30, 100, and N2 = 1, 10, 30. Obtain values for F_p.

8.  Show how the various forms of the correlation coefficient given by equation (8.3.7) can be obtained from the definition given by the second term on the left.

9.  Find the various values of the 0.1% marginally significant correlation coefficients when n = 5, 10, 30, 100, 1000.

10. Find the correlation coefficient between X1 and Y1, and Y1 and Y2, in problem of chapter.

11. Use the F-test to decide when you have added enough terms to represent the table given in problem of chapter.

12. Use analysis of variance to show that the data in Table 8.1 imply that taking the bus and taking the ferry are important factors in populating the beach.
13. Use analysis of variance to determine if the examination represented by the data in Table 7.1 sampled a normal parent population, and at what level of confidence one can be sure of the result.

14. Assume that you are to design an experiment to find the factors that determine the quality of bread baked at 10 different bakeries. Indicate what would be your central concerns and how you would go about addressing them. Identify four factors that are liable to be of central significance in determining the quality of bread. Indicate how you would design an experiment to find out if the factors are indeed important.

Chapter 8  References and Supplemental Reading

1.  Croxton, F.E., Cowden, D.J., and Klein, S., "Applied General Statistics", (1967), Prentice-Hall, Inc., Englewood Cliffs, N.J.

2.  Weast, R.C., "CRC Handbook of Tables for Probability and Statistics", (1966), (Ed. W.H. Beyer), The Chemical Rubber Co., Cleveland.

3.  Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T., "Numerical Recipes: The Art of Scientific Computing", (1986), Cambridge University Press, Cambridge.

4.  Smith, J.G., and Duncan, A.J., "Sampling Statistics and Applications: Fundamentals of the Theory of Statistics", (1944), McGraw-Hill Book Company Inc., New York, London, pp. 18.

5.  Cochran, W.G., and Cox, G.M., "Experimental Designs", (1957), John Wiley and Sons, Inc., New York, pp. 10.

6.  Cochran, W.G., and Cox, G.M., "Experimental Designs", (1957), John Wiley and Sons, Inc., New York, pp. 145-147.

7.  Weast, R.C., "CRC Handbook of Tables for Probability and Statistics", (1966), (Ed. W.H. Beyer), The Chemical Rubber Co., Cleveland, pp. 63-65.

Index

A  Adams-Bashforth-Moulton Predictor-Corrector 136  Analysis of variance 220, 245  design matrix for 243  for one factor 242  Anti-correlation: meaning of 239  Approximation norm 174  Arithmetic mean 222  Associativity defined  Average 211  Axial vectors 11
B  Babbitt  Back substitution 30  Bairstow's method for polynomials 62  Bell-shaped curve and the normal curve 209  Binomial coefficient 99, 204  Binomial distribution function 204, 207  Binomial series 204  Binomial theorem 205  Bivariant distribution 219  Blocked data and experiment design 272  Bodewig 40  Bose-Einstein distribution function 210  Boundary value problem 122  a sample solution 140  compared to an initial value problem 145  defined 139  Bulirsch-Stoer method 136
C  Cantor, G.  Cartesian coordinates 8, 12  Causal relationship and correlation 239, 240  Central difference operator defined 99  Characteristic equation 49  of a matrix 49  Characteristic values 49  of a matrix 49  Characteristic vectors 49  of a matrix 49  Chebyschev polynomials 90  of the first kind 91  of the second kind 91  recurrence relation 91  relations between first and second kind 91  Chebyshev norm and least squares 190  defined 186  Chi square defined 227  distribution and analysis of variance 244  normalized 227  statistic for large N 230  Chi-square test confidence limits for 232  defined 232  meaning of 232  Cofactor of a matrix 28  Combination defined 204  Communitative law  Complimentary error function 233  Confidence level defined 231  and percentiles 232  for correlation coefficients 241, 242  for the F-test 234  Confounded interactions defined 250  Constants of integration for ordinary differential equations 122  Contravariant vector 16  Convergence of Gauss-Seidel iteration 47  Convergent iterative function criterion for 46  Coordinate transformation  Corrector
Adams-Moulton 136 Correlation coefficient and causality .241 and covariance 242 and least squares .242 defined 239 for many variables 241 for the parent population 241 meaning of 239, 240 symmetry of 242 Covariance .219 and the correlation coefficient 241 coefficient of .219 of a symmetric function 220 Covariant vectors definition 17 Cramer's rule 28 Cross Product 11 Crout Method 34 example of 35 Cubic splines constraints for .75 Cumulative probability and KS tests .235 Cumulative probability distribution of the parent population 235 Curl 19 definition of .19 Curve fitting defined 64 with splines 75 D Degree of a partial differential equation 146 of an ordinary differential equation 121 Degree of precision defined 102 for Gaussian quadrature 106 for Simpson's rule .104 for the Trapezoid rule 103 260 Degrees of freedom and correlation 241 defined 221 for binned data 236 for the F-statistic 230 for the F-test 233 for the t-distribution 227 in analysis of variance 244 Del operator 19 (see Nabula) Derivative from Richardson extrapolation 100 Descartes's rule of signs 57 Design matrix for analysis of variance 243 Determinant calculation by Gauss-Jordan Method 33 of a matrix transformational invariance of……… 47 Deviation from the mean 238 statistics of 237 Difference operator definition 19 Differential equations and linear 2-point boundary value problems 139 Bulirsch-Stoer method 136 error estimate for 130 ordinary, defined 121 partial 145 solution by one-step methods 122 solution by predictor-corrector methods 134 solution by Runga-Kutta method …126 step size control 130 systems of 137 Dimensionality of a vector Dirac delta function as a kernel for an integral equation 155 Directions cosines Numerical Methods and Data Analysis Dirichlet conditions for Fourier series .166 Dirichlet's theorem 166 Discrete Fourier transform .169 Distribution function for chi-square 227 for the t-statistic 226 of the F-statistic 229 Divergence .19 definition of .19 Double-blind experiments 246 E Effect defined for analysis of variance 244 Eigen equation 49 of a matrix 49 Eigen-vectors 49 of a matrix 49 sample solution for 50 Eigenvalues of a matrix .48, 49 sample solution for 50 Equal interval quadrature .112 Equations of condition for quadrature weights 106 Error analysis for non-linear least squares .186 Error function .232 Euler formula for complex numbers 168 Expectation value 221 defined 202 Experiment design 245 terminology for 249 using a Latin square 251 Experimental area 249 Extrapolation 77, 78 F F-distribution function defined 227 F-statistic 230 and analysis of variance 244 for large N .230 F-test and least squares 234 defined 233 for an additional parameter 234 meaning of 234 Factor in analysis of variance 242 of an experiment 249 Factored form of a polynomial 56 Factorial design 249 Fast Fourier Transform 92, 168 Fermi-Dirac distribution function 210 Field definition scalar vector Finite difference calculus fundemental theorem of 98 Finite difference operator use for numerical differentiation 98 First-order variances defined 237 Fixed-point defined 46 Fixed-point iteration theory 46 and integral equations 153 and non-linear least squares 182, 186 and Picard's method 123 for the corrector in ODEs 136 Fourier analysis 164 Fourier integral 167 Fourier series 92, 160 and the discrete Fourier transform 169 coefficients for 165 convergence of 166 Fourier transform 92, 164 defined 167 for a discrete function 169 inverse of 168 Fredholm equation defined 146 solution by iteration 153 solution of Type 147 
solution of Type 148 261 Index Freedom degrees of 221 Fundamental theorem of algebra 56 G Galton, Sir Francis 199 Gauss, C.F 106, 198 Gauss elimination and tri-diagonal equations……………38 Gauss Jordan Elimination 30 Gauss-Chebyschev quadrature and multi-dimension quadrature .114 Gauss-Hermite quadrature .114 Gauss-iteration scheme example of 40 Gauss-Jordan matrix inversion example of 32 Gauss-Laguerre quadrature 117 Gauss-Legendre quadrature 110 and multi-dimension quadrature .115 Gauss-Seidel Iteration 39 example of 40 Gaussian Elimination .29 Gaussian error curve 210 Gaussian quadrature 106 compared to other quadrature formulae112 compared with Romberg quadrature.111 degree of precision for 107 in multiple dimensions 113 specific example of 108 Gaussian-Chebyschev quadrature 110 Gegenbauer polynomials 91 Generating function for orthogonal polynomials87 Gossett 233 Gradient 19 definition of .19 of the Chi-squared surface………… 183 Hermitian matrix definition Higher order differential equations as systems of first order equations……………… 140 Hildebrandt 33 Hollerith Hotelling 40 Hotelling and Bodewig method example of 42 Hyper-efficient quadrature formula for one dimension 103 in multiple dimensions 115 Hypothesis testing and analysis of variance 245 I Identity operator 99 Initial values for differential equations 122 Integral equations defined 146 homogeneous and inhomogeneous 147 linear types 147 Integral transforms 168 Interaction effects and experimental design 251 Interpolation by a polynomial 64 general theory 63 Interpolation formula as a basis for quadrature formulae……………………104 Interpolative polynomial example of 68 Inverse of a Fourier Transform 168 Iterative function convergence of 46 defined 46 multidimensional 46 Iterative Methods and linear equations 39 H J Heisenberg Uncertainty Principle 211 Hermite interpolation .72 as a basis for Gaussian quadrature….106 Hermite Polynomials .89 recurrence relation………… .89 Jacobi polynomials 91 and multi-dimension Gaussian quadrature 114 Jacobian 113 Jenkins-Taub method for polynomials 63 262 Numerical Methods and Data Analysis K Kernel of an integral equation 148 and uniqueness of the solution… …154 effect on the solution .154 Kolmogorov-Smirnov tests 235 Type .236 Type .236 Kronecker delta 9, 41, 66 definition .6 Kurtosis 212 of a function of the normal curve 218 of the t-distribution 226 L Lagrange Interpolation 64 and quadrature formulae 103 Lagrange polynomials for equal intervals .66 relation to Gaussian quadrature…… 107 specific examples of 66 Lagrangian interpolation and numerical differention……………99 weighted form .84 Laguerre Polynomials 88 recurrence relation 89 Laplace transform defined 168 Latin square defined 251 Least square coefficients errors of 176, 221 Least Square Norm defined 160 Least squares and analysis of variance 243 and correlation coefficients…………236 and maximum likelihood 222 and regression analysis .199 and the Chebyshev norm 190 for linear functions 161 for non-linear problems 181 with errors in the independent variable181 Legendre, A 160, 198 Legendre Approximation .160, 164 Legendre Polynomials 87 for Gaussian quadrature 108 recurrence relation 87 Lehmer-Schur method for polynomials 63 Leibnitz 97 Levels of confidence defined 231 Levi-Civita Tensor 14 definition 14 Likelihood defined 221 213 maximum value for 221 Linear correlation 236 Linear equations formal solution for 28 Linear Programming 190 and the Chebyshev norm 190 Linear transformations Logical 'or' 200 Logical 'and' 200 M Macrostate 210 Main effects and 
experimental design……… 251 Matrix definition factorization 34 Matrix inverse improvement of 41 Matrix product definition Maximum likelihood and analysis of variance 243 of a function 222 Maxwell-Boltzmann statistics 210 Mean 211, 212 distribution of 225 of a function 211, 212 of the F-statistic 230 of the normal curve 218 of the t-distribution 226 Mean square error and Chi-square 227 statistical interpretation of………… 238 Mean square residual (see mean square error) determination of 179 263 Index Median defined 214 of the normal curve 218 Microstate 210 Milne predictor .136 Mini-max norm 186 (see also Chebyshev norm) Minor of a matrix .28 Mode 222 defined 213 of a function 214 of chi-square .227 of the F-statistic 230 of the normal curve 218 of the t-distribution 226 Moment of a function 211 Monte Carlo methods 115 quadrature 115 Multi-step methods for the solution of ODEs… …………………….134 Multiple correlation .245 Multiple integrals 112 Multivariant distribution 219 N Nabula 19 Natural splines 77 Neville's algorithm for polynomials .71 Newton, Sir I .97 Newton-Raphson and non-linear least squares 182 for polynomials 61 Non-linear least squares errors for 186 Non-parametric statistical tests (see Kolmogorov-Smirnov tests) .236 Normal curve 209 and the t-,F-statistics 230 Normal distribution 221 and analysis of variance 245 Normal distribution function 209 Normal equations 161 for non-linear least squares .181 for orthogonal functions 164 for the errors of the coefficients 175 264 for unequally spaced data 165 matrix development for tensor product …………………………… 162 for weighted 163 for Normal matrices defined for least squares 176 Null hypothesis 230 for correlation 240 for the K-S tests 235 Numerical differentiation 97 Numerical integration 100 O Operations research 190 Operator 18 central difference 99 difference 19 differential 18 finite difference 98 finite difference dentity……… 99 identity 19 integral 18 shift 19, 99 summation 19 vector 19 Optimization problems 199 Order for an ordinary differential equation 121 of a partial differential equation…….146 of an approximation 63 of convergence 64 Orthogonal polynomials and Gaussian quadrature 107 as basis functions for iterpolation 91 some specific forms for 90 Orthogonal unitary transformations 10 Orthonormal functions…………………………… 86 Orthonormal polynomials defined 86 Orthonormal transformations 10, 48 Over relaxation for linear equations 46 P Numerical Methods and Data Analysis Parabolic hypersurface and non-linear least squares 184 Parametric tests 235 (see t-,F-,and chi-square tests) Parent population 217, 221, 231 and statistics 200 correlation coefficients in 239 Partial correlation 245 Partial derivative defined 146 Partial differential equation 145 and hydrodynamics 145 classification of 146 Pauli exclusion principle 210 Pearson correlation coefficient .239 Pearson, K .239 Percent level 232 Percentile defined 213 for the normal curve 218 Permutation defined 204 Personal equation 246 Photons …………………………… 229 Picard's method 123 Poisson distribution 207 Polynomial factored form for 56 general definition 55 roots of 56 Polynomial approximation .97 and interpolation theory 63 and multiple quadrature 112 and the Chebyshev norm 187 Polynomials Chebyschev .91 for splines 76 Gegenbauer 90 Hermite .90 Jacobi 90 Lagrange .66 Laguerre 89 Legendre .87 orthonormal .86 Ultraspherical 90 Polytope 190 Power Spectra 92 Precision of a computer 25 Predictor Adams-Bashforth 136 stability of 134 Predictor-corrector for solution of ODEs 134 Probabilitly 
definition of 199 Probability density distribution function 203 defined 203 Probable error 218 Product polynomial defined 113 Proper values 49 of a matrix 49 Proper vectors 49 of a matrix 49 Protocol for a factorial design 251 Pseudo vectors 11 Pseudo-tensor 14 (see tensor density) Pythagoras theorem and least squares 179 Q Quadrature 100 and integral equations 148 for multiple integrals 112 Monte Carlo 115 Quadrature weights determination of 105 Quartile defined 214 upper and lower 214 Quotient polynomial 80 interpolation with 82 (see rational function) 80 R Random variable defined 202 moments for 212 Rational function 80 and the solution of ODEs 137 265 Index Recurrence relation for Chebyschev polynomials .91 for Hermite polynomials 90 for Laguerre polynomials 89 for Legendre polynomials 87 for quotient polynomials .81 for rational interpolative functions 81 Recursive formula for Lagrangian polynomials68 Reflection transformation 10 Regression analysis 217, 220, 236 and least squares .199 Regression line .237 degrees of freedom for 241 Relaxation Methods for linear equations .43 Relaxation parameter defined 44 example of 44 Residual error in least squares 176 Richardson extrapolation 99 or Romberg quadrature .111 Right hand rule .11 Romberg quadrature .111 compared to other formulae……… 112 including Richardson extrapolation 112 Roots of a polynomial 56 Rotation matrices .12 Rotational Transformation 11 Roundoff error 25 Rule of signs 57 Runga-Kutta algorithm for systems of ODEs 138 Runga-Kutta method 126 applied to boundary value problems 141 S Sample set and probability theory………… 200 Sample space 200 Scalar product definition .5 Secant iteration scheme for polynomials 63 Self-adjoint .6 Shift operator 99 266 Significance level of 230 meaning of 230 of a correlation coefficient 240 Similarity transformation 48 definition of 50 Simplex method 190 Simpson's rule and Runge-Kutta 143 as a hyper-efficient quadrature formula…………………….104 compared to other quadrature formulae 112 degree of precision for 104 derived 104 running form of 105 Singular matrices 33 Skewness 212 of a function 212 of chi-square 227 of the normal curve 218 of the t-distribution 226 Splines 75 specific example of 77 Standard deviation and the correlation coefficient 239 defined 212 of the mean 225 of the normal curve 218 Standard error of estimate 218 Statistics Bose-Einstein 210 Fermi-Dirac 211 Maxwell-Boltzmann 210 Steepest descent for non-linear least squares 184 Step size control of for ODE 130 Sterling's formula for factorials 207 Students's t-Test 233 (see t-test) Symmetric matrix Synthetic Division 57 recurrence relations for 58 Numerical Methods and Data Analysis T t-statistic defined 225 for large N .230 t-test defined 231 for correlation coefficients 242 for large N .231 Taylor series and non-linear least squares 183 and Richardson extrapolation .99 and Runga-Kutta method 126 Tensor densities 14 Tensor product for least square normal equations 162 Topology Trace of a matrix .6 transformational invarience of 49 Transformation- rotational .11 Transpose of the matrix 10 Trapezoid rule 102 and Runge-Kutta .143 compared to other quadrature formulae112 general form 111 Treatment and experimental design .249 Treatment level for an experiment 249 Tri-diagonal equations .38 for cubic splines 77 Trials and experimantal design 252 symbology for………………………252 Triangular matrices for factorization 34 Triangular system of linear equations .30 Trigonometric functions orthogonality of 92 Truncation error .26 estimate and 
reduction for ODE 131  estimate for differential equations 130  for numerical differentiation 99
U  Unit matrix 41  Unitary matrix
V  Vandermode determinant 65  Variance 211, 212, 220  analysis of 242  for a single observation 227  of the t-distribution 226  of a function 212  of a single observation 220  of chi-square 227  of the normal curve 218  of the F-statistic 230  of the mean 220, 225  Variances and Chi-squared 227  first order 238  of deviations from the mean 238  Vector operators 19  Vector product definition  Vector space for least squares 179  Vectors contravariant 16  Venn diagram for combined probability 202  Volterra equations as Fredholm equations 150  defined 146  solution by iteration 153  solution of Type 150  solution of Type 150
W  Weight function 86  for Chebyschev polynomials 90  for Gaussian quadrature 109  for Gegenbauer polynomials 90  for Hermite polynomials 89  for Laguerre polynomials 88  for Legendre polynomials 87  Jacobi polynomials 90  Weights for Gaussian quadrature 108
Y  Yield for an experiment 249
Z  Zeno's Paradox 197


