Numerical Methods in Engineering with Python 3, 3rd Edition


More information: www.cambridge.org/9781107033856

Numerical Methods in Engineering with Python 3

This book is an introduction to numerical methods for students in engineering. It covers the usual topics found in an engineering course: solution of equations, interpolation and data fitting, solution of differential equations, eigenvalue problems, and optimization. The algorithms are implemented in Python 3, a high-level programming language that rivals MATLAB in readability and ease of use. All methods include programs showing how the computer code is utilized in the solution of problems.

The book is based on Numerical Methods in Engineering with Python, which used Python 2. Apart from the migration from Python 2 to Python 3, the major change in this new text is the introduction of the Python plotting package Matplotlib.

Jaan Kiusalaas is a Professor Emeritus in the Department of Engineering Science and Mechanics at Pennsylvania State University. He has taught computer methods, including finite element and boundary element methods, for more than 30 years. He is also the co-author or author of four books – Engineering Mechanics: Statics; Engineering Mechanics: Dynamics; Mechanics of Materials; Numerical Methods in Engineering with MATLAB (2nd edition) – and two previous editions of Numerical Methods in Engineering with Python.

NUMERICAL METHODS IN ENGINEERING WITH PYTHON 3
Jaan Kiusalaas
The Pennsylvania State University

Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Mexico City
32 Avenue of the Americas, New York, NY 10013-2473, USA
www.cambridge.org
Information on this title: www.cambridge.org/9781107033856

© Jaan Kiusalaas 2013

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2013
Printed in the
United States of America.

A catalog record for this publication is available from the British Library.

Library of Congress Cataloging in Publication data
Kiusalaas, Jaan.
Numerical methods in engineering with Python / Jaan Kiusalaas. pages cm
Includes bibliographical references and index.
ISBN 978-1-107-03385-6
Engineering mathematics – Data processing. Python (Computer program language) I. Title.
TA345.K58 2013  620.00285′5133–dc23  2012036775

ISBN 978-1-107-03385-6 Hardback

Additional resources for this publication at www.cambridge.org/kiusalaaspython

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents

Preface  ix
1 Introduction to Python
  1.1 General Information
  1.2 Core Python
  1.3 Functions and Modules  16
  1.4 Mathematics Modules  18
  1.5 numpy Module  20
  1.6 Plotting with matplotlib.pyplot  25
  1.7 Scoping of Variables  28
  1.8 Writing and Running Programs  29
2 Systems of Linear Algebraic Equations  31
  2.1 Introduction  31
  2.2 Gauss Elimination Method  37
  2.3 LU Decomposition Methods  44
  Problem Set 2.1  55
  2.4 Symmetric and Banded Coefficient Matrices  59
  2.5 Pivoting  69
  Problem Set 2.2  78
  ∗2.6 Matrix Inversion  84
  ∗2.7 Iterative Methods  87
  Problem Set 2.3  98
  2.8 Other Methods  102
3 Interpolation and Curve Fitting  104
  3.1 Introduction  104
  3.2 Polynomial Interpolation  105
  3.3 Interpolation with Cubic Spline  120
  Problem Set 3.1  126
  3.4 Least-Squares Fit  129
  Problem Set 3.2  141
4 Roots of Equations  145
  4.1 Introduction  145
  4.2 Incremental Search Method  146
  4.3 Method of Bisection  148
  4.4 Methods Based on Linear Interpolation  151
  4.5 Newton-Raphson Method  156
  4.6 Systems of Equations  161
  Problem Set 4.1  166
  ∗4.7 Zeros of Polynomials  173
  Problem Set 4.2  180
  4.8 Other Methods  182
5 Numerical Differentiation  183
  5.1 Introduction  183
  5.2 Finite Difference
Approximations  183
  5.3 Richardson Extrapolation  188
  5.4 Derivatives by Interpolation  191
  Problem Set 5.1  195
6 Numerical Integration  199
  6.1 Introduction  199
  6.2 Newton-Cotes Formulas  200
  6.3 Romberg Integration  207
  Problem Set 6.1  212
  6.4 Gaussian Integration  216
  Problem Set 6.2  230
  ∗6.5 Multiple Integrals  232
  Problem Set 6.3  243
7 Initial Value Problems  246
  7.1 Introduction  246
  7.2 Euler's Method  247
  7.3 Runge-Kutta Methods  252
  Problem Set 7.1  263
  7.4 Stability and Stiffness  268
  7.5 Adaptive Runge-Kutta Method  271
  7.6 Bulirsch-Stoer Method  280
  Problem Set 7.2  287
  7.7 Other Methods  292
8 Two-Point Boundary Value Problems  293
  8.1 Introduction  293
  8.2 Shooting Method  294
  Problem Set 8.1  304
  8.3 Finite Difference Method  307
  Problem Set 8.2  317
9 Symmetric Matrix Eigenvalue Problems  321
  9.1 Introduction  321
  9.2 Jacobi Method  324
  9.3 Power and Inverse Power Methods  336
  Problem Set 9.1  345
  9.4 Householder Reduction to Tridiagonal Form  351
  9.5 Eigenvalues of Symmetric Tridiagonal Matrices  359
  Problem Set 9.2  368
  9.6 Other Methods  373
10 Introduction to Optimization  374
  10.1 Introduction  374
  10.2 Minimization Along a Line  376
  10.3 Powell's Method  382
  10.4 Downhill Simplex Method  392
  Problem Set 10.1  399
Appendices  407
  A1 Taylor Series  407
  A2 Matrix Algebra  410
List of Program Modules (by Chapter)  417
Index  421

Appendices

∂²f/∂x² = ((x² + y²) − x(2x)) / (x² + y²)² = (−x² + y²) / (x² + y²)²

∂²f/∂y² = (x² − y²) / (x² + y²)²

∂²f/∂x∂y = ∂²f/∂y∂x = −2xy / (x² + y²)²

              1       | −x² + y²   −2xy    |
H(x, y) = ----------- |                    |
          (x² + y²)²  | −2xy       x² − y² |

           | −0.12  0.16 |
H(−2, 1) = |             |
           |  0.16  0.12 |

A2 Matrix Algebra

A matrix is a rectangular array of numbers. The size of a matrix is determined by the number of rows and columns, also called the dimensions of the matrix. Thus a matrix of m rows and n columns is said to have the size m × n (the number of rows is always listed first). A particularly important matrix is the square matrix, which has the same number of rows and columns. An array of numbers arranged in a single column is called a
column vector, or simply a vector. If the numbers are set out in a row, the term row vector is used. Thus a column vector is a matrix of dimensions n × 1, and a row vector can be viewed as a matrix of dimensions 1 × n.

We denote matrices by boldfaced uppercase letters. For vectors we use boldface lowercase letters. Here are examples of the notation:

    | A11  A12  A13 |        | b1 |
A = | A21  A22  A23 |    b = | b2 |        (A9)
    | A31  A32  A33 |        | b3 |

Indices of the elements of a matrix are displayed in the same order as its dimensions: the row number comes first, followed by the column number. Only one index is needed for the elements of a vector.

Transpose

The transpose of a matrix A is denoted by A^T and defined as A^T_ij = A_ji. The transpose operation thus interchanges the rows and columns of the matrix. If applied to vectors, it turns a column vector into a row vector and vice versa. For example, transposing A and b in Eq. (A9), we get

      | A11  A21  A31 |
A^T = | A12  A22  A32 |    b^T = [ b1  b2  b3 ]
      | A13  A23  A33 |

An n × n matrix is said to be symmetric if A^T = A. This means that the elements in the upper triangular portion (above the diagonal connecting A11 and Ann) of a symmetric matrix are mirrored in the lower triangular portion.

Addition

The sum C = A + B of two m × n matrices A and B is defined as

C_ij = A_ij + B_ij,    i = 1, 2, …, m;  j = 1, 2, …, n        (A10)

Thus the elements of C are obtained by adding elements of A to the elements of B. Note that addition is defined only for matrices that have the same dimensions.

Vector Products

The dot or inner product c = a · b of the vectors a and b, each of size m, is defined as the scalar

c = Σ_{k=1}^{m} a_k b_k        (A11)

It can also be written in the form c = a^T b. In NumPy the function for the dot product is dot(a,b) or inner(a,b).

The outer product C = a ⊗ b is defined as the matrix

C_ij = a_i b_j

An alternative notation is C = a b^T. The NumPy function for the outer product is outer(a,b).

Array Products

The matrix product C = AB of an l × m matrix A
and an m × n matrix B is defined by

C_ij = Σ_{k=1}^{m} A_ik B_kj,    i = 1, 2, …, l;  j = 1, 2, …, n        (A12)

The definition requires the number of columns in A (the dimension m) to be equal to the number of rows in B. The matrix product can also be defined in terms of the dot product. Representing the ith row of A as the vector a_i and the jth column of B as the vector b_j, we have

     | a1·b1  a1·b2  ···  a1·bn |
     | a2·b1  a2·b2  ···  a2·bn |
AB = |           ···            |        (A13)
     | al·b1  al·b2  ···  al·bn |

NumPy treats the matrix product as the dot product for arrays, so the function dot(A,B) returns the matrix product of A and B.

NumPy defines the inner product of matrices A and B to be C = AB^T. Equation (A13) still applies, but now b_j represents the jth row of B.

NumPy's definition of the outer product of matrices A (size k × l) and B (size m × n) is as follows. Let a_i be the ith row of A, and let b_j represent the jth row of B. Then the outer product of A and B is

        | a1⊗b1  a1⊗b2  ···  a1⊗bm |
        | a2⊗b1  a2⊗b2  ···  a2⊗bm |
A ⊗ B = |           ···            |        (A14)
        | ak⊗b1  ak⊗b2  ···  ak⊗bm |

The submatrices a_i ⊗ b_j are of dimensions l × n. As you can see, the size of the outer product is much larger than either A or B.

Identity Matrix

A square matrix of special importance is the identity or unit matrix

    | 1  0  0  ···  0 |
    | 0  1  0  ···  0 |
I = | 0  0  1  ···  0 |        (A15)
    |       ···       |
    | 0  0  0  ···  1 |

It has the property AI = IA = A.

Inverse

The inverse of an n × n matrix A, denoted by A^(−1), is defined to be an n × n matrix that has the property

A^(−1) A = A A^(−1) = I        (A16)

Determinant

The determinant of a square matrix A is a scalar denoted by |A| or det(A). There is no concise definition of the determinant for a matrix of arbitrary size. We start with the determinant of a 2 × 2 matrix, which is defined as

| A11  A12 |
|          | = A11 A22 − A12 A21        (A17)
| A21  A22 |

The determinant of a 3 × 3 matrix is then defined as

| A11  A12  A13 |
|               |       | A22  A23 |       | A21  A23 |       | A21  A22 |
| A21  A22  A23 | = A11 |          | − A12 |          | + A13 |          |
|               |       | A32  A33 |       | A31  A33 |       | A31  A32 |
| A31  A32  A33 |
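These two definitions are easy to sanity-check numerically. The following is a small NumPy sketch (not code from the book; numpy is assumed to be available) that expands a 3 × 3 determinant on the first row, exactly as in the display above, and compares the result with numpy.linalg.det, using the 3 × 3 matrix that appears later in Example A4:

```python
import numpy as np

def det2(A):
    # 2 x 2 determinant, Eq. (A17): A11*A22 - A12*A21
    return A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]

def det3(A):
    # 3 x 3 determinant expanded on the first row, as in the display above
    return (  A[0, 0] * det2(A[np.ix_([1, 2], [1, 2])])
            - A[0, 1] * det2(A[np.ix_([1, 2], [0, 2])])
            + A[0, 2] * det2(A[np.ix_([1, 2], [0, 1])]) )

A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det3(A))           # 2.0
print(np.linalg.det(A))  # also 2.0, up to roundoff
```

The np.ix_ helper extracts the 2 × 2 submatrix left after deleting the first row and the appropriate column, which is exactly the minor appearing in each term of the expansion.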
Having established the pattern, we can now define the determinant of an n × n matrix in terms of the determinant of an (n − 1) × (n − 1) matrix:

|A| = Σ_{k=1}^{n} (−1)^(k+1) A_1k M_1k        (A18)

where M_ik is the determinant of the (n − 1) × (n − 1) matrix obtained by deleting the ith row and kth column of A. The term (−1)^(k+i) M_ik is called a cofactor of A_ik.

Equation (A18) is known as Laplace's development of the determinant on the first row of A. Actually, Laplace's development can take place on any convenient row. Choosing the ith row, we have

|A| = Σ_{k=1}^{n} (−1)^(k+i) A_ik M_ik        (A19)

The matrix A is said to be singular if |A| = 0.

Positive Definiteness

An n × n matrix A is said to be positive definite if

x^T A x > 0        (A20)

for all nonvanishing vectors x. It can be shown that a matrix is positive definite if the determinants of all its leading minors are positive. The leading minors of A are the n square matrices

| A11  A12  ···  A1k |
| A21  A22  ···  A2k |
|        ···         |  ,    k = 1, 2, …, n
| Ak1  Ak2  ···  Akk |

Therefore, positive definiteness requires that

          | A11  A12 |        | A11  A12  A13 |
A11 > 0,  |          | > 0,   | A21  A22  A23 | > 0,  …,  |A| > 0        (A21)
          | A21  A22 |        | A31  A32  A33 |

Useful Theorems

We list without proof a few theorems that are used in the main body of the text. Most proofs are easy and could be attempted as exercises in matrix algebra.

(AB)^T = B^T A^T        (A22a)
(AB)^(−1) = B^(−1) A^(−1)        (A22b)
|A^T| = |A|        (A22c)
|AB| = |A| |B|        (A22d)
if C = A^T B A, where B = B^T, then C = C^T        (A22e)

EXAMPLE A4
Letting

    | 1  2  3 |        |  1 |        |  8 |
A = | 1  2  1 |    u = |  6 |    v = |  0 |
    | 0  1  2 |        | −2 |        | −3 |

compute u + v, u · v, Av, and u^T Av.

Solution

u + v = [1 + 8, 6 + 0, −2 − 3]^T = [9, 6, −5]^T

u · v = 1(8) + 6(0) + (−2)(−3) = 14

Av = [a1·v, a2·v, a3·v]^T
   = [1(8) + 2(0) + 3(−3), 1(8) + 2(0) + 1(−3), 0(8) + 1(0) + 2(−3)]^T
   = [−1, 5, −6]^T

u^T Av = u · (Av) = 1(−1) + 6(5) + (−2)(−6) = 41

EXAMPLE A5
Compute |A|, where A is given in Example A4. Is A positive definite?
Solution
Laplace's development of the determinant on the first row yields

        | 2  1 |     | 1  1 |     | 1  2 |
|A| = 1 |       | − 2 |      | + 3 |      | = 1(3) − 2(2) + 3(1) = 2
        | 1  2 |     | 0  2 |     | 0  1 |

Development on the third row is somewhat easier because of the presence of the zero element:

        | 2  3 |     | 1  3 |     | 1  2 |
|A| = 0 |       | − 1 |      | + 2 |      | = 0(−4) − 1(−2) + 2(0) = 2
        | 2  1 |     | 1  1 |     | 1  2 |

To verify positive definiteness, we evaluate the determinants of the leading minors:

A11 = 1 > 0        O.K.

| A11  A12 |   | 1  2 |
|          | = |      | = 0        Not O.K.
| A21  A22 |   | 1  2 |

A is not positive definite.

EXAMPLE A6
Evaluate the matrix product AB, where A is given in Example A4 and

    | −4   1 |
B = |  1  −4 |
    |  2  −2 |

Solution

     | a1·b1  a1·b2 |   | 1(−4) + 2(1) + 3(2)   1(1) + 2(−4) + 3(−2) |   | 4  −13 |
AB = | a2·b1  a2·b2 | = | 1(−4) + 2(1) + 1(2)   1(1) + 2(−4) + 1(−2) | = | 0   −9 |
     | a3·b1  a3·b2 |   | 0(−4) + 1(1) + 2(2)   0(1) + 1(−4) + 2(−2) |   | 5   −8 |

EXAMPLE A7
Compute A ⊗ b, where

A = |  5  −2 |    b = | 1 |
    | −3   4 |        | 3 |

Solution

a1 ⊗ b = |  5  15 |        a2 ⊗ b = | −3  −9 |
         | −2  −6 |                 |  4  12 |

            |  5  15 |
            | −2  −6 |
∴  A ⊗ b =  | −3  −9 |
            |  4  12 |

List of Program Modules (by Chapter)

Chapter 1
  1.7  error         – Error-handling routine

Chapter 2
  2.2  gaussElimin   – Gauss elimination
  2.3  LUdecomp      – LU decomposition
  2.3  choleski      – Choleski decomposition
  2.4  LUdecomp3     – LU decomposition of tridiagonal matrices
  2.4  LUdecomp5     – LU decomposition of pentadiagonal matrices
  2.5  swap          – Interchanges rows or columns of a matrix
  2.5  gaussPivot    – Gauss elimination with row pivoting
  2.5  LUpivot       – LU decomposition with row pivoting
  2.7  gaussSeidel   – Gauss-Seidel method with relaxation
  2.7  conjGrad      – Conjugate gradient method

Chapter 3
  3.2  newtonPoly    – Newton's method of polynomial interpolation
  3.2  neville       – Neville's method of polynomial interpolation
  3.2  rational      – Rational function interpolation
  3.3  cubicSpline   – Cubic spline interpolation
  3.4  polyFit       – Polynomial curve fitting
  3.4  plotPoly      – Plots data points and the fitting polynomial

Chapter 4
  4.2  rootsearch    – Brackets a root of an equation
  4.3  bisection     – Method of bisection
  4.4  ridder        – Ridder's method
  4.5  newtonRaphson – Newton-Raphson method
  4.6  newtonRaphson2 – Newton-Raphson method for systems of equations
  4.7  evalPoly      – Evaluates a polynomial and its derivatives
  4.7  polyRoots     – Laguerre's method for roots of polynomials

Chapter 6
  6.2  trapezoid     – Recursive trapezoidal rule
  6.3  romberg       – Romberg integration
  6.4  gaussNodes    – Nodes and weights for Gauss-Legendre quadrature
  6.4  gaussQuad     – Gauss-Legendre quadrature
  6.5  gaussQuad2    – Gauss-Legendre quadrature over a quadrilateral
  6.5  triangleQuad  – Gauss-Legendre quadrature over a triangle

Chapter 7
  7.2  euler         – Euler method for solution of initial value problems
  7.2  printSoln     – Prints solution of initial value problems in tabular form
  7.3  run_kut4      – 4th-order Runge-Kutta method
  7.5  run_kut5      – Adaptive (5th-order) Runge-Kutta method
  7.6  midpoint      – Midpoint method with Richardson extrapolation
  7.6  bulStoer      – Simplified Bulirsch-Stoer method

Chapter 8
  8.2  linInterp     – Linear interpolation
  8.2  example8      – Shooting method example for second-order differential eqs.
  8.2  example8      – Shooting method example for third-order linear diff. eqs.
  8.2  example8      – Shooting method example for fourth-order differential eqs.
  8.2  example8      – Shooting method example for fourth-order differential eqs.
  8.3  example8      – Finite difference example for second-order linear diff. eqs.
  8.3  example8      – Finite difference example for second-order differential eqs.
  8.3  example8      – Finite difference example for fourth-order linear diff. eqs.

Chapter 9
  9.2  jacobi        – Jacobi's method
  9.2  sortJacobi    – Sorts eigenvectors in ascending order of eigenvalues
  9.2  stdForm       – Transforms eigenvalue problem into standard form
  9.3  inversePower  – Inverse power method with eigenvalue shifting
  9.3  inversePower5 – Inverse power method for pentadiagonal matrices
  9.4  householder   – Householder reduction to tridiagonal form
  9.5  sturmSeq      – Sturm sequence for tridiagonal matrices
  9.5  gerschgorin   – Computes global bounds on eigenvalues
  9.5  lamRange      – Brackets m smallest eigenvalues of a tridiagonal matrix
  9.5  eigenvals3    – Finds m smallest eigenvalues of a tridiagonal matrix
  9.5  inversePower3 – Inverse power method for tridiagonal matrices

Chapter 10
  10.2 goldSearch    – Golden section search for the minimum of a function
  10.3 powell        – Powell's method of minimization
  10.4 downhill      – Downhill simplex method of minimization

Index

adaptive Runge–Kutta method, 271
arithmetic operators, in Python
arrays, in Python, 20
augmented assignment operators, in Python
augmented coefficient matrix, 32
banded matrix, 59
bisection method, for equation root, 148
boundary value problems, 293
  shooting method, 294
  finite difference method, 307
Brent's method, 183
Bulirsch–Stoer method, 280
bulStoer.py, 284
choleski.py, 50
Choleski's decomposition, 48
cmath module, 19
comparison operators, in Python
conditionals, in Python
conjGrad.py, 91
conjugate gradient method, 89
continuation character, in Python
cubic spline, 120
deflation of polynomials, 175
diagonal dominance, 70
docstring, in Python, 29
Doolittle's decomposition, 45
Dormand–Prince coefficients, 273
downhill simplex method, 392
eigenvals3.py, 364
eigenvalue problems, 322
  eigenvalues of tridiagonal matrices, 359
  Householder reduction, 351
  inverse power method, 336
  Jacobi method, 329
  power method, 338
elementary operations, linear algebra, 34
equivalent linear equation, 34
error control, in Python, 15
euler.py, 248
Euler's method, 247
evalPoly.py, 175
evaluation of polynomials, 173
false position method, 152
finite difference approximations, 183
finite elements, 232
functions, in Python, 16
gaussElimin.py, 37
Gauss elimination method, 41
  with scaled row pivoting, 71
Gaussian integration, 216
  abscissas/weights for Gaussian quadratures, 221
  orthogonal polynomials, 217
Gauss–Jordan elimination, 36
Gauss–Legendre quadrature over quadrilateral element, 233
gaussNodes.py, 224
gaussPivot.py, 72
gaussQuad.py, 225
gaussQuad2.py, 235
gaussSeidel.py, 89
Gauss–Seidel method, 87
gerschgorin.py, 363
Gerschgorin's theorem, 361
golden section search, 377
goldSearch.py, 378
householder.py, 356
Householder reduction to tridiagonal form, 351
  accumulated transformation matrix, 355
  Householder matrix, 352
  Householder reduction of symmetric matrix, 352–359
Idle (Python code editor)
ill-conditioning, matrices, 33
incremental search method, roots of equations, 146
initial value problems, 246
integration order, 224
interpolation/curve fitting, 104
interval halving method. See bisection method
inversePower.py, 339
inverse power method, 336
inversePower3.py, 366
jacobi.py, 326–327
Jacobian matrix, 234
Jacobi method, 324
Jenkins–Traub algorithm, 182
knots of spline, 120
Lagrange's method, of interpolation, 104
Laguerre's method, for roots of polynomials, 176
lamRange.py, 363
least-squares fit, 129
linear algebra module, in Python, 24
linear algebraic equations, 31
linear regression, 130
linear systems, 30
linInterp.py, 295
lists, in Python
loops, in Python
LR algorithm, 373
LUdecomp.py, 47
LUdecomp3.py, 61
LUdecomp5.py, 66
LU decomposition methods, 44
  Choleski's decomposition, 48
  Doolittle's decomposition, 45
LUpivot.py, 73
mathematical functions, in Python, 11
math module, 18
MATLAB
matplotlib.pyplot module, 25
matrix algebra, 410
matrix inversion, 84
midpoint method, 280
midpoint.py, 282
minimization along line, 376
  bracketing, 377
  golden section search, 377
modules, in Python, 18
multiple integrals, 232
  Gauss–Legendre quadrature over quadrilateral element, 233
  Gauss–Legendre quadrature over triangular element, 239
multistep methods, for initial value problems, 292
namespace, in Python, 28
natural cubic spline, 120
Nelder–Mead method, 392
neville.py, 110
Neville's method, 109
Newton–Cotes formulas, 200
  Simpson's rules, 204
  trapezoidal rule, 200
newtonPoly.py, 108
newtonRaphson.py, 158
newtonRaphson2.py, 162
Newton–Raphson method, 156, 161
norm of matrix, 33
numerical instability, 260
numerical integration, 199
numpy module, 20
operators, in Python
  arithmetic
  comparison
optimization, 374
orthogonal polynomials, 217
pivoting, 69
plotPoly.py, 133
polyFit.py, 132
polynomial fit, 131
polynomial interpolation, 104
  Lagrange's method, 104
  Neville's method, 109
  Newton's method, 106
polynomials, zeroes of, 173
  deflation of polynomials, 175
  evaluation of polynomials, 173
  Laguerre's method, 176
polyRoots.py, 177
powell.py, 385
Powell's method, 382
printing, in Python, 12
printSoln.py, 249
QR algorithm, 373
quadrature. See numerical integration
rational function interpolation, 115
reading input, in Python, 11
relaxation factor, 88
Richardson extrapolation, 188, 281
Ridder's method, 152
ridder.py, 153
romberg.py, 209
Romberg integration, 207
rootsearch.py, 147
roots of equations, 145
  bisection, 148
  incremental search, 146
  Newton–Raphson method, 156, 161
  Ridder's method, 152
Runge–Kutta methods, 252
  fifth-order adaptive, 271
  fourth-order, 254
  second-order, 253
run_kut4.py, 255
run_kut5.py, 274
scaled row pivoting, 71
shape functions, 234
shooting method, 294
  higher-order equations, 299
  second-order equation, 294
Shur's factorization, 373
similarity transformation, 325
Simpson's 1/3 rule, 204
Simpson's 3/8 rule, 205
slicing operator, Python
sortJacobi.py, 331
stability of Euler's method, 268
stiffness, in initial value problems, 267–268
stdForm.py, 332
strings, in Python
Sturm sequence, 359
sturmSeq.py, 359
swap.py, 72
symmetric/banded coefficient matrices, 59
  symmetric coefficient matrix, 62
  symmetric/pentadiagonal matrix, 63
  tridiagonal matrix, 60
synthetic division, 175
trapezoid.py, 203
trapezoidal rule, 200
triangleQuad.py, 241
tridiagonal coefficient matrix, 60
tuples, in Python
two-point boundary value problems, 293
  finite difference method, 307
  shooting method, 294
type conversion, in Python, 10
weighted linear regression, 134
writing/running programs, in Python, 29
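The hand calculations in Examples A4 through A7 can be reproduced with the NumPy functions described in Appendix A2. The following is a short sketch (not code from the book; numpy is assumed to be installed), using the matrix A and vectors u, v of Example A4, the matrix B of Example A6, and the 2 × 2 matrix and vector of Example A7:

```python
import numpy as np

# Example A4 data
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
u = np.array([1.0, 6.0, -2.0])
v = np.array([8.0, 0.0, -3.0])

# Example A4: u + v, u.v, Av, u^T A v
print(u + v)                     # [9, 6, -5]
print(np.dot(u, v))              # 14.0
print(np.dot(A, v))              # [-1, 5, -6]
print(np.dot(u, np.dot(A, v)))   # 41.0

# Example A5: determinant and leading-minor test for positive definiteness
print(np.linalg.det(A))          # about 2.0, so A is nonsingular
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
print(minors)                    # the 2 x 2 leading minor is 0: not positive definite

# Example A6: matrix product AB
B = np.array([[-4.0,  1.0],
              [ 1.0, -4.0],
              [ 2.0, -2.0]])
print(np.dot(A, B))              # rows [4, -13], [0, -9], [5, -8]

# Example A7: outer product of each row of A2 with b, blocks stacked vertically
A2 = np.array([[ 5.0, -2.0],
               [-3.0,  4.0]])
b = np.array([1.0, 3.0])
print(np.vstack([np.outer(row, b) for row in A2]))
# rows [5, 15], [-2, -6], [-3, -9], [4, 12]
```

Note that np.outer applied to a whole matrix flattens it first, so the block layout of Eq. (A14) is built here row by row with np.vstack.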

Posted: 09/11/2019, 09:42
