Numerical Recipes in Fortran 77: The Art of Scientific Computing (Volume 1 of Fortran Numerical Recipes) – Part 1
NUMERICAL RECIPES IN FORTRAN 77: The Art of Scientific Computing
Second Edition
Volume 1 of Fortran Numerical Recipes

William H. Press, Harvard-Smithsonian Center for Astrophysics
Saul A. Teukolsky, Department of Physics, Cornell University
William T. Vetterling, Polaroid Corporation
Brian P. Flannery, EXXON Research and Engineering Company

Sample page from NUMERICAL RECIPES IN FORTRAN 77: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43064-X). Copyright (C) 1986-1992 by Cambridge University Press. Programs Copyright (C) 1986-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit the website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

Published by the Press Syndicate of the University of Cambridge: The Pitt Building, Trumpington Street, Cambridge CB2 1RP; 40 West 20th Street, New York, NY 10011-4211, USA; 10 Stamford Road, Oakleigh, Melbourne 3166, Australia.

Some sections of this book were originally published, in different form, in Computers in Physics magazine, Copyright (C) American Institute of Physics, 1988–1992. First Edition originally published 1986; Second Edition originally published 1992 as Numerical Recipes in FORTRAN: The Art of Scientific Computing. Reprinted with corrections, 1993, 1994, 1995. Reprinted with corrections, 1996, 1997, as Numerical Recipes in Fortran 77: The Art of Scientific Computing (Vol. 1 of Fortran Numerical Recipes). This reprinting is corrected to software version 2.08. Printed in the United States of America. Typeset in TeX.

Without an additional license to use the contained software, this book is intended as a text and reference book, for reading purposes only. A free license for limited use of the software by the individual owner of a copy of this book who personally types one or more routines into a single computer is granted under terms described on p. xxi. See the section "License Information" (pp. xx–xxiii) for information on obtaining more general licenses at low cost. Machine-readable media containing the software in this book, with included licenses for use on a single screen, are available from Cambridge University Press. See the order form at the back of the book, email to "orders@cup.org" (North America) or "trade@cup.cam.ac.uk" (rest of world), or write to Cambridge University Press, 110 Midland Avenue, Port Chester, NY 10573 (USA), for further information. The software may also be downloaded, with immediate purchase of a license also possible, from the Numerical Recipes Software Web Site (http://www.nr.com). Unlicensed transfer of Numerical Recipes programs to any other format, or to any computer except one that is specifically licensed, is strictly prohibited. Technical questions, corrections, and requests for information should be addressed to Numerical Recipes Software, P.O. Box 243, Cambridge, MA 02238 (USA), email "info@nr.com", or fax 781-863-1739.

Library of Congress Cataloging in Publication Data: Numerical recipes in Fortran 77 : the art of scientific computing / William H. Press [et al.]. 2nd ed. Includes bibliographical references and index. ISBN 0-521-43064-X. 1. Numerical analysis - Computer programs. 2. Science - Mathematics - Computer programs. 3. FORTRAN (Computer program language). I. Press, William H. QA297.N866 1992 519.4'0285'53-dc20 92-8876. A catalog record for this book is available from the British Library.

ISBN 0 521 43064 X   Volume 1 (this book)
ISBN 0 521 57439     Volume 2
ISBN 0 521 43721     Example book in FORTRAN
ISBN 0 521 57440     FORTRAN diskette (IBM 3.5")
ISBN 0 521 57608     CDROM (IBM PC/Macintosh)
ISBN 0 521 57607     CDROM (UNIX)

Copyright (C) Cambridge University Press 1986, 1992, except for §13.10, which is placed into the public domain, and except for all other computer programs and procedures, which are Copyright (C) Numerical Recipes Software 1986, 1992, 1997. All Rights Reserved.

Contents

Plan of the Two-Volume Edition
Preface to the Second Edition
Preface to the First Edition
License Information
Computer Programs by Chapter and Section

1. Preliminaries: 1.0 Introduction; 1.1 Program Organization and Control Structures; 1.2 Error, Accuracy, and Stability
2. Solution of Linear Algebraic Equations: 2.0 Introduction; 2.1 Gauss-Jordan Elimination; 2.2 Gaussian Elimination with Backsubstitution; 2.3 LU Decomposition and Its Applications; 2.4 Tridiagonal and Band Diagonal Systems of Equations; 2.5 Iterative Improvement of a Solution to Linear Equations; 2.6 Singular Value Decomposition; 2.7 Sparse Linear Systems; 2.8 Vandermonde Matrices and Toeplitz Matrices; 2.9 Cholesky Decomposition; 2.10 QR Decomposition; 2.11 Is Matrix Inversion an N³ Process?
3. Interpolation and Extrapolation: 3.0 Introduction; 3.1 Polynomial Interpolation and Extrapolation; 3.2 Rational Function Interpolation and Extrapolation; 3.3 Cubic Spline Interpolation; 3.4 How to Search an Ordered Table; 3.5 Coefficients of the Interpolating Polynomial; 3.6 Interpolation in Two or More Dimensions
4. Integration of Functions: 4.0 Introduction; 4.1 Classical Formulas for Equally Spaced Abscissas; 4.2 Elementary Algorithms; 4.3 Romberg Integration; 4.4 Improper Integrals; 4.5 Gaussian Quadratures and Orthogonal Polynomials; 4.6 Multidimensional Integrals
5. Evaluation of Functions: 5.0 Introduction; 5.1 Series and Their Convergence; 5.2 Evaluation of Continued Fractions; 5.3 Polynomials and Rational Functions; 5.4 Complex Arithmetic; 5.5 Recurrence Relations and Clenshaw's Recurrence Formula; 5.6 Quadratic and Cubic Equations; 5.7 Numerical Derivatives; 5.8 Chebyshev Approximation; 5.9 Derivatives or Integrals of a Chebyshev-approximated Function; 5.10 Polynomial Approximation from Chebyshev Coefficients; 5.11 Economization of Power Series; 5.12 Padé Approximants; 5.13 Rational Chebyshev Approximation; 5.14 Evaluation of Functions by Path Integration
6. Special Functions: 6.0 Introduction; 6.1 Gamma Function, Beta Function, Factorials, Binomial Coefficients; 6.2 Incomplete Gamma Function, Error Function, Chi-Square Probability Function, Cumulative Poisson Function; 6.3 Exponential Integrals; 6.4 Incomplete Beta Function, Student's Distribution, F-Distribution, Cumulative Binomial Distribution; 6.5 Bessel Functions of Integer Order; 6.6 Modified Bessel Functions of Integer Order; 6.7 Bessel Functions of Fractional Order, Airy Functions, Spherical Bessel Functions; 6.8 Spherical Harmonics; 6.9 Fresnel Integrals, Cosine and Sine Integrals; 6.10 Dawson's Integral; 6.11 Elliptic Integrals and Jacobian Elliptic Functions; 6.12 Hypergeometric Functions
7. Random Numbers: 7.0 Introduction; 7.1 Uniform Deviates; 7.2 Transformation Method: Exponential and Normal Deviates; 7.3 Rejection Method: Gamma, Poisson, Binomial Deviates; 7.4 Generation of Random Bits; 7.5 Random Sequences Based on Data Encryption; 7.6 Simple Monte Carlo Integration; 7.7 Quasi- (that is, Sub-) Random Sequences; 7.8 Adaptive and Recursive Monte Carlo Methods
8. Sorting: 8.0 Introduction; 8.1 Straight Insertion and Shell's Method; 8.2 Quicksort; 8.3 Heapsort; 8.4 Indexing and Ranking; 8.5 Selecting the Mth Largest; 8.6 Determination of Equivalence Classes
9. Root Finding and Nonlinear Sets of Equations: 9.0 Introduction; 9.1 Bracketing and Bisection; 9.2 Secant Method, False Position Method, and Ridders' Method; 9.3 Van Wijngaarden–Dekker–Brent Method; 9.4 Newton-Raphson Method Using Derivative; 9.5 Roots of Polynomials; 9.6 Newton-Raphson Method for Nonlinear Systems of Equations; 9.7 Globally Convergent Methods for Nonlinear Systems of Equations
10. Minimization or Maximization of Functions: 10.0 Introduction; 10.1 Golden Section Search in One Dimension; 10.2 Parabolic Interpolation and Brent's Method in One Dimension; 10.3 One-Dimensional Search with First Derivatives; 10.4 Downhill Simplex Method in Multidimensions; 10.5 Direction Set (Powell's) Methods in Multidimensions; 10.6 Conjugate Gradient Methods in Multidimensions; 10.7 Variable Metric Methods in Multidimensions; 10.8 Linear Programming and the Simplex Method; 10.9 Simulated Annealing Methods
11. Eigensystems: 11.0 Introduction; 11.1 Jacobi Transformations of a Symmetric Matrix; 11.2 Reduction of a Symmetric Matrix to Tridiagonal Form: Givens and Householder Reductions; 11.3 Eigenvalues and Eigenvectors of a Tridiagonal Matrix; 11.4 Hermitian Matrices; 11.5 Reduction of a General Matrix to Hessenberg Form; 11.6 The QR Algorithm for Real Hessenberg Matrices; 11.7 Improving Eigenvalues and/or Finding Eigenvectors by Inverse Iteration
12. Fast Fourier Transform: 12.0 Introduction; 12.1 Fourier Transform of Discretely Sampled Data; 12.2 Fast Fourier Transform (FFT); 12.3 FFT of Real Functions, Sine and Cosine Transforms; 12.4 FFT in Two or More Dimensions; 12.5 Fourier Transforms of Real Data in Two and Three Dimensions; 12.6 External Storage or Memory-Local FFTs
13. Fourier and Spectral Applications: 13.0 Introduction; 13.1 Convolution and Deconvolution Using the FFT; 13.2 Correlation and Autocorrelation Using the FFT; 13.3 Optimal (Wiener) Filtering with the FFT; 13.4 Power Spectrum Estimation Using the FFT; 13.5 Digital Filtering in the Time Domain; 13.6 Linear Prediction and Linear Predictive Coding; 13.7 Power Spectrum Estimation by the Maximum Entropy (All Poles) Method; 13.8 Spectral Analysis of Unevenly Sampled Data; 13.9 Computing Fourier Integrals Using the FFT; 13.10 Wavelet Transforms; 13.11 Numerical Use of the Sampling Theorem
14. Statistical Description of Data: 14.0 Introduction; 14.1 Moments of a Distribution: Mean, Variance, Skewness, and So Forth; 14.2 Do Two Distributions Have the Same Means or Variances?; 14.3 Are Two Distributions Different?; 14.4 Contingency Table Analysis of Two Distributions; 14.5 Linear Correlation; 14.6 Nonparametric or Rank Correlation; 14.7 Do Two-Dimensional Distributions Differ?; 14.8 Savitzky-Golay Smoothing Filters
15. Modeling of Data: 15.0 Introduction; 15.1 Least Squares as a Maximum Likelihood Estimator; 15.2 Fitting Data to a Straight Line; 15.3 Straight-Line Data with Errors in Both Coordinates; 15.4 General Linear Least Squares; 15.5 Nonlinear Models; 15.6 Confidence Limits on Estimated Model Parameters; 15.7 Robust Estimation
16. Integration of Ordinary Differential Equations: 16.0 Introduction; 16.1 Runge-Kutta Method; 16.2 Adaptive Stepsize Control for Runge-Kutta; 16.3 Modified Midpoint Method; 16.4 Richardson Extrapolation and the Bulirsch-Stoer Method; 16.5 Second-Order Conservative Equations; 16.6 Stiff Sets of Equations; 16.7 Multistep, Multivalue, and Predictor-Corrector Methods
17. Two Point Boundary Value Problems: 17.0 Introduction; 17.1 The Shooting Method; 17.2 Shooting to a Fitting Point; 17.3 Relaxation Methods; 17.4 A Worked Example: Spheroidal Harmonics; 17.5 Automated Allocation of Mesh Points; 17.6 Handling Internal Boundary Conditions or Singular Points
18. Integral Equations and Inverse Theory: 18.0 Introduction; 18.1 Fredholm Equations of the Second Kind; 18.2 Volterra Equations; 18.3 Integral Equations with Singular Kernels; 18.4 Inverse Problems and the Use of A Priori Information; 18.5 Linear Regularization Methods; 18.6 Backus-Gilbert Method; 18.7 Maximum Entropy Image Restoration
19. Partial Differential Equations: 19.0 Introduction; 19.1 Flux-Conservative Initial Value Problems; 19.2 Diffusive Initial Value Problems; 19.3 Initial Value Problems in Multidimensions; 19.4 Fourier and Cyclic Reduction Methods for Boundary Value Problems; 19.5 Relaxation Methods for Boundary Value Problems; 19.6 Multigrid Methods for Boundary Value Problems
20. Less-Numerical Algorithms: 20.0 Introduction; 20.1 Diagnosing Machine Parameters; 20.2 Gray Codes; 20.3 Cyclic Redundancy and Other Checksums; 20.4 Huffman Coding and Compression of Data; 20.5 Arithmetic Coding; 20.6 Arithmetic at Arbitrary Precision

References for Volume 1
Index of Programs and Dependencies (Vol. 1)
General Index to Volumes 1 and 2

Contents of Volume 2: Numerical Recipes in Fortran 90

Preface to Volume 2; Foreword by Michael Metcalf; License Information; 21. Introduction to Fortran 90 Language Features; 22. Introduction to Parallel Programming; 23. Numerical Recipes Utilities for Fortran 90; Fortran 90 Code Chapters: B1 Preliminaries; B2 Solution of Linear Algebraic Equations; B3 Interpolation and Extrapolation; B4 Integration of Functions; B5 Evaluation of Functions; B6 Special Functions; B7 Random Numbers; B8 Sorting; B9 Root Finding and Nonlinear Sets of Equations; B10 Minimization or Maximization of Functions; B11 Eigensystems; B12 Fast Fourier Transform; B13 Fourier and Spectral Applications; B14 Statistical Description of Data; B15 Modeling of Data; B16 Integration of Ordinary Differential Equations; B17 Two Point Boundary Value Problems; B18 Integral Equations and Inverse Theory; B19 Partial Differential Equations; B20 Less-Numerical Algorithms; References for Volume 2; Appendices: C1 Listing of Utility Modules (nrtype and nrutil); C2 Listing of Explicit Interfaces; C3 Index of Programs and Dependencies (Vol. 2); General Index to Volumes 1 and 2
11.4 Hermitian Matrices

The complex analog of a real, symmetric matrix is a Hermitian matrix, satisfying equation (11.0.4). Jacobi transformations can be used to find eigenvalues and eigenvectors, as also can Householder reduction to tridiagonal form followed by QL iteration. Complex versions of the previous routines jacobi, tred2, and tqli are quite analogous to their real counterparts. For working routines, consult [1,2].

An alternative, using the routines in this book, is to convert the Hermitian problem to a real, symmetric one: If C = A + iB is a Hermitian matrix, then the n × n complex eigenvalue problem

    (A + iB) · (u + iv) = λ(u + iv)    (11.4.1)

is equivalent to the 2n × 2n real problem

    [ A  −B ]   [ u ]       [ u ]
    [ B   A ] · [ v ]  = λ  [ v ]    (11.4.2)

Note that the 2n × 2n matrix in (11.4.2) is symmetric: A^T = A and B^T = −B if C is Hermitian.

Corresponding to a given eigenvalue λ, the vector

    [ −v ]
    [  u ]    (11.4.3)

is also an eigenvector, as you can verify by writing out the two matrix equations implied by (11.4.2). Thus if λ1, λ2, ..., λn are the eigenvalues of C, then the 2n eigenvalues of the augmented problem (11.4.2) are λ1, λ1, λ2, λ2, ..., λn, λn; each, in other words, is repeated twice. The eigenvectors are pairs of the form u + iv and i(u + iv); that is, they are the same up to an inessential phase. Thus we solve the augmented problem (11.4.2), and choose one eigenvalue and eigenvector from each pair. These give the eigenvalues and eigenvectors of the original matrix C.

Working with the augmented matrix requires a factor of 2 more storage than the original complex matrix. In principle, a complex algorithm is also a factor of 2 more efficient in computer time than is the solution of the augmented problem. In practice, most complex implementations do not achieve this factor unless they are written entirely in real arithmetic. (Good library routines always do this.)
CITED REFERENCES AND FURTHER READING:
Wilkinson, J.H., and Reinsch, C. 1971, Linear Algebra, vol. II of Handbook for Automatic Computation (New York: Springer-Verlag). [1]
Smith, B.T., et al. 1976, Matrix Eigensystem Routines — EISPACK Guide, 2nd ed., vol. 6 of Lecture Notes in Computer Science (New York: Springer-Verlag). [2]

11.5 Reduction of a General Matrix to Hessenberg Form

The algorithms for symmetric matrices, given in the preceding sections, are highly satisfactory in practice. By contrast, it is impossible to design equally satisfactory algorithms for the nonsymmetric case. There are two reasons for this. First, the eigenvalues of a nonsymmetric matrix can be very sensitive to small changes in the matrix elements. Second, the matrix itself can be defective, so that there is no complete set of eigenvectors. We emphasize that these difficulties are intrinsic properties of certain nonsymmetric matrices, and no numerical procedure can "cure" them. The best we can hope for are procedures that don't exacerbate such problems.

The presence of rounding error can only make the situation worse. With finite-precision arithmetic, one cannot even design a foolproof algorithm to determine whether a given matrix is defective or not. Thus current algorithms generally try to find a complete set of eigenvectors, and rely on the user to inspect the results. If any eigenvectors are almost parallel, the matrix is probably defective.

Apart from referring you to the literature, and to the collected routines in [1,2], we are going to sidestep the problem of eigenvectors, giving algorithms for eigenvalues only. If you require just a few eigenvectors, you can read §11.7 and consider finding them by inverse iteration. We consider the problem of finding all eigenvectors of a nonsymmetric matrix as lying beyond the scope of this book.

Balancing

The sensitivity of eigenvalues to rounding errors during the execution of some algorithms can be reduced by the procedure of balancing. The errors in the eigensystem found by a numerical procedure are generally proportional to the Euclidean norm of the matrix, that is, to the square root of the sum of the squares of the elements. The idea of balancing is to use similarity transformations to make corresponding rows and columns of the matrix have comparable norms, thus reducing the overall norm of the matrix while leaving the eigenvalues unchanged. A symmetric matrix is already balanced.

Balancing is a procedure of order N² operations. Thus, the time taken by the procedure balanc, given below, should never be more than a few percent of the total time required to find the eigenvalues. It is therefore recommended that you always balance nonsymmetric matrices. It never hurts, and it can substantially improve the accuracy of the eigenvalues computed for a badly balanced matrix.

The actual algorithm used is due to Osborne, as discussed in [1]. It consists of a sequence of similarity transformations by diagonal matrices D. To avoid introducing rounding errors during the balancing process, the elements of D are restricted to be exact powers of the radix base employed for floating-point arithmetic (i.e., 2 for most machines, but 16 for IBM mainframe architectures). The output is a matrix that is balanced in the norm given by summing the absolute magnitudes of the matrix elements. This is more efficient than using the Euclidean norm, and equally effective: a large reduction in one norm implies a large reduction in the other.

Note that if the off-diagonal elements of any row or column of a matrix are all zero, then the diagonal element is an eigenvalue. If the eigenvalue happens to be ill-conditioned (sensitive to small changes in the matrix elements), it will have relatively large errors when determined by the routine hqr (§11.6). Had we merely inspected the matrix beforehand, we could have determined the isolated eigenvalue exactly and then deleted the corresponding row and column from the matrix. You should consider whether such a pre-inspection might be useful in your application. (For symmetric matrices, the routines we gave will determine isolated eigenvalues accurately in all cases.)

      SUBROUTINE balanc(a,n,np)
      INTEGER n,np
      REAL a(np,np),RADIX,SQRDX
      PARAMETER (RADIX=2.,SQRDX=RADIX**2)
C Given an n by n matrix a stored in an array of physical dimensions np by np, this
C routine replaces it by a balanced matrix with identical eigenvalues. A symmetric
C matrix is already balanced and is unaffected by this procedure. The parameter
C RADIX should be the machine's floating-point radix.
      INTEGER i,j,last
      REAL c,f,g,r,s
1     continue
      last=1
      do 14 i=1,n                  Calculate row and column norms.
        c=0.
        r=0.
        do 11 j=1,n
          if(j.ne.i)then
            c=c+abs(a(j,i))
            r=r+abs(a(i,j))
          endif
        enddo 11
        if(c.ne.0..and.r.ne.0.)then
          g=r/RADIX                If both are nonzero,
          f=1.
          s=c+r
2         if(c.lt.g)then           find the integer power of the machine radix that
            f=f*RADIX              comes closest to balancing the matrix.
            c=c*SQRDX
            goto 2
          endif
          g=r*RADIX
3         if(c.gt.g)then
            f=f/RADIX
            c=c/SQRDX
            goto 3
          endif
          if((c+r)/f.lt.0.95*s)then
            last=0
            g=1./f
            do 12 j=1,n            Apply similarity transformation.
              a(i,j)=a(i,j)*g
            enddo 12
            do 13 j=1,n
              a(j,i)=a(j,i)*f
            enddo 13
          endif
        endif
      enddo 14
      if(last.eq.0)goto 1
      return
      END
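The logic of balanc carries over directly to other languages. Here is a pure-Python sketch of the same Osborne-style balancing; the function name balance and the list-of-lists representation are choices of this note, not the book's. Because every scaling factor is an exact power of RADIX = 2, the similarity transformation introduces no rounding error, just as in the Fortran.

```python
# Sketch (not the book's Fortran): Osborne balancing with power-of-radix scalings.
RADIX = 2.0

def balance(a):
    """Balance the square matrix a (list of lists) in place.
    Returns the diagonal of the accumulated similarity transformation D,
    so that the result equals D^-1 * A_original * D exactly."""
    n = len(a)
    d = [1.0] * n
    done = False
    while not done:
        done = True
        for i in range(n):
            # Off-diagonal column and row norms for index i.
            c = sum(abs(a[j][i]) for j in range(n) if j != i)
            r = sum(abs(a[i][j]) for j in range(n) if j != i)
            if c == 0.0 or r == 0.0:
                continue
            f, s = 1.0, c + r
            while c < r / RADIX:       # find the power of the radix closest
                f *= RADIX             # to balancing c against r
                c *= RADIX * RADIX
            while c > r * RADIX:
                f /= RADIX
                c /= RADIX * RADIX
            if (c + r) / f < 0.95 * s: # apply only if it reduces the norm enough
                done = False
                d[i] *= f
                for j in range(n):     # similarity: row i times 1/f, column i times f
                    a[i][j] /= f
                for j in range(n):
                    a[j][i] *= f
    return d
```

The eigenvalues are untouched: every step is a similarity transformation, and with power-of-two factors the stored elements change only in their exponents.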
The routine balanc does not keep track of the accumulated similarity transformation of the original matrix, since we will only be concerned with finding eigenvalues of nonsymmetric matrices, not eigenvectors Consult [1-3] if you want to keep track of the transformation 478 Chapter 11 Eigensystems if(last.eq.0)goto return END Reduction to Hessenberg Form  × ×      × × × × × × × × × × × × × × × × × ×  × ×  ×  ×  × × By now you should be able to tell at a glance that such a structure can be achieved by a sequence of Householder transformations, each one zeroing the required elements in a column of the matrix Householder reduction to Hessenberg form is in fact an accepted technique An alternative, however, is a procedure analogous to Gaussian elimination with pivoting We will use this elimination procedure since it is about a factor of more efficient than the Householder method, and also since we want to teach you the method It is possible to construct matrices for which the Householder reduction, being orthogonal, is stable and elimination is not, but such matrices are extremely rare in practice Straight Gaussian elimination is not a similarity transformation of the matrix Accordingly, the actual elimination procedure used is slightly different Before the rth stage, the original matrix A ≡ A1 has become Ar , which is upper Hessenberg in its first r − rows and columns The rth stage then consists of the following sequence of operations: • Find the element of maximum magnitude in the rth column below the diagonal If it is zero, skip the next two “bullets” and the stage is done Otherwise, suppose the maximum element was in row r • Interchange rows r and r + This is the pivoting procedure To make the permutation a similarity transformation, also interchange columns r and r + • For i = r + 2, r + 3, , N , compute the multiplier ni,r+1 ≡ air ar+1,r Subtract ni,r+1 times row r + from row i To make the elimination a similarity transformation, also add ni,r+1 
times column i to column r + A total of N − such stages are required Sample page from NUMERICAL RECIPES IN FORTRAN 77: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43064-X) Copyright (C) 1986-1992 by Cambridge University Press.Programs Copyright (C) 1986-1992 by Numerical Recipes Software Permission is granted for internet users to make one paper copy for their own personal use Further reproduction, or any copying of machinereadable files (including this one) to any servercomputer, is strictly prohibited To order Numerical Recipes books,diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only),or send email to trade@cup.cam.ac.uk (outside North America) The strategy for finding the eigensystem of a general matrix parallels that of the symmetric case First we reduce the matrix to a simpler form, and then we perform an iterative procedure on the simplified matrix The simpler structure we use here is called Hessenberg form An upper Hessenberg matrix has zeros everywhere below the diagonal except for the first subdiagonal row For example, in the × case, the nonzero elements are: 11.5 Reduction of a General Matrix to Hessenberg Form 479 SUBROUTINE elmhes(a,n,np) INTEGER n,np REAL a(np,np) Reduction to Hessenberg form by the elimination method The real, nonsymmetric, n by n matrix a, stored in an array of physical dimensions np by np, is replaced by an upper Hessenberg matrix with identical eigenvalues Recommended, but not required, is that this routine be preceded by balanc On output, the Hessenberg matrix is in elements a(i,j) with i ≤ j+1 Elements with i > j+1 are to be thought of as zero, but are returned with random values INTEGER i,j,m REAL x,y 17 m=2,n-1 m is called r + in the text x=0 i=m 11 j=m,n Find the pivot if(abs(a(j,m-1)).gt.abs(x))then x=a(j,m-1) i=j endif enddo 11 if(i.ne.m)then Interchange rows and columns 12 j=m-1,n y=a(i,j) a(i,j)=a(m,j) a(m,j)=y enddo 12 13 j=1,n y=a(j,i) a(j,i)=a(j,m) a(j,m)=y enddo 13 endif 
        if(x.ne.0.)then                   Carry out the elimination.
          do 16 i=m+1,n
            y=a(i,m-1)
            if(y.ne.0.)then
              y=y/x
              a(i,m-1)=y
              do 14 j=m,n
                a(i,j)=a(i,j)-y*a(m,j)
              enddo 14
              do 15 j=1,n
                a(j,m)=a(j,m)+y*a(j,i)
              enddo 15
            endif
          enddo 16
        endif
      enddo 17
      return
      END

When the magnitudes of the matrix elements vary over many orders, you should try to rearrange the matrix so that the largest elements are in the top left-hand corner. This reduces the roundoff error, since the reduction proceeds from left to right. Since we are concerned only with eigenvalues, the routine elmhes does not keep track of the accumulated similarity transformation. The operation count is about 5N³/6 for large N.

CITED REFERENCES AND FURTHER READING:

Wilkinson, J.H., and Reinsch, C. 1971, Linear Algebra, vol. II of Handbook for Automatic Computation (New York: Springer-Verlag). [1]

Smith, B.T., et al. 1976, Matrix Eigensystem Routines — EISPACK Guide, 2nd ed., vol. 6 of Lecture Notes in Computer Science (New York: Springer-Verlag). [2]

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §6.5.4. [3]

11.6 The QR Algorithm for Real Hessenberg Matrices

Recall the following relations for the QR algorithm with shifts:

      Q_s · (A_s − k_s 1) = R_s                                    (11.6.1)

where Q is orthogonal and R is upper triangular, and

      A_{s+1} = R_s · Q_s^T + k_s 1 = Q_s · A_s · Q_s^T            (11.6.2)

The QR transformation preserves the upper Hessenberg form of the original matrix A ≡ A_1, and the workload on such a matrix is O(n²) per iteration, as opposed to O(n³) on a general matrix. As s → ∞, A_s converges to a form where the eigenvalues are either isolated on the diagonal or are eigenvalues of a 2 × 2 submatrix on the diagonal. As we pointed out in §11.3, shifting is essential for rapid convergence.

A key difference here is that a nonsymmetric real matrix can have complex eigenvalues. This means that good choices for the shifts k_s may be complex, apparently necessitating complex arithmetic.

Complex arithmetic can be avoided, however, by a clever trick. The trick depends on a result analogous to the lemma we used for implicit shifts in §11.3. The lemma we need here states that if B is a nonsingular matrix such that

      B · Q = Q · H                                                (11.6.3)

where Q is orthogonal and H is upper Hessenberg, then Q and H are fully determined by the first column of Q. (The determination is unique if H has positive subdiagonal elements.) The lemma can be proved by induction analogously to the proof given for tridiagonal matrices in §11.3.

The lemma is used in practice by taking two steps of the QR algorithm, either with two real shifts k_s and k_{s+1}, or with complex conjugate values k_s and k_{s+1} = k_s*. This gives a real matrix A_{s+2}, where

      A_{s+2} = Q_{s+1} · Q_s · A_s · Q_s^T · Q_{s+1}^T            (11.6.4)
The Q's are determined by

      A_s − k_s 1 = Q_s^T · R_s                                    (11.6.5)

      A_{s+1} = Q_s · A_s · Q_s^T                                  (11.6.6)

      A_{s+1} − k_{s+1} 1 = Q_{s+1}^T · R_{s+1}                    (11.6.7)

Using (11.6.6), equation (11.6.7) can be rewritten

      A_s − k_{s+1} 1 = Q_s^T · Q_{s+1}^T · R_{s+1} · Q_s          (11.6.8)

Hence, if we define

      M = (A_s − k_{s+1} 1) · (A_s − k_s 1)                        (11.6.9)

equations (11.6.5) and (11.6.8) give

      R = Q · M                                                    (11.6.10)

where

      Q = Q_{s+1} · Q_s                                            (11.6.11)

      R = R_{s+1} · R_s                                            (11.6.12)

Equation (11.6.4) can be rewritten

      A_s · Q^T = Q^T · A_{s+2}                                    (11.6.13)

Thus suppose we can somehow find an upper Hessenberg matrix H such that

      A_s · Q̄^T = Q̄^T · H                                          (11.6.14)

where Q̄ is orthogonal. If Q̄^T has the same first column as Q^T (i.e., Q̄ has the same first row as Q), then Q̄ = Q and A_{s+2} = H.

The first row of Q is found as follows. Equation (11.6.10) shows that Q is the orthogonal matrix that triangularizes the real matrix M. Any real matrix can be triangularized by premultiplying it by a sequence of Householder matrices P_1 (acting on the first column), P_2 (acting on the second column), ..., P_{n−1}. Thus Q = P_{n−1} ··· P_2 · P_1, and the first row of Q is the first row of P_1 since P_i is an (i − 1) × (i − 1) identity matrix in the top left-hand corner. We now must find Q̄ satisfying (11.6.14) whose first row is that of P_1.

The Householder matrix P_1 is determined by the first column of M. Since A_s is upper Hessenberg, equation (11.6.9) shows that the first column of M has the form [p_1, q_1, r_1, 0, ..., 0]^T, where

      p_1 = a_11² − a_11(k_s + k_{s+1}) + k_s k_{s+1} + a_12 a_21
      q_1 = a_21(a_11 + a_22 − k_s − k_{s+1})
      r_1 = a_21 a_32                                              (11.6.15)

Hence

      P_1 = 1 − 2 w_1 · w_1^T                                      (11.6.16)

where w_1 has only its first 3 elements nonzero (cf. equation 11.2.5). The matrix P_1 · A_s · P_1^T is therefore upper Hessenberg with extra elements:

                      [ × × × × × × ]
                      [ × × × × × × ]
      P_1·A_s·P_1^T = [ x × × × × × ]
                      [ x x × × × × ]
                      [       × × × ]
                      [         × × ]                              (11.6.17)

This matrix can be restored to upper Hessenberg form without affecting the first row by a sequence of Householder similarity transformations. The first such Householder matrix, P_2, acts on elements 2, 3, and 4 in the first column, annihilating elements 3 and 4. This produces a matrix of the same form as (11.6.17), with the extra elements appearing one column over:

      [ × × × × × × ]
      [ × × × × × × ]
      [   × × × × × ]
      [   x × × × × ]
      [   x x × × × ]
      [         × × ]                                              (11.6.18)

Proceeding in this way up to P_{n−1}, we see that at each stage the Householder matrix P_r has a vector w_r that is nonzero only in elements r, r + 1, and r + 2. These elements are determined by the elements r, r + 1, and r + 2 in the (r − 1)st column of the current matrix. Note that the preliminary matrix P_1 has the same structure as P_2, ..., P_{n−1}. The result is that

      P_{n−1} ··· P_2 · P_1 · A_s · P_1^T · P_2^T ··· P_{n−1}^T = H    (11.6.19)

where H is upper Hessenberg. Thus

      Q̄ = Q = P_{n−1} ··· P_2 · P_1                                (11.6.20)

and

      A_{s+2} = H                                                  (11.6.21)

The shifts of origin at each stage are taken to be the eigenvalues of the 2 × 2 matrix in the bottom right-hand corner of the current A_s. This gives

      k_s + k_{s+1} = a_{n−1,n−1} + a_{nn}
      k_s k_{s+1}   = a_{n−1,n−1} a_{nn} − a_{n−1,n} a_{n,n−1}     (11.6.22)

Substituting (11.6.22) in (11.6.15), we get

      p_1 = a_21 { [(a_{nn} − a_11)(a_{n−1,n−1} − a_11) − a_{n−1,n} a_{n,n−1}] / a_21 + a_12 }
      q_1 = a_21 [ a_22 − a_11 − (a_{nn} − a_11) − (a_{n−1,n−1} − a_11) ]
      r_1 = a_21 a_32                                              (11.6.23)

In summary, to carry out a double QR step we construct the Householder matrices P_r, r = 1, ..., n − 1. For P_1 we use p_1, q_1, and r_1 given by (11.6.23). For the remaining matrices, p_r, q_r, and r_r are determined by the (r, r − 1), (r + 1, r − 1), and (r + 2, r − 1) elements of the current matrix. The number of arithmetic operations can be reduced by writing the nonzero elements of the 2w · w^T part of the Householder matrix in the form

                 [ (p ± s)/(±s) ]
      2w·w^T =   [    q/(±s)    ] · [ 1   q/(p ± s)   r/(p ± s) ]  (11.6.24)
                 [    r/(±s)    ]

where

      s² = p² + q² + r²                                            (11.6.25)

(We have simply divided each element by a piece of the normalizing factor; cf. the equations in §11.2.)
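The statement about the first column of M is easy to verify numerically. The plain-Python sketch below is not one of the book's routines; it uses a made-up 4 × 4 upper Hessenberg matrix and made-up shifts, forms the first column of M = (A_s − k_{s+1} 1)·(A_s − k_s 1) directly, and checks it against p_1, q_1, r_1 of equation (11.6.15):

```python
# Sketch (not from the book): check that for an upper Hessenberg A, the
# first column of M = (A - k2*I)(A - k1*I) is [p1, q1, r1, 0, ...]^T,
# with p1, q1, r1 given by equation (11.6.15). Plain Python, no libraries.

def first_column_of_M(a, k1, k2):
    """First column of (A - k2*I)(A - k1*I) by direct multiplication."""
    n = len(a)
    # first column of (A - k1*I)
    col = [a[i][0] - (k1 if i == 0 else 0.0) for i in range(n)]
    # multiply by (A - k2*I)
    return [sum((a[i][j] - (k2 if i == j else 0.0)) * col[j]
                for j in range(n)) for i in range(n)]

# A made-up 4x4 upper Hessenberg test matrix (zeros below the subdiagonal)
# and two made-up real shifts k1, k2.
a = [[4.0, 1.0, 2.0, 3.0],
     [2.0, 3.0, 1.0, 1.0],
     [0.0, 1.0, 2.0, 5.0],
     [0.0, 0.0, 2.0, 1.0]]
k1, k2 = 0.3, 1.7

m_col = first_column_of_M(a, k1, k2)

# Equation (11.6.15), written with 0-based indices: a11 = a[0][0], etc.
p1 = a[0][0]**2 - a[0][0]*(k1 + k2) + k1*k2 + a[0][1]*a[1][0]
q1 = a[1][0]*(a[0][0] + a[1][1] - k1 - k2)
r1 = a[1][0]*a[2][1]

assert abs(m_col[0] - p1) < 1e-12
assert abs(m_col[1] - q1) < 1e-12
assert abs(m_col[2] - r1) < 1e-12
assert all(abs(v) < 1e-12 for v in m_col[3:])
```

Because A is upper Hessenberg, only the first three entries of the column survive; this is what allows the double QR step to be started from the single small Householder matrix P_1.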
We have judiciously grouped terms to reduce possible roundoff when there are small off-diagonal elements. Since only the ratios of elements are relevant for a Householder transformation, we can omit the factor a_21 from (11.6.23).

If we proceed in this way, convergence is usually very fast. There are two possible ways of terminating the iteration for an eigenvalue. First, if a_{n,n−1} becomes "negligible," then a_{nn} is an eigenvalue. We can then delete the nth row and column of the matrix and look for the next eigenvalue. Alternatively, a_{n−1,n−2} may become negligible. In this case the eigenvalues of the 2 × 2 matrix in the lower right-hand corner may be taken to be eigenvalues. We delete the nth and (n − 1)st rows and columns of the matrix and continue.

The test for convergence to an eigenvalue is combined with a test for negligible subdiagonal elements that allows splitting of the matrix into submatrices. We find the largest i such that a_{i,i−1} is negligible. If i = n, we have found a single eigenvalue. If i = n − 1, we have found two eigenvalues. Otherwise we continue the iteration on the submatrix in rows i to n (i being set to unity if there is no small subdiagonal element).

After determining i, the submatrix in rows i to n is examined to see if the product of any two consecutive subdiagonal elements is small enough that we can work with an even smaller submatrix, starting say in row m. We start with m = n − 2 and decrement it down to i + 1, computing p, q, and r according to equations (11.6.23) with 1 replaced by m and 2 by m + 1. If these were indeed the elements of the special "first" Householder matrix in a double QR step, then applying the Householder matrix would lead to nonzero elements in positions (m + 1, m − 1), (m + 2, m − 1), and (m + 2, m). We require that the first two of these elements be small compared with the local diagonal elements a_{m−1,m−1}, a_{mm}, and a_{m+1,m+1}. A satisfactory approximate criterion is

      |a_{m,m−1}| (|q| + |r|) ≪ |p| (|a_{m+1,m+1}| + |a_{mm}| + |a_{m−1,m−1}|)    (11.6.26)

Very rarely, the procedure described so far will fail to converge. On such matrices, experience shows that if one double step is performed with any shifts that are of order the norm of the matrix, convergence is subsequently very rapid. Accordingly, if ten iterations occur without determining an eigenvalue, the usual shifts are replaced for the next iteration by shifts defined by

      k_s + k_{s+1} = 1.5 × (|a_{n,n−1}| + |a_{n−1,n−2}|)
      k_s k_{s+1}   = (|a_{n,n−1}| + |a_{n−1,n−2}|)²               (11.6.27)

The factor 1.5 was arbitrarily chosen to lessen the likelihood of an "unfortunate" choice of shifts. This strategy is repeated after 20 unsuccessful iterations. After 30 unsuccessful iterations, the routine reports failure.

The operation count for the QR algorithm described here is ∼ 5k² per iteration, where k is the current size of the matrix. The typical average number of iterations per eigenvalue is ∼ 1.8, so the total operation count for all the eigenvalues is ∼ 3n³. This estimate neglects any possible efficiency due to splitting or sparseness of the matrix.

The following routine hqr is based algorithmically on the above description, in turn following the implementations in [1,2].

      SUBROUTINE hqr(a,n,np,wr,wi)
      INTEGER n,np
      REAL a(np,np),wi(np),wr(np)
          Finds all eigenvalues of an n by n upper Hessenberg matrix a that is stored in an
          np by np array. On input a can be exactly as output from elmhes §11.5; on output
          it is destroyed. The real and imaginary parts of the eigenvalues are returned in
          wr and wi, respectively.
      INTEGER i,its,j,k,l,m,nn
      REAL anorm,p,q,r,s,t,u,v,w,x,y,z
      anorm=0.                            Compute matrix norm for possible use in
      do 12 i=1,n                         locating single small subdiagonal element.
        do 11 j=max(i-1,1),n
          anorm=anorm+abs(a(i,j))
        enddo 11
      enddo 12
      nn=n
      t=0.                                Gets changed only by an exceptional shift.
1     if(nn.ge.1)then                     Begin search for next eigenvalue.
        its=0
2       do 13 l=nn,2,-1                   Begin iteration: look for single small sub-
          s=abs(a(l-1,l-1))+abs(a(l,l))   diagonal element.
          if(s.eq.0.)s=anorm
          if(abs(a(l,l-1))+s.eq.s)goto 3
        enddo 13
        l=1
3       x=a(nn,nn)
        if(l.eq.nn)then                   One root found.
          wr(nn)=x+t
          wi(nn)=0.
          nn=nn-1
        else
          y=a(nn-1,nn-1)
          w=a(nn,nn-1)*a(nn-1,nn)
          if(l.eq.nn-1)then               Two roots found...
            p=0.5*(y-x)
            q=p**2+w
            z=sqrt(abs(q))
            x=x+t
            if(q.ge.0.)then               ...a real pair...
              z=p+sign(z,p)
              wr(nn)=x+z
              wr(nn-1)=wr(nn)
              if(z.ne.0.)wr(nn)=x-w/z
              wi(nn)=0.
              wi(nn-1)=0.
            else                          ...a complex pair.
              wr(nn)=x+p
              wr(nn-1)=wr(nn)
              wi(nn)=z
              wi(nn-1)=-z
            endif
            nn=nn-2
          else                            No roots found. Continue iteration.
            if(its.eq.30)pause 'too many iterations in hqr'
            if(its.eq.10.or.its.eq.20)then    Form exceptional shift.
              t=t+x
              do 14 i=1,nn
                a(i,i)=a(i,i)-x
              enddo 14
              s=abs(a(nn,nn-1))+abs(a(nn-1,nn-2))
              x=0.75*s
              y=x
              w=-0.4375*s**2
            endif
            its=its+1
            do 15 m=nn-2,l,-1             Form shift and then look for two consecu-
              z=a(m,m)                    tive small subdiagonal elements.
              r=x-z
              s=y-z
              p=(r*s-w)/a(m+1,m)+a(m,m+1)     Equation (11.6.23).
              q=a(m+1,m+1)-z-r-s
              r=a(m+2,m+1)
              s=abs(p)+abs(q)+abs(r)      Scale to prevent overflow or underflow.
              p=p/s
              q=q/s
              r=r/s
              if(m.eq.l)goto 4
              u=abs(a(m,m-1))*(abs(q)+abs(r))
              v=abs(p)*(abs(a(m-1,m-1))+abs(z)+abs(a(m+1,m+1)))
              if(u+v.eq.v)goto 4          Equation (11.6.26).
            enddo 15
4           do 16 i=m+2,nn
              a(i,i-2)=0.
              if (i.ne.m+2) a(i,i-3)=0.
            enddo 16
            do 19 k=m,nn-1                Double QR step on rows l to nn and
              if(k.ne.m)then              columns m to nn.
                p=a(k,k-1)                Begin setup of Householder vector.
                q=a(k+1,k-1)
                r=0.
                if(k.ne.nn-1)r=a(k+2,k-1)
                x=abs(p)+abs(q)+abs(r)
                if(x.ne.0.)then
                  p=p/x                   Scale to prevent overflow or underflow.
                  q=q/x
                  r=r/x
                endif
              endif
              s=sign(sqrt(p**2+q**2+r**2),p)
              if(s.ne.0.)then
                if(k.eq.m)then
                  if(l.ne.m)a(k,k-1)=-a(k,k-1)
                else
                  a(k,k-1)=-s*x
                endif
                p=p+s                     Equations (11.6.24).
                x=p/s
                y=q/s
                z=r/s
                q=q/p
                r=r/p
                do 17 j=k,nn              Row modification.
                  p=a(k,j)+q*a(k+1,j)
                  if(k.ne.nn-1)then
                    p=p+r*a(k+2,j)
                    a(k+2,j)=a(k+2,j)-p*z
                  endif
                  a(k+1,j)=a(k+1,j)-p*y
                  a(k,j)=a(k,j)-p*x
                enddo 17
                do 18 i=l,min(nn,k+3)     Column modification.
                  p=x*a(i,k)+y*a(i,k+1)
                  if(k.ne.nn-1)then
                    p=p+z*a(i,k+2)
                    a(i,k+2)=a(i,k+2)-p*r
                  endif
                  a(i,k+1)=a(i,k+1)-p*q
                  a(i,k)=a(i,k)-p
                enddo 18
              endif
            enddo 19
            goto 2                        ...for next iteration on current eigenvalue.
          endif
        endif
        goto 1                            ...for next eigenvalue.
      endif
      return
      END

CITED REFERENCES AND FURTHER READING:

Wilkinson, J.H., and Reinsch, C. 1971, Linear Algebra, vol. II of Handbook for Automatic Computation (New York: Springer-Verlag). [1]

Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins University Press), §7.5.

Smith, B.T., et al. 1976, Matrix Eigensystem Routines — EISPACK Guide, 2nd ed., vol. 6 of Lecture Notes in Computer Science (New York: Springer-Verlag). [2]

11.7 Improving Eigenvalues and/or Finding Eigenvectors by Inverse Iteration

The basic idea behind inverse iteration is quite simple. Let y be the solution of the linear system

      (A − τ1) · y = b                                             (11.7.1)

where b is a random vector and τ is close to some eigenvalue λ of A. Then the solution y will be close to the eigenvector corresponding to λ. The procedure can be iterated: Replace b by y and solve for a new y, which will be even closer to the true eigenvector.

We can see why this works by expanding both y and b as linear combinations of the eigenvectors x_j of A:

      y = Σ_j α_j x_j ,      b = Σ_j β_j x_j                       (11.7.2)

Then (11.7.1) gives

      Σ_j α_j (λ_j − τ) x_j = Σ_j β_j x_j                          (11.7.3)

so that

      α_j = β_j / (λ_j − τ)                                        (11.7.4)

and

      y = Σ_j β_j x_j / (λ_j − τ)                                  (11.7.5)

If τ is close to λ_n, say, then provided β_n is not accidentally too small, y will be approximately x_n, up to a normalization. Moreover, each iteration of this procedure gives another power of λ_j − τ in the denominator of (11.7.5). Thus the convergence is rapid for well-separated eigenvalues.

Suppose at the kth stage of iteration we are solving the equation

      (A − τ_k 1) · y = b_k                                        (11.7.6)

where b_k and τ_k are our current guesses for some eigenvector and eigenvalue of interest (let's say, x_n and λ_n). Normalize b_k so that b_k · b_k = 1. The exact eigenvector and eigenvalue satisfy

      A · x_n = λ_n x_n                                            (11.7.7)

so

      (A − τ_k 1) · x_n = (λ_n − τ_k) x_n                          (11.7.8)

Since y of (11.7.6) is an improved approximation to x_n, we normalize it and set

      b_{k+1} = y / |y|                                            (11.7.9)

We get an improved estimate of the eigenvalue by substituting our improved guess y for x_n in (11.7.8). By (11.7.6), the left-hand side is b_k, so calling λ_n our new value τ_{k+1}, we find

      τ_{k+1} = τ_k + 1 / (b_k · y)                                (11.7.10)
While the above formulas look simple enough, in practice the implementation can be quite tricky. The first question to be resolved is when to use inverse iteration. Most of the computational load occurs in solving the linear system (11.7.6). Thus a possible strategy is first to reduce the matrix A to a special form that allows easy solution of (11.7.6). Tridiagonal form for symmetric matrices or Hessenberg for nonsymmetric are the obvious choices. Then apply inverse iteration to generate all the eigenvectors. While this is an O(N²) method for symmetric matrices, it is many times less efficient than the QL method given earlier. In fact, even the best inverse iteration packages are less efficient than the QL method as soon as more than about 25 percent of the eigenvectors are required. Accordingly, inverse iteration is generally used when one already has good eigenvalues and wants only a few selected eigenvectors.

You can write a simple inverse iteration routine yourself using LU decomposition to solve (11.7.6). You can decide whether to use the general LU algorithm we gave in Chapter 2 or whether to take advantage of tridiagonal or Hessenberg form. Note that, since the linear system (11.7.6) is nearly singular, you must be careful to use a version of LU decomposition like that in §2.3 which replaces a zero pivot with a very small number.

We have chosen not to give a general inverse iteration routine in this book, because it is quite cumbersome to take account of all the cases that can arise. Routines are given, for example, in [1,2]. If you use these, or write your own routine, you may appreciate the following pointers.

One starts by supplying an initial value τ_0 for the eigenvalue λ_n of interest. Choose a random normalized vector b_0 as the initial guess for the eigenvector x_n, and solve (11.7.6). The new vector y is bigger than b_0 by a "growth factor" |y|, which ideally should be large.
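As a concrete illustration, the loop (11.7.6)–(11.7.10) can be written in a few lines of plain Python. This is a toy sketch, not the book's recommended production approach: the 3 × 3 test matrix, the starting shift 3.3, and the fixed iteration count are all made up, and the solver replaces a zero pivot with a tiny number, as §2.3 advises.

```python
# Illustrative sketch (not one of the book's routines): inverse iteration
# for one eigenpair of a small symmetric matrix, solving (A - tau*I)y = b
# with a plain Gaussian-elimination solver. Pure Python, no libraries.

def solve(a, b):
    """Solve a*x = b by Gaussian elimination with partial pivoting.
    A vanishing pivot is replaced by a tiny number, since the shifted
    systems met in inverse iteration are nearly singular."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        if abs(m[col][col]) < 1e-30:
            m[col][col] = 1e-30
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / m[r][r]
    return x

def inverse_iteration(a, tau, iters=6):
    n = len(a)
    b = [1.0 / n**0.5] * n                       # normalized starting vector b0
    for _ in range(iters):
        shifted = [[a[i][j] - (tau if i == j else 0.0) for j in range(n)]
                   for i in range(n)]
        y = solve(shifted, b)                    # equation (11.7.6)
        growth = sum(v * v for v in y) ** 0.5    # growth factor |y|
        tau = tau + 1.0 / sum(bi * yi for bi, yi in zip(b, y))  # (11.7.10)
        b = [v / growth for v in y]              # equation (11.7.9)
    return tau, b

# Made-up symmetric tridiagonal test matrix; its eigenvalues are
# 2 - sqrt(2), 2, and 2 + sqrt(2).
a = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
tau, x = inverse_iteration(a, 3.3)   # tau converges to 2 + sqrt(2)
```

Starting from τ_0 = 3.3, the iteration homes in on the nearby eigenvalue 2 + √2 ≈ 3.4142. Updating τ at every pass, as done here for simplicity, is the expensive but quadratically convergent choice; the text's pointers discuss when to keep τ fixed instead.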
Equivalently, the change in the eigenvalue, which by (11.7.10) is essentially 1/|y|, should be small. The following cases can arise:

• If the growth factor is too small initially, then we assume we have made a "bad" choice of random vector. This can happen not just because of a small β_n in (11.7.5), but also in the case of a defective matrix, when (11.7.5) does not even apply (see, e.g., [1] or [3] for details). We go back to the beginning and choose a new initial vector.

• The change |b_1 − b_0| might be less than some tolerance. We can use this as a criterion for stopping, iterating until it is satisfied, with a maximum of 5 – 10 iterations, say.

• After a few iterations, if |b_{k+1} − b_k| is not decreasing rapidly enough, we can try updating the eigenvalue according to (11.7.10). If τ_{k+1} = τ_k to machine accuracy, we are not going to improve the eigenvector much more and can quit. Otherwise start another cycle of iterations with the new eigenvalue.

The reason we do not update the eigenvalue at every step is that when we solve the linear system (11.7.6) by LU decomposition, we can save the decomposition if τ_k is fixed. We only need the backsubstitution step each time we update b_k. The number of iterations we decide to do with a fixed τ_k is a trade-off between the quadratic convergence but O(N³) workload for updating τ_k at each step and the linear convergence but O(N²) load for keeping τ_k fixed. If you have determined the eigenvalue by one of the routines given earlier in the chapter, it is probably correct to machine accuracy anyway, and you can omit updating it.

There are two different pathologies that can arise during inverse iteration. The first is multiple or closely spaced roots. This is more often a problem with symmetric matrices. Inverse iteration will find only one eigenvector for a given initial guess τ_0. A good strategy is to perturb the last few significant digits in τ_0 and then repeat the iteration. Usually this provides an independent eigenvector. Special steps generally have to be taken to ensure orthogonality of the linearly independent eigenvectors, whereas the Jacobi and QL algorithms automatically yield orthogonal eigenvectors even in the case of multiple eigenvalues.

The second problem, peculiar to nonsymmetric matrices, is the defective case. Unless one makes a "good" initial guess, the growth factor is small. Moreover, iteration does not improve matters. In this case, the remedy is to choose random initial vectors, solve (11.7.6) once, and quit as soon as any vector gives an acceptably large growth factor. Typically only a few trials are necessary.

One further complication in the nonsymmetric case is that a real matrix can have complex-conjugate pairs of eigenvalues. You will then have to use complex arithmetic to solve (11.7.6) for the complex eigenvectors. For any moderate-sized (or larger) nonsymmetric matrix, our recommendation is to avoid inverse iteration in favor of a QR method that includes the eigenvector computation in complex arithmetic. You will find routines for this in [1,2] and other places.

CITED REFERENCES AND FURTHER READING:

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America).

Wilkinson, J.H., and Reinsch, C. 1971, Linear Algebra, vol. II of Handbook for Automatic Computation (New York: Springer-Verlag), p. 418. [1]

Smith, B.T., et al. 1976, Matrix Eigensystem Routines — EISPACK Guide, 2nd ed., vol. 6 of Lecture Notes in Computer Science (New York: Springer-Verlag). [2]

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), p. 356. [3]