
Spatiotemporal Data Analysis




DOCUMENT INFORMATION

Pages: 336
Size: 6.65 MB

Content

Spatiotemporal Data Analysis

Gidon Eshel

Princeton University Press, Princeton and Oxford

Copyright © 2012 by Princeton University Press. Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540. In the United Kingdom: Princeton University Press, 6 Oxford Street, Woodstock, Oxfordshire OX20 1TW. press.princeton.edu. All Rights Reserved.

Library of Congress Cataloging-in-Publication Data
Eshel, Gidon, 1958–
Spatiotemporal data analysis / Gidon Eshel. p. cm.
Includes bibliographical references and index.
ISBN 978-0-691-12891-7 (hardback)
1. Spatial analysis (Statistics). I. Title.
QA278.2.E84 2011 519.5'36—dc23 2011032275

British Library Cataloging-in-Publication Data is available.

MATLAB® and Simulink® are registered trademarks of The MathWorks, Inc. and are used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use of MATLAB® and Simulink® does not constitute an endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® and Simulink® software.

This book has been composed in Minion Pro. Printed on acid-free paper. ∞ Printed in the United States of America. 10 9 8 7 6 5 4 3 2 1

To Laura, Adam, and Laila, with much love and deep thanks

Contents

Preface
Acknowledgments

Part 1. Foundations

One. Introduction and Motivation
Two. Notation and Basic Operations
Three. Matrix Properties, Fundamental Spaces, Orthogonality
3.1 Vector Spaces
3.2 Matrix Rank
3.3 Fundamental Spaces Associated with $A \in \mathbb{R}^{M \times N}$
3.4 Gram-Schmidt Orthogonalization
3.5 Summary
Four. Introduction to Eigenanalysis
4.1 Preface
4.2 Eigenanalysis Introduced
4.3 Eigenanalysis as Spectral Representation
4.4 Summary
Five. The Algebraic Operation of SVD
5.1 SVD Introduced
5.2 Some Examples
5.3 SVD Applications
5.4 Summary
Part 2. Methods of Data Analysis

Six. The Gray World of Practical Data Analysis: An Introduction to Part 2
Seven. Statistics in Deterministic Sciences: An Introduction
7.1 Probability Distributions
7.2 Degrees of Freedom
Eight. Autocorrelation
8.1 Theoretical Autocovariance and Autocorrelation Functions of AR(1) and AR(2)
8.2 Acf-Derived Timescale
8.3 Summary of Chapters 7 and 8
Nine. Regression and Least Squares
9.1 Prologue
9.2 Setting Up the Problem
9.3 The Linear System Ax = b
9.4 Least Squares: The SVD View
9.5 Some Special Problems Giving Rise to Linear Systems
9.6 Statistical Issues in Regression Analysis
9.7 Multidimensional Regression and Linear Model Identification
9.8 Summary
Ten. The Fundamental Theorem of Linear Algebra
10.1 Introduction
10.2 The Forward Problem
10.3 The Inverse Problem
Eleven. Empirical Orthogonal Functions
11.1 Introduction
11.2 Data Matrix Structure Convention
11.3 Reshaping Multidimensional Data Sets for EOF Analysis
11.4 Forming Anomalies and Removing Time Mean
11.5 Missing Values, Take 1
11.6 Choosing and Interpreting the Covariability Matrix
11.7 Calculating the EOFs
11.8 Missing Values, Take 2
11.9 Projection Time Series, the Principal Components
11.10 A Final Realistic and Slightly Elaborate Example: Southern New York State Land Surface Temperature
11.11 Extended EOF Analysis, EEOF
11.12 Summary
Twelve. The SVD Analysis of Two Fields
12.1 A Synthetic Example
12.2 A Second Synthetic Example
12.3 A Real Data Example
12.4 EOFs as a Prefilter to SVD
12.5 Summary
Thirteen. Suggested Homework
13.1 Homework 1, Corresponding to Chapter 3
13.2 Homework 2, Corresponding to Chapter 3
13.3 Homework 3, Corresponding to Chapter 3
13.4 Homework 4, Corresponding to Chapter 4
13.5 Homework 5, Corresponding to Chapter 5
13.6 Homework 6, Corresponding to Chapter 8
13.7 A Suggested Midterm Exam
13.8 A Suggested Final Exam
Index

That is, for $b_2$ the poorly known mode is scaled by a $\tilde{c}_1$ whose magnitude is orders of magnitude smaller than in the cases of $b_1$ and $b_3$; we fully expect $\tilde{x}_2$ to work, and $\tilde{x}_1$ and $\tilde{x}_3$ to fail!

13.7 A Suggested Midterm Exam

13.7.1 Assignment

Let

$$A = \begin{pmatrix} 1 & 0 & -1 & 2 & 1 & 2 \\ 2 & 1 & -1 & 0 & 1 & 1 \\ 4 & 1 & -3 & 4 & 3 & 5 \end{pmatrix} \qquad\text{and}\qquad B = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

(a) Enumerate symbolically (by writing down the relevant equations) the fundamental spaces associated with a rank-q $M \times N$ matrix, their dimensions, and the various relationships satisfied by those spaces and their parent spaces.

(b) Reduce A above to its corresponding U. No need to normalize pivots. Report all elementary operations in as many Es as needed. Report A's q.

(c) Devise a nice spanning set for R(A) and explain this space's relationship to a right-hand side b with which Ax = b has an exact solution.

(d) Transform the above set to an orthonormal one. Explain briefly what you needed to do.

(e) Devise a nice spanning set for $R(A^T)$. Which of A's rows, if any, can you replace so as to render A full rank if it isn't already? If you found such a row (or rows), what is the general form of the replacement set?

(f) Devise a nice spanning set for N(A).

(g) Obtain the general solution of
$$A\mathbf{x} = \mathbf{b} = \begin{pmatrix} 2 \\ 5 \\ 9 \end{pmatrix},$$
and outline its various pieces and their relationships, if any, to any or all of A's fundamental spaces.

(h) If you wanted/needed to, could you devise two solutions $\mathbf{x}_1$ and $\mathbf{x}_2$ to the above system such that $\|\mathbf{x}_2\| = 2\|\mathbf{x}_1\|$?

(i) Find the eigenvalues and eigenvectors of B.

(j) Divide $\mathbb{R}^3$ into orthogonal subspaces based on the action of B. That is, some $\mathbb{R}^3$ vectors, call them f, will have one fate upon being premultiplied by B, while others, call them g, will have another fate; give some definition of f and g, and describe their fates (i.e., Bf and Bg).

(k) What is B's rank? What is the relationship between U and the eigen-decomposition?
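The exam is meant for hand work, but a numerical scratchpad alongside keeps the derivations honest. A minimal Octave/MATLAB sketch (an illustration, not part of the exam, assuming A and B exactly as given above) that previews the ranks and fundamental-space dimensions derived by hand in the answers below:

```matlab
% Midterm matrices, as given in the assignment
A = [1 0 -1 2 1 2;
     2 1 -1 0 1 1;
     4 1 -3 4 3 5];
B = [1 0 1; 0 1 0; 0 0 0];

[M, N] = size(A);                 % M = 3, N = 6
q = rank(A);                      % expect q = 2
fprintf('q = %d, dim N(A) = %d, dim N(A'') = %d\n', q, N - q, M - q);

% Orthonormal bases for the four fundamental spaces
Ncol = null(A);                   % basis of N(A):   N - q = 4 columns
Nlft = null(A');                  % basis of N(A'):  M - q = 1 column
Rcol = orth(A);                   % basis of R(A):   q = 2 columns
Rrow = orth(A');                  % basis of R(A'):  q = 2 columns
```

Running it reports q = 2, dim N(A) = 4, and dim N(Aᵀ) = 1, matching the dimension formulas of answer (a) below.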
13.7.2 Answers

(a)
$$N(A) = \{\mathbf{n} \in \mathbb{R}^N : A\mathbf{n} = \mathbf{0} \in \mathbb{R}^M\}, \qquad \dim[N(A)] = N - q,$$
$$N(A^T) = \{\mathbf{n}' \in \mathbb{R}^M : A^T\mathbf{n}' = \mathbf{0} \in \mathbb{R}^N\}, \qquad \dim[N(A^T)] = M - q,$$
$$R(A) = \{\mathbf{c} \in \mathbb{R}^M : \mathbf{c} = A\mathbf{x},\ \mathbf{x} \in \mathbb{R}^N\}, \qquad \dim[R(A)] = q,$$
$$R(A^T) = \{\mathbf{r} \in \mathbb{R}^N : \mathbf{r} = A^T\mathbf{y},\ \mathbf{y} \in \mathbb{R}^M\}, \qquad \dim[R(A^T)] = q.$$

In words:
• A's null space: the set of all $\mathbb{R}^N$ vectors n killed by A (i.e., mapped by it onto the $\mathbb{R}^M$ zero vector).
• A's left null space: the set of all $\mathbb{R}^M$ vectors n′ killed by $A^T$ (i.e., mapped by it onto the $\mathbb{R}^N$ zero vector).
• A's column space: the set of all $\mathbb{R}^M$ vectors c that are linear combinations of A's columns.
• A's row space: the set of all $\mathbb{R}^N$ vectors r that are linear combinations of A's rows.

(b) With
$$E_1 = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -4 & 0 & 1 \end{pmatrix} \quad\text{and}\quad E_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix},$$
$$U = E_2 E_1 A = \begin{pmatrix} 1 & 0 & -1 & 2 & 1 & 2 \\ 0 & 1 & 1 & -4 & -1 & -3 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix},$$
so this A's q = 2.

(c)
$$\mathrm{span}[R(A)] = \left\{ \frac{1}{\sqrt{21}}\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix},\ \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} \right\}.$$
To show that this set spans R(A), we need to show that any of A's columns, or linear combinations thereof, can be expressed as a linear combination of the above set. But since most of A's columns are dependent, and only the first (leftmost) two are independent, all we need to show is that
$$a\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix} + b\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix},$$
for any (a, b), can be expressed in terms of our spanning set. Since the latter is simply the former normalized, this is a triviality, a condition that is obviously met. If Ax = b is to have an exact solution, b had better be from A's column space, $\mathbf{b} \in R(A)$.

(d) First, are the two linearly independent? Since clearly neither is a multiple of the other, they are. The first is already normalized, so with it we do nothing. For the second, starting from the raw column $\mathbf{a}_2 = (0, 1, 1)^T$,
$$\tilde{\mathbf{q}}_2 = \mathbf{a}_2 - (\mathbf{a}_2^T \hat{\mathbf{q}}_1)\hat{\mathbf{q}}_1 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} - \frac{6}{21}\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix} = -\frac{1}{7}\begin{pmatrix} 2 \\ -3 \\ 1 \end{pmatrix}, \qquad (13.42)$$
so
$$\hat{\mathbf{q}}_2 = -\frac{1}{\sqrt{14}}\begin{pmatrix} 2 \\ -3 \\ 1 \end{pmatrix}.$$

(e) Let's reduce $A^T$ to its V. With
$$F_1 = \begin{pmatrix} 1&0&0&0&0&0 \\ 0&1&0&0&0&0 \\ 1&0&1&0&0&0 \\ -2&0&0&1&0&0 \\ -1&0&0&0&1&0 \\ -2&0&0&0&0&1 \end{pmatrix} \quad\text{and}\quad F_2 = \begin{pmatrix} 1&0&0&0&0&0 \\ 0&1&0&0&0&0 \\ 0&-1&1&0&0&0 \\ 0&4&0&1&0&0 \\ 0&1&0&0&1&0 \\ 0&3&0&0&0&1 \end{pmatrix},$$
$$V = F_2 F_1 A^T = \begin{pmatrix} 1 & 2 & 4 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
so
$$\mathrm{basis}[R(A^T)] = \left\{ \frac{1}{\sqrt{11}}\begin{pmatrix} 1 \\ 0 \\ -1 \\ 2 \\ 1 \\ 2 \end{pmatrix},\ \begin{pmatrix} 2 \\ 1 \\ -1 \\ 0 \\ 1 \\ 1 \end{pmatrix} \right\}.$$
Since V has nonzero pivots in columns 1 and 2, A's row to replace is the third. That third row is twice row 1 plus row 2. We have an infinite number of ways to make that not so. It would be most convenient, but by no means unique, to make the change in $a_{33}$ so as to get a pivot right away. So any $a_{33} \neq 2a_{13} + a_{23}$ will do.
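The elimination of (b) and the Gram-Schmidt step of (d) take only a few lines to verify numerically. A sketch, assuming the A and elementary operations reported above:

```matlab
A  = [1 0 -1 2 1 2; 2 1 -1 0 1 1; 4 1 -3 4 3 5];
E1 = [1 0 0; -2 1 0; -4 0 1];   % row2 -= 2*row1, row3 -= 4*row1
E2 = [1 0 0;  0 1 0;  0 -1 1];  % row3 -= row2
U  = E2*E1*A                     % echelon form; the last row vanishes

% Gram-Schmidt on the two pivot columns of A
a1 = A(:,1);  a2 = A(:,2);
q1 = a1/norm(a1);                % = (1, 2, 4)'/sqrt(21)
q2 = a2 - (a2'*q1)*q1;           % remove the q1 component, eq. (13.42)
q2 = q2/norm(q2)                 % = -(2, -3, 1)'/sqrt(14)
```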
(f) Revisiting U, recalling that null space vectors are ones satisfying $A\mathbf{n} = \mathbf{0} \in \mathbb{R}^M$, and assuming the general structure $\mathbf{n} = (a_1, a_2, a_3, a_4, a_5, a_6)^T$, the third row is obviously not helpful, so we move to the second, which states that $a_2 = -a_3 + 4a_4 + a_5 + 3a_6$, and the top row, which states that $a_1 = a_3 - 2a_4 - a_5 - 2a_6$. Then
$$\mathbf{n} = \begin{pmatrix} a_3 - 2a_4 - a_5 - 2a_6 \\ -a_3 + 4a_4 + a_5 + 3a_6 \\ a_3 \\ a_4 \\ a_5 \\ a_6 \end{pmatrix} = a_3\begin{pmatrix} 1 \\ -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + a_4\begin{pmatrix} -2 \\ 4 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} + a_5\begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} + a_6\begin{pmatrix} -2 \\ 3 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \qquad (13.43)$$
so a basis for A's null space, neither normalized nor orthogonal, comprises the four vectors on the right of (13.43).

(g) The system Ax = b is equivalent to $U\mathbf{x} = \mathbf{d} := E_2 E_1 \mathbf{b}$. Obtaining this modified right-hand side,
$$\mathbf{d} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix},$$
is reassuring because the third element vanishes; a solution exists. From the first and second rows of Ux = d we get
$$a_1 = 2 + a_3 - 2a_4 - a_5 - 2a_6, \qquad a_2 = 1 - a_3 + 4a_4 + a_5 + 3a_6,$$
and thus $\mathbf{x} = \mathbf{x}_p + \mathbf{x}_h$, where
$$\mathbf{x}_p = (2, 1, 0, 0, 0, 0)^T$$
and
$$\mathbf{x}_h = a_3(1,-1,1,0,0,0)^T + a_4(-2,4,0,1,0,0)^T + a_5(-1,1,0,0,1,0)^T + a_6(-2,3,0,0,0,1)^T. \qquad (13.44)$$
Note the general form of the solution. The particular solution, which is unique, is fully determined by the right-hand-side vector b. All the indeterminacy is collected into the entirely unconstrained coefficients of the null space vectors, which together make up the homogeneous part of the solution.
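The split $\mathbf{x} = \mathbf{x}_p + \mathbf{x}_h$ of (g) is easy to sanity-check numerically: any coefficients on the null-space basis must leave Ax = b intact. A sketch, assuming the basis of (13.43):

```matlab
A  = [1 0 -1 2 1 2; 2 1 -1 0 1 1; 4 1 -3 4 3 5];
b  = [2; 5; 9];
xp = [2; 1; 0; 0; 0; 0];             % particular solution
Nb = [ 1 -2 -1 -2;                   % null-space basis; columns carry
      -1  4  1  3;                   % the coefficients a3, a4, a5, a6
       1  0  0  0;
       0  1  0  0;
       0  0  1  0;
       0  0  0  1];
for k = 1:5
  x = xp + Nb*randn(4, 1);           % arbitrary homogeneous part
  assert(norm(A*x - b) < 1e-10);     % still solves Ax = b
end
disp('general solution verified');
```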
Yes, again Here’s how, assuming, for simplicity, that only a3 and a4 are nonzero, and also that a4 = 1 In this case, The general form of the solution’s squared magnitude is x = (2 + a3 − 2)2 + (1− a3 + 4)2 + a2 + = 3a2 − 5a3 + 25 3 For simplicity of notation, let’s now call x1’s a3 a, and x2’s a3 b and note that x 2/ x = : x 3a2 − 5a + 25 = = x 2 3b − 5b + 25 Let’s set b = 1, with which x 3a2 − 5a + 25 3a2 − 5a + 25 = = =4 − + 25 23 x2 3a2 − 5a − 67 = Since this has two real roots (+5.6 and +-3.96), we are set (i) 1− m 1− m = −m _1− m i2 = 0, 0 −m so 310  •  Chapter 13 Z ] 1, i = ] m i = [ 1, i = 2, ] 0, i = ] \ is a repeated eigenvalue with an algebraic multiplicity of 2; would its geometric multiplicity suffice? Let’s see, solving (for m1, 2 = 1) J1 1N K O = K = e Be 1 O e1 K O L0 0P From the third row it is clear that e13 = 0, and from the first row, e 11 + e 13 = e 11 = e 11 , which is identically true The second row is also identically satisfied for any e12 Since neither e11 nor e12 are constrained and e13 = 0, the general form of the eigenvectors corresponding to m = 1 is Je 11N K O e = Ke 12O K O L 0P Let’s set either one once to and once to zero, yielding J0N J1N K O K O t K0O, t K1O, = e = e K O K O L0P L0P so, yes, the geometric multiplicity is as high as it needs to be (as high as the algebraic multiplicity) Good   What about m3 = 0? Clearly, in this case e32 = 0, but e33 is unconstrained (because now the zero eigenvalue guarantees the third row’s vanishing) and, from the first row, e31 = -e33 Therefore, J 1N J e 31 N K O K O t e3 = K O ( e3 = K O 2K O K−e O L−1 P L 31P (j) The answer to this is not unique Subjectively, I think it makes the most sense to split R3 into those vectors killed by pre multiplication by B, {f : Bf = 0 ! R3}, and a remainder, {g : Bg ! ! R3} Of course, if this is t t our choice of a split, then f = ae because m3 = 0, i.e., e = span[N(B)] (k) B’s rank is q = 2 This is revealed by the nonzero pivots in U (not that we got it, but in general) and the nonzero eigenvalues This is the only relationship between eigenanalysis and Gaussian elimination that I know of Suggested Homework  •  311 13.8  A Suggested Final Exam (a) Write down the governing equation of the SVD operation Explain what each term is and point out the dimensions and nature of each participant (b) Write the above relations for individual modes Be sure to include the range of modes over which the modal representation applies (c) From the above relations, derive the biorthogonality conditions relating the left and right singular vectors (d) Show that the SVD representation of a matrix is a spectral representation (e) Explain why the left and right eigenvalue problems share one set of eigenvalues What are the conditions that set satisfies? (f) Describe symbolically R of a rank 1, 3 # 2 matrix Point out R’s parts corresponding to the null spaces in the model and data spaces (g) Write down the covariance matrix of a 2 # N matrix whose rows are N-element, unit norm time series of perfect sine waves, with the top and bottom series comprising exactly and full waves What part(s) of the original matrix’s SVD can you construct numerically with this information? Write down these parts (h) How would you get symbolically the remainder of that matrix’s SVD? What does it represent? What is its general form? (i) What type of matrix lends itself best to SVD compression? How does it work? At what (accuracy) cost? (j) What can you say about a time series whose acf is given by (1, 0, -1, 0, 1, 0, . . .)? 
13.8 A Suggested Final Exam

(a) Write down the governing equation of the SVD operation. Explain what each term is and point out the dimensions and nature of each participant.
(b) Write the above relations for individual modes. Be sure to include the range of modes over which the modal representation applies.
(c) From the above relations, derive the biorthogonality conditions relating the left and right singular vectors.
(d) Show that the SVD representation of a matrix is a spectral representation.
(e) Explain why the left and right eigenvalue problems share one set of eigenvalues. What are the conditions that set satisfies?
(f) Describe symbolically the Σ of a rank-1, 3 × 2 matrix. Point out Σ's parts corresponding to the null spaces in the model and data spaces.
(g) Write down the covariance matrix of a 2 × N matrix whose rows are N-element, unit-norm time series of perfect sine waves, with the top and bottom series comprising different whole numbers of full waves. What part(s) of the original matrix's SVD can you construct numerically with this information? Write down these parts.
(h) How would you get symbolically the remainder of that matrix's SVD? What does it represent? What is its general form?
(i) What type of matrix lends itself best to SVD compression? How does it work? At what (accuracy) cost?
(j) What can you say about a time series whose acf is given by (1, 0, −1, 0, 1, 0, …)? At what resolution is it sampled?
(k) What can you say about a time series whose acf is given by (1, 0, 0, 0, …)?
(l) What can you say about a weekly resolution time series whose acf is given by (1, 0.65, 0.38, 0.21, 0.09, 0.01, …)?
(m) Consider a time series $y_i$, $i = [1, 100]$, measured at $t_i$, $i = [1, 100]$. Suppose you have a reason to believe y grows exponentially in time. Set up the system that will let you choose the parameters of this growth optimally. Describe and explain the parameters.
(n) Describe how you'd solve the optimization problem. How do the coefficient matrix's fundamental spaces figure into this?
(o) Describe the general split of the right-hand-side vector (b in most of our class discussions). How does each of b's pieces affect the solution?
(p) Recast the above solution of the optimization problem in terms of the coefficient matrix's SVD.
(q) How would you obtain a solution to the above problem for the case of a rank-deficient coefficient matrix? What does this procedure amount to?
(r) Write down the general procedure of obtaining the empirical orthogonal functions of an M × N A.

[Figure 13.1. Hypothetical EOFs addressed in the exam: three panels, a–c, each plotting a value y between roughly −0.1 and 0.1 against x from 0 to 25, with panel a showing EOF 1.]

(s) What can you say about a matrix whose three leading EOFs are the fields shown in fig. 13.1 (with panel a showing EOF 1)? What else can you say given $\lambda_1 = 94$, $\lambda_2 = 8$, $\lambda_3 = 0.24$, and $\sum_{i=1}^{N}\lambda_i = 112$?
(t) Describe symbolically the preliminary transformation of a data set depending on two space coordinates and time that will allow you to obtain the data set's EOFs.
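Question (g) rewards a quick numerical experiment. A sketch, with wave counts of one and three standing in for the exam's two counts (any two distinct whole numbers behave the same way):

```matlab
N  = 1000;  t = (0:N-1)/N;               % N samples of one record
x1 = sin(2*pi*1*t);  x1 = x1/norm(x1);   % one full wave, unit norm
x2 = sin(2*pi*3*t);  x2 = x2/norm(x2);   % three full waves, unit norm
X  = [x1; x2];                           % the 2 x N data matrix

C = X*X'                                 % the 2 x 2 identity: rows orthonormal
svd(X)'                                  % both singular values equal 1
```

Because the rows are orthonormal, $XX^T = I$: the covariance matrix pins down both singular values (each equal to 1), while the degenerate pair leaves the left singular vectors determined only up to a rotation; the right singular vectors — the remainder asked about in (h) — are, up to that same rotation, the sinusoids themselves.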
Index

addition and subtraction field axioms: associativity; commutativity; distributivity; identity; inverses (see also inverse problem; "inversion" of singular problems)
Africa
agronomy
algebra: algebra texts. See also linear algebra
Amazonia
Andes Mountains
anomalies: annual mean anomalies; annual vs. monthly anomalies; in climate work; equatorial anomalies; forming anomalies and removing time mean; latitude (zonal) anomalies; moisture anomalies; North Atlantic sea level surface pressure anomalies; positive anomalies; temperature anomalies
AR. See autoregression (AR)
ARPACK package
Asia (southeast)
Australia
autocorrelation function (acf): acf-derived timescale; of a decaying periodic signal; dependence of on sampling rate; key role of; partial autocorrelation
autoregression (AR): AR identification; direct inversion for; estimating the noise variance; and the partial autocorrelation function (PACF); theoretical autocovariance and autocorrelation functions of AR(1) and AR(2); and the Yule-Walker equations
browser downloads, and information packets
Cane, Mark A.
Cantrell, C. D.
Cartesian basis set
central limit theorem (CLT)
climate dynamics
climatic variables, near surface
covariability matrix: the correlation matrix; the covariance matrix
cumulative distribution function (CDF)
data analysis: as a branch of statistical science; empirical data analysis; grayness of; as the principal tool of multidimensional data analysis (the covariance matrix); reasons for analyzing data. See also eigenanalysis
data matrix structure convention: reshaping multidimensional data sets for EOF analysis
data sets, deterministic parts of
decomposition. See eigen-decomposition; singular value decomposition (SVD)
degrees of freedom (df)
distributions: black distributions; bounded distributions; and F; probability distributions; theoretical distributions; zero mean distributions. See also normal distributions
earth science
eigenanalysis (calculation of eigenvalues and eigenvectors): background; introduction to in the context of a 2-species system. See also spectral representation
eigen-decomposition: as spectral representation
Einstein and the Generations of Science (Feuer)
El Niño: dynamics of; El Niño-Southern Oscillation (ENSO); of the Equatorial Pacific
empirical orthogonal functions (EOFs): analysis of using SVD and economy considerations; calculating EOFs; displaying EOFs; dynamical systems view of extended EOFs (EEOFs); EOF analysis of southern New York State land surface temperatures; EOFs as a prefilter to SVD; extended EOF (EEOF) analysis; filling in missing values; and missing values; and Monte Carlo simulations; real data example; removing time means of individual time series; and "stacked" variables; synthetic analyses of. See also anomalies; covariability matrix; data matrix structure convention; projection time series, principal components
empirical/theoretical science, complementary relationship of
empiricism
Equatorial Pacific, near surface air temperature of
Euclidean norm
extended empirical orthogonal functions (EEOFs). See empirical orthogonal functions (EOFs)
Farrell, B. F.
Feuer, L. S.
fields: the real line (R)
Fisher z-transform
Flannery, B. P.
fluid turbulence
force/acceleration, functional relationship between (using the matrix approach)
forcing processes
forward problem
Fourier basis
Fourier fit
Fourier matrix
Fourier vectors
fundamental theorem of linear algebra
"Fundamental Theorem of Linear Algebra, The" (Strang)
Gaussian distribution
Gaussian elimination: and elementary operations
Gaussian noise
"Generalized Stability Theory, Part I: Autonomous Operators" (Farrell and Ioannou)
geographical areas: Africa; Amazonia; Andes Mountains; Asia (southeast); Australia; Equatorial Pacific; Gulf of Mexico; Himalayas, the; Mauna Loa (Hawaii); Middle East; Peru; South America; Tibetan plateau
Gram-Schmidt orthogonalization
gravity
Gulf of Mexico
Himalayas, the
homework: for Chapter 3; for Chapter 4; for Chapter 5; for Chapter 8; on complementary sets; on orthonormalization; suggested final examination; suggested midterm examination
How to Think About Statistics (Phillips)
Hurrell, Jim
income/body weight relationship
"Initial Growth of Disturbances in a Baroclinic Flow, The" (Farrell)
inner product
inverse problem
"inversion" of singular problems: formal view; informal view
Ioannou, P. J.
jackknife estimate
Kragh, H.
Kronecker delta
least squares, SVD view of: and the choice of spectral truncation point; "inversion" of singular problems (formal view); "inversion" of singular problems (informal view); overdetermined least squares; rank-deficient least squares; underdetermined least squares; well-posed problems
Legendre, Hermite, and Hough functions
linear algebra: fundamental theorem of. See also linear algebraic equation systems, application of fundamental vector spaces to
linear algebraic equation systems, application of fundamental vector spaces to
linear Ax = b setup: generality of; and the inverse problem
linear operators, spectra of. See also ocean surface patterns problem
linear system Ax = b: e (error vector) and weighted, dual minimization formalism; minimizing e using calculus; minimizing e using geometry; rank-deficient linear Ax = b; the system as overdetermined; the system as underdetermined. See also linear systems, special problems giving rise to
linear systems, special problems giving rise to: identifying or removing a trend; regression vs. interpolation. See also AR identification; Fourier fit
Lotka-Volterra dynamical system
Maclaurin expansion
Matlab
matrices: column-wise representation of; the condition number of a matrix; fundamental spaces associated with $A \in \mathbb{R}^{M \times N}$; matrix addition; matrix outer product; matrix product; matrix rank (q); matrix variables; the M × N matrix; numerical considerations in determining q; rectangular diagonal matrix (M > N); rectangular diagonal matrix (M < N)
multiple regression, succinct protocol of
multiple species dynamics
NCAR-NCEP Reanalysis
Newton, Isaac
noise: data noise; estimating the noise variance of an AR process; meaningless noise; noise contamination; noise variance; pure white noise; random noise; unstructured noise. See also Gaussian noise
normal distribution: non-normal (non-self-adjoint) operators
North Atlantic surface pressure anomalies
null hypothesis
"numerical forgiveness"
Numerical Recipes in Fortran: The Art of Scientific Computing (Press, Teukolsky, Vetterling, and Flannery)
Occam's razor
ocean surface patterns problem: and the "piling up" of fresh water; and surface displacement
Octave
orthogonality
orthonormal sets
outer product
parsimony principle
persistence (null model skill)
Peru
Phillips, John L.
physics
pivots
Press, W. H.
probability density function (PDF): estimation of
probability distributions
projection: projection coefficients; projection matrices. See also projection time series, principal components
projection time series, principal components
product: dot product; inner product; matrix product; outer product; representation product
Pythagorean theorem
Quantum Generations: A History of Physics in the Twentieth Century (Kragh)
quantum mechanics
rank
rank-nullity theorem
regression: definition of; and "fitting"; as interpolation; multiple regression (see also multiple regression, succinct protocol of). See also linear system Ax = b; multidimensional regression, and linear model identification; regression analysis, statistical issues in
regression analysis, statistical issues in: confidence intervals with multiple predictors; correlation significance; using the Fisher z-transform; using the t test; variance of the regression parameters. See also multiple regression, succinct protocol of
relativity
Runge-Kutta method
Sahara, the
Sarachik, Edward S.
Scandinavia vs. Labrador seesaw
sea surface temperature anomalies (SSTAs): in the Equatorial Pacific; in the North Atlantic
sea surface temperatures (SSTs)
seasonality
signals: noisy signals; and seasonality
significance tests
singular value decomposition (SVD): and data compression; examples; filtering or noise suppression (of synthetic field F); introduction to. See also least squares, SVD view of; singular value decomposition (SVD) analysis
singular value decomposition (SVD) analysis: empirical orthogonal functions (EOFs) as a prefilter to SVD; as a generalization of empirical orthogonal function (EOF) analysis; real data examples; summary of; synthetic examples
South America
southern New York State land surface temperatures, empirical orthogonal function (EOF) analysis of: and the Ashokan Reservoir; in the Catskill Mountains; along the Hudson River
"spanning"
Spectra and Pseudo-Spectra: The Behavior of Nonnormal Matrices and Operators (Trefethen)
spectral representation: eigen-decomposition as spectral representation; key importance of; the spectra of linear operators; the spectrum; utility of
spectral theorem
spectral truncation point c
spectrum
Statistical Analysis in Climate Research (von Storch and Zwiers)
statistics. See also data analysis; degrees of freedom (df); probability density function (PDF); probability distributions; significance tests; t statistic
stochastic processes: stochastic forcing
Strang, G.
Taylor, G. I.
temperature, and solar radiation
Teukolsky, S. A.
Tibetan plateau
time series: hourly mean air temperature anomalies in semi-rural New York State; memory horizon of; the North Atlantic Oscillation Index (NAOI) time series; stable time series; time series variance
traffic congestion, study of
Trefethen, L. N.
t statistic
t test
United States National Center for Atmospheric Research (NCAR)
variables: matrix variables; scalar variables; stochastic variables; vector variables
vector spaces: normed inner-product vector spaces; subspaces of; vector space rules and axioms; vector space spanning
vectors: 1-vectors (scalars); 2-vectors; 3-vectors; basis vectors; the inner product of two vectors; and linear independence; the norm of a vector; and orthogonality; and projection; unit vectors; vector addition; vector transpose; vector variables. See also vector spaces
Vetterling, W. T.
von Storch, H.
Walker, Gilbert
Warm Pool
Wyrtki, Klaus
Yule-Walker equations
Zwiers, F. W.
