
System Modeling and Identification - Random Process 2

Determine the inputs and outputs of the system to be identified ⇒ this fixes the "excitation" signal to use in the data-collection experiment and the sensor locations for measuring the output signal. Choose the signal

Lecture 2: Stochastic Processes

2.1 Introduction

Spectral analysis is the study of models of a class called stationary stochastic processes. A stochastic process $\{X(t) : t \in T\}$ is a family of random variables indexed by a variable $t$ ranging over an index set $T$, which may be infinite: continuous $\to X(t)$, discrete $\to X_t$. The random variables may be real, vector-valued, or complex, with suitable added indices.

We will use the Riemann-Stieltjes notation in what follows, because mixed continuous-discrete distributions are common for time series, and hence the R-S notation is standard in stochastic theory. Let $g(x)$ and $H(x)$ be real-valued functions on $[L, U]$, where $L < U$ and $L, U$ may be $-\infty, \infty$ with suitable limiting processes. Let $P_N$ be a partition of $[L, U]$ into $N$ intervals,
$$L = x_0 < x_1 < \cdots < x_N = U,$$
and define the mesh fineness
$$|P_N| = \max\{x_1 - x_0,\, x_2 - x_1,\, \dots,\, x_N - x_{N-1}\}.$$
Then
$$\int_L^U g(x)\,dH(x) = \lim_{|P_N| \to 0} \sum_{j=1}^N g(x'_j)\,[H(x_j) - H(x_{j-1})],$$
where $x'_j \in [x_{j-1}, x_j]$. There are three cases:

1. If $H(x) = x$, then we have the Riemann integral $\int_L^U g(x)\,dx$.

2. If $H(x)$ is continuously differentiable on $[L, U]$ with $h(x) = \partial_x H(x)$, then
$$\int_L^U g(x)\,dH(x) = \int_L^U g(x)\,h(x)\,dx.$$

3. If $H(x)$ undergoes step changes of size $b_i$ at the points $a_i$ on $[L, U]$, so that near each jump
$$H(x) = c_i \quad (x < a_i), \qquad H(x) = c_i + b_i \quad (x \ge a_i),$$
then
$$\int_L^U g(x)\,dH(x) = \sum_i b_i\, g(a_i).$$

Example. For a continuous process we have $f(x) = \partial_x F(x)$, and hence
$$E[X] = \int_{-\infty}^{\infty} x\,dF(x) = \int_{-\infty}^{\infty} x f(x)\,dx.$$

Example. For a discrete process where the cdf $F(x)$ undergoes discrete jumps of size $1/N$ at a set of values $\{x_i\}$:
$$E[X] = \int_{-\infty}^{\infty} x\,dF(x) = \frac{1}{N}\sum_{i=1}^{N} x_i.$$

Example. For a fixed value of $t$, $X_t$ is an rv and hence has a cdf,
$$F_t(a) = P[X_t \le a],$$
with
$$E[X_t] = \int_{-\infty}^{\infty} x\,dF_t(x) = \mu_t, \qquad \mathrm{var}[X_t] = \int_{-\infty}^{\infty} (x - \mu_t)^2\,dF_t(x) = \sigma_t^2.$$
Note that the statistics become time dependent.

We may also need higher-order cdf's, such as the bivariate cdf for two times,
$$F_{t_1,t_2}(a_1, a_2) = P[X_{t_1} \le a_1,\ X_{t_2} \le a_2],$$
and the $N$-dimensional generalization
$$F_{t_1,\dots,t_N}(a_1, \dots, a_N) = P[X_{t_1} \le a_1,\ \dots,\ X_{t_N} \le a_N].$$
The set of cdf's from $F_t$ to $F_{t_1,\dots,t_N}$ is a complete description of the stochastic process if we know them for all $t$ and $N$. However, the result is a mess, and the distributions are unknowable in practice.

We can start to narrow this down by considering stationary processes: those whose statistical properties are independent of time, or physical systems in steady state. If $\{X_t\}$ is the result of a stationary process, then each element must have the same cdf, so $F_t(x) \to F(x)$. Any pair of elements in $\{X_t\}$ must have the same bivariate distribution, and so on. In summary, the joint cdf of $\{X_t\}$ for a set of $N$ time points $\{t_i\}$ must be unaltered by time shifts.

There are several cases of stationarity:

Complete stationarity. If the joint cdf of $\{X_{t_1}, \dots, X_{t_N}\}$ is identical to that of $\{X_{t_{k+1}}, \dots, X_{t_{k+N}}\}$ for any $k$, then the process is completely stationary: all of the statistical structure is unchanged under shifts in the time origin. This is a severe requirement and is rarely establishable in practice.

Stationarity of order 1. $E[X_t] = \mu$ for all $t$. No other stationarity is implied.

Stationarity of order 2. $E[X_t] = \mu$ and $E[X_t^2] = \mu_2$, so that the mean and the variance are time independent, and $E[X_t X_s]$ is a function of $|t - s|$ only, hence $\mathrm{cov}[X_t, X_s]$ is a function of $|t - s|$ only. This class is called weakly stationary or second-order stationary, and is the most important type of stochastic process for our purposes.

For a second-order stationary process, we define the autocovariance sequence (acvs) by
$$S_\tau = \mathrm{cov}[X_t, X_{t+\tau}] = \mathrm{cov}[X_0, X_\tau].$$
This is a measure of the covariance between members of the process separated by $\tau$ time units; $\tau$ is called the lag. We would expect $S_\tau$ to be largest at $\tau = 0$ and to be symmetric about the origin.
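The Riemann-Stieltjes cases above are easy to check numerically. The following is a minimal sketch (Python with NumPy; the helper name rs_integral, the choices of $g$ and $H$, and the partition size are illustrative assumptions, not part of the notes) that approximates $\int g\,dH$ on a fine uniform partition and compares it with the closed forms for the smooth case and the pure-step case.

```python
import numpy as np

def rs_integral(g, H, L, U, n=200_000):
    """Approximate the Riemann-Stieltjes integral of g dH on [L, U]
    using a uniform partition and left endpoints x'_j = x_{j-1}."""
    x = np.linspace(L, U, n + 1)
    return np.sum(g(x[:-1]) * np.diff(H(x)))

g = lambda x: x**2

# Case 2: H continuously differentiable, H(x) = x**3, so h(x) = 3x**2
# and the integral of g dH = integral of 3x**4 dx on [0, 1] = 3/5.
smooth = rs_integral(g, lambda x: x**3, 0.0, 1.0)
print(smooth, "vs exact", 3 / 5)

# Case 3: H is a step function with jumps b_i at points a_i, so the
# integral collapses to sum_i b_i * g(a_i).
a = np.array([0.25, 0.5, 0.75])
b = np.array([1.0, 2.0, 0.5])
H_step = lambda x: np.sum(b * (x[:, None] >= a), axis=1)
step = rs_integral(g, H_step, 0.0, 1.0)
print(step, "vs exact", np.sum(b * g(a)))
```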
2.2 Properties of the acvs

1. $S_0 = \sigma^2$.
2. $S_{-\tau} = S_\tau$ (an even function).
3. $|S_\tau| \le S_0$ for $\tau > 0$.
4. $S_\tau$ is positive semidefinite:
$$\sum_{j=1}^{N} \sum_{k=1}^{N} S_{t_j - t_k}\, a_j a_k \ge 0 \quad \text{for any } \{a_1, \dots, a_N\} \in \mathbb{R}^N,$$
or, in matrix form, $\vec a^{\,T} \Sigma\, \vec a \ge 0$, where $\Sigma$ is the covariance matrix.

The autocorrelation sequence is the acvs normalized to $S_0$,
$$\rho_\tau = \frac{S_\tau}{S_0},$$
and has the properties:

1. $\rho_0 = 1$.
2. $\rho_{-\tau} = \rho_\tau$ for $\tau > 0$.
3. $|\rho_\tau| \le 1$ for $\tau > 0$.
4. $\rho_\tau$ is positive semidefinite.

Note that a completely stationary process is also second-order stationary, but second-order stationarity does not imply complete stationarity. However, if the process is Gaussian (i.e., the joint cdf's of the rv's are multivariate normal), then second-order stationarity does imply complete stationarity, because a Gaussian distribution is completely specified by its first and second moments.

All of this machinery extends to complex processes. Let $Z_t = X_{t,1} + i X_{t,2}$. This is second-order stationary if all of the joint first- and second-order moments of $X_{t,1}$ and $X_{t,2}$ exist, are finite, and are invariant to shifts in time. This implies that $X_{t,1}$ and $X_{t,2}$ are themselves second-order stationary. We have
$$E[Z_t] = \mu_1 + i\mu_2 = \mu,$$
$$\mathrm{cov}[Z_{t_1}, Z_{t_2}] = E[(Z_{t_1} - \mu)^* (Z_{t_2} - \mu)] = S_\tau,$$
and hence $S_{-\tau} = S_\tau^*$ for a complex process.

2.3 Examples of stationary processes

Let $\{X_t\}$ be a sequence of uncorrelated rv's such that
$$E[X_t] = \mu, \qquad \mathrm{var}[X_t] = \sigma^2, \qquad \mathrm{cov}[X_t, X_{t+\tau}] = 0 \ \ (\tau \ne 0),$$
where the last property follows from uncorrelatedness. Then $\{X_t\}$ is stationary with acvs
$$S_\tau = \begin{cases} \sigma^2, & \tau = 0; \\ 0, & \tau \ne 0. \end{cases}$$
Note that a sequence of uncorrelated rv's is not necessarily independent, but independence does imply uncorrelatedness. Independence implies that the joint cdf may be factored into the product of individual cdf's, and we have not applied this condition. The exception to these statements is a Gaussian process, where uncorrelatedness does imply independence. A random or white noise process is a process without memory: one datum does not depend on any other.

2.4 First order autoregressive process

Example. Consider a particle of unit mass moving in a straight line and subject to a random force. Let $X_t$ denote the particle velocity at time $t$ and $\epsilon_t$ the random force per unit mass acting on it. If the resistive force is proportional to velocity, then from Newton's laws
$$\dot X_t = \epsilon_t - \alpha X_t.$$
Approximating $\dot X_t \approx X_t - X_{t-1}$, we have
$$X_t = \frac{\epsilon_t + X_{t-1}}{1 + \alpha} = \epsilon'_t + \alpha' X_{t-1}.$$
This is a first-order AR process, in which the value of the rv at time $t$ depends on that at time $t-1$ but not at earlier times:
$$X_t - a X_{t-1} = \epsilon_t,$$
where $a$ is a constant and $\{\epsilon_t\}$ is random. This is analogous to linear regression, with $X_t$ depending linearly on $X_{t-1}$ and $\epsilon_t$ being the residual, hence the term "autoregressive".

The difference equation can be solved assuming $X_0 = 0$, yielding
$$X_t = \epsilon_t + a\epsilon_{t-1} + a^2\epsilon_{t-2} + \cdots + a^{t-1}\epsilon_1.$$
If $E[\epsilon_t] = \mu$, then
$$E[X_t] = \mu(1 + a + \cdots + a^{t-1}) = \begin{cases} \mu\left(\dfrac{1-a^t}{1-a}\right), & a \ne 1; \\ \mu t, & a = 1. \end{cases}$$
If $\mu = 0$, this vanishes and $X_t$ is first-order stationary; otherwise it is not. However, if $|a| < 1$, then
$$E[X_t] \to \frac{\mu}{1-a} \quad (t \to \infty),$$
and hence $X_t$ is asymptotically first-order stationary. If $\mathrm{var}[\epsilon_t] = \sigma^2$ and $\mathrm{cov}[\epsilon_t, \epsilon_s] = 0$ for $t \ne s$, we have
$$\mathrm{var}[X_t] = \begin{cases} \sigma^2\left(\dfrac{1-a^{2t}}{1-a^2}\right), & |a| \ne 1; \\ \sigma^2 t, & |a| = 1, \end{cases}
\qquad
\mathrm{cov}[X_t, X_{t+r}] = \begin{cases} \sigma^2 a^r \left(\dfrac{1-a^{2t}}{1-a^2}\right), & |a| \ne 1; \\ \sigma^2 t, & |a| = 1. \end{cases}$$
This is not second-order stationary unless $\sigma^2 = 0$, but it is asymptotically so if $|a| < 1$.

[Figure: the acvs $S_\tau$ of the AR(1) process plotted against lag $\tau$.]
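As an illustration of the asymptotic stationarity just derived, the sketch below (Python/NumPy; the values $a = 0.7$, $\sigma = 1$ and the series length are arbitrary demonstration choices) simulates a long AR(1) realization and compares time-averaged autocovariances with the large-$t$ limit $S_\tau = \sigma^2 a^{|\tau|}/(1 - a^2)$ implied by the covariance formula above.

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma, n = 0.7, 1.0, 200_000   # assumed example values, |a| < 1

# Simulate X_t = a X_{t-1} + eps_t with X_0 = 0; discard a warm-up
# segment so the remaining samples are effectively stationary.
eps = rng.normal(0.0, sigma, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + eps[t]
x = x[1000:]

def acvs(x, tau):
    """Time-average estimate of S_tau from a single realization."""
    xm = x - x.mean()
    return np.mean(xm[: len(xm) - tau] * xm[tau:])

for tau in range(5):
    theory = sigma**2 * a**tau / (1 - a**2)
    print(tau, round(acvs(x, tau), 3), round(theory, 3))
```

The sample values should approach the theoretical ones, decaying geometrically with lag but never exactly reaching zero, which anticipates the AR result of the next section.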
The AR process generalizes easily to order $p$:
$$X_t + a_1 X_{t-1} + \cdots + a_p X_{t-p} = \epsilon_t.$$
Let $z$ denote the unit delay operator. Then
$$(1 + a_1 z + \cdots + a_p z^p)\, X_t = \epsilon_t.$$
This is asymptotically stationary if the roots of the $z$ polynomial lie outside the unit circle (consistent with the AR(1) condition $|a| < 1$ above).

An AR process is a finite linear combination of its past values and the current value of a random process. The present noise value $\epsilon_t$ is drawn into the process and hence influences the present and all future values $X_t, X_{t+1}, \dots$ This can be shown by recursively solving the AR($p$) equation:
$$X_t = \sum_{j=0}^{\infty} \theta_j \epsilon_{t-j}, \quad \text{with } \theta_0 = 1.$$
This shows why the acvs for an AR process dies out gradually with lag and never reaches zero.

2.5 Moving Average Process

An MA process is a linear combination of present and past values of a noise process, with a finite extent:
$$X_t = b_0 \epsilon_t + b_1 \epsilon_{t-1} + \cdots + b_p \epsilon_{t-p}.$$
A given noise term $\epsilon_t$ influences only $p$ future values of $X$, and hence the acvs for an MA process will vanish beyond some finite value of lag:
$$\mathrm{cov}[X_t, X_{t+\tau}] = \sum_{j=0}^{p} \sum_{k=0}^{p} b_j b_k\, E[\epsilon_{t-j} \epsilon_{t+\tau-k}] = \sigma^2 \sum_{j=0}^{p-\tau} b_j b_{j+\tau},$$
where $\mathrm{var}[\epsilon_t] = \sigma^2$. Since $\mathrm{cov}[X_t, X_{t-\tau}] = \mathrm{cov}[X_t, X_{t+\tau}]$, an MA process is stationary with acvs
$$S_\tau = \begin{cases} \sigma^2 \sum_{j=0}^{p-|\tau|} b_j b_{j+|\tau|}, & |\tau| \le p; \\ 0, & |\tau| > p. \end{cases} \qquad (2.1)$$
There are no restrictions on the size of the $b_j$.

It can be shown that an AR($p$) process is equivalent to an infinite-order MA process, and vice versa. Mixed AR + MA processes, called ARMA processes, also exist. Spectral estimators exist which are based on AR, MA and ARMA models. These are called parametric estimators because their result depends on the model, i.e., AR of order $p$, etc. AR models are also called maximum entropy (MEM) models. None of these work satisfactorily with geophysical data except in pathological cases. The problem is that no test exists to determine which model or what order is appropriate, and failure to use the correct model or order gives wildly wrong answers.

Note: the MATLAB online help states that the parametric methods give better results for the estimation of the spectrum. That claim is based on an example shown there which is itself generated by an AR model; logically, the parametric methods will do better in that case than the non-parametric ones. More nonsense has been written about the superior resolving power of AR or MEM than about anything else in geophysics (see any issue of JGR in the 1970s). As an example, see figure (2.1).

[Figure 2.1: Examples of AR (top), MA (middle) and ARMA (bottom) models (solid lines) and their approximation by AR, MA and ARMA models. Note how, without any knowledge of the process, these parametric methods fail to recover the real spectrum.]

2.6 Ergodic Property

Estimation of the mean or acvs using observations from a single realization is based on replacing ensemble averages with time averages. Estimates which "converge" under this interchange are called ergodic. The ergodic assumption is typically applied without justification in all of spectral analysis.
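Tying sections 2.5 and 2.6 together, here is a short sketch (Python/NumPy; the MA(2) coefficients and the helper name acvs_hat are illustrative assumptions) that forms the time-average estimate of the acvs from a single realization, as the ergodic property licenses, and compares it with eq. (2.1): the estimate should track $\sigma^2 \sum_j b_j b_{j+|\tau|}$ for $|\tau| \le p$ and sit near zero beyond lag $p$.

```python
import numpy as np

rng = np.random.default_rng(1)
b = np.array([1.0, 0.6, -0.3])   # assumed MA(2) coefficients b_0..b_2
sigma, n = 1.0, 500_000
p = len(b) - 1

# X_t = b_0 eps_t + b_1 eps_{t-1} + b_2 eps_{t-2}
eps = rng.normal(0.0, sigma, n + p)
x = sum(b[j] * eps[p - j : n + p - j] for j in range(p + 1))

def acvs_hat(x, tau):
    """Time-average (ergodic) estimate of S_tau from one realization."""
    xm = x - x.mean()
    return np.mean(xm[: len(xm) - tau] * xm[tau:])

for tau in range(p + 3):
    # Eq. (2.1): sigma^2 * sum_j b_j b_{j+tau} for tau <= p, else 0.
    theory = sigma**2 * np.dot(b[: p + 1 - tau], b[tau:]) if tau <= p else 0.0
    print(tau, round(acvs_hat(x, tau), 4), round(theory, 4))
```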
