Advanced Econometrics
Chapter 9: AUTOCORRELATION
Nam T. Hoang, University of New England - Australia / University of Economics - HCMC - Vietnam

Autocorrelation means non-zero correlation between errors at different observations:

$E(\varepsilon_t \varepsilon_s) \neq 0$ for $t \neq s$,

which violates assumption (4), $E(\varepsilon\varepsilon') = \sigma_\varepsilon^2 I$, because the off-diagonal elements are no longer zero.

Example: $\log Q_t = \beta_1 + \beta_2 \log K_t + \beta_3 \log L_t + \varepsilon_t$, $t = 1, 2, \dots, T$. In a recession, output $Q$ falls by more than the inputs, so $\varepsilon_t < 0$; in a boom, $Q$ rises by more than the inputs, so $\varepsilon_t > 0$.

Autocorrelation, also called serial correlation, can exist in any research study in which the order of the observations has some meaning; it occurs most frequently in time-series data.

• Pure serial correlation is caused by the underlying distribution of the error term of the true specification of the equation.
• Impure serial correlation is caused by a specification error such as an omitted variable or an incorrect functional form.
• Here we study pure serial correlation.

I. PROPERTIES OF THE OLS ESTIMATOR UNDER AUTOCORRELATION

• $\hat{\beta}_{OLS}$ is still unbiased.
• $\hat{\beta}_{OLS}$ is still consistent.
• $\hat{\beta}_{OLS}$ is no longer best (efficient); it is less efficient than $\hat{\beta}_{GLS}$.
• $VarCov(\hat{\beta}_{OLS}) \neq \sigma_\varepsilon^2 (X'X)^{-1}$, so the standard errors of the $\hat{\beta}_j$'s are biased (downward) and inconsistent, because they are based on an incorrect formula.
• As a result, the t-statistics, $R^2$ and the overall F-statistic are biased upward.

II. DISTURBANCE PROCESS

For testing or treatment we need a more explicit assumption about the type of autocorrelation. The most common is the first-order autoregressive process, AR(1):

$\varepsilon_t = \rho \varepsilon_{t-1} + u_t$

where $u_t$ satisfies all classical assumptions:

$E(u_t) = 0$, $\quad E(u_t^2) = \sigma_u^2$, $\quad E(u_t u_s) = 0 \ (t \neq s)$.

$\rho$ is the coefficient of autocorrelation; $|\rho| < 1$ ensures stationarity of $\varepsilon_t$.

Covariance stationarity of $\varepsilon_t$: the mean, the variance and all autocovariances of $\varepsilon_t$ are constant. The autocovariances are

$Cov[\varepsilon_t, \varepsilon_{t-s} \mid X] = Cov[\varepsilon_{t+s}, \varepsilon_t \mid X] = \sigma^2 \Omega_{t,t-s} = \gamma_s = \gamma_{-s}$,

so $Cov[\varepsilon_t, \varepsilon_{t-s} \mid X]$ does not depend on $t$, only on $s$.

By repeated substitution:

$\varepsilon_t = \rho\varepsilon_{t-1} + u_t = \rho[\rho\varepsilon_{t-2} + u_{t-1}] + u_t = \rho^2\varepsilon_{t-2} + \rho u_{t-1} + u_t$
$\quad = \rho^2[\rho\varepsilon_{t-3} + u_{t-2}] + \rho u_{t-1} + u_t = \rho^3\varepsilon_{t-3} + \rho^2 u_{t-2} + \rho u_{t-1} + u_t$

$\varepsilon_t = \rho^n \varepsilon_{t-n} + \sum_{j=0}^{n-1} \rho^j u_{t-j}, \quad |\rho| < 1$

Letting $n \to \infty$:

$\varepsilon_t = \sum_{j=0}^{\infty} \rho^j u_{t-j}$,

so $\varepsilon_t$ is unrelated to the future values $u_{t+j}$. This is called an infinite moving average process.

Moments of $\varepsilon_t$:

$E(\varepsilon_t) = E\left[\sum_{j=0}^{\infty} \rho^j u_{t-j}\right] = \sum_{j=0}^{\infty} \rho^j E(u_{t-j}) = 0$

$Var(\varepsilon_t) = E(\varepsilon_t^2) = E[(\rho\varepsilon_{t-1} + u_t)^2] = \rho^2 E(\varepsilon_{t-1}^2) + E(u_t^2) + 2\rho E(\varepsilon_{t-1} u_t)$

$\Rightarrow \sigma_\varepsilon^2 = \rho^2 \sigma_\varepsilon^2 + \sigma_u^2 + 0 \Rightarrow \sigma_\varepsilon^2 = \dfrac{\sigma_u^2}{1-\rho^2}$

Autocovariances:

$E(\varepsilon_t \varepsilon_{t-1}) = E[(\rho\varepsilon_{t-1} + u_t)\varepsilon_{t-1}] = \rho E(\varepsilon_{t-1}^2) + E(u_t \varepsilon_{t-1}) = \rho\sigma_\varepsilon^2 + 0$

and in general $E(\varepsilon_t \varepsilon_{t-s}) = \rho^s \sigma_\varepsilon^2$.

$Corr(\varepsilon_t, \varepsilon_{t-1}) = \dfrac{Cov(\varepsilon_t, \varepsilon_{t-1})}{\sqrt{Var(\varepsilon_t) Var(\varepsilon_{t-1})}} = \dfrac{\rho\sigma_\varepsilon^2}{\sigma_\varepsilon^2} = \rho$

$Cov(\varepsilon_t, \varepsilon_{t-2}) = E(\varepsilon_t \varepsilon_{t-2}) = \rho E[\varepsilon_{t-1}\varepsilon_{t-2}] + E(u_t \varepsilon_{t-2}) = \rho^2\sigma_\varepsilon^2 + 0$

$\Rightarrow Corr(\varepsilon_t, \varepsilon_{t-2}) = \rho^2$, and in general $Corr(\varepsilon_t, \varepsilon_{t-s}) = \rho^s$.

We can use these results to construct the matrix $\Omega$ in $E(\varepsilon\varepsilon') = \sigma_\varepsilon^2 \Omega$:

$\Sigma = \sigma_\varepsilon^2 \Omega = \dfrac{\sigma_u^2}{1-\rho^2} \begin{bmatrix} 1 & \rho & \rho^2 & \cdots & \rho^{T-1} \\ \rho & 1 & \rho & \cdots & \rho^{T-2} \\ \rho^2 & \rho & 1 & \cdots & \rho^{T-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^{T-1} & \rho^{T-2} & \rho^{T-3} & \cdots & 1 \end{bmatrix}$

with $\sigma_\varepsilon^2 = \dfrac{\sigma_u^2}{1-\rho^2}$, $Cov(\varepsilon_t, \varepsilon_{t-s}) = \rho^s \sigma_\varepsilon^2$ and $Corr(\varepsilon_t, \varepsilon_{t-s}) = \rho^s$, $s = 1, 2, \dots, T-1$.
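The AR(1) variance and autocorrelation formulas above are easy to verify by simulation. The following short Python sketch is not part of the original notes; the parameter values are arbitrary. It generates an AR(1) error series and compares the sample variance and autocorrelations with $\sigma_u^2/(1-\rho^2)$ and $\rho^s$.

```python
import numpy as np

# Minimal simulation sketch (illustrative parameters): generate eps_t = rho*eps_{t-1} + u_t
# and check Var(eps) = sigma_u^2/(1-rho^2) and Corr(eps_t, eps_{t-s}) = rho^s.
rng = np.random.default_rng(0)
T, rho, sigma_u = 200_000, 0.6, 1.0

u = rng.normal(0.0, sigma_u, T)
eps = np.empty(T)
# Draw eps_0 from its stationary distribution so the whole series is stationary.
eps[0] = rng.normal(0.0, sigma_u / np.sqrt(1 - rho**2))
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + u[t]

print("Var(eps): simulated", eps.var(), " theory", sigma_u**2 / (1 - rho**2))
for s in (1, 2, 3):
    corr = np.corrcoef(eps[s:], eps[:-s])[0, 1]
    print(f"Corr(eps_t, eps_t-{s}): simulated {corr:.3f}  theory {rho**s:.3f}")
```

With a long series the simulated moments should be close to the theoretical values, which is a quick sanity check on the derivations above.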
III. ESTIMATION UNDER AUTOCORRELATION

1. Estimation with known $\rho$:

We can use the GLS estimator:

$\hat{\beta}_{GLS} = (X'\Omega^{-1}X)^{-1} X'\Omega^{-1}Y$

Equivalently, find a matrix $H$ such that $H'H = \Omega^{-1}$; then the transformed model $HY = HX\beta + H\varepsilon$ meets all classical assumptions. For AR(1) errors, choose

$H = \begin{bmatrix} \sqrt{1-\rho^2} & 0 & 0 & \cdots & 0 & 0 \\ -\rho & 1 & 0 & \cdots & 0 & 0 \\ 0 & -\rho & 1 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & -\rho & 1 \end{bmatrix}$

so that

$HY = \begin{bmatrix} \sqrt{1-\rho^2}\, Y_1 \\ Y_2 - \rho Y_1 \\ Y_3 - \rho Y_2 \\ \vdots \\ Y_T - \rho Y_{T-1} \end{bmatrix}, \quad HX = \begin{bmatrix} \sqrt{1-\rho^2}\, X_{11} & \cdots & \sqrt{1-\rho^2}\, X_{1k} \\ X_{21} - \rho X_{11} & \cdots & X_{2k} - \rho X_{1k} \\ X_{31} - \rho X_{21} & \cdots & X_{3k} - \rho X_{2k} \\ \vdots & & \vdots \\ X_{T1} - \rho X_{T-1,1} & \cdots & X_{Tk} - \rho X_{T-1,k} \end{bmatrix}, \quad H\varepsilon = \begin{bmatrix} \sqrt{1-\rho^2}\, \varepsilon_1 \\ \varepsilon_2 - \rho\varepsilon_1 \\ \varepsilon_3 - \rho\varepsilon_2 \\ \vdots \\ \varepsilon_T - \rho\varepsilon_{T-1} \end{bmatrix}$

where the first column of $X$ can be the column of ones $(1, 1, \dots, 1)'$ for the intercept.

So the transformed model is:

(i) $\sqrt{1-\rho^2}\, Y_1 = \sum_{j=1}^{k} \beta_j \left(\sqrt{1-\rho^2}\, X_{1j}\right) + \sqrt{1-\rho^2}\, \varepsilon_1$

(ii) $\underbrace{Y_t - \rho Y_{t-1}}_{Y_t^*} = \sum_{j=1}^{k} \beta_j \underbrace{(X_{tj} - \rho X_{t-1,j})}_{X_{tj}^*} + \underbrace{(\varepsilon_t - \rho\varepsilon_{t-1})}_{u_t}, \quad t = 2, 3, \dots, T$

This is also called the "autoregressive transformation", "quasi-differencing" or "rho-transformation".

Note: $\hat{\beta}_{GLS} = (X'\Omega^{-1}X)^{-1} X'\Omega^{-1}Y$.

2. Estimation with unknown $\rho$, using the Cochrane-Orcutt procedure:

(1) Estimate $Y = X\beta + \varepsilon$ by OLS and save the residuals $e_t$.
(2) Use the $e_t$'s to estimate $\rho$ from the regression $e_t = \rho e_{t-1} + u_t$:

$\hat{\rho} = \dfrac{\sum_{t=2}^{T} e_t e_{t-1}}{\sum_{t=2}^{T} e_{t-1}^2}$

(3) Transform the model as in (ii) by quasi-differencing the data and estimate (ii) by OLS. Stopping here gives the two-step Cochrane-Orcutt estimator.
(4) Use the $\hat{\beta}_j$'s from step (3) to compute new residuals algebraically from $Y = X\beta + \varepsilon$ again:

$e_t = Y_t - \sum_{j=1}^{k} \hat{\beta}_j X_{tj}$

(5) Repeat steps (2)-(4) until convergence (the $\hat{\rho}$'s at successive steps differ by less than 0.001). This is the iterative Cochrane-Orcutt procedure.

Exercise: For the AR(2) process $\varepsilon_t = \rho_1 \varepsilon_{t-1} + \rho_2 \varepsilon_{t-2} + u_t$, where $u_t$ meets the classical assumptions:
• Define the quasi-differencing that eliminates the autocorrelation.
• Spell out the iterative Cochrane-Orcutt procedure for this model.

Cochrane-Orcutt procedure for the AR(2) model:

AR(2) process: $\varepsilon_t = \rho_1 \varepsilon_{t-1} + \rho_2 \varepsilon_{t-2} + u_t$, where $u_t$ meets the classical assumptions.

Quasi-differencing:

$Y_t - \rho_1 Y_{t-1} - \rho_2 Y_{t-2} = \sum_{j=1}^{k} \beta_j (X_{tj} - \rho_1 X_{t-1,j} - \rho_2 X_{t-2,j}) + \underbrace{(\varepsilon_t - \rho_1\varepsilon_{t-1} - \rho_2\varepsilon_{t-2})}_{u_t}, \quad t = 3, 4, \dots, T$

Procedure:
(1) Estimate $Y = X\beta + \varepsilon$ by OLS and save the residuals $e_t$.
(2) Estimate $e_t = \rho_1 e_{t-1} + \rho_2 e_{t-2} + u_t$ by OLS to get $\hat{\rho}_1, \hat{\rho}_2$.
(3) Use $\hat{\rho}_1, \hat{\rho}_2$ to quasi-difference the data and estimate the transformed model by OLS. Stopping here gives the two-step Cochrane-Orcutt estimator.
(4) Use the $\hat{\beta}_j$'s from step (3) to compute new residuals algebraically from $Y = X\hat{\beta} + e$ again.
(5) Repeat steps (2)-(4) until convergence.
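As a computational illustration of steps (1)-(5), here is a minimal Python sketch of the iterative Cochrane-Orcutt procedure for AR(1) errors. It is not from the notes: the function names are illustrative, the default tolerance of 0.001 follows the convergence rule above, and the first observation is simply dropped after quasi-differencing rather than rescaled by $\sqrt{1-\rho^2}$.

```python
import numpy as np

def ols(X, y):
    """Plain OLS coefficients; X is a T x k design matrix (first column of ones)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cochrane_orcutt(X, y, tol=1e-3, max_iter=50):
    """Iterative Cochrane-Orcutt estimation for AR(1) errors.

    Follows steps (1)-(5): OLS, estimate rho from residuals, quasi-difference,
    re-estimate, and repeat until successive rho estimates differ by less than tol.
    Observation t = 1 is dropped after the transformation.
    """
    beta = ols(X, y)                                   # step (1): OLS on the original model
    rho_prev = np.inf
    for _ in range(max_iter):
        e = y - X @ beta                               # residuals e_t
        rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])     # step (2): rho_hat
        y_star = y[1:] - rho * y[:-1]                  # step (3): quasi-differenced data
        X_star = X[1:] - rho * X[:-1]
        beta = ols(X_star, y_star)                     # OLS on the transformed model
        if abs(rho - rho_prev) < tol:                  # step (5): stop when rho_hat converges
            break
        rho_prev = rho
        # step (4): the next pass recomputes residuals from the updated beta
    return beta, rho

# Example usage on simulated data with AR(1) errors (rho = 0.6):
rng = np.random.default_rng(1)
T = 500
x = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.6 * eps[t - 1] + rng.normal()
y = 2.0 + 1.5 * x + eps
X = np.column_stack([np.ones(T), x])
beta_hat, rho_hat = cochrane_orcutt(X, y)
print("beta_hat:", beta_hat, "rho_hat:", rho_hat)
```

Note that because the intercept column of ones becomes a constant column of $(1-\rho)$ after quasi-differencing, the coefficient returned for that column is the original intercept $\beta_1$ itself.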
