Advanced Econometrics, Chapter 7: Generalized Linear Regression Model
Nam T. Hoang, University of New England (Australia) / University of Economics, HCMC (Vietnam)

I MODEL:

Our basic model is Y = X\beta + \varepsilon with \varepsilon \sim N[0, \sigma^2 I]. We will now generalize the specification of the error term:

    E(\varepsilon) = 0, \quad E(\varepsilon\varepsilon') = \sigma^2 \Omega = \Sigma \quad (n \times n)

This allows for one or both of heteroskedasticity and autocorrelation. The model is now:

(1) Y = X\beta + \varepsilon, with X an n \times k matrix
(2) X is non-stochastic and rank(X) = k
(3) E(\varepsilon) = 0 (an n \times 1 zero vector)
(4) E(\varepsilon\varepsilon') = \Sigma = \sigma_\varepsilon^2 \Omega (n \times n)

Heteroskedasticity case:

    \Sigma = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{pmatrix}

Autocorrelation case:

    \Sigma = \sigma_\varepsilon^2 \begin{pmatrix} 1 & \rho_1 & \cdots & \rho_{n-1} \\ \rho_1 & 1 & \cdots & \rho_{n-2} \\ \vdots & & \ddots & \vdots \\ \rho_{n-1} & \rho_{n-2} & \cdots & 1 \end{pmatrix}

where \rho_i = Corr(\varepsilon_t, \varepsilon_{t-i}) is the correlation between errors that are i periods apart.

II PROPERTIES OF OLS ESTIMATORS:

    \hat{\beta} = (X'X)^{-1}X'Y = (X'X)^{-1}X'(X\beta + \varepsilon) = \beta + (X'X)^{-1}X'\varepsilon
    E(\hat{\beta}) = \beta + (X'X)^{-1}X'E(\varepsilon) = \beta

so \hat{\beta} is still an unbiased estimator. However:

    VarCov(\hat{\beta}) = E[(\hat{\beta} - \beta)(\hat{\beta} - \beta)']
                        = E[(X'X)^{-1}X'\varepsilon\varepsilon'X(X'X)^{-1}]
                        = (X'X)^{-1}X'E(\varepsilon\varepsilon')X(X'X)^{-1}
                        = (X'X)^{-1}X'(\sigma^2\Omega)X(X'X)^{-1}
                        \neq \sigma^2 (X'X)^{-1}

so the standard formula for the variance of \hat{\beta} no longer holds, and the usual \hat{\sigma}_\varepsilon^2 (X'X)^{-1} is a biased estimator of the true VarCov(\hat{\beta}). Since

    \hat{\beta} \sim N[\beta, \; \sigma^2 (X'X)^{-1} X'\Omega X (X'X)^{-1}]

the usual OLS output will be misleading: the standard errors, t-statistics, etc. are based on \hat{\sigma}_\varepsilon^2 (X'X)^{-1}, not on the correct formula. Furthermore, the OLS estimators are no longer best (they are inefficient).

Note: for non-stochastic X, we care about the efficiency of \hat{\beta}, because we know that as n grows Var(\hat{\beta}_j) falls, so plim \hat{\beta} = \beta and \hat{\beta} is consistent.
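The sandwich form of VarCov(β̂) above can be contrasted with the naive σ²(X'X)⁻¹ formula on a tiny fixed design. This is a minimal numpy sketch; the design matrix and the per-observation variances are illustrative assumptions, not from the text.

```python
import numpy as np

# Small fixed design illustrating Section II: when E(eps eps') = Sigma != s^2 * I,
# OLS is still unbiased but its true covariance is the "sandwich"
# (X'X)^-1 X' Sigma X (X'X)^-1, not s^2 (X'X)^-1.
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
sigma2_i = np.array([1.0, 4.0, 9.0, 16.0])        # heteroskedastic error variances (assumed)
Sigma = np.diag(sigma2_i)

XtX_inv = np.linalg.inv(X.T @ X)
V_sandwich = XtX_inv @ X.T @ Sigma @ X @ XtX_inv  # correct VarCov of the OLS estimator
V_naive = sigma2_i.mean() * XtX_inv               # what the usual OLS formula would report

print(np.diag(V_sandwich))
print(np.diag(V_naive))
```

The two diagonals disagree, which is exactly why the usual standard errors are biased here; if all sigma2_i were equal, the two formulas would coincide.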
If X is stochastic, the OLS estimators are still consistent when E(\varepsilon|X) = 0, while IV estimators remain consistent even when E(\varepsilon|X) \neq 0. However, the usual covariance matrix estimator of VarCov(\hat{\beta}), namely \hat{\sigma}_\varepsilon^2 (X'X)^{-1}, will be inconsistent (as n \to \infty) for the true VarCov(\hat{\beta}). We need to know how to deal with these issues, and this will lead us to a generalized estimator.

III WHITE'S HETEROSKEDASTICITY-CONSISTENT ESTIMATOR OF VarCov(\hat{\beta}) (the "robust" estimator):

If we knew \sigma^2\Omega, then the VarCov(\hat{\beta}) would be:

    V = (X'X)^{-1} X'(\sigma^2\Omega) X (X'X)^{-1}
      = \frac{1}{n} \left[\frac{1}{n}X'X\right]^{-1} \left[\frac{1}{n}X'(\sigma^2\Omega)X\right] \left[\frac{1}{n}X'X\right]^{-1}
      = \frac{1}{n} \left[\frac{1}{n}X'X\right]^{-1} \left[\frac{1}{n}X'\Sigma X\right] \left[\frac{1}{n}X'X\right]^{-1}

If \Sigma is unknown, we need a consistent estimator of \frac{1}{n}X'\Sigma X. (Note that the number of unknowns in \Sigma grows one-for-one with n, but X'\Sigma X is a k \times k matrix, so it does not grow with n.) Let:

    \Sigma^* = \frac{1}{n} X'\Sigma X = \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n \sigma_{ij} X_i X_j'

where X_i is the k \times 1 vector of regressors for observation i. In the case of heteroskedasticity, \Sigma = diag(\sigma_1^2, \ldots, \sigma_n^2) and

    \Sigma^* = \frac{1}{n} \sum_{i=1}^n \sigma_i^2 X_i X_i'

White (1980) showed that if

    \Sigma_0 = \frac{1}{n} \sum_{i=1}^n e_i^2 X_i X_i'

where the e_i are OLS residuals, then plim \Sigma_0 = plim \Sigma^*. So we can estimate by OLS, and a consistent estimator of V is:

    \hat{V} = \frac{1}{n} \left[\frac{1}{n}X'X\right]^{-1} \left[\frac{1}{n}\sum_{i=1}^n e_i^2 X_i X_i'\right] \left[\frac{1}{n}X'X\right]^{-1} = n (X'X)^{-1} \Sigma_0 (X'X)^{-1}

\hat{V} is a consistent estimator of V, so White's estimator of VarCov(\hat{\beta}) is:

    Est.VarCov(\hat{\beta}) = (X'X)^{-1} X'\hat{\Sigma} X (X'X)^{-1} = \hat{V}

where \hat{\Sigma} = diag(e_1^2, e_2^2, \ldots, e_n^2). (Note that \Sigma_0 = \frac{1}{n} X'\hat{\Sigma} X.)

\hat{V} is consistent for V regardless of the (unknown) form of the heteroskedasticity, but it is valid only for heteroskedasticity. Newey and West produced a corresponding consistent estimator of V for the case where there is autocorrelation as well as (or instead of) heteroskedasticity.
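White's estimator can be computed directly from OLS residuals. A minimal sketch with simulated heteroskedastic data; the data-generating process (coefficients, error variance rule) is an illustrative assumption:

```python
import numpy as np

# Minimal sketch of White's (1980) heteroskedasticity-consistent estimator:
# V_hat = (X'X)^-1 [sum_i e_i^2 x_i x_i'] (X'X)^-1, built from OLS residuals only.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([np.ones(n), rng.uniform(1.0, 5.0, n)])
eps = rng.normal(0.0, 0.5 * X[:, 1])       # error std dev grows with the regressor (assumed)
y = X @ np.array([1.0, 2.0]) + eps

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                      # OLS coefficients
e = y - X @ b                              # OLS residuals
meat = X.T @ (e[:, None] ** 2 * X)         # X' diag(e_i^2) X
V_white = XtX_inv @ meat @ XtX_inv         # robust covariance estimate

robust_se = np.sqrt(np.diag(V_white))
naive_se = np.sqrt((e @ e) / (n - 2) * np.diag(XtX_inv))
print(robust_se)
print(naive_se)
```

Note that only the covariance estimate changes: the coefficient vector b is ordinary OLS.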
White's estimator only modifies the covariance matrix estimator, not \hat{\beta} itself. The t-statistics, F-statistics, etc. will be modified, but only in a manner that is appropriate asymptotically. So if we have heteroskedasticity or autocorrelation, whether or not we modify the covariance matrix estimator, the usual t-statistics will be unreliable in finite samples: White's estimator of VarCov(\hat{\beta}) is only useful when n is very large, since only as n \to \infty does \hat{V} converge to the true VarCov(\hat{\beta}). Moreover, \hat{\beta} is still inefficient. To obtain efficient estimators, use generalized least squares (GLS). A good practical solution is to use White's adjustment and then the Wald test, rather than the exact F-test, for linear restrictions.

Now let us turn to the estimation of \beta, taking account of the full process for the error term.

IV GENERALIZED LEAST SQUARES ESTIMATION (GLS):

The OLS estimator is inefficient in finite samples. Assume E(\varepsilon\varepsilon') = \Sigma (n \times n) is known and positive definite. Then there exist characteristic vectors C_j (n \times 1) and characteristic roots \lambda_j, j = 1, 2, \ldots, n, such that

    \Sigma C_j = \lambda_j C_j

Collect the eigenvectors as the columns of C = [C_1 \; C_2 \; \cdots \; C_n] and the eigenvalues in

    \Lambda = diag(\lambda_1, \lambda_2, \ldots, \lambda_n), \quad \Lambda^{1/2} = diag(\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_n})

Then C'\Sigma C = \Lambda = (\Lambda^{1/2})'(\Lambda^{1/2}), and therefore

    (\Lambda^{-1/2})' C'\Sigma C (\Lambda^{-1/2}) = (\Lambda^{-1/2})'(\Lambda^{1/2})'(\Lambda^{1/2})(\Lambda^{-1/2}) = I

Defining H = \Lambda^{-1/2} C', this says H\Sigma H' = I, so \Sigma = H^{-1}(H')^{-1} and therefore

    \Sigma^{-1} = H'H
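The spectral construction of H can be checked numerically. A short sketch, assuming an illustrative AR(1)-style Σ with ρ = 0.6 (any positive definite Σ would do):

```python
import numpy as np

# Numerical check of the spectral construction: for positive definite Sigma,
# H = Lambda^(-1/2) C' satisfies H Sigma H' = I and H'H = Sigma^-1.
n = 5
rho = 0.6
idx = np.arange(n)
Sigma = rho ** np.abs(np.subtract.outer(idx, idx))  # illustrative AR(1)-type covariance

lam, C = np.linalg.eigh(Sigma)            # Sigma C_j = lambda_j C_j, columns of C
H = np.diag(lam ** -0.5) @ C.T            # H = Lambda^(-1/2) C'

print(np.allclose(H @ Sigma @ H.T, np.eye(n)))     # True
print(np.allclose(H.T @ H, np.linalg.inv(Sigma)))  # True
```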
* ' Y * = (X ' H ' H X ) −1 X '  H ' HY → → Σ −1 Σ −1 βˆGLS = ( X ' Σ −1 X ) −1 X ' Σ −1Y Moreover: [ ] [ VarCov ( βˆGLS ) = ( X * ' X * ) −1 X * ' E (ε *ε * ' ) X * ( X * ' X * ) −1 ] = ( X * ' X * ) − = ( X ' Σ −1 X ) −1 VarCov( βˆGLS ) = ( X ' Σ −1 X ) −1 → βˆGLS ~ N [β , ( X ' Σ −1 X )] Note that: βˆGLS is BLUE of βˆ → E ( βˆGLS ) = β GLS estimator is just OLS, applied to the transformed model → satisfy all assumptions Gauss - Markov theorem can be applied → βˆGLS is BLUE of βˆ → βˆOLS must be inefficient in this case → Var ( βˆ j GLS ) ≤ Var ( βˆ j OLS ) Nam T Hoang University of New England - Australia University of Economics - HCMC - Vietnam Advanced Econometrics Chapter 7: Generalized Linear Regression Model Example: σ 12  σ 22 Σ = known     0 1 / σ 12  / σ 22 → Σ −1 =       0   0      σ n2   1 / σ 12  / σ 22 →H =      H'H = Σ-1 1 / σ 12  / σ 22  HY =      1 / σ  1/ σ * X = HX =     1 / σ n          / σ n2             / σ n2   Y1  Y1 / σ 12      Y2  Y2 / σ 22  =Y* =             / σ n2  Yn  Yn / σ n2     X 12 / σ X 22 / σ  X n2 / σ n  X 1k / σ    X 2k / σ       X nk / σ n  Transformed model has each observations divided by σi:   Yi    = β1  σi σi  X X    + β  i  +  + β k  ik  σi  σi     εi   +    σi  Apply OLS to this transformed equation → "Weighted Least Squares": Let: βˆ = GSL estimator εˆ = Y * − X * βˆGLS σˆ = εˆ' εˆ n−k Then to test: H0: R = q (F Wald test) [ Rβˆ − q ]' [ R ( X * ' X * ) −1 R ' ]−1 [ Rβˆ − q ] Fnr− k = Nam T Hoang University of New England - Australia r ~F if H0 is true ( r ,n − k ) σˆ University of Economics - HCMC - Vietnam Advanced Econometrics Chapter 7: Generalized Linear Regression Model [εˆc′εˆc − εˆ' εˆ ] r n −k and F = εˆ' εˆ r (n − k ) where: εˆc = Y * − X * βˆc GLS βˆc GLS = βˆGLS − ( X ' Σ −1 X ) −1 R ' [ R ( X ' Σ −1 X ) −1 R' ]−1 ( RβˆGLS − q) is the 
"constrained" GLS estimator of Feasible GLS estimation: In practice, of course, Σ is usually unknown, and so βˆ cannot be constructed, it is not feasible The obvious solution is to estimate Σ, using some Σˆ then construct: βˆGLS = ( X ' Σˆ −1 X ) −1 X ' Σˆ −1Y A practical issue: Σ is an (n×n), it has n(n+1)/2 distinct parameters, allowing for symmetry But we only have "n" observations → need to constraint Σ Typically Σ = Σ(θ) where θ contain a small number of parameters Ex: Heteroskedasticity var(εi) = σ2(θ1+θ2Zi) θ1 + θ z1  θ1 + θ z n Σ=               θ1 + θ z n   just parameters to be estimated to form Σˆ Serial correlation:   ρ Σ=    n −1 ρ ρ  ρ n −2  ρ n−1    ρ n −2  = Σ( ρ )       only one parameter to be estimated • • If Σˆ is consistent for Σ then will be asymptotically efficient for Of course to apply we want to know the form of Σ → construct tests Nam T Hoang University of New England - Australia University of Economics - HCMC - Vietnam ... C ' University of Economics - HCMC - Vietnam Advanced Econometrics Our model: Chapter 7: Generalized Linear Regression Model Y = Xβ + ε Pre-multiply by H: HY =  HX β + H ε   Y* → ε* X* Y *... - Australia University of Economics - HCMC - Vietnam Advanced Econometrics Chapter 7: Generalized Linear Regression Model Note: for non-stochastic X, we care about the efficient of βˆ Because... - Australia University of Economics - HCMC - Vietnam Advanced Econometrics Chapter 7: Generalized Linear Regression Model σ 12  σ 22  In the case of heteroskedasticity Σ =     0 0 

Posted: 09/12/2017, 08:38