Advanced Econometrics
Chapter 13: GENERALIZED METHOD OF MOMENTS (GMM)
Nam T. Hoang, UNE Business School, University of New England
I. ORTHOGONALITY CONDITION:

The classical model:
$$Y_{(n\times 1)} = X_{(n\times k)}\,\beta_{(k\times 1)} + \varepsilon_{(n\times 1)}$$
(1) $E(\varepsilon \mid X) = 0$
(2) $E(\varepsilon\varepsilon' \mid X) = \sigma^2 I$
(3) $X$ and $\varepsilon$ are generated independently.

If $E(\varepsilon_i \mid X_i) = 0$, then for equation $i$ ($Y_i = X_i\beta + \varepsilon_i$, with $X_i$ the $1\times k$ row of regressors):
$$E(X_i'\varepsilon_i) = E_X\!\left[E(X_i'\varepsilon_i \mid X_i)\right] = E_X\!\left[X_i'\,E(\varepsilon_i \mid X_i)\right] = E_X\!\left[X_i'\cdot 0\right] = 0 \quad (k\times 1)$$
→ the orthogonality condition.

Note:
$$\operatorname{Cov}(X_i',\varepsilon_i) = E\!\left[\left(X_i' - E(X_i')\right)\left(\varepsilon_i - E(\varepsilon_i)\right)\right] = E\!\left[\left(X_i' - E(X_i')\right)\varepsilon_i\right] = E(X_i'\varepsilon_i) - E(X_i')E(\varepsilon_i) = E(X_i'\varepsilon_i)$$
when $E(\varepsilon \mid X) = 0$. So for the classical model:
$$E(X_i'\varepsilon_i) = 0 \quad (k\times 1)$$

II. METHOD OF MOMENTS:

The method of moments replaces population moments by their sample counterparts.

Moment function: a function $m(\beta)$ that depends on observable random variables and unknown parameters and that has zero expectation in the population when evaluated at the true parameter. It can be linear or non-linear in $\beta$, the $(k\times 1)$ vector of unknown parameters:
$$E[m(\beta)] = 0 \quad \text{(population moment)}$$

Example 1: For the classical linear regression model, the moment function is $m(\beta) = X_i'\varepsilon_i$, and the population moment condition is
$$E[m(\beta)] = E\!\left[X_i'(Y_i - X_i\beta)\right] = 0 \quad (k\times 1)$$
The sample moment of $E(X_i'\varepsilon_i)$ is
$$\frac{1}{n}\sum_{i=1}^{n} X_i'(Y_i - X_i\hat\beta) = \frac{1}{n}X'(Y - X\hat\beta) \quad (k\times 1)$$
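As a quick numerical illustration (a sketch of my own, not part of the notes; the data and variable names are illustrative), the $k$ sample moment conditions of the classical model can be solved directly with numpy, and at the solution the sample moment vector is numerically zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

# Solve the k sample moment conditions (1/n) X'(y - X b) = 0 for b:
# equivalent to the normal equations X'X b = X'y.
beta_mom = np.linalg.solve(X.T @ X, X.T @ y)

# The sample moment vector evaluated at beta_mom is (numerically) zero.
sample_moment = X.T @ (y - X @ beta_mom) / n
assert np.allclose(sample_moment, 0.0)
```

Since $L = k$ here, the system has a unique solution, which (as derived next) is exactly the OLS estimator.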
Replacing the population moment by the sample moment and setting it to zero:
$$\frac{1}{n}X'(Y - X\hat\beta) = 0 \;\Rightarrow\; X'Y - X'X\hat\beta = 0 \;\Rightarrow\; X'X\hat\beta = X'Y$$
$$\hat\beta_{MOM} = (X'X)^{-1}X'Y = \hat\beta_{OLS}$$

Example 2: If the $X_i$ are endogenous, $\operatorname{Cov}(X_i',\varepsilon_i) \neq 0$. Suppose $Z_i = (Z_{1i}\; Z_{2i}\; \cdots\; Z_{Li})$ is a $(1\times L)$ vector of instrumental variables for $X_i$ $(1\times k)$. $Z_i$ satisfies:
$$E(\varepsilon_i \mid Z_i) = 0 \;\Rightarrow\; E(Z_i'\varepsilon_i) = 0 \quad (L\times 1) \qquad \text{and} \qquad \operatorname{Cov}(Z_i',\varepsilon_i) = 0$$
We have the population moment:
$$E\!\left[Z_i'(Y_i - X_i\beta)\right] = 0 \quad (L\times 1)$$
The sample moment for $E\!\left[Z_i'(Y_i - X_i\beta)\right]$ is
$$\frac{1}{n}\sum_{i=1}^{n} Z_i'(Y_i - X_i\hat\beta) = \frac{1}{n}Z'(Y - X\hat\beta) \quad (L\times 1)$$
Replacing sample moments for population moments:
$$\frac{1}{n}Z'(Y - X\hat\beta) = 0 \qquad (*)$$
a) If $L < k$: (*) has fewer equations than unknowns, so $\hat\beta$ cannot be determined uniquely (under-identified).
b) If $L = k$: exactly identified:
$$Z'(Y - X\hat\beta) = 0 \;\Rightarrow\; Z'X\hat\beta = Z'Y \;\Rightarrow\; \hat\beta_{MOM} = (Z'X)^{-1}Z'Y = \hat\beta_{IV}$$
c) If $L > k$: $k$ parameters but $L$ equations → there is no solution that sets all sample moments to zero exactly, because of "too many" equations → GMM.

III. GENERALIZED METHOD OF MOMENTS:

The general case. Denote by $\bar m(\hat\beta)$ the sample moment of the population moment condition $E[m(\beta)] = 0$. The method of moments solves, for $\hat\beta = (\hat\beta_1, \dots, \hat\beta_k)'$:
$$\bar m(\hat\beta) = 0 \quad (L\times 1)$$
a) If $L < k$: no unique solution for $\hat\beta$.
b) If $L = k$: a unique solution for $\hat\beta$ solving $\bar m(\hat\beta) = 0$.
c) If $L > k$: how do we estimate $\beta$?
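Example 2 in the exactly identified case $L = k$ can be sketched numerically (my own illustration with simulated data; names are not from the notes). The instrument is correlated with the endogenous regressor but not with the error, so $\hat\beta_{IV} = (Z'X)^{-1}Z'Y$ is consistent while OLS is biased:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                 # instrument: correlated with x, not with eps
u = rng.normal(size=n)                 # common shock creating endogeneity
x = 0.8 * z + u + rng.normal(size=n)   # x depends on u -> endogenous
eps = u + rng.normal(size=n)           # error shares u with x -> Cov(x, eps) > 0
y = 1.0 + 2.0 * x + eps                # true slope is 2

X = np.column_stack([np.ones(n), x])   # regressors (with intercept)
Z = np.column_stack([np.ones(n), z])   # instruments, L = k = 2 (exactly identified)

beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)    # (Z'X)^{-1} Z'y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # biased upward under this endogeneity
```

With this design the IV slope estimate is close to the true value 2, whereas the OLS slope is pulled upward by the positive covariance between $x$ and the error.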
Hansen (1982) suggested that instead of solving the equations $\bar m(\hat\beta) = 0$ for $\hat\beta$, we solve the minimization problem:
$$\min_{\hat\beta}\; \bar m(\hat\beta)'\, W \,\bar m(\hat\beta) \qquad (**)$$
where $W$ is any $(L\times L)$ positive definite matrix that may depend on the data. (Note: a matrix $X$ $(n\times n)$ is positive definite if $a'Xa > 0$ for every non-zero vector $a = (a_1\; a_2\; \cdots\; a_n)$.)

The $\hat\beta$ that minimizes (**) is called the generalized method of moments (GMM) estimator of $\beta$, denoted $\hat\beta_{GMM}$. Hansen (1982) showed that $\hat\beta_{GMM}$ from (**) is a consistent estimator of $\beta$:
$$\operatorname*{plim}_{n\to\infty} \hat\beta_{GMM} = \beta$$

The problem here: what is the best $W$ to use? Hansen (1982) indicated:
$$W = \left[\operatorname{VarCov}\!\left(\sqrt{n}\,\bar m(\hat\beta)\right)\right]^{-1}$$
With this $W$, $\hat\beta_{GMM}$ is efficient (has the smallest variance):
$$\operatorname{VarCov}(\hat\beta_{GMM}) = \frac{1}{n}\left[G'WG\right]^{-1}, \qquad \text{where } G_{(L\times k)} = \operatorname{plim}\frac{\partial \bar m(\hat\beta)}{\partial \hat\beta'}$$
For a general (not necessarily optimal) $W$:
$$\operatorname{VarCov}(\hat\beta_{GMM}) = \frac{1}{n}\left[G'WG\right]^{-1} G'W \operatorname{VarCov}\!\left(\sqrt{n}\,\bar m(\hat\beta)\right) WG \left[G'WG\right]^{-1}$$

The linear model: the sample moment is
$$\bar m(\beta) = \frac{1}{n}\sum_{i=1}^{n} Z_i'(Y_i - X_i\beta)$$
and the criterion becomes
$$\min_{\hat\beta}\;\left[\sum_{i=1}^{n} Z_i'(Y_i - X_i\beta)\right]' W \left[\sum_{i=1}^{n} Z_i'(Y_i - X_i\beta)\right]$$
First-order condition ($k$ equations):
$$\left[\sum_{i=1}^{n} Z_i'X_i\right]' W \left[\sum_{i=1}^{n} Z_i'(Y_i - X_i\hat\beta)\right] = 0$$
$$\Rightarrow\; (Z'X)'W\left(Z'Y - Z'X\hat\beta\right) = 0 \;\Rightarrow\; (Z'X)'W\,Z'X\hat\beta = (Z'X)'W\,Z'Y$$
$$\Rightarrow\; \hat\beta_{GMM} = \left[(Z'X)'W(Z'X)\right]^{-1}(Z'X)'W\,Z'Y = \left[(X'Z)\,W\,(Z'X)\right]^{-1}(X'Z)\,W\,(Z'Y)$$
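The closed-form solution above can be checked numerically (a sketch with simulated data; the setup and names are mine, not from the notes): for any positive definite $W$ in an over-identified design, $\hat\beta_{GMM} = [(X'Z)W(Z'X)]^{-1}(X'Z)W(Z'Y)$ satisfies the $k$ first-order conditions exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, L = 400, 2, 4                     # over-identified: L > k
Z = rng.normal(size=(n, L))             # instruments
X = Z[:, :k] + 0.3 * rng.normal(size=(n, k))
beta_true = np.array([1.5, -0.5])
y = X @ beta_true + rng.normal(size=n)

W = np.eye(L)                           # any positive definite weight matrix

# beta_gmm = [(X'Z) W (Z'X)]^{-1} (X'Z) W (Z'y)
XZ, ZX = X.T @ Z, Z.T @ X
beta_gmm = np.linalg.solve(XZ @ W @ ZX, XZ @ W @ (Z.T @ y))

# First-order condition: (X'Z) W Z'(y - X beta_gmm) = 0  (k equations)
foc = XZ @ W @ (Z.T @ (y - X @ beta_gmm))
assert np.allclose(foc, 0.0, atol=1e-6)
```

Note that the $L$ sample moments themselves are not zero at $\hat\beta_{GMM}$ when $L > k$; only the $k$ linear combinations picked out by the first-order condition are.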
IV. GMM AND OTHER ESTIMATORS IN THE LINEAR MODEL:

Notice that if $L = k$ (exactly identified), then $X'Z$ is a square $(k\times k)$ matrix, so that:
$$\left[(X'Z)W(Z'X)\right]^{-1} = (Z'X)^{-1}W^{-1}(X'Z)^{-1}$$
and
$$\hat\beta_{GMM} = (Z'X)^{-1}(Z'Y)$$
which is the IV estimator → the IV estimator is a special case of the GMM estimator. If $Z = X$, then $\hat\beta_{GMM} = \hat\beta_{OLS} = (X'X)^{-1}(X'Y)$.

If $L > k$ (over-identification), the choice of the matrix $W$, called the weight matrix, is important. $\hat\beta_{GMM}$ is consistent for any positive definite $W$, but the choice of $W$ affects the variance of $\hat\beta_{GMM}$ → we should choose $W$ such that $\operatorname{Var}(\hat\beta_{GMM})$ is smallest → the efficient estimator.

If $W = (Z'Z)^{-1}$, then:
$$\hat\beta_{GMM} = \left[(X'Z)(Z'Z)^{-1}(Z'X)\right]^{-1}(X'Z)(Z'Z)^{-1}(Z'Y)$$
which is the 2SLS estimator, so 2SLS is also a special case of the GMM estimator.

From Hansen (1982), the optimal $W$ in the linear model is:
$$W = \left[\frac{1}{n}Z'\Sigma Z\right]^{-1} = \left[E(Z_i'\varepsilon_i\varepsilon_i' Z_i)\right]^{-1} = \left[\operatorname{VarCov}(Z_i'\varepsilon_i)\right]^{-1}$$

The next problem is to estimate $W$:

a) If there is no heteroskedasticity and no autocorrelation:
$$\hat W = \left[\hat\sigma_\varepsilon^2\,\frac{1}{n}\sum_{i=1}^{n} Z_i'Z_i\right]^{-1} = \left[\hat\sigma_\varepsilon^2\,\frac{1}{n}Z'Z\right]^{-1}$$
$$\hat\beta_{GMM} = \left[(X'Z)(Z'Z)^{-1}(Z'X)\right]^{-1}(X'Z)(Z'Z)^{-1}(Z'Y)$$
We get the 2SLS estimator → there is no difference between $\hat\beta_{2SLS}$ and $\hat\beta_{GMM}$ in the case of no heteroskedasticity and no autocorrelation.

b) If there is heteroskedasticity of unknown form in the error terms (but no autocorrelation):
$$\hat W = \left[\frac{1}{n}\sum_{i=1}^{n} e_i^2\, Z_i'Z_i\right]^{-1} \qquad \text{(White's estimator)}$$
→ an efficiency gain over $\hat\beta_{2SLS}$.

c) If there are both heteroskedasticity and autocorrelation of unknown forms, use the Newey-West estimator $\hat W = \left[\frac{1}{n}Z'\hat\Sigma Z\right]^{-1}$:
$$\hat W = \left\{\frac{1}{n}\left[\sum_{i=1}^{n} e_i^2\, Z_i'Z_i + \sum_{j=1}^{L}\sum_{i=j+1}^{n} w_j\, e_i e_{i-j}\left(Z_i'Z_{i-j} + Z_{i-j}'Z_i\right)\right]\right\}^{-1}, \qquad w_j = 1 - \frac{j}{L+1}$$
(here $L$ denotes the truncation lag of the kernel) → an efficiency gain over $\hat\beta_{2SLS}$.
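The 2SLS equivalence in case (a) is easy to verify numerically (a sketch of mine with simulated data): GMM with $W = (Z'Z)^{-1}$ reproduces the classical two-stage procedure of regressing $X$ on $Z$ and then $Y$ on the fitted values:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, L = 300, 2, 3
Z = rng.normal(size=(n, L))
X = Z[:, :k] + 0.5 * rng.normal(size=(n, k))
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

# GMM with W = (Z'Z)^{-1}
W = np.linalg.inv(Z.T @ Z)
XZ, ZX = X.T @ Z, Z.T @ X
beta_gmm = np.linalg.solve(XZ @ W @ ZX, XZ @ W @ (Z.T @ y))

# Classical 2SLS: first stage X on Z, second stage y on the fitted X_hat
X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta_2sls = np.linalg.solve(X_hat.T @ X, X_hat.T @ y)

assert np.allclose(beta_gmm, beta_2sls)   # identical up to rounding
```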
Notes: $\Sigma = E(\varepsilon\varepsilon')$. If the heteroskedasticity and autocorrelation forms are known, e.g.
$$\sigma_i^2 = f(X_i) \qquad \text{or} \qquad \varepsilon_t = \rho\varepsilon_{t-1} + u_t,$$
then $\Sigma$ can be consistently estimated and we could perform GLS (using instrumental variables) to get the efficient estimator $\hat\beta_{GLS}$; GMM is not necessary there. Usually, however, the forms of autocorrelation and heteroskedasticity are not known → the GMM estimator is an important improvement in these cases.

V. GMM ESTIMATION PROCEDURE:

Step 1: Use $W = I$ or $W = (Z'Z)^{-1}$ to obtain a consistent estimator of $\beta$. Then estimate $\hat W$ by White's procedure (heteroskedasticity case) or the Newey-West procedure (general case).

Step 2: Use the estimated $\hat W$ to compute the GMM estimator:
$$\hat\beta_{GMM} = \left[(X'Z)\hat W(Z'X)\right]^{-1}(X'Z)\hat W(Z'Y)$$
Note: we always need to construct $\hat W$ from the first step.

VI. THE ADVANTAGES OF THE GMM ESTIMATOR:

1. If we don't know the form/pattern of heteroskedasticity or autocorrelation, we can correct the standard errors (White or Newey-West robust standard errors) but are stuck with inefficient estimators.
2. The 2SLS estimator is consistent but still inefficient if the error terms are autocorrelated or heteroskedastic.
3. GMM delivers an efficient estimator, with correct standard errors, in the case of unknown heteroskedasticity and autocorrelation forms.

Potential drawbacks:
1. The definition of the weight matrix $W$ for the first step is arbitrary; different choices will lead to different point estimates in the second step. One possible remedy is not to stop after two iterations but to continue updating the weight matrix $W$ until convergence is achieved. This estimator can be obtained by using the "cue" (continuously updated estimator) option within ivreg2.
2. An inference problem arises because the optimal weight matrix is estimated → this can sometimes lead to a downward bias in the estimated standard errors of the GMM estimator.
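The two-step procedure of Section V can be sketched as follows (my own illustration under heteroskedasticity of unknown form; the data-generating process and names are assumptions, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, L = 1000, 2, 4
Z = rng.normal(size=(n, L))
X = Z[:, :k] + 0.5 * rng.normal(size=(n, k))
beta_true = np.array([1.0, -1.0])
eps = rng.normal(size=n) * np.sqrt(0.5 + Z[:, 0] ** 2)   # heteroskedastic errors
y = X @ beta_true + eps

def gmm(W):
    """Linear GMM estimator [(X'Z) W (Z'X)]^{-1} (X'Z) W (Z'y)."""
    XZ, ZX = X.T @ Z, Z.T @ X
    return np.linalg.solve(XZ @ W @ ZX, XZ @ W @ (Z.T @ y))

# Step 1: consistent (but inefficient) estimate with W = (Z'Z)^{-1}
beta_1 = gmm(np.linalg.inv(Z.T @ Z))

# Estimate the optimal weight matrix by White's formula:
# W_hat = [ (1/n) sum_i e_i^2 Z_i' Z_i ]^{-1}, e_i = first-step residuals
e = y - X @ beta_1
W_hat = np.linalg.inv((Z * e[:, None] ** 2).T @ Z / n)

# Step 2: efficient two-step GMM estimate
beta_2 = gmm(W_hat)
```

Both steps are consistent; the second step is the one with the efficiency gain under heteroskedasticity.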
VII. VARIANCE OF THE GMM ESTIMATOR FOR LINEAR MODELS:

Note:
$$\operatorname{VarCov}(\hat\beta_{GMM}) = n\left[(X'Z)\hat W(Z'X)\right]^{-1} = \frac{1}{n}\left[\frac{X'Z}{n}\,\hat W\,\frac{Z'X}{n}\right]^{-1} \xrightarrow[n\to\infty]{} \frac{1}{n}\left[Q_{X'Z}\,\hat W\,Q_{Z'X}\right]^{-1} \xrightarrow[n\to\infty]{} 0$$
→ consistency, and
$$\sqrt{n}\left(\hat\beta_{GMM} - \beta\right) \xrightarrow{\;L\;} N\!\left(0,\ \left[Q_{X'Z}\,\hat W\,Q_{Z'X}\right]^{-1}\right)$$
so that $\hat\beta_{GMM}$ is a consistent estimator. The estimated variance is:
$$\widehat{\operatorname{VarCov}}(\hat\beta_{GMM}) = n\left[(X'Z)\hat W(Z'X)\right]^{-1}$$

In practice $\hat W$ is noisy, since the residuals from the first step are affected by sampling error. The upshot is that the two-step standard errors tend to be too small (overly optimistic). Methods now exist that enable you to correct for the sampling error in the first step (the Windmeijer procedure).

VIII. SPECIFICATION TESTS WITH GMM:

$\hat\beta_{GMM}^R$: restricted estimator (under the constraints); $\hat\beta_{GMM}$: unrestricted estimator (no constraints). The test statistic is the difference of the GMM criteria evaluated with the same weight matrix:
$$J = \left[\frac{1}{\sqrt n}\sum_{i=1}^{n} Z_i'\hat\varepsilon_i(R)\right]' \hat W \left[\frac{1}{\sqrt n}\sum_{i=1}^{n} Z_i'\hat\varepsilon_i(R)\right] - \left[\frac{1}{\sqrt n}\sum_{i=1}^{n} Z_i'\hat\varepsilon_i\right]' \hat W \left[\frac{1}{\sqrt n}\sum_{i=1}^{n} Z_i'\hat\varepsilon_i\right] \;\sim\; \chi_q^2$$
where $q$ is the number of restrictions.
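The restricted-versus-unrestricted criterion comparison can be sketched numerically (my own example; the data, restriction, and names are illustrative assumptions). Here the true $\beta_2 = -1$, so imposing the (false) restriction $\beta_2 = 0$ inflates the criterion and the test rejects:

```python
import numpy as np

rng = np.random.default_rng(5)
n, L = 1000, 3
Z = rng.normal(size=(n, L))
X = Z[:, :2] + 0.3 * rng.normal(size=(n, 2))
y = X @ np.array([1.0, -1.0]) + rng.normal(size=n)

def gmm_fit(Xmat, W):
    """Linear GMM estimator for regressor matrix Xmat with weight W."""
    XZ, ZX = Xmat.T @ Z, Z.T @ Xmat
    return np.linalg.solve(XZ @ W @ ZX, XZ @ W @ (Z.T @ y))

def criterion(Xmat, b, W):
    """GMM criterion n * m_bar' W m_bar at parameter b."""
    m = Z.T @ (y - Xmat @ b) / n
    return n * m @ W @ m

# Common weight matrix from a first-step (unrestricted) estimate
e = y - X @ gmm_fit(X, np.linalg.inv(Z.T @ Z))
W = np.linalg.inv((Z * e[:, None] ** 2).T @ Z / n)

J_u = criterion(X, gmm_fit(X, W), W)                  # unrestricted
J_r = criterion(X[:, :1], gmm_fit(X[:, :1], W), W)    # restricted: beta_2 = 0

stat = J_r - J_u       # compared to chi2(q) with q = 1 restriction
assert stat >= 0       # restricted minimum can never beat the unrestricted one
assert stat > 3.84     # H0 is false here, so the test rejects at the 5% level
```

Using the same $\hat W$ in both criteria guarantees the statistic is non-negative, mirroring the construction in the formula above.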

Posted: 09/12/2017, 08:39
