Title: Two-Step GMM Estimation of the Errors-in-Variables Model Using High-Order Moments
Authors: Timothy Erickson, Toni M. Whited
Institution: University of Iowa
Field: Econometrics
Type: Journal article
Year: 2002
Pages: 24



TWO-STEP GMM ESTIMATION OF THE ERRORS-IN-VARIABLES MODEL USING HIGH-ORDER MOMENTS

1 INTRODUCTION

It is well known that if the independent variables of a linear regression are replaced with error-laden measurements or proxy variables, then ordinary least squares (OLS) is inconsistent. The most common remedy is to use economic theory or intuition to find additional observable variables that can serve as instruments, but in many situations no such variables are available. Consistent estimators based on the original, unaugmented set of observable variables are therefore potentially quite valuable. This observation motivates us to revisit the idea of consistent estimation using information contained in the third- and higher-order moments of the data.

We gratefully acknowledge helpful comments from two referees, Joel Horowitz, Steven Klepper, Brent Moulton, Tsvetomir Tsachev, Jennifer Westberg, and participants of seminars given at the 1992 Econometric Society Summer Meetings, the University of Pennsylvania, the University of Maryland, the Federal Reserve Bank of Philadelphia, and Rutgers University. A version of this paper was circulated previously under the title "Measurement-Error Consistent Estimates of the Relationship between Investment and Q." Address correspondence to: Timothy Erickson, Bureau of Labor Statistics, Postal Square Building, Room 3105, 2 Massachusetts Avenue, NE, Washington, DC, 20212-0001, USA.


We consider a linear regression containing any number of perfectly and imperfectly measured regressors. To facilitate empirical application, we present the asymptotic distribution theory for two-step estimators, where the first step is "partialling out" the perfectly measured regressors and the second step is high-order moment generalized method of moments (GMM) estimation of the regression involving the residuals generated by partialling. The orthogonality condition for GMM expresses the moments of these residuals as functions of the parameters to be estimated. The advantage of the two-step approach is that the numbers of equations and parameters in the nonlinear GMM step do not grow with the number of perfectly measured regressors, conferring a computational simplicity not shared by the asymptotically more efficient one-step GMM estimators that we also describe. Basing GMM estimation on residual moments of more than second order requires that the GMM covariance matrix be explicitly adjusted to account for the fact that estimated residuals are used instead of true residuals defined by population regressions. Similarly, the weighting matrix giving the optimal GMM estimator based on true residuals is not the same as that giving the optimal estimator based on estimated residuals. We determine both the adjustment required for covariance matrices and the weighting matrix giving the optimal GMM estimator. The optimal estimators perform well in Monte Carlo simulations and in some cases minimize mean absolute error by using moments up to seventh order.

Interest will often focus on a function that depends on GMM estimates and other estimates obtained from the same data. Such functions include those giving the coefficients on the partialled-out regressors and that giving the population $R^2$ of the regression. To derive the asymptotic distribution of such a function, we must determine the covariances between its "plug-in" arguments, which are not jointly estimated. We do so by using estimator influence functions.

Our assumptions have three notable features. First, the measurement errors, the equation error, and all regressors have finite moments of sufficiently high order. Second, the regression error and the measurement errors are independent of each other and of all regressors. Third, the residuals from the population regression of the unobservable regressors on the perfectly measured regressors have a nonnormal distribution. These assumptions imply testable restrictions on the residuals from the population regression of the dependent and proxy variables on the perfectly measured regressors. We provide partialling-adjusted statistics and asymptotic null distributions for such tests.

Reiersøl (1950) provides a framework for discussing previous papers based on the same assumptions or on related models. Reiersøl defines Model A and Model B versions of the single-regressor errors-in-variables model. Model A assumes normal measurement and equation errors and permits them to be correlated. Model B assumes independent measurement and equation errors but allows them to have arbitrary distributions. We additionally define Model A*, which has arbitrary symmetric distributions for the measurement and equation errors, permitting them to be correlated. Versions of these models with more than one mismeasured regressor we shall call multivariate.


In reading the following list of pertinent articles, keep in mind that the present paper deals with a multivariate Model B.

The literature on high-order moment-based estimation starts with Neyman's (1937) conjecture that such an approach might be possible for Model B. Reiersøl (1941) gives the earliest actual estimator, showing how Model A* can be estimated using third-order moments. In the first comprehensive paper, Geary (1942) shows how multivariate versions of Models A and B can be estimated using cumulants of any order greater than two. Madansky (1959) proposes minimum variance combinations of Geary-type estimators, an idea Van Montfort, Mooijaart, and de Leeuw (1987) implement for Model A*. The state of the art in estimating Model A is given by Bickel and Ritov (1987) and Dagenais and Dagenais (1997). The former derive the semiparametric efficiency bound for Model A and give estimators that attain it. The latter provide linear instrumental variable (IV) estimators based on third- and fourth-order moments for multivariate versions of Models A and A*.¹

The state of the art for estimating Model B has been the empirical characteristic function estimator of Spiegelman (1979). He establishes $\sqrt{n}$-consistency for an estimator of the slope coefficient. This estimator can exploit all available information, but its asymptotic variance is not given because of the complexity of its expression. A related estimator, also lacking an asymptotic variance, is given by Van Montfort, Mooijaart, and de Leeuw (1989). Cragg (1997) combines second- through fourth-order moments in a single-regressor version of the nonlinear GMM estimator we describe in this paper.² Lewbel (1997) proves consistency for a linear IV estimator that uses instruments based on nonlinear functions of the perfectly measured regressors. It should be noted that Cragg and Lewbel generalize the third-order moment Geary estimator in different directions: Cragg augments the third-order moments of the dependent and proxy variables with their fourth-order moments, whereas Lewbel augments those third-order moments with information from the perfectly measured regressors.

We enter this story by providing a multivariate Model B with two-step estimators based on residual moments of any order. We also give a parsimonious two-step version of an estimator suggested in Lewbel (1997) that exploits high-order moments and functions of perfectly measured regressors. Our version recovers information from the partialled-out perfectly measured regressors, yet retains the practical benefit of a reduced number of equations and parameters.

The paper is arranged as follows. Section 2 specifies a multivariate Model B and presents our estimators, their asymptotic distributions, and results useful for testing. Section 3 describes a more efficient but less tractable one-step estimator and a tractable two-step estimator that uses information from perfectly measured regressors. Section 4 presents Monte Carlo simulations, and Section 5 concludes. The Appendix contains our proofs.


2 THE MODEL

Let $(y_i, x_i, z_i)$, $i = 1, \ldots, n$, be a sequence of observable vectors, where $x_i \equiv (x_{i1}, \ldots, x_{iJ})$ and $z_i \equiv (1, z_{i1}, \ldots, z_{iL})$. Let $(u_i, \varepsilon_i, \chi_i)$ be a sequence of unobservable vectors, where $\chi_i \equiv (\chi_{i1}, \ldots, \chi_{iJ})$ and $\varepsilon_i \equiv (\varepsilon_{i1}, \ldots, \varepsilon_{iJ})$.

Assumption 1.

(i) $y_i = z_i \alpha + \chi_i \beta + u_i$;  (1)

(ii) $x_i = \chi_i + \varepsilon_i$;  (2)

(iii) $u_i$ and the elements of $z_i$, $\chi_i$, and $\varepsilon_i$ have finite moments of every order;

(iv) $(u_i, \varepsilon_i)$ is independent of $(z_i, \chi_i)$, and the individual elements in $(u_i, \varepsilon_i)$ are independent of each other;

(v) $E(u_i) = 0$ and $E(\varepsilon_i) = 0$;

(vi) $E[(z_i, \chi_i)'(z_i, \chi_i)]$ is positive definite.

Equations (1) and (2) represent a regression with observed regressors $z_i$ and unobserved regressors $\chi_i$ that are imperfectly measured by $x_i$. The assumption that the measurement errors in $\varepsilon_i$ are independent of each other and also of the equation error $u_i$ goes back to Geary (1942) and may be regarded as the traditional multivariate extension of Reiersøl's Model B. The assumption of finite moments of every order is for simplicity and can be relaxed at the expense of greater complexity.

Before stating our remaining assumptions, we "partial out" the perfectly measured variables. The $1 \times J$ residual from the population linear regression of $x_i$ on $z_i$ is $x_i - z_i\mu_x$, where $\mu_x \equiv [E(z_i'z_i)]^{-1}E(z_i'x_i)$. The corresponding $1 \times J$ residual from the population linear regression of $\chi_i$ on $z_i$ equals $\eta_i \equiv \chi_i - z_i\mu_x$. Subtracting $z_i\mu_x$ from both sides of (2) gives

$x_i - z_i\mu_x = \eta_i + \varepsilon_i.$  (3)

Similarly, defining $\mu_y \equiv [E(z_i'z_i)]^{-1}E(z_i'y_i)$, the coefficients on the perfectly measured regressors satisfy

$\alpha = \mu_y - \mu_x\beta,$  (4)

and subtracting $z_i\mu_y$ from both sides of (1) gives

$y_i - z_i\mu_y = \eta_i\beta + u_i.$  (5)


We consider a two-step estimation approach, where the first step is to substitute least squares estimates $(\hat\mu_x, \hat\mu_y) \equiv [\sum_{i=1}^n z_i'z_i]^{-1}\sum_{i=1}^n z_i'(x_i, y_i)$ into (3) and (5) to obtain a lower dimensional errors-in-variables model, and the second step is to estimate $\beta$ using high-order sample moments of $y_i - z_i\hat\mu_y$ and $x_i - z_i\hat\mu_x$. Estimates of $\alpha$ are then recovered via (4).
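As a minimal sketch of the two steps (our own illustration, not the authors' code), consider the $J = 1$ case, using in the second step the closed-form third-order moment solution discussed in Section 2.1 rather than a full nonlinear GMM fit; the array names and shapes are assumptions:

```python
import numpy as np

def two_step_eiv(y, x, z):
    """Two-step estimator sketch for J = 1.

    y: (n,) dependent variable; x: (n,) error-laden proxy;
    z: (n, L+1) perfectly measured regressors, first column ones.
    """
    # Step 1: partial out z by OLS, giving mu_y_hat, mu_x_hat and residuals.
    ztz_inv = np.linalg.inv(z.T @ z)
    mu_y = ztz_inv @ (z.T @ y)
    mu_x = ztz_inv @ (z.T @ x)
    yd = y - z @ mu_y
    xd = x - z @ mu_x

    # Step 2: estimate beta from high-order residual moments.  Here we use
    # the exactly identified third-order ratio (valid under Assumption 2);
    # a full implementation would run nonlinear GMM on an S_M system.
    beta = np.mean(yd**2 * xd) / np.mean(yd * xd**2)

    # Recover the coefficients on z via the plug-in form of (4).
    alpha = mu_y - mu_x * beta
    return alpha, beta
```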

Our estimators are based on equations giving the moments of $y_i - z_i\mu_y$ and $x_i - z_i\mu_x$ as functions of $\beta$ and the moments of $(u_i, \varepsilon_i, \eta_i)$. To derive these equations, write (5) as $y_i - z_i\mu_y = \sum_{j=1}^J \eta_{ij}\beta_j + u_i$ and the $j$th equation in (3) as $x_{ij} - z_i\mu_{xj} = \eta_{ij} + \varepsilon_{ij}$, where $\mu_{xj}$ is the $j$th column of $\mu_x$ and $(\eta_{ij}, \varepsilon_{ij})$ is the $j$th pair of elements of $(\eta_i, \varepsilon_i)$. Consider the product moment

$E\big[(y_i - z_i\mu_y)^{r_0} \prod_{j=1}^J (x_{ij} - z_i\mu_{xj})^{r_j}\big],$  (6)

where $(r_0, r_1, \ldots, r_J)$ are nonnegative integers. Expand $(\sum_{j=1}^J \eta_{ij}\beta_j + u_i)^{r_0}$ and each $(\eta_{ij} + \varepsilon_{ij})^{r_j}$ using the multinomial theorem, multiply the expansions together, and take the expected value of the resulting polynomial, factoring the expectations in each term as allowed by Assumption 1(iv). This gives equation (7), which expresses each moment (6) as a polynomial in $\beta$ and the moments of $(u_i, \varepsilon_i, \eta_i)$.
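As an illustration, the following sympy sketch (ours; the moment placeholders and helper function are hypothetical, not the authors' notation) carries out this derivation for $J = 1$ and $(r_0, r_1) = (1, 2)$: it expands $(\beta\eta + u)(\eta + \varepsilon)^2$ and factors expectations termwise using independence and zero means, reproducing $E(\dot y\dot x^2) = \beta E(\eta^3)$, which appears as equation (14) in Section 2.1.

```python
import sympy as sp

b, h, u, e = sp.symbols('beta eta u epsilon')

# Moment placeholders E(eta^k), E(u^k), E(eps^k); first moments vanish.
Eh = {0: 1, 1: 0, 2: sp.Symbol('E_eta2'), 3: sp.Symbol('E_eta3')}
Eu = {0: 1, 1: 0, 2: sp.Symbol('E_u2'), 3: sp.Symbol('E_u3')}
Ee = {0: 1, 1: 0, 2: sp.Symbol('E_eps2'), 3: sp.Symbol('E_eps3')}

def expect(poly):
    """E[poly] when eta, u, eps are mutually independent, Assumption 1(iv)."""
    total = 0
    for term in sp.Add.make_args(sp.expand(poly)):
        coeff = term
        moment = 1
        for var, table in ((h, Eh), (u, Eu), (e, Ee)):
            d = sp.degree(term, var)
            coeff = coeff / var**d       # strip the variable's power
            moment *= table[d]           # replace it with its moment
        total += coeff * moment
    return sp.simplify(total)

# E[(beta*eta + u)(eta + eps)^2] -> beta*E(eta^3), i.e., equation (14).
print(expect((b*h + u) * (h + e)**2))
```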

Let $m \equiv \sum_{j=0}^J r_j$. We will say that equation (7) has moment order equal to $m$, which is the order of its left-hand-side moment. Each term of the sum on the right-hand side of (7) contains a product of moments of $(u_i, \varepsilon_i, \eta_i)$, where the orders of the moments sum to $m$. All terms containing first moments (and therefore also $(m-1)$th-order moments) necessarily vanish. The remaining terms can contain moments of orders $2, \ldots, m-2$ and $m$.

Systems of equations of the form (7) can be written as

$E[g_i(\mu)] = c(\theta),$  (8)

where $\mu \equiv \mathrm{vec}(\mu_y, \mu_x)$, $g_i(\mu)$ is a vector of distinct elements of the form $(y_i - z_i\mu_y)^{r_0}\prod_{j=1}^J (x_{ij} - z_i\mu_{xj})^{r_j}$, the elements of $c(\theta)$ are the corresponding right-hand sides of (7), and $\theta$ is a vector containing those elements of $\beta$ and those moments of $(u, \varepsilon, \eta)$ appearing in $c(\theta)$.


The number and type of elements in $\theta$ depend on which instances of (7) are included in (8). First-order moments, and moments appearing in the included equations only in terms containing a first-moment factor, are excluded from $\theta$. Example systems are given in Section 2.1.

Equation (8) implies $E[g_i(\mu)] - c(\tau) = 0$ if $\tau = \theta$. There are numerous specifications for (8) and alternative identifying assumptions that further ensure $E[g_i(\mu)] - c(\tau) = 0$ only if $\tau = \theta$. For simplicity we confine ourselves to the following statements, which should be the most useful in application.

DEFINITION 1. Let $M \geq 3$. We will say that (8) is an $S_M$ system if it consists of all second- through $M$th-order moment equations, except possibly those for one or more of $E[(y_i - z_i\mu_y)^M]$, $E[(y_i - z_i\mu_y)^{M-1}]$, $E[(x_{ij} - z_i\mu_{xj})^M]$, and $E[(x_{ij} - z_i\mu_{xj})^{M-1}]$, $j = 1, \ldots, J$.

The number of equations in an $S_M$ system grows as $M$ grows. For fixed $M$, each of the optional equations contains a moment of $u_i$ or $\varepsilon_i$ that is present in no other equation of the system; deleting such an equation from an identified system therefore yields a smaller identified system.

Assumption 2. Every element of $\beta$ is nonzero, and the distribution of $\eta_i$ satisfies $E[(\eta_i c)^3] \neq 0$ for every vector of constants $c = (c_1, \ldots, c_J)'$ having at least one nonzero element.

The assumption that $\beta$ contain no zeros is required to identify all the parameters in $\theta$. We note that Reiersøl (1950) shows for the single-regressor Model B that $\beta$ must be nonzero to be identifiable. Our assumption on $\eta$ is similar to that given by Kapteyn and Wansbeek (1983) and Bekker (1986) for the multivariate Model A. These authors show that $\beta$ is identified if there is no linear combination of the unobserved true regressors that is normally distributed. Assuming that $\eta_i c$ is skewed for every $c \neq 0$ implies, among other things, that not all third-order moments of $\eta_i$ will equal zero and that no nonproduct moment $E(\eta_{ij}^3)$ will equal zero.

PROPOSITION 1. Suppose Assumptions 1 and 2 hold and (8) is an $S_M$ system. This implies $E[g_i(\mu)] - c(\tau) = 0$ for $\tau \in D$ if and only if $\tau = \theta$. Identification then follows from the next assumption:

Assumption 3. $\theta \in \Theta \subset D$, where $\Theta$ is compact.


It should be noted that Assumptions 2 and 3 also identify some systems not included in Definition 1; an example is the system of all third-order moment equations. The theory given subsequently applies to such systems also.

Let $s$ have the same dimension as $\mu$ and define $\bar g(s) \equiv n^{-1}\sum_{i=1}^n g_i(s)$ for all such $s$. The influence function for $\hat\mu$, denoted $\psi_{\mu i}$, is defined as follows.³

LEMMA 1. Let $R_i(s) \equiv \mathrm{vec}[z_i'(y_i - z_i s_y),\, z_i'(x_i - z_i s_x)]$, $Q \equiv I_{J+1} \otimes E(z_i'z_i)$, and $\psi_{\mu i} \equiv Q^{-1}R_i(\mu)$. If Assumption 4 holds, then $E(\psi_{\mu i}) = 0$, $\mathrm{avar}(\hat\mu) = E(\psi_{\mu i}\psi_{\mu i}') < \infty$, and $\sqrt{n}(\hat\mu - \mu) = n^{-1/2}\sum_{i=1}^n \psi_{\mu i} + o_p(1)$. Here $o_p(1)$ denotes a random vector that converges in probability to zero.

The next result applies to all $g_i(\mu)$ as defined at (8).

LEMMA 2. Let $G(s) \equiv E[\partial g_i(s)/\partial s']$. If Assumption 4 holds, then $\sqrt{n}(\bar g(\hat\mu) - E[g_i(\mu)]) \xrightarrow{d} N(0, V)$, where

$V \equiv \mathrm{var}\{g_i(\mu) - E[g_i(\mu)] + G(\mu)\psi_{\mu i}\}.$
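The following sketch (ours; `g_of` and `psi` are hypothetical inputs) shows one way the Lemma 2 adjustment might be computed in practice, estimating $G(\mu)$ by a finite-difference Jacobian of $\bar g$ and forming $V$ from the per-observation adjusted terms:

```python
import numpy as np

def adjusted_cov(g_of, mu_hat, psi, data, eps=1e-6):
    """Estimate V = var[g_i(mu) - E g_i(mu) + G(mu) psi_i] from Lemma 2.

    g_of(mu, data) -> (n, d) array whose rows are g_i(mu); psi is the
    (n, k) array of influence-function values for mu_hat (Lemma 1); a
    central finite difference stands in for G(mu) = E[dg_i(mu)/dmu'].
    """
    g = g_of(mu_hat, data)                      # (n, d)
    n, d = g.shape
    k = mu_hat.size
    G = np.empty((d, k))
    for j in range(k):
        step = np.zeros(k)
        step[j] = eps
        G[:, j] = (g_of(mu_hat + step, data).mean(axis=0)
                   - g_of(mu_hat - step, data).mean(axis=0)) / (2 * eps)
    adj = g - g.mean(axis=0) + psi @ G.T        # per-observation adjusted terms
    return adj.T @ adj / n                      # sample estimate of V
```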

Elements of $G(\mu)$ corresponding to moments of order three or greater are generally nonzero, which is why "partialling" is not innocuous in the context of high-order moment-based estimation. For example, if $g_i(\mu)$ contains $(x_{ij} - z_i\mu_{xj})^3$, the corresponding row of $G(\mu)$ contains $-3E[(x_{ij} - z_i\mu_{xj})^2 z_i]$, which is nonzero because the first element of $z_i$ is the constant 1. The GMM estimator $\hat\theta$ minimizes a quadratic form in $\bar g(\hat\mu) - c(t)$ with a weighting matrix converging in probability to $W$; if $W$ is positive definite, then

(i) $\hat\theta$ exists with probability approaching one and $\hat\theta \xrightarrow{p} \theta$.


$S_M$ systems with the same $M$ are asymptotically equivalent; they differ from each other by optional equations that each contain a parameter present in no other equation of the system. This suggests that in practice one should use, for each $M$, the smallest $S_M$ system containing all parameters of interest.

2.1 Examples of Identifiable Equation Systems

Suppressing the subscript $i$ for clarity, let $\dot y \equiv y - z\mu_y$ and $\dot x_j \equiv x_j - z\mu_{xj}$. Equations for the case $J = 1$ (where we also suppress the $j$ subscript) include

$E(\dot y^2) = \beta^2 E(\eta^2) + E(u^2),$  (10)

$E(\dot y\dot x) = \beta E(\eta^2),$  (11)

$E(\dot x^2) = E(\eta^2) + E(\varepsilon^2),$  (12)

$E(\dot y^2\dot x) = \beta^2 E(\eta^3),$  (13)

$E(\dot y\dot x^2) = \beta E(\eta^3),$  (14)

$E(\dot y^3\dot x) = \beta^3 E(\eta^4) + 3\beta E(\eta^2)E(u^2),$  (15)

$E(\dot y^2\dot x^2) = \beta^2[E(\eta^4) + E(\eta^2)E(\varepsilon^2)] + E(u^2)[E(\eta^2) + E(\varepsilon^2)],$  (16)

$E(\dot y\dot x^3) = \beta[E(\eta^4) + 3E(\eta^2)E(\varepsilon^2)].$  (17)

The first five equations, (10)-(14), constitute an $S_3$ system by Definition 1. This system has five right-hand-side unknowns, $\theta = (\beta, E(\eta^2), E(u^2), E(\varepsilon^2), E(\eta^3))'$. Note that the parameter $E(u^2)$ appears only in (10) and $E(\varepsilon^2)$ appears only in (12). If one or both of these parameters is of no interest, then their associated equations can be omitted from the system without affecting the identification of the resulting smaller $S_3$ system. Omitting both gives the three-equation $S_3$ system consisting of (11), (13), and (14), with $\theta = (\beta, E(\eta^2), E(\eta^3))'$. Further omitting (11) gives a two-equation, two-parameter system that is also identified by Assumptions 2 and 3.

The eight equations (10)-(17) are an $S_4$ system. The corresponding $\theta$ has six elements, obtained by adding $E(\eta^4)$ to the five-element $\theta$ of the system (10)-(14). Note that Definition 1 allows an $S_3$ system to exclude, but requires an $S_4$


system to include, equations (10) and (12). It is seen that these equations are needed to identify the second-order moments $E(u^2)$ and $E(\varepsilon^2)$ that now also appear in the fourth-order moment equations.

For all of the $J = 1$ systems given previously, Assumption 2 specializes to $\beta \neq 0$ and $E(\eta^3) \neq 0$. The negation of this condition can be tested via (13) and (14); simply test the hypothesis that the left-hand sides of these equations equal zero, basing the test statistic on the sample averages $n^{-1}\sum_{i=1}^n \hat y_i^2\hat x_i$ and $n^{-1}\sum_{i=1}^n \hat y_i\hat x_i^2$, where $\hat y_i \equiv y_i - z_i\hat\mu_y$ and $\hat x_{ij} \equiv x_{ij} - z_i\hat\mu_{xj}$. (An appropriate Wald test can be obtained by applying Proposition 5, which follows.) Note that when $\beta \neq 0$ and $E(\eta^3) \neq 0$, then (13) and (14) imply $\beta = E(\dot y^2\dot x)/E(\dot y\dot x^2)$, a result first noted by Geary (1942). Given $\beta$, all of the preceding systems can then be solved for the other parameters in their associated $\theta$.
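A numerical sketch (ours; the data-generating values are arbitrary) of this solve for the five-parameter $S_3$ system, simulating a skewed $\eta$ and recovering $\theta$ from the sample analogs of (10)-(14):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 0.7
z = np.column_stack([np.ones(n), rng.normal(size=n)])
eta = rng.chisquare(3, n) - 3                   # mean zero, E(eta^3) = 24 != 0
chi = z @ np.array([1.0, 0.5]) + eta            # unobserved regressor
x = chi + rng.normal(size=n)                    # proxy: x = chi + eps
y = z @ np.array([0.2, 1.0]) + beta * eta + rng.normal(size=n)

# Partial out z and form the sample residual moments.
P = np.linalg.inv(z.T @ z) @ z.T
yd = y - z @ (P @ y)
xd = x - z @ (P @ x)
m_yx, m_y2x, m_yx2 = (np.mean(yd * xd), np.mean(yd**2 * xd),
                      np.mean(yd * xd**2))

b_hat = m_y2x / m_yx2                           # Geary (1942) ratio, (13)/(14)
E_eta2 = m_yx / b_hat                           # from (11)
E_eta3 = m_yx2 / b_hat                          # from (14)
E_u2 = np.mean(yd**2) - b_hat * m_yx            # from (10)
E_eps2 = np.mean(xd**2) - E_eta2                # from (12)
print(b_hat)                                    # approximately 0.7
```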

An example for the $J = 2$ case is the 13-equation $S_3$ system (18)-(25). The associated $\theta$ consists of 12 parameters: $\beta_1$, $\beta_2$, $E(\eta_1^2)$, $E(\eta_1\eta_2)$, $E(\eta_2^2)$, $E(u^2)$, $E(\varepsilon_1^2)$, $E(\varepsilon_2^2)$, $E(\eta_1^3)$, $E(\eta_1^2\eta_2)$, $E(\eta_1\eta_2^2)$, and $E(\eta_2^3)$. To see how Assumption 2 identifies this system through its third-order moments, substitute (23) and (24) into (22), and substitute (25) into (24), to obtain the equation system

$\begin{pmatrix} \beta_1 E(\eta_1^3) + \beta_2 E(\eta_1^2\eta_2) & \beta_1 E(\eta_1^2\eta_2) + \beta_2 E(\eta_1\eta_2^2) \\ \beta_1 E(\eta_1^2\eta_2) + \beta_2 E(\eta_1\eta_2^2) & \beta_1 E(\eta_1\eta_2^2) + \beta_2 E(\eta_2^3) \end{pmatrix}$  (26)


If the matrix does not have full rank, then it can be postmultiplied by a $c \equiv (c_1, c_2)' \neq 0$ to produce a vector of zeros. Simple algebra shows that such a $c$ must also satisfy

$[c_1 E(\eta_1^2\eta_2) + c_2 E(\eta_1\eta_2^2)] = 0,$  (28)

$\beta_1[c_1 E(\eta_1^3) + c_2 E(\eta_1^2\eta_2)] = 0,$  (29)

$\beta_2[c_1 E(\eta_1\eta_2^2) + c_2 E(\eta_2^3)] = 0.$  (30)

Both elements of $\beta$ are nonzero by Assumption 2, so these equations hold only if the quantities in the square brackets in (28)-(30) all equal zero. But these same quantities appear in

$E[(c_1\eta_1 + c_2\eta_2)^3] \equiv c_1^2[c_1 E(\eta_1^3) + c_2 E(\eta_1^2\eta_2)] + c_2^2[c_1 E(\eta_1\eta_2^2) + c_2 E(\eta_2^3)] + 2c_1c_2[c_1 E(\eta_1^2\eta_2) + c_2 E(\eta_1\eta_2^2)],$  (31)

which Assumption 2 requires to be nonzero for any $c \neq 0$. Thus, (26) can be solved for $\beta$, and, because both elements of $\beta$ are nonzero, (18)-(25) can be solved for the other 10 parameters.
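The grouping in (31) is pure algebra and can be checked mechanically; a quick sympy verification (ours, with placeholder symbols for the four third-order moments of $\eta$):

```python
import sympy as sp

c1, c2 = sp.symbols('c1 c2')
# m30 = E(eta1^3), m21 = E(eta1^2 eta2), m12 = E(eta1 eta2^2), m03 = E(eta2^3)
m30, m21, m12, m03 = sp.symbols('m30 m21 m12 m03')

# Left side: E[(c1*eta1 + c2*eta2)^3] expanded termwise by linearity.
lhs = c1**3*m30 + 3*c1**2*c2*m21 + 3*c1*c2**2*m12 + c2**3*m03
# Right side: the grouping in (31), whose brackets match (28)-(30).
rhs = (c1**2*(c1*m30 + c2*m21) + c2**2*(c1*m12 + c2*m03)
       + 2*c1*c2*(c1*m21 + c2*m12))
print(sp.simplify(lhs - rhs))   # prints 0
```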

We can test the hypothesis that Assumption 2 does not hold. Let $\det_{j3}$ be the determinant of the submatrix consisting of rows $j$ and 3 of (27), and note that $\beta_j = 0$ implies $\det_{j3} = 0$. Because $\det_{j3}$ equals the determinant formed from the corresponding rows of the matrix in (26), one can use the sample moments of $(\hat y_i, \hat x_{i1}, \hat x_{i2})$ and Proposition 5 to test the hypothesis $\det_{13}\cdot\det_{23} = 0$. When this hypothesis is false, then both elements of $\beta$ must be nonzero and (27) must have full rank. For the arbitrary $J$ case, it is straightforward to show that Assumption 2 holds if the product of $J$ analogous determinants, from the matrix representation of the system (A.4)-(A.5) in the Appendix, is nonzero. It should be noted that the tests mentioned in this paragraph do not have power for all points in the parameter space. For example, if $J = 2$ and $\eta_1$ is independent of $\eta_2$, then $\det_{13}\cdot\det_{23} = 0$ even if Assumption 2 holds, because $E(\eta_{i1}^2\eta_{i2}) = E(\eta_{i1}\eta_{i2}^2) = 0$. Because this last condition can also be tested, more powerful, multistage tests should be possible; however, developing these is beyond the scope of this paper.

2.2 Estimating α and the Population Coefficient of Determination

The subvector $\hat\beta$ of $\hat\theta$ can be substituted along with $\hat\mu$ into (4) to obtain an estimate $\hat\alpha$. The asymptotic distribution of $(\hat\alpha', \hat\beta')'$ can be obtained by applying the "delta method" to the asymptotic distribution of $(\hat\mu', \hat\theta')'$. However, the latter distribution is not a by-product of our two-step estimation procedure, because $\hat\theta$ is not estimated jointly with $\hat\mu$. Thus, for example, it is not immediately apparent how to find the asymptotic covariance between $\hat\beta$ and $\hat\mu$. Fortunately, the necessary information can be recovered from the influence functions for $\hat\mu$ and $\hat\theta$. The properties of these functions, given in Lemma 1 and Proposition 2(iii), together with the Lindeberg-Levy central limit theorem and Slutsky's theorem, imply joint asymptotic normality. More generally, let $\hat\gamma$ be any estimator that satisfies $\sqrt{n}(\hat\gamma - \gamma_0) = n^{-1/2}\sum_{i=1}^n \psi_{\gamma i} + o_p(1)$ for some constant vector $\gamma_0$ and some function $\psi_{\gamma i}$.⁴ Then the asymptotic distribution of $(\hat\gamma', \hat\theta')'$ is mean-zero multivariate normal with covariance matrix $\mathrm{var}[(\psi_{\gamma i}', \psi_{\theta i}')']$, and the delta method can be used to obtain the asymptotic distribution of $p(\hat\gamma, \hat\theta)$, where $p$ is any function that is totally differentiable at $(\gamma_0, \theta_0)$. Inference can be conducted if $\mathrm{var}[(\psi_{\gamma i}', \psi_{\theta i}')']$ has sufficient rank and can be consistently estimated.

For an additional example, consider the population coefficient of determination for (1), which can be written

$\rho^2 = \dfrac{\mu_y'\,\mathrm{var}(z_i)\,\mu_y + \beta'\,\mathrm{var}(\eta_i)\,\beta}{\mu_y'\,\mathrm{var}(z_i)\,\mu_y + \beta'\,\mathrm{var}(\eta_i)\,\beta + E(u_i^2)}.$  (32)

Substituting appropriate elements of $\hat\theta$, $\hat\mu$, and $\widehat{\mathrm{var}}(z_i) = n^{-1}\sum_{i=1}^n (z_i - \bar z)'(z_i - \bar z)$ into (32) gives an estimate $\hat\rho^2$. To obtain its asymptotic distribution, define $\tilde z_i$ by $z_i \equiv (1, \tilde z_i)$, let $\widehat{\mathrm{var}}(\tilde z_i) = n^{-1}\sum_{i=1}^n (\tilde z_i - \bar{\tilde z})'(\tilde z_i - \bar{\tilde z})$ and $\hat s \equiv \mathrm{vech}[\widehat{\mathrm{var}}(\tilde z_i)]$, where vech creates a vector from the distinct elements of a symmetric matrix, and then apply the delta method to the distribution of $(\hat s', \hat\mu', \hat\theta')'$. The following result makes possible inference with $\hat\alpha$, $\hat\rho^2$, and other functions of $(\hat s', \hat\mu', \hat\theta')'$.
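For the $J = 1$ case, where $\beta'\mathrm{var}(\eta_i)\beta$ reduces to $\beta^2 E(\eta^2)$, the plug-in step in (32) is a one-liner; a sketch (ours, with hypothetical argument names):

```python
import numpy as np

def r2_plugin(mu_y, var_z, beta, E_eta2, E_u2):
    """Plug-in estimate of the population R^2 in (32) for J = 1.

    mu_y: (L+1,) OLS coefficients of y on z; var_z: (L+1, L+1) sample
    covariance of z (row and column for the constant are zero); beta,
    E_eta2, E_u2: second-step estimates from the S_M system.
    """
    explained = mu_y @ var_z @ mu_y + beta**2 * E_eta2
    return explained / (explained + E_u2)
```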

2.3 Testing Hypotheses about Residual Moments

Section 2.1 showed that Assumption 2 implies restrictions on the residual moments of the observable variables. Such restrictions can be tested using the corresponding sample moments and the distribution of $\bar g(\hat\mu)$ in Lemma 2. Wald-statistic null distributions are given in the next result; like Lemma 2, it depends only on Assumption 4.

PROPOSITION 5. Suppose $g_i(\mu)$ is $d \times 1$. Let $v(w)$ be an $m \times 1$ vector of continuously differentiable functions defined on $\mathbb{R}^d$ such that $m \leq d$ and $V(w) \equiv \partial v(w)/\partial w'$ has full row rank at $w = E[g_i(\mu)]$. Also, let $v_0 \equiv v(E[g_i(\mu)])$, $\hat v \equiv v(\bar g(\hat\mu))$, $\hat V \equiv V(\bar g(\hat\mu))$, and let $\hat\Omega$ be a consistent estimate of the covariance matrix $V$ of Lemma 2. If Assumption 4 holds and that covariance matrix is nonsingular, then $n(\hat v - v_0)'(\hat V\hat\Omega\hat V')^{-1}(\hat v - v_0)$ converges in distribution to a chi-square random variable with $m$ degrees of freedom.

For an example, recall that equations (10)-(17) satisfy Assumption 2 if $\beta \neq 0$ and $E(\eta_i^3) \neq 0$, which by (13) and (14) is true if and only if the null $E(\dot y^2\dot x) = E(\dot y\dot x^2) = 0$ is false. To test this hypothesis, let $v_0 \equiv v(E[g_i(\mu)])$ be a $2 \times 1$ vector consisting of the left-hand sides of (13) and (14) and $\hat v \equiv v(\bar g(\hat\mu))$ be a $2 \times 1$ vector consisting of $n^{-1}\sum_{i=1}^n \hat y_i^2\hat x_i$ and $n^{-1}\sum_{i=1}^n \hat y_i\hat x_i^2$.
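A sketch (ours) of this two-moment Wald test; `V_hat` is assumed to be a $2 \times 2$ estimate of the Lemma 2 covariance for these two moments (for instance from an adjustment like the `adjusted_cov` sketch above), so the statistic is the $m = 2$ case of Proposition 5 with $v$ the identity map on these coordinates. Rejecting the null favors the identifying condition $\beta \neq 0$, $E(\eta^3) \neq 0$.

```python
import numpy as np
from scipy import stats

def wald_third_moments(yd, xd, V_hat):
    """Wald test of E(ydot^2 xdot) = E(ydot xdot^2) = 0, cf. (13)-(14).

    yd, xd: estimated residuals y_i - z_i mu_y_hat and x_i - z_i mu_x_hat;
    V_hat: 2x2 estimate, adjusted as in Lemma 2, of the asymptotic
    covariance of the two sample moments below.
    """
    n = yd.size
    v_hat = np.array([np.mean(yd**2 * xd), np.mean(yd * xd**2)])
    W = n * v_hat @ np.linalg.solve(V_hat, v_hat)
    p_value = stats.chi2.sf(W, df=2)            # chi-square(2) null
    return W, p_value
```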

3 ALTERNATIVE GMM ESTIMATORS

In the introduction we alluded to asymptotically more efficient one-step estimation. One approach is to estimate $\mu$ and $\theta$ jointly. Recall the definition of $R_i(s)$ given in Lemma 1 and note that $\hat\mu$ solves $n^{-1}\sum_{i=1}^n R_i(s) = 0$. Therefore $\hat\mu$ is the GMM estimator implied by the moment condition $E[R_i(s)] = 0$ if and only if $s = \mu$. This immediately suggests GMM estimation based on the "stacked" moment condition

$E\begin{pmatrix} R_i(s) \\ g_i(s) - c(t) \end{pmatrix} = 0 \quad\text{if and only if}\quad (s, t) = (\mu, \theta).$  (33)
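A sketch (ours; `c_of` and the choice of the three-equation $S_3$ moment vector are illustrative assumptions) of the stacked sample moment vector in (33) for $J = 1$, which a one-step GMM routine would then plug into a quadratic form:

```python
import numpy as np

def stacked_moments(params, y, x, z, c_of):
    """Average of the stacked moments [R_i(s); g_i(s) - c(t)] in (33), J = 1.

    params: concatenation of s = (s_y, s_x), each (L+1,), and t, the
    theta-vector for the chosen system; c_of(t) returns c(t), here for
    the three-equation S_3 system (11), (13), (14).
    """
    k = z.shape[1]
    s_y, s_x, t = params[:k], params[k:2*k], params[2*k:]
    yd = y - z @ s_y
    xd = x - z @ s_x
    R_bar = np.concatenate([z.T @ yd, z.T @ xd]) / y.size    # mean of R_i(s)
    g_bar = np.array([np.mean(yd * xd), np.mean(yd**2 * xd),
                      np.mean(yd * xd**2)])                  # mean of g_i(s)
    return np.concatenate([R_bar, g_bar - c_of(t)])
```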

Minimum variance estimators $(\tilde\mu, \tilde\theta)$ are obtained by minimizing a quadratic form in $(n^{-1}\sum_{i=1}^n R_i(s)',\, n^{-1}\sum_{i=1}^n g_i(s)' - c(t)')'$, where the matrix of the quadratic form is a consistent estimate of the inverse of $\mathrm{var}[(R_i(\mu)', g_i(\mu)')']$. The asymptotic superiority of this estimator may not be accompanied by finite-sample superiority, however. We compare the performance of stacked and two-step estimators in the Monte Carlo experiments of the next section and find that neither is superior for all parameters. The same experiments show that the difference between the nominal and actual size of a test, particularly the J-test of overidentifying restrictions, can be much larger for the stacked estimator. Another practical shortcoming of this estimator is that the computer code must be substantially rewritten for each change in the number of perfectly measured regressors, which makes searches over alternative specifications costly. Note also that calculating $n^{-1}\sum_{i=1}^n R_i(\mu_{\text{iter}})$ and $n^{-1}\sum_{i=1}^n g_i(\mu_{\text{iter}})$ for a new value $\mu_{\text{iter}}$ at each iteration of the minimization algorithm (in contrast to using the OLS value $\hat\mu$ for every iteration) greatly increases computation time, making bootstraps or Monte Carlo simulations very time consuming. For example, our stacked estimator simulation took 31 times longer to run than the otherwise identical simulation using two-step estimators. Jointly estimating $\mathrm{var}(z_i)$ with $\mu$ and $\theta$, to obtain asymptotically more efficient estimates of $\rho^2$ or other parameters, would amplify these problems.

Another alternative estimator is given by Lewbel (1997), who demonstrates that GMM estimators can exploit information contained in perfectly measured regressors. To describe his idea for the case $J = 1$, define $f(z_i) \equiv F(z_i) - \ldots$
