Econometric Theory and Methods, Russell Davidson - Chapter 15

Chapter 15
Testing the Specification of Econometric Models

15.1 Introduction

As we first saw in Section 3.7, estimating a misspecified regression model generally yields biased and inconsistent parameter estimates. This is true for regression models whenever we incorrectly omit one or more regressors that are correlated with the regressors included in the model. Except in certain special cases, some of which we have discussed, it is also true for more general types of model and more general types of misspecification. This suggests that the specification of every econometric model should be thoroughly tested before we even tentatively accept its results.

We have already discussed a large number of procedures that can be used as specification tests. These include $t$ and $F$ tests for omitted variables and for parameter constancy (Section 4.4), along with similar tests for nonlinear regression models (Section 6.7) and IV regression (Section 8.5), tests for heteroskedasticity (Section 7.5), tests for serial correlation (Section 7.7), tests of common factor restrictions (Section 7.9), DWH tests (Section 8.7), tests of overidentifying restrictions (Sections 8.6, 9.4, 9.5, 12.4, and 12.5), and the three classical tests for models estimated by maximum likelihood, notably LM tests (Section 10.6).

In this chapter, we discuss a number of other procedures that are designed for testing the specification of econometric models. Some of these procedures explicitly involve testing a model against a less restricted alternative. Others do not make the alternative explicit and are intended to have power against a large number of plausible alternatives. In the next section, we discuss a variety of tests that are based on artificial regressions. Then, in Section 15.3, we discuss nonnested hypothesis tests, which are designed to test the specification of a model when alternative models are available. In Section 15.4, we discuss model selection based on information criteria. Finally, in Section 15.5, we introduce the concept of nonparametric estimation. Nonparametric methods avoid specification errors caused by imposing an incorrect functional form, and the validity of parametric models can be checked by comparing them with nonparametric ones.

15.2 Specification Tests Based on Artificial Regressions

In previous chapters, we have encountered numerous examples of artificial regressions. These include the Gauss-Newton regression (Section 6.7) and its heteroskedasticity-robust variant (Section 6.8), the OPG regression (Section 10.5), and the binary response model regression (Section 11.3). We can write any of these artificial regressions as

$$ r(\theta) = R(\theta)b + \text{residuals}, \qquad (15.01) $$

where $\theta$ is a parameter vector of length $k$, $r(\theta)$ is a vector, often but by no means always of length equal to the sample size $n$, and $R(\theta)$ is a matrix with as many rows as $r(\theta)$ and $k$ columns. For example, in the case of the GNR, $r(\theta)$ is a vector of residuals, written as a function of the data and parameters, and $R(\theta)$ is a matrix of derivatives of the regression function with respect to the parameters.
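To make the generic form (15.01) concrete, the following Python sketch constructs $r(\theta)$ and $R(\theta)$ for the GNR of a simple nonlinear regression model. The exponential regression function, the helper name, and the use of numpy are illustrative assumptions of ours, not part of the original text.

```python
import numpy as np

# A minimal sketch of the artificial regression (15.01) for the GNR.
# The model y_t = b1 * exp(b2 * x_t) + u_t is hypothetical, chosen only
# to make the r(theta), R(theta) notation concrete.
def gnr_pieces(theta, y, x):
    """Return the regressand r(theta) and regressor matrix R(theta)."""
    b1, b2 = theta
    r = y - b1 * np.exp(b2 * x)          # r(theta): residuals as a function of theta
    R = np.column_stack([
        np.exp(b2 * x),                  # derivative of the regression function w.r.t. b1
        b1 * x * np.exp(b2 * x),         # derivative of the regression function w.r.t. b2
    ])
    return r, R
```

Regressing $r(\theta)$ on $R(\theta)$ by OLS then gives the artificial regression (15.01) for this particular model.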
In order for (15.01) to be a valid artificial regression, the vector $r(\theta)$ and the matrix $R(\theta)$ must satisfy certain properties, which all of the artificial regressions we have studied do satisfy. These properties are given in outline in Exercise 8.20, and we restate them more formally here. We use a notation that was introduced in Section 9.5, whereby $\mathbb{M}$ denotes a model, $\mu$ denotes a DGP which belongs to that model, and $\operatorname{plim}^{\mu}$ means a probability limit taken under the DGP $\mu$; see the discussion in Section 9.5.

An artificial regression of the form (15.01) corresponds to a model $\mathbb{M}$ with parameter vector $\theta$, and to a root-$n$ consistent, asymptotically normal estimator $\hat\theta$ of that parameter vector, if and only if the following three conditions are satisfied. For the last two of these conditions, $\acute\theta$ may be any root-$n$ consistent estimator, not necessarily the same as $\hat\theta$.

• The artificial regressand and the artificial regressors are orthogonal when evaluated at $\hat\theta$; that is, $R^\top(\hat\theta)\,r(\hat\theta) = 0$.

• Under any DGP $\mu \in \mathbb{M}$, the asymptotic covariance matrix of $\hat\theta$ is given either by

$$ \operatorname{Var}\Bigl(\operatorname*{plim}_{n\to\infty}{}^{\mu}\, n^{1/2}(\hat\theta - \theta_\mu)\Bigr) = \operatorname*{plim}_{n\to\infty}{}^{\mu} \bigl(N^{-1} R^\top(\acute\theta)R(\acute\theta)\bigr)^{-1}, \qquad (15.02) $$

where $\theta_\mu$ is the true parameter vector for the DGP $\mu$, $n$ is the sample size, and $N$ is the number of rows of $r$ and $R$, or by

$$ \operatorname{Var}\Bigl(\operatorname*{plim}_{n\to\infty}{}^{\mu}\, n^{1/2}(\hat\theta - \theta_\mu)\Bigr) = \operatorname*{plim}_{n\to\infty}{}^{\mu}\, \acute{s}^2 \bigl(N^{-1} R^\top(\acute\theta)R(\acute\theta)\bigr)^{-1}, \qquad (15.03) $$

where $\acute{s}^2$ is the OLS estimate of the error variance obtained by running regression (15.01) with $\theta = \acute\theta$.

• The artificial regression allows for one-step estimation, in the sense that, if $\acute{b}$ denotes the vector of OLS parameter estimates obtained by running regression (15.01) with $\theta = \acute\theta$, then, under any DGP $\mu \in \mathbb{M}$,

$$ \operatorname*{plim}_{n\to\infty}{}^{\mu}\, n^{1/2}(\acute\theta + \acute{b} - \theta_\mu) = \operatorname*{plim}_{n\to\infty}{}^{\mu}\, n^{1/2}(\hat\theta - \theta_\mu). \qquad (15.04) $$

Equivalently, making use of the $O_p$ notation introduced in Section 14.2, the property (15.04) may be expressed as $\acute\theta + \acute{b} = \hat\theta + O_p(n^{-1})$.

The Gauss-Newton regression for a nonlinear regression model, together with the least-squares estimator of the parameters of the model, satisfies the above conditions. For the GNR, the asymptotic covariance matrix is given by equation (15.03). The OPG regression for any model that can be estimated by maximum likelihood, together with the ML estimator of its parameters, also satisfies the above conditions, but the asymptotic covariance matrix is given by equation (15.02). See Davidson and MacKinnon (2001) for a more detailed discussion of artificial regressions.
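The one-step property (15.04) is easy to illustrate numerically. The sketch below, built on the same hypothetical exponential model as the previous listing, starts from a deliberately perturbed root-$n$ consistent estimate and takes one GNR step; under these assumptions the result should be close to the NLS estimate $\hat\theta$.

```python
import numpy as np

def gnr_pieces(theta, y, x):
    # Same hypothetical GNR pieces as in the previous sketch.
    b1, b2 = theta
    r = y - b1 * np.exp(b2 * x)
    R = np.column_stack([np.exp(b2 * x), b1 * x * np.exp(b2 * x)])
    return r, R

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 2.0, n)
theta_true = np.array([1.0, 0.5])
y = theta_true[0] * np.exp(theta_true[1] * x) + rng.normal(0.0, 0.1, n)

# One-step estimation: theta-acute + b-acute, with b-acute from one GNR.
theta_rough = theta_true + rng.normal(0.0, n ** -0.5, 2)  # a root-n consistent start
r, R = gnr_pieces(theta_rough, y, x)
b = np.linalg.lstsq(R, r, rcond=None)[0]                  # OLS estimates b-acute
print(theta_rough + b)   # close to the NLS estimate theta-hat, per (15.04)
```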
Now consider the artificial regression

$$ r(\acute\theta) = R(\acute\theta)b + Z(\acute\theta)c + \text{residuals}, \qquad (15.05) $$

where $\acute{Z} \equiv Z(\acute\theta)$ is a matrix with $r$ columns that depends on the same sample data and parameter estimates as $\acute{r} \equiv r(\acute\theta)$ and $\acute{R} \equiv R(\acute\theta)$. We have previously encountered instances of regressions like (15.05), where both $R(\theta)$ and $Z(\theta)$ were matrices of derivatives, with $R(\theta)$ corresponding to the parameters of a restricted version of the model and $Z(\theta)$ corresponding to additional parameters that appear only in the unrestricted model. In such a case, if the root-$n$ consistent estimator $\acute\theta$ satisfies the restrictions, then running an artificial regression like (15.05) and testing the hypothesis that $c = 0$ provides a way of testing those restrictions; recall the discussion in Section 6.7 in the context of the GNR. In many cases, $\acute\theta$ is conveniently chosen as the vector of estimates from the restricted model.

A great many specification tests may be based on artificial regressions of the form (15.05). The null hypothesis under test is that the model $\mathbb{M}$ to which regression (15.01) corresponds is correctly specified. It is not necessary that the matrix $\acute{Z}$ should explicitly be a matrix of derivatives. In fact, any matrix $Z(\theta)$ which satisfies the following three conditions can be used in (15.05) to obtain a valid specification test.

R1. For every DGP $\mu \in \mathbb{M}$,

$$ \operatorname*{plim}_{n\to\infty}{}^{\mu}\, N^{-1} Z^\top(\theta_\mu)\,r(\theta_\mu) = 0. \qquad (15.06) $$

A sufficient condition for (15.06) to hold is $E_\mu\bigl(Z_t^\top(\theta_\mu)\,r_t(\theta_\mu)\bigr) = 0$ for all $t = 1, \ldots, N$, where $N$ is the number of elements of $r$, and $Z_t$ and $r_t$ are, respectively, the $t$-th row and $t$-th element of $Z$ and $r$.

R2. Let $r_\mu$, $R_\mu$, and $Z_\mu$ denote $r(\theta_\mu)$, $R(\theta_\mu)$, and $Z(\theta_\mu)$, respectively. Then, for any $\mu \in \mathbb{M}$, if the asymptotic covariance matrix is given by (15.02), the matrix

$$ \operatorname*{plim}_{n\to\infty}{}^{\mu}\, \frac{1}{n} \begin{bmatrix} R_\mu^\top R_\mu & R_\mu^\top Z_\mu \\ Z_\mu^\top R_\mu & Z_\mu^\top Z_\mu \end{bmatrix} \qquad (15.07) $$

is the asymptotic covariance matrix of the vector $n^{-1/2}\begin{bmatrix} R_\mu^\top r_\mu \\ Z_\mu^\top r_\mu \end{bmatrix}$, which is required to be asymptotically multivariate normal. If instead the asymptotic covariance matrix is given by equation (15.03), then the matrix (15.07) must be multiplied by the probability limit of the estimated error variance from the artificial regression.

R3. The Jacobian matrix containing the partial derivatives of the elements of the vector $n^{-1} Z^\top(\theta)\,r(\theta)$ with respect to the elements of $\theta$, evaluated at $\theta_\mu$, is asymptotically equal, under the DGP $\mu$, to $-n^{-1} Z_\mu^\top R_\mu$. Formally, this Jacobian matrix is equal to $-n^{-1} Z_\mu^\top R_\mu + O_p(n^{-1/2})$.

Since a proof of the sufficiency of these conditions requires a good deal of algebra, we relegate it to a technical appendix. When these conditions are satisfied, we can test the correct specification of the model $\mathbb{M}$ against an alternative in which equation (15.06) does not hold by testing the hypothesis that $c = 0$ in regression (15.05). If the asymptotic covariance matrix is given by equation (15.02), then the difference between the explained sum of squares from regression (15.05) and the ESS from regression (15.01), evaluated at $\acute\theta$, must be asymptotically distributed as $\chi^2(r)$ under the null hypothesis. This is not true when the asymptotic covariance matrix is given by equation (15.03), in which case we can use an asymptotic $t$ test if $r = 1$ or an asymptotic $F$ test if $r > 1$.

The RESET Test

One of the oldest specification tests for linear regression models, but one that is still widely used, is the regression specification error test, or RESET test, which was originally proposed by Ramsey (1969). The idea is to test the null hypothesis that

$$ y_t = X_t\beta + u_t, \qquad u_t \sim \mathrm{IID}(0, \sigma^2), \qquad (15.08) $$

where the explanatory variables $X_t$ are predetermined with respect to the error terms $u_t$, against the rather vaguely specified alternative that $E(y_t \mid X_t)$ is a nonlinear function of the elements of $X_t$. The simplest version of RESET involves regressing $y_t$ on $X_t$ to obtain fitted values $X_t\hat\beta$ and then running the regression

$$ y_t = X_t\beta + \gamma (X_t\hat\beta)^2 + u_t. \qquad (15.09) $$

The test statistic is the ordinary $t$ statistic for $\gamma = 0$.

At first glance, the RESET procedure may not seem to be based on an artificial regression. But it is easy to show (Exercise 15.2) that the $t$ statistic for $\gamma = 0$ in regression (15.09) is identical to the $t$ statistic for $c = 0$ in the GNR

$$ \hat{u}_t = X_t b + c\,(X_t\hat\beta)^2 + \text{residual}, \qquad (15.10) $$

where $\hat{u}_t$ is the $t$-th residual from regression (15.08).
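The simplest RESET variant is straightforward to code. In the sketch below, the function name and the use of numpy's least-squares routine are assumptions of ours rather than the authors' implementation; it returns the ordinary $t$ statistic for $\gamma = 0$ in regression (15.09).

```python
import numpy as np

def reset_test(y, X):
    """t statistic for gamma = 0 in the RESET regression (15.09).

    X is assumed to contain a constant column. No small-sample
    refinements are attempted in this sketch.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    Xa = np.column_stack([X, (X @ beta) ** 2])   # add squared fitted values
    coef = np.linalg.lstsq(Xa, y, rcond=None)[0]
    resid = y - Xa @ coef
    n, k = Xa.shape
    s2 = resid @ resid / (n - k)                 # OLS estimate of the error variance
    cov = s2 * np.linalg.inv(Xa.T @ Xa)
    return coef[-1] / np.sqrt(cov[-1, -1])       # t statistic for gamma = 0
```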
The test regression (15.10) is clearly a special case of the artificial regression (15.05), with $\hat\beta$ playing the role of $\acute\theta$ and $(X_t\hat\beta)^2$ playing the role of $\acute{Z}$. It is not hard to check that the three conditions for a valid specification test regression are satisfied. First, the predeterminedness of $X_t$ implies that $E\bigl((X_t\beta_0)^2 (y_t - X_t\beta_0)\bigr) = 0$, where $\beta_0$ is the true parameter vector, so that condition R1 holds. Condition R2 is equally easy to check. For condition R3, let $z(\beta)$ be the $n$-vector with typical element $(X_t\beta)^2$. Then the derivative of $n^{-1} z^\top(\beta)(y - X\beta)$ with respect to $\beta_i$, for $i = 1, \ldots, k$, evaluated at $\beta_0$, is

$$ \frac{2}{n} \sum_{t=1}^{n} X_t\beta_0\, x_{ti}\, u_t \;-\; \frac{1}{n} \sum_{t=1}^{n} (X_t\beta_0)^2 x_{ti}. $$

The first term above is $n^{-1/2}$ times an expression which, by a central limit theorem, is asymptotically normal with mean zero and finite variance. It is therefore $O_p(n^{-1/2})$. The second term is an element of the vector $-n^{-1} z^\top(\beta_0)X$. Thus condition R3 holds, and the RESET test, implemented by either of the regressions (15.09) or (15.10), is seen to be asymptotically valid.

Actually, the RESET test is not merely valid asymptotically. It is exact in finite samples whenever the model that is being tested satisfies the strong assumptions needed for $t$ statistics to have their namesake distribution; see Section 4.4 for a statement of those assumptions. To see why, note that the vector of fitted values $X\hat\beta$ is orthogonal to the residual vector $\hat{u}$, so that $\hat\beta^\top X^\top \hat{u} = 0$. Under the assumption of normal errors, it follows that $X\hat\beta$ is independent of $\hat{u}$. As Milliken and Graybill (1970) first showed, and as readers are invited to show in Exercise 15.3, this implies that the $t$ statistic for $c = 0$ yields an exact test under classical assumptions.

Like most specification tests, the RESET procedure is designed to have power against a variety of alternatives. However, it can also be derived as a test against a specific alternative. Suppose that

$$ y_t = \tau(\delta X_t\beta)/\delta + u_t, \qquad (15.11) $$

where $\delta$ is a scalar parameter, and $\tau(x)$ may be any scalar function that is monotonically increasing in its argument $x$ and satisfies the conditions $\tau(0) = 0$, $\tau'(0) = 1$, and $\tau''(0) \neq 0$, where $\tau'(0)$ and $\tau''(0)$ are the first and second derivatives of $\tau(x)$, evaluated at $x = 0$. A simple example of such a function is $\tau(x) = x + x^2$.

We first encountered the family of functions $\tau(\cdot)$ in Section 11.3, in connection with tests of the functional form of binary response models. By l'Hôpital's Rule, the nonlinear regression model (15.11) reduces to the linear regression model (15.08) when $\delta = 0$. It is not hard to show, using equations (11.29), that the GNR for testing the null hypothesis that $\delta = 0$ is

$$ y_t - X_t\hat\beta = X_t b + c\,(X_t\hat\beta)^2\, \tfrac{1}{2}\tau''(0) + \text{residual}, $$

which, since $\tau''(0)/2$ is just a constant, is equivalent to regression (15.10). Thus RESET can be derived as a test of $\delta = 0$ in the nonlinear regression model (15.11). For more details, see MacKinnon and Magee (1990), which also discusses some other specification tests that can be used to test (15.08) against nonlinear models involving transformations of the dependent variable.
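As a quick numerical illustration of Exercise 15.2, the following sketch, on simulated data of our own devising, checks that the $t$ statistic for $\gamma = 0$ in regression (15.09) and the $t$ statistic for $c = 0$ in the GNR (15.10) coincide.

```python
import numpy as np

def tstat_last(regressand, regressors):
    """Ordinary t statistic for the last coefficient in an OLS regression."""
    coef = np.linalg.lstsq(regressors, regressand, rcond=None)[0]
    resid = regressand - regressors @ coef
    n, k = regressors.shape
    cov = (resid @ resid / (n - k)) * np.linalg.inv(regressors.T @ regressors)
    return coef[-1] / np.sqrt(cov[-1, -1])

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
uhat = y - X @ beta                          # residuals from (15.08)
W = np.column_stack([X, (X @ beta) ** 2])    # regressors of (15.09) and (15.10)

print(tstat_last(y, W))     # t statistic for gamma = 0 in (15.09)
print(tstat_last(uhat, W))  # t statistic for c = 0 in (15.10); identical
```

The two statistics agree because, by the FWL Theorem, both regressions produce the same coefficient on the squared fitted values and the same residuals.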
Some versions of the RESET procedure add the cube, and sometimes also the fourth power, of $X_t\hat\beta$ to the test regression (15.09). This makes no sense if the alternative is (15.11), but it may give the test more power against some other alternatives. In general, however, we recommend the simplest version of the test, namely, the $t$ test for $\gamma = 0$ in regression (15.09).

Conditional Moment Tests

If a model $\mathbb{M}$ is correctly specified, many random quantities that are functions of the dependent variable(s) should have expectations of zero. Often, these expectations are taken conditional on some information set. For example, in the linear regression model (15.08), the expectation of the error term $u_t$, conditional on any variable in the information set $\Omega_t$ relative to which the model is supposed to give the conditional mean of $y_t$, should be equal to zero. For any $z_t$ that belongs to $\Omega_t$, therefore, we have $E(z_t u_t) = 0$ for all observations $t$. This sort of requirement, following from the hypothesis that $\mathbb{M}$ is correctly specified, is known as a moment condition.

A moment condition is purely theoretical. However, we can often calculate the empirical counterpart of a moment condition and use it as the basis of a conditional moment test. For a linear regression, of course, we already know how to perform such a test: We add $z_t$ to regression (15.08) and use the $t$ statistic to test the hypothesis that this additional regressor has a coefficient of 0. More generally, consider a moment condition of the form

$$ E_\theta\bigl(m_t(y_t, \theta)\bigr) = 0, \qquad t = 1, \ldots, n, \qquad (15.12) $$

where $y_t$ is the dependent variable, and $\theta$ is the vector of parameters for the model $\mathbb{M}$. As the notation implies, the expectation in (15.12) is computed using a DGP in $\mathbb{M}$ with parameter vector $\theta$. The $t$ subscript on the moment function $m_t(y_t, \theta)$ indicates that, in general, moment functions also depend on exogenous or predetermined variables. Equation (15.12) implies that the $m_t(y_t, \theta)$ are elementary zero functions in the sense of Section 9.5.

We cannot test whether condition (15.12) holds for each observation, but we can test whether it holds on average. Since we will be interested in asymptotic tests, it is natural to consider the probability limit of the average. Thus we can replace (15.12) by the somewhat weaker condition

$$ \operatorname*{plim}_{n\to\infty}{}^{\mu}\, \frac{1}{n} \sum_{t=1}^{n} m_t(y_t, \theta) = 0. \qquad (15.13) $$

The empirical counterpart of the left-hand side of condition (15.13) is

$$ m(y, \hat\theta) \equiv \frac{1}{n} \sum_{t=1}^{n} m_t(y_t, \hat\theta), \qquad (15.14) $$

where $y$ denotes the vector with typical element $y_t$, and $\hat\theta$ denotes a vector of estimates of $\theta$ from the model under test. The quantity $m(y, \hat\theta)$ is referred to as an empirical moment. We wish to test whether its value is significantly different from zero. In order to do so, we need an estimate of the variance of $m(y, \hat\theta)$.

It might seem that, since the empirical moment is just the sample mean of the $m_t(y_t, \hat\theta)$, this variance could be consistently estimated by the usual sample variance,

$$ \frac{1}{n-1} \sum_{t=1}^{n} \bigl(m_t(y_t, \hat\theta) - m(y, \hat\theta)\bigr)^2. \qquad (15.15) $$

If $\hat\theta$ were replaced by the true value $\theta_0$ in expression (15.14), then we could indeed use the sample variance (15.15), with $\hat\theta$ replaced by $\theta_0$, to estimate the variance of the empirical moment. But, because the vector $\hat\theta$ is random, on account of its dependence on $y$, we have to take this parameter uncertainty into account when we estimate the variance of $m(y, \hat\theta)$.
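In code, the empirical moment (15.14) and the naive variance estimator (15.15) are one-liners; the sketch below, with a hypothetical interface of our own, is meant only to fix ideas, and, as just explained, the naive variance is generally wrong for $m(y, \hat\theta)$ because it ignores parameter uncertainty.

```python
import numpy as np

def empirical_moment(m_vals):
    """m_vals: array of m_t(y_t, theta-hat) for t = 1, ..., n.

    Returns the empirical moment (15.14) and the naive sample
    variance (15.15), which ignores parameter uncertainty.
    """
    n = m_vals.shape[0]
    mbar = m_vals.mean()                                  # (15.14)
    naive_var = ((m_vals - mbar) ** 2).sum() / (n - 1)    # (15.15)
    return mbar, naive_var
```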
The easiest way to see the effects of parameter uncertainty is to consider conditional moment tests based on artificial regressions. Suppose there is an artificial regression of the form (15.01), in correspondence with the model $\mathbb{M}$ and the estimator $\hat\theta$, which allows us to write the moment function $m_t(y_t, \theta)$ as the product $z_t(y_t, \theta)\,r_t(y_t, \theta)$ of a factor $z_t$ and the regressand $r_t$ of the artificial regression. If the number $N$ of artificial observations is not equal to the sample size $n$, some algebraic manipulation may be needed in order to express the moment functions in a convenient form, but we ignore such problems here and suppose that $N = n$. Now consider the artificial regression of which the typical observation is

$$ r_t(y_t, \theta) = R_t(y_t, \theta)b + c\,z_t(y_t, \theta) + \text{residual}. \qquad (15.16) $$

If the $z_t$ satisfy conditions R1-R3, then the $t$ statistic for $c = 0$ is a valid test statistic whenever equation (15.16) is evaluated at a root-$n$ consistent estimate of $\theta$, in particular, at $\hat\theta$. By applying the FWL Theorem to this equation and taking probability limits, it is not difficult to see that this $t$ statistic is actually testing the hypothesis that

$$ \operatorname*{plim}_{n\to\infty} \frac{1}{n}\, z_0^\top M_{R_0} r_0 = 0, \qquad (15.17) $$

where $z_0 \equiv z(\theta_0)$, $R_0 \equiv R(\theta_0)$, and $M_{R_0}$ is the matrix that projects orthogonally on to $\mathcal{S}^\perp(R_0)$. Asymptotically, equation (15.17) is precisely the moment condition that we wish to test, as can be seen from the following argument:

$$ \begin{aligned} n^{1/2} m(\hat\theta) &= n^{-1/2} z^\top(\hat\theta)\,r(\hat\theta) \\ &= n^{-1/2} z_0^\top r_0 - n^{-1} z_0^\top R_0\, n^{1/2}(\hat\theta - \theta_0) + O_p(n^{-1/2}) \\ &= n^{-1/2} z_0^\top M_{R_0} r_0 + O_p(n^{-1/2}), \end{aligned} \qquad (15.18) $$

where for notational ease we have suppressed the dependence on the dependent variable. The steps leading to (15.18) are very similar to the derivation of a closely related result in the technical appendix, and interested readers are urged to consult the latter. If there were no parameter uncertainty, the second term in the second line above would vanish, and the leading-order term in expression (15.18) would simply be $n^{-1/2} z_0^\top r_0$.

It is clear from expression (15.18) that, as we indicated above, the asymptotic variance of $n^{1/2} m(\hat\theta)$ is smaller than that of $n^{1/2} m(\theta_0)$, because the projection $M_{R_0}$ appears in the leading-order term for the former empirical moment but not in the leading-order term for the latter one. The reduction in variance caused by the projection is a phenomenon analogous to the loss of degrees of freedom in Hansen-Sargan tests caused by the need to estimate parameters; recall the discussion in Section 9.4. Indeed, since moment functions are zero functions, conditional moment tests can be interpreted as tests of overidentifying restrictions.
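A small simulation makes the variance reduction in (15.18) visible. In the hypothetical setup below, the model is linear, $z_t$ is the squared regressor, and the sampled variance of $n^{1/2} m(\hat\theta)$ comes out noticeably smaller than that of $n^{1/2} m(\theta_0)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
z = X[:, 1] ** 2                     # a hypothetical test variable z_t
beta0 = np.array([1.0, 1.0])

m_true, m_hat = [], []
for _ in range(reps):
    u = rng.normal(size=n)
    y = X @ beta0 + u
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    m_true.append(z @ u / np.sqrt(n))                # n^{1/2} m(theta_0)
    m_hat.append(z @ (y - X @ beta) / np.sqrt(n))    # n^{1/2} m(theta-hat)

# The projection M_R0 in (15.18) shrinks the variance of the second moment.
print(np.var(m_true), np.var(m_hat))
```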
Examples of Conditional Moment Tests

Suppose the model under test is the nonlinear regression model (6.01), and the moment functions can be written as

$$ m_t(\beta) = z_t(\beta)\,u_t(\beta), \qquad (15.19) $$

where $u_t(\beta) \equiv y_t - x_t(\beta)$ is the $t$-th residual, and $z_t(\beta)$ is some function of exogenous or predetermined variables and the parameters. We are using $\beta$ instead of $\theta$ to denote the vector of parameter estimates here because the regression function is $x_t(\beta)$. In this case, as we now show, a test of the moment condition (15.13) can be based on the following Gauss-Newton regression:

$$ u_t(\hat\beta) = X_t(\hat\beta)b + c\,z_t(\hat\beta) + \text{residual}, \qquad (15.20) $$

where $\hat\beta$ is the vector of NLS estimates of the parameters, and $X_t(\beta)$ is the $k$-vector of derivatives of $x_t(\beta)$ with respect to the elements of $\beta$.

Since the NLS estimator $\hat\beta$ is root-$n$ consistent and asymptotically normal under the usual regularity conditions for nonlinear regression, all we have to show is that conditions R1-R3 are satisfied by the GNR (15.20). Condition R1 is trivially satisfied, since what it requires is precisely what we wish to test. Condition R2, for the covariance matrix (15.03), follows easily from the fact that $X_t(\beta)$ and $z_t(\beta)$ depend on the data only through exogenous or predetermined variables. Condition R3 requires a little more work, however.

Let $z(\beta)$ and $u(\beta)$ be the $n$-vectors with typical elements $z_t(\beta)$ and $u_t(\beta)$, respectively. The derivative of $n^{-1} z^\top(\beta)u(\beta)$ with respect to any component $\beta_i$ of the vector $\beta$ is

$$ \frac{1}{n}\, \frac{\partial z^\top(\beta)}{\partial \beta_i}\, u(\beta) \;-\; \frac{1}{n}\, z^\top(\beta)\, \frac{\partial x(\beta)}{\partial \beta_i}. \qquad (15.21) $$

Since the elements of $z(\beta)$ are predetermined, so are those of its derivative with respect to $\beta_i$, and since $u(\beta_0)$ is just the vector of error terms, it follows from a law of large numbers that the first term of expression (15.21) tends to zero as $n \to \infty$. In fact, by a central limit theorem, this term is $O_p(n^{-1/2})$. The $n \times k$ matrix $X(\beta)$ has typical column $\partial x(\beta)/\partial \beta_i$. Therefore, the Jacobian matrix of $n^{-1} z^\top(\beta)u(\beta)$ is asymptotically equal to $-n^{-1} z^\top(\beta_0)X(\beta_0)$, which is condition R3 for the GNR (15.20). Thus we conclude that this GNR can be used to test the moment condition (15.13).

The above reasoning can easily be generalized to allow us to test more than one moment condition at a time. Let $Z(\beta)$ denote an $n \times r$ matrix of functions of the data, each column of which is asymptotically orthogonal to the vector $u$ under the null hypothesis that is to be tested, in the sense that $\operatorname{plim} n^{-1} Z^\top(\beta_0)u = 0$. Now consider the artificial regression

$$ u(\hat\beta) = X(\hat\beta)b + Z(\hat\beta)c + \text{residuals}. \qquad (15.22) $$

As readers are asked to show in Exercise 15.5, $n$ times the uncentred $R^2$ from this regression is asymptotically distributed as $\chi^2(r)$ under the null hypothesis. An ordinary $F$ test for $c = 0$ is also asymptotically valid.
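A hedged sketch of the GNR-based test (15.22) follows; the function name and interface are our own, and the statistic returned is $n$ times the uncentred $R^2$, which Exercise 15.5 shows to be asymptotically $\chi^2(r)$ under the null.

```python
import numpy as np

def cm_test_gnr(uhat, Xhat, Zhat):
    """Conditional moment test based on the GNR (15.22).

    uhat: NLS residuals u_t(beta-hat); Xhat: n x k matrix of derivatives
    X_t(beta-hat); Zhat: n x r matrix of test variables. Returns n times
    the uncentred R^2, asymptotically chi^2(r) under the null.
    """
    n = uhat.shape[0]
    W = np.column_stack([Xhat, Zhat])
    coef = np.linalg.lstsq(W, uhat, rcond=None)[0]
    fitted = W @ coef
    return n * (fitted @ fitted) / (uhat @ uhat)   # n * uncentred R^2
```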
Conditional moment tests based on the GNR are often useful for linear and nonlinear regression models, but they evidently cannot be used when the GNR itself is not applicable. With models estimated by maximum likelihood, tests can be based on the OPG regression that was introduced in Section 10.5. This artificial regression applies whenever there is a Type 2 MLE $\hat\theta$ that is root-$n$ consistent and asymptotically normal; see Section 10.3. The OPG regression was originally given in equation (10.72). It is repeated here for convenience with a minor change of notation:

$$ \iota = G(\theta)b + \text{residuals}. \qquad (15.23) $$

The regressand is an $n$-vector of 1s, and the regressor matrix is the matrix of contributions to the gradient, with typical element defined by (10.26). The artificial regression corresponds to the model implicitly defined by the matrix $G(\theta)$, together with the ML estimator $\hat\theta$.

Let $m(\theta)$ be the $n$-vector with typical element the moment function $m_t(y_t, \theta)$ that is to be tested, where once more the notation hides the dependence on the data. Then the testing regression is simplicity itself: We add $m(\theta)$ to regression (15.23) as an extra regressor, obtaining

$$ \iota = G(\theta)b + c\,m(\theta) + \text{residuals}. \qquad (15.24) $$

The test statistic is the $t$ statistic on the extra regressor. The regressors here can be evaluated at any root-$n$ consistent estimator, but it is most common to use the MLE $\hat\theta$.

If several moment conditions are to be tested simultaneously, then we can form the $n \times r$ matrix $M(\theta)$, each column of which is a vector of moment functions. The testing regression is then

$$ \iota = G(\theta)b + M(\theta)c + \text{residuals}. \qquad (15.25) $$

When the regressors are evaluated at the MLE $\hat\theta$, several asymptotically valid test statistics are available, including the explained sum of squares, $n$ times the uncentred $R^2$, and the $F$ statistic for the artificial hypothesis that $c = 0$. The first two of these statistics are distributed asymptotically as $\chi^2(r)$ under the null hypothesis, as is $r$ times the third. If the regressors in equation (15.25) are not evaluated at $\hat\theta$, but at some other root-$n$ consistent estimate, then only the $F$ statistic is asymptotically valid.

The artificial regression (15.23) is valid for a very wide variety of models. Condition R2 requires that we be able to apply a central limit theorem to the scalar product $n^{-1/2} m^\top(\theta_0)\iota$, where, as usual, $\theta_0$ is the true parameter vector. If the expectation of each moment function $m_t(\theta_0)$ is zero conditional on an appropriate information set $\Omega_t$, then it is normally a routine matter to find a suitable central limit theorem. Condition R3 is also satisfied under very mild regularity conditions. What it requires is that the derivatives of $n^{-1} m^\top(\theta)\iota$ with respect to the elements of $\theta$, evaluated at $\theta_0$, should be given by the elements of the vector $-n^{-1} m^\top(\theta_0)G(\theta_0)$, up to a term of order $n^{-1/2}$. Formally, we require that

$$ \frac{1}{n} \sum_{t=1}^{n} \left.\frac{\partial m_t(\theta)}{\partial \theta_i}\right|_{\theta = \theta_0} = -\frac{1}{n} \sum_{t=1}^{n} m_t(\theta_0)\,G_{ti}(\theta_0) + O_p(n^{-1/2}), \qquad (15.26) $$

where $G_{ti}(\theta)$ is the $i$-th element of the $t$-th row $G_t(\theta)$ of $G(\theta)$. Readers are invited in Exercise 15.6 to show that equation (15.26) holds under the usual regularity conditions for ML estimation. This property and its use in conditional moment tests [...]
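To pull the OPG-based procedure together in code, here is a minimal sketch of the testing regression (15.25) evaluated at the MLE; the interface is an assumption of ours, and the statistic returned is the explained sum of squares, which here equals $n$ times the uncentred $R^2$ because the regressand is a vector of 1s.

```python
import numpy as np

def cm_test_opg(G, M):
    """OPG conditional moment test based on regression (15.25) at the MLE.

    G: n x k matrix of contributions to the gradient, evaluated at
    theta-hat; M: n x r matrix of moment functions m_t(theta-hat).
    Returns the ESS, asymptotically chi^2(r) under the null.
    """
    n = G.shape[0]
    iota = np.ones(n)
    W = np.column_stack([G, M])
    coef = np.linalg.lstsq(W, iota, rcond=None)[0]
    fitted = W @ coef
    # Since iota'iota = n, the ESS equals n times the uncentred R^2.
    return fitted @ fitted
```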
