Foundations of Econometrics, Part 9


[Figure 13.1: The stationarity triangle for an AR(2) process, drawn in the $(\rho_1, \rho_2)$ plane with vertices at $(-2, -1)$, $(2, -1)$, and $(0, 1)$.]

The result (13.08) makes it clear that $\rho_1$ and $\rho_2$ are not the autocorrelations of an AR(2) process. Recall that, for an AR(1) process, the same $\rho$ that appears in the defining equation $u_t = \rho u_{t-1} + \varepsilon_t$ is also the correlation of $u_t$ and $u_{t-1}$. This simple result does not generalize to higher-order processes. Similarly, the autocovariances and autocorrelations of $u_t$ and $u_{t-i}$ for $i > 2$ have a more complicated form for AR processes of order greater than 1. They can, however, be determined readily enough by using the Yule-Walker equations. Thus, if we multiply both sides of equation (13.04) by $u_{t-i}$ for any $i \ge 2$, and take expectations, we obtain the equation

$$v_i = \rho_1 v_{i-1} + \rho_2 v_{i-2}.$$

Since $v_0$, $v_1$, and $v_2$ are given by equations (13.08), this equation allows us to solve recursively for any $v_i$ with $i > 2$.
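As a concrete illustration of this recursion, the sketch below computes the autocovariances of an AR(2) process numerically. Because equations (13.08) are not reproduced in this excerpt, the starting values use the standard Yule-Walker solutions for $v_0$ and $v_1$, with $D = (1+\rho_2)\bigl((1-\rho_2)^2 - \rho_1^2\bigr)$ assumed as the denominator; the function name and the NumPy implementation are illustrative only.

```python
import numpy as np

def ar2_autocovariances(rho1, rho2, sigma2_eps, max_lag):
    """Autocovariances v_0, ..., v_{max_lag} (max_lag >= 1) of a stationary AR(2)."""
    # Stationarity triangle of Figure 13.1: rho2 > -1, rho1 + rho2 < 1, rho2 - rho1 < 1
    if not (rho2 > -1 and rho1 + rho2 < 1 and rho2 - rho1 < 1):
        raise ValueError("(rho1, rho2) lies outside the stationarity triangle")
    # Solution of the first Yule-Walker equations (assumed form of (13.08))
    D = (1 + rho2) * ((1 - rho2) ** 2 - rho1 ** 2)
    v = np.empty(max_lag + 1)
    v[0] = sigma2_eps * (1 - rho2) / D
    v[1] = rho1 * v[0] / (1 - rho2)
    # Recursion v_i = rho1 * v_{i-1} + rho2 * v_{i-2} for i >= 2
    for i in range(2, max_lag + 1):
        v[i] = rho1 * v[i - 1] + rho2 * v[i - 2]
    return v

v = ar2_autocovariances(rho1=0.5, rho2=0.3, sigma2_eps=1.0, max_lag=10)
print(v / v[0])   # the implied autocorrelations
```

Dividing by $v_0$ gives the implied autocorrelations, which can be compared with the sample ACF introduced later in this chapter.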
Necessary conditions for the stationarity of the AR(2) process follow directly from equations (13.08). The $3 \times 3$ covariance matrix

$$\begin{pmatrix} v_0 & v_1 & v_2 \\ v_1 & v_0 & v_1 \\ v_2 & v_1 & v_0 \end{pmatrix} \qquad (13.09)$$

of any three consecutive elements of an AR(2) process must be a positive definite matrix. Otherwise, the solution (13.08) to the first three Yule-Walker equations, based on the hypothesis of stationarity, would make no sense. The denominator $D$ evidently must not vanish if this solution is to be finite. In Exercise 13.3, readers are asked to show that the lines along which it vanishes in the plane of $\rho_1$ and $\rho_2$ define the edges of a stationarity triangle such that the matrix (13.09) is positive definite only in the interior of this triangle. The stationarity triangle is shown in Figure 13.1.

Moving Average Processes

A $q$th order moving average, or MA($q$), process with a constant term can be written as

$$y_t = \mu + \alpha_0\varepsilon_t + \alpha_1\varepsilon_{t-1} + \cdots + \alpha_q\varepsilon_{t-q}, \qquad (13.10)$$

where the $\varepsilon_t$ are white noise, and the coefficient $\alpha_0$ is generally normalized to 1 for purposes of identification. The expectation of the $y_t$ is readily seen to be $\mu$, and so we can write

$$u_t \equiv y_t - \mu = \varepsilon_t + \sum_{j=1}^{q}\alpha_j\varepsilon_{t-j} = \bigl(1 + \alpha(L)\bigr)\varepsilon_t,$$

where the polynomial $\alpha$ is defined by $\alpha(z) = \sum_{j=1}^{q}\alpha_j z^j$. The autocovariances of an MA process are much easier to calculate than those of an AR process. Since the $\varepsilon_t$ are white noise, and hence uncorrelated, the variance of the $u_t$ is seen to be

$$\mathrm{Var}(u_t) = E(u_t^2) = \sigma_\varepsilon^2\Bigl(1 + \sum_{j=1}^{q}\alpha_j^2\Bigr). \qquad (13.11)$$

Similarly, the $j$th order autocovariance is, for $j > 0$,

$$E(u_t u_{t-j}) = \begin{cases} \sigma_\varepsilon^2\bigl(\alpha_j + \sum_{i=1}^{q-j}\alpha_{j+i}\alpha_i\bigr) & \text{for } j < q, \\ \sigma_\varepsilon^2\,\alpha_j & \text{for } j = q, \text{ and} \\ 0 & \text{for } j > q. \end{cases} \qquad (13.12)$$

Using (13.12) and (13.11), we can calculate the autocorrelation $\rho(j)$ between $y_t$ and $y_{t-j}$ for $j > 0$.¹ We find that

$$\rho(j) = \frac{\alpha_j + \sum_{i=1}^{q-j}\alpha_{j+i}\alpha_i}{1 + \sum_{i=1}^{q}\alpha_i^2} \ \text{ for } j \le q, \qquad \rho(j) = 0 \ \text{ otherwise}, \qquad (13.13)$$

where it is understood that, for $j = q$, the numerator is just $\alpha_j$. The fact that all of the autocorrelations are equal to 0 for $j > q$ is sometimes convenient, but it suggests that $q$ may often have to be large if an MA($q$) model is to be satisfactory. Expression (13.13) also implies that $q$ must be large if an MA($q$) model is to display any autocorrelation coefficients that are big in absolute value. Recall from Section 7.6 that, for an MA(1) model, the largest possible absolute value of $\rho(1)$ is only 0.5.

¹ The notation $\rho$ is unfortunately in common use both for the parameters of an AR process and for the autocorrelations of an AR or MA process. We therefore distinguish between the parameter $\rho_i$ and the autocorrelation $\rho(j)$.
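The mapping from MA coefficients to autocorrelations in (13.11)-(13.13) is straightforward to program. The sketch below (illustrative function name, NumPy-based) returns the theoretical ACF of an MA($q$) process given $\alpha_1, \ldots, \alpha_q$, with $\alpha_0$ normalized to 1 as in the text.

```python
import numpy as np

def ma_acf(alpha, max_lag):
    """Theoretical autocorrelations rho(1), ..., rho(max_lag) of an MA(q) process
    u_t = eps_t + alpha_1 eps_{t-1} + ... + alpha_q eps_{t-q}, per (13.11)-(13.13)."""
    a = np.concatenate(([1.0], np.asarray(alpha, dtype=float)))  # alpha_0 = 1
    q = len(a) - 1
    denom = np.sum(a ** 2)                      # proportional to Var(u_t) in (13.11)
    rho = np.zeros(max_lag)
    for j in range(1, max_lag + 1):
        if j <= q:
            # numerator of (13.13): sum_{i=0}^{q-j} alpha_{j+i} alpha_i, with alpha_0 = 1
            rho[j - 1] = np.sum(a[j:] * a[:q - j + 1]) / denom
    return rho

print(ma_acf([1.0], 5))   # MA(1) with alpha_1 = 1: [0.5, 0, 0, 0, 0]
```

Running it with a single coefficient $\alpha_1 = 1$ reproduces the MA(1) bound just mentioned: $\rho(1) = 0.5$, with all higher autocorrelations equal to zero.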
If we want to allow for nonzero autocorrelations at all lags, we have to allow $q$ to be infinite. This means replacing (13.10) by the infinite-order moving average process

$$u_t = \varepsilon_t + \sum_{i=1}^{\infty}\alpha_i\varepsilon_{t-i} = \bigl(1 + \alpha(L)\bigr)\varepsilon_t, \qquad (13.14)$$

where $\alpha(L)$ is no longer a polynomial, but rather a (formal) infinite power series in $L$. Of course, this MA($\infty$) process is impossible to estimate in practice. Nevertheless, it is of theoretical interest, provided that

$$\mathrm{Var}(u_t) = \sigma_\varepsilon^2\Bigl(1 + \sum_{i=1}^{\infty}\alpha_i^2\Bigr)$$

is a finite quantity. A necessary and sufficient condition for this to be the case is that the coefficients $\alpha_j$ are square summable, which means that

$$\lim_{q\to\infty}\sum_{i=1}^{q}\alpha_i^2 < \infty. \qquad (13.15)$$

We will implicitly assume that all the MA($\infty$) processes we encounter satisfy condition (13.15).

Any stationary AR($p$) process can be represented as an MA($\infty$) process. We will not attempt to prove this fundamental result in general, but we can easily show how it works in the case of a stationary AR(1) process. Such a process can be written as $(1 - \rho_1 L)u_t = \varepsilon_t$. The natural way to solve this equation for $u_t$ as a function of $\varepsilon_t$ is to multiply both sides by the inverse of $1 - \rho_1 L$. The result is

$$u_t = (1 - \rho_1 L)^{-1}\varepsilon_t. \qquad (13.16)$$

Formally, this is the solution we are seeking. But we need to explain what it means to invert $1 - \rho_1 L$. In general, if $A(L)$ and $B(L)$ are power series in $L$, each including a constant term independent of $L$ that is not necessarily equal to 1, then $B(L)$ is the inverse of $A(L)$ if $B(L)A(L) = 1$. Here the product $B(L)A(L)$ is the infinite power series in $L$ obtained by formally multiplying together the power series $B(L)$ and $A(L)$; see Exercise 13.5. The relation $B(L)A(L) = 1$ then requires that the result of this multiplication should be a series with only one term, the first. Moreover, this term, which corresponds to $L^0$, must equal 1.

We will not consider general methods for inverting a polynomial in the lag operator; see Hamilton (1994) or Hayashi (2000), among many others. In this particular case, though, the solution turns out to be

$$(1 - \rho_1 L)^{-1} = 1 + \rho_1 L + \rho_1^2 L^2 + \cdots. \qquad (13.17)$$

To see this, note that $\rho_1 L$ times the right-hand side of equation (13.17) is the same series without the first term of 1. Thus, as required,

$$(1 - \rho_1 L)^{-1} - \rho_1 L(1 - \rho_1 L)^{-1} = (1 - \rho_1 L)(1 - \rho_1 L)^{-1} = 1.$$

We can now use this result to solve equation (13.16). We find that

$$u_t = \varepsilon_t + \rho_1\varepsilon_{t-1} + \rho_1^2\varepsilon_{t-2} + \cdots. \qquad (13.18)$$

It is clear that (13.18) is a special case of the MA($\infty$) process (13.14), with $\alpha_i = \rho_1^i$ for $i = 0, \ldots, \infty$. Square summability of the $\alpha_i$ is easy to check provided that $|\rho_1| < 1$.

In general, if we can write a stationary AR($p$) process as

$$\bigl(1 - \rho(L)\bigr)u_t = \varepsilon_t, \qquad (13.19)$$

where $\rho(L)$ is a polynomial of degree $p$ in the lag operator, then there exists an MA($\infty$) process

$$u_t = \bigl(1 + \alpha(L)\bigr)\varepsilon_t, \qquad (13.20)$$

where $\alpha(L)$ is an infinite series in $L$ such that $\bigl(1 - \rho(L)\bigr)\bigl(1 + \alpha(L)\bigr) = 1$. This result provides an alternative to the Yule-Walker equations as a way to calculate the variance, autocovariances, and autocorrelations of an AR($p$) process, by using equations (13.11), (13.12), and (13.13) after we have solved for $\alpha(L)$. However, these methods make use of the theory of functions of a complex variable, and so they are not elementary.

The close relationship between AR and MA processes goes both ways. If (13.20) is an MA($q$) process that is invertible, then there exists a stationary AR($\infty$) process of the form (13.19) with $\bigl(1 - \rho(L)\bigr)\bigl(1 + \alpha(L)\bigr) = 1$. The condition for a moving average process to be invertible is formally the same as the condition for an autoregressive process to be stationary; see the discussion around equation (7.36). We require that all the roots of the polynomial equation $1 + \alpha(z) = 0$ must lie outside the unit circle. For an MA(1) process, the invertibility condition is simply that $|\alpha_1| < 1$.
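Although analytical inversion of $1 - \rho(L)$ is not elementary, the MA($\infty$) coefficients can be generated numerically by matching powers of $L$ in $\bigl(1 - \rho(L)\bigr)\bigl(1 + \alpha(L)\bigr) = 1$, which gives the recursion $\alpha_j = \sum_{i=1}^{\min(j,p)}\rho_i\,\alpha_{j-i}$ with $\alpha_0 = 1$. A minimal sketch, with an illustrative function name not taken from the text:

```python
import numpy as np

def ar_to_ma(rho, n_terms):
    """MA(infinity) coefficients alpha_1, ..., alpha_{n_terms} implied by a
    stationary AR(p) process (1 - rho(L)) u_t = eps_t, obtained by matching
    coefficients in (1 - rho(L))(1 + alpha(L)) = 1, with alpha_0 = 1."""
    rho = np.asarray(rho, dtype=float)
    p = len(rho)
    alpha = np.zeros(n_terms + 1)
    alpha[0] = 1.0
    for j in range(1, n_terms + 1):
        # the coefficient of L^j must vanish: alpha_j = sum_i rho_i * alpha_{j-i}
        alpha[j] = sum(rho[i - 1] * alpha[j - i] for i in range(1, min(j, p) + 1))
    return alpha[1:]

print(ar_to_ma([0.6], 5))   # AR(1): geometric coefficients 0.6, 0.36, 0.216, ...
```

For an AR(1) process with $\rho_1 = 0.6$, this reproduces the geometric coefficients of (13.18).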
ARMA Processes

If our objective is to model the evolution of a time series as parsimoniously as possible, it may well be desirable to employ a stochastic process that has both autoregressive and moving average components. This is the autoregressive moving average process, or ARMA process. In general, we can write an ARMA($p, q$) process with nonzero mean as

$$\bigl(1 - \rho(L)\bigr)y_t = \gamma + \bigl(1 + \alpha(L)\bigr)\varepsilon_t, \qquad (13.21)$$

and a process with zero mean as

$$\bigl(1 - \rho(L)\bigr)u_t = \bigl(1 + \alpha(L)\bigr)\varepsilon_t, \qquad (13.22)$$

where $\rho(L)$ and $\alpha(L)$ are, respectively, a $p$th order and a $q$th order polynomial in the lag operator, neither of which includes a constant term. If the process is stationary, the expectation of $y_t$ given by (13.21) is $\mu \equiv \gamma/\bigl(1 - \rho(1)\bigr)$, just as for the AR($p$) process (13.01). Provided the autoregressive part is stationary and the moving average part is invertible, an ARMA($p, q$) process can always be represented as either an MA($\infty$) or an AR($\infty$) process.

The most commonly encountered ARMA process is the ARMA(1, 1) process, which, when there is no constant term, has the form

$$u_t = \rho_1 u_{t-1} + \varepsilon_t + \alpha_1\varepsilon_{t-1}. \qquad (13.23)$$

This process has one autoregressive and one moving average parameter. The Yule-Walker method can be extended to compute the autocovariances of an ARMA process. We illustrate this for the ARMA(1, 1) case and invite readers to generalize the procedure in Exercise 13.6. As before, we denote the $i$th autocovariance by $v_i$, and we let $E(u_t\varepsilon_{t-i}) = w_i$, for $i = 0, 1, \ldots$. Note that $E(u_t\varepsilon_s) = 0$ for all $s > t$. If we multiply (13.23) by $\varepsilon_t$ and take expectations, we see that $w_0 = \sigma_\varepsilon^2$. If we then multiply (13.23) by $\varepsilon_{t-1}$ and repeat the process, we find that $w_1 = \rho_1 w_0 + \alpha_1\sigma_\varepsilon^2$, from which we conclude that $w_1 = \sigma_\varepsilon^2(\rho_1 + \alpha_1)$. Although we do not need them at present, we note that the $w_i$ for $i > 1$ can be found by multiplying (13.23) by $\varepsilon_{t-i}$, which gives the recursion $w_i = \rho_1 w_{i-1}$, with solution $w_i = \sigma_\varepsilon^2\rho_1^{\,i-1}(\rho_1 + \alpha_1)$.

Next, we imitate the way in which the Yule-Walker equations are set up for an AR process. Multiplying equation (13.23) first by $u_t$ and then by $u_{t-1}$, and subsequently taking expectations, gives

$$v_0 = \rho_1 v_1 + w_0 + \alpha_1 w_1 = \rho_1 v_1 + \sigma_\varepsilon^2(1 + \alpha_1\rho_1 + \alpha_1^2), \quad\text{and}\quad v_1 = \rho_1 v_0 + \alpha_1 w_0 = \rho_1 v_0 + \alpha_1\sigma_\varepsilon^2,$$

where we have used the expressions for $w_0$ and $w_1$ given in the previous paragraph. When these two equations are solved for $v_0$ and $v_1$, they yield

$$v_0 = \sigma_\varepsilon^2\,\frac{1 + 2\rho_1\alpha_1 + \alpha_1^2}{1 - \rho_1^2}, \quad\text{and}\quad v_1 = \sigma_\varepsilon^2\,\frac{\rho_1 + \rho_1^2\alpha_1 + \rho_1\alpha_1^2 + \alpha_1}{1 - \rho_1^2}. \qquad (13.24)$$

Finally, multiplying equation (13.23) by $u_{t-i}$ for $i > 1$ and taking expectations gives $v_i = \rho_1 v_{i-1}$, from which we conclude that

$$v_i = \frac{\sigma_\varepsilon^2\,\rho_1^{\,i-1}\bigl(\rho_1 + \rho_1^2\alpha_1 + \rho_1\alpha_1^2 + \alpha_1\bigr)}{1 - \rho_1^2}. \qquad (13.25)$$

Equation (13.25) provides all the autocovariances of an ARMA(1, 1) process. Using it and the first of equations (13.24), we can derive the autocorrelations.
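A short numerical sketch of these formulas follows; the function name is illustrative, and stationarity ($|\rho_1| < 1$) is assumed.

```python
import numpy as np

def arma11_autocovariances(rho1, alpha1, sigma2_eps, max_lag):
    """Autocovariances v_0, ..., v_{max_lag} of the ARMA(1,1) process
    u_t = rho1 u_{t-1} + eps_t + alpha1 eps_{t-1}, using (13.24)-(13.25)."""
    if abs(rho1) >= 1:
        raise ValueError("stationarity requires |rho1| < 1")
    v = np.empty(max_lag + 1)
    v[0] = sigma2_eps * (1 + 2 * rho1 * alpha1 + alpha1 ** 2) / (1 - rho1 ** 2)
    if max_lag >= 1:
        v[1] = sigma2_eps * (rho1 + rho1 ** 2 * alpha1
                             + rho1 * alpha1 ** 2 + alpha1) / (1 - rho1 ** 2)
    for i in range(2, max_lag + 1):
        v[i] = rho1 * v[i - 1]          # the recursion v_i = rho1 * v_{i-1}
    return v

v = arma11_autocovariances(rho1=0.7, alpha1=0.3, sigma2_eps=1.0, max_lag=6)
print(v / v[0])                          # the implied autocorrelations
```

Dividing by $v_0$ gives the autocorrelations, which decline geometrically at rate $\rho_1$ after the first lag.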
Autocorrelation Functions

As we have seen, the autocorrelation between $u_t$ and $u_{t-j}$ can be calculated theoretically for any known stationary ARMA process. The autocorrelation function, or ACF, expresses the autocorrelation as a function of the lag $j$ for $j = 1, 2, \ldots$. If we have a sample $y_t$, $t = 1, \ldots, n$, from an ARMA process of possibly unknown order, then the $j$th order autocorrelation $\rho(j)$ can be estimated by using the formula

$$\hat\rho(j) = \frac{\widehat{\mathrm{Cov}}(y_t, y_{t-j})}{\widehat{\mathrm{Var}}(y_t)}, \qquad (13.26)$$

where

$$\widehat{\mathrm{Cov}}(y_t, y_{t-j}) = \frac{1}{n-1}\sum_{t=j+1}^{n}(y_t - \bar y)(y_{t-j} - \bar y), \qquad (13.27)$$

and

$$\widehat{\mathrm{Var}}(y_t) = \frac{1}{n-1}\sum_{t=1}^{n}(y_t - \bar y)^2. \qquad (13.28)$$

In equations (13.27) and (13.28), $\bar y$ is the mean of the $y_t$. Of course, (13.28) is just the special case of (13.27) in which $j = 0$. It may seem odd to divide by $n - 1$ rather than by $n - j - 1$ in (13.27). However, if we did not use the same denominator for every $j$, the estimated autocorrelation matrix would not necessarily be positive definite. Because the denominator is the same, the factors of $1/(n-1)$ cancel in the formula (13.26).

The empirical ACF, or sample ACF, expresses the $\hat\rho(j)$, defined in equation (13.26), as a function of the lag $j$. Graphing the sample ACF provides a convenient way to see what the pattern of serial dependence in any observed time series looks like, and it may help to suggest what sort of stochastic process would provide a good way to model the data. For example, if the data were generated by an MA(1) process, we would expect that $\hat\rho(1)$ would be an estimate of $\alpha_1/(1 + \alpha_1^2)$ and all the other $\hat\rho(j)$ would be approximately equal to zero. If the data were generated by an AR(1) process with $\rho_1 > 0$, we would expect that $\hat\rho(1)$ would be an estimate of $\rho_1$ and would be relatively large, the next few $\hat\rho(j)$ would be progressively smaller, and the ones for large $j$ would be approximately equal to zero. A graph of the sample ACF is sometimes called a correlogram; see Exercise 13.15.

The partial autocorrelation function, or PACF, is another way to characterize the relationship between $y_t$ and its lagged values. The partial autocorrelation coefficient of order $j$ is defined as the true value of the coefficient $\rho_j^{(j)}$ in the linear regression

$$y_t = \gamma^{(j)} + \rho_1^{(j)} y_{t-1} + \cdots + \rho_j^{(j)} y_{t-j} + \varepsilon_t, \qquad (13.29)$$

or, equivalently, in the minimization problem

$$\min_{\gamma^{(j)},\,\rho_i^{(j)}} \; E\Bigl(y_t - \gamma^{(j)} - \sum_{i=1}^{j}\rho_i^{(j)} y_{t-i}\Bigr)^{\!2}. \qquad (13.30)$$

The superscript "$(j)$" appears on all the coefficients in regression (13.29) to make it plain that all the coefficients, not just the last one, are functions of $j$, the number of lags. We can calculate the empirical PACF, or sample PACF, up to order $J$ by running regression (13.29) for $j = 1, \ldots, J$ and retaining only the estimate $\hat\rho_j^{(j)}$ for each $j$. Just as a graph of the sample ACF may help to suggest what sort of stochastic process would provide a good way to model the data, so a graph of the sample PACF, interpreted properly, may do the same. For example, if the data were generated by an AR(2) process, we would expect the first two partial autocorrelations to be relatively large, and all the remaining ones to be insignificantly different from zero.
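Both quantities are easy to compute. The sketch below follows (13.26)-(13.28) for the sample ACF (the $1/(n-1)$ factors cancel) and obtains the sample PACF by running the OLS regression (13.29) for each $j$ and keeping the last coefficient; the function names and the NumPy least-squares implementation are illustrative.

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocorrelations rho_hat(1), ..., rho_hat(max_lag), as in (13.26)-(13.28)."""
    y = np.asarray(y, dtype=float)
    dev = y - y.mean()
    denom = np.sum(dev ** 2)
    return np.array([np.sum(dev[j:] * dev[:-j]) / denom for j in range(1, max_lag + 1)])

def sample_pacf(y, max_lag):
    """Sample partial autocorrelations: for each j, regress y_t on a constant and
    j lags as in (13.29) and keep the coefficient on the longest lag."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    pacf = np.empty(max_lag)
    for j in range(1, max_lag + 1):
        Z = np.column_stack([np.ones(n - j)] + [y[j - i:n - i] for i in range(1, j + 1)])
        coef, *_ = np.linalg.lstsq(Z, y[j:], rcond=None)
        pacf[j - 1] = coef[-1]          # the estimate of rho^(j)_j
    return pacf
```

Plotting `sample_acf` and `sample_pacf` against the lag $j$ produces the correlogram and the sample PACF discussed above.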
13.3 Estimating AR, MA, and ARMA Models

All of the time-series models that we have discussed so far are special cases of an ARMA($p, q$) model with a constant term, which can be written as

$$y_t = \gamma + \sum_{i=1}^{p}\rho_i y_{t-i} + \varepsilon_t + \sum_{j=1}^{q}\alpha_j\varepsilon_{t-j}, \qquad (13.31)$$

where the $\varepsilon_t$ are assumed to be white noise. There are $p + q + 1$ parameters to estimate in the model (13.31): the $\rho_i$, for $i = 1, \ldots, p$, the $\alpha_j$, for $j = 1, \ldots, q$, and $\gamma$. Recall that $\gamma$ is not the unconditional expectation of $y_t$ unless all of the $\rho_i$ are zero.

For our present purposes, it is perfectly convenient to work with models that allow $y_t$ to depend on exogenous explanatory variables and are therefore even more general than (13.31). Such models are sometimes referred to as ARMAX models. The "X" indicates that $y_t$ depends on a row vector $X_t$ of exogenous variables as well as on its own lagged values. An ARMAX($p, q$) model takes the form

$$y_t = X_t\beta + u_t, \qquad u_t \sim \mathrm{ARMA}(p, q), \qquad E(u_t) = 0, \qquad (13.32)$$

where $X_t\beta$ is the mean of $y_t$ conditional on $X_t$ but not conditional on lagged values of $y_t$. The ARMA model (13.31) can evidently be recast in the form of the ARMAX model (13.32); see Exercise 13.13.

Estimation of AR Models

We have already studied a variety of ways of estimating the model (13.32) when $u_t$ follows an AR(1) process. In Chapter 7, we discussed three estimation methods. The first was estimation by a nonlinear regression, in which the first observation is dropped from the sample. The second was estimation by feasible GLS, possibly iterated, in which the first observation can be taken into account. The third was estimation by the GNR that corresponds to the nonlinear regression, with an extra artificial observation corresponding to the first observation. It turned out that estimation by iterated feasible GLS and by this extended artificial regression, both taking the first observation into account, yield the same estimates. Then, in Chapter 10, we discussed estimation by maximum likelihood, and, in Exercise 10.21, we showed how to extend the GNR by yet another artificial observation in such a way that it provides the ML estimates if convergence is achieved.

Similar estimation methods exist for models in which the error terms follow an AR($p$) process with $p > 1$. The easiest method is just to drop the first $p$ observations and estimate the nonlinear regression model

$$y_t = X_t\beta + \sum_{i=1}^{p}\rho_i(y_{t-i} - X_{t-i}\beta) + \varepsilon_t$$

by nonlinear least squares. If this is a pure time-series model for which $X_t\beta = \beta$, then this is equivalent to OLS estimation of the model

$$y_t = \gamma + \sum_{i=1}^{p}\rho_i y_{t-i} + \varepsilon_t,$$

where the relationship between $\gamma$ and $\beta$ is derived in Exercise 13.13. This approach is the simplest and most widely used for pure autoregressive models. It has the advantage that, although the $\rho_i$ (but not their estimates) must satisfy the necessary condition for stationarity, the error terms $u_t$ need not be stationary. This issue was mentioned in Section 7.8, in the context of the AR(1) model, where it was seen that the variance of the first error term $u_1$ must satisfy a certain condition for $u_t$ to be stationary.
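For a pure autoregression, this amounts to a single OLS regression on a constant and $p$ lags. A minimal sketch follows; the function name, lag construction, and degrees-of-freedom correction are illustrative rather than prescribed by the text.

```python
import numpy as np

def fit_ar_ols(y, p):
    """Estimate y_t = gamma + rho_1 y_{t-1} + ... + rho_p y_{t-p} + eps_t by OLS,
    dropping the first p observations as described in the text."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Regressors: a constant and p lags of y, for t = p+1, ..., n
    Z = np.column_stack([np.ones(n - p)] + [y[p - i:n - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(Z, y[p:], rcond=None)
    gamma, rho = coef[0], coef[1:]
    resid = y[p:] - Z @ coef
    sigma2_eps = resid @ resid / (n - p - (p + 1))   # one possible df correction
    return gamma, rho, sigma2_eps
```

For the general ARMAX case, the same idea applies, with the nonlinear regression above estimated by NLS instead of OLS.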
Maximum Likelihood Estimation

If we are prepared to assume that $u_t$ is indeed stationary, it is desirable not to lose the information in the first $p$ observations. The most convenient way to achieve this goal is to use maximum likelihood under the assumption that the white noise process $\varepsilon_t$ is normal. In addition to using more information, maximum likelihood has the advantage that the estimates of the $\rho_j$ are automatically constrained to satisfy the stationarity conditions.

For any ARMA($p, q$) process in the error terms $u_t$, the assumption that the $\varepsilon_t$ are normally distributed implies that the $u_t$ are normally distributed, and so also the dependent variable $y_t$, conditional on the explanatory variables. For an observed sample of size $n$ from the ARMAX model (13.32), let $y$ denote the $n$-vector of which the elements are $y_1, \ldots, y_n$. The expectation of $y$ conditional on the explanatory variables is $X\beta$, where $X$ is the $n \times k$ matrix with typical row $X_t$. Let $\Omega$ denote the autocovariance matrix of the vector $y$. This matrix can be written as

$$\Omega = \begin{pmatrix} v_0 & v_1 & v_2 & \cdots & v_{n-1} \\ v_1 & v_0 & v_1 & \cdots & v_{n-2} \\ v_2 & v_1 & v_0 & \cdots & v_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ v_{n-1} & v_{n-2} & v_{n-3} & \cdots & v_0 \end{pmatrix}, \qquad (13.33)$$

where, as before, $v_i$ is the stationary covariance of $u_t$ and $u_{t-i}$, and $v_0$ is the stationary variance of the $u_t$. Then, using expression (12.121) for the multivariate normal density, we see that the log of the joint density of the observed sample is

$$-\frac{n}{2}\log 2\pi - \frac{1}{2}\log|\Omega| - \frac{1}{2}(y - X\beta)^{\!\top}\Omega^{-1}(y - X\beta). \qquad (13.34)$$

In order to construct the loglikelihood function for the ARMAX model (13.32), the $v_i$ must be expressed as functions of the parameters $\rho_i$ and $\alpha_j$ of the ARMA($p, q$) process that generates the error terms. Doing this allows us to replace $\Omega$ in the log density (13.34) by a matrix function of these parameters.

Unfortunately, a loglikelihood function in the form of (13.34) is difficult to work with, because of the presence of the $n \times n$ matrix $\Omega$. Most of the difficulty disappears if we can find an upper-triangular matrix $\Psi$ such that $\Psi\Psi^{\!\top} = \Omega^{-1}$, as was necessary when, in Section 7.8, we wished to estimate by feasible GLS a model like (13.32) with AR(1) errors. It then becomes possible to decompose expression (13.34) into a sum of contributions that are easier to work with than (13.34) itself. If the errors are generated by an AR($p$) process, with no MA component, then such a matrix $\Psi$ is relatively easy to find, as we will illustrate in a moment for the AR(2) case. However, if an MA component is present, matters are more difficult. Even for MA(1) errors, the algebra is quite complicated; see Hamilton (1994, Chapter 5) for a convincing demonstration of this fact. For general ARMA($p, q$) processes, the algebra is quite intractable. In such cases, a technique called the Kalman filter can be used to evaluate the successive contributions to the loglikelihood for given parameter values, and can thus serve as the basis of an algorithm for maximizing the loglikelihood. This technique, to which Hamilton (1994, Chapter 13) provides an accessible introduction, is unfortunately beyond the scope of this book.
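For given parameter values and moderate sample sizes, however, expression (13.34) can be evaluated directly by brute force, which provides a useful numerical check on the decomposition into contributions developed next. The sketch below assumes that the autocovariances $v_0, \ldots, v_{n-1}$ have already been computed (for example, by the recursions sketched earlier); the function name and the NumPy/SciPy implementation are illustrative rather than anything prescribed by the text.

```python
import numpy as np
from scipy.linalg import toeplitz

def gaussian_loglik(y, X, beta, autocov):
    """Log joint density (13.34) of y for an ARMAX model with stationary errors,
    given the autocovariances v_0, ..., v_{n-1} in `autocov` (NumPy arrays).
    Brute force: builds the n x n Toeplitz matrix Omega of (13.33)."""
    n = len(y)
    omega = toeplitz(autocov[:n])            # Omega[i, j] = v_|i - j|
    resid = y - X @ beta
    sign, logdet = np.linalg.slogdet(omega)
    quad = resid @ np.linalg.solve(omega, resid)
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * logdet - 0.5 * quad
```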
We now turn our attention to the case in which the errors follow an AR(2) process. In Section 7.8, we constructed a matrix $\Psi$ corresponding to the stationary covariance matrix of an AR(1) process by finding $n$ linear combinations of the error terms $u_t$ that were homoskedastic and serially uncorrelated. We perform a similar exercise for AR(2) errors here. This will show how to set about the necessary algebra for more general AR($p$) processes.

Errors generated by an AR(2) process satisfy equation (13.04). Therefore, for $t \ge 3$, we can solve for $\varepsilon_t$ to obtain

$$\varepsilon_t = u_t - \rho_1 u_{t-1} - \rho_2 u_{t-2}, \qquad t = 3, \ldots, n. \qquad (13.35)$$

Under the normality assumption, the fact that the $\varepsilon_t$ are white noise means that they are mutually independent. Thus observations 3 through $n$ make contributions to the loglikelihood of the form

$$\ell_t(y^t, \beta, \rho_1, \rho_2, \sigma_\varepsilon) = -\frac{1}{2}\log 2\pi - \log\sigma_\varepsilon - \frac{1}{2\sigma_\varepsilon^2}\bigl(u_t(\beta) - \rho_1 u_{t-1}(\beta) - \rho_2 u_{t-2}(\beta)\bigr)^2, \qquad (13.36)$$

where $y^t$ is the vector that consists of $y_1$ through $y_t$, $u_t(\beta) \equiv y_t - X_t\beta$, and $\sigma_\varepsilon^2$ is as usual the variance of the $\varepsilon_t$. The contribution (13.36) is analogous to the contribution (10.85) for the AR(1) case.

The variance of the first error term, $u_1$, is just the stationary variance $v_0$ given by (13.08). We can therefore define $\varepsilon_1$ as $\sigma_\varepsilon u_1/\sqrt{v_0}$, that is,

$$\varepsilon_1 = \Bigl(\frac{D}{1 - \rho_2}\Bigr)^{\!1/2} u_1, \qquad (13.37)$$

where $D$ was defined just after equations (13.08). By construction, $\varepsilon_1$ has the same variance $\sigma_\varepsilon^2$ as the $\varepsilon_t$ for $t \ge 3$. Since the $\varepsilon_t$ are innovations, it follows that, for $t > 1$, $\varepsilon_t$ is independent of $u_1$, and hence of $\varepsilon_1$. For the loglikelihood contribution from observation 1, we therefore take the log density of $\varepsilon_1$, plus a Jacobian term which is the log of the derivative of $\varepsilon_1$ with respect to $u_1$. The result is readily seen to be

$$\ell_1(y_1, \beta, \rho_1, \rho_2, \sigma_\varepsilon) = -\frac{1}{2}\log 2\pi - \log\sigma_\varepsilon + \frac{1}{2}\log\frac{D}{1 - \rho_2} - \frac{D}{2\sigma_\varepsilon^2(1 - \rho_2)}\,u_1^2(\beta). \qquad (13.38)$$

Finding a suitable expression for $\varepsilon_2$ is a little trickier. What we seek is a linear combination of $u_1$ and $u_2$ that has variance $\sigma_\varepsilon^2$ and is independent of $u_1$. By construction, any such linear combination is independent of the $\varepsilon_t$ for $t > 2$. A little algebra shows that the appropriate linear combination is

$$\sigma_\varepsilon\Bigl(\frac{v_0}{v_0^2 - v_1^2}\Bigr)^{\!1/2}\Bigl(u_2 - \frac{v_1}{v_0}\,u_1\Bigr).$$

Use of the explicit expressions for $v_0$ and $v_1$ given in equations (13.08) then shows that

$$\varepsilon_2 = (1 - \rho_2^2)^{1/2}\Bigl(u_2 - \frac{\rho_1}{1 - \rho_2}\,u_1\Bigr). \qquad (13.39)$$
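Putting the pieces together gives the exact loglikelihood for a model with AR(2) errors: the sum of (13.38), the contribution implied by the transformation (13.39), and the contributions (13.36) for $t \ge 3$. The sketch below is illustrative only: the closed form used for $D$ is an assumption (equations (13.08) are not reproduced in this excerpt), and the contribution of observation 2 is inferred from (13.39) in the same way that (13.38) follows from (13.37), since the preview is cut off at this point.

```python
import numpy as np

def ar2_loglikelihood(y, X, beta, rho1, rho2, sigma_eps):
    """Exact Gaussian loglikelihood for y_t = X_t beta + u_t with AR(2) errors,
    assembled from (13.36), (13.38), and the contribution implied by (13.39)."""
    u = y - X @ beta
    n = len(u)
    D = (1 + rho2) * ((1 - rho2) ** 2 - rho1 ** 2)   # assumed form of D (see (13.08))
    s2 = sigma_eps ** 2
    const = -0.5 * np.log(2 * np.pi) - np.log(sigma_eps)

    # Observation 1, equation (13.38)
    ll = const + 0.5 * np.log(D / (1 - rho2)) - D * u[0] ** 2 / (2 * s2 * (1 - rho2))
    # Observation 2, inferred from the transformation (13.39)
    eps2 = np.sqrt(1 - rho2 ** 2) * (u[1] - rho1 * u[0] / (1 - rho2))
    ll += const + 0.5 * np.log(1 - rho2 ** 2) - eps2 ** 2 / (2 * s2)
    # Observations 3, ..., n, equation (13.36)
    eps = u[2:] - rho1 * u[1:-1] - rho2 * u[:-2]
    ll += (n - 2) * const - np.sum(eps ** 2) / (2 * s2)
    return ll
```

Maximizing this function over $\beta$, $\rho_1$, $\rho_2$, and $\sigma_\varepsilon$, subject to the stationarity constraints, yields the ML estimates described in the text.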
[The remainder of the preview consists of short, disconnected excerpts from later parts of Chapter 13.]

[...] of the amount by which the value of $y_t$ for the current quarter tends to differ from its average value over the year. Thus one way to define a seasonally adjusted series would be

$$y_t^* \equiv y_t - \bar y_t + \tilde y_t = .0909\,y_{t-5} - .2424\,y_{t-4} + .0909\,y_{t-3} + .0909\,y_{t-2} + .0909\,y_{t-1} + .7576\,y_t + .0909\,y_{t+1} + .0909\,y_{t+2} + .0909\,y_{t+3} - .2424\,y_{t+4} + .0909\,y_{t+5}. \qquad (13.70)$$

[...] the exponential GARCH model of Nelson (1991) and the absolute GARCH model of Hentschel (1995). These models are intended to explain empirical features of financial time series that the standard GARCH model cannot capture. More detailed treatments may be found in Bollerslev, Chou, and Kroner (1992), Bollerslev, Engle, and Nelson (1994), Hamilton (1994, Chapter 21), and Pagan (1996).

13.7 Vector Autoregressions [...]

[...] approach is easy to implement and easy to analyze, it has a number of disadvantages, and it is almost never used by official statistical agencies. One problem with the simplest form of seasonal adjustment by regression is that it does not allow the pattern of seasonality to change over time. However, [...]

[Footnote 4: These data come from Statistics Canada. The actual data, which start in 1948, are from CANSIM series J6001, and the adjusted data, which start in 1966, are from CANSIM series J9001.]

[Figure 13.2: Urban housing starts in Canada, 1966-2001. Log of starts (vertical axis, 8.4 to 10.0) for the actual and seasonally adjusted series, 1970-2000.]

[...] case of time-series data, it is certainly false when the regressors include one or more lags of the dependent variable. There has been some work on the consequences of using seasonally adjusted data in this case; see Jaeger and Kunst (1990), Ghysels (1990), and Ghysels and Perron (1993), among others. It appears that, in models with a single lag of the dependent variable, estimates of the coefficient of the [...]

[...] where $U_t$ is a $1 \times g$ vector of error terms, $\alpha$ is a $1 \times g$ vector of constant terms, and the $\Phi_j$, for $j = 1, \ldots, p$, are $g \times g$ matrices of coefficients, all of which are to be estimated. If $y_{ti}$ denotes the $i$th element of $Y_t$ and $\phi_{j,ki}$ denotes the $ki$th element of $\Phi_j$, then the $i$th column of (13.87) can be written as

$$y_{ti} = \alpha_i + \sum_{j=1}^{p}\sum_{k=1}^{m} y_{t-j,k}\,\phi_{j,ki} + u_{ti}. \qquad [...]$$

[...] adjustment model is only one of many economic models that can be used to justify the inclusion of one or more lags of the dependent variables in regression functions. Others are discussed in Dhrymes (1971) and Hendry, Pagan, and Sargan (1984). We now consider a general family of regression models that include lagged dependent and lagged independent variables. [...]

[...] Hendry and Anderson (1977) and Davidson, Hendry, Srba, and Yeo (1978). See Banerjee, Dolado, Galbraith, and Hendry (1993) for a detailed treatment.

13.5 Seasonality

As we observed in Section 2.5, many economic time series display a regular pattern of seasonal variation over the course of every year. Seasonality, [...]

[...] discussion of standard methods for estimating AR, MA, and ARMA models is beyond the scope of this book. Detailed treatments may be found in Box, Jenkins, and Reinsel (1994, Chapter 7), Hamilton (1994, Chapter 5), and Fuller (1995, Chapter 8), among others.

Indirect Inference

There is another approach to estimating ARMA models, which is unlikely to be used by statistical packages but is worthy of attention [...] is not too small. It is an application of the method of indirect inference, which was developed by Smith (1993) and Gouriéroux, Monfort, and Renault (1993). The idea is that, when a model is difficult to estimate, there may be an auxiliary model that is not too different from the model of interest but is much easier to estimate. For any two such models, there must exist so-called binding functions that relate the parameters of the model of [...] with $p$ substantially greater than the number of parameters in the model of interest. See Zinde-Walsh and Galbraith (1994, 1997) for implementations of this approach. Clearly, indirect inference [...]


Contents

  • Methods for Stationary Time-Series Data

    • 13.3 Estimating AR, MA, and ARMA Models

    • 13.4 Single-Equation Dynamic Models

    • 13.5 Seasonality

    • 13.6 Autoregressive Conditional Heteroskedasticity

    • 13.7 Vector Autoregressions

    • 13.8 Final Remarks

    • 13.9 Exercises

    • Unit Roots and Cointegration

      • 14.1 Introduction

      • 14.2 Random Walks and Unit Roots

      • 14.3 Unit Root Tests

      • 14.4 Serial Correlation and Unit Root Tests

      • 14.5 Cointegration
