Econometric Theory and Methods, Russell Davidson - Chapter 13


Chapter 13
Methods for Stationary Time-Series Data

13.1 Introduction

Time-series data have special features that often require the use of specialized econometric techniques. We have already dealt with some of these. For example, we discussed methods for dealing with serial correlation in Sections 7.6 through 7.9 and in Section 10.7, and we discussed heteroskedasticity and autocorrelation consistent (HAC) covariance matrices in Section 9.3. In this chapter and the next, we discuss a variety of techniques that are commonly used to model, and test hypotheses about, economic time series.

A first point concerns notation. In the time-series literature, it is usual to refer to a variable, series, or process by its typical element. For instance, one may speak of a variable y_t or a set of variables Y_t, rather than defining a vector y or a matrix Y. We will make free use of this convention in our discussion of time series.

The methods we will discuss fall naturally into two groups. Some of them are intended for use with stationary time series, and others are intended for use with nonstationary time series. We defined stationarity in Section 7.6. Recall that a random process for a time series y_t is said to be covariance stationary if the unconditional expectation and variance of y_t, and the unconditional covariance between y_t and y_{t−j}, for any lag j, are the same for all t. In this chapter, we restrict our attention to time series that are covariance stationary. Nonstationary time series and techniques for dealing with them will be discussed in Chapter 14.

Section 13.2 discusses stochastic processes that can be used to model the way in which the conditional mean of a single time series evolves over time. These are based on the autoregressive and moving average processes that were introduced in Section 7.6. Section 13.3 discusses methods for estimating this sort of univariate time-series model. Section 13.4 then discusses single-equation dynamic regression models, which provide richer ways to model the relationships among time-series variables than do static regression models. Section 13.5 deals with seasonality and seasonal adjustment. Section 13.6 discusses autoregressive conditional heteroskedasticity, which provides a way to model the evolution of the conditional variance of a time series. Finally, Section 13.7 deals with vector autoregressions, which are a particularly simple and commonly used way to model multivariate time series.

13.2 Autoregressive and Moving Average Processes

In Section 7.6, we introduced the concept of a stochastic process and briefly discussed autoregressive and moving average processes. Our purpose there was to provide methods for modeling serial dependence in the error terms of a regression model. But these processes can also be used directly to model the dynamic evolution of an economic time series. When they are used for this purpose, it is common to add a constant term, because most economic time series do not have mean zero.

Autoregressive Processes

In Section 7.6, we discussed the p-th order autoregressive, or AR(p), process. If we add a constant term, such a process can be written, with slightly different notation, as

y_t = γ + ρ_1 y_{t−1} + ρ_2 y_{t−2} + ... + ρ_p y_{t−p} + ε_t,   ε_t ~ IID(0, σ_ε²).   (13.01)

According to this specification, the ε_t are homoskedastic and uncorrelated innovations.
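To make the AR(p) specification (13.01) concrete, here is a minimal simulation sketch; the Gaussian innovations, the parameter values, and the burn-in length are illustrative choices, not anything prescribed in the text. The printed sample mean anticipates the unconditional-mean formula derived just below.

```python
import numpy as np

def simulate_ar(gamma, rho, sigma_eps, n, burn_in=500, seed=0):
    """Simulate n observations from the AR(p) process (13.01):
    y_t = gamma + rho_1 y_{t-1} + ... + rho_p y_{t-p} + eps_t,
    with Gaussian innovations used here purely for illustration."""
    rng = np.random.default_rng(seed)
    p = len(rho)
    total = n + burn_in + p
    y = np.zeros(total)
    eps = rng.normal(0.0, sigma_eps, size=total)
    for t in range(p, total):
        y[t] = gamma + sum(rho[i] * y[t - 1 - i] for i in range(p)) + eps[t]
    return y[-n:]   # drop the burn-in so the arbitrary start-up values do not matter

# Illustrative AR(2): gamma = 1, rho_1 = 0.5, rho_2 = 0.3, sigma_eps = 1
y = simulate_ar(1.0, [0.5, 0.3], 1.0, n=100_000)
print(y.mean())   # close to 5.0, anticipating gamma / (1 - rho_1 - rho_2)
```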
Such an innovation process is often referred to as white noise, by a peculiar metaphor of long standing that cheerfully mixes a visual and an auditory image. Throughout this chapter, the notation ε_t refers to a white noise process with variance σ_ε².

Note that the constant term γ in equation (13.01) is not the unconditional mean of y_t. We assume throughout this chapter that the processes we consider are covariance stationary, in the sense that was given to that term in Section 7.6. This implies that µ ≡ E(y_t) does not depend on t. Thus, by equating the expectations of both sides of (13.01), we find that

µ = γ + µ Σ_{i=1}^{p} ρ_i.

Solving this equation for µ yields the result that

µ = γ / (1 − Σ_{i=1}^{p} ρ_i).   (13.02)

If we define u_t = y_t − µ, it is then easy to see that

u_t = Σ_{i=1}^{p} ρ_i u_{t−i} + ε_t,   (13.03)

which is exactly the definition (7.33) of an AR(p) process given in Section 7.6. In the lag operator notation we introduced in that section, equation (13.03) can also be written as u_t = ρ(L)u_t + ε_t, or as (1 − ρ(L))u_t = ε_t, where the polynomial ρ is defined by equation (7.35), that is,

ρ(z) = ρ_1 z + ρ_2 z² + ... + ρ_p z^p.

Similarly, the expression for the unconditional mean µ in equation (13.02) can be written as γ/(1 − ρ(1)).

The covariance matrix of the vector u of which the typical element is u_t was given in equation (7.32) for the case of an AR(1) process. The elements of this matrix are called the autocovariances of the AR(1) process. We introduced this term in Section 9.3 in the context of HAC covariance matrices, and its meaning here is similar. For an AR(p) process, the autocovariances and the corresponding autocorrelations can be computed by using a set of equations called the Yule-Walker equations. We discuss these equations in detail for an AR(2) process; the generalization to the AR(p) case is straightforward but algebraically more complicated.

An AR(2) process without a constant term is defined by the equation

u_t = ρ_1 u_{t−1} + ρ_2 u_{t−2} + ε_t.   (13.04)

Let v_0 denote the unconditional variance of u_t, and let v_i denote the covariance of u_t and u_{t−i}, for i = 1, 2, .... Because the process is stationary, the v_i, which are by definition the autocovariances of the AR(2) process, do not depend on t. Multiplying equation (13.04) by u_t and taking expectations of both sides, we find that

v_0 = ρ_1 v_1 + ρ_2 v_2 + σ_ε².   (13.05)

Because u_{t−1} and u_{t−2} are uncorrelated with the innovation ε_t, the last term on the right-hand side here is E(u_t ε_t) = E(ε_t²) = σ_ε². Similarly, multiplying equation (13.04) by u_{t−1} and u_{t−2} and taking expectations, we find that

v_1 = ρ_1 v_0 + ρ_2 v_1   and   v_2 = ρ_1 v_1 + ρ_2 v_0.   (13.06)

Equations (13.05) and (13.06) can be rewritten as a set of three simultaneous linear equations for v_0, v_1, and v_2:

v_0 − ρ_1 v_1 − ρ_2 v_2 = σ_ε²
ρ_1 v_0 + (ρ_2 − 1) v_1 = 0   (13.07)
ρ_2 v_0 + ρ_1 v_1 − v_2 = 0.

These equations are the first three Yule-Walker equations for the AR(2) process. As readers are asked to show in Exercise 13.1, their solution is

v_0 = (σ_ε²/D)(1 − ρ_2),   v_1 = (σ_ε²/D) ρ_1,   v_2 = (σ_ε²/D)(ρ_1² + ρ_2(1 − ρ_2)),   (13.08)

where D ≡ (1 + ρ_2)(1 + ρ_1 − ρ_2)(1 − ρ_1 − ρ_2).
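As a quick numerical check, the first three Yule-Walker equations (13.07) can be solved as an ordinary 3×3 linear system and compared with the closed-form solution (13.08). The sketch below does this for arbitrary illustrative parameter values; nothing in it is specific to the text beyond the equations themselves.

```python
import numpy as np

def ar2_autocovariances(rho1, rho2, sigma_eps2):
    """Solve the Yule-Walker equations (13.07) for (v0, v1, v2)."""
    A = np.array([[1.0,  -rho1,       -rho2],
                  [rho1,  rho2 - 1.0,  0.0],
                  [rho2,  rho1,       -1.0]])
    b = np.array([sigma_eps2, 0.0, 0.0])
    return np.linalg.solve(A, b)

rho1, rho2, s2 = 0.5, 0.3, 1.0
v0, v1, v2 = ar2_autocovariances(rho1, rho2, s2)

# Closed-form solution (13.08)
D = (1 + rho2) * (1 + rho1 - rho2) * (1 - rho1 - rho2)
v0_cf = s2 * (1 - rho2) / D
v1_cf = s2 * rho1 / D
v2_cf = s2 * (rho1**2 + rho2 * (1 - rho2)) / D
print(np.allclose([v0, v1, v2], [v0_cf, v1_cf, v2_cf]))  # True
```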
[Figure 13.1: The stationarity triangle for an AR(2) process, drawn in the (ρ_1, ρ_2) plane with vertices at (−2, −1), (2, −1), and (0, 1).]

The result (13.08) makes it clear that ρ_1 and ρ_2 are not the autocorrelations of an AR(2) process. Recall that, for an AR(1) process, the same ρ that appears in the defining equation u_t = ρ u_{t−1} + ε_t is also the correlation of u_t and u_{t−1}. This simple result does not generalize to higher-order processes. Similarly, the autocovariances and autocorrelations of u_t and u_{t−i} for i > 2 have a more complicated form for AR processes of order greater than 1. They can, however, be determined readily enough by using the Yule-Walker equations. Thus, if we multiply both sides of equation (13.04) by u_{t−i} for any i ≥ 2, and take expectations, we obtain the equation

v_i = ρ_1 v_{i−1} + ρ_2 v_{i−2}.

Since v_0, v_1, and v_2 are given by equations (13.08), this equation allows us to solve recursively for any v_i with i > 2.

Necessary conditions for the stationarity of the AR(2) process follow directly from equations (13.08).
The 3×3 covariance matrix

[ v_0  v_1  v_2 ]
[ v_1  v_0  v_1 ]   (13.09)
[ v_2  v_1  v_0 ]

of any three consecutive elements of an AR(2) process must be a positive definite matrix. Otherwise, the solution (13.08) to the first three Yule-Walker equations, based on the hypothesis of stationarity, would make no sense. The denominator D evidently must not vanish if this solution is to be finite. In Exercise 12.3, readers are asked to show that the lines along which it vanishes in the plane of ρ_1 and ρ_2 define the edges of a stationarity triangle such that the matrix (13.09) is positive definite only in the interior of this triangle. The stationarity triangle is shown in Figure 13.1.

Moving Average Processes

A q-th order moving average, or MA(q), process with a constant term can be written as

y_t = µ + α_0 ε_t + α_1 ε_{t−1} + ... + α_q ε_{t−q},   (13.10)

where the ε_t are white noise, and the coefficient α_0 is generally normalized to 1 for purposes of identification. The expectation of the y_t is readily seen to be µ, and so we can write

u_t ≡ y_t − µ = ε_t + Σ_{j=1}^{q} α_j ε_{t−j} = (1 + α(L)) ε_t,

where the polynomial α is defined by α(z) = Σ_{j=1}^{q} α_j z^j.

The autocovariances of an MA process are much easier to calculate than those of an AR process. Since the ε_t are white noise, and hence uncorrelated, the variance of the u_t is seen to be

Var(u_t) = E(u_t²) = σ_ε² (1 + Σ_{j=1}^{q} α_j²).   (13.11)

Similarly, the j-th order autocovariance is, for j > 0,

E(u_t u_{t−j}) = σ_ε² (α_j + Σ_{i=1}^{q−j} α_{j+i} α_i)   for j < q,
              = σ_ε² α_j                                  for j = q, and
              = 0                                          for j > q.   (13.12)

Using (13.12) and (13.11), we can calculate the autocorrelation ρ(j) between y_t and y_{t−j} for j > 0.¹ We find that

ρ(j) = (α_j + Σ_{i=1}^{q−j} α_{j+i} α_i) / (1 + Σ_{i=1}^{q} α_i²)   for j ≤ q,
ρ(j) = 0 otherwise,   (13.13)

where it is understood that, for j = q, the numerator is just α_j. The fact that all of the autocorrelations are equal to 0 for j > q is sometimes convenient, but it suggests that q may often have to be large if an MA(q) model is to be satisfactory. Expression (13.13) also implies that q must be large if an MA(q) model is to display any autocorrelation coefficients that are big in absolute value. Recall from Section 7.6 that, for an MA(1) model, the largest possible absolute value of ρ(1) is only 0.5.

¹ The notation ρ is unfortunately in common use both for the parameters of an AR process and for the autocorrelations of an AR or MA process. We therefore distinguish between the parameter ρ_i and the autocorrelation ρ(j).

If we want to allow for nonzero autocorrelations at all lags, we have to allow q to be infinite. This means replacing (13.10) by the infinite-order moving average process

u_t = ε_t + Σ_{i=1}^{∞} α_i ε_{t−i} = (1 + α(L)) ε_t,   (13.14)

where α(L) is no longer a polynomial, but rather a (formal) infinite power series in L. Of course, this MA(∞) process is impossible to estimate in practice. Nevertheless, it is of theoretical interest, provided that

Var(u_t) = σ_ε² (1 + Σ_{i=1}^{∞} α_i²)

is a finite quantity. A necessary and sufficient condition for this to be the case is that the coefficients α_j are square summable, which means that

lim_{q→∞} Σ_{i=1}^{q} α_i² < ∞.   (13.15)

We will implicitly assume that all the MA(∞) processes we encounter satisfy condition (13.15).
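The autocorrelation formula (13.13) for a finite-order MA(q) process is easy to implement directly. Here is a small sketch (illustrative only, with α_0 normalized to 1 as in the text) that computes the theoretical ACF.

```python
import numpy as np

def ma_acf(alpha, max_lag):
    """Theoretical autocorrelations rho(1..max_lag) of the MA(q) process
    u_t = eps_t + alpha_1 eps_{t-1} + ... + alpha_q eps_{t-q}, via (13.13)."""
    alpha = np.asarray(alpha, dtype=float)
    q = len(alpha)
    denom = 1.0 + np.sum(alpha**2)          # proportional to Var(u_t), cf. (13.11)
    rho = np.zeros(max_lag)
    for j in range(1, max_lag + 1):
        if j <= q:
            numer = alpha[j - 1] + np.sum(alpha[j:] * alpha[:q - j])
            rho[j - 1] = numer / denom
        # rho(j) = 0 for j > q, so nothing to do in that case
    return rho

print(ma_acf([0.9], 3))        # MA(1): rho(1) = 0.9/1.81, about 0.497, then zeros
print(ma_acf([0.5, 0.4], 4))   # MA(2): nonzero at lags 1 and 2 only
```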
Any stationary AR(p) process can be represented as an MA(∞) process. We will not attempt to prove this fundamental result in general, but we can easily show how it works in the case of a stationary AR(1) process. Such a process can be written as (1 − ρ_1 L)u_t = ε_t. The natural way to solve this equation for u_t as a function of ε_t is to multiply both sides by the inverse of 1 − ρ_1 L. The result is

u_t = (1 − ρ_1 L)⁻¹ ε_t.   (13.16)

Formally, this is the solution we are seeking. But we need to explain what it means to invert 1 − ρ_1 L. In general, if A(L) and B(L) are power series in L, each including a constant term independent of L that is not necessarily equal to 1, then B(L) is the inverse of A(L) if B(L)A(L) = 1. Here the product B(L)A(L) is the infinite power series in L obtained by formally multiplying together the power series B(L) and A(L); see Exercise 13.5. The relation B(L)A(L) = 1 then requires that the result of this multiplication should be a series with only one term, the first. Moreover, this term, which corresponds to L⁰, must equal 1.

We will not consider general methods for inverting a polynomial in the lag operator; see Hamilton (1994) or Hayashi (2000), among many others. In this particular case, though, the solution turns out to be

(1 − ρ_1 L)⁻¹ = 1 + ρ_1 L + ρ_1² L² + ....   (13.17)

To see this, note that ρ_1 L times the right-hand side of equation (13.17) is the same series without the first term of 1. Thus, as required,

(1 − ρ_1 L)⁻¹ − ρ_1 L (1 − ρ_1 L)⁻¹ = (1 − ρ_1 L)(1 − ρ_1 L)⁻¹ = 1.

We can now use this result to solve equation (13.16). We find that

u_t = ε_t + ρ_1 ε_{t−1} + ρ_1² ε_{t−2} + ....   (13.18)

It is clear that (13.18) is a special case of the MA(∞) process (13.14), with α_i = ρ_1^i for i = 0, ..., ∞. Square summability of the α_i is easy to check provided that |ρ_1| < 1.

In general, if we can write a stationary AR(p) process as

(1 − ρ(L)) u_t = ε_t,   (13.19)

where ρ(L) is a polynomial of degree p in the lag operator, then there exists an MA(∞) process

u_t = (1 + α(L)) ε_t,   (13.20)

where α(L) is an infinite series in L such that (1 − ρ(L))(1 + α(L)) = 1. This result provides an alternative to the Yule-Walker equations for calculating the variance, autocovariances, and autocorrelations of an AR(p) process, by using equations (13.11), (13.12), and (13.13) after we have solved for α(L). However, general methods for solving for α(L) make use of the theory of functions of a complex variable, and so they are not elementary.

The close relationship between AR and MA processes goes both ways. If (13.20) is an MA(q) process that is invertible, then there exists a stationary AR(∞) process of the form (13.19) with

(1 − ρ(L))(1 + α(L)) = 1.

The condition for a moving average process to be invertible is formally the same as the condition for an autoregressive process to be stationary; see the discussion around equation (7.36). We require that all the roots of the polynomial equation 1 + α(z) = 0 must lie outside the unit circle. For an MA(1) process, the invertibility condition is simply that |α_1| < 1.
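Although the text does not give a general inversion method, the MA(∞) coefficients implied by (1 − ρ(L))(1 + α(L)) = 1 can be generated numerically by matching powers of L, which yields the recursion α_j = Σ_{i=1}^{min(j,p)} ρ_i α_{j−i} with α_0 = 1. The sketch below is an illustration of that recursion, not a method taken from the text; for an AR(1) process it reproduces α_i = ρ_1^i, as in (13.18).

```python
import numpy as np

def ar_to_ma(rho, n_terms):
    """First n_terms MA(infinity) coefficients alpha_0, alpha_1, ... of the
    stationary AR(p) process (1 - rho(L)) u_t = eps_t, obtained by matching
    powers of L in (1 - rho(L)) (1 + alpha(L)) = 1."""
    p = len(rho)
    alpha = np.zeros(n_terms)
    alpha[0] = 1.0
    for j in range(1, n_terms):
        alpha[j] = sum(rho[i] * alpha[j - 1 - i] for i in range(min(j, p)))
    return alpha

print(ar_to_ma([0.6], 5))        # AR(1): 1, 0.6, 0.36, 0.216, ... = rho_1**i

alpha = ar_to_ma([0.5, 0.3], 200)
# Variance implied by (13.11) with the (truncated) MA(infinity) coefficients,
# which should approximate v_0 / sigma_eps^2 from the Yule-Walker solution (13.08).
print(np.sum(alpha**2))
```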
ARMA Processes

If our objective is to model the evolution of a time series as parsimoniously as possible, it may well be desirable to employ a stochastic process that has both autoregressive and moving average components. This is the autoregressive moving average process, or ARMA process. In general, we can write an ARMA(p, q) process with nonzero mean as

(1 − ρ(L)) y_t = γ + (1 + α(L)) ε_t,   (13.21)

and a process with zero mean as

(1 − ρ(L)) u_t = (1 + α(L)) ε_t,   (13.22)

where ρ(L) and α(L) are, respectively, a p-th order and a q-th order polynomial in the lag operator, neither of which includes a constant term. If the process is stationary, the expectation of y_t given by (13.21) is µ ≡ γ/(1 − ρ(1)), just as for the AR(p) process (13.01). Provided the autoregressive part is stationary and the moving average part is invertible, an ARMA(p, q) process can always be represented as either an MA(∞) or an AR(∞) process.

The most commonly encountered ARMA process is the ARMA(1, 1) process, which, when there is no constant term, has the form

u_t = ρ_1 u_{t−1} + ε_t + α_1 ε_{t−1}.   (13.23)

This process has one autoregressive and one moving average parameter. The Yule-Walker method can be extended to compute the autocovariances of an ARMA process. We illustrate this for the ARMA(1, 1) case and invite readers to generalize the procedure in Exercise 13.6. As before, we denote the i-th autocovariance by v_i, and we let E(u_t ε_{t−i}) = w_i, for i = 0, 1, .... Note that E(u_t ε_s) = 0 for all s > t.

If we multiply (13.23) by ε_t and take expectations, we see that w_0 = σ_ε². If we then multiply (13.23) by ε_{t−1} and repeat the process, we find that w_1 = ρ_1 w_0 + α_1 σ_ε², from which we conclude that w_1 = σ_ε²(ρ_1 + α_1). Although we do not need them at present, we note that the w_i for i > 1 can be found by multiplying (13.23) by ε_{t−i}, which gives the recursion w_i = ρ_1 w_{i−1}, with solution w_i = σ_ε² ρ_1^{i−1}(ρ_1 + α_1).

Next, we imitate the way in which the Yule-Walker equations are set up for an AR process. Multiplying equation (13.23) first by u_t and then by u_{t−1}, and subsequently taking expectations, gives

v_0 = ρ_1 v_1 + w_0 + α_1 w_1 = ρ_1 v_1 + σ_ε²(1 + α_1 ρ_1 + α_1²),   and
v_1 = ρ_1 v_0 + α_1 w_0 = ρ_1 v_0 + α_1 σ_ε²,

where we have used the expressions for w_0 and w_1 given in the previous paragraph. When these two equations are solved for v_0 and v_1, they yield

v_0 = σ_ε² (1 + 2ρ_1 α_1 + α_1²) / (1 − ρ_1²),   and
v_1 = σ_ε² (ρ_1 + ρ_1² α_1 + ρ_1 α_1² + α_1) / (1 − ρ_1²).   (13.24)

Finally, multiplying equation (13.23) by u_{t−i} for i > 1 and taking expectations gives v_i = ρ_1 v_{i−1}, from which we conclude that

v_i = σ_ε² ρ_1^{i−1} (ρ_1 + ρ_1² α_1 + ρ_1 α_1² + α_1) / (1 − ρ_1²).   (13.25)

Equation (13.25) provides all the autocovariances of an ARMA(1, 1) process. Using it and the first of equations (13.24), we can derive the autocorrelations.
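As an illustrative check of (13.24) and (13.25), the sketch below compares the analytic ARMA(1, 1) autocovariances with sample autocovariances from a long simulated realization; the parameter values and sample size are arbitrary choices.

```python
import numpy as np

def arma11_autocov(rho1, alpha1, sigma_eps2, max_lag):
    """Autocovariances v_0..v_max_lag of u_t = rho1*u_{t-1} + eps_t + alpha1*eps_{t-1},
    from equations (13.24) and (13.25)."""
    v = np.zeros(max_lag + 1)
    v[0] = sigma_eps2 * (1 + 2 * rho1 * alpha1 + alpha1**2) / (1 - rho1**2)
    if max_lag >= 1:
        v[1] = sigma_eps2 * (rho1 + rho1**2 * alpha1 + rho1 * alpha1**2 + alpha1) / (1 - rho1**2)
        for i in range(2, max_lag + 1):
            v[i] = rho1 * v[i - 1]          # v_i = rho_1 v_{i-1}, cf. (13.25)
    return v

rho1, alpha1, n = 0.7, 0.4, 200_000
rng = np.random.default_rng(1)
eps = rng.normal(size=n + 1)
u = np.zeros(n + 1)
for t in range(1, n + 1):
    u[t] = rho1 * u[t - 1] + eps[t] + alpha1 * eps[t - 1]
u = u[1:]

v_theory = arma11_autocov(rho1, alpha1, 1.0, 3)
v_sample = [np.mean((u[j:] - u.mean()) * (u[:len(u) - j] - u.mean())) for j in range(4)]
print(np.round(v_theory, 3), np.round(v_sample, 3))   # the two should be close
```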
Autocorrelation Functions

As we have seen, the autocorrelation between u_t and u_{t−j} can be calculated theoretically for any known stationary ARMA process. The autocorrelation function, or ACF, expresses the autocorrelation as a function of the lag j, for j = 1, 2, .... If we have a sample y_t, t = 1, ..., n, from an ARMA process of possibly unknown order, then the j-th order autocorrelation ρ(j) can be estimated by using the formula

ρ̂(j) = Ĉov(y_t, y_{t−j}) / V̂ar(y_t),   (13.26)

where

Ĉov(y_t, y_{t−j}) = (1/(n − 1)) Σ_{t=j+1}^{n} (y_t − ȳ)(y_{t−j} − ȳ),   (13.27)

and

V̂ar(y_t) = (1/(n − 1)) Σ_{t=1}^{n} (y_t − ȳ)².   (13.28)

In equations (13.27) and (13.28), ȳ is the mean of the y_t. Of course, (13.28) is just the special case of (13.27) in which j = 0. It may seem odd to divide by n − 1 rather than by n − j − 1 in (13.27). However, if we did not use the same denominator for every j, the estimated autocorrelation matrix would not necessarily be positive definite. Because the denominator is the same, the factors of 1/(n − 1) cancel in the formula (13.26).

The empirical ACF, or sample ACF, expresses the ρ̂(j), defined in equation (13.26), as a function of the lag j. Graphing the sample ACF provides a convenient way to see what the pattern of serial dependence in any observed time series looks like, and it may help to suggest what sort of stochastic process would provide a good way to model the data. For example, if the data were generated by an MA(1) process, we would expect that ρ̂(1) would be an estimate of α_1/(1 + α_1²), by (13.13), and all the other ρ̂(j) would be approximately equal to zero. If the data were generated by an AR(1) process with ρ_1 > 0, we would expect that ρ̂(1) would be an estimate of ρ_1 and would be relatively large, the next few ρ̂(j) would be progressively smaller, and the ones for large j would be approximately equal to zero. A graph of the sample ACF is sometimes called a correlogram; see Exercise 13.15.

The partial autocorrelation function, or PACF, is another way to characterize the relationship between y_t and its lagged values. The partial autocorrelation coefficient of order j is defined as the true value of the coefficient ρ_j^{(j)} in the linear regression

y_t = γ^{(j)} + ρ_1^{(j)} y_{t−1} + ... + ρ_j^{(j)} y_{t−j} + ε_t,   (13.29)

or, equivalently, in the minimization problem

min_{γ^{(j)}, ρ_i^{(j)}} E(y_t − γ^{(j)} − Σ_{i=1}^{j} ρ_i^{(j)} y_{t−i})².   (13.30)

The superscript "(j)" appears on all the coefficients in regression (13.29) to make it plain that all the coefficients, not just the last one, are functions of j, the number of lags. We can calculate the empirical PACF, or sample PACF, up to order J by running regression (13.29) for j = 1, ..., J and retaining only the estimate ρ̂_j^{(j)} for each j. Just as a graph of the sample ACF may help to suggest what sort of stochastic process would provide a good way to model the data, so a graph of the sample PACF, interpreted properly, may do the same. For example, if the data were generated by an AR(2) process, we would expect the first two partial autocorrelations to be relatively large, and all the remaining ones to be insignificantly different from zero.
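The sample ACF and PACF just described are straightforward to compute. In the sketch below (function names are illustrative, not taken from any particular library), the ACF uses the common 1/(n − 1) denominator of (13.27) and (13.28), which cancels in the ratio (13.26), and each partial autocorrelation is the last OLS coefficient from regression (13.29).

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocorrelations rho_hat(1..max_lag), equations (13.26)-(13.28)."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    var = np.sum(d**2)                      # the common 1/(n-1) factor cancels
    return np.array([np.sum(d[j:] * d[:len(y) - j]) / var
                     for j in range(1, max_lag + 1)])

def sample_pacf(y, max_lag):
    """Sample partial autocorrelations: last coefficient of regression (13.29)
    run by OLS for each j = 1, ..., max_lag."""
    y = np.asarray(y, dtype=float)
    pacf = []
    for j in range(1, max_lag + 1):
        Y = y[j:]
        X = np.column_stack([np.ones(len(Y))] +
                            [y[j - i:len(y) - i] for i in range(1, j + 1)])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        pacf.append(coef[-1])               # rho_hat_j^(j)
    return np.array(pacf)
```

For a series generated by an AR(2) process, for example, the sample PACF computed this way should be sizeable at the first two lags and close to zero thereafter, matching the discussion above.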
13.3 Estimating AR, MA, and ARMA Models

All of the time-series models that we have discussed so far are special cases of an ARMA(p, q) model with a constant term, which can be written as

y_t = γ + Σ_{i=1}^{p} ρ_i y_{t−i} + ε_t + Σ_{j=1}^{q} α_j ε_{t−j},   (13.31)

where the ε_t are assumed to be white noise. There are p + q + 1 parameters to estimate in the model (13.31): the ρ_i, for i = 1, ..., p, the α_j, for j = 1, ..., q, and γ. Recall that γ is not the unconditional expectation of y_t unless all of the ρ_i are zero.

For our present purposes, it is perfectly convenient to work with models that allow y_t to depend on exogenous explanatory variables and are therefore even more general than (13.31). Such models are sometimes referred to as ARMAX models. The 'X' indicates that y_t depends on a row vector X_t of exogenous variables as well as on its own lagged values. An ARMAX(p, q) model takes the form

y_t = X_t β + u_t,   u_t ~ ARMA(p, q),   E(u_t) = 0,   (13.32)

where X_t β is the mean of y_t conditional on X_t but not conditional on lagged values of y_t. The ARMA model (13.31) can evidently be recast in the form of the ARMAX model (13.32); see Exercise 13.13.

Estimation of AR Models

We have already studied a variety of ways of estimating the model (13.32) when u_t follows an AR(1) process. In Chapter 7, we discussed three estimation ...

[...]

... (13.60), where

λ ≡ (γ_0 + γ_1) / (1 − β_1).   (13.61)

This is the long-run derivative of y° with respect to x°, and it is an elasticity if both series are in logarithms. An estimate of λ can be computed directly from the estimates of the parameters of (13.59). Note that the result (13.60) and the definition (13.61) make sense ...

[...]

... estimating equations satisfied by γ̂ and ρ̂ are

Σ_{t=2}^{n} u_t(γ, ρ) = 0   and   Σ_{t=2}^{n} y_{t−1} u_t(γ, ρ) = 0.   (13.44)

If y_t is indeed generated by (13.41) for particular values of µ and α_1, then we may define the pseudo-true values of the parameters γ and ρ of the auxiliary model (13.43) as those values for which ...

[...]

... (13.40). Summing the contributions (13.36), (13.38), and (13.40) gives the loglikelihood function for the entire sample. It may then be maximized with respect to β, ρ_1, ρ_2, and σ_ε² by standard numerical methods. Exercise 13.10 asks readers to check that the n×n matrix Ψ defined implicitly by the relation Ψu = ε, where the elements of ε are defined by (13.35), (13.37), and (13.39), is indeed ...

[...]

... appears in (13.62) is the error-correction term. Of course, many ADL models in addition to the ADL(1, 1) model can be rewritten as error-correction models. An important feature of error-correction models is that they can also be used with nonstationary data, as we will discuss in Chapter 14.³

³ Error-correction models were first used by Hendry and Anderson (1977) and Davidson, Hendry, Srba, and Yeo (1978).

[...]

... becomes

plim_{n→∞} (1/n) (Z⊤Z)⁻¹ Z⊤ΩZ (Z⊤Z)⁻¹,   (13.47)

where Ω is the covariance matrix of the error terms u_t, which are given by the u_t(γ, ρ) evaluated at the pseudo-true values. If we drop the probability limit and the factor of n⁻¹ in expression (13.47) and replace Ω by a suitable estimate, we obtain ...

[...]

... the left-hand sides of equations (13.44) are zero. These equations can thus be interpreted as correctly specified, albeit inefficient, estimating equations for the pseudo-true values. The theory of Section 9.5 then shows that γ̂ and ρ̂ are consistent for the pseudo-true values and asymptotically normal, with asymptotic covariance matrix given by a version of the sandwich matrix (9.67). The pseudo-true values ...

[...]

... γ = µ(1 + α_1 + α_1²)/(1 + α_1²)   and   ρ = −α_1/(1 + α_1²)   (13.46)

in terms of the true parameters µ and α_1. Equations (13.46) express the binding functions that link the parameters of model (13.41) to those of the auxiliary model (13.43). The indirect estimates µ̂ and α̂_1 are obtained by solving these equations with γ and ρ replaced by γ̂ and ρ̂. Note that, since the second equation of (13.46) is a quadratic equation ...
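The excerpt above gives enough of the binding functions (13.46) to sketch the indirect estimation idea numerically. The sketch below assumes, consistently with (13.46), that model (13.41) is an MA(1) of the form y_t = µ + ε_t − α_1 ε_{t−1} and that the auxiliary model (13.43) is the AR(1) regression of y_t on a constant and y_{t−1}; both assumptions are reconstructions from the fragment, not statements taken from the full text. The quadratic in α_1 has two roots, and the sketch keeps the one with |α_1| < 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed MA(1) DGP for (13.41): y_t = mu + eps_t - alpha1 * eps_{t-1}
mu_true, alpha1_true, n = 2.0, 0.5, 200_000
eps = rng.normal(size=n + 1)
y = mu_true + eps[1:] - alpha1_true * eps[:-1]

# Assumed auxiliary AR(1) regression (13.43): y_t on a constant and y_{t-1}
X = np.column_stack([np.ones(n - 1), y[:-1]])
gamma_hat, rho_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]

# Invert the binding functions (13.46).  rho = -alpha1/(1 + alpha1^2) gives
# the quadratic rho*alpha1^2 + alpha1 + rho = 0; keep the root with |alpha1| < 1.
roots = np.roots([rho_hat, 1.0, rho_hat])
alpha1_hat = roots[np.abs(roots) < 1][0].real
mu_hat = gamma_hat / (1.0 - rho_hat)        # since gamma = mu * (1 - rho) in (13.46)

print(alpha1_hat, mu_hat)                   # close to 0.5 and 2.0
```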
[...]

... simplicity, we will suppose that y = y° + y_s and X = X° + X_s, where y_s and X_s contain all the seasonal variation in y and X, respectively, and y° and X° contain all other economically interesting variation. Suppose further that the DGP is

y° = X° β_0 + u,   u ~ IID(0, σ²I).   (13.71)

Thus the economic relationship ...

[...]

... interested in the long-run impact of changes in the independent variable. This long-run impact is

γ ≡ Σ_{j=0}^{q} β_j = Σ_{j=0}^{q} ∂y_t/∂x_{t−j}.   (13.52)

We can estimate (13.51) and then calculate the estimate γ̂ using (13.52), or we can obtain γ̂ directly by reparametrizing regression (13.51) as

y_t = δ + γ x_t + Σ_{j=1}^{q} β_j (x_{t−j} − x_t) + u_t.   (13.53)

The advantage of this reparametrization is that the standard error of γ̂ ...

[...]

... choice of p and q by equation (13.76) with u_t replaced by y_t − X_t β. It therefore depends on β as well as on the α_i and δ_j that appear in (13.76), which we denote collectively by θ. The density of y_t conditional on Ω_{t−1} is then

(1/σ_t(β, θ)) φ((y_t − X_t β)/σ_t(β, θ)),   (13.85)

where φ(·) denotes the standard normal density. The first factor in (13.85) is ...
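The excerpt around equations (13.52) and (13.53) describes obtaining the long-run impact γ and its standard error directly by reparametrizing a distributed lag regression. Here is an illustrative sketch with simulated data; the data-generating values are arbitrary, and (13.51) is assumed to be the ordinary distributed lag regression of y_t on a constant and x_t, ..., x_{t−q}.

```python
import numpy as np

rng = np.random.default_rng(3)
n, q, delta = 5_000, 2, 1.0
beta = np.array([0.4, 0.3, 0.2])            # beta_0..beta_2, so gamma = 0.9

# Autocorrelated regressor and the assumed distributed lag DGP (13.51)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()
u = rng.normal(scale=0.5, size=n)
y = delta + sum(beta[j] * np.roll(x, j) for j in range(q + 1)) + u
y, x = y[q:], x[q:]                         # drop observations affected by np.roll wrap-around

def ols(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
    return coef, se

T = len(y) - q                               # usable sample after forming lags
Y = y[q:]
lags = [x[q - j:len(x) - j] for j in range(q + 1)]   # x_t, x_{t-1}, x_{t-2}

# Reparametrized regression (13.53): y_t = delta + gamma*x_t + sum_j beta_j (x_{t-j} - x_t) + u_t
X53 = np.column_stack([np.ones(T), lags[0]] + [lags[j] - lags[0] for j in range(1, q + 1)])
coef, se = ols(X53, Y)
print("gamma_hat:", coef[1], "s.e.:", se[1])  # gamma_hat near 0.9, with its own standard error
```

The coefficient on x_t in (13.53) is the long-run impact itself, so its OLS standard error is reported directly, which is the advantage the excerpt points to.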
