7.4 Kernel Methods with Mixed Data Types

So far we have presumed that the categorical variable is of the "unordered" ("nominal") data type. We shall now distinguish between categorical (discrete) data types and real-valued (continuous) data types. Also, for categorical data types we could have unordered or ordered ("ordinal" data type) variables. For an ordered discrete variable x̃ᵈ, we could use the Wang and van Ryzin (1981) kernel given by

\[
\tilde l(\tilde X_i^d, \tilde x^d, \lambda) =
\begin{cases}
1 - \lambda, & \text{if } \tilde X_i^d = \tilde x^d,\\[4pt]
\dfrac{1-\lambda}{2}\,\lambda^{|\tilde X_i^d - \tilde x^d|}, & \text{if } \tilde X_i^d \neq \tilde x^d.
\end{cases}
\]

We shall now refer to the unordered kernel defined in Equation 7.2 as l̄(·) so as to keep each kernel type separate, notationally speaking. We shall denote the traditional kernels for continuous data types, such as the Epanechnikov or Gaussian kernels, by W(·). A generalized product kernel for one continuous, one unordered, and one ordered variable would be defined as follows:

\[
K(\cdot) = W(\cdot) \times \bar l(\cdot) \times \tilde l(\cdot). \tag{7.16}
\]

Using such product kernels, we can modify any existing kernel-based method to handle the presence of categorical variables, thereby extending the reach of kernel methods. We define K_γ(X_i, x) to be this product, where γ = (h, λ) is the vector of bandwidths for the continuous and categorical variables.

7.4.1 Kernel Estimation of a Joint Density Defined over Categorical and Continuous Data

Estimating a joint probability/density function defined over mixed data follows naturally using these generalized product kernels. For example, for one unordered discrete variable x̄ᵈ and one continuous variable xᶜ, our kernel estimator of the PDF would be

\[
\hat f(\bar x^d, x^c) = \frac{1}{n h_{x^c}} \sum_{i=1}^{n} \bar l(\bar X_i^d, \bar x^d)\, W\!\left(\frac{X_i^c - x^c}{h_{x^c}}\right).
\]

This extends naturally to handle a mix of ordered, unordered, and continuous data (i.e., both quantitative and qualitative data). This estimator is particularly well suited to "sparse data" settings. Li and Racine (2003) demonstrate that

\[
\sqrt{n h^p}\left(\hat f(z) - f(z) - \hat h^2 B_1(z) - \hat\lambda B_2(z)\right) \to N(0, V(z)) \text{ in distribution}, \tag{7.17}
\]

where B₁(z) = (1/2) tr{∇²f(z)} [∫ W(v) v² dv], B₂(z) = Σ_{x′∈D, d(x′,x)=1} [f(x′, y) − f(x, y)], and V(z) = f(z) [∫ W²(v) dv].

FIGURE 7.1
Nonparametric kernel estimate of a joint density defined over one continuous and one discrete variable (axes: Number of Dependents, Log wage, Joint Density).

7.4.1.1 An Application

We consider Wooldridge's (2002) "wage1" dataset having n = 526 observations, and model the joint density of two variables, one continuous ("lwage") and one discrete ("numdep"). "lwage" is the logarithm of average hourly earnings for an individual, and "numdep" is the number of dependents (0, 1, ...). We use likelihood cross-validation to obtain the bandwidths, and the resulting estimate is presented in Figure 7.1. Note that this is indeed a case of "sparse" data for some cells (see Table 7.4), and the traditional approach would require estimation of a nonparametric univariate density function based upon only two observations for the last cell (c = 6).

TABLE 7.4
Summary of the Number of Dependents in the Wooldridge (2002) "wage1" Dataset ("numdep")

    numdep    Count
    0         252
    1         105
    2          99
    3          45
    4          16
    5           7
    6           2
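To make the construction concrete, here is a small illustrative Python sketch of the generalized product kernel of Equation 7.16 and the mixed-data density estimator above. It is not code from the chapter (the np package of Hayfield and Racine 2008, discussed in Section 7.5, provides production implementations in R): the unordered kernel of Equation 7.2 is assumed here to be of the Aitchison–Aitken (1976) form, the bandwidths are fixed by hand rather than cross-validated, and the data are hypothetical.

```python
import numpy as np

# Illustrative sketch of the generalized product kernel of Eq. 7.16 for one
# continuous, one unordered, and one ordered variable, and the resulting
# mixed-data joint density estimator of Section 7.4.1. The unordered kernel is
# assumed to be of the Aitchison-Aitken form; bandwidths are hand-picked here,
# whereas the chapter selects them by (likelihood) cross-validation.

def l_unordered(Xd, xd, lam, c):
    """Aitchison-Aitken kernel: 1 - lam if categories match, lam/(c - 1) otherwise."""
    return np.where(Xd == xd, 1.0 - lam, lam / (c - 1))

def l_ordered(Xd, xd, lam):
    """Wang-van Ryzin kernel: 1 - lam if equal, (1 - lam)/2 * lam**|X - x| otherwise."""
    d = np.abs(Xd - xd)
    return np.where(d == 0, 1.0 - lam, 0.5 * (1.0 - lam) * lam ** d)

def w_gauss(Xc, xc, h):
    """Gaussian continuous kernel W((Xc - xc)/h)."""
    u = (Xc - xc) / h
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def f_hat(xc, xu, xo, Xc, Xu, Xo, h, lam_u, lam_o, c):
    """Mixed-data density estimate: (1/(n h)) * sum_i W(.) * l_bar(.) * l_tilde(.)."""
    K = w_gauss(Xc, xc, h) * l_unordered(Xu, xu, lam_u, c) * l_ordered(Xo, xo, lam_o)
    return K.sum() / (len(Xc) * h)

# Hypothetical data: one continuous, one unordered (c = 3 cells), one ordered variable.
rng = np.random.default_rng(0)
Xu = rng.integers(0, 3, size=300)
Xo = rng.poisson(2.0, size=300)
Xc = rng.normal(loc=Xu + 0.2 * Xo, scale=1.0)
print(f_hat(xc=1.5, xu=1, xo=2, Xc=Xc, Xu=Xu, Xo=Xo, h=0.4, lam_u=0.1, lam_o=0.3, c=3))
```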
7.4.2 Kernel Estimation of a Conditional PDF

Let f(·) and μ(·) denote the joint and marginal densities of (X, Y) and X, respectively, where we allow Y and X to consist of continuous, unordered, and ordered variables. For what follows we shall refer to Y as a dependent variable (i.e., Y is explained), and to X as covariates (i.e., X is the explanatory variable). We use f̂ and μ̂ to denote kernel estimators thereof, and we estimate the conditional density g(y|x) = f(x, y)/μ(x) by

\[
\hat g(y|x) = \frac{\hat f(x, y)}{\hat\mu(x)}. \tag{7.18}
\]

The kernel estimators of the joint and marginal densities f(x, y) and μ(x) are described in the previous sections; see Hall, Racine, and Li (2004) for details on the theoretical underpinnings of a data-driven method of bandwidth selection for this method.

7.4.2.1 The Presence of Irrelevant Covariates

Hall, Racine, and Li (2004) proposed the estimator defined in Equation 7.18, but choosing appropriate smoothing parameters in this setting can be tricky, not least because plug-in rules take a particularly complex form in the case of mixed data. One difficulty is that there exists no general formula for the optimal smoothing parameters. A much bigger issue is that it can be difficult to determine which components of X are relevant to the problem of conditional inference. For example, if the jth component of X is independent of Y, then that component is irrelevant to estimating the density of Y given X, and ideally should be dropped before conducting inference. Hall, Racine, and Li (2004) show that a version of least-squares cross-validation overcomes these difficulties. It automatically determines which components are relevant and which are not, by assigning large smoothing parameters to the latter and consequently shrinking them toward the uniform distribution on the respective marginals. This effectively removes irrelevant components from contention by suppressing their contribution to estimator variance; they already have very small bias, a consequence of their independence of Y. Cross-validation also gives us important information about which components are relevant: the relevant components are precisely those that cross-validation has chosen to smooth in a traditional way, by assigning them smoothing parameters of conventional size. Cross-validation thus produces asymptotically optimal smoothing for relevant components, while eliminating irrelevant components by oversmoothing.

Hall, Racine, and Li (2004) demonstrate that, for irrelevant conditioning variables in X, their bandwidths ought to behave in exactly the opposite manner from the usual requirement that h → 0, namely h → ∞ as n → ∞ for optimal smoothing. The same has been demonstrated for regression as well; see Hall, Li, and Racine (2007) for further details. Note that this result is closely related to the Bayesian results described in detail in Section 7.3.
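Before turning to conditional CDFs, here is a minimal Python sketch of the conditional density estimator of Equation 7.18 for one continuous Y and one unordered discrete X. The Aitchison–Aitken form is again assumed for the categorical kernel, the bandwidths are fixed by hand rather than cross-validated as in Hall, Racine, and Li (2004), and the data are made up for illustration.

```python
import numpy as np

# Sketch of the conditional density estimator of Eq. 7.18,
# g_hat(y|x) = f_hat(x, y) / mu_hat(x), for one continuous y and one
# unordered discrete x. Bandwidths are fixed for illustration only.

def aitchison_aitken(Xd, xd, lam, c):
    """Unordered categorical kernel: 1 - lam if equal, lam/(c - 1) otherwise."""
    return np.where(Xd == xd, 1.0 - lam, lam / (c - 1))

def gaussian(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def conditional_density(y, x, Y, X, hy, lam, c):
    lx = aitchison_aitken(X, x, lam, c)               # kernel weights on the covariate
    f_xy = np.mean(lx * gaussian((Y - y) / hy) / hy)  # joint density f_hat(x, y)
    mu_x = np.mean(lx)                                # marginal mu_hat(x)
    return f_xy / mu_x

# Hypothetical data: Y | X = x ~ N(x, 1), with x in {0, 1, 2}.
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=500)
Y = rng.normal(loc=X, scale=1.0)
print(conditional_density(y=1.0, x=1, Y=Y, X=X, hy=0.3, lam=0.05, c=3))
```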
7.4.3 Kernel Estimation of a Conditional CDF

Li and Racine (2008) propose a nonparametric conditional CDF kernel estimator that admits a mix of discrete and continuous data, along with an associated nonparametric conditional quantile estimator. Bandwidth selection for kernel quantile regression remains an open topic of research, and they employ a modification of the conditional PDF-based bandwidth selector proposed by Hall, Racine, and Li (2004).

We use F(y|x) to denote the conditional CDF of Y given X = x, while f(x) is the marginal density of X. We can estimate F(y|x) by

\[
\hat F(y|x) = \frac{n^{-1}\sum_{i=1}^{n} G\!\left(\dfrac{y - Y_i}{h_0}\right) K_\gamma(X_i, x)}{\hat f(x)}, \tag{7.19}
\]

where G(·) is a kernel CDF chosen by the researcher (say, the standard normal CDF), h₀ is the smoothing parameter associated with Y, and K_γ(X_i, x) is a product kernel such as that defined in Equation 7.16, where each univariate continuous kernel has been divided by its respective bandwidth for notational simplicity. Li and Racine (2008) demonstrate that

\[
(n h_1 \cdots h_q)^{1/2}\left(\tilde F(y|x) - F(y|x) - \sum_{s=1}^{q} h_s^2 B_{1s}(y|x) - \sum_{s=1}^{r} \lambda_s B_{2s}(y|x)\right) \to N(0, V(y|x)) \text{ in distribution}, \tag{7.20}
\]

where V(y|x) = κ^q F(y|x)[1 − F(y|x)]/μ(x), B₁ₛ(y|x) = (1/2) κ₂ [2Fₛ(y|x) μₛ(x) + μ(x) Fₛₛ(y|x)]/μ(x), B₂ₛ(y|x) = μ(x)⁻¹ Σ_{zᵈ∈D} Iₛ(zᵈ, xᵈ)[F(y|xᶜ, zᵈ) μ(xᶜ, zᵈ) − F(y|x) μ(x)], κ = ∫ W(v)² dv, κ₂ = ∫ W(v) v² dv, and D is the support of Xᵈ.

7.4.4 Kernel Estimation of a Conditional Quantile

Estimating regression functions is a popular activity for practitioners. Sometimes, however, the regression function is not representative of the impact of the covariates on the dependent variable. For example, when the dependent variable is left (or right) censored, the relationship given by the regression function is distorted. In such cases, conditional quantiles above (or below) the censoring point are robust to the presence of censoring. Furthermore, the conditional quantile function provides a more comprehensive picture of the conditional distribution of a dependent variable than the conditional mean function.

Once we can estimate conditional CDFs, estimating conditional quantiles follows naturally: having estimated the conditional CDF, we simply invert it at the desired quantile, as described below. A conditional αth quantile of a conditional distribution function F(·|x) is defined by (α ∈ (0, 1))

\[
q_\alpha(x) = \inf\{y : F(y|x) \ge \alpha\} = F^{-1}(\alpha|x).
\]

Or equivalently, F(q_α(x)|x) = α. We can directly estimate the conditional quantile function q_α(x) by inverting the estimated conditional CDF, i.e.,

\[
\hat q_\alpha(x) = \inf\{y : \hat F(y|x) \ge \alpha\} \equiv \hat F^{-1}(\alpha|x).
\]

Li and Racine (2008) demonstrate that

\[
(n h_1 \cdots h_q)^{1/2}\left[\hat q_\alpha(x) - q_\alpha(x) - B_{n,\alpha}(x)\right] \to N(0, V_\alpha(x)) \text{ in distribution}, \tag{7.21}
\]

where V_α(x) = α(1 − α) κ^q / [f²(q_α(x)|x) μ(x)] ≡ V(q_α(x)|x)/f²(q_α(x)|x) (since α = F(q_α(x)|x)).
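The CDF-plus-inversion route lends itself to a very short sketch. The following Python snippet is illustrative only: it treats a single continuous covariate with a Gaussian kernel, takes G(·) to be the standard normal CDF, fixes the bandwidths by hand (Li and Racine 2008 use a data-driven selector), and inverts the estimated CDF on a grid; the data are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the conditional CDF estimator of Eq. 7.19 and the quantile
# estimator q_hat_alpha(x) of Section 7.4.4 for one continuous covariate.

def conditional_cdf(y, x, Y, X, h0, hx):
    w = norm.pdf((X - x) / hx) / hx          # continuous kernel divided by its bandwidth
    return np.sum(norm.cdf((y - Y) / h0) * w) / np.sum(w)

def conditional_quantile(alpha, x, Y, X, h0, hx):
    """q_hat_alpha(x) = inf{y : F_hat(y|x) >= alpha}, approximated on a grid."""
    grid = np.linspace(Y.min(), Y.max(), 512)
    F = np.array([conditional_cdf(y, x, Y, X, h0, hx) for y in grid])
    return grid[np.argmax(F >= alpha)]       # first grid point where F_hat >= alpha

# Hypothetical data: Y | X = x ~ N(sin(x), 0.25).
rng = np.random.default_rng(2)
X = rng.uniform(0, np.pi, size=400)
Y = np.sin(X) + 0.5 * rng.standard_normal(400)
print(conditional_quantile(alpha=0.5, x=np.pi / 2, Y=Y, X=X, h0=0.2, hx=0.3))
```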
7.4.5 Binary Choice and Count Data Models

Another application of kernel estimates of PDFs with mixed data involves the estimation of conditional mode models. By way of example, consider some discrete outcome, say Y ∈ S = {0, 1, ..., c − 1}, which might denote, for instance, the number of successful patent applications by firms. We define the conditional mode of y|x by

\[
m(x) = \max_y\, g(y|x). \tag{7.22}
\]

In order to estimate a conditional mode m(x), we need to model the conditional density. Let us call m̂(x) the estimated conditional mode, which is given by

\[
\hat m(x) = \max_y\, \hat g(y|x), \tag{7.23}
\]

where ĝ(y|x) is the kernel estimator of g(y|x) defined in Equation 7.18.

7.4.6 Kernel Estimation of Regression Functions

The local constant (Nadaraya 1965; Watson 1964) and local polynomial (Fan 1992) estimators are perhaps the most well known of all kernel methods. Racine and Li (2004) and Li and Racine (2004) propose local constant and local polynomial estimators of regression functions defined over categorical and continuous data types. To extend these popular estimators so that they can handle both categorical and continuous regressors requires little more than replacing the traditional kernel function with the generalized kernel given in Equation 7.16. That is, the local constant estimator defined in Equation 7.7 would then be

\[
\hat g(x) = \frac{\sum_{i=1}^{n} Y_i\, K_\gamma(X_i, x)}{\sum_{i=1}^{n} K_\gamma(X_i, x)}. \tag{7.24}
\]

Racine and Li (2004) demonstrate that

\[
\sqrt{n \hat h^p}\,\left(\hat g(x) - g(x) - \hat B(\hat h, \hat\lambda)\right)\Big/\hat\sigma(x) \to N(0, 1) \text{ in distribution}. \tag{7.25}
\]

See Racine and Li (2004) for further details.

7.5 Summary

We survey recent developments in the kernel estimation of objects defined over categorical and continuous data types. We focus on theoretical underpinnings, turning first to kernel methods for categorical data only. We pay close attention to recent theoretical work that draws links between kernel methods and Bayesian methods, and we also highlight the behavior of kernel methods in the presence of irrelevant covariates. Each of these developments leads to kernel estimators that diverge from more traditional kernel methods in a number of ways, and sets the stage for the mixed-data kernel methods which we briefly discuss. We hope that readers are encouraged to pursue these methods, and we draw the reader's attention to an R package titled "np" (Hayfield and Racine 2008) that implements a range of the approaches discussed above. A number of relevant examples can also be found in Hayfield and Racine (2008), and we direct the interested reader to the applications contained therein.
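As a concrete wrap-up of the mixed-data machinery surveyed above, the following Python sketch implements the local constant estimator of Equation 7.24 with a Gaussian kernel for the continuous regressor and an Aitchison–Aitken kernel for the unordered one. It is a toy version only, with hand-picked bandwidths and made-up data; the np package provides cross-validated implementations of these estimators in R.

```python
import numpy as np

# Sketch of the mixed-data local constant estimator of Eq. 7.24:
# g_hat(x) = sum_i Y_i K_gamma(X_i, x) / sum_i K_gamma(X_i, x).
# Bandwidths (h, lam) are fixed for illustration; Racine and Li (2004)
# select them by cross-validation.

def local_constant(xc, xd, Xc, Xd, Y, h, lam, c):
    Kc = np.exp(-0.5 * ((Xc - xc) / h) ** 2)            # continuous kernel (constants cancel)
    Kd = np.where(Xd == xd, 1.0 - lam, lam / (c - 1))   # unordered categorical kernel
    K = Kc * Kd                                         # generalized product kernel K_gamma
    return np.sum(Y * K) / np.sum(K)

# Hypothetical data: the regression function shifts with the category.
rng = np.random.default_rng(5)
Xd = rng.integers(0, 2, size=300)
Xc = rng.uniform(-2, 2, size=300)
Y = np.sin(Xc) + Xd + 0.2 * rng.standard_normal(300)
print(local_constant(xc=0.5, xd=1, Xc=Xc, Xd=Xd, Y=Y, h=0.3, lam=0.1, c=2))
```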
References

Aitchison, J., and C. G. G. Aitken. 1976. Multivariate binary discrimination by the kernel method. Biometrika 63(3): 413–420.
Efron, B., and C. Morris. 1973. Stein's estimation rule and its competitors – an empirical Bayes approach. Journal of the American Statistical Association 68(341): 117–130.
Fan, J. 1992. Design-adaptive nonparametric regression. Journal of the American Statistical Association 87: 998–1004.
Hall, P., Q. Li, and J. S. Racine. 2007. Nonparametric estimation of regression functions in the presence of irrelevant regressors. The Review of Economics and Statistics 89: 784–789.
Hall, P., J. S. Racine, and Q. Li. 2004. Cross-validation and the estimation of conditional probability densities. Journal of the American Statistical Association 99(468): 1015–1026.
Hayfield, T., and J. S. Racine. 2008. Nonparametric econometrics: the np package. Journal of Statistical Software 27(5). http://www.jstatsoft.org/v27/i05/
Heyde, C. 1997. Quasi-Likelihood and Its Application. New York: Springer-Verlag.
Kiefer, N. M., and J. S. Racine. 2009. The smooth colonel meets the reverend. Journal of Nonparametric Statistics 21: 521–533.
Li, Q., and J. S. Racine. 2003. Nonparametric estimation of distributions with categorical and continuous data. Journal of Multivariate Analysis 86: 266–292.
Li, Q., and J. S. Racine. 2004. Cross-validated local linear nonparametric regression. Statistica Sinica 14(2): 485–512.
Li, Q., and J. S. Racine. 2007. Nonparametric Econometrics: Theory and Practice. Princeton, NJ: Princeton University Press.
Li, Q., and J. S. Racine. 2008. Nonparametric estimation of conditional CDF and quantile functions with mixed categorical and continuous data. Journal of Business and Economic Statistics 26(4): 423–434.
Lindley, D. V., and A. F. M. Smith. 1972. Bayes estimates for the linear model. Journal of the Royal Statistical Society 34: 1–41.
Nadaraya, E. A. 1965. On nonparametric estimates of density functions and regression curves. Theory of Applied Probability 10: 186–190.
Ouyang, D., Q. Li, and J. S. Racine. 2006. Cross-validation and the estimation of probability distributions with categorical data. Journal of Nonparametric Statistics 18(1): 69–100.
Ouyang, D., Q. Li, and J. S. Racine. 2008. Nonparametric estimation of regression functions with discrete regressors. Econometric Theory 25(1): 1–42.
R Development Core Team. 2008. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0. http://www.R-project.org
Racine, J. S., and Q. Li. 2004. Nonparametric estimation of regression functions with both categorical and continuous data. Journal of Econometrics 119(1): 99–130.
Simonoff, J. S. 1996. Smoothing Methods in Statistics. New York: Springer.
Wand, M., and B. Ripley. 2008. KernSmooth: Functions for Kernel Smoothing. R package version 2.22-22. http://CRAN.R-project.org/package=KernSmooth
Wang, M. C., and J. van Ryzin. 1981. A class of smooth estimators for discrete distributions. Biometrika 68: 301–309.
Watson, G. S. 1964. Smooth regression analysis. Sankhya 26: 359–372.
Wooldridge, J. M. 2002. Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press.

8
The Unconventional Dynamics of Economic and Financial Aggregates

Karim M. Abadir and Gabriel Talmain

CONTENTS
8.1 Introduction
8.2 The Economic Origins of the Nonlinear Long-Memory
8.3 Modeling Co-Movements for Series with Nonlinear Long-Memory
    8.3.1 Econometric Model
    8.3.2 Empirical Implications
    8.3.3 Special Case: co-CM
8.4 Further Developments
8.5 Acknowledgments
References

8.1 Introduction

Time series models have provided econometricians with a rich toolbox from which to choose. Linear ARIMA models have been very influential and have enhanced our understanding of many empirical features of economics and finance. As with any scientific endeavor, data have emerged that show the need for refinements and improvements over existing models.

Nonlinear models have gained popularity in recent times, but which one do we choose from? Once we move away from linear models, there is a huge variety on offer. Surely, economic theory should provide the guiding light, insofar as economics and finance are the subject in question. Abadir and Talmain (2002) provided one possible answer. This chapter is mainly a summary of the econometric aspects of the line of research started by that paper.

The main result of that literature is that macroeconomic and aggregate financial series follow a nonlinear long-memory process that requires new econometric tools. It also shows that integrated series (which are a special case of the new process) are not the norm in our subject, and it proposes a new approach to econometric modeling.

8.2 The Economic Origins of the Nonlinear Long-Memory

Abadir and Talmain (AT) started with a micro-founded macro model.
It was a standard real business cycle (RBC) model, except that it allowed for heterogeneity: the "representative firm" assumption was dropped. They worked out the intertemporal general equilibrium solution for the economy, and the result was an explicit dynamic equation for GDP and all the variables that move along with it.

It was well known, long before AT, that heterogeneity and aggregation led to long memory; e.g., see Robinson (1978) and Granger (1980) for a start of the literature on linear aggregation of ARIMA models, and Granger and Joyeux (1980) and Hosking (1981) for the introduction of long-memory models.¹ But in economics there is an inherent nonlinearity which makes linear aggregation results incomplete. Let us illustrate the nonlinearity in the simplest possible aggregation context; see AT for the more general CES-type aggregation. Decompose GDP, denoted by Y, into the outputs Y(1), Y(2), ... of firms (alternatively, sectors) in the economy as

\[
Y := Y(1) + Y(2) + \cdots = e^{y(1)} + e^{y(2)} + \cdots,
\]

where we write the expression in terms of y(i) := log Y(i) (i = 1, 2, ...) to consider percentage changes in Y(i) (and to make sure that models to be chosen for y(i) keep Y(i) > 0, but this can be achieved by other methods too). With probability 1,

\[
e^{y(1)} + e^{y(2)} + \cdots \neq e^{y(1) + y(2) + \cdots},
\]

where the right-hand side is what linear aggregation entails. The right-hand side is the aggregation considered in the literature, typically with y(i) ~ ARIMA(p_i, d_i, q_i), but it is not what is needed in macroeconomics. AT (especially p. 765) show that important features are missed by linearization when aggregating dynamic series.

One implication of the nonlinear aggregation is that the autocorrelation function (ACF) of the logarithm of GDP, and of other variables moving with it, takes the common form

\[
\rho_\tau := \frac{\operatorname{cov}(y_t, y_{t-\tau})}{\sqrt{\operatorname{var}(y_t)\operatorname{var}(y_{t-\tau})}} = \frac{1 - a\left[1 - \cos(\,\cdot\,)\right]}{1 + b\,\tau^{c}}, \tag{8.1}
\]

where the subscript of y denotes the time period and a, b, c depend on the parameters of the underlying economy but differ across variables.²

¹ A time series is said to have long memory if its autocorrelations dampen very slowly, more so than the exponential decay rate of stationary autoregressive models but faster than the permanent memory of unit roots. Unlike the latter, long-memory series revert to their (possibly trending) means.

FIGURE 8.1
ACF of the log of U.S. real GDP per capita over 1929–2004 (sample ACF over lags 0–54; legend: GDP per capita (real), AT fit, AR fit).

Abadir, Caggiano, and Talmain (2006) tried this on all the available macroeconomic and aggregate financial data, about twice as many series as (and including the ones in) Nelson and Plosser (1982). The result was an overwhelming rejection of AR-type models and the shape they imply for ACFs, as opposed to the one implied by Equation 8.1. For example, for the ACF of the log of U.S. real GDP per capita over 1929–2004, Figure 8.1 presents the fit of the best AR(p) model (it turns out that p = 2 with one root of almost 1) by the undecorated solid line, compared to the fit of Equation 8.1 by nonlinear LS. Linear models, like ARIMA, are simply incapable of allowing for the sharp turning points that we can see in the decay of memory.
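The AT curve in Figure 8.1 is obtained by fitting Equation 8.1 to the sample ACF by nonlinear least squares, and that kind of fit is easy to sketch. The Python snippet below is illustrative only: the argument of the cosine is written as theta*tau purely as a stand-in (it is an assumption for this sketch, not the exact form of Equation 8.1), the series is simulated rather than the GDP data, and b and c are constrained to be positive as in footnote 2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative nonlinear-LS fit of an ACF with the general shape of Eq. 8.1
# to a sample ACF, in the spirit of Figure 8.1. The cosine's argument
# (theta * tau) is a hypothetical stand-in, and the data are simulated.

def sample_acf(y, max_lag):
    """Sample autocorrelations rho_0, ..., rho_max_lag."""
    y = y - y.mean()
    denom = np.dot(y, y)
    return np.array([np.dot(y[: len(y) - k], y[k:]) / denom for k in range(max_lag + 1)])

def at_shape(tau, a, b, c, theta):
    """ACF shape (1 - a * (1 - cos(theta * tau))) / (1 + b * tau**c)."""
    return (1.0 - a * (1.0 - np.cos(theta * tau))) / (1.0 + b * tau ** c)

# Hypothetical trending series standing in for the log of real GDP per capita.
rng = np.random.default_rng(3)
t = np.arange(76)
y = 0.02 * t + np.cumsum(0.01 * rng.standard_normal(76))

lags = np.arange(0, 41, dtype=float)
acf = sample_acf(y, 40)
p0 = [0.1, 0.01, 1.0, 0.05]
bounds = ([0.0, 1e-8, 1e-8, 0.0], [2.0, 10.0, 5.0, np.pi])   # enforce b, c > 0
params, _ = curve_fit(at_shape, lags, acf, p0=p0, bounds=bounds)
print(dict(zip(["a", "b", "c", "theta"], np.round(params, 4))))
```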
The empirical ACFs show that there is typically an initial period where persistence is high, almost like a unit root with a virtually flat ACF, then a sudden loss of memory. We can illustrate this also in the time domain in Figure 8.2, where we see that the log of real GDP per capita is evolving around a linear time trend, well within small variance bands that do not expand over time (unlike unit-root processes, whose variance expands linearly to infinity as time passes).

ACFs of this shape have important implications for macroeconomic policymakers, as Abadir, Caggiano, and Talmain (2006) show. For example, if an economy is starting to slow down, such ACFs predict that it will produce a long sequence of small signs of a slowdown followed by an abrupt decline. When only the small signs have appeared, no-one fitting a linear (e.g., AR) ...

² The restrictions b, c > 0 apply, but the restriction on a cannot be expressed explicitly.

...

... relation between y_t and x_t by making y_t a function of {x_{t-j}}, j = 0, ..., p. If p was large, as might be the case for the effect of output (x_t) upon investment (y_t), then some restrictions were imposed upon the shape of the lagged effects of a change in x_t upon y_t. A popular version of this was termed "Almon" ... (see the sketch below).

... presence or absence of stability relatively cheaply and easily.²

² Wallis (1995) has a good discussion of these issues.

9.3 Second Generation (2G) Models

These began to emerge in the early 1970s and stayed around for 10–20 years. Partly stimulated by inflation, and partly by the oil price shocks of the early ...

... from well-defined optimization choices for households and firms, and if rules were implemented to describe the policy decisions of monetary and fiscal authorities. In relation to the latter, external debt was taken to be a fixed proportion of GDP, and fiscal policy was varied to attain this. Monetary authorities ...

... equilibrium. With these ideas, and expectations handled as described above, one might think of the 3G Phillips curve as effectively having the form

\[
\pi_t = \alpha_1 E_t \pi_{t-1} + (1 - \alpha_1) E_t \pi_{t+1} + \delta\, mc_t + (p_{t-1} - mc_{t-1}), \tag{9.11}
\]

where mc_t was the log of nominal marginal cost and mc_{t-1} − p_{t-1} was lagged ...

... in the case where z_t was the log of the price level, one could add on an output gap to the equation that came from the second-stage optimization. Over time this emphasis on "gaps" gave rise to the miniature models known as New Keynesian, and today these small models are often used for policy analysis and some forecasting, e.g., Berg, Karam, and Laxton ...

... a function of financial wealth and labor income. One reason for doing so is that it is easier to modify the model design through its Euler equations. An example is the extra dynamics introduced into consumption decisions by the use of habit persistence. This can take a number of forms, but often results ...
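The Almon device mentioned above restricts the distributed-lag weights to lie on a low-order polynomial in the lag index, so that only a handful of polynomial coefficients need to be estimated. The following rough sketch (hypothetical data, lag length, and polynomial degree; not code from the chapter) shows the idea:

```python
import numpy as np

# Sketch of an Almon (polynomial distributed lag) regression: the lag weights
# beta_j, j = 0..p, are restricted to beta_j = sum_k gamma_k * j**k for a
# low-order polynomial, so only the gamma's are estimated by OLS.

rng = np.random.default_rng(4)
T, p, degree = 200, 8, 2
x = rng.standard_normal(T)
true_beta = np.array([0.5 * (1 - (j / p - 0.5) ** 2) for j in range(p + 1)])  # smooth lag profile
y = np.zeros(T)
for j in range(p + 1):
    y[p:] += true_beta[j] * x[p - j:T - j]
y[p:] += 0.1 * rng.standard_normal(T - p)

# Lag matrix X[t, j] = x_{t-j}, collapsed with the polynomial basis J[j, k] = j**k.
X = np.column_stack([x[p - j:T - j] for j in range(p + 1)])
J = np.vander(np.arange(p + 1), degree + 1, increasing=True)
Z = X @ J                                      # regressors for the polynomial coefficients
gamma, *_ = np.linalg.lstsq(Z, y[p:], rcond=None)
beta_hat = J @ gamma                           # implied restricted lag weights
print(np.round(beta_hat, 3))
```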
... distributed-lag models (e.g., used in co-integration analysis) as one of the special cases. The vector u contains the residual dynamics of the adjustment of z toward its fundamental value X. By definition, u is centered around zero and is mean-reverting, otherwise z will not revert to its fundamental ...

... where the right-hand side variable had a zero coefficient and z_t = u_t. ...

8.4 Further Developments

Work is currently being carried out on a number of developments of these models and the tools required to estimate them and test hypotheses about their parameters. The topic is less than a decade old at the time of writing this chapter, but we hope to have demonstrated its potential importance. A simple time-domain parameterization of the CM(, ...

References

Abadir, K. M., W. Distaso, and L. Giraitis. 2007. Nonstationarity-extended local Whittle estimation. Journal of Econometrics 141: 1353–1384.
Abadir, K. M., and G. Talmain. 2002. Aggregation, persistence and volatility in a macro model. Review of Economic Studies 69: 749–779.
Abadir, K. M., and G. Talmain. 2005. Autocovariance functions of series and of their transforms. Journal of Econometrics 124: 227–252.
Abadir, K. M., and G. Talmain. ...

... methods (when the system was nonlinear). The software developed to do so was an important innovation of this generation of models. Chris Higgins, one of Klein's students, and later Secretary of the Australian Treasury, felt that any assurance on system performance required that modelers should "simulate early and simulate often." For that, computer power and good software were needed. It was also clear that ...

FIGURE 8.2
Log of real GDP per capita, 1929–2001.