Handbook of Econometrics, Vols. 1-5: Chapter 16

Ch. 16: Monte Carlo Experimentation

1. Monte Carlo experimentation

1.1. Introduction

At the outset, it is useful to distinguish Monte Carlo methods from distribution sampling, even though their application in econometrics may seem rather similar. The former is a general approach whereby mathematical problems of an analytical nature which prove technically intractable (or whose solution involves prohibitively expensive labour costs) can be "solved" by substituting an equivalent stochastic problem and solving the latter. In contrast, distribution sampling is used to evaluate features of a statistical distribution by representing it numerically and drawing observations from that numerical distribution. The latter has been used in statistics from an early date, and important examples of its application are Student (1908), Yule (1926) and Orcutt and Cochrane (1949), inter alia. Thus, to investigate the distribution of the mean of random samples of T observations from a distribution which is uniform between zero and unity, one could simply draw a large number of samples of that size from (say) a set of one million evenly spaced numbers in the interval [0,1] and plot the resulting distribution. Such a procedure (that is, numerically representing a known distribution and sampling therefrom) is invariably part of a Monte Carlo experiment [the name deriving from Metropolis and Ulam (1949)], but often only a small part.

To illustrate a Monte Carlo experiment, consider calculating:

  I = ∫_a^b f(x) dx (say),   (1)

for a complicated function f(x) whose integral is unknown. Introduce a random variable ν ∈ [a, b] with a known density p(·) and define η = f(ν)/p(ν); then:

  E(η) = ∫_a^b [f(x)/p(x)] p(x) dx = I.   (2)

Thus, calculating E(η) will also provide I, and a "solution" is achieved by estimating E(η) [see Sobol' (1974)], highlighting the switch from the initial deterministic problem (evaluate I) to the stochastic equivalent (evaluate the mean of a random variable). Quandt in Chapter 12 of this Handbook discusses the numerical evaluation
of integrals in general. Rather clearly, distribution sampling is involved in (2), but the example also points up important aspects which will be present in later problems. Firstly, p(·) ...

As before, analytical calculation of E(α̂) is presumed intractable for the purposes of the illustration [but see, for example, Hurwicz (1950), Kendall (1954), White (1961), Shenton and Johnson (1965), Phillips (1977a) and Sawa (1978)], so that E(α̂) has to be estimated. Again, the choice of estimator of E(α̂) arises, with some potential distribution of outcomes (imprecision); only estimating E(α̂) at a few points in Θ × 𝒯 is referred to as a "pilot Monte Carlo study" and can do little more than provide a set of numbers of unknown generality (specificity). Since E(α̂) depends on θ and T, it must be re-estimated as θ and T vary, but the dependence can be expressed in a conditional expectations formula:

  E(α̂ | θ, T) = G₁(θ, T),   (7)

and frequently the aim of a Monte Carlo study is to evaluate G₁(θ, T) over Θ × 𝒯. However, since E(α̂) need not vary with all the elements of (θ, T), it is important to note any invariance information; here, α̂ is independent of σ_ε², which, therefore, is fixed at unity without loss of generality. Also, asymptotic distributional results can help in estimating E(α̂) and in checking the experiments conducted; conversely, estimation of E(α̂) checks the accuracy of the asymptotic results for T ∈ 𝒯. Thus, we note:

  √T(α̂ − α) →d N(0, 1 − α²).   (8)

It is important to clarify what Monte Carlo can, and cannot, contribute towards evaluating G₁(θ, T) in (7). As perusal of recent finite sample distributional results will reveal (see, for example, Phillips, Chapter 8 in this Handbook, and Rothenberg, Chapter 15 in this Handbook), functions such as G₁(θ, T) tend to be extremely complicated series of sub-functions of (θ, T) [for the model (3)-(6), see, for example, the expansions in Shenton and Johnson (1965)]. There is a negligible probability of simulation results establishing approximations to G₁(θ, T)
which are accurate in (say) a Taylor-series sense, such that if terms to O(T^{-n}) are included, these approximate the corresponding terms of G₁(·), with the remainder being small relative to retained terms [compare, for example, equations (68) and (69) below]: see White (1980a) for a general analysis of functional form mis-specification. Indeed, draconian simplifications and a large value of N may be necessary to establish results to even O(T^{-1}), noting that many asymptotic results are accurate only to O(T^{-1/2}) anyway. Rather, the objective of establishing "analogues" of G₁(·) [denoted by H₁(θ, T)] is to obviate redoing a Monte Carlo for every new value of (θ, T) ∈ Θ × 𝒯 (which is an expensive approach) by substituting the inexpensive computation of E(α̂ | θ, T) from H₁(·). Consequently, one seeks functions H₁(·) such that over Θ × 𝒯 the inaccuracy of predictions of E(α̂) is of the same order as errors arising from direct estimation of E(α̂) by distribution sampling for a prespecified desired accuracy dependent on N (see, for example, Table 6.1 below). In practice, much of the inter-experiment variation observed in Monte Carlo can be accounted for by asymptotic theory [see, for example, Hendry (1973)], and as shown below, often H₁(·) can be so formulated as to coincide with G₁(·) for sufficiently large T. The approach herein seeks to ensure simulation findings which are at least as accurate as simply numerically evaluating the relevant asymptotic formulae. If the coefficients of (θ, T) in G₁(·) are denoted by β, then by construction, H₁(·)
depends on a (many → few) reparameterization γ = h(β), defined by orthogonalising excluded effects with respect to included ones, yet ensuring coincidence of H₁(·) and G₁(·) for large enough T. For parsimonious specifications of γ, simulation-based H₁(·) can provide simple yet acceptably accurate formulae for interpreting empirical econometric evidence. Similar considerations apply to other moments, or functions of moments, of econometric techniques.

1.2. Simulation experiments

While it is not a universally agreed terminology, it seems reasonable to describe Monte Carlo experiments as "simulation", since they will be conducted by simulating random processes using random numbers (with properties analogous to those of the random processes). Thus, for calculating I, one needs random numbers ν_i ∈ [a, b] drawn from a distribution p(·), with η_i = f(ν_i)/p(ν_i). In the second example, random numbers ε_t ~ IN(0, 1) and y₀ ~ N(0, (1 − α²)^{-1}) are required (see Section 3.1 for a brief discussion of random number generators). The basic naive experiment (which will remain a major component of more "sophisticated" approaches) proceeds as follows. Consider a random sample (x₁, ..., x_N) drawn from the relevant distribution D(·), where E(x_i) = μ and E(x_i − μ)² = σ²; then:

  x̄ = N^{-1} Σ_{i=1}^{N} x_i has E(x̄) = μ and E(x̄ − μ)² = σ²/N.   (9)

This well-known result is applied in many contexts in Monte Carlo, often for {x_i} which are very complicated functions of the original random variables. Also, for large N, x̄ is approximately normally distributed around μ, and if σ̂² = (N − 1)^{-1} Σ_{i=1}^{N} (x_i − x̄)², then E(σ̂²) = σ².

Consequently, an unknown E(η) can be estimated using means of simple random samples, with an accuracy which is itself estimable (from N^{-1}σ̂²) and which improves as N increases (the standard error of x̄, which has the same units as the {x_i}, falls like N^{-1/2}), so that "reasonable" accuracy is easy to obtain, whereas high precision is hard to achieve. Returning to the two examples, the relevant
estimators are:

  η̄ = N^{-1} Σ_{i=1}^{N} η_i, with E(η̄) = E(η) = I,   (11)

  ᾱ̂ = N^{-1} Σ_{i=1}^{N} α̂_i, with E(ᾱ̂) = E(α̂),   (12)

where each α̂_i is based on an independent set of (y₀, ε₁, ..., ε_T). Furthermore, letting E(α̂ − E(α̂))² = V, then:

  V̂ = (N − 1)^{-1} Σ_{i=1}^{N} (α̂_i − ᾱ̂)² has E(V̂) = V,   (13)

and

  E(ᾱ̂ − E(α̂))² = V/N.   (14)

Thus, the approximation ᾱ̂ ≈ N(E(α̂), V/N) provides a basis for constructing confidence intervals and hypothesis tests about E(α̂). In what follows, an experiment usually denotes an exercise at one point in the space Θ × 𝒯 (generally replicated N times), with K experiments conducted in total. However, where the context is clear, "an experiment" may also denote the set of K sub-experiments investigating the properties of a single econometric method.

1.3. Experimentation versus analysis

The arguments in favour of using experimental simulations for studying econometric methods are simply that many problems are analytically intractable, or analysis thereof is too expensive, and that the relative price of capital to labour has moved sharply and increasingly in favour of capital [see, for example, Summers (1965)]. Generally speaking, compared to a mathematical analysis of a complicated estimator or test procedure, results based on computer experiments are inexpensive and easy to produce. As a consequence, a large number of studies ...

There are several intermediate stages involved in achieving this objective. Firstly, as complete an analysis as feasible of the econometric model should be undertaken (see Section 2). Then, that model should be embedded in a Monte Carlo model which exploits all the information available to the experimenter, and provides an appropriate design for the experiments to be undertaken (see Section 3). Thirdly, simulation-specific methods of intra-experiment control should be developed (see Section 4) and combined with covariance techniques for estimating response surfaces between experiments (Section 5). The simple autoregressive model in (3)-(6) is considered throughout as an illustration and, in Section 6, results
are presented relating to biases, standard errors and power functions of tests. Finally, in Section 7, various loose ends are briefly discussed, including applications of simulation techniques to studying estimated econometric systems (see Fair, Chapter 33 in this Handbook) and to the evaluation of integrals [see Quandt, Chapter 12 in this Handbook, and Kloek and van Dijk (1978)]. Three useful background references on Monte Carlo are Goldfeld and Quandt (1972), Kleijnen (1974) and Naylor (1971).

2. The econometric model

2.1. The data generation process

The class of processes chosen for investigation defines, and thereby automatically restricts, the realm of applicability of the results. Clearly, the class for which the analytical results are desired must be chosen for the simulation! For example, one type of data generation process (DGP) which is often used is the class of stationary, complete, linear, dynamic, simultaneous equations systems with (possibly) autocorrelated errors, or special cases thereof. It is obvious that neither experimentation nor analysis of such processes can produce results applicable to (say) non-stationary or non-linear situations, and if the latter is desired, the DGP must encompass this possibility. Moreover, either or both approaches may be further restricted in the number of equations or parameters, or regions of the parameter space, to which their results apply. Denote the parameters of the DGP by (θ, T) (retaining a separate identity for T because of its fundamental role in finite sample distributions), with the parameter space Θ × 𝒯. It is important to emphasize that, by the nature of computer experimentation, the DGP is fully known to the experimenter: in particular, the forms of the equations, the numerical values of their parameters and the actual values of the random numbers are all known. The use of such information in improving the efficiency of the experiments is discussed below, but its immediate use is that the correct likelihood function for
the DGP parameters ...

5.3. Investigating validity

In addition to the points noted above, each of the conjectured response surfaces entails also that γ₀ equals its asymptotic value in order to reproduce G_i(·) in large samples, and this is potentially testable from the regression estimates. Also, under the null the error variance (σ²) should be unity, and the residual sum of squares will be distributed as χ²(r, 0) for r degrees of freedom in the relevant regression, since for correct specifications, rσ̂²/σ² ~ χ²(r, 0). Confidence limits for σ² for various r have been tabulated [see, for example, Croxton, Cowden and Klein (1968, table L)], but are easily calculated in any case. As with any regression analysis, the selected response surfaces can be tested by a variety of Lagrange Multiplier based diagnostics (see, for example, Engle, Chapter 13 in this Handbook), of which predictive tests are one of the more important. If K experiments are conducted and K₁ used for selecting and estimating the response surfaces, K − K₁ should be used to test the validity of the results, to ensure that some credibility beyond mere description attaches to the finally chosen surrogates for G_i(·) [see, for example, Chow (1960)]. Inappropriate choices of H_i(·) could induce either or both of autocorrelation and heteroscedasticity in the residuals. These problems might be detectable directly. The former can be tested by, for example, the Durbin-Watson test when a suitable data ordering exists [as in Mizon and Hendry (1980) or Maasoumi and Phillips (1982)]. A valuable diagnostic for the latter is the general test for functional mis-specification in White (1980b), who also derives a robust estimator of the estimated-parameter variances to support valid, if non-optimal, inference despite heteroscedasticity; both of these statistics are reported below. Discrepancies between the conventional and "robust" coefficient variances are indicative of mis-specification, and White (1980a) presents a test based on this notion.
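The validity check described above can be sketched as follows. The code below is an illustrative reconstruction in Python, not the chapter's own programs: the (α, T) grid, the replication count N, and the single-regressor bias surface (ψ_T1 − α)/S1 ≈ β·α/(T·S1) are assumptions chosen to mirror Section 6. It runs a small factorial design of AR(1) experiments, fits the weighted response surface, and computes the residual sum of squares which, under a correct specification with unit error variance, should behave like a χ²(r) draw.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_bias_estimate(alpha, T, N=400):
    """One Monte Carlo 'experiment' (hypothetical helper): estimate the bias
    E(alpha_hat) - alpha for y_t = alpha*y_{t-1} + e_t by averaging N OLS
    replications; return the bias estimate and its simulation standard error
    (the quantity called S1 in the text)."""
    est = np.empty(N)
    for i in range(N):
        e = rng.standard_normal(T)
        y = np.empty(T)
        y[0] = e[0] / np.sqrt(1 - alpha**2)   # y_0 ~ N(0, (1 - alpha^2)^{-1})
        for t in range(1, T):
            y[t] = alpha * y[t - 1] + e[t]
        y0, y1 = y[:-1], y[1:]
        est[i] = (y0 @ y1) / (y0 @ y0)        # OLS slope on one replication
    bias = est.mean() - alpha
    se = est.std(ddof=1) / np.sqrt(N)
    return bias, se

# A small factorial design in (alpha, T), replicated N = 400 times each.
design = [(a, T) for a in (0.3, 0.6, 0.9) for T in (10, 20, 40)]
bias, se, x = [], [], []
for a, T in design:
    b, s = ar1_bias_estimate(a, T)
    bias.append(b); se.append(s); x.append(a / T)

# Weighted response surface (bias/SE) = beta * (alpha/T)/SE, no intercept.
# Theory suggests bias is roughly -2*alpha/T to O(1/T), so beta should be
# near -2; under a correct specification the weighted residuals are ~ N(0,1)
# and the residual sum of squares should look like a chi^2(r) draw.
ywt = np.array(bias) / np.array(se)
xwt = np.array(x) / np.array(se)
beta = (xwt @ ywt) / (xwt @ xwt)
rss = ((ywt - beta * xwt) ** 2).sum()
r = len(design) - 1
print(f"beta = {beta:.2f}, RSS = {rss:.1f} on {r} df")
```

With K = 9 experiments and one estimated coefficient, r = 8, so an RSS far above (say) the 0.999 quantile of χ²(8) (about 26.1) would signal mis-specification of the surface, which is exactly the check made via rσ̂²/σ² ~ χ²(r, 0) above.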
Further tests against specific alternatives can be derived following the procedures in Engle (1982).

As noted above, the main advantages of estimated response surfaces over tabulation are their ability to summarize large and non-memorizable quantities of information in simple approximations which, in practice, seem able to account for the bulk of inter-experiment variation in simulation outcomes (especially for inconsistent estimators), using formulae known to be correct for sufficiently large values of T. A corresponding disadvantage is that the dependence of the approximation error on the invariants of the data generation process is unknown, but in a well defined parameter space it should be estimable for the purposes of predicting outcomes at other points within the sampled set [i.e. for experiments which could have been undertaken, but were not, as in Hendry and Harrison (1974)]. Conversely, relative to analytical derivations, the advantages are the use of less restrictive data generating processes than existing techniques can study analytically, as well as exploiting the falling relative price of capital to economise on scarce labour resources; whereas the disadvantages are the inherent inexactitude of estimated response surfaces and the absence of a complete understanding of the limitations of the experimentally determined numerical results. As analytical methods improve, the frontier at which simulation is substituted for analysis will move outwards, but this is unlikely to obviate any need for efficient Monte Carlo. Equally, simulation-based findings seem most helpful when they are tightly circumscribed by analytical results, a point illustrated by the experimental evidence reported in Section 6 [for further discussion, see Hendry (1982)].

6. An application to a simple model

6.1. Formulation of the experiments

To illustrate the application of the experimental approach, we consider the model in (3)-(6), as this highlights the principles
involved and indicates what can and cannot be achieved by experimentation. The main objectives of the following experiments (considered as a substantive study) are to:

(a) estimate and test response surfaces for ψ_T1 = E(α̂), ψ_T2 = E(α̂ − E(α̂))², ESE = E(V̂(α̂)^{1/2}), and P = Pr(Z ≥ 3.84), basing these on the ideas developed in Section 5;
(b) investigate the efficiency gains from using the CVs α* for α̂ and σ*² for σ̂_ε², where σ*² ~ T^{-1}χ²(T, 0) [so that E(σ*²) = σ_ε² and V(σ*²) = 2σ_ε⁴/T];
(c) relate simulation estimates of the ψ_Tk to their asymptotic counterparts; and
(d) evaluate the usefulness of asymptotic results as inter-experiment controls.

To recapitulate, the main simulation estimators of the unknown ψ_T1, etc. are given by:

  ψ̂_T1 = ᾱ̂ = N^{-1} Σ α̂_i (the ψ̃ are computed as for the ψ̂, but with α̂_i − α*_i + α replacing α̂_i);

  ψ̂_T2 = (N − 1)^{-1} Σ (α̂_i − ψ̂_T1)²,   (51)

  ψ̃_T2 = ψ̂_T2 − V̂(α*) + T^{-1}(1 − α²),   (52)

  σ̂² = N^{-1} Σ σ̂_εi², and σ̃² = σ̂² − N^{-1} Σ σ*_i² + σ_ε²,   (53)

  P̂ = N^{-1} Σ I_i,   (54)

where I_i = 1 if Z_i ≥ 3.84 (for testing H₀: α = 0) and I_i = 0 otherwise, and ESE is estimated by ÊSE = N^{-1} Σ V̂(α̂_i)^{1/2}. Direct estimation of the cumulative distribution function of α̂, Ψ_T(α̂), was not an objective of this set of experiments, although it is obviously a legitimate objective in general [see, for example, Orcutt and Winokur (1969)]. The sampling variances of the various simulation estimators were also estimated by the following formulae:

  v(ψ̂_T1) = N^{-1} ψ̂_T2,   (55)

  v(ψ̃_T1) = N^{-1} [(N − 1)^{-1} Σ (α̂_i − α*_i − ψ̂_T1 + ᾱ*)²],   (56)

from which the efficiency gain due to using α* is given by EG = v(ψ̂_T1)/v(ψ̃_T1). Next:

  v(ψ̂_T2) = N^{-1}(μ̂₄ − ψ̂²_T2), where μ̂₄ = (N − 1)^{-1} Σ (α̂_i − ψ̂_T1)⁴,   (57)

  v(ÊSE) = N^{-1} [(N − 1)^{-1} Σ (V̂(α̂_i)^{1/2} − ÊSE)²],   (58)

and the efficiency gain from σ*² is SEG = v(σ̂²)/v(σ̃²). Finally, v(P̂) follows from (39), but, following Cox (1970, ch. 6) and Mizon and Hendry (1980), (49) is formulated as:

  L*(P̂) = ln[(P̂ + ζ)/(1 − P̂ + ζ)] for ζ = (2N)^{-1},   (59)

and L(P) is similar but replaces the second term by ln[P/(1 − P)]. Observations with P̂ = 0 or 1 are automatically deleted from the regression. We also deleted those for which (1 − P*) < 10^{-5}, for P*
in (42), when using (40) to approximate the unknown L(P) in (60).

The properties of the experimental design are important for achieving the objectives of the study, and "iterative" designs based on a pilot study followed by intensive searches in regions of Θ × 𝒯 where the relevant G_j(·) are least "well-behaved" may be needed. For example, it is difficult to design a single experiment which is "good" for estimating (say) both ψ_T1(·) and P(·). Here, to "cover" the parameter space, a full factorial design based on α = {0, ±0.3, ±0.6, ±0.9} and T = {10, 20, 30, 40} was selected, yielding 28 experiments in all, with σ_ε² = 1 and N = 400 (so that P̂ could be accurate to within 0.0025). It is important to note that the parameter space is now {|α| ≤ 0.9, σ_ε² = 1} and that, as α₀ = 0, λ in (36) is implicitly determined by √T·α, so that φ = Tα²/(1 − α²). Six randomly chosen experiments from the 28 were used for predictive testing.⁷

Finally, first order autoregressive processes have been the subject of extensive analysis and experimentation [see, inter alia, Bartlett (1946), Hurwicz (1950), Kendall (1954), White (1961), Shenton and Johnson (1965), Copas (1966), Orcutt and Winokur (1969), Phillips (1977a) and Sawa (1978); also, Kendall and Stuart (1966, ch. 48) provide a convenient summary of many of the relevant analytical results]. Such known analytical results obviously "prejudice" the precise functions chosen to characterise the G_i(·), and where this has been an important influence, it is noted below.

6.2. Estimating ψ_T1(α, T)

Firstly, the CV α* yielded an average efficiency gain over distribution sampling of 6.4 for trivial extra computational cost. Also, for |α| ≠ 0, H₀: E(α̂) = α was rejected in every experiment using ψ̂_T1, but on occasion was not for |α| = 0.3 using ψ̃_T1. The theoretical and simulation moments of α* matched well, checking the validity of the random numbers used, and corr(α̂, α*) varied from 0.597 to 0.978 as (α, T) went from (−0.9, 10) to (0.0, 40). Thus, by T = 40, the
asymptotic theory worked fairly well for |α| ≤ 0.6. Let T* = T(1 − α²) denote the "effective sample size" [this concept is noted in Sims (1974) and is based on the asymptotic approximations in Hendry (1979)]; then EG was described by:

  ln EG = … + … ln T*,  R² = 0.93, S = 0.21, η₁(6) = 1.6, η₂ = 1.1,   (61)

⁷ I am grateful to Jan Podivinsky and Frank Srba for assistance in conducting and analysing these experiments.

where (·) denotes conventional standard errors, [·] denotes heteroscedasticity-consistent standard errors [see White (1980b)], S = residual standard error, η₁(k) = heteroscedasticity/functional-form mis-specification test based on R²_a in the auxiliary regression with k quadratic variables, using the form R²_a(T − k − l)/((1 − R²_a)k) for l regressors in, for example, (61), approximately distributed as F(k, T − k − l) under the null [see White (1980a)], and η₂ = Chow (1960) test of parameter constancy, distributed as F(6, 22 − l) under the null. This is treated as a Lagrange Multiplier test, and so all regressions quoted are based on the 28 experiments. From (61), EG increases almost proportionately to T* (estimating separate coefficients for ln T and ln(1 − α²) revealed these to be of almost equal magnitude and the same sign). Consequently, in experiments with small T*, CVs like α* may not yield useful efficiency gains; conversely, large T* is required for asymptotic theory to "work well".

Next, the response surface estimates obtained for ψ̂_T1 and ψ̃_T1 were similar, so only the latter are reported. Using the simulation estimated standard errors from (55) (denoted by S1) yielded, for the simplest bias function:

  (ψ̃_T1 − α)/S1 = −(…) α/(T·S1),  R² = 0.97, S = 1.67, η₁(1) = 9.8, η₂ = 0.4.   (62)

While this accounts for 97% of the simulation variance in (ψ̃_T1 − α)/S1 between experiments, η₁(1) rejects homoscedasticity, and the value of S is significantly in excess of unity [27·S² exceeds the 0.001 critical value of χ²(27, 0)]; this confirms that the diagnostic tests can detect mis-specification. Adding the term α/T² yields:

  (ψ̃_T1 − α)/S1 = −(…) α/(T·S1) + (…) α/(T²·S1),  R² = 0.985, S = 1.26,
η₁(3) = 1.3, η₂ = 0.8.   (63)

This is obviously a much better approximation [and is "close" to the theoretical result to O(T^{-2}) of −2α(1/T − 2/T²)], although S remains significantly larger than unity at the 0.05 level.

Very similar results were obtained on replacing S1 by ξ₁ = (T*·N·EG)^{-1/2}; this is unsurprising given that:

  ln S1 = −0.96 ln√(NT*) − 0.58 ln EG − 1.0/T + … α²/T²,  R² = 0.991, S = 0.048, η₁(5) = 1.1, η₂ = 1.5.   (64)

Thus, while additional finite sample effects can be established, most of the between-experiment variance in S1 is attributable to the asymptotic result [note the dependence of EG on the other variables in (64), from (61); also, these equations together imply that S1 ≈ O((NT*²)^{-1/2}), as anticipated]. Noting that T^{-1} − 2T^{-2} ≈ (T + 2)^{-1}, an attempt was made to establish the relevance of α³/T³ [based on Shenton and Johnson (1965)]:

  (ψ̃_T1 − α)/S1 = −1.84 α/[(T + 2)·S1] + 43 α³/(T³·S1),  R² = 0.989, S = 1.09, η₁(2) = 0.7, η₂ = 0.9.   (65)

Since the experimental design actually induced an extremely high correlation between successive odd powers of α, (65) seems a useful approximation to their series expansion:

  E(α̂ − α) = −2α/(T + 2) + [12α³ + …]/(T + 5)^{[3]} + … + 24(T + 12)(T + 10)α⁷/(T + 13)^{[5]} + ⋯,

where T^{[n]} = T(T − 2)⋯(T − 2n + 2). If a larger number of experiments had been conducted, general response surfaces such as (65) might have been estimable directly, given (34). The results herein certainly suggest that it can be worthwhile incorporating terms smaller than just O(T^{-1}). Finally, replacing S1 by ξ₁ in (65) yielded S = 1.20 and η₁(2) = 1.3, so the fit was poorer but not significantly bad, and closely similar coefficient estimates resulted.

Table 6.1 provides some illustrative comparisons of the various regression predictions of biases, together with both analytical results and the direct and CV simulation estimates, including one set of values for which experiments were not conducted. Any of the results in (62)-(65) seems
adequately accurate for practical purposes, and the final numerical-analytical summary is given by:

  E(α̂ − α) ≈ −1.84 α/(T + 2) + 43 α³/T³.

Table 6.1. Comparisons of predicted biases E(α̂ − α): regression predictions from (62), (63) and (65); analytical values (a) −2α/(T + 2), (b) to O(T^{-2}) and (c) exact [(b) and (c) both from Sawa (1978, table 1a)]; and the direct (ψ̂_T1) and CV (ψ̃_T1) simulation estimates, for (α, T) = (0.6, 10), (0.9, 10), (0.6, 30), (0.9, 30) and (0.8, 10).

6.3. Estimating ψ_T2(α, T)

Very similar estimates were produced by ψ̂_T2 and ψ̃_T2, and since variances were estimated only for the former, results are quoted just for these. Firstly, for μ̂₄, with SSD = ψ̂_T2^{1/2}:

  ln μ̂₄ = 3.8 ln SSD + 0.8 + 2.4/T*, with standard errors (0.2) [0.2] and (0.3) [0.3] on the last two coefficients,  R² = 0.98, S = 0.18, η₁(5) = 1.1, η₂ = 0.2.   (66)

Thus, the approximation that μ₄ ≈ 3·SSD⁴ has some support (note that ln 3 = 1.09). However, letting S2 = [v(ψ̂_T2)]^{1/2} from (57), the corresponding regression (67) gave only R² = 0.80, S = 0.25, η₁(1) = 1.0, η₂ = 0.8. Consequently, the asymptotic approximation that V(ψ̂_T2) = 2σ_α⁴/N is not very accurate, and this is reflected in the regressions based on (44b):

  ξ₂ ln(ψ̂_T2/σ_α²) = −(…)/T + (…) α²/T* − (…) α²/[T²(1 − α²)], where ξ₂ = …,
  R² = 0.94, S = 1.26, η₁(6) = 1.5, η₂ = 0.6.   (68)

Although this regression accounts for much of the variance in ψ̂_T2 around σ_α² (and almost all of the variance in ψ̂_T2 itself), S is significantly in excess of unity. Replacing S2 by ξ₂·S2 reduced the response surface error variance to unity, but induced so much collinearity between the transformed regressors that sensible individual coefficient estimates could not be obtained. In no case was the unrestricted coefficient of ln σ_α² significantly different from unity. By way of comparison, Kendall and Stuart (1966, ch. 48) quote an analytical result which suggests (to O(T^{-2})):

  ln(ψ_T2/σ_α²) ≈ −2/T + 8α²/T* − 48α²/T*².   (69)

Thus, both (68) and (69) reveal that ψ_T2 ≈ σ_α² only for large T*.

6.4. Estimating ESE

The response surface based on (47) yielded:

  … + 0.09 ξ₃ (ln σ_ε)/T + …,  R² = 0.64, S = 1.86, η₁(3) = 7.9, η₂ = 1.2,   (70)

where ξ₃ = σ_ε/SSE. ... While this explained 99.996% of the variability in … (consistent with their conjecture that this was an artifact due to reusing the random numbers), a simple response surface based on (60) yielded:

  ξ₄ [L*(P̂) − L(P*)] = (…) φ/T*,  R² = 0.73, S = 1.57, η₁(1) = 0.01, η₂(4, 10) = 0.5,   (75)

where ξ₄ = [N P̂(1 − P̂)/(1 − N^{-1})]^{1/2}. The terms T^{-1} and L(P*)/T were insignificant if added to (75), and the unrestricted coefficient of L(P*) was not significantly different from unity. When α = 0, the rejection frequencies were:

  T:  10     20     30     40
  P̂:  0.053  0.058  0.048  0.045   (mean: 0.051)

all of which are close to the nominal significance level of 0.05. Moreover, (1 + φ) accounted for over 99.9% of the between-experiment variance in the mean of Z, consistent with E(χ²(1, φ)) = 1 + φ. Thus, although S in (75) is significantly in excess of unity, a reasonable summary power function is:

  …   (76)

Finally, the rejection frequency P̂₀ for the true value of α [i.e. α₀ = α in (41), so φ = 0] was investigated:

  P̂₀ = 0.050 + 0.024/T*, with standard errors (0.003) [0.003] and (0.016) [0.008],  R² = 0.08, S = 0.011, η₁(1) = 1.0, η₂ = 0.8,   (77)

so that α₀ = α is indeed rejected around 5% of the time at
all values of T* [note that √(P(1 − P)/N) = 0.011 when P = 0.05 and N = 400]. Overall, the results in (61)-(77) highlight what Monte Carlo can achieve (e.g. simple numerical-analytical formulae) and what it cannot (e.g. it provides little insight into what happens as α → 1; compare Phillips (1977a)). It is not a complete substitute for analysis, but it may be a helpful partner, and it is often a cost-effective solution which need not entail high consumption costs if adequate summarization is provided.

7. Some loose ends

7.1. Non-existent moments

There are many "respectable" estimators which have no finite moments in small samples (e.g. LIML and FIML), yet which have been investigated by Monte Carlo methods. Possible approaches to this "tail area" problem are:

(a) pre-define an "acceptable" region for θ̂ and discard outliers;
(b) use non-parametric statistics [like medians, etc.; see Summers (1965)];
(c) investigate the existence of moments by varying N (and possibly θ);
(d) report only Ψ_T(·); and
(e) only derive the CV, and not the simulation.

Sargan (1982) has investigated the outcome which is likely to occur if conventional simulation means are used to estimate non-existent moments (with and without CVs) and found that N could be chosen as a function of T such that the Monte Carlo provided reasonable estimates of the Nagar approximations to the moments (which in turn help in understanding the Edgeworth approximation to the distribution function). Even so, some truncation bounds for deleting outliers seemed better than using none, supporting (a); no bounds could produce rather unreliable results, and non-parametric statistics (b) in effect operate by "discounting" discrepant results. The natural alternative is direct estimation of Ψ_T(·). In low-dimensional problems, numerical tabulation of Ψ_T(·) for very large N can be useful [see Orcutt and Winokur (1969)], but otherwise the function has to be estimated. Sargan (1976) considers using CVs to improve the accuracy of estimating Ψ_T(·), but this requires
that the exact distribution function of the CV is known; and Basmann et al. (1974) test various hypotheses about the forms of Ψ_T(·) in specific models. Improved simulation methods in this area would be of great value, but at present it is rarely feasible to attempt estimation of distribution functions which depend on many parameters.

7.2. Evaluating integrals

CVs for test powers would be a useful advance [closely related to estimating Ψ_T(·)]. These can be derived in certain static models, but their use depends on knowing Ψ_T(θ̂), not just its moments, and so test-power CVs are difficult to obtain in dynamic models. Experiments in which significance levels, rather than local alternatives, were changed also would be interesting and helpful in understanding the behaviour of tests.

Returning to the example in equations (1), (2) and (11), some cross-fertilization of ideas may prove fruitful. Kloek and van Dijk (1978) discuss Monte Carlo integration for economic estimation, and demonstrate its feasibility using importance functions. Also, van Dijk and Kloek (1980) discuss the choice of importance function and implement nine-dimensional integration. However, on the one hand, p(ν) also might be of use in estimating integrals corresponding to test powers even though the density function is unknown (e.g. by generating θ̂'s which are exactly distributed as the importance function, which in turn is chosen to be the asymptotic distribution of the test). On the other hand, naive estimators such as η̄ in (11) surely could be improved upon by using some functions of the {ν_i} as a CV: e.g. calculating η̄ from ν̄ and f({ν_i}) so as to correct for chance departures of ν̄ from E(ν) = ∫_a^b x p(x) dx, which will in general be known (although this ad hoc suggestion may not guarantee efficiency gains). A further problem which is equivalent to computing an integral is estimating the mean stochastic simulation path of a non-linear econometric system. Here, antithetic variates
switching {ε_t} ~ IN(0, Σ) to {−ε_t}, and creating w_t = Kε_t ~ IN(0, Σ) from Σ = KK′, seem to be of use. The efficiency gains depend on the extent of the non-linearity and the relative "explanatory" power of the strongly exogenous variables compared to the endogenous dynamics, varying from infinite efficiency for linear, static systems to zero for closed, dynamic models with squared errors [see Fair, Chapter 33 in this Handbook, and Mariano and Brown (1983); and, for an application, Calzolari (1979)]. Much work remains to be done on determining factors which influence such simulation efficiency (e.g. dependence of the data on such features as the sign and/or scale of the errors) and hence on deriving appropriate antithetic selections. Recently, Calzolari and Sterbenz (1981) have derived control variates from local linearization of non-linear systems and find very large efficiency gains over straightforward random replications for the Klein-Goldberger model. Manifestly, other applications are legion, since very many problems in econometrics are equivalent to computing integrals which in turn can be estimated by averages, and hence are susceptible to efficiency improvements. And, notwithstanding all the above arguments, when only a couple of points in Θ × 𝒯 are believed to be of empirical relevance, naive simulation "pilot" studies remain an easy and inexpensive means of learning about finite sample properties in complicated models or methods.

References

Bartlett, M. S. (1946) "On the Theoretical Specification and Sampling Properties of Autocorrelated Time Series", Journal of the Royal Statistical Society, B, 8, 27-41.
Basmann, R. L., D. H. Richardson and R. J. Rohr (1974) "Finite Sample Distributions Associated with Stochastic Difference Equations: Some Experimental Evidence", Econometrica, 42, 825-840.
Breusch, T. S. (1980) "Useful Invariance Results for Generalised Regression Models", Journal of Econometrics, 13, 321-340.
Calzolari, G. (1979) "Antithetic Variates to Estimate the
Simulation Bias in Non-linear Models", Economics Letters, 4, 323-328.
Calzolari, G. and F. Sterbenz (1981) "Efficient Computation of Reduced Form Variances in Nonlinear Econometric Models", IBM, Pisa, mimeo.
Campos, J. (1980) "The Form of Response Surface for a Simulation Standard Error in Monte Carlo Studies", unpublished paper, London School of Economics.
Chow, G. C. (1960) "Tests of Equality Between Sets of Coefficients in Two Linear Regressions", Econometrica, 28, 591-605.
Cochran, W. G. and G. M. Cox (1957) Experimental Designs. New York: John Wiley and Sons.
Conlisk, J. (1974) "Optimal Response Surface Design in Monte Carlo Sampling Experiments", Annals of Economic and Social Measurement, 3, 463-473.
Copas, J. B. (1966) "Monte Carlo Results for Estimation in a Stable Markov Time Series", Journal of the Royal Statistical Society, A, 129, 110-116.
Cox, D. R. (1970) Analysis of Binary Data. London: Chapman and Hall.
Cramér, H. (1946) Mathematical Methods of Statistics. Uppsala: Almqvist and Wicksells.
Croxton, F. E., D. J. Cowden and S. Klein (1968) Applied General Statistics, 3rd edn. London: Sir Isaac Pitman and Sons.
Davis, A. W. (1971) "Percentile Approximations for a Class of Likelihood Ratio Criteria", Biometrika, 58, 349-356.
Engle, R. F. (1982) "A General Approach to Lagrange Multiplier Model Diagnostics", Journal of Econometrics, 20, 83-104.
Evans, G. B. A. and N. E. Savin (1981) "Testing for Unit Roots: 1", Econometrica, 49, 753-779.
Evans, G. B. A. and N. E. Savin (1982) "Conflict Among the Criteria Revisited; The W, LR and LM Tests", Econometrica, 50, 737-748.
Goldberger, A. S. (1964) Econometric Theory. New York: John Wiley and Sons.
Golder, E. R. (1976) "Algorithm AS98: The Spectral Test for the Evaluation of Congruential Pseudo-Random Generators", Applied Statistics, 25, 173-180.
Goldfeld, S. M. and R. E. Quandt (1972) Nonlinear Methods in Econometrics. Amsterdam: North-Holland.
Gross, A. M. (1973) "A Monte Carlo Swindle for Estimators of Location", Journal of the Royal Statistical Society, C, 22, 347-353.
Hammersley, J. M. and D. C. Handscomb (1964) Monte Carlo Methods. London: Methuen.
Hendry, D. F. (1973) "On Asymptotic Theory and Finite Sample Experiments", Economica, 40, 210-217.
Hendry, D. F. (1976) "The Structure of Simultaneous Equations Estimators", Journal of Econometrics, 4, 51-88.
Hendry, D. F. (1977) "On the Time Series Approach to Econometric Model Building", in: C. A. Sims, ed., New Methods in Business Cycle Research. Federal Reserve Bank of Minneapolis, 183-208.
Hendry, D. F. (1979) "The Behaviour of Inconsistent Instrumental Variables Estimators in Dynamic Systems with Autocorrelated Errors", Journal of Econometrics, 9, 295-314.
Hendry, D. F. (1982) "A Reply to Professors Maasoumi and Phillips", Journal of Econometrics, 19, 203-213.
Hendry, D. F. and R. W. Harrison (1974) "Monte Carlo Methodology and the Finite Sample Behaviour of Ordinary and Two-Stage Least Squares", Journal of Econometrics, 2, 151-174.
Hendry, D. F. and F. Srba (1977) "The Properties of Autoregressive Instrumental Variables Estimators in Dynamic Systems", Econometrica, 45, 969-990.
Hendry, D. F. and P. K. Trivedi (1972) "Maximum Likelihood Estimation of Difference Equations with Moving Average Errors: A Simulation Study", The Review of Economic Studies, 39, 117-145.
Hurwicz, L. (1950) "Least Squares Bias in Time Series", in: T. C. Koopmans, ed., Statistical Inference in Dynamic Economic Models. Cowles Commission Monograph 10; New York: John Wiley and Sons, ch. 15.
Hylleberg, S. (1977) "A Comparative Study of Finite Sample Properties of Band Spectrum Regression Estimators", Journal of Econometrics, 5, 167-182.
Johnson, N. L. and S. Kotz (1970) Continuous Univariate Distributions - 1; Distributions in Statistics. New York: John Wiley and Sons.
Kakwani, N. C. (1967) "The Unbiasedness of Zellner's Seemingly Unrelated Regression Equations Estimator", Journal of the American Statistical Association, 62, 141-142.
Kendall, M. G. (1954) "Note on Bias in the Estimation of Autocorrelation", Biometrika,
41, 403-404.
Kendall, M. G. and A. Stuart (1958, 1961, 1966) The Advanced Theory of Statistics, Vols. 1-3. New York: Charles Griffin.
Kennedy, W. J., Jr. and J. E. Gentle (1980) Statistical Computing. New York: Marcel Dekker.
King, M. L. (1980) "Small Sample Properties of Econometric Estimators and Tests Assuming Elliptically Symmetric Disturbances", paper presented to the Fourth World Congress of the Econometric Society, France.
King, M. L. "A Note on the Burroughs B6700 Pseudo-Random Number Generator", New Zealand Statistician (forthcoming).
Kleijnen, J. P. C. (1974) Statistical Techniques in Simulation. New York: Marcel Dekker.
Kloek, T. and H. K. van Dijk (1978) "Bayesian Estimates of Equation System Parameters: An Application of Integration by Monte Carlo", Econometrica, 46, 1-19.
Maasoumi, E. and P. C. B. Phillips (1982) "On the Behaviour of Inconsistent Instrumental Variable Estimators", Journal of Econometrics, 19, 183-201.
Mariano, R. S. (1982) "Analytical Small-Sample Distribution Theory in Econometrics: The Simultaneous-Equations Case", International Economic Review, 23, 503-533.
Mariano, R. S. and B. W. Brown (1983) "Asymptotic Behaviour of Predictors in a Nonlinear Simultaneous System", International Economic Review, 24, 523-536.
Metropolis, N. and S. Ulam (1949) "The Monte Carlo Method", Journal of the American Statistical Association, 44, 335-341.
Mikhail, W. M. (1972) "Simulating the Small Sample Properties of Econometric Estimators", Journal of the American Statistical Association, 67, 620-624.
Mikhail, W. M. (1975) "A Comparative Monte Carlo Study of the Properties of Econometric Estimators", Journal of the American Statistical Association, 70, 91-104.
Mizon, G. E. and D. F. Hendry (1980) "An Empirical Application and Monte Carlo Analysis of Tests of Dynamic Specification", Review of Economic Studies, 47, 21-45.
Myers, R. H. and S. J. Lahoda (1975) "A Generalisation of the Response Surface Mean Square Error Criterion with a Specific Application to the Slope", Technometrics, 17, 481-486.
Nagar, A. L.
(1959) "The Bias and Moment Matrix of the General k-Class Estimators of the Parameters in Simultaneous Equations", Econometrica, 27, 575-595.
Naylor, T. H. (1971) Computer Simulation Experiments with Models of Economic Systems. New York: John Wiley and Sons.
Neave, H. R. (1973) "On Using the Box-Müller Transformation with Multiplicative Congruential Pseudo-Random Number Generators", Applied Statistics, 22, 92-97.
Nicholls, D. F., A. R. Pagan and R. D. Terrell (1975) "The Estimation and Use of Models with Moving Average Disturbance Terms: A Survey", International Economic Review, 16, 113-134.
O'Brien, R. J. (1979) "The Sensitivity of Econometric Estimators to Data Perturbations: II. Instrumental Variables", unpublished paper, Southampton University.
Orcutt, G. H. and D. Cochrane (1949) "A Sampling Study of the Merits of Autoregressive and Reduced Form Transformations in Regression Analysis", Journal of the American Statistical Association, 44, 356-372.
Orcutt, G. H. and H. S. Winokur (1969) "First Order Autoregression: Inference, Estimation and Prediction", Econometrica, 37, 1-14.
Phillips, P. C. B. (1977) "Approximations to Some Finite Sample Distributions Associated with a First Order Stochastic Difference Equation", Econometrica, 45, 463-485.
Phillips, P. C. B. (1977) "A General Theorem in the Theory of Asymptotic Expansions as Approximations to Finite Sample Distributions of Econometric Estimators", Econometrica, 45, 1517-1534.
Phillips, P. C. B. (1980) "Finite Sample Theory and the Distributions of Alternative Estimators of the Marginal Propensity to Consume", The Review of Economic Studies, 47, 183-224.
Rao, C. R. (1952) Advanced Statistical Methods in Biometric Research. New York: John Wiley and Sons.
Sargan, J. D. (1976) "Econometric Estimators and the Edgeworth Approximation", Econometrica, 44, 421-448.
Sargan, J. D. (1978) "The Estimation of Edgeworth Approximations by Monte Carlo Methods", unpublished paper, London School of Economics.
Sargan, J. D. (1982) "On Monte Carlo Estimates of
Moments That are Infinite", Advances in Econometrics, 1, 261-299.
Sawa, T. (1978) "The Exact Moments of the Least Squares Estimator for the Autoregressive Model", Journal of Econometrics, 8, 159-172.
Shenton, L. R. and W. L. Johnson (1965) "Moments of a Serial Correlation Coefficient", Journal of the Royal Statistical Society, B, 27, 308-320.
Sims, C. A. (1974) "Distributed Lags", in: M. D. Intriligator and D. A. Kendrick, eds., Frontiers of Quantitative Economics, Vol. II. Amsterdam: North-Holland, ch.
Smith, V. K. (1973) Monte Carlo Methods. London: D. C. Heath.
Sobol', I. M. (1974) The Monte Carlo Method. Popular Lectures in Mathematics, London: University of Chicago Press.
Sowey, E. R. (1972) "A Chronological and Classified Bibliography on Random Number Generation and Testing", International Statistical Review, 40, 355-371.
Sowey, E. R. (1973) "A Classified Bibliography of Monte Carlo Studies in Econometrics", Journal of Econometrics, 1, 371-395.
Student (1908) "On the Probable Error of a Mean", Biometrika, 6, 1-25.
Summers, R. (1965) "A Capital Intensive Approach to the Small Sample Properties of Various Simultaneous Equations Estimators", Econometrica, 33, 1-41.
Sylwestrowicz, J. D. (1981) "Applications of the ICL Distributed Array Processor in Econometric Computations", ICL Technical Journal, 280-286.
Teichroew, D. (1965) "A History of Distribution Sampling Prior to the Era of the Computer and its Relevance to Simulation", Journal of the American Statistical Association, 60, 27-49.
Tocher, K. D. (1963) The Art of Simulation. London: English Universities Press.
Tse, Y. K. (1979) "Finite Sample Approximations to the Distribution of the Autoregressive Coefficients in a First Order Stochastic Difference Equation with Exogenous Variables", unpublished paper, London School of Economics.
Van Dijk, H. K. and T. Kloek (1980) "Further Experience in Bayesian Analysis using Monte Carlo Integration", Journal of Econometrics, 14, 307-328.
White, H. (1980) "Using Least Squares to Approximate Unknown Regression Functions",
International Economic Review, 21, 149-170.
White, H. (1980) "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity", Econometrica, 48, 817-838.
White, J. S. (1961) "Asymptotic Expansions for the Mean and Variance of the Serial Correlation Coefficient", Biometrika, 48, 85-95.
Yule, G. U. (1926) "Why Do We Sometimes Get Nonsense-Correlations Between Time-Series? A Study in Sampling and the Nature of Time-Series", Journal of the Royal Statistical Society, 89, 1-64.
