6.1.5 Estimators in Structurally Misspecified Models
(6.71)

where, for clarity, we have added the superscript 0 in order to distinguish the true model from the one we specify for estimation; the latter is the one we employed in the previous examples.

It is clear that the resulting "quasi"-ML or, more appropriately, pseudo-ML estimators would be inconsistent. The objective here is to determine, if possible, the inconsistency involved and the limiting distribution of the properly centered pseudo-ML estimators. It is also evident that, even though we deal with a specific small model, the procedure we shall develop is of general applicability.
All steps taken in the arguments of Examples 1, 2, and 3 remain valid up to the point of arguing about the consistency and limiting distribution of the resulting estimators. Since our "working" model is given by Eq. (6.8), rather than Eq. (6.71), the "LF" is given by Eq. (6.11), and the function to be minimized is given by Eq. (6.13). To determine the inconsistency involved we have two options. First, we can find the (a.c.) limit of the function $H_T$, as in Eq. (6.14), and then obtain the values of $a_1$ and $b_1$ that correspond to the global minimum of that function. This, however, is a most cumbersome approach: in the face of misspecification we do not know that the function $K$, employed extensively in the examples above, is necessarily a contrast function (in fact it is not), or that it is nonnegative; most cumbersome of all, finding its global minimum is inordinately difficult.
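The first option can be visualized with a small numerical sketch. The model pair below (lognormal data fitted by an exponential density) is purely hypothetical and is not the model of this section; it only illustrates locating the global minimum of the a.c. limit of the criterion, i.e., the pseudo-true parameter value toward which the pseudo-ML estimator tends.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical illustration (not the book's model): the data are lognormal,
# but we fit an exponential density f(y; lam) = lam * exp(-lam * y).
# The a.c. limit of the sample criterion is the expected negative
# log-likelihood  K(lam) = -log(lam) + lam * E[Y], whose global minimizer
# is the pseudo-true value  lam* = 1 / E[Y].

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)   # "true" DGP

def H(lam, y):
    # sample analogue of the limit criterion (negative mean log-likelihood)
    return -np.log(lam) + lam * y.mean()

res = minimize_scalar(lambda lam: H(lam, y), bounds=(1e-3, 10.0), method="bounded")
lam_star = 1.0 / y.mean()   # theoretical pseudo-true value for this pair
```

Even in this trivially simple pair the minimizer must be located numerically once the fitted family is less tractable, which is exactly why the text pursues the second option instead.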
Second, we may proceed to Eq. (6.50) and see how misspecification intrudes in the derivation of the limiting distribution of the pseudo-estimators.

If we follow this approach, the use of the function $H_T$ is completely superfluous; thus, we revert to the "LF". Partially maximizing with respect to $\Sigma$, we find
\[
\hat\Sigma = S(\theta), \qquad
L_T(\theta) = -[\ln(2\pi)+1] - \frac{1}{2}\ln|S(\theta)| - \frac{1}{T}\sum_{t=1}^{T}\ln y_{t1},
\]

where $L_T$ is the concentrated "LF". Differentiating with respect to $\theta = (a_1, b_1)'$ yields

\[
\left(\frac{\partial L_T}{\partial\theta}\right)' = -\frac{1}{2|S(\theta)|}
\begin{pmatrix}
s_{22}\,(a_1 + \overline{\ln y_1}) - s_{21}\,(b_1\bar y_1 + \bar y_2)\\[4pt]
s_{11}\,(b_1 S_{y_1y_1} + S_{y_1y_2}) - s_{12}\,(a_1\bar y_1 + S_{y_1\ln y_1})
\end{pmatrix}. \tag{6.72}
\]

The equations above are to be solved to obtain the pseudo-ML estimator, say $\hat\theta$; the equations in question are nonlinear in the unknown parameters, and the solution can be obtained only by iteration. If we now proceed as in Eq. (6.50) we have a problem: the mean value theorem we had applied therein "works" because the ML estimator is consistent. Thus, we have the assurance that when we evaluate the Hessian of the LF at a point, say $\bar\theta$, intermediate between $\hat\theta$ and $\theta^0$, the Hessian so evaluated converges, at least in probability, to the (negative of the inverse of the) Fisher information matrix, evaluated at the true parameter point. Since here we are dealing with an inconsistent estimator, the same approach will not lead to the solution we seek. In misspecified models, the analog of expanding the gradient of the LF about the true parameter point in correctly specified models is to expand about the probability limit of the pseudo-ML estimator. Thus, let $\theta^* = \theta^0 + \delta$ be this probability limit, and consider the expansion$^{13}$
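Since the score equations admit only an iterative solution in practice, the following sketch computes a pseudo-ML estimate by numerical maximization of a concentrated quasi-"LF". The data-generating process and working model are hypothetical (a Gaussian regression with an omitted quadratic term), not those of the text; the comments give the pseudo-true values toward which the iteration's solution tends.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical illustration: the true DGP is y = 1 + 0.5*x + 0.3*x**2 + e,
# but the working model omits the quadratic term, so the Gaussian "LF"
# below is a pseudo-likelihood; its maximizer is the pseudo-ML estimator.

rng = np.random.default_rng(1)
n = 5000
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0, 0.2, size=n)

def neg_quasi_loglik(theta):
    a, b = theta
    u = y - a - b * x               # residuals of the misspecified model
    s2 = u @ u / n                  # variance concentrated out, as in the text
    return 0.5 * n * np.log(s2)     # concentrated negative "LF" (up to constants)

res = minimize(neg_quasi_loglik, x0=[0.0, 0.0], method="BFGS")
a_hat, b_hat = res.x
# Pseudo-true values here are the linear-projection coefficients:
# a* = 1 + 0.3*E[x^2] = 1.4  (x uniform on [-2, 2], so E[x^2] = 4/3),
# b* = 0.5, since x and x^2 are uncorrelated under this symmetric design.
```

The iteration converges not to the true intercept 1 but to the pseudo-true value 1.4, which is the inconsistency $\delta$ the text sets out to characterize.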
\[
\frac{\partial L_T}{\partial\theta}(\hat\theta) = \frac{\partial L_T}{\partial\theta}(\theta^*)
+ \frac{\partial^2 L_T}{\partial\theta\,\partial\theta'}(\theta^{**})\,(\hat\theta - \theta^0 - \delta),
\]

where $|\theta^{**} - \theta^*| \le |\hat\theta - \theta^*|$. It is clear that the Hessian in Eq. (6.72) converges, at least in probability, to the limit of the expectation of the Hessian evaluated at $\theta^*$, which is a well defined entity. Moreover, rewriting Eq. (6.67), we have
(6.73)

The equation above displays the nature of the problem in misspecified models as being essentially similar to that in correctly specified models. The major difference is simply the point at which the mean value theorem expansion takes place. If
\[
L_T(\theta) \xrightarrow{\text{a.c.}} L^*(\theta),
\]

uniformly in $\theta$, it follows, as in the discussion in Chapter 5 relating to MC estimators, that

\[
\sup_{\theta\in\Theta} L_T(\theta) \xrightarrow{\text{a.c.}} \sup_{\theta\in\Theta} L^*(\theta).
\]
$^{13}$ When we use the notation $\theta^0$ in this context, we mean the restriction of the true parameter point to those parameters that actually appear in the model; we exclude those that correspond to omitted variables.
350 6. Topics in NLSE Theory
Since we operate on the assertion that the global maximum can be found by differentiation, by assuming that $\theta^0$ is an interior point of $\Theta$,$^{14}$ it follows that

\[
\frac{\partial L^*}{\partial\theta}(\theta^*) = 0, \quad\text{and moreover}\quad
\frac{\partial L_T}{\partial\theta}(\theta^*) \xrightarrow{\text{a.c.}} 0. \tag{6.74}
\]

It remains now to find the probability limit of the Hessian, evaluated at $\theta^*$, and the limiting distribution of the vector in the right member of Eq.
(6.73). We note, however, that the exact derivation of the limit of the Hessian is unnecessary, since the latter can be consistently estimated simply as the Hessian evaluated at the pseudo-ML estimate! Next, we deal with the limiting distribution of the vector in question, where
\[
\left(\frac{\partial L_T}{\partial\theta}(\theta^*)\right)' = B(\theta^*)\,\frac{1}{T}\sum_{t=1}^{T}\zeta_{t\cdot}, \qquad
B(\theta) = -\frac{1}{2|S(\theta)|}
\begin{pmatrix}
s_{22} & -s_{21} & 0 & 0\\
0 & 0 & -s_{12} & s_{11}
\end{pmatrix}, \tag{6.75}
\]

where $\theta^* = (a_1^*, b_1^*)'$,

\[
\zeta_{t\cdot} =
\begin{pmatrix}
a_1^* + \ln y_{t1}\\
b_1^* y_{t1} + y_{t2}\\
a_1^* y_{t1} + y_{t1}\ln y_{t1}\\
b_1^* y_{t1}^2 + y_{t1} y_{t2}
\end{pmatrix},
\qquad
B(\hat\theta) \xrightarrow{\text{a.c.}} B(\theta^*),
\qquad
S_{y_iy_j} = \frac{1}{T}\sum_{t=1}^{T} y_{ti} y_{tj}, \quad i,j = 1,2,
\]

and $s_{ij}$ is the $(i,j)$ element of $S(\theta^*)$. By construction, the right member of Eq. (6.75) represents the (asymptotic equivalent of the) gradient of the pseudo-LF as proportional to a linear transformation of $(1/\sqrt{T})\sum_{t=1}^{T}\zeta_{t\cdot}$, the latter being a sequence of independent random vectors, which on the surface do not appear to have mean zero. On the other hand, from Eq.
$^{14}$ Actually, to this we must add uniqueness of the global maximum.
(6.74), we find that

\[
0 = \frac{\partial L^*}{\partial\theta}(\theta^*) = B(\theta^*)\lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}E(\zeta_{t\cdot}). \tag{6.76}
\]

Since the series of Eq. (6.76) converges, it follows that the tail vanishes.$^{15}$ Consequently, we may write the second representation of Eq. (6.75) as
(6.77)

where

\[
v_{0t} = e^{-a_2^0 x_t}, \quad v_{1t} = x_t e^{-a_2^0 x_t}, \quad v_{2t} = e^{-2a_2^0 x_t}, \quad
v_{3t} = x_t e^{-2a_2^0 x_t}, \quad v_{4t} = x_t^2 e^{-2a_2^0 x_t}, \quad v_{5t} = e^{-3a_2^0 x_t}, \;\ldots \tag{6.78}
\]

It may be verified that the $\zeta_{t\cdot}$, $t \ge 1$, are a sequence of independent random vectors with mean zero and a finite covariance matrix, $\Xi_t$; moreover, they obey the Lindeberg condition, provided
\[
\lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T} v_{kt} = \bar v_k \tag{6.79}
\]

are finite entities, for as many $k$ as the problem requires.$^{16}$ With this
$^{15}$ Note, in addition, that not only

\[
\frac{\partial L_T}{\partial\theta}(\hat\theta) = 0, \quad\text{but also}\quad
T\,\frac{\partial L_T}{\partial\theta}(\hat\theta) = 0,
\]

i.e., it is not division by $T$ that renders the entity small.
$^{16}$ In this problem we require that, at least, $k = 7$.
proviso, we conclude that

\[
\psi_{11} = \tfrac{1}{4}\,(s^{11}, s^{12})\,\Xi^{(1)}(s^{11}, s^{12})', \qquad
\psi_{12} = \tfrac{1}{4}\,(s^{11}, s^{12})\,\Xi^{(2)}(s^{21}, s^{22})' \;(= \psi_{21}), \qquad
\psi_{22} = \tfrac{1}{4}\,(s^{21}, s^{22})\,\Xi^{(3)}(s^{21}, s^{22})', \tag{6.80}
\]

where $s^{ij}$ is the $(i,j)$ element of $S(\theta^*)^{-1}$ and the $\Xi^{(i)}$ are the corresponding $2\times 2$ blocks of

\[
\Xi = \lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\operatorname{Cov}(\zeta_{t\cdot}).
\]

Consequently,

(6.81)
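In applied work, a covariance of this general form is estimated by combining the Hessian evaluated at the pseudo-ML estimate with the outer product of the per-observation scores (the "sandwich" form). The sketch below uses a hypothetical misspecified linear regression, not the formulas of this section; there the pseudo-ML estimator is OLS and the sandwich reduces to the familiar White heteroskedasticity-robust covariance.

```python
import numpy as np

# Sketch of the sandwich principle: under misspecification the covariance of
# the pseudo-ML estimator has the form H^{-1} V H^{-1}, with H the (limit of
# the) Hessian evaluated at the pseudo-ML estimate and V the covariance of
# the scores.  Hypothetical data: a quadratic term is omitted from the fit.

rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(-2, 2, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0, 0.2, n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # pseudo-ML = OLS for this working model
u = y - X @ beta_hat

H = X.T @ X / n                                  # Hessian (up to sign/scale) at beta_hat
V = (X * u[:, None]**2).T @ X / n                # outer product of per-observation scores
sandwich = np.linalg.inv(H) @ V @ np.linalg.inv(H) / n   # estimated Cov(beta_hat)
naive = np.linalg.inv(X.T @ X) * (u @ u / n)     # covariance ignoring misspecification
```

Comparing `sandwich` with `naive` shows how far the covariance computed under the (false) assumption of correct specification can stray.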
Remark 5. The practical significance of the result above is rather limited; on the other hand, it allows us to gain insight into the consequences of misspecification in the GNLSEM. With a number of assumptions regarding the magnitude of the $v$'s and the coefficients of omitted variables, we might even produce an approximation to the proper covariance matrix of estimators in misspecified models. This may be of some help in assessing the sensitivity of tests of significance to misspecification.
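The point of Remark 5 about tests of significance can be illustrated by simulation. In the hypothetical design below (again not the model of the text), the slope of the included regressor is truly zero, but an omitted quadratic term makes the naive ML covariance wrong; a robust (sandwich) covariance restores approximately correct rejection rates.

```python
import numpy as np

# Monte Carlo sketch: under H0 the slope of x is 0, but x^2 is omitted from
# the fitted model, so the naive t-test based on the ML covariance rejects
# too often; the sandwich-based t-test has approximately nominal size.

rng = np.random.default_rng(3)
reps, n = 500, 400
t_naive, t_sand = [], []
for _ in range(reps):
    x = rng.uniform(-2, 2, n)
    X = np.column_stack([np.ones(n), x])
    y = 1.0 + 0.3 * x**2 + rng.normal(0, 0.2, n)   # slope of x is truly zero
    b = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ b
    naive = np.linalg.inv(X.T @ X) * (u @ u / (n - 2))
    H = X.T @ X
    V = (X * u[:, None]**2).T @ X
    sand = np.linalg.inv(H) @ V @ np.linalg.inv(H)
    t_naive.append(b[1] / np.sqrt(naive[1, 1]))
    t_sand.append(b[1] / np.sqrt(sand[1, 1]))

size_naive = np.mean(np.abs(t_naive) > 1.96)   # empirical size of nominal 5% test
size_sand = np.mean(np.abs(t_sand) > 1.96)
```

Under this design the naive test over-rejects noticeably, which is precisely the sensitivity of significance testing to misspecification that the remark cautions about.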