Identification and the Reduced Form

Excerpt from Topics in Advanced Econometrics (pages 190–196).

Corollary 1. In the context of Theorem 4, the system as a whole is identified by the exclusion and normalization conditions $L^{*\prime}\operatorname{vec}(A^*) = \operatorname{vec}(H)$.

3.2.5 Identification and the Reduced Form

An alternative way in which the identification problem may be posed is the following: if the matrix of reduced form coefficients, $\Pi$, is given, can we determine the elements of (some of the columns of) $B^*$ and $C$, or are there infinitely many such matrices from which $\Pi$ may have arisen?

Indeed, a method of estimation called indirect least squares was inspired by exactly this sort of consideration. We shall examine this method at a later stage, but for the moment let us press on with an informal inquiry into the identification implications of the question just posed.

Since the reduced form is given by $\Pi = CB^{*-1}$, we may also write $\Pi B^* = C$. Concentrating on the first equation, we may write

$$
\begin{bmatrix}\Pi^0_{11} & \Pi^0_{12}\\ \Pi^0_{21} & \Pi^0_{22}\end{bmatrix}
\begin{bmatrix}\beta^0_{\cdot 1}\\ 0\end{bmatrix}
=
\begin{bmatrix}\gamma_{\cdot 1}\\ 0\end{bmatrix},
\tag{3.19}
$$

where $\Pi^0_{11}$ is $G_1\times(m_1+1)$, $\Pi^0_{12}$ is $G_1\times(m_1^*-1)$, $\Pi^0_{21}$ is $G_1^*\times(m_1+1)$, $\Pi^0_{22}$ is $G_1^*\times(m_1^*-1)$, and $G_1^*$, $m_1^*$ are the complements of $G_1$ and $m_1$,

respectively, i.e. $G_1^* = G - G_1$ and $m_1^* = m - m_1$. The relations of relevance (to the first equation) are

$$
\Pi^0_{11}\beta^0_{\cdot 1} = \gamma_{\cdot 1}, \qquad \Pi^0_{21}\beta^0_{\cdot 1} = 0,
\tag{3.20}
$$

and the question is: given $\Pi^0_{11}$ and $\Pi^0_{21}$, can we determine $\beta^0_{\cdot 1}$ and $\gamma_{\cdot 1}$?

Clearly, the system in Eq. (3.20) is decomposable; thus, if from the second subset we can determine $\beta^0_{\cdot 1}$, then $\gamma_{\cdot 1}$ is easily determined from the first subset. If no such determination can be made from the second subset then, evidently, we cannot determine $\gamma_{\cdot 1}$ either. We recall, from the discussion of the identification problem in a structural context, that identifiability is required only to within a multiplicative constant;¹ thus, if $\beta^0_{\cdot 1}$ can be determined to within a multiplicative constant from the second subset, the same will be true of $\gamma_{\cdot 1}$. For this to be the case, it must be true that the dimension of the column null space of $\Pi^0_{21}$ is unity which, by Lemma 1, implies the condition that the rank of that matrix must be $m_1$. We formalize this discussion below and we state the theorem in general terms, i.e. in terms of the i-th equation.
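The informal argument above can be spot-checked numerically: given $\Pi$, the one-dimensional column null space of $\Pi^0_{21}$ pins down $\beta^0_{\cdot 1}$ up to scale, and the first subset of Eq. (3.20) then yields $\gamma_{\cdot 1}$. The sketch below uses an assumed 3-equation structure; all coefficient values are hypothetical and serve only to illustrate the mechanics.

```python
import numpy as np

# Hypothetical system (m = 3, G = 4); all values are illustrative only.
# Structural form: Y B* = X C + U, reduced form coefficients: Pi = C @ inv(B*).
B_star = np.array([[ 1.0,  0.5, 0.2],
                   [-0.6,  1.0, 0.3],
                   [ 0.0, -0.4, 1.0]])   # y3 excluded from equation 1 -> zero entry
C = np.array([[1.0, 0.0, 0.5],
              [0.7, 0.3, 0.0],
              [0.0, 0.9, 0.4],           # x3, x4 excluded from equation 1
              [0.0, 0.2, 0.8]])
Pi = C @ np.linalg.inv(B_star)           # G x m reduced form matrix

# Partition for equation 1: included endogenous {y1, y2} (m1 = 1),
# included predetermined {x1, x2} (G1 = 2).
Pi_11 = Pi[np.ix_([0, 1], [0, 1])]       # G1 x (m1 + 1)
Pi_21 = Pi[np.ix_([2, 3], [0, 1])]       # (G - G1) x (m1 + 1)

# Rank condition: rank(Pi_21) = m1 = 1, so its null space is one-dimensional.
rank_Pi21 = np.linalg.matrix_rank(Pi_21)

# Recover beta^0_{.1} (up to scale) as the null vector of Pi_21, then gamma_{.1}.
_, _, Vt = np.linalg.svd(Pi_21)
beta0_1 = Vt[-1] / Vt[-1, 0]             # normalize the first coefficient to unity
gamma_1 = Pi_11 @ beta0_1
```

This is, in essence, the indirect least squares idea mentioned earlier: for an identified equation, the structural coefficients are recoverable from the reduced form alone.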

Theorem 5. In the context of the model in Eqs. (3.5) and (3.6), the following statements are true:

i. absent any normalization, the i-th equation is identified if and only if

$$\operatorname{rank}(L^{*\prime}_{2i}\Pi L^0_{1i}) = \operatorname{rank}(\Pi^0_{2i}) = m_i, \qquad i = 1, 2, \dots, m;$$

ii. given the usual normalization convention (i.e. assuming that it is possible to, and we do, set the coefficient of $y_{ti}$ in the i-th equation equal to unity), the i-th equation is identified if and only if

$$\operatorname{rank}(L^{*\prime}_{2i}\Pi L_{1i}) = \operatorname{rank}(\Pi_{2i}) = m_i, \qquad i = 1, 2, \dots, m.$$

Proof: We remind the reader that a complete discussion of selection and exclusion matrices may be found in Definitions 4, 5 and 6, of Chapter 1.

Moreover, writing the reduced form as $Y = X\Pi + V$, the matrices $\Pi L^0_{1i}$ and $\Pi L_{1i}$ represent the matrices of coefficients in the reduced form representation of $Y^0_i$ and $Y_i$, respectively. It is natural to define

$$\Pi^0_{2i} = L^{*\prime}_{2i}\Pi L^0_{1i}, \qquad \Pi_{2i} = L^{*\prime}_{2i}\Pi L_{1i}.$$

¹This is true absent any normalization convention. When a normalization convention is imposed, no ambiguity remains, i.e. the parameters in question are uniquely determined.

The first of these denotes the matrix of coefficients of the predetermined variables excluded from the i-th equation, i.e. the variables in $x_t$, as they appear in the reduced form representation of the variables in $Y^0_i$; the second denotes the corresponding matrix as it appears in the reduced form representation of the variables in $Y_i$.

To prove i, suppose $(B^*, C, \Sigma)$ and $(\bar B^*, \bar C, \bar\Sigma)$ are two observationally equivalent structures. Since the matrix of reduced form coefficients is invariant, we must have the relations

$$\Pi^0_i \beta^0_{\cdot i} = c_{\cdot i}, \qquad \Pi^0_i \bar\beta^0_{\cdot i} = \bar c_{\cdot i}, \qquad \Pi^0_i = \Pi L^0_{1i}.$$

Premultiplying each by $(L_{2i}, L^*_{2i})'$, we find

$$\Pi^0_{1i}\beta^0_{\cdot i} = \gamma_{\cdot i}, \qquad \Pi^0_{2i}\beta^0_{\cdot i} = 0,$$

$$\Pi^0_{1i}\bar\beta^0_{\cdot i} = \bar\gamma_{\cdot i}, \qquad \Pi^0_{2i}\bar\beta^0_{\cdot i} = 0,$$

in the obvious notation. Now, suppose $\operatorname{rank}(\Pi^0_{2i}) = m_i$. By Lemma 1, its column null space is of dimension one and, consequently, all solutions to the equation $\Pi^0_{2i}p = 0$ are of the form $p = c\beta^0_{\cdot i}$, where $\beta^0_{\cdot i}$ is some basic solution. Hence, the equation system above implies that if

$$(B^*, C, \Sigma), \qquad (\bar B^*, \bar C, \bar\Sigma)$$

are any two observationally equivalent structures, the i-th columns of the two sets of coefficient matrices are scalar multiples of each other.

Hence, the i-th equation is identified. Conversely, suppose the i-th equation is identified. Then, from the equation system above, we conclude that $\bar\beta^0_{\cdot i} = c\beta^0_{\cdot i}$ and $\bar\gamma_{\cdot i} = c\gamma_{\cdot i}$. Thus, by Lemma 1, we conclude that $\operatorname{rank}(\Pi^0_{2i}) = m_i$,

which concludes the proof of part i.

As for part ii, proceeding in exactly the same fashion and noting that, under normalization, $\beta^0_{\cdot i} = (1, -\beta'_{\cdot i})'$, we have, for two observationally equivalent structures, the relations

$$
\begin{bmatrix}\Pi_{1i} & I\\ \Pi_{2i} & 0\end{bmatrix}
\begin{bmatrix}\beta_{\cdot i}\\ \gamma_{\cdot i}\end{bmatrix} = \pi_{\cdot i},
\qquad
\begin{bmatrix}\Pi_{1i} & I\\ \Pi_{2i} & 0\end{bmatrix}
\begin{bmatrix}\bar\beta_{\cdot i}\\ \bar\gamma_{\cdot i}\end{bmatrix} = \pi_{\cdot i}.
\tag{3.21}
$$

Now, suppose $\operatorname{rank}(\Pi_{2i}) = m_i$; then, clearly, the matrix of the system(s) in Eq. (3.21) is of rank $m_i + G_i$, due to the fact that the identity matrix is of order $G_i$. Hence, the matrix of the system(s) is of full column rank and, consequently, $\bar\beta_{\cdot i} = \beta_{\cdot i}$, $\bar\gamma_{\cdot i} = \gamma_{\cdot i}$. This implies that all observationally equivalent structures are connected by transformations whose matrix representations have for their i-th column $e_{\cdot i}$, the i-th unit vector. But, by Definition 8, this means that the i-th equation is identified.

Conversely, suppose that the i-th equation is identified; by Definition 8, we must then have, in the two systems above, $\beta_{\cdot i} = \bar\beta_{\cdot i}$, $\gamma_{\cdot i} = \bar\gamma_{\cdot i}$. But

this implies that the matrix of the system is of full column rank, and it is easy to see² that this implies that $\operatorname{rank}(\Pi_{2i}) = m_i$.

q.e.d.
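Part ii of the proof turns on the fact that, when $\operatorname{rank}(\Pi_{2i}) = m_i$, the system in Eq. (3.21) has a full-column-rank matrix and hence a unique solution for the normalized coefficients. A minimal numerical sketch, under assumed coefficient values (hypothetical, not taken from the text):

```python
import numpy as np

# Hypothetical system (m = 3, G = 4); values are illustrative only.
B_star = np.array([[ 1.0,  0.5, 0.2],
                   [-0.6,  1.0, 0.3],
                   [ 0.0, -0.4, 1.0]])
C = np.array([[1.0, 0.0, 0.5],
              [0.7, 0.3, 0.0],
              [0.0, 0.9, 0.4],
              [0.0, 0.2, 0.8]])
Pi = C @ np.linalg.inv(B_star)

# Equation 1 normalized on y1: beta^0_{.1} = (1, -beta_{.1})', so beta_{.1} = 0.6.
# Pi_1i: rows = included predetermined {x1, x2}, column = remaining included
# endogenous variable y2; Pi_2i: rows = excluded predetermined {x3, x4}.
Pi_1i = Pi[np.ix_([0, 1], [1])]
Pi_2i = Pi[np.ix_([2, 3], [1])]
pi_i = Pi[:, 0]                          # reduced form column of y1

# Matrix of the system in Eq. (3.21); full column rank m_i + G_i = 1 + 2 = 3.
E = np.block([[Pi_1i, np.eye(2)],
              [Pi_2i, np.zeros((2, 2))]])
sol, *_ = np.linalg.lstsq(E, pi_i, rcond=None)
beta_i, gamma_i = sol[0], sol[1:]        # unique normalized coefficients
```

Because the system matrix has full column rank, the least-squares solution is exact and unique, which is precisely the content of the identification claim.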

Corollary 2. Let

$$S_i = (\Pi L_{1i},\ L_{2i}), \qquad E_i = \begin{bmatrix}\Pi_{1i} & I_{G_i}\\ \Pi_{2i} & 0\end{bmatrix}.$$

The following statements are true:

i. rank(Si) = rank(Ei) ;

ii. $\operatorname{rank}(E_i) = m_i + G_i$ if and only if $\operatorname{rank}(L^{*\prime}_{2i}\Pi L_{1i}) = m_i$, $i = 1, 2, \dots, m$.

Proof: We first note that

$$(L_{2i}, L^*_{2i})' S_i = \begin{bmatrix}L'_{2i}\Pi L_{1i} & L'_{2i}L_{2i}\\ L^{*\prime}_{2i}\Pi L_{1i} & L^{*\prime}_{2i}L_{2i}\end{bmatrix} = \begin{bmatrix}\Pi_{1i} & I_{G_i}\\ \Pi_{2i} & 0\end{bmatrix} = E_i$$

and, thus, $E_i$ is the matrix of the system(s) in the proof of part ii of Theorem 5, above. The proof of part i of the corollary is quite evident from the definition of $E_i$ and the fact that $(L_{2i}, L^*_{2i})$ is simply a permutation of the columns of an identity matrix of order $G$.

As for part ii, note that for $a = (a_1', a_2')'$, we have

$$E_i a = \begin{bmatrix}\Pi_{1i}a_1 + a_2\\ \Pi_{2i}a_1\end{bmatrix}.$$

Consequently, if $\operatorname{rank}(E_i) = m_i + G_i$, then the equation $E_i a = 0$ is satisfied only for $a = 0$; therefore, setting $a_2 = -\Pi_{1i}a_1$, the equation $\Pi_{2i}a_1 = 0$ is satisfied only for $a_1 = 0$. This implies that $\operatorname{rank}(\Pi_{2i}) = m_i$.

Conversely, suppose $\operatorname{rank}(\Pi_{2i}) = m_i$; then, in the condition $E_i a = 0$, we must have $a_1 = 0$. But this immediately implies that $a_2 = 0$ and, hence, that $E_i a = 0$ is satisfied only with $a = 0$. But this means that $\operatorname{rank}(E_i) = m_i + G_i$.

q.e.d.
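The content of Corollary 2 — that $\operatorname{rank}(E_i) = m_i + G_i$ exactly when $\Pi_{2i}$ has full column rank $m_i$ — lends itself to a direct numerical check. The sketch below uses a hypothetical block layout and randomly drawn values for $\Pi_{1i}$ (none of it taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def system_matrix(Pi_1i, Pi_2i):
    """Build E_i = [[Pi_1i, I], [Pi_2i, 0]] as in Corollary 2 (layout assumed)."""
    G1, mi = Pi_1i.shape
    G_star = Pi_2i.shape[0]
    return np.block([[Pi_1i, np.eye(G1)],
                     [Pi_2i, np.zeros((G_star, G1))]])

# Identified case: Pi_2i has full column rank m_i = 1,
# so rank(E_i) = m_i + G_i = 1 + 2 = 3.
Pi_1i = rng.normal(size=(2, 1))
Pi_2i = np.array([[0.5], [1.0]])
E = system_matrix(Pi_1i, Pi_2i)

# Unidentified case: Pi_2i = 0, so the identity block alone determines
# rank(E_i) = G_i = 2 < m_i + G_i.
E_bad = system_matrix(Pi_1i, np.zeros((2, 1)))
```

The identity block always contributes $G_i$ to the rank; whether the remaining $m_i$ columns are independent of it is governed entirely by $\Pi_{2i}$, just as the corollary asserts.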

Remark 9. The conditions we have derived in Theorem 5, and especially the condition $\operatorname{rank}(\Pi^0_{2i}) = m_i$, constitute by far the most popular characterization of identification in the literature. In many ways this is most unfortunate, since the reader is given no indication as to how such a condition could possibly be connected with what one would normally understand by the concept of identification. The latter, in turn, is typically put in the following setting:

²For clarity, see the corollary at the end of this proof.

in describing the working of an economic entity by means of a simultaneous equations system, one begins with the proposition that "everything depends on everything else", i.e. one begins in a state of near complete ignorance. One then "obtains" identification by placing restrictions on the elements of the coefficient matrices, $B^*$ and $C$. These restrictions reflect the economist's substantive knowledge of the entity under study. What "makes" one equation different, or distinguishable, from another is that it is subjected to a different set of restrictions than are other equations. The condition we have obtained earlier, viz.

$$\operatorname{rank}\begin{bmatrix}B_{22}\\ C_{22}\end{bmatrix} = m - 1,$$

makes this aspect quite clear, in contrast to the conditions obtained in Theorem 5.

We observe that, referring to the first equation, the matrix above is $(G + m - m_1 - G_1 - 1)\times(m - 1)$. We also note that $B_{22}$, $C_{22}$ are the matrices of coefficients, in the remainder of the system, of the dependent and predetermined variables, respectively, excluded from the first equation. The condition requires that the rank of the matrix be $m - 1$, which is the maximum possible. This, of course, implies the order condition

$$G + m - m_1 - G_1 - 1 \ge m - 1.$$

Notice that this means that in each equation we must impose a minimum of $m$ restrictions, including the normalization condition, a fact we had already noted in Remark 8, above. This is just another way of saying that we cannot "include too many" variables or, alternatively, that we must place a "sufficient" number of restrictions on each equation. The rank condition, however, states that, given the restrictions on the equation, we cannot place the "wrong kind" of, or "too many", restrictions on the remaining equations, in the sense that "too many" of the variables excluded from the first equation are also excluded from them as well, lest they be disturbingly similar to the first. This is the essential meaning of the rank condition. The economist, presumably, has considerable intuition, at the structural level, as to what variables may or may not be included in any given equation. He has, however, relatively little information as to his intuition's implications for the reduced form.

We have now obtained two characterizations of identification, one in terms of the structural form and another in terms of the reduced form. The reader should note that since identification is (separately) equivalent to these two conditions, they, in turn, should be equivalent to each other.

For pedagogical reasons, however, we offer an independent proof of equivalence. To reduce excessive discussion of a minor point, we confine our attention to the case where no normalization convention has been imposed.

Theorem 6. Consider the system in Eqs. (3.5) and (3.6) and the problem of the identification of the i-th equation by exclusion restrictions. The two characterizations

$$\operatorname{rank}(\Pi^0_{21}) = m_1, \qquad \operatorname{rank}\begin{bmatrix}B_{22}\\ C_{22}\end{bmatrix} = m - 1,$$

are equivalent.

Proof: In view of Eqs. (3.19) and (3.20), we can write (for the specific case i = 1)

$$
\begin{bmatrix}\Pi^0_{21} & \Pi^0_{22}\end{bmatrix}
\begin{bmatrix}\beta^0_{\cdot 1} & B_{12}\\ 0 & B_{22}\end{bmatrix}
=
\begin{bmatrix}0 & C_{22}\end{bmatrix}.
\tag{3.22}
$$

For notational convenience, put

$$A_1 = \begin{bmatrix}\Pi^0_{21} & \Pi^0_{22}\\ 0 & I\end{bmatrix}, \qquad A_2 = \begin{bmatrix}0 & C_{22}\\ 0 & B_{22}\end{bmatrix}, \qquad A_3 = \begin{bmatrix}B_{22}\\ C_{22}\end{bmatrix},$$

and note that, in view of Eq. (3.22) and the nonsingularity of $B^*$, we must have

$$\operatorname{rank}(A_1) = \operatorname{rank}(A_1 B^*) = \operatorname{rank}(A_2) = \operatorname{rank}(A_3). \tag{3.23}$$

Moreover, note that $A_1$ is $(m_1^* - 1 + G_1^*)\times m$ and, by Eq. (3.23), its column null space is of the same dimension as that of $A_2$. Thus, we have the characterization

$$V(A_1) = \{a : A_1 a = 0\} = \{a : a = (a_1', 0)',\ \Pi^0_{21}a_1 = 0\},$$

i.e. the null space of $A_1$ consists of vectors of the form $\phi^* = (\phi', 0)'$, such that $\phi \in V(\Pi^0_{21})$. Now suppose

$$\operatorname{rank}\begin{bmatrix}B_{22}\\ C_{22}\end{bmatrix} = m - 1; \tag{3.24}$$

we need to show that

$$\operatorname{rank}(\Pi^0_{21}) = m_1. \tag{3.25}$$

From Eq. (3.24) we conclude that the dimension of the column null space of $A_1$ is one; this implies that the dimension of the column null space of $\Pi^0_{21}$ is also one and, thus, Eq. (3.25) holds.

Conversely, suppose that Eq. (3.25) holds; we will show that Eq. (3.24) is true. Since Eq. (3.25) holds, the dimension of the column null space of $\Pi^0_{21}$ is unity and, by the previous argument, the same is true of $A_1$. Thus,³

$$\operatorname{rank}(A_1) = m - 1.$$

³This assumes implicitly, as does the entire discussion of Theorem 6, that the order condition for identification is satisfied, i.e. that $m_1^* + G_1^* - 1 \ge m - 1$.

The conclusion, then, follows immediately from Eq. (3.23).

q.e.d.
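The equivalence asserted by Theorem 6 can be spot-checked numerically: on a given system, the structural rank of $(B_{22}', C_{22}')'$ and the reduced form rank of $\Pi^0_{21}$ should both attain, or both fail to attain, their identification values. The sketch below uses an assumed 3-equation system (all coefficient values hypothetical):

```python
import numpy as np

# Hypothetical system (m = 3, G = 4); values are illustrative only.
B_star = np.array([[ 1.0,  0.5, 0.2],
                   [-0.6,  1.0, 0.3],
                   [ 0.0, -0.4, 1.0]])
C = np.array([[1.0, 0.0, 0.5],
              [0.7, 0.3, 0.0],
              [0.0, 0.9, 0.4],
              [0.0, 0.2, 0.8]])
Pi = C @ np.linalg.inv(B_star)

# Structural characterization: coefficients, in equations 2 and 3, of the
# variables excluded from equation 1 (namely y3 and x3, x4).
B22 = B_star[np.ix_([2], [1, 2])]
C22 = C[np.ix_([2, 3], [1, 2])]
structural_rank = np.linalg.matrix_rank(np.vstack([B22, C22]))  # want m - 1 = 2

# Reduced form characterization: rank of Pi^0_21, the block of Pi with rows
# the excluded predetermined and columns the included endogenous variables.
reduced_rank = np.linalg.matrix_rank(Pi[np.ix_([2, 3], [0, 1])])  # want m1 = 1
```

Here both conditions hold, so equation 1 is identified under either characterization, as the theorem requires.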

