A Course in Mathematical Statistics, Part 6

[...] each one of the $\binom{n}{t}$ different ways in which the $t$ successes can occur. Then, if there are values of $\theta$ for which particular occurrences of the $t$ successes can happen with higher probability than others, we will say that knowledge of the positions where the $t$ successes occurred is more informative about $\theta$ than simply knowledge of the total number of successes $t$. If, on the other hand, all possible outcomes, given the total number of successes $t$, have the same probability of occurrence, then clearly the positions where the $t$ successes occurred are entirely irrelevant, and the total number of successes $t$ provides all possible information about $\theta$. In the present case, we have

$$P_\theta(X_1 = x_1, \ldots, X_n = x_n \mid T = t) = \frac{P_\theta(X_1 = x_1, \ldots, X_n = x_n,\ T = t)}{P_\theta(T = t)} = \frac{P_\theta(X_1 = x_1, \ldots, X_n = x_n)}{P_\theta(T = t)} \quad \text{if } x_1 + \cdots + x_n = t,$$

and zero otherwise, and this is equal to

$$\frac{\theta^{x_1}(1-\theta)^{1-x_1} \cdots \theta^{x_n}(1-\theta)^{1-x_n}}{\binom{n}{t}\,\theta^{t}(1-\theta)^{n-t}} = \frac{\theta^{t}(1-\theta)^{n-t}}{\binom{n}{t}\,\theta^{t}(1-\theta)^{n-t}} = \frac{1}{\binom{n}{t}}$$

if $x_1 + \cdots + x_n = t$ and zero otherwise. Thus, we found that for all $x_1, \ldots, x_n$ such that $x_j = 0$ or $1$, $j = 1, \ldots, n$, and $\sum_{j=1}^n x_j = t$,

$$P_\theta(X_1 = x_1, \ldots, X_n = x_n \mid T = t) = 1\Big/\binom{n}{t},$$

independent of $\theta$, and therefore the total number of successes $t$ alone provides all possible information about $\theta$. This example motivates the following definition of a sufficient statistic.

DEFINITION 1 Let $X_j$, $j = 1, \ldots, n$, be i.i.d. r.v.'s with p.d.f. $f(\cdot; \boldsymbol\theta)$, $\boldsymbol\theta = (\theta_1, \ldots, \theta_r)' \in \Omega \subseteq \mathbb{R}^r$, and let $T = (T_1, \ldots, T_m)'$, where

$$T_j = T_j(X_1, \ldots, X_n), \quad j = 1, \ldots, m,$$

are statistics. We say that $T$ is an $m$-dimensional sufficient statistic for the family $\mathcal{F} = \{f(\cdot; \boldsymbol\theta);\ \boldsymbol\theta \in \Omega\}$, or for the parameter $\boldsymbol\theta$, if the conditional distribution of $(X_1, \ldots, X_n)'$, given $T = t$, is independent of $\boldsymbol\theta$ for all values of $t$ (actually, for almost all (a.a.) $t$, that is, except perhaps for a set $N$ in $\mathbb{R}^m$ of values of $t$ such that $P_{\boldsymbol\theta}(T \in N) = 0$ for all $\boldsymbol\theta \in \Omega$, where $P_{\boldsymbol\theta}$ denotes the probability function associated with the p.d.f. $f(\cdot; \boldsymbol\theta)$).

REMARK 1 Thus, $T$ being a sufficient statistic for $\boldsymbol\theta$ implies that for every (measurable) set $A$ in $\mathbb{R}^n$, $P_{\boldsymbol\theta}[(X_1, \ldots, X_n)' \in A \mid T = t]$ is independent of $\boldsymbol\theta$ for a.a. $t$. Actually, more is true. Namely, if $T^* = (T^*_1, \ldots, T^*_k)'$ is any $k$-dimensional statistic, then the conditional distribution of $T^*$, given $T = t$, is independent of $\boldsymbol\theta$ for a.a. $t$. To see this, let $B$ be any (measurable) set in $\mathbb{R}^k$ and let $A = T^{*-1}(B)$. Then

$$P_{\boldsymbol\theta}(T^* \in B \mid T = t) = P_{\boldsymbol\theta}\big[(X_1, \ldots, X_n)' \in A \mid T = t\big],$$

and this is independent of $\boldsymbol\theta$ for a.a. $t$. We finally remark that $X = (X_1, \ldots, X_n)'$ is always a sufficient statistic for $\boldsymbol\theta$.
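The computation just carried out is easy to confirm by brute force. The following Python sketch (an illustration added for this excerpt; the `conditional_prob` helper and the chosen values of $n$, $t$, and $\theta$ are ours, not the book's) enumerates every arrangement of $t = 3$ successes in a sample of size $n = 5$ and checks that, given $T = t$, each arrangement has conditional probability $1/\binom{n}{t}$, whatever the value of $\theta$.

```python
# Minimal numerical check of the motivating Bernoulli example (a sketch,
# not part of the original text).
from itertools import product
from math import comb

def conditional_prob(x, theta):
    """P_theta(X = x | T = sum(x)) for a 0/1 sample x."""
    n, t = len(x), sum(x)
    joint = theta**t * (1 - theta)**(n - t)                  # P_theta(X1=x1,...,Xn=xn)
    marginal = comb(n, t) * theta**t * (1 - theta)**(n - t)  # P_theta(T = t), Binomial
    return joint / marginal

n, t = 5, 3
for theta in (0.2, 0.5, 0.9):
    probs = {round(conditional_prob(x, theta), 12)
             for x in product((0, 1), repeat=n) if sum(x) == t}
    assert probs == {round(1 / comb(n, t), 12)}  # 1/C(5,3) = 0.1, free of theta
print("every arrangement has conditional probability 1/C(n,t), for each theta")
```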
Clearly, Definition 1 above is not convenient for identifying a sufficient statistic. This can be done quite easily by means of the following theorem.

THEOREM 1 (Fisher–Neyman factorization theorem) Let $X_1, \ldots, X_n$ be i.i.d. r.v.'s with p.d.f. $f(\cdot; \boldsymbol\theta)$, $\boldsymbol\theta = (\theta_1, \ldots, \theta_r)' \in \Omega \subseteq \mathbb{R}^r$. An $m$-dimensional statistic

$$T = T(X_1, \ldots, X_n) = \big(T_1(X_1, \ldots, X_n), \ldots, T_m(X_1, \ldots, X_n)\big)'$$

is sufficient for $\boldsymbol\theta$ if and only if the joint p.d.f. of $X_1, \ldots, X_n$ factors as follows:

$$f(x_1, \ldots, x_n; \boldsymbol\theta) = g\big[T(x_1, \ldots, x_n); \boldsymbol\theta\big]\, h(x_1, \ldots, x_n),$$

where $g$ depends on $x_1, \ldots, x_n$ only through $T$, and $h$ is (entirely) independent of $\boldsymbol\theta$.

PROOF The proof is given separately for the discrete and the continuous case.

Discrete case: In the course of this proof, we are going to use the notation $T(x_1, \ldots, x_n) = t$. In connection with this, it should be pointed out at the outset that, by doing so, we restrict attention only to those $x_1, \ldots, x_n$ for which $T(x_1, \ldots, x_n) = t$. Assume that the factorization holds, that is,

$$f(x_1, \ldots, x_n; \boldsymbol\theta) = g\big[T(x_1, \ldots, x_n); \boldsymbol\theta\big]\, h(x_1, \ldots, x_n),$$

with $g$ and $h$ as described in the theorem. Clearly, it suffices to restrict attention to those $t$'s for which $P_{\boldsymbol\theta}(T = t) > 0$. Next,

$$P_{\boldsymbol\theta}(T = t) = P_{\boldsymbol\theta}\big[T(X_1, \ldots, X_n) = t\big] = \sum P_{\boldsymbol\theta}(X_1 = x'_1, \ldots, X_n = x'_n),$$

where the summation extends over all $(x'_1, \ldots, x'_n)'$ for which $T(x'_1, \ldots, x'_n) = t$. Thus

$$P_{\boldsymbol\theta}(T = t) = \sum f(x'_1; \boldsymbol\theta) \cdots f(x'_n; \boldsymbol\theta) = \sum g(t; \boldsymbol\theta)\, h(x'_1, \ldots, x'_n) = g(t; \boldsymbol\theta) \sum h(x'_1, \ldots, x'_n).$$

Hence

$$P_{\boldsymbol\theta}(X_1 = x_1, \ldots, X_n = x_n \mid T = t) = \frac{P_{\boldsymbol\theta}(X_1 = x_1, \ldots, X_n = x_n,\ T = t)}{P_{\boldsymbol\theta}(T = t)} = \frac{g(t; \boldsymbol\theta)\, h(x_1, \ldots, x_n)}{g(t; \boldsymbol\theta) \sum h(x'_1, \ldots, x'_n)} = \frac{h(x_1, \ldots, x_n)}{\sum h(x'_1, \ldots, x'_n)},$$

and this is independent of $\boldsymbol\theta$.

Now, let $T$ be sufficient for $\boldsymbol\theta$. Then $P_{\boldsymbol\theta}(X_1 = x_1, \ldots, X_n = x_n \mid T = t)$ is independent of $\boldsymbol\theta$; call it $k[x_1, \ldots, x_n; T(x_1, \ldots, x_n)]$. Then

$$P_{\boldsymbol\theta}(X_1 = x_1, \ldots, X_n = x_n) = P_{\boldsymbol\theta}(T = t)\, k\big[x_1, \ldots, x_n; T(x_1, \ldots, x_n)\big],$$

that is,

$$f(x_1; \boldsymbol\theta) \cdots f(x_n; \boldsymbol\theta) = P_{\boldsymbol\theta}(T = t)\, k\big[x_1, \ldots, x_n; T(x_1, \ldots, x_n)\big].$$

Setting

$$g\big[T(x_1, \ldots, x_n); \boldsymbol\theta\big] = P_{\boldsymbol\theta}(T = t) \quad \text{and} \quad h(x_1, \ldots, x_n) = k\big[x_1, \ldots, x_n; T(x_1, \ldots, x_n)\big],$$

we get

$$f(x_1; \boldsymbol\theta) \cdots f(x_n; \boldsymbol\theta) = g\big[T(x_1, \ldots, x_n); \boldsymbol\theta\big]\, h(x_1, \ldots, x_n),$$

as was to be seen.
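Before passing to the continuous case, the discrete-case conclusion $P_{\boldsymbol\theta}(X = x \mid T = t) = h(x)/\sum h(x')$ can be checked numerically. Here is a hedged sketch for i.i.d. Poisson($\theta$) observations, where $f(x_1, \ldots, x_n; \theta) = e^{-n\theta}\theta^{t}\big/\prod_j x_j!$ factors with $g(t; \theta) = e^{-n\theta}\theta^{t}$ and $h(x) = 1/\prod_j x_j!$; the Poisson model is our choice of example, not the text's.

```python
# Sketch: for i.i.d. Poisson, the conditional p.m.f. given the sum equals
# h(x)/sum h(x'), i.e. Multinomial(t; 1/n, ..., 1/n) -- theta never appears.
from itertools import product
from math import factorial, prod

def h(x):
    # the theta-free factor of the Poisson joint p.m.f.
    return 1 / prod(factorial(xj) for xj in x)

n, t = 3, 4
support = [x for x in product(range(t + 1), repeat=n) if sum(x) == t]
total_h = sum(h(x) for x in support)      # equals n**t / t! by the multinomial theorem
for x in support:
    cond = h(x) / total_h                 # P(X = x | T = t), no theta anywhere
    target = factorial(t) / (prod(factorial(xj) for xj in x) * n**t)
    assert abs(cond - target) < 1e-12     # Multinomial(t; 1/n, ..., 1/n)
print("conditional p.m.f. given the sum never involves theta")
```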
Continuous case: The proof in this case is carried out under some further regularity conditions (and is not as rigorous as that of the discrete case). It should be made clear, however, that the theorem is true as stated. A proof without the regularity conditions mentioned above involves deeper concepts of measure theory, the knowledge of which is not assumed here. From Remark 1, it follows that $m \le n$. Then set $T_j = T_j(X_1, \ldots, X_n)$, $j = 1, \ldots, m$, and assume that there exist other $n - m$ statistics $T_j = T_j(X_1, \ldots, X_n)$, $j = m + 1, \ldots, n$, such that the transformation

$$t_j = T_j(x_1, \ldots, x_n), \quad j = 1, \ldots, n,$$

is invertible, so that

$$x_j = x_j(t, t_{m+1}, \ldots, t_n), \quad j = 1, \ldots, n, \qquad t = (t_1, \ldots, t_m)'.$$

It is also assumed that the partial derivatives of $x_j$ with respect to $t_i$, $i, j = 1, \ldots, n$, exist and are continuous, and that the respective Jacobian $J$ (which is independent of $\boldsymbol\theta$) is different from $0$. Let first

$$f(x_1; \boldsymbol\theta) \cdots f(x_n; \boldsymbol\theta) = g\big[T(x_1, \ldots, x_n); \boldsymbol\theta\big]\, h(x_1, \ldots, x_n).$$

Then

$$f_{T, T_{m+1}, \ldots, T_n}(t, t_{m+1}, \ldots, t_n; \boldsymbol\theta) = g(t; \boldsymbol\theta)\, h\big[x_1(t, t_{m+1}, \ldots, t_n), \ldots, x_n(t, t_{m+1}, \ldots, t_n)\big]\, |J| = g(t; \boldsymbol\theta)\, h^*(t, t_{m+1}, \ldots, t_n),$$

where we set

$$h^*(t, t_{m+1}, \ldots, t_n) = h\big[x_1(t, t_{m+1}, \ldots, t_n), \ldots, x_n(t, t_{m+1}, \ldots, t_n)\big]\, |J|.$$

Hence

$$f_T(t; \boldsymbol\theta) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(t; \boldsymbol\theta)\, h^*(t, t_{m+1}, \ldots, t_n)\, dt_{m+1} \cdots dt_n = g(t; \boldsymbol\theta)\, h^{**}(t),$$

where

$$h^{**}(t) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} h^*(t, t_{m+1}, \ldots, t_n)\, dt_{m+1} \cdots dt_n.$$

That is, $f_T(t; \boldsymbol\theta) = g(t; \boldsymbol\theta)\, h^{**}(t)$, and hence

$$f(t_{m+1}, \ldots, t_n \mid t; \boldsymbol\theta) = \frac{g(t; \boldsymbol\theta)\, h^*(t, t_{m+1}, \ldots, t_n)}{g(t; \boldsymbol\theta)\, h^{**}(t)} = \frac{h^*(t, t_{m+1}, \ldots, t_n)}{h^{**}(t)},$$

which is independent of $\boldsymbol\theta$. That is, the conditional distribution of $T_{m+1}, \ldots, T_n$, given $T = t$, is independent of $\boldsymbol\theta$. It follows that the conditional distribution of $T, T_{m+1}, \ldots, T_n$, given $T = t$, is independent of $\boldsymbol\theta$. Since, by assumption, there is a one-to-one correspondence between $T, T_{m+1}, \ldots, T_n$ and $X_1, \ldots, X_n$, it follows that the conditional distribution of $X_1, \ldots, X_n$, given $T = t$, is independent of $\boldsymbol\theta$.

Let now $T$ be sufficient for $\boldsymbol\theta$. Then, by using the inverse of the transformation used in the first part of this proof, one has

$$f(x_1, \ldots, x_n; \boldsymbol\theta) = f_{T, T_{m+1}, \ldots, T_n}(t_1, \ldots, t_n; \boldsymbol\theta)\, |J|^{-1} = f_T(t; \boldsymbol\theta)\, f(t_{m+1}, \ldots, t_n \mid t; \boldsymbol\theta)\, |J|^{-1}.$$

But $f(t_{m+1}, \ldots, t_n \mid t; \boldsymbol\theta)$ is independent of $\boldsymbol\theta$ by Remark 1. So we may set

$$f(t_{m+1}, \ldots, t_n \mid t)\, |J|^{-1} = h^*(t_{m+1}, \ldots, t_n; t) = h(x_1, \ldots, x_n).$$

If we also set

$$f_T(t; \boldsymbol\theta) = g\big[T(x_1, \ldots, x_n); \boldsymbol\theta\big],$$

we get

$$f(x_1, \ldots, x_n; \boldsymbol\theta) = g\big[T(x_1, \ldots, x_n); \boldsymbol\theta\big]\, h(x_1, \ldots, x_n),$$

as was to be seen. ▲

COROLLARY Let $\phi: \mathbb{R}^m \to \mathbb{R}^m$ (measurable and independent of $\boldsymbol\theta$) be one-to-one, so that the inverse $\phi^{-1}$ exists. Then, if $T$ is sufficient for $\boldsymbol\theta$, we have that $\tilde T = \phi(T)$ is also sufficient for $\boldsymbol\theta$, and $T$ is sufficient for $\tilde{\boldsymbol\theta} = \psi(\boldsymbol\theta)$, where $\psi: \mathbb{R}^r \to \mathbb{R}^r$ is one-to-one (and measurable).

PROOF We have $T = \phi^{-1}[\phi(T)] = \phi^{-1}(\tilde T)$. Thus

$$f(x_1, \ldots, x_n; \boldsymbol\theta) = g\big[T(x_1, \ldots, x_n); \boldsymbol\theta\big]\, h(x_1, \ldots, x_n) = g\big\{\phi^{-1}\big[\tilde T(x_1, \ldots, x_n)\big]; \boldsymbol\theta\big\}\, h(x_1, \ldots, x_n),$$

which shows that $\tilde T$ is sufficient for $\boldsymbol\theta$. Next, $\boldsymbol\theta = \psi^{-1}[\psi(\boldsymbol\theta)] = \psi^{-1}(\tilde{\boldsymbol\theta})$. Hence

$$f(x_1, \ldots, x_n; \boldsymbol\theta) = g\big[T(x_1, \ldots, x_n); \boldsymbol\theta\big]\, h(x_1, \ldots, x_n)$$

becomes

$$\tilde f(x_1, \ldots, x_n; \tilde{\boldsymbol\theta}) = \tilde g\big[T(x_1, \ldots, x_n); \tilde{\boldsymbol\theta}\big]\, h(x_1, \ldots, x_n),$$

where we set

$$\tilde f(x_1, \ldots, x_n; \tilde{\boldsymbol\theta}) = f\big[x_1, \ldots, x_n; \psi^{-1}(\tilde{\boldsymbol\theta})\big] \quad \text{and} \quad \tilde g\big[T(x_1, \ldots, x_n); \tilde{\boldsymbol\theta}\big] = g\big[T(x_1, \ldots, x_n); \psi^{-1}(\tilde{\boldsymbol\theta})\big].$$

Thus, $T$ is sufficient for the new parameter $\tilde{\boldsymbol\theta}$. ▲

We now give a number of examples of determining sufficient statistics by way of Theorem 1 in some interesting cases.

EXAMPLE 6 Refer to Example 1, where

$$f(x; \boldsymbol\theta) = \frac{n!}{x_1! \cdots x_r!}\, \theta_1^{x_1} \cdots \theta_r^{x_r}\, I_A(x_1, \ldots, x_r).$$

Then, by Theorem 1, it follows that the statistic $(X_1, \ldots, X_r)'$ is sufficient for $\boldsymbol\theta = (\theta_1, \ldots, \theta_r)'$. Actually, by the fact that $\sum_{j=1}^r \theta_j = 1$ and $\sum_{j=1}^r x_j = n$, we also have

$$f(x; \boldsymbol\theta) = \frac{n!}{x_1! \cdots x_{r-1}!\,\big(n - x_1 - \cdots - x_{r-1}\big)!}\, \theta_1^{x_1} \cdots \theta_{r-1}^{x_{r-1}}\, \big(1 - \theta_1 - \cdots - \theta_{r-1}\big)^{\,n - x_1 - \cdots - x_{r-1}}\, I_A(x_1, \ldots, x_r),$$

from which it follows that the statistic $(X_1, \ldots, X_{r-1})'$ is sufficient for $(\theta_1, \ldots, \theta_{r-1})'$. In particular, for $r = 2$, $X_1 = X$ is sufficient for $\theta_1 = \theta$.

EXAMPLE 7 Let $X_1, \ldots, X_n$ be i.i.d. r.v.'s from $U(\theta_1, \theta_2)$. Then by setting $x = (x_1, \ldots, x_n)'$ and $\boldsymbol\theta = (\theta_1, \theta_2)'$, we get

$$f(x; \boldsymbol\theta) = \frac{1}{(\theta_2 - \theta_1)^n}\, I_{[\theta_1, \infty)}\big(x_{(1)}\big)\, I_{(-\infty, \theta_2]}\big(x_{(n)}\big) = \frac{1}{(\theta_2 - \theta_1)^n}\, g_1\big[x_{(1)}; \boldsymbol\theta\big]\, g_2\big[x_{(n)}; \boldsymbol\theta\big],$$

where $g_1[x_{(1)}; \boldsymbol\theta] = I_{[\theta_1, \infty)}(x_{(1)})$ and $g_2[x_{(n)}; \boldsymbol\theta] = I_{(-\infty, \theta_2]}(x_{(n)})$. It follows that $(X_{(1)}, X_{(n)})'$ is sufficient for $\boldsymbol\theta$. In particular, if $\theta_1 = \alpha$ is known and $\theta_2 = \theta$, it follows that $X_{(n)}$ is sufficient for $\theta$. Similarly, if $\theta_2 = \beta$ is known and $\theta_1 = \theta$, $X_{(1)}$ is sufficient for $\theta$.
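A quick numerical companion to Example 7 (our own sketch; the sample sizes, seed, and parameter values are hypothetical): since the likelihood equals $(\theta_2 - \theta_1)^{-n}$ when $\theta_1 \le x_{(1)}$ and $x_{(n)} \le \theta_2$, and $0$ otherwise, two samples sharing the same extremes have identical likelihoods at every $\boldsymbol\theta$.

```python
# Sketch: the U(theta1, theta2) likelihood depends on the data only
# through (min, max), matching the factorization in Example 7.
import numpy as np

def likelihood(x, theta1, theta2):
    # factored form: h(x) = 1 and g depends on x only through (min, max)
    lo, hi = x.min(), x.max()
    return (theta2 - theta1) ** (-len(x)) if theta1 <= lo and hi <= theta2 else 0.0

rng = np.random.default_rng(0)
x = rng.uniform(2.0, 5.0, size=10)
y = rng.uniform(2.0, 5.0, size=10)
y[0], y[1] = x.min(), x.max()             # force the same extremes ...
y = np.clip(y, x.min(), x.max())          # ... and keep everything in between
for t1, t2 in [(1.0, 6.0), (2.0, 5.5), (2.5, 4.0)]:
    assert likelihood(x, t1, t2) == likelihood(y, t1, t2)
print("same (x_(1), x_(n)) => same likelihood at every (theta1, theta2)")
```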
EXAMPLE 8 Let $X_1, \ldots, X_n$ be i.i.d. r.v.'s from $N(\mu, \sigma^2)$. By setting $x = (x_1, \ldots, x_n)'$, $\mu = \theta_1$, $\sigma^2 = \theta_2$ and $\boldsymbol\theta = (\theta_1, \theta_2)'$, we have

$$f(x; \boldsymbol\theta) = \left(\frac{1}{\sqrt{2\pi\theta_2}}\right)^n \exp\left[-\frac{1}{2\theta_2} \sum_{j=1}^n (x_j - \theta_1)^2\right].$$

But

$$\sum_{j=1}^n (x_j - \theta_1)^2 = \sum_{j=1}^n \big[(x_j - \bar x) + (\bar x - \theta_1)\big]^2 = \sum_{j=1}^n (x_j - \bar x)^2 + n(\bar x - \theta_1)^2,$$

so that

$$f(x; \boldsymbol\theta) = \left(\frac{1}{\sqrt{2\pi\theta_2}}\right)^n \exp\left[-\frac{1}{2\theta_2} \sum_{j=1}^n (x_j - \bar x)^2 - \frac{n}{2\theta_2}(\bar x - \theta_1)^2\right].$$

It follows that $\big(\bar X, \sum_{j=1}^n (X_j - \bar X)^2\big)'$ is sufficient for $\boldsymbol\theta$. Since also

$$f(x; \boldsymbol\theta) = \left(\frac{1}{\sqrt{2\pi\theta_2}}\right)^n \exp\left(-\frac{n\theta_1^2}{2\theta_2}\right) \exp\left(\frac{\theta_1}{\theta_2} \sum_{j=1}^n x_j - \frac{1}{2\theta_2} \sum_{j=1}^n x_j^2\right),$$

it follows that, if $\theta_2 = \sigma^2$ is known and $\theta_1 = \theta$, then $\sum_{j=1}^n X_j$ is sufficient for $\theta$, whereas if $\theta_1 = \mu$ is known and $\theta_2 = \theta$, then $\sum_{j=1}^n (X_j - \mu)^2$ is sufficient for $\theta$, as follows from the form of $f(x; \boldsymbol\theta)$ at the beginning of this example. By the corollary to Theorem 1, it also follows that $(\bar X, S^2)'$ is sufficient for $\boldsymbol\theta$, where

$$S^2 = \frac{1}{n} \sum_{j=1}^n (X_j - \bar X)^2, \qquad \bar X = \frac{1}{n} \sum_{j=1}^n X_j,$$

and $\frac{1}{n}\sum_{j=1}^n (X_j - \mu)^2$ is sufficient for $\theta_2 = \theta$ if $\theta_1 = \mu$ is known.
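Example 8 can be illustrated in the same spirit (again a sketch of ours, assuming NumPy is available; the matching construction is hypothetical): any two samples with the same $\bar x$ and $\sum_j (x_j - \bar x)^2$ produce identical $N(\theta_1, \theta_2)$ likelihoods at every parameter point.

```python
# Sketch: the normal log-likelihood is a function of the data only
# through n, xbar, and the sum of squared deviations SS.
import numpy as np

def log_lik(x, theta1, theta2):
    return (-0.5 * len(x) * np.log(2 * np.pi * theta2)
            - np.sum((x - theta1) ** 2) / (2 * theta2))

rng = np.random.default_rng(1)
x = rng.normal(size=8)
z = rng.normal(size=8)
# match z to x's sample mean and sum of squared deviations
ss_x = ((x - x.mean()) ** 2).sum()
ss_z = ((z - z.mean()) ** 2).sum()
z = (z - z.mean()) * np.sqrt(ss_x / ss_z) + x.mean()
for theta1, theta2 in [(-1.0, 0.5), (0.0, 1.0), (2.0, 3.0)]:
    assert np.isclose(log_lik(x, theta1, theta2), log_lik(z, theta1, theta2))
print("matched (xbar, SS) => identical likelihoods for all (theta1, theta2)")
```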
REMARK 2 In the examples just discussed, it so happens that the dimensionality of the sufficient statistic is the same as the dimensionality of the parameter. Or, to put it differently, the number of real-valued statistics which are jointly sufficient for the parameter $\boldsymbol\theta$ coincides with the number of independent coordinates of $\boldsymbol\theta$. However, this need not always be the case. For example, if $X_1, \ldots, X_n$ are i.i.d. r.v.'s from the Cauchy distribution with parameter $\boldsymbol\theta = (\mu, \sigma^2)'$, it can be shown that no sufficient statistic of smaller dimensionality other than the (sufficient) statistic $(X_1, \ldots, X_n)'$ exists. If $m$ is the smallest number for which $T = (T_1, \ldots, T_m)'$, $T_j = T_j(X_1, \ldots, X_n)$, $j = 1, \ldots, m$, is a sufficient statistic for $\boldsymbol\theta = (\theta_1, \ldots, \theta_r)'$, then $T$ is called a minimal sufficient statistic for $\boldsymbol\theta$.

REMARK 3 In Definition 1, suppose that $m = r$ and that the conditional distribution of $(X_1, \ldots, X_n)'$, given $T_j = t_j$, is independent of $\theta_j$. In a situation like this, one may be tempted to declare that $T_j$ is sufficient for $\theta_j$. This outlook, however, is not in conformity with the definition of a sufficient statistic. The notion of sufficiency is connected with a family of p.d.f.'s $\mathcal{F} = \{f(\cdot; \boldsymbol\theta);\ \boldsymbol\theta \in \Omega\}$, and we may talk about $T_j$ being sufficient for $\theta_j$ only if all other $\theta_i$, $i \ne j$, are known; otherwise $T_j$ is either sufficient for the above family $\mathcal{F}$ or not sufficient at all. As an example, suppose that $X_1, \ldots, X_n$ are i.i.d. r.v.'s from $N(\theta_1, \theta_2)$. Then $(\bar X, S^2)'$ is sufficient for $(\theta_1, \theta_2)'$, where $S^2 = \frac{1}{n}\sum_{j=1}^n (X_j - \bar X)^2$. Now consider the conditional p.d.f. of $(X_1, \ldots, X_{n-1})'$, given $\sum_{j=1}^n X_j = y_n$. By using the transformation

$$y_j = x_j, \quad j = 1, \ldots, n - 1, \qquad y_n = \sum_{j=1}^n x_j,$$

one sees that the above-mentioned conditional p.d.f. is given by the quotient of the following p.d.f.'s:

$$\left(\frac{1}{\sqrt{2\pi\theta_2}}\right)^n \exp\left\{-\frac{1}{2\theta_2}\Big[(y_1 - \theta_1)^2 + \cdots + (y_{n-1} - \theta_1)^2 + (y_n - y_1 - \cdots - y_{n-1} - \theta_1)^2\Big]\right\}$$

and

$$\frac{1}{\sqrt{2\pi n\theta_2}} \exp\left[-\frac{(y_n - n\theta_1)^2}{2n\theta_2}\right].$$

This quotient is equal to

$$\frac{\sqrt{2\pi n\theta_2}}{\big(\sqrt{2\pi\theta_2}\big)^n} \exp\left\{\frac{(y_n - n\theta_1)^2}{2n\theta_2} - \frac{1}{2\theta_2}\Big[(y_1 - \theta_1)^2 + \cdots + (y_{n-1} - \theta_1)^2 + (y_n - y_1 - \cdots - y_{n-1} - \theta_1)^2\Big]\right\},$$

and

$$\frac{(y_n - n\theta_1)^2}{n} - \Big[(y_1 - \theta_1)^2 + \cdots + (y_{n-1} - \theta_1)^2 + (y_n - y_1 - \cdots - y_{n-1} - \theta_1)^2\Big] = \frac{y_n^2}{n} - \Big[y_1^2 + \cdots + y_{n-1}^2 + (y_n - y_1 - \cdots - y_{n-1})^2\Big],$$

independent of $\theta_1$. Thus the conditional p.d.f. under consideration is independent of $\theta_1$, but it does depend on $\theta_2$. Hence $\sum_{j=1}^n X_j$, or equivalently $\bar X$, is not sufficient for $(\theta_1, \theta_2)'$. The concept of $\bar X$ being sufficient for $\theta_1$ is not valid unless $\theta_2$ is known.

Exercises

11.1.1 In each one of the following cases, write out the p.d.f. of the r.v. $X$ and specify the parameter space $\Omega$ of the parameter involved:
i) $X$ is distributed as Poisson;
ii) $X$ is distributed as Negative Binomial;
iii) $X$ is distributed as Gamma;
iv) $X$ is distributed as Beta.

11.1.2 Let $X_1, \ldots, X_n$ be i.i.d. r.v.'s distributed as stated below. Then use Theorem 1 and its corollary in order to show that:
i) $\sum_{j=1}^n X_j$ or $\bar X$ is a sufficient statistic for $\theta$, if the $X$'s are distributed as Poisson;
ii) $\sum_{j=1}^n X_j$ or $\bar X$ is a sufficient statistic for $\theta$, if the $X$'s are distributed as Negative Binomial;
iii) $\big(\prod_{j=1}^n X_j, \sum_{j=1}^n X_j\big)'$ or $\big(\prod_{j=1}^n X_j, \bar X\big)'$ is a sufficient statistic for $(\theta_1, \theta_2)' = (\alpha, \beta)'$ if the $X$'s are distributed as Gamma. In particular, $\prod_{j=1}^n X_j$ is a sufficient statistic for $\alpha = \theta$ if $\beta$ is known, and $\sum_{j=1}^n X_j$ or $\bar X$ is a sufficient statistic for $\beta = \theta$ if $\alpha$ is known. In the latter case, take $\alpha = 1$ and conclude that $\sum_{j=1}^n X_j$ or $\bar X$ is a sufficient statistic for the parameter $\tilde\theta = 1/\theta$ of the Negative Exponential distribution;
iv) $\big(\prod_{j=1}^n X_j, \prod_{j=1}^n (1 - X_j)\big)'$ is a sufficient statistic for $(\theta_1, \theta_2)' = (\alpha, \beta)'$ if the $X$'s are distributed as Beta. In particular, $\prod_{j=1}^n X_j$ or $-\sum_{j=1}^n \log X_j$ is a sufficient statistic for $\alpha = \theta$ if $\beta$ is known, and $\prod_{j=1}^n (1 - X_j)$ is a sufficient statistic for $\beta = \theta$ if $\alpha$ is known.

11.1.3 (Truncated Poisson r.v.'s) Let $X_1, X_2$ be i.i.d. r.v.'s with p.d.f. $f(\cdot; \theta)$ given by:

$$f(0; \theta) = e^{-\theta}, \quad f(1; \theta) = \theta e^{-\theta}, \quad f(2; \theta) = 1 - e^{-\theta} - \theta e^{-\theta}, \quad f(x; \theta) = 0 \ \text{for } x \ne 0, 1, 2,$$

where $\theta > 0$. Then show that $X_1 + X_2$ is not a sufficient statistic for $\theta$.

11.1.4 Let $X_1, \ldots, X_n$ be i.i.d. r.v.'s with the Double Exponential p.d.f. $f(\cdot; \theta)$ given in Exercise 3.3.13(iii) of Chapter 3. Then show that $\sum_{j=1}^n |X_j|$ is a sufficient statistic for $\theta$.

11.1.5 If $\mathbf X_j = (X_{1j}, X_{2j})'$, $j = 1, \ldots, n$, is a random sample of size $n$ from the Bivariate Normal distribution with parameter $\boldsymbol\theta$ as described in Example 4, then, by using Theorem 1, show that

$$\left(\bar X_1, \bar X_2, \sum_{j=1}^n X_{1j}^2, \sum_{j=1}^n X_{2j}^2, \sum_{j=1}^n X_{1j} X_{2j}\right)'$$

is a sufficient statistic for $\boldsymbol\theta$.

11.1.6 If $X_1, \ldots, X_n$ is a random sample of size $n$ from $U(-\theta, \theta)$, $\theta \in (0, \infty)$, show that $(X_{(1)}, X_{(n)})'$ is a sufficient statistic for $\theta$. Furthermore, show that this statistic is not minimal by establishing that $T = \max(|X_1|, \ldots, |X_n|)$ is also a sufficient statistic for $\theta$; a numerical sketch of this last point follows.
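As promised in Exercise 11.1.6, here is a numerical sketch (ours, not the book's solution) of why $T = \max(|X_1|, \ldots, |X_n|)$ is sufficient for $U(-\theta, \theta)$: the likelihood is $(2\theta)^{-n} I(\max_j |x_j| \le \theta)$, a function of the data through $T$ alone.

```python
# Sketch: two samples with the same T = max|X_j| have equal U(-theta, theta)
# likelihoods at every theta.
import numpy as np

def likelihood(x, theta):
    return (2 * theta) ** (-len(x)) if np.max(np.abs(x)) <= theta else 0.0

rng = np.random.default_rng(2)
x = rng.uniform(-3.0, 3.0, size=12)
T = np.max(np.abs(x))
y = rng.uniform(-T, T, size=12)           # a quite different sample ...
y[0] = T                                  # ... sharing the same T = max|X_j|
for theta in (2.0, 3.0, 4.0):
    assert likelihood(x, theta) == likelihood(y, theta)
print("the likelihood is a function of the data through max|X_j| alone")
```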
11.1.7 If $X_1, \ldots, X_n$ is a random sample of size $n$ from $N(\theta, \theta^2)$, $\theta \in \mathbb{R}$, show that

$$\left(\sum_{j=1}^n X_j, \sum_{j=1}^n X_j^2\right)' \quad \text{or} \quad \left(\bar X, \sum_{j=1}^n X_j^2\right)'$$

is a sufficient statistic for $\theta$.

11.1.8 If $X_1, \ldots, X_n$ is a random sample of size $n$ with p.d.f.

$$f(x; \theta) = e^{-(x - \theta)}\, I_{(\theta, \infty)}(x), \quad \theta \in \mathbb{R},$$

show that $X_{(1)}$ is a sufficient statistic for $\theta$.

11.1.9 Let $X_1, \ldots, X_n$ be a random sample of size $n$ from the Bernoulli distribution, and set $T_1$ for the number of $X$'s which are equal to $0$ and $T_2$ for the number of $X$'s which are equal to $1$. Then show that $T = (T_1, T_2)'$ is a sufficient statistic for $\theta$.

11.1.10 If $X_1, \ldots, X_n$ are i.i.d. r.v.'s with p.d.f. $f(\cdot; \theta)$ given below, find a sufficient statistic for $\theta$:
i) $f(x; \theta) = \theta x^{\theta - 1} I_{(0,1)}(x)$, $\theta \in (0, \infty)$;
ii) $f(x; \theta) = \dfrac{2}{\theta^2}(\theta - x)\, I_{(0,\theta)}(x)$, $\theta \in (0, \infty)$;
iii) $f(x; \theta) = \dfrac{1}{6\theta^4}\, x^3 e^{-x/\theta} I_{(0,\infty)}(x)$, $\theta \in (0, \infty)$;
iv) $f(x; \theta) = \dfrac{c}{\theta}\left(\dfrac{\theta}{x}\right)^{c+1} I_{(\theta,\infty)}(x)$, $\theta \in (0, \infty)$.

11.2 Completeness

In this section, we introduce the (technical) concept of completeness, which we also illustrate by a number of examples. Its usefulness will become apparent in the subsequent sections. To this end, let $X$ be a $k$-dimensional random vector with p.d.f. $f(\cdot; \boldsymbol\theta)$, $\boldsymbol\theta \in \Omega \subseteq \mathbb{R}^r$, and let $g: \mathbb{R}^k \to \mathbb{R}$ be a (measurable) function, so that $g(X)$ is an r.v. We assume that $E_{\boldsymbol\theta}\, g(X)$ exists for all $\boldsymbol\theta \in \Omega$, and set $\mathcal{F} = \{f(\cdot; \boldsymbol\theta);\ \boldsymbol\theta \in \Omega\}$.

DEFINITION 2 With the above notation, we say that the family $\mathcal{F}$ (or the random vector $X$) is complete if, for every $g$ as above, $E_{\boldsymbol\theta}\, g(X) = 0$ for all $\boldsymbol\theta \in \Omega$ implies that $g(x) = 0$ except possibly on a set $N$ of $x$'s such that $P_{\boldsymbol\theta}(X \in N) = 0$ for all $\boldsymbol\theta \in \Omega$.

The examples which follow illustrate the concept of completeness. Meanwhile, let us recall that if $\sum_{j=0}^n c_j x^j = 0$ for more than $n$ values of $x$, then $c_j = 0$, $j = 0, \ldots, n$. Also, if $\sum_{n=0}^\infty c_n x^n = 0$ for all values of $x$ in an interval for which the series converges, then $c_n = 0$, $n = 0, 1, \ldots$.

EXAMPLE 9 Let

$$\mathcal{F} = \left\{f(\cdot; \theta);\ f(x; \theta) = \binom{n}{x} \theta^x (1 - \theta)^{n - x} I_A(x),\ \theta \in (0, 1)\right\},$$

where $A = \{0, 1, \ldots, n\}$. Then $\mathcal{F}$ is complete. In fact,

$$E_\theta\, g(X) = \sum_{x=0}^n g(x) \binom{n}{x} \theta^x (1 - \theta)^{n - x} = (1 - \theta)^n \sum_{x=0}^n g(x) \binom{n}{x} \rho^x,$$

where $\rho = \theta/(1 - \theta)$. Thus $E_\theta\, g(X) = 0$ for all $\theta \in (0, 1)$ is equivalent to

$$\sum_{x=0}^n g(x) \binom{n}{x} \rho^x = 0$$

for every $\rho \in (0, \infty)$, hence for more than $n$ values of $\rho$, and therefore

$$g(x) \binom{n}{x} = 0, \quad x = 0, 1, \ldots, n,$$

which is equivalent to $g(x) = 0$, $x = 0, 1, \ldots, n$.

EXAMPLE 10 Let

$$\mathcal{F} = \left\{f(\cdot; \theta);\ f(x; \theta) = e^{-\theta}\, \frac{\theta^x}{x!}\, I_A(x),\ \theta \in (0, \infty)\right\},$$

where $A = \{0, 1, \ldots\}$. Then $\mathcal{F}$ is complete. In fact,

$$E_\theta\, g(X) = \sum_{x=0}^\infty g(x)\, e^{-\theta}\, \frac{\theta^x}{x!} = e^{-\theta} \sum_{x=0}^\infty \frac{g(x)}{x!}\, \theta^x = 0$$

for all $\theta \in (0, \infty)$ implies $g(x)/x! = 0$ for $x = 0, 1, \ldots$, and this is equivalent to $g(x) = 0$ for $x = 0, 1, \ldots$.
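Example 9 has a transparent linear-algebra reading, rendered below as a sketch (the matrix formulation and the chosen $\theta$ grid are our addition): evaluating $E_\theta\, g(X)$ at $n + 1$ distinct values of $\theta$ gives a nonsingular linear system in $(g(0), \ldots, g(n))$, so the only solution of $E_\theta\, g(X) \equiv 0$ is $g \equiv 0$.

```python
# Sketch: completeness of Binomial(n, theta) as a nonsingular linear system.
import numpy as np
from math import comb

n = 4
thetas = np.linspace(0.1, 0.9, n + 1)     # any n+1 distinct points in (0, 1)
M = np.array([[comb(n, x) * th**x * (1 - th)**(n - x) for x in range(n + 1)]
              for th in thetas])          # M[i, x] = P_{theta_i}(X = x)
assert abs(np.linalg.det(M)) > 1e-12      # nonsingular: a disguised Vandermonde in rho
g = np.linalg.solve(M, np.zeros(n + 1))   # impose E_theta g(X) = 0 at all theta_i
assert np.allclose(g, 0.0)                # ... which forces g(0) = ... = g(n) = 0
print("the Binomial(n, theta) family is complete")
```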
[...] the class of all unbiased estimators with finite variance, the problem arises as to how one would go about searching for a UMVU estimator (if such an estimator exists). There are two approaches which may be used. The first is appropriate when complete sufficient statistics are available and provides us with a UMVU estimator. Using the second approach, one would first determine a lower bound for the variances [...]

[...] one-parameter exponential form and identify the various quantities appearing in a one-parameter exponential family:
i) $X$ is distributed as Poisson;
ii) $X$ is distributed as Negative Binomial;
iii) $X$ is distributed as Gamma with $\beta$ known;
iii′) $X$ is distributed as Gamma with $\alpha$ known;
iv) $X$ is distributed as Beta [...]

[...] finding a UMVU estimator is settled as in Section 3. When such statistics do not exist, or it is not easy to identify them, one may use the approach described here in searching for a UMVU estimator. According to this method, we first establish a lower bound for the variances of all unbiased estimators, and then we attempt to identify an unbiased estimator with variance equal to the lower bound found. If that [...]

[...] solved again. At any rate, we do have a lower bound of the variances of a class of estimators, which may be useful for comparison purposes. The following regularity conditions will be employed in proving the main result in this section. We assume that $\Omega \subseteq \mathbb{R}$ and that $g$ is real-valued and differentiable for all $\theta \in \Omega$.

12.4.1 Regularity Conditions

Let $X$ be an r.v. with p.d.f. $f(\cdot; \theta)$, $\theta \in \Omega \subseteq \mathbb{R}$. Then it is assumed [...]

[...] be estimable. An estimator $U = U(X_1, \ldots, X_n)$ is said to be a uniformly minimum variance unbiased (UMVU) estimator of $g(\theta)$ if it is unbiased and has the smallest variance within the class of all unbiased estimators of $g(\theta)$ under all $\theta \in \Omega$. That is, if $U_1 = U_1(X_1, \ldots, X_n)$ is any other unbiased estimator of $g(\theta)$, then $\sigma^2_\theta(U_1) \ge \sigma^2_\theta(U)$ for all $\theta \in \Omega$. In many cases of interest a UMVU estimator does [...]

[...] Minimum Variance. From Definition 1, it is obvious that, in order to obtain a meaningful estimator of $g(\theta)$, one would have to choose that estimator from a specified class of estimators having some optimal properties. Thus the question arises as to how a class of estimators is to be selected. In this chapter, we will devote ourselves to discussing those criteria which are often used in selecting a class of [...]

[...] variances of all estimators in the class under consideration, and then would try to determine an estimator whose variance is equal to this lower bound. In the second method just described, the Cramér–Rao inequality, to be established below, is instrumental. The second approach is appropriate when a complete sufficient statistic is not readily available. (Regarding sufficiency see, however, the corollary to Theorem [...]

[...] estimators. The interest in the members of this class stems from the interpretation of the expectation as an average value. Thus, if $U = U(X_1, \ldots, X_n)$ is an unbiased estimator of $g(\theta)$, then, no matter what $\theta \in \Omega$ is, the average value (expectation under $\theta$) of $U$ is equal to $g(\theta)$. Although the criterion of unbiasedness does specify a class of estimators with a certain property, this class is, as a rule, [...]

[...] (for a fixed $\theta$). Following this line of reasoning, one would restrict oneself first to the class of all unbiased estimators of $g(\theta)$, and next to the subclass of unbiased estimators which have finite variance under all $\theta \in \Omega$. Then, within this restricted class, one would search for an estimator with the smallest variance. Formalizing this, we have the following definition. DEFINITION [...]
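The surviving fragments above describe the Cramér–Rao route to UMVU estimation. As a hedged illustration of the idea (the Poisson model and all numerical values are our choice, not the book's), the bound $\mathrm{Var}_\theta(U) \ge g'(\theta)^2 / [n I(\theta)]$ specializes, for $g(\theta) = \theta$ and Fisher information $I(\theta) = 1/\theta$, to $\theta/n$, which $\bar X$ attains:

```python
# Sketch: simulated Var(Xbar) for i.i.d. Poisson(theta) versus the
# Cramer-Rao lower bound theta/n.
import numpy as np

theta, n, reps = 2.5, 20, 200_000
rng = np.random.default_rng(3)
xbars = rng.poisson(theta, size=(reps, n)).mean(axis=1)
cr_bound = theta / n                      # 1/(n I(theta)) with I(theta) = 1/theta
print(f"Var(Xbar) ~= {xbars.var():.5f} vs Cramer-Rao bound {cr_bound:.5f}")
```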
[...] may be formulated, but this matter will not be dealt with here.

Exercises

11.5.1 In each one of the following cases, show that the distribution of the r.v. $X$ (and the random vector $\mathbf X$) is of the multiparameter exponential form and identify the various quantities appearing in a multiparameter exponential family:
i) $X$ is distributed as Gamma;
ii) $X$ is distributed as Beta;
iii) $\mathbf X = (X_1, X_2)'$ is distributed as [...]

[...] parameter space $\Omega$ of a one-parameter exponential family of p.d.f.'s contains a non-degenerate interval, it can be shown that the family is complete. More precisely, the following result can [...]

[...] class of all unbiased statistics of $\theta$ is called a uniformly minimum variance (UMV) unbiased statistic of $\theta$ (the term "uniformly" referring to the fact that the variance is minimum for all [...]
