A Course in Mathematical Statistics, Part 7

REMARK 7  We know (see Remark 4 in Chapter 3) that if $\alpha = \beta = 1$, then the Beta distribution becomes $U(0, 1)$. In this case the corresponding Bayes estimate, as follows from (21), is
\[
\delta(x_1, \ldots, x_n) = \frac{\sum_{j=1}^{n} x_j + 1}{n + 2}.
\]

EXAMPLE 15  Let $X_1, \ldots, X_n$ be i.i.d. r.v.'s from $N(\theta, 1)$. Take $\lambda$ to be $N(\mu, 1)$, where $\mu$ is known. Then
\[
I_1 = \int_\Omega f(x_1;\theta) \cdots f(x_n;\theta)\,\lambda(\theta)\,d\theta
= \left(\frac{1}{2\pi}\right)^{(n+1)/2} \int_{-\infty}^{\infty}
\exp\left[-\frac{1}{2}\sum_{j=1}^{n}(x_j-\theta)^2 - \frac{1}{2}(\theta-\mu)^2\right] d\theta
\]
\[
= \left(\frac{1}{2\pi}\right)^{(n+1)/2}
\exp\left[-\frac{1}{2}\left(\sum_{j=1}^{n}x_j^2 + \mu^2\right)\right]
\int_{-\infty}^{\infty} \exp\left\{-\frac{1}{2}\left[(n+1)\theta^2 - 2\theta(n\bar{x}+\mu)\right]\right\} d\theta.
\]
But
\[
(n+1)\theta^2 - 2\theta(n\bar{x}+\mu)
= (n+1)\left(\theta^2 - 2\theta\,\frac{n\bar{x}+\mu}{n+1}\right)
= (n+1)\left(\theta - \frac{n\bar{x}+\mu}{n+1}\right)^2 - \frac{(n\bar{x}+\mu)^2}{n+1}.
\]
Therefore
\[
I_1 = \frac{1}{(2\pi)^{n/2}\sqrt{n+1}}
\exp\left\{-\frac{1}{2}\left[\sum_{j=1}^{n}x_j^2 + \mu^2 - \frac{(n\bar{x}+\mu)^2}{n+1}\right]\right\}
\int_{-\infty}^{\infty} \sqrt{\frac{n+1}{2\pi}}\,
\exp\left[-\frac{n+1}{2}\left(\theta - \frac{n\bar{x}+\mu}{n+1}\right)^2\right] d\theta,
\]
and since the last integrand is the p.d.f. of $N\!\left(\frac{n\bar{x}+\mu}{n+1}, \frac{1}{n+1}\right)$, the integral equals 1, so that
\[
I_1 = \frac{1}{(2\pi)^{n/2}\sqrt{n+1}}
\exp\left\{-\frac{1}{2}\left[\sum_{j=1}^{n}x_j^2 + \mu^2 - \frac{(n\bar{x}+\mu)^2}{n+1}\right]\right\}. \tag{22}
\]
Next,
\[
I_2 = \int_\Omega \theta f(x_1;\theta) \cdots f(x_n;\theta)\,\lambda(\theta)\,d\theta
= \left(\frac{1}{2\pi}\right)^{(n+1)/2} \int_{-\infty}^{\infty}
\theta \exp\left[-\frac{1}{2}\sum_{j=1}^{n}(x_j-\theta)^2 - \frac{1}{2}(\theta-\mu)^2\right] d\theta,
\]
and by the same completion of the square this equals the constant in (22) times the mean of the above normal distribution; that is,
\[
I_2 = \frac{1}{(2\pi)^{n/2}\sqrt{n+1}}
\exp\left\{-\frac{1}{2}\left[\sum_{j=1}^{n}x_j^2 + \mu^2 - \frac{(n\bar{x}+\mu)^2}{n+1}\right]\right\}
\cdot \frac{n\bar{x}+\mu}{n+1}. \tag{23}
\]
By means of (22) and (23), one has, on account of (15),
\[
\delta(x_1, \ldots, x_n) = \frac{n\bar{x}+\mu}{n+1}. \tag{24}
\]
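Since both the prior and the data in Example 15 are normal, (24) can be verified numerically. The following is a minimal sketch of mine (not from the book): it computes the posterior mean $I_2/I_1$ by direct numerical integration on a grid and compares it with the closed form $(n\bar{x}+\mu)/(n+1)$; the sample and the value of $\mu$ are arbitrary illustrative choices.

```python
import numpy as np

def bayes_closed_form(x, mu):
    # Formula (24): posterior mean for N(theta, 1) data under an N(mu, 1) prior.
    n = len(x)
    return (n * x.mean() + mu) / (n + 1)

def bayes_numeric(x, mu):
    # I2/I1 computed by brute-force integration over a theta grid.
    theta = np.linspace(-10.0, 10.0, 200001)
    log_post = -0.5 * ((x[:, None] - theta) ** 2).sum(axis=0) \
               - 0.5 * (theta - mu) ** 2          # log of likelihood times prior
    post = np.exp(log_post - log_post.max())      # stabilize before exponentiating
    return np.trapz(theta * post, theta) / np.trapz(post, theta)

rng = np.random.default_rng(0)
x = rng.normal(1.5, 1.0, size=10)   # illustrative sample; true theta = 1.5
mu = 0.0                            # illustrative prior mean
print(bayes_closed_form(x, mu), bayes_numeric(x, mu))  # the two agree closely
```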
Exercises

12.7.1  Refer to Example 14 and:
i)   Determine the posterior p.d.f. $h(\theta|x)$;
ii)  Construct a $100(1-\alpha)\%$ Bayes confidence interval for $\theta$; that is, determine a set $\{\theta \in (0,1);\ h(\theta|x) \ge c(x)\}$, where $c(x)$ is determined by the requirement that the $P_\lambda$-probability of this set is equal to $1-\alpha$;
iii) Derive the Bayes estimate in (21) as the mean of the posterior p.d.f. $h(\theta|x)$.
(Hint: For simplicity, assign equal probabilities to the two tails.)

12.7.2  Refer to Example 15 and:
i)   Determine the posterior p.d.f. $h(\theta|x)$;
ii)  Construct the equal-tail $100(1-\alpha)\%$ Bayes confidence interval for $\theta$;
iii) Derive the Bayes estimate in (24) as the mean of the posterior p.d.f. $h(\theta|x)$.

12.7.3  Let $X$ be an r.v. distributed as $P(\theta)$, and let the prior p.d.f. $\lambda$ of $\theta$ be Negative Exponential with parameter $\tau$. Then, on the basis of $X$:
i)   Determine the posterior p.d.f. $h(\theta|x)$;
ii)  Construct the equal-tail $100(1-\alpha)\%$ Bayes confidence interval for $\theta$;
iii) Derive the Bayes estimates $\delta(x)$ for the loss functions $L(\theta;\delta) = [\theta-\delta(x)]^2$ as well as $L(\theta;\delta) = [\theta-\delta(x)]^2/\theta$;
iv)  Do parts (i)-(iii) for any sample size $n$.

12.7.4  Let $X$ be an r.v. having the Beta p.d.f. with parameters $\alpha = \theta$ and $\beta = 1$, and let the prior p.d.f. $\lambda$ of $\theta$ be the Negative Exponential with parameter $\tau$. Then, on the basis of $X$:
i)   Determine the posterior p.d.f. $h(\theta|x)$;
ii)  Construct the equal-tail $100(1-\alpha)\%$ Bayes confidence interval for $\theta$;
iii) Derive the Bayes estimates $\delta(x)$ for the loss functions $L(\theta;\delta) = [\theta-\delta(x)]^2$ as well as $L(\theta;\delta) = [\theta-\delta(x)]^2/\theta$;
iv)  Do parts (i)-(iii) for any sample size $n$;
v)   Do parts (i)-(iv) for any sample size $n$ when $\lambda$ is Gamma with parameters $k$ (a positive integer) and $\beta$.
(Hint: If $Y$ is distributed as Gamma with parameters $k$ and $\beta$, then it is easily seen that $2Y/\beta \sim \chi^2_{2k}$.)

12.8 Finding Minimax Estimators

Although there is no general method for deriving minimax estimates, this can be achieved in many instances by means of the Bayes method described in the previous section.

Let $X_1, \ldots, X_n$ be i.i.d. r.v.'s with p.d.f. $f(\cdot;\theta)$, $\theta \in \Omega\ (\subseteq \mathbb{R})$, and let $\lambda$ be a prior p.d.f. on $\Omega$. Then the posterior p.d.f. of $\theta$, given $\mathbf{X} = (X_1,\ldots,X_n)' = (x_1,\ldots,x_n)' = \mathbf{x}$, namely $h(\cdot|\mathbf{x})$, is given by (16), and as has already been observed, the Bayes estimate of $\theta$ (in the decision-theoretic sense) is given by
\[
\delta(x_1,\ldots,x_n) = \int_\Omega \theta\,h(\theta|\mathbf{x})\,d\theta,
\]
provided $\lambda$ is of the continuous type. Then we have the following result.

THEOREM 7  Suppose there is a prior p.d.f. $\lambda$ on $\Omega$ such that for the Bayes estimate $\delta$ defined by (15) the risk $R(\theta;\delta)$ is independent of $\theta$. Then $\delta$ is minimax.

PROOF  By the fact that $\delta$ is the Bayes estimate corresponding to the prior $\lambda$, one has
\[
\int_\Omega R(\theta;\delta)\,\lambda(\theta)\,d\theta \le \int_\Omega R(\theta;\delta^*)\,\lambda(\theta)\,d\theta
\]
for any estimate $\delta^*$. But $R(\theta;\delta) = c$ by assumption. Hence
\[
\sup\big[R(\theta;\delta);\ \theta\in\Omega\big] = c
\le \int_\Omega R(\theta;\delta^*)\,\lambda(\theta)\,d\theta
\le \sup\big[R(\theta;\delta^*);\ \theta\in\Omega\big]
\]
for any estimate $\delta^*$. Therefore $\delta$ is minimax. The case that $\lambda$ is of the discrete type is treated similarly. ▲

The theorem just proved is illustrated by the following example.

EXAMPLE 16  Let $X_1,\ldots,X_n$ and $\lambda$ be as in Example 14. Then the corresponding Bayes estimate $\delta$ is given by (21). Now by setting $X = \sum_{j=1}^n X_j$ and taking into consideration that $E_\theta X = n\theta$ and $E_\theta X^2 = n\theta(1-\theta+n\theta)$, we obtain
\[
R(\theta;\delta) = E_\theta\left(\theta - \frac{X+\alpha}{n+\alpha+\beta}\right)^2
= \frac{1}{(n+\alpha+\beta)^2}
\left\{\left[(\alpha+\beta)^2 - n\right]\theta^2 - \left[2\alpha(\alpha+\beta) - n\right]\theta + \alpha^2\right\}.
\]
By taking $\alpha = \beta = \frac{1}{2}\sqrt{n}$ and denoting by $\delta^*$ the resulting estimate, we have
\[
(\alpha+\beta)^2 - n = 0, \qquad 2\alpha(\alpha+\beta) - n = 0,
\]
so that
\[
R(\theta;\delta^*) = \frac{\alpha^2}{(n+\alpha+\beta)^2} = \frac{n/4}{(n+\sqrt{n})^2} = \frac{1}{4(\sqrt{n}+1)^2}.
\]
Since $R(\theta;\delta^*)$ is independent of $\theta$, Theorem 7 implies that
\[
\delta^*(x_1,\ldots,x_n) = \frac{\sum_{j=1}^n x_j + \sqrt{n}/2}{n+\sqrt{n}}
= \frac{2n\bar{x}+\sqrt{n}}{2(n+\sqrt{n})}
\]
is minimax.
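As an empirical check of Example 16 (an illustration of mine, not part of the text), the risk of $\delta^*$ can be estimated by Monte Carlo at several values of $\theta$ and compared with the constant $1/[4(\sqrt{n}+1)^2]$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 25, 200_000
print("constant risk:", 1 / (4 * (np.sqrt(n) + 1) ** 2))   # 1/144 for n = 25

for theta in (0.1, 0.3, 0.5, 0.9):
    x_sum = rng.binomial(n, theta, size=reps)               # sum of n Bernoulli trials
    delta_star = (x_sum + np.sqrt(n) / 2) / (n + np.sqrt(n))
    print(theta, np.mean((delta_star - theta) ** 2))        # close to the constant
```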
EXAMPLE 17  Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s from $N(\mu,\sigma^2)$, where $\sigma^2$ is known and $\mu = \theta$. It was shown (see Example 9) that the estimator $\bar{X}$ of $\theta$ is UMVU. It can be shown that it is also minimax and admissible. The proof of these latter two facts, however, will not be presented here.

Now a UMVU estimator has uniformly (in $\theta$) smallest risk when its competitors lie in the class of unbiased estimators with finite variance. However, outside this class there might be estimators which are better than a UMVU estimator. In other words, a UMVU estimator need not be admissible. Here is an example.

EXAMPLE 18  Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s from $N(0,\sigma^2)$. Set $\sigma^2 = \theta$. Then the UMVU estimator of $\theta$ is given by
\[
U = \frac{1}{n}\sum_{j=1}^n X_j^2.
\]
(See Example 9.) Its variance (risk) was seen to be equal to $2\theta^2/n$; that is, $R(\theta;U) = 2\theta^2/n$. Consider the estimator $\delta = \alpha U$. Then its risk is
\[
R(\theta;\delta) = E_\theta(\alpha U - \theta)^2
= E_\theta\big[\alpha(U-\theta) + (\alpha-1)\theta\big]^2
= \frac{\theta^2}{n}\big[(n+2)\alpha^2 - 2n\alpha + n\big].
\]
The value $\alpha = n/(n+2)$ minimizes this risk, and the minimum risk is equal to $2\theta^2/(n+2) < 2\theta^2/n$ for all $\theta$. Thus $U$ is not admissible.

Exercise

12.8.1  Let $X_1,\ldots,X_n$ be independent r.v.'s from the $P(\theta)$ distribution, and consider the loss function $L(\theta;\delta) = [\theta-\delta(x)]^2/\theta$. Then for the estimate $\delta(\mathbf{x}) = \bar{x}$, calculate the risk $R(\theta;\delta) = \frac{1}{\theta}E_\theta[\theta-\delta(\mathbf{X})]^2$, and conclude that $\delta(\mathbf{x})$ is minimax.

12.9 Other Methods of Estimation

Minimum chi-square method. This method of estimation is applicable in situations which can be described by a Multinomial distribution. Namely, consider $n$ independent repetitions of an experiment whose possible outcomes are the $k$ pairwise disjoint events $A_j$, $j = 1,\ldots,k$. Let $X_j$ be the number of trials which result in $A_j$, and let $p_j$ be the probability that any one of the trials results in $A_j$. The probabilities $p_j$ may be functions of $r$ parameters; that is,
\[
p_j = p_j(\boldsymbol\theta), \quad j = 1,\ldots,k, \qquad \boldsymbol\theta = (\theta_1,\ldots,\theta_r)'.
\]
Then the present method of estimating $\boldsymbol\theta$ consists in minimizing some measure of discrepancy between the observed $X$'s and their expected values. One such measure is the following:
\[
\chi^2 = \sum_{j=1}^{k} \frac{\big[X_j - np_j(\boldsymbol\theta)\big]^2}{np_j(\boldsymbol\theta)}.
\]
Often the $p$'s are differentiable with respect to the $\theta$'s, and then the minimization can be achieved, in principle, by differentiation. However, the actual solution of the resulting system of $r$ equations is often tedious. The solution may be easier by minimizing the following modified $\chi^2$ expression:
\[
\chi^2_{\mathrm{mod}} = \sum_{j=1}^{k} \frac{\big[X_j - np_j(\boldsymbol\theta)\big]^2}{X_j},
\]
provided, of course, all $X_j > 0$, $j = 1,\ldots,k$. Under suitable regularity conditions, the resulting estimators can be shown to have some asymptotic optimal properties. (See Section 12.10.)

The method of moments. Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s with p.d.f. $f(\cdot;\boldsymbol\theta)$ and, for a positive integer $r$, assume that $EX^r = m_r$ is finite. The problem is that of estimating $m_r$. According to the present method, $m_r$ will be estimated by the corresponding sample moment
\[
\frac{1}{n}\sum_{j=1}^n X_j^r.
\]
The resulting moment estimates are always unbiased and, under suitable regularity conditions, they enjoy some asymptotic optimal properties as well. On the other hand, the theoretical moments are also functions of $\boldsymbol\theta = (\theta_1,\ldots,\theta_r)'$. Then we consider the following system
\[
\frac{1}{n}\sum_{j=1}^n X_j^k = m_k(\theta_1,\ldots,\theta_r), \qquad k = 1,\ldots,r,
\]
the solution of which (if possible) will provide estimators for $\theta_j$, $j = 1,\ldots,r$.

EXAMPLE 19  Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s from $N(\mu,\sigma^2)$, where both $\mu$ and $\sigma^2$ are unknown. By the method of moments, we have
\[
\bar{X} = \mu, \qquad \frac{1}{n}\sum_{j=1}^n X_j^2 = \sigma^2 + \mu^2;
\qquad\text{hence}\qquad
\hat\mu = \bar{X}, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{j=1}^n (X_j-\bar{X})^2.
\]

EXAMPLE 20  Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s from $U(\alpha,\beta)$, where both $\alpha$ and $\beta$ are unknown. Since
\[
EX_1 = \frac{\alpha+\beta}{2} \quad\text{and}\quad \sigma^2(X_1) = \frac{(\beta-\alpha)^2}{12}
\]
(see Chapter 5), we have
\[
\begin{cases}
\bar{X} = \dfrac{\alpha+\beta}{2}\\[6pt]
S^2 = \dfrac{(\beta-\alpha)^2}{12}
\end{cases}
\qquad\text{or}\qquad
\begin{cases}
\alpha+\beta = 2\bar{X}\\
\beta-\alpha = 2\sqrt{3}\,S,
\end{cases}
\qquad\text{where } S^2 = \frac{1}{n}\sum_{j=1}^n (X_j-\bar{X})^2.
\]
Hence $\hat\alpha = \bar{X} - \sqrt{3}\,S$ and $\hat\beta = \bar{X} + \sqrt{3}\,S$.
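Example 20 is easy to try out numerically; here is a small sketch of mine (the endpoint values are arbitrary choices for illustration):

```python
import numpy as np

def uniform_moment_estimates(x):
    # Example 20: alpha-hat = xbar - sqrt(3) S, beta-hat = xbar + sqrt(3) S,
    # with S^2 the sample variance taken with divisor n.
    xbar, s = x.mean(), x.std()   # numpy's default ddof=0 matches S^2 = (1/n) sum
    return xbar - np.sqrt(3) * s, xbar + np.sqrt(3) * s

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 4.0, size=1000)    # illustrative alpha = -1, beta = 4
print(uniform_moment_estimates(x))       # roughly (-1.0, 4.0)
```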
REMARK 8  In Example 20, we see that the moment estimators $\hat\alpha$, $\hat\beta$ of $\alpha$, $\beta$, respectively, are not functions of the sufficient statistic $(X_{(1)}, X_{(n)})'$ of $(\alpha,\beta)'$. This is a drawback of the method of moment estimation. Another obvious disadvantage of this method is that it fails when no moments exist (as in the case of the Cauchy distribution), or when not enough moments exist.

Least square method. This method is applicable when the underlying distribution is of a certain special form, and it will be discussed in detail in Chapter 16.

Exercises

12.9.1  Let $X_1,\ldots,X_n$ be independent r.v.'s distributed as $U(\theta-a,\ \theta+b)$, where $a, b > 0$ are known and $\theta \in \Omega = \mathbb{R}$. Find the moment estimator of $\theta$ and calculate its variance.

12.9.2  If $X_1,\ldots,X_n$ are independent r.v.'s distributed as $U(-\theta,\theta)$, $\theta \in \Omega = (0,\infty)$, does the method of moments provide an estimator for $\theta$?

12.9.3  If $X_1,\ldots,X_n$ are i.i.d. r.v.'s from the Gamma distribution with parameters $\alpha$ and $\beta$, show that $\hat\alpha = \bar{X}^2/S^2$ and $\hat\beta = S^2/\bar{X}$ are the moment estimators of $\alpha$ and $\beta$, respectively, where
\[
S^2 = \frac{1}{n}\sum_{j=1}^n (X_j-\bar{X})^2.
\]

12.9.4  Let $X_1, X_2$ be independent r.v.'s with p.d.f. $f(\cdot;\theta)$ given by
\[
f(x;\theta) = \frac{2}{\theta^2}(\theta - x)\,I_{(0,\theta)}(x), \qquad \theta \in \Omega = (0,\infty).
\]
Find the moment estimator of $\theta$.

12.9.5  Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s from the Beta distribution with parameters $\alpha$, $\beta$, and find the moment estimators of $\alpha$ and $\beta$.

12.9.6  Refer to Exercise 12.5.7 and find the moment estimators of $\theta_1$ and $\theta_2$.

12.10 Asymptotically Optimal Properties of Estimators

So far we have occupied ourselves with the problem of constructing an estimator on the basis of a sample of fixed size $n$, having one or more of the following properties: unbiasedness, (uniformly) minimum variance, minimaxity, minimum average risk (Bayes), and the (intuitively optimal) property associated with an MLE. If, however, the sample size $n$ may increase indefinitely, then some additional, asymptotic properties can be associated with an estimator. To this effect, we have the following definitions. Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s with p.d.f. $f(\cdot;\theta)$, $\theta \in \Omega \subseteq \mathbb{R}$.

DEFINITION 14  The sequence of estimators of $\theta$, $\{V_n\} = \{V(X_1,\ldots,X_n)\}$, is said to be consistent in probability (or weakly consistent) if $V_n \xrightarrow{P_\theta} \theta$ as $n\to\infty$, for all $\theta\in\Omega$. It is said to be a.s. consistent (or strongly consistent) if $V_n \xrightarrow{\text{a.s.}(P_\theta)} \theta$ as $n\to\infty$, for all $\theta\in\Omega$. (See Chapter 8.) From now on, the term "consistent" will be used in the sense of "weakly consistent."

The following theorem provides a criterion for a sequence of estimates to be consistent.

THEOREM 8  If, as $n\to\infty$, $E_\theta V_n \to \theta$ and $\sigma_\theta^2 V_n \to 0$, then $V_n \xrightarrow{P_\theta} \theta$.

PROOF  For the proof of the theorem the reader is referred to Remark 5, Chapter 8. ▲

DEFINITION 15  The sequence of estimators of $\theta$, $\{V_n\} = \{V(X_1,\ldots,X_n)\}$, properly normalized, is said to be asymptotically normal $N(0,\sigma^2(\theta))$ if, as $n\to\infty$, $\sqrt{n}(V_n-\theta) \xrightarrow{d(P_\theta)} X$ for all $\theta\in\Omega$, where $X$ is distributed (under $P_\theta$) as $N(0,\sigma^2(\theta))$. (See Chapter 8.) This is often expressed (loosely) by writing $V_n \approx N(\theta, \sigma^2(\theta)/n)$. If
\[
\sqrt{n}(V_n-\theta) \xrightarrow{d(P_\theta)} N(0,\sigma^2(\theta)) \quad\text{as } n\to\infty,
\]
it follows that $V_n \xrightarrow{P_\theta} \theta$ as $n\to\infty$ (see Exercise 12.10.1).

DEFINITION 16  The sequence of estimators of $\theta$, $\{V_n\} = \{V(X_1,\ldots,X_n)\}$, is said to be best asymptotically normal (BAN) if:
i)  it is asymptotically normal, and
ii) the variance $\sigma^2(\theta)$ of its limiting normal distribution is smallest for all $\theta\in\Omega$ in the class of all sequences of estimators which satisfy (i).
A BAN sequence of estimators is also called asymptotically efficient (with respect to the variance). The relative asymptotic efficiency of any other sequence of estimators which satisfies (i) only is expressed by the quotient of the smallest variance mentioned in (ii) to the variance of the asymptotic normal distribution of the sequence of estimators under consideration.
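Definitions 14 and 15 can be made concrete by simulation. The sketch below (mine, not the book's) uses Poisson data, for which $\bar{X}_n$ is consistent and asymptotically normal with $\sigma^2(\theta) = \theta$: the probability that $\bar{X}_n$ misses $\theta$ by more than a fixed amount shrinks with $n$, while the variance of $\sqrt{n}(\bar{X}_n-\theta)$ stabilizes near $\theta$.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, reps = 4.0, 10_000   # illustrative parameter value

for n in (10, 100, 1000):
    xbar = rng.poisson(theta, size=(reps, n)).mean(axis=1)
    z = np.sqrt(n) * (xbar - theta)
    # consistency: P(|xbar - theta| > 0.2) -> 0;  normality: Var(z) -> theta
    print(n, np.mean(np.abs(xbar - theta) > 0.2), np.var(z))
```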
In connection with the concepts introduced above, we have the following result.

THEOREM 9  Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s with p.d.f. $f(\cdot;\theta)$, $\theta\in\Omega\subseteq\mathbb{R}$. Then, if certain suitable regularity conditions are satisfied, the likelihood equation
\[
\frac{\partial}{\partial\theta}\log L(\theta\,|\,X_1,\ldots,X_n) = 0
\]
has a root $\theta_n^* = \theta^*(X_1,\ldots,X_n)$, for each $n$, such that the sequence $\{\theta_n^*\}$ of estimators is BAN, and the variance of its limiting normal distribution is equal to the inverse of Fisher's information number
\[
I(\theta) = E_\theta\left[\frac{\partial}{\partial\theta}\log f(X;\theta)\right]^2,
\]
where $X$ is an r.v. distributed as the $X$'s above.

In smooth cases, $\theta_n^*$ will be an MLE or the MLE. Examples have been constructed, however, for which $\{\theta_n^*\}$ does not satisfy (ii) of Definition 16 for some exceptional $\theta$'s. Appropriate regularity conditions ensure that these exceptional $\theta$'s are only "a few" (in the sense of their set having Lebesgue measure zero). The fact that there can be exceptional $\theta$'s, along with other considerations, has prompted the introduction of other criteria of asymptotic efficiency. However, this topic will not be touched upon here. Also, the proof of Theorem 9 is beyond the scope of this book, and therefore it will be omitted.

EXAMPLE 21
i)  Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s from $B(1,\theta)$. Then, by Exercise 12.5.1, the MLE of $\theta$ is $\bar{X}$, which we denote by $\bar{X}_n$ here. The weak and strong consistency of $\bar{X}_n$ follows by the WLLN and SLLN, respectively (see Chapter 8). That $\sqrt{n}(\bar{X}_n-\theta)$ is asymptotically normal $N(0, I^{-1}(\theta))$, where $I(\theta) = 1/[\theta(1-\theta)]$ (see Example 7), follows from the fact that
\[
\frac{\sqrt{n}(\bar{X}_n-\theta)}{\sqrt{\theta(1-\theta)}}
\]
is asymptotically $N(0,1)$ by the CLT (see Chapter 8).
ii) If $X_1,\ldots,X_n$ are i.i.d. r.v.'s from $P(\theta)$, then the MLE $\bar{X} = \bar{X}_n$ of $\theta$ (see Example 10) is both (strongly) consistent and asymptotically normal by the same reasoning as above, with the variance of the limiting normal distribution being equal to $I^{-1}(\theta) = \theta$ (see Example 8).
iii) The same is true of the MLE $\bar{X} = \bar{X}_n$ of $\mu$ and $\frac{1}{n}\sum_{j=1}^n (X_j-\mu)^2$ of $\sigma^2$ if $X_1,\ldots,X_n$ are i.i.d. r.v.'s from $N(\mu,\sigma^2)$ with one parameter known and the other unknown (see Example 12). The variance of the (normal) distribution of $\sqrt{n}(\bar{X}_n-\mu)$ is $I^{-1}(\mu) = \sigma^2$, and the variance of the limiting normal distribution of
\[
\sqrt{n}\left[\frac{1}{n}\sum_{j=1}^n (X_j-\mu)^2 - \sigma^2\right]
\quad\text{is}\quad
I^{-1}(\sigma^2) = 2\sigma^4
\]
(see Example 9).

It can further be shown that in all cases (i)-(iii) just considered the regularity conditions not explicitly mentioned in Theorem 9 are satisfied, and therefore the above sequences of estimators are actually BAN.

Exercise

12.10.1  Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s with p.d.f. $f(\cdot;\theta)$, $\theta\in\Omega\subseteq\mathbb{R}$, and let $\{V_n\} = \{V_n(X_1,\ldots,X_n)\}$ be a sequence of estimators of $\theta$ such that $\sqrt{n}(V_n-\theta) \xrightarrow{d(P_\theta)} Y$ as $n\to\infty$, where $Y$ is an r.v. distributed as $N(0,\sigma^2(\theta))$. Then show that $V_n \xrightarrow{P_\theta} \theta$ as $n\to\infty$. (That is, asymptotic normality of $\{V_n\}$ implies its consistency in probability.)
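As a numerical companion to Example 21(i) (my sketch, using only formulas stated in the text), the Monte Carlo variance of $\sqrt{n}(\bar{X}_n-\theta)$ for Bernoulli data can be compared with $I^{-1}(\theta) = \theta(1-\theta)$:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 0.3, 500, 100_000
xbar = rng.binomial(n, theta, size=reps) / n   # each entry is an X-bar over n trials
print(np.var(np.sqrt(n) * (xbar - theta)))     # Monte Carlo variance
print(theta * (1 - theta))                     # I^{-1}(theta) = 0.21
```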
12.11 Closing Remarks

The following definition serves the purpose of asymptotically comparing two estimators.

DEFINITION 17  Let $X_1,\ldots,X_n$ be i.i.d. r.v.'s with p.d.f. $f(\cdot;\theta)$, $\theta\in\Omega\subseteq\mathbb{R}$, and let
\[
\{U_n\} = \{U(X_1,\ldots,X_n)\} \quad\text{and}\quad \{V_n\} = \{V(X_1,\ldots,X_n)\}
\]
be two sequences of estimators of $\theta$. Then we say that $\{U_n\}$ and $\{V_n\}$ are asymptotically equivalent if for every $\theta\in\Omega$,
\[
\sqrt{n}(U_n - V_n) \xrightarrow{P_\theta} 0 \quad\text{as } n\to\infty.
\]

For an example, suppose that the $X$'s are from $B(1,\theta)$. It has been shown (see Exercise 12.3.3) that the UMVU estimator of $\theta$ is $U_n = \bar{X}_n\ (= \bar{X})$, and this coincides with the MLE of $\theta$ (Exercise 12.5.1). However, the Bayes estimator of $\theta$, corresponding to a Beta p.d.f. $\lambda$, is given by
\[
V_n = \frac{\sum_{j=1}^n X_j + \alpha}{n+\alpha+\beta}, \tag{25}
\]
and the minimax estimator is
\[
W_n = \frac{\sum_{j=1}^n X_j + \sqrt{n}/2}{n+\sqrt{n}}. \tag{26}
\]
That is, four different methods of estimation of the same parameter $\theta$ provided three different estimators. This is not surprising, since the criteria of optimality employed in the four approaches were different.

Next, by the CLT, $\sqrt{n}(U_n-\theta) \xrightarrow{d(P_\theta)} Z$ as $n\to\infty$, where $Z$ is an r.v. distributed as $N(0,\theta(1-\theta))$, and it can also be shown (see Exercise 12.11.1) that $\sqrt{n}(V_n-\theta) \xrightarrow{d(P_\theta)} Z$ as $n\to\infty$, for any arbitrary but fixed (that is, not functions of $n$) values of $\alpha$ and $\beta$. It can also be shown (see Exercise 12.11.2) that $\sqrt{n}(U_n-V_n) \xrightarrow{P_\theta} 0$. Thus $\{U_n\}$ and $\{V_n\}$ are asymptotically equivalent according to Definition 17. As for $W_n$, it can be established (see Exercise 12.11.3) that $\sqrt{n}(W_n-\theta) \xrightarrow{d(P_\theta)} W$ as $n\to\infty$, where $W$ is an r.v. distributed as $N\big(\tfrac{1}{2}-\theta,\ \theta(1-\theta)\big)$.

[...]

... an r.v. taking the value 1 if head appears and 0 if tail appears. Then the statement is: The coin is biased; iii) $X$ is an r.v. whose expectation is equal to 5.

13.2 Testing a Simple Hypothesis Against a Simple Alternative

In the present case, we take $\Omega$ to consist of two points only, which can be labeled as $\theta_0$ and $\theta_1$; that is, $\Omega = \{\theta_0, \theta_1\}$. In actuality, $\Omega$ may consist of more than two points, but we focus attention ...

[...]

... using the Binomial tables (if possible) and also by using the CLT.

13.3.10  The number $X$ of fatal traffic accidents in a certain city during a year may be assumed to be an r.v. distributed as $P(\lambda)$. For the latest year $x = 4$, whereas for the past several years the average was 10. Test whether there has been an improvement, at level of significance $\alpha = 0.01$. First, write out the expression for the exact determination ...

[...]

... 13.1.) At this point, there are two cases to consider. First, $a(C_0) = a(C_0-)$; that is, $C_0$ is a continuity point of the function $a$. Then $\alpha = a(C_0)$, and if in (2) $C$ is replaced by $C_0$ and $\gamma = 0$, the resulting test is of level $\alpha$. In fact, in this case (4) becomes
\[
E_{\theta_0}\varphi(Z) = P_{\theta_0}(Y > C_0) = a(C_0) = \alpha,
\]
as was to be seen. Next, we assume that $C_0$ is a discontinuity point of $a$. In this case, take again $C = C_0$ in (2) and also set
\[
\gamma = \frac{\alpha - a(C_0)}{a(C_0-) - a(C_0)}
\]
(so that $0 \le \gamma \le 1$). Again we assert that the resulting test is of level $\alpha$. In the present case, (4) becomes ...

[...]

... unique. In fact, if $C = C_0$ is a discontinuity point of $a$, then both $C$ and $\gamma$ are uniquely defined the way it was done in the proof of the theorem. Next, if the (straight) line through the point $(0,\alpha)$ and parallel to the $C$-axis has only one point in common with the graph of $a$, then $\gamma = 0$ and $C$ is the unique point for which $a(C) = \alpha$. Finally, if the above (straight) line coincides with part of the graph of $a$ ...
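The determination of $C_0$ and $\gamma$ discussed in these fragments is mechanical for a discrete test statistic. The sketch below is my illustration (the function name and the Binomial setting are my choices): for $Y \sim B(n,\theta_0)$ it finds the smallest $C_0$ with $a(C_0) = P(Y > C_0) \le \alpha$ and then the randomization constant $\gamma$ so that $P(Y > C_0) + \gamma P(Y = C_0) = \alpha$.

```python
from scipy.stats import binom

def randomized_test_constants(n, theta0, alpha):
    # a(C) = P(Y > C) under theta0; pick the smallest C0 with a(C0) <= alpha,
    # then gamma = (alpha - a(C0)) / P(Y = C0) absorbs the remaining level.
    for c in range(n + 1):
        a_c = binom.sf(c, n, theta0)            # P(Y > c)
        if a_c <= alpha:
            p_eq = binom.pmf(c, n, theta0)      # P(Y = c) = a(C0-) - a(C0)
            gamma = (alpha - a_c) / p_eq if p_eq > 0 else 0.0
            return c, gamma
    return n, 0.0

c0, gamma = randomized_test_constants(n=10, theta0=0.25, alpha=0.05)
print(c0, gamma)   # reject if Y > c0; if Y == c0, reject with probability gamma
```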
... values of the $X$'s. A randomized (statistical) test (or test function) for testing $H$ against the alternative $A$ is a (measurable) function $\varphi$ defined on $\mathbb{R}^n$, taking values in $[0,1]$ and having the following interpretation: If $(x_1,\ldots,x_n)'$ is the observed value of $(X_1,\ldots,X_n)'$ and $\varphi(x_1,\ldots,x_n) = y$, then a coin, whose probability of falling heads is $y$, is tossed and $H$ is rejected or accepted when heads or tails appear, respectively. In the particular case where $y$ can be ...

[...]

... probabilities. It is also plain that one would desire to make $\alpha$ as small as possible (preferably 0) and at the same time to make the power as large as possible (preferably 1). Of course, maximizing ...

[...]

... to him/her than that of wrongly accepting it. For example, suppose a pharmaceutical company is considering the marketing of a newly developed drug for treatment of a disease for which the best available drug in the market has a cure rate of 60%. On the basis of limited experimentation, the research division claims that the new drug is more effective. If, in fact, it fails to be more ...

[...]

... for testing the hypothesis $H: \theta \le \theta_0$ against the alternative $A: \theta > \theta_0$ at level of significance $\alpha$.
ii)  What does the test in (i) become for $n = 10$, $\theta_0 = 0.25$ and $\alpha = 0.05$?
iii) Compute the power at $\theta_1 = 0.375,\ 0.500,\ 0.625,\ 0.750,\ 0.875$.
Now let $\theta_0 = 0.125$ and $\alpha = 0.1$, and suppose that we are interested in securing power at least 0.9 against the alternative $\theta_1 = 0.25$.
iv) Determine the minimum sample size $n$ required ...

[...]

... to obtain power at least 0.95 against the alternative $\theta_1 = 500$ when $\theta_0 = 1{,}000$ and $\alpha = 0.05$.

13.4 UMPU Tests for Testing Certain Composite Hypotheses

In Section 13.3, it was stated that under the assumptions of Theorem 3, for testing $H: \theta\in\omega = \{\theta\in\Omega;\ \theta\le\theta_1 \text{ or } \theta\ge\theta_2\}$ against $A: \theta\in\omega^c$, a UMP test exists. It is then somewhat surprising that, if the roles of $H$ and $A$ are ...
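Continuing the sketch above, the power of that randomized test at an alternative $\theta_1$ is $P_{\theta_1}(Y > C_0) + \gamma P_{\theta_1}(Y = C_0)$; evaluating it at the $\theta_1$ values of the quoted exercise ($n = 10$, $\theta_0 = 0.25$, $\alpha = 0.05$) is then immediate. The numbers are my computation, not quoted from the book.

```python
from scipy.stats import binom

def power(theta1, n, c0, gamma):
    # beta(theta1) = P(Y > c0) + gamma * P(Y = c0), with Y ~ B(n, theta1)
    return binom.sf(c0, n, theta1) + gamma * binom.pmf(c0, n, theta1)

n, c0, gamma = 10, 5, 0.5184   # constants found by the previous sketch (approximate)
for theta1 in (0.375, 0.500, 0.625, 0.750, 0.875):
    print(theta1, round(power(theta1, n, c0, gamma), 4))   # power increases in theta1
```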
