Chapter 10 Hypothesis Tests Involving a Sample Mean or Proportion Fat-Free or Regular Pringles: Can Tasters Tell the Difference? © Michael Newman/PhotoEdit When the makers of Pringles potato chips came out with new Fat-Free Pringles, they wanted the fat-free chips to taste just as good as their already successful regular Pringles Did they succeed? In an independent effort to answer this question, USA Today hired registered dietitian Diane Wilke to give 44 people a chance to see whether they could tell the difference between the two kinds of Pringles Each tester was given two bowls of chips—one containing Fat-Free Pringles, the other containing regular Pringles—and nobody was told which was which On average, if the two kinds of chips really taste the same, we’d expect such testers to have a 50% chance of correctly identifying the bowl containing the fat-free chips However, 25 of the 44 testers (56.8%) successfully identified the bowl with the fat-free chips Does this result mean that Pringles failed in its attempt to make the products taste the same, or could the difference between me? lf the ti the observed 56.8% and the theoretical than re o m y cantl 50% have happened just by t signifi r correc te s ta e Is th chance? Actually, if the chips really taste the same and we were to repeat this type of test many times, pure chance would lead to about 1͞5 of the tests yielding a sample percentage at least as high as the 56.8% observed here Thus, this particular test would not allow us to rule out the possibility that the chips taste the same After reading Sections 10.3 and 10.6 of this chapter, you’ll be able to verify how we reached this conclusion For now, just trust us and read on Thanks Source: Beth Ashley, “Taste Testers Notice Little Difference Between Products,” USA Today, September 30, 1996, p 6D Interested readers may also refer to Fiona Haynes, “Do Low-Fat Foods Really Taste Different?”, http://lowfatcooking.about.com, August 9, 2006 310 Part 4: Hypothesis Testing learning objectives After reading this chapter, you should be able to: 10.1 • • Describe the meaning of a null and an alternative hypothesis • Describe what is meant by Type I and Type II errors, and explain how these can be reduced in hypothesis testing • Carry out a hypothesis test for a population mean or a population proportion, interpret the results of the test, and determine the appropriate business decision that should be made • • • Determine and explain the p-value for a hypothesis test • Determine and explain the operating characteristic curve for a hypothesis test and a given decision rule Transform a verbal statement into appropriate null and alternative hypotheses, including the determination of whether a two-tail test or a one-tail test is appropriate Explain how confidence intervals are related to hypothesis testing Determine and explain the power curve for a hypothesis test and a given decision rule INTRODUCTION In statistics, as in life, nothing is as certain as the presence of uncertainty However, just because we’re not 100% sure of something, that’s no reason why we can’t reach some conclusions that are highly likely to be true For example, if a coin were to land heads 20 times in a row, we might be wrong in concluding that it’s unfair, but we’d still be wise to avoid engaging in gambling contests with its owner In this chapter, we’ll examine the very important process of reaching conclusions based on sample information — in particular, of evaluating hypotheses based on claims like the following: • • Titus Walsh, the director of a municipal transit authority, claims that 35% of the system’s ridership consists of senior citizens In a recent study, independent researchers find that only 23% of the riders observed are senior citizens Should the claim of Walsh be considered false? Jackson T Backus has just received a railroad car of canned beets from his grocery supplier, who claims that no more than 20% of the cans are dented Jackson, a born skeptic, examines a random sample from the shipment and finds that 25% of the cans sampled are dented Has Mr Backus bought a batch of botched beets? Each of the preceding cases raises a question of “believability” that can be examined by the techniques of this chapter These methods represent inferential statistics, because information from a sample is used in reaching a conclusion about the population from which the sample was drawn Null and Alternative Hypotheses The first step in examining claims like the preceding is to form a null hypothesis, expressed as H0 (“H sub naught”) The null hypothesis is a statement about the value of a population parameter and is put up for testing in the face of numerical evidence The null hypothesis is either rejected or fails to be rejected Chapter 10: Hypothesis Tests Involving a Sample Mean or Proportion The null hypothesis tends to be a “business as usual, nothing out of the ordinary is happening” statement that practically invites you to challenge its truthfulness In the philosophy of hypothesis testing, the null hypothesis is assumed to be true unless we have statistically overwhelming evidence to the contrary In other words, it gets the benefit of the doubt The alternative hypothesis, H1 (“H sub one”), is an assertion that holds if the null hypothesis is false For a given test, the null and alternative hypotheses include all possible values of the population parameter, so either one or the other must be false There are three possible choices for the set of null and alternative hypotheses to be used for a given test Described in terms of an (unknown) population mean (), they might be listed as shown below Notice that each null hypothesis has an equality term in its statement (i.e., “ϭ,” “Ն,” or “Յ”) Null Hypothesis Alternative Hypothesis H0: ϭ $10 H0: Ն $10 H0: Յ $10 H1: $10 H1: Ͻ $10 H1: Ͼ $10 ( is $10, or it isn’t.) ( is at least $10, or it is less.) ( is no more than $10, or it is more.) Directional and Nondirectional Testing A directional claim or assertion holds that a population parameter is greater than (Ͼ), at least (Ն), no more than (Յ), or less than (Ͻ) some quantity For example, Jackson’s supplier claims that no more than 20% of the beet cans are dented A nondirectional claim or assertion states that a parameter is equal to some quantity For example, Titus Walsh claims that 35% of his transit riders are senior citizens Directional assertions lead to what are called one-tail tests, where a null hypothesis can be rejected by an extreme result in one direction only A nondirectional assertion involves a two-tail test, in which a null hypothesis can be rejected by an extreme result occurring in either direction Hypothesis Testing and the Nature of the Test When formulating the null and alternative hypotheses, the nature, or purpose, of the test must also be taken into account To demonstrate how (1) directionality versus nondirectionality and (2) the purpose of the test can guide us toward the appropriate testing approach, we will consider the two examples at the beginning of the chapter For each situation, we’ll examine (1) the claim or assertion leading to the test, (2) the null hypothesis to be evaluated, (3) the alternative hypothesis, (4) whether the test will be two-tail or one-tail, and (5) a visual representation of the test itself Titus Walsh Titus’ assertion: “35% of the riders are senior citizens.” Null hypothesis: H0: ϭ 0.35, where ϭ the population proportion The null hypothesis is identical to his statement since he’s claimed an exact value for the population parameter Alternative hypothesis: H1: 0.35 If the population proportion is not 0.35, then it must be some other value 311 312 Part 4: Hypothesis Testing FIGURE 10.1 Hypothesis tests can be two-tail (a) or one-tail (b), depending on the purpose of the test A one-tail test can be either left-tail (not shown) or right-tail (b) H0: p = 0.35 H1: p ≠ 0.35 Reject H0 Do not reject H0 Reject H0 0.35 Proportion of senior citizens in a random sample of transit riders (a) Titus Walsh: “35% of the transit riders are senior citizens” H0: p ≤ 0.20 H1: p > 0.20 Do not reject H0 Reject H0 0.20 Proportion of dented containers in a random sample of beet cans (b) Jackson Backus' supplier: “No more than 20% of the cans are dented” A two-tail test is used because the null hypothesis is nondirectional As part (a) of Figure 10.1 shows, ϭ 0.35 is at the center of the hypothesized distribution, and a sample with either a very high proportion or a very low proportion of senior citizens would lead to rejection of the null hypothesis Accordingly, there are reject areas at both ends of the distribution Jackson T Backus Supplier’s assertion: “No more than 20% of the cans are dented.” Null hypothesis: H0: Յ 0.20, where ϭ the population proportion In this situation, the null hypothesis happens to be the same as the claim that led to the test This is not always the case when the test involves a directional claim or assertion Alternative hypothesis: H1: Ͼ 0.20 Jackson’s purpose in conducting the test is to determine whether the population proportion of dented cans could really be greater than 0.20 A one-tail test is used because the null hypothesis is directional As part (b) of Figure 10.1 shows, a sample with a very high proportion of dented cans would lead to the rejection of the null hypothesis A one-tail test in which the rejection area is at the right is known as a right-tail test Note that in part (b) of Figure 10.1, the center of the hypothesized distribution is identified as ϭ 0.20 This is the highest value for which the null hypothesis could be true From Jackson’s standpoint, this may be viewed as somewhat conservative, but remember that the null hypothesis tends to get the benefit of the doubt Chapter 10: Hypothesis Tests Involving a Sample Mean or Proportion 313 TABLE 10.1 A VERBAL STATEMENT IS AN EQUALITY, “”؍ Example: “Average tire life is 35,000 miles.” H0: H1: ϭ 35,000 miles 35,000 miles B VERBAL STATEMENT IS “Ն” OR “Յ” (NOT Ͼ OR Ͻ) Example: “Average tire life is at least 35,000 miles.” H0: H1: Ն 35,000 miles Ͻ 35,000 miles Example: “Average tire life is no more than 35,000 miles.” H0: H1: Յ 35,000 miles Ͼ 35,000 miles In directional tests, the directionality of the null and alternative hypotheses will be in opposite directions and will depend on the purpose of the test For example, in the case of Jackson Backus, Jackson was interested in rejecting H0: Յ 0.20 only if evidence suggested to be higher than 0.20 As we proceed with the examples in the chapter, we’ll get more practice in formulating null and alternative hypotheses for both nondirectional and directional tests Table 10.1 offers general guidelines for proceeding from a verbal statement to typical null and alternative hypotheses Errors in Hypothesis Testing Whenever we reject a null hypothesis, there is a chance that we have made a mistake — i.e., that we have rejected a true statement Rejecting a true null hypothesis is referred to as a Type I error, and our probability of making such an error is represented by the Greek letter alpha (␣) This probability, which is referred to as the significance level of the test, is of primary concern in hypothesis testing On the other hand, we can also make the mistake of failing to reject a false null hypothesis — this is a Type II error Our probability of making it is represented by the Greek letter beta () Naturally, if we either fail to reject a true null hypothesis or reject a false null hypothesis, we’ve acted correctly The probability of rejecting a false null hypothesis is called the power of the test, and it will be discussed in Section 10.7 The four possibilities are shown in Table 10.2 (page 314) In hypothesis testing, there is a necessary trade-off between Type I and Type II errors: For a given sample size, reducing the probability of a Type I error increases the probability of a Type II error, and vice versa The only sure way to avoid accepting false claims is to never accept any claims Likewise, the only sure way to avoid rejecting true claims is to never reject any claims Of course, each of these extreme approaches is impractical, and we must usually compromise by accepting a reasonable risk of committing either type of error Categories of verbal statements and typical null and alternative hypotheses for each 314 Part 4: Hypothesis Testing TABLE 10.2 A summary of the possibilities for mistakes and correct decisions in hypothesis testing The probability of incorrectly rejecting a true null hypothesis is ␣, the significance level The probability that the test will correctly reject a false null hypothesis is (1 Ϫ ), the power of the test THE NULL HYPOTHESIS (H0) IS REALLY TRUE “Do not reject H0” FALSE Correct decision Incorrect decision (Type II error) Probability of making this error is  Incorrect decision (Type I error) Probability of making this error is ␣, the significance level Correct decision Probability (1 Ϫ ) is the power of the test Hypothesis tests says “Reject H0” exercises 10.1 What is the difference between a null hypothesis and an alternative hypothesis? Is the null hypothesis always the same as the verbal claim or assertion that led to the test? Why or why not? 10.2 For each of the following pairs of null and alterna- tive hypotheses, determine whether the pair would be appropriate for a hypothesis test If a pair is deemed inappropriate, explain why a H0: Ն 10, H1: Ͻ 10 b H0: ϭ 30, H1: 30 c H0: Ͼ 90, H1: Յ 90 d H0: Յ 75, H1: Յ 85 e H0: x– Ն 15, H1: x– Ͻ 15 f H0: x– ϭ 58, H1: x– 58 10.3 For each of the following pairs of null and alterna- tive hypotheses, determine whether the pair would be appropriate for a hypothesis test If a pair is deemed inappropriate, explain why a H0: Ն 0.30, H1: Ͻ 0.35 b H0: ϭ 0.72, H1: 0.72 c H0: Յ 0.25, H1: Ͼ 0.25 d H0: Ն 0.48, H1: Ͼ 0.48 e H0: Յ 0.70, H1: Ͼ 0.70 f H0: p Ն 0.65, H1: p Ͻ 0.65 10.4 The president of a company that manufactures central home air conditioning units has told an investigative reporter that at least 85% of its homeowner customers claim to be “completely satisfied” with the overall purchase experience If the reporter were to subject the president’s statement to statistical scrutiny by questioning a sample of the company’s residential customers, would the test be one-tail or two-tail? What would be the appropriate null and alternative hypotheses? 10.5 On CNN and other news networks, guests often express their opinions in rather strong, persuasive, and sometimes frightening terms For example, a scientist who strongly believes that global warming is taking place will warn us of the dire consequences (such as rising sea levels, coastal flooding, and global climate change) she foresees if we not take her arguments seriously If the scientist is correct, and the world does not take her seriously, would this be a Type I error or a Type II error? Briefly explain your reasoning 10.6 Many law enforcement agencies use voice-stress analysis to help determine whether persons under interrogation are lying If the sound frequency of a person’s voice changes when asked a question, the presumption is that the person is being untruthful For this situation, state the null and alternative hypotheses in verbal terms, then identify what would constitute a Type I error and a Type II error in this situation 10.7 Following a major earthquake, the city engineer must determine whether the stadium is structurally sound for an upcoming athletic event If the null hypothesis is “the stadium is structurally sound,” and the alternative hypothesis is “the stadium is not structurally sound,” which type of error (Type I or Type II) would the engineer least like to commit? 10.8 A state representative is reported as saying that about 10% of reported auto thefts involve owners whose cars have not really been stolen, but who are trying to defraud their insurance company What null and alternative Chapter 10: Hypothesis Tests Involving a Sample Mean or Proportion hypotheses would be appropriate in evaluating the statement made by this legislator? 10.9 In response to the assertion made in Exercise 10.8, suppose an insurance company executive were to claim the percentage of fraudulent auto theft reports to be “no more than 10%.” What null and alternative hypotheses would be appropriate in evaluating the executive’s statement? 10.10 For each of the following statements, formulate appropriate null and alternative hypotheses Indicate whether the appropriate test will be one-tail or two-tail, then sketch a diagram that shows the approximate location of the “rejection” region(s) for the test a “The average college student spends no more than $300 per semester at the university’s bookstore.” b “The average adult drinks 1.5 cups of coffee per day.” c “The average SAT score for entering freshmen is at least 1200.” d “The average employee put in 3.5 hours of overtime last week.” 315 10.11 In administering a “field sobriety” test to suspected drunks, officers may ask a person to walk in a straight line or close his eyes and touch his nose Define the Type I and Type II errors in terms of this setting Speculate on physiological variables (besides the drinking of alcoholic beverages) that might contribute to the chance of each type of error 10.12 In the judicial system, the defense attorney argues for the null hypothesis that the defendant is innocent In general, what would be the result if judges instructed juries to a never make a Type I error? b never make a Type II error? c compromise between Type I and Type II errors? 10.13 Regarding the testing of pharmaceutical companies’ claims that their drugs are safe, a U.S Food and Drug Administration official has said that it’s “better to turn down 1000 good drugs than to approve one that’s unsafe.” If the null hypothesis is H0: “The drug is not harmful,” what type of error does the official appear to favor? HYPOTHESIS TESTING: BASIC PROCEDURES There are several basic steps in hypothesis testing They are briefly presented here and will be further explained through examples that follow Formulate the null and alternative hypotheses As described in the preceding section, the null hypothesis asserts that a population parameter is equal to, no more than, or no less than some exact value, and it is evaluated in the face of numerical evidence An appropriate alternative hypothesis covers other possible values for the parameter Select the significance level If we end up rejecting the null hypothesis, there’s a chance that we’re wrong in doing so—i.e., that we’ve made a Type I error The significance level is the maximum probability that we’ll make such a mistake In Figure 10.1, the significance level is represented by the shaded area(s) beneath each curve For two-tail tests, the level of significance is the sum of both tail areas In conducting a hypothesis test, we can choose any significance level we desire In practice, however, levels of 0.10, 0.05, and 0.01 tend to be most common—in other words, if we reject a null hypothesis, the maximum chance of our being wrong would be 10%, 5%, or 1%, respectively This significance level will be used to later identify the critical value(s) Select the test statistic and calculate its value For the tests of this chapter, the test statistic will be either z or t, corresponding to the normal and t distributions, respectively Figure 10.2 (page 316) shows how the test statistic is selected An important consideration in tests involving a sample mean is whether the population standard deviation () is known As Figure 10.2 indicates, the z-test (normal distribution and test statistic, z) will be used for hypothesis tests involving a sample proportion 10.2 316 Part 4: Hypothesis Testing FIGURE 10.2 An overview of the process of selecting a test statistic for single-sample hypothesis testing Key assumptions are reviewed in the figure notes Hypothesis test, one population Population mean, m Population proportion, p Is np ≥ and n(1 – p) ≥ 5? s known s unknown Is the population truly or approximately normally distributed? Is the population truly or approximately normally distributed? No No Is n ≥ 30? Is n ≥ 30? Yes Yes Yes z-test, with test statistic x – m0 z = ––––– sx where s sx = ––– √n and m0 is from H0 Section 10.3 Note No Use distribution-free test No Convert to underlying binomial distribution Yes t-test, with test statistic x – m0 t = ––––– sx df = n – s sx = ––– √n and m0 is from H0 z-test, with test statistic p – p0 z = ––––– sp where p0(1 – p0) sp = ––––––––– n and p0 is from H0 Section 10.5 Note Section 10.6 Note √ z distribution: If the population is not normally distributed, n should be Ն30 for the central limit theorem to apply The population is usually not known 2The t distribution: For an unknown , and when the population is approximately normally distributed, the t-test is appropriate regardless of the sample size As n increases, the normality assumption becomes less important If n Ͻ 30 and the population is not approximately normal, nonparametric testing (e.g., the sign test for central tendency, in Chapter 14) may be applied The t-test is “robust” in terms of not being adversely affected by slight departures from the population normality assumption 3When n Ն and n(1 Ϫ ) Ն 5, the normal distribution is considered to be a good approximation to the binomial distribution If this condition is not met, the exact probabilities must be derived from the binomial distribution Most practical business settings involving proportions satisfy this condition, and the normal approximation is used in this chapter 1The Chapter 10: Hypothesis Tests Involving a Sample Mean or Proportion 317 Identify critical value(s) for the test statistic and state the decision rule The critical value(s) will bound rejection and nonrejection regions for the null hypothesis, H0 Such regions are shown in Figure 10.1 They are determined from the significance level selected in step In a one-tail test, there will be one critical value since H0 can be rejected by an extreme result in just one direction Two-tail tests will require two critical values since H0 can be rejected by an extreme result in either direction If the null hypothesis were really true, there would still be some probability (the significance level, ␣) that the test statistic would be so extreme as to fall into a rejection region The rejection and nonrejection regions can be stated as a decision rule specifying the conclusion to be reached for a given outcome of the test (e.g., “Reject H0 if z Ͼ 1.645, otherwise not reject”) Compare calculated and critical values and reach a conclusion about the null hypothesis Depending on the calculated value of the test statistic, it will fall into either a rejection region or the nonrejection region If the calculated value is in a rejection region, the null hypothesis will be rejected Otherwise, the null hypothesis cannot be rejected Failure to reject a null hypothesis does not constitute proof that it is true, but rather that we are unable to reject it at the level of significance being used for the test Make the related business decision After rejecting or failing to reject the null hypothesis, the results are applied to the business decision situation that precipitated the test in the first place For example, Jackson T Backus may decide to return the entire shipment of beets to his distributor exercises 10.14 A researcher wants to carry out a hypothesis test involving the mean for a sample of size n ϭ 18 She does not know the true value of the population standard deviation, but is reasonably sure that the underlying population is approximately normally distributed Should she use a z-test or a t-test in carrying out the analysis? Why? 10.15 A research firm claims that 62% of women in the 40–49 age group save in a 401(k) or individual retirement account If we wished to test whether this percentage could be the same for women in this age group living in New York City and selected a random sample of 300 such individuals from New York, what would be the null and alternative hypotheses? Would the test be a z-test or a t-test? Why? 10.16 In hypothesis testing, what is meant by the decision rule? What role does it play in the hypothesis-testing procedure? 10.17 A manufacturer informs a customer’s design engi- neers that the mean tensile strength of its rivets is at least 3000 pounds A test is set up to measure the tensile strength of a sample of rivets, with the null and alternative hypotheses, H0: Ն 3000 and H1: Ͻ 3000 For each of the following individuals, indicate whether the person would tend to prefer a numerically very high (e.g., ␣ ϭ 0.20) or a numerically very low (e.g., ␣ ϭ 0.0001) level of significance to be specified for the test a The marketing director for a major competitor of the rivet manufacturer b The rivet manufacturer’s advertising agency, which has already made the “at least 3000 pounds” claim in national ads 10.18 It has been claimed that no more than 5% of the units coming off an assembly line are defective Formulate a null hypothesis and an alternative hypothesis for this situation Will the test be one-tail or two-tail? Why? If the test is one-tail, will it be left-tail or right-tail? Why? 318 Part 4: Hypothesis Testing 10.3 TESTING A MEAN, POPULATION STANDARD DEVIATION KNOWN Situations can occur where the population mean is unknown but past experience has provided us with a trustworthy value for the population standard deviation Although this possibility is more likely in an industrial production setting, it can sometimes apply to employees, consumers, or other nonmechanical entities In addition to the assumption that is known, the procedure of this section assumes either (1) that the sample size is large (n Ն 30), or (2) that, if n Ͻ 30, the underlying population is normally distributed These assumptions are summarized in Figure 10.2 If the sample size is large, the central limit theorem assures us that the distribution of sample means will be approximately normally distributed, regardless of the shape of the underlying distribution The larger the sample size, the better this approximation becomes Because it is based on the normal distribution, the test is known as the z-test, and the test statistic is as follows: Test statistic, z-test for a sample mean: zϭ N O T E x Ϫ 0 x where x ϭ standard error for the sample mean, ϭ /͙n x ϭ sample mean 0 ϭ hypothesized population mean n ϭ sample size The symbol 0 is the value of that is assumed for purposes of the hypothesis test Two-Tail Testing of a Mean, Known example Two-Tail Test When a robot welder is in adjustment, its mean time to perform its task is 1.3250 minutes Past experience has found the standard deviation of the cycle time to be 0.0396 minutes An incorrect mean operating time can disrupt the efficiency of other activities along the production line For a recent random sample of 80 jobs, the mean cycle time for the welder was 1.3229 minutes The underlying data are in file CX10WELD Does the machine appear to be in need of adjustment? SOLUTION Formulate the Null and Alternative Hypotheses H0: H1: ϭ 1.3250 minutes 1.3250 minutes The machine is in adjustment The machine is out of adjustment In this test, we are concerned that the machine might be running at a mean speed that is either too fast or too slow Accordingly, the null hypothesis could be 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 40 60 120 ؕ (continued) 10 12 v1 ؍df, numerator 15 F(a, n 1, n 2) F a = 0.025 20 24 30 40 60 120 ؕ 647.8 799.5 864.2 899.6 921.8 937.1 948.2 956.7 963.3 968.6 976.7 984.9 993.1 997.2 1001 1006 1010 1014 1018 38.51 39.00 39.17 39.25 39.30 39.33 39.36 39.37 39.39 39.40 39.41 39.43 39.45 39.46 39.46 39.47 39.48 39.49 39.50 17.44 16.04 15.44 15.10 14.88 14.73 14.62 14.54 14.47 14.42 14.34 14.25 14.17 14.12 14.08 14.04 13.99 13.95 13.90 12.22 10.65 9.98 9.60 9.36 9.20 9.07 8.98 8.90 8.84 8.75 8.66 8.56 8.51 8.46 8.41 8.36 8.31 8.26 10.01 8.43 7.76 7.39 7.15 6.98 6.85 6.76 6.68 6.62 6.52 6.43 6.33 6.28 6.23 6.18 6.12 6.07 6.02 8.81 7.26 6.60 6.23 5.99 5.82 5.70 5.60 5.52 5.46 5.37 5.27 5.17 5.12 5.07 5.01 4.96 4.90 4.85 8.07 6.54 5.89 5.52 5.29 5.12 4.99 4.90 4.82 4.76 4.67 4.57 4.47 4.42 4.36 4.31 4.25 4.20 4.14 7.57 6.06 5.42 5.05 4.82 4.65 4.53 4.43 4.36 4.30 4.20 4.10 4.00 3.95 3.89 3.84 3.78 3.73 3.67 7.21 5.71 5.08 4.72 4.48 4.32 4.20 4.10 4.03 3.96 3.87 3.77 3.67 3.61 3.56 3.51 3.45 3.39 3.33 6.94 5.46 4.83 4.47 4.24 4.07 3.95 3.85 3.78 3.72 3.62 3.52 3.42 3.37 3.31 3.26 3.20 3.14 3.08 6.72 5.26 4.63 4.28 4.04 3.88 3.76 3.66 3.59 3.53 3.43 3.33 3.23 3.17 3.12 3.06 3.00 2.94 2.88 6.55 5.10 4.47 4.12 3.89 3.73 3.61 3.51 3.44 3.37 3.28 3.18 3.07 3.02 2.96 2.91 2.85 2.79 2.72 6.41 4.97 4.35 4.00 3.77 3.60 3.48 3.39 3.31 3.25 3.15 3.05 2.95 2.89 2.84 2.78 2.72 2.66 2.60 6.30 4.86 4.24 3.89 3.66 3.50 3.38 3.29 3.21 3.15 3.05 2.95 2.84 2.79 2.73 2.67 2.61 2.55 2.49 6.20 4.77 4.15 3.80 3.58 3.41 3.29 3.20 3.12 3.06 2.96 2.86 2.76 2.70 2.64 2.59 2.52 2.46 2.40 6.12 4.69 4.08 3.73 3.50 3.34 3.22 3.12 3.05 2.99 2.89 2.79 2.68 2.63 2.57 2.51 2.45 2.38 2.32 6.04 4.62 4.01 3.66 3.44 3.28 3.16 3.06 2.98 2.92 2.82 2.72 2.62 2.56 2.50 2.44 2.38 2.32 2.25 5.98 4.56 3.95 3.61 3.38 3.22 3.10 3.01 2.93 2.87 2.77 2.67 2.56 2.50 2.44 2.38 2.32 2.26 2.19 5.92 4.51 3.90 3.56 3.33 3.17 3.05 2.96 2.88 2.82 2.72 2.62 2.51 2.45 2.39 2.33 2.27 2.20 2.13 5.87 4.46 3.86 3.51 3.29 3.13 3.01 2.91 2.84 2.77 2.68 2.57 2.46 2.41 2.35 2.29 2.22 2.16 2.09 5.83 4.42 3.82 3.48 3.25 3.09 2.97 2.87 2.80 2.73 2.64 2.53 2.42 2.37 2.31 2.25 2.18 2.11 2.04 5.79 4.38 3.78 3.44 3.22 3.05 2.93 2.84 2.76 2.70 2.60 2.50 2.39 2.33 2.27 2.21 2.14 2.08 2.00 5.75 4.35 3.75 3.41 3.18 3.02 2.90 2.81 2.73 2.67 2.57 2.47 2.36 2.30 2.24 2.18 2.11 2.04 1.97 5.72 4.32 3.72 3.38 3.15 2.99 2.87 2.78 2.70 2.64 2.54 2.44 2.33 2.27 2.21 2.15 2.08 2.01 1.94 5.69 4.29 3.69 3.35 3.13 2.97 2.85 2.75 2.68 2.61 2.51 2.41 2.30 2.24 2.18 2.12 2.05 1.98 1.91 5.66 4.27 3.67 3.33 3.10 2.94 2.82 2.73 2.65 2.59 2.49 2.39 2.28 2.22 2.16 2.09 2.03 1.95 1.88 5.63 4.24 3.65 3.31 3.08 2.92 2.80 2.71 2.63 2.57 2.47 2.36 2.25 2.19 2.13 2.07 2.00 1.93 1.85 5.61 4.22 3.63 3.29 3.06 2.90 2.78 2.69 2.61 2.55 2.45 2.34 2.23 2.17 2.11 2.05 1.98 1.91 1.83 5.59 4.20 3.61 3.27 3.04 2.88 2.76 2.67 2.59 2.53 2.43 2.32 2.21 2.15 2.09 2.03 1.96 1.89 1.81 5.57 4.18 3.59 3.25 3.03 2.87 2.75 2.65 2.57 2.51 2.41 2.31 2.20 2.14 2.07 2.01 1.94 1.87 1.79 5.42 4.05 3.46 3.13 2.90 2.74 2.62 2.53 2.45 2.39 2.29 2.18 2.07 2.01 1.94 1.88 1.80 1.72 1.64 5.29 3.93 3.34 3.01 2.79 2.63 2.51 2.41 2.33 2.27 2.17 2.06 1.94 1.88 1.82 1.74 1.67 1.58 1.48 5.15 3.80 3.23 2.89 2.67 2.52 2.39 2.30 2.22 2.16 2.05 1.94 1.82 1.76 1.69 1.61 1.53 1.43 1.31 5.02 3.69 3.12 2.79 2.57 2.41 2.29 2.19 2.11 2.05 1.94 1.83 1.71 1.64 1.57 1.48 1.39 1.27 1.00 v2 ؍df, denominator TABLE A.6 10 12 v1 ؍df, numerator F(a, n 1, n 2) F a = 0.01 15 20 24 30 40 60 120 ؕ 4052 4999.5 5403 5625 5764 5859 5928 5982 6022 6056 6106 6157 6209 6235 6261 6287 6313 6339 6366 98.50 99.00 99.17 99.25 99.30 99.33 99.36 99.37 99.39 99.40 99.42 99.43 99.45 99.46 99.47 99.47 99.48 99.49 99.50 34.12 30.82 29.46 28.71 28.24 27.91 27.67 27.49 27.35 27.23 27.05 26.87 26.69 26.60 26.50 26.41 26.32 26.22 26.13 21.20 18.00 16.69 15.98 15.52 15.21 14.98 14.80 14.66 14.55 14.37 14.20 14.02 13.93 13.84 13.75 13.65 13.56 13.46 16.26 13.27 12.06 11.39 10.97 10.67 10.46 10.29 10.16 10.05 9.89 9.72 9.55 9.47 9.38 9.29 9.20 9.11 9.02 13.75 10.92 9.78 9.15 8.75 8.47 8.26 8.10 7.98 7.87 7.72 7.56 7.40 7.31 7.23 7.14 7.06 6.97 6.88 12.25 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72 6.62 6.47 6.31 6.16 6.07 5.99 5.91 5.82 5.74 5.65 11.26 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91 5.81 5.67 5.52 5.36 5.28 5.20 5.12 5.03 4.95 4.86 10.56 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 5.26 5.11 4.96 4.81 4.73 4.65 4.57 4.48 4.40 4.31 10.04 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94 4.85 4.71 4.56 4.41 4.33 4.25 4.17 4.08 4.00 3.91 9.65 7.21 6.22 5.67 5.32 5.07 4.89 4.74 4.63 4.54 4.40 4.25 4.10 4.02 3.94 3.86 3.78 3.69 3.60 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39 4.30 4.16 4.01 3.86 3.78 3.70 3.62 3.54 3.45 3.36 9.07 6.70 5.74 5.21 4.86 4.62 4.44 4.30 4.19 4.10 3.96 3.82 3.66 3.59 3.51 3.43 3.34 3.25 3.17 8.86 6.51 5.56 5.04 4.69 4.46 4.28 4.14 4.03 3.94 3.80 3.66 3.51 3.43 3.35 3.27 3.18 3.09 3.00 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89 3.80 3.67 3.52 3.37 3.29 3.21 3.13 3.05 2.96 2.87 8.53 6.23 5.29 4.77 4.44 4.20 4.03 3.89 3.78 3.69 3.55 3.41 3.26 3.18 3.10 3.02 2.93 2.84 2.75 8.40 6.11 5.18 4.67 4.34 4.10 3.93 3.79 3.68 3.59 3.46 3.31 3.16 3.08 3.00 2.92 2.83 2.75 2.65 8.29 6.01 5.09 4.58 4.25 4.01 3.84 3.71 3.60 3.51 3.37 3.23 3.08 3.00 2.92 2.84 2.75 2.66 2.57 8.18 5.93 5.01 4.50 4.17 3.94 3.77 3.63 3.52 3.43 3.30 3.15 3.00 2.92 2.84 2.76 2.67 2.58 2.49 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46 3.37 3.23 3.09 2.94 2.86 2.78 2.69 2.61 2.52 2.42 8.02 5.78 4.87 4.37 4.04 3.81 3.64 3.51 3.40 3.31 3.17 3.03 2.88 2.80 2.72 2.64 2.55 2.46 2.36 7.95 5.72 4.82 4.31 3.99 3.76 3.59 3.45 3.35 3.26 3.12 2.98 2.83 2.75 2.67 2.58 2.50 2.40 2.31 7.88 5.66 4.76 4.26 3.94 3.71 3.54 3.41 3.30 3.21 3.07 2.93 2.78 2.70 2.62 2.54 2.45 2.35 2.26 7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26 3.17 3.03 2.89 2.74 2.66 2.58 2.49 2.40 2.31 2.21 7.77 5.57 4.68 4.18 3.85 3.63 3.46 3.32 3.22 3.13 2.99 2.85 2.70 2.62 2.54 2.45 2.36 2.27 2.17 7.72 5.53 4.64 4.14 3.82 3.59 3.42 3.29 3.18 3.09 2.96 2.81 2.66 2.58 2.50 2.42 2.33 2.23 2.13 7.68 5.49 4.60 4.11 3.78 3.56 3.39 3.26 3.15 3.06 2.93 2.78 2.63 2.55 2.47 2.38 2.29 2.20 2.10 7.64 5.45 4.57 4.07 3.75 3.53 3.36 3.23 3.12 3.03 2.90 2.75 2.60 2.52 2.44 2.35 2.26 2.17 2.06 7.60 5.42 4.54 4.04 3.73 3.50 3.33 3.20 3.09 3.00 2.87 2.73 2.57 2.49 2.41 2.33 2.23 2.14 2.03 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07 2.98 2.84 2.70 2.55 2.47 2.39 2.30 2.21 2.11 2.01 7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89 2.80 2.66 2.52 2.37 2.29 2.20 2.11 2.02 1.92 1.80 7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72 2.63 2.50 2.35 2.20 2.12 2.03 1.94 1.84 1.73 1.60 6.85 4.79 3.95 3.48 3.17 2.96 2.79 2.66 2.56 2.47 2.34 2.19 2.03 1.95 1.86 1.76 1.66 1.53 1.38 6.63 4.61 3.78 3.32 3.02 2.80 2.64 2.51 2.41 2.32 2.18 2.04 1.88 1.79 1.70 1.59 1.47 1.32 1.00 (continued) Source: Standard Mathematical Tables, 26th ed., William H Beyer (ed.), CRC Press, Inc., Boca Raton, FL, 1983 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 40 60 120 ؕ v2 ؍df, denominator TABLE A.6 826 Appendix A: Statistical Tables TABLE A.7 The Chi-Square Distribution e.g., for a right-tail test with a = 0.01 and d.f = 4, chi-square is 13.277 x2 For ␣ ؍Right-Tail Area of d.f 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 40 50 60 70 80 90 100 0.99 0.00016 0.0201 0.115 0.297 0.554 0.872 1.239 1.646 2.088 2.558 3.053 3.571 4.107 4.660 5.229 5.812 6.408 7.015 7.633 8.260 8.897 9.542 10.916 10.856 11.524 12.198 12.879 13.565 14.256 14.953 22.164 29.707 37.485 45.442 53.540 61.754 70.065 0.975 0.95 0.90 0.00098 0.0506 0.216 0.484 0.831 1.237 1.690 2.180 2.700 3.247 3.816 4.404 5.009 5.629 6.262 6.908 7.564 8.231 8.907 9.591 10.283 10.982 11.689 12.401 13.120 13.844 14.573 15.308 16.047 16.791 24.433 32.357 40.482 48.758 57.153 65.647 74.222 0.00039 0.103 0.352 0.711 1.145 1.635 2.167 2.733 3.325 3.940 4.575 5.226 5.892 6.571 7.261 7.962 8.672 9.390 10.117 10.851 11.591 12.338 13.091 13.848 14.611 15.379 16.151 16.928 17.708 18.493 26.509 34.764 43.188 51.739 60.391 69.126 77.930 0.0158 0.211 0.584 1.064 1.610 2.204 2.833 3.490 4.168 4.865 5.578 6.304 7.042 7.790 8.547 9.312 10.085 10.865 11.651 12.443 13.240 14.042 14.848 15.659 16.473 17.292 18.114 18.939 19.768 20.599 29.051 37.689 46.459 55.329 64.278 73.291 82.358 0.10 0.05 0.025 2.706 3.841 5.024 4.605 5.991 7.378 6.251 7.815 9.348 7.779 9.488 11.143 9.236 11.070 12.833 10.645 12.592 14.449 12.017 14.067 16.013 13.362 15.507 17.535 14.684 16.919 19.023 15.987 18.307 20.483 17.275 19.675 21.920 18.549 21.026 23.337 19.812 22.362 24.736 21.064 23.685 26.119 22.307 24.996 27.488 23.542 26.296 28.845 24.769 27.587 30.191 25.989 28.869 31.526 27.204 30.144 32.852 28.412 31.410 34.170 29.615 32.671 35.479 30.813 33.924 36.781 32.007 35.172 38.076 33.196 36.415 39.364 34.382 37.652 40.647 35.563 38.885 41.923 36.741 40.113 43.195 37.916 41.337 44.461 39.087 42.557 45.722 40.256 43.773 46.979 51.805 55.759 59.342 63.167 67.505 71.420 74.397 79.082 83.298 85.527 90.531 95.023 96.578 101.88 106.63 107.57 113.15 118.14 118.50 124.34 129.56 Source: Chi-square values generated by Minitab, then rounded as shown 0.01 6.635 9.210 11.345 13.277 15.086 16.812 18.475 20.090 21.666 23.209 24.725 26.217 27.688 29.141 30.578 32.000 33.409 34.805 36.191 37.566 38.932 40.290 41.638 42.980 44.314 45.642 46.963 48.278 49.588 50.892 63.691 76.154 88.381 100.42 112.33 124.12 135.81 Appendix A: Statistical Tables Two-Tail Test: One-Tail Test: n؍4 10 11 12 13 14 15 16 17 18 19 20 ␣ ؍0.20 ␣ ؍0.10 1, 3, 4, 6, 9, 11, 15, 18, 22, 27, 32, 37, 43, 49, 56, 63, 70, 12 17 22 27 34 40 48 56 64 73 83 93 104 115 127 140 827 ␣ ؍0.10 ␣ ؍0.05 0, 1, 3, 4, 6, 9, 11, 14, 18, 22, 26, 31, 36, 42, 48, 54, 61, 10 14 18 24 30 36 44 52 60 69 79 89 100 111 123 136 149 ␣ ؍0.05 ␣ ؍0.025 ␣ ؍0.02 ␣ ؍0.01 0, 10 0, 15 1, 20 3, 25 4, 32 6, 39 9, 46 11, 55 14, 64 18, 73 22, 83 26, 94 30, 106 35, 118 41, 130 47, 143 53, 157 0, 10 0, 15 0, 21 1, 27 2, 34 4, 41 6, 49 8, 58 10, 68 13, 78 16, 89 20, 100 24, 112 28, 125 33, 138 38, 152 44, 166 ␣ ؍0.01 ␣ ؍0.005 0, 0, 0, 0, 1, 2, 4, 6, 8, 10, 13, 16, 20, 24, 28, 33, 38, 10 15 21 28 35 43 51 60 70 81 92 104 116 129 143 157 172 Source: Adapted from Roger C Pfaffenberger and James H Patterson, Statistical Methods for Business and Economics (Homewood, Ill.: Richard D Irwin, Inc., 1987), p 110, and R L McCornack, “Extended Tables of the Wilcoxon Matched Pairs Signed Rank Statistics,” Journal of the American Statistical Association 60 (1965), 864–871 TABLE A.8 Wilcoxon Signed Rank Test, Lower and Upper Critical Values 828 Appendix A: Statistical Tables TABLE A.9 Wilcoxon Rank Sum Test, Lower and Upper Critical Values ␣ ؍0.025 (one-tail) or ␣ ؍0.05 (two-tail) n1: n2: 10 5, 6, 6, 7, 7, 8, 8, 9, 16 18 21 23 26 28 31 33 6, 11, 12, 12, 13, 14, 15, 16, 18 25 28 32 35 38 41 44 6, 12, 18, 19, 20, 21, 22, 24, 21 28 37 41 45 49 53 56 7, 12, 19, 26, 28, 29, 31, 32, 23 32 41 52 56 61 65 70 7, 13, 20, 28, 37, 39, 41, 43, 26 35 45 56 68 73 78 83 8, 14, 21, 29, 39, 49, 51, 54, 28 38 49 61 73 87 93 98 10 8, 31 15, 41 22, 53 31, 65 41, 78 51, 93 63, 108 66, 114 9, 33 16, 44 24, 56 32, 70 43, 83 54, 98 66, 114 79, 131 (Note: n1 is the smaller of the two samples — i.e., n1 Յ n2.) ␣ ؍0.05 (one-tail) or ␣ ؍0.10 (two-tail) n1: n2: 6, 15 7, 17 7, 20 8, 22 9, 24 9, 27 10, 29 10 11, 31 7, 12, 13, 14, 15, 16, 17, 18, 17 24 27 30 33 36 39 42 7, 13, 19, 20, 22, 24, 25, 26, 20 27 36 40 43 46 50 54 8, 14, 20, 28, 30, 32, 33, 35, 22 30 40 50 54 58 63 67 9, 15, 22, 30, 39, 41, 43, 46, 24 33 43 54 66 71 76 80 9, 16, 24, 32, 41, 52, 54, 57, 27 36 46 58 71 84 90 95 10, 17, 25, 33, 43, 54, 66, 69, 29 39 50 63 76 90 105 111 10 11, 18, 26, 35, 46, 57, 69, 83, (Note: n1 is the smaller of the two samples — i.e., n1 Յ n2.) Source: F Wilcoxon and R A Wilcox, Some Approximate Statistical Procedures (New York: American Cyanamid Company, 1964), pp 20–23 31 42 54 67 80 95 111 127 Appendix A: Statistical Tables 829 TABLE A.10 Significance Level ␣ Sample Size n 10 11 12 13 14 15 16 17 18 19 20 25 30 Over 30 0.20 0.15 0.10 0.05 0.01 300 285 265 247 233 223 215 206 199 190 183 177 173 169 166 163 160 142 131 736 319 299 277 258 244 233 224 217 212 202 194 187 182 177 173 169 166 147 136 352 315 294 276 261 249 239 230 223 214 207 201 195 189 184 179 174 158 144 381 337 319 300 285 271 258 249 242 234 227 220 213 206 200 195 190 173 161 417 405 364 348 331 311 294 284 275 268 261 257 250 245 239 235 231 200 187 768 805 886 1.031 ͙n ͙n ͙n ͙n ͙n Source: From H W Lilliefors, “On the Kolmogorov–Smirnov Test for Normality with Mean and Variance Unknown,” Journal of the American Statistical Association, 62 (1967), pp 399–402 As adapted by Conover, Practical Nonparametric Statistics (New York: John Wiley, 1971), p 398 Critical Values of D for the Kolmogorov–Smirnov Test of Normality 830 Appendix A: Statistical Tables TABLE A.11 Critical Values of Spearman’s Rank Correlation Coefficient, rs, One-Tail Test (For a two-tail test, the listed values correspond to the 2␣ level of significance.) n ␣ ϭ 0.05 ␣ ϭ 0.025 ␣ ϭ 0.01 ␣ ϭ 0.005 10 0.900 0.829 0.714 0.643 0.600 0.564 — 0.886 0.786 0.738 0.683 0.648 — 0.943 0.893 0.833 0.783 0.745 — — — 0.881 0.833 0.794 11 12 13 14 15 0.523 0.497 0.475 0.457 0.441 0.623 0.591 0.566 0.545 0.525 0.736 0.703 0.673 0.646 0.623 0.818 0.780 0.745 0.716 0.689 16 17 18 19 20 0.425 0.412 0.399 0.388 0.377 0.507 0.490 0.476 0.462 0.450 0.601 0.582 0.564 0.549 0.534 0.666 0.645 0.625 0.608 0.591 21 22 23 24 25 0.368 0.359 0.351 0.343 0.336 0.438 0.428 0.418 0.409 0.400 0.521 0.508 0.496 0.485 0.475 0.576 0.562 0.549 0.537 0.526 26 27 28 29 30 0.329 0.323 0.317 0.311 0.305 0.392 0.385 0.377 0.370 0.364 0.465 0.456 0.448 0.440 0.432 0.515 0.505 0.496 0.487 0.478 Source: E G Olds, “Distribution of Sums of Squares of Rank Differences for Small Samples,” Annals of Mathematical Statistics (1938) Appendix A: Statistical Tables 831 TABLE A.12 n ؍number of observations k ؍number of independent variables k؍1 k؍2 k؍3 k؍4 Values of dL and dU for the Durbin–Watson Test for ␣ ϭ 0.05 k؍5 n dL dU dL dU dL dU dL dU dL dU 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 45 50 55 60 65 70 75 80 85 90 95 100 1.08 1.10 1.13 1.16 1.18 1.20 1.22 1.24 1.26 1.27 1.29 1.30 1.32 1.33 1.34 1.35 1.36 1.37 1.38 1.39 1.40 1.41 1.42 1.43 1.43 1.44 1.48 1.50 1.53 1.55 1.57 1.58 1.60 1.61 1.62 1.63 1.64 1.65 1.36 1.37 1.38 1.39 1.40 1.41 1.42 1.43 1.44 1.45 1.45 1.46 1.47 1.48 1.48 1.49 1.50 1.50 1.51 1.51 1.52 1.52 1.53 1.54 1.54 1.54 1.57 1.59 1.60 1.62 1.63 1.64 1.65 1.66 1.67 1.68 1.69 1.69 0.95 0.98 1.02 1.05 1.08 1.10 1.13 1.15 1.17 1.19 1.21 1.22 1.24 1.26 1.27 1.28 1.30 1.31 1.32 1.33 1.34 1.35 1.36 1.37 1.38 1.39 1.43 1.46 1.49 1.51 1.54 1.55 1.57 1.59 1.60 1.61 1.62 1.63 1.54 1.54 1.54 1.53 1.53 1.54 1.54 1.54 1.54 1.55 1.55 1.55 1.56 1.56 1.56 1.57 1.57 1.57 1.58 1.58 1.58 1.59 1.59 1.59 1.60 1.60 1.62 1.63 1.64 1.65 1.66 1.67 1.68 1.69 1.70 1.70 1.71 1.72 0.82 0.86 0.90 0.93 0.97 1.00 1.03 1.05 1.08 1.10 1.12 1.14 1.16 1.18 1.20 1.21 1.23 1.24 1.26 1.27 1.28 1.29 1.31 1.32 1.33 1.34 1.38 1.42 1.45 1.48 1.50 1.52 1.54 1.56 1.57 1.59 1.60 1.61 1.75 1.73 1.71 1.69 1.68 1.68 1.67 1.66 1.66 1.66 1.66 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.66 1.66 1.66 1.66 1.67 1.67 1.68 1.69 1.70 1.70 1.71 1.72 1.72 1.73 1.73 1.74 0.69 0.74 0.78 0.82 0.86 0.90 0.93 0.96 0.99 1.01 1.04 1.06 1.08 1.10 1.12 1.14 1.16 1.18 1.19 1.21 1.22 1.24 1.25 1.26 1.27 1.29 1.34 1.38 1.41 1.44 1.47 1.49 1.51 1.53 1.55 1.57 1.58 1.59 1.97 1.93 1.90 1.87 1.85 1.83 1.81 1.80 1.79 1.78 1.77 1.76 1.76 1.75 1.74 1.74 1.74 1.73 1.73 1.73 1.73 1.73 1.72 1.72 1.72 1.72 1.72 1.72 1.72 1.73 1.73 1.74 1.74 1.74 1.75 1.75 1.75 1.76 0.56 0.62 0.67 0.71 0.75 0.79 0.83 0.86 0.90 0.93 0.95 0.98 1.01 1.03 1.05 1.07 1.09 1.11 1.13 1.15 1.16 1.18 1.19 1.21 1.22 1.23 1.29 1.34 1.38 1.41 1.44 1.46 1.49 1.51 1.52 1.54 1.56 1.57 2.21 2.15 2.10 2.06 2.02 1.99 1.96 1.94 1.92 1.90 1.89 1.88 1.86 1.85 1.84 1.83 1.83 1.82 1.81 1.81 1.80 1.80 1.80 1.79 1.79 1.79 1.78 1.77 1.77 1.77 1.77 1.77 1.77 1.77 1.77 1.78 1.78 1.78 Source: From J Durbin and G.S Watson, “Testing for Serial Correlation in Least Squares Regression,” Biometrika, 38 June, 1951 832 TABLE A.12 (continued) Values of dL and dU for the Durbin-Watson Test for ␣ ϭ 0.025 Appendix A: Statistical Tables n ؍number of observations k ؍number of independent variables k؍1 k؍2 k؍3 k؍4 k؍5 n dL dU dL dU dL dU dL dU dL dU 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 45 50 55 60 65 70 75 80 85 90 95 100 0.95 0.98 1.01 1.03 1.06 1.08 1.10 1.12 1.14 1.16 1.18 1.19 1.21 1.22 1.24 1.25 1.26 1.27 1.28 1.29 1.30 1.31 1.32 1.33 1.34 1.35 1.39 1.42 1.45 1.47 1.49 1.51 1.53 1.54 1.56 1.57 1.58 1.59 1.23 1.24 1.25 1.26 1.28 1.28 1.30 1.31 1.32 1.33 1.34 1.35 1.36 1.37 1.38 1.38 1.39 1.40 1.41 1.41 1.42 1.43 1.43 1.44 1.44 1.45 1.48 1.50 1.52 1.54 1.55 1.57 1.58 1.59 1.60 1.61 1.62 1.63 0.83 0.86 0.90 0.93 0.96 0.99 1.01 1.04 1.06 1.08 1.10 1.12 1.13 1.15 1.17 1.18 1.20 1.21 1.22 1.24 1.25 1.26 1.27 1.28 1.29 1.30 1.34 1.38 1.41 1.44 1.46 1.48 1.50 1.52 1.53 1.55 1.56 1.57 1.40 1.40 1.40 1.40 1.41 1.41 1.41 1.42 1.42 1.43 1.43 1.44 1.44 1.45 1.45 1.46 1.47 1.47 1.48 1.48 1.48 1.49 1.49 1.50 1.50 1.51 1.53 1.54 1.56 1.57 1.59 1.60 1.61 1.62 1.63 1.64 1.65 1.65 0.71 0.75 0.79 0.82 0.86 0.89 0.92 0.95 0.97 1.00 1.02 1.04 1.06 1.08 1.10 1.12 1.13 1.15 1.16 1.17 1.19 1.20 1.21 1.23 1.24 1.25 1.30 1.34 1.37 1.40 1.43 1.45 1.47 1.49 1.51 1.53 1.54 1.55 1.61 1.59 1.58 1.56 1.55 1.55 1.54 1.54 1.54 1.54 1.54 1.54 1.54 1.54 1.54 1.54 1.55 1.55 1.55 1.55 1.55 1.56 1.56 1.56 1.56 1.57 1.58 1.59 1.60 1.61 1.62 1.63 1.64 1.65 1.65 1.66 1.67 1.67 0.59 0.64 0.68 0.72 0.76 0.79 0.83 0.86 0.89 0.91 0.94 0.96 0.99 1.01 1.03 1.05 1.07 1.08 1.10 1.12 1.13 1.15 1.16 1.17 1.19 1.20 1.25 1.30 1.33 1.37 1.40 1.42 1.45 1.47 1.49 1.50 1.52 1.53 1.84 1.80 1.77 1.74 1.72 1.70 1.69 1.68 1.67 1.66 1.65 1.65 1.64 1.64 1.63 1.63 1.63 1.63 1.63 1.63 1.63 1.63 1.62 1.62 1.63 1.63 1.63 1.64 1.64 1.65 1.66 1.66 1.67 1.67 1.68 1.69 1.69 1.70 0.48 0.53 0.57 0.62 0.66 0.70 0.73 0.77 0.80 0.83 0.86 0.88 0.91 0.93 0.96 0.98 1.00 1.02 1.04 1.06 1.07 1.09 1.10 1.12 1.13 1.15 1.21 1.26 1.30 1.33 1.36 1.39 1.42 1.44 1.46 1.48 1.50 1.51 2.09 2.03 1.98 1.93 1.90 1.87 1.84 1.82 1.80 1.79 1.77 1.76 1.75 1.74 1.73 1.73 1.72 1.71 1.71 1.70 1.70 1.70 1.70 1.70 1.69 1.69 1.69 1.69 1.69 1.69 1.69 1.70 1.70 1.70 1.71 1.71 1.71 1.72 Appendix A: Statistical Tables 833 TABLE A.12 n ؍number of observations k ؍number of independent variables k؍1 k؍2 k؍3 k؍4 (continued) Values of dL and dU for the Durbin-Watson Test for ␣ ϭ 0.01 k؍5 n dL dU dL dU dL dU dL dU dL dU 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 45 50 55 60 65 70 75 80 85 90 95 100 0.81 0.84 0.87 0.90 0.93 0.95 0.97 1.00 1.02 1.04 1.05 1.07 1.09 1.10 1.12 1.13 1.15 1.16 1.17 1.18 1.19 1.21 1.22 1.23 1.24 1.25 1.29 1.32 1.36 1.38 1.41 1.43 1.45 1.47 1.48 1.50 1.51 1.52 1.07 1.09 1.10 1.12 1.13 1.15 1.16 1.17 1.19 1.20 1.21 1.22 1.23 1.24 1.25 1.26 1.27 1.28 1.29 1.30 1.31 1.32 1.32 1.33 1.34 1.34 1.38 1.40 1.43 1.45 1.47 1.49 1.50 1.52 1.53 1.54 1.55 1.56 0.70 0.74 0.77 0.80 0.83 0.86 0.89 0.91 0.94 0.96 0.98 1.00 1.02 1.04 1.05 1.07 1.08 1.10 1.11 1.13 1.14 1.15 1.16 1.18 1.19 1.20 1.24 1.28 1.32 1.35 1.38 1.40 1.42 1.44 1.46 1.47 1.49 1.50 1.25 1.25 1.25 1.26 1.26 1.27 1.27 1.28 1.29 1.30 1.30 1.31 1.32 1.32 1.33 1.34 1.34 1.35 1.36 1.36 1.37 1.38 1.38 1.39 1.39 1.40 1.42 1.45 1.47 1.48 1.50 1.52 1.53 1.54 1.55 1.56 1.57 1.58 0.59 0.63 0.67 0.71 0.74 0.77 0.80 0.83 0.86 0.88 0.90 0.93 0.95 0.97 0.99 1.01 1.02 1.04 1.05 1.07 1.08 1.10 1.11 1.12 1.14 1.15 1.20 1.24 1.28 1.32 1.35 1.37 1.39 1.42 1.43 1.45 1.47 1.48 1.46 1.44 1.43 1.42 1.41 1.41 1.41 1.40 1.40 1.41 1.41 1.41 1.41 1.41 1.42 1.42 1.42 1.43 1.43 1.43 1.44 1.44 1.45 1.45 1.45 1.46 1.48 1.49 1.51 1.52 1.53 1.55 1.56 1.57 1.58 1.59 1.60 1.60 0.49 0.53 0.57 0.61 0.65 0.68 0.72 0.75 0.77 0.80 0.83 0.85 0.88 0.90 0.92 0.94 0.96 0.98 1.00 1.01 1.03 1.04 1.06 1.07 1.09 1.10 1.16 1.20 1.25 1.28 1.31 1.34 1.37 1.39 1.41 1.43 1.45 1.46 1.70 1.66 1.63 1.60 1.58 1.57 1.55 1.54 1.53 1.53 1.52 1.52 1.51 1.51 1.51 1.51 1.51 1.51 1.51 1.51 1.51 1.51 1.51 1.52 1.52 1.52 1.53 1.54 1.55 1.56 1.57 1.58 1.59 1.60 1.60 1.61 1.62 1.63 0.39 0.44 0.48 0.52 0.56 0.60 0.63 0.66 0.70 0.72 0.75 0.78 0.81 0.83 0.85 0.88 0.90 0.92 0.94 0.95 0.97 0.99 1.00 1.02 1.03 1.05 1.11 1.16 1.21 1.25 1.28 1.31 1.34 1.36 1.39 1.41 1.42 1.44 1.96 1.90 1.85 1.80 1.77 1.74 1.71 1.69 1.67 1.66 1.65 1.64 1.63 1.62 1.61 1.61 1.60 1.60 1.59 1.59 1.59 1.59 1.59 1.58 1.58 1.58 1.58 1.59 1.59 1.60 1.61 1.61 1.62 1.62 1.63 1.64 1.64 1.65 834 Appendix A: Statistical Tables TABLE A.13 Factors for Determining 3-Sigma Control Limits, Mean and Range Control Charts Number of Observations in Each Sample, n Factor for Determining Control Limits, Control Chart for the Mean, A2 Factors for Determining Control Limits, Control Chart for the Range D3 D4 1.880 1.023 0.729 0.577 0 0 3.267 2.575 2.282 2.115 10 0.483 0.419 0.373 0.337 0.308 0.076 0.136 0.184 0.223 2.004 1.924 1.864 1.816 1.777 11 12 13 14 15 0.285 0.266 0.249 0.235 0.223 0.256 0.284 0.308 0.329 0.348 1.744 1.716 1.692 1.671 1.652 Source: From E S Pearson, “The Percentage Limits for the Distribution of Range in Samples from a Normal Population,” Biometrika 24 (1932): 416 Appendix B: Selected Answers Answers to Selected Odd-Numbered Exercises Chapter 2.3 a 40.88 million b lower limit is 35, upper limit is under 45 c 10 years d 40 years 2.5 a 172.21 thousand b lower limit is 45, upper limit is under 55 c 10 years d 50 years 2.45 a y ϭ 323.53 ϩ 0.8112x b yes, direct 2.55 a 1290 cities b 2729 cities c 242 cities; 8.65% d 175,000 2.67 a yes b lawjudge ϭ 0.381 ϩ 0.90*acad 2.69 a yes b Canada ϭ 1.0894 ϩ 1.754*U.S 2.71 c highest: mechanical and electrical 1; lowest: mechanical and electrical Chapter 3.1 x ϭ $20.45, median ϭ $20.495 3.3 x ϭ 57.05 visitors, median ϭ 57.50, mode ϭ 63 3.5 x ϭ 8.35, median ϭ 8.60 3.7 x ϭ 37.7, median ϭ 35.0 3.9 83.2 3.11 a mean b median 3.13 x ϭ 398.86, median ϭ 396.75 3.15 females: x ϭ 40.62, median ϭ 39.00; males: x ϭ 41.08, median ϭ 41.50 3.17 range ϭ 39, MAD ϭ 19.71 visitors, s ϭ 12.40, s2 ϭ 153.84 3.19 a ϭ29.11million,medianϭ22.4million,rangeϭ 44.5 million, midrange ϭ 39.45 million b MAD ϭ 12.99 million c ϭ 15.44 million, 2 ϭ 238.396 3.21 a x ϭ 27.2 mpg, median ϭ 28 mpg, range ϭ 30 mpg, midrange ϭ 25 mpg b MAD ϭ 5.6 mpg c s ϭ 8.052, s2 ϭ 64.84 3.23 Q1 ϭ 7, Q2 ϭ 18, Q3 ϭ 30, interquartile range ϭ 23, quartile deviation ϭ 11.5 3.25 a x ϭ 23.35, median ϭ 22.86, range ϭ 26.13, midrange ϭ 26.465 b MAD ϭ 4.316 c s ϭ 5.49, s2 ϭ 30.14 3.27 a x ϭ 90.771, median ϭ 91.4, range ϭ 40.4, midrange ϭ 92.0 b MAD ϭ 6.70 c s ϭ 8.36, s2 ϭ 69.90 3.29 a 84% b 88.89% c 96% 3.31 90%, yes 3.33 a 68% b 2.5% c 84% d 13.5% 3.35 Barnsboro 3.39 a approximately 8.40 and 0.912 3.41 approximately 21.75 and 15.65 3.43 r ϭ Ϫ0.8 3.45 lawjudge ϭ 0.3805 ϩ 0.9*acad, r2 ϭ 0.9454, r ϭ 0.972 3.47 generic ϭ Ϫ3.5238 ϩ 0.377*brand, r2 ϭ 0.7447, r ϭ 0.863 3.49 $4.69 3.51 x ϭ 117.75, median ϭ 117.5, no 3.53 no, positively skewed 3.55 a x becomes 3.1 lbs., s remains 0.5 lbs b 4.1 lbs 3.57 a x ϭ 2.10, median ϭ 2.115, range ϭ 0.46, midrange ϭ 2.08 b MAD ϭ 0.137 c s ϭ 0.156, s2 ϭ 0.02446 3.59 120, 116, 124, 20, symmetrical 3.61 greater variation for data in exercise 3.60 Chapter 4.3 a secondary b secondary 4.11 response error 4.13 telephone 4.39 systematic 4.43 a sample b sample c sample d census 4.69 a 56 b 144 Chapter 5.1 subjective 5.7 decrease for men, increase for women 5.9 0.36 5.13 a b 196 c 147 d 372 5.19 0.945, 0.035 5.21 0.35, 0.65 5.23 0.71 5.25 b 0.13 c 0.35 d 0.83 5.27 0.91 5.29 a 0.947 b 0.710 c 0.351 d 0.992 5.31 no, no 5.33 a 0.184 b 0.123 c 0.034 d 0.659 5.35 0.851 5.37 a 0.019 b 0.745 c 0.236 5.39 0.175 5.43 0.154 5.45 a 0.51 b 0.85 c 0.77 5.47 a 0.5 b 0.455 5.49 a 0.1 b 0.233 5.51 256 5.53 36 5.57 20 5.59 7.9019*1027 5.61 a 0.000000005 b 0.0625 c 0.062500005 d 5.63 a 0.001 b classical c yes 5.65 a b c 5.67 a 0.005 b 0.855 c 0.14 d 0.995 5.69 a 0.85 b 0.278 c 0.046 5.71 a 0.7 b 0.9 c 0.667 5.73 a 0.974 b 4.147*10Ϫ12 c 10 5.75 22 5.77 0.296, 0.963 5.79 a 0.9 b 0.429 5.81 120 5.83 1320 5.85 10,000 Chapter 6.3 a discrete b continuous c discrete d continuous 6.5 ϭ 7.2, ϭ 2.18, 2 ϭ 4.76 6.7 ϭ 1.9, ϭ 1.14, 2 ϭ 1.29 6.9 $10 6.11 E(x) ϭ 3.1 6.13 $0.50 6.15 $37.9 million, $41.9 million, $40.3 million, minor 6.19 a 3.6 b 1.59 c 0.2397 d 0.9133 e 0.5075 6.21 a 0.0102 b 0.2304 c 0.3456 d 0.0778 6.23 a 0.9703 b 0.0294 c 0.0003 d 0.0000 6.25 a 0.9375 b 0.6875 6.27 0.1250, 0.1250, 0.3750 6.29 0.3439 6.31 a 0.00187 b 0.0185 c yes 6.35 a 0.1000 b 0.6000 c 0.3000 6.37 0.9000 6.39 a 0.1353 b 0.2707 c 0.8571 d 0.5940 6.41 a 5.3 b 0.1239 c 0.1740 d 0.9106 e 0.8784 6.43 a 1.9 b 0.2842 c 0.0812 d 0.9966 e 0.5531 6.45 0.0047, should consider slight decrease 6.47 a 10.0 b 0.0901 c 0.1251 d 0.7916 e 0.7311 6.49 0.2275, $45,000 6.51 not merely a coincidence 6.55 0.7174 6.57 no 6.61 87.95% 6.63 0.3684 6.65 0.1837 6.67 0.6065 6.69 0.1323 6.71 0.1904 6.73 0.0839 6.75 0.9872 6.77 0.7748 6.79 0.0012, not believable 6.81 0.1667, 0.2853, 0.0327 835 836 Appendix B: Answers to Selected Odd-Numbered Exercises Chapter high b numerically low 10.23 no; not reject H0 10.25 a 0.0618 b 0.1515 c 0.0672 10.27 reject H0; p-value ϭ 0.021 10.29 not reject H0; p-value ϭ 0.035 10.31 no; not reject H0; p-value ϭ 0.052 10.33 p-value ϭ 0.035; not reject H0 10.35 a not reject H0 b reject H0 c not reject H0 d reject H0 10.37 (1.995, 2.055); not reject H0; same 10.41 not reject H0 10.43 no; not reject H0 10.45 no; reject H0 10.47 yes; reject H0 10.49 not reject H0 10.51 yes; reject H0 10.53 (86.29, 92.71); yes; yes 10.55 (35.657, 37.943); no; yes 10.57 0.03; reject H0 10.61 reject H0 10.63 reject H0 10.65 no; reject H0 10.67 yes; reject H0; p-value ϭ 0.005 10.69 reject H0; p-value ϭ 0.023 10.71 yes; p-value ϭ 0.118 10.73 no; not reject H0; p-value ϭ 0.079 10.75 (0.735, 0.805); yes; yes 10.77 (0.402, 0.518); yes; yes 10.81 alpha unchanged, beta decreases 10.83 0.9871 10.87 a 2.33 b 0.036 10.93 reject H0, has increased 10.95 a 0.05 level: reject H0 b 95% CI: (179,278; 198,322) 10.97 yes 10.99 reject H0 10.101 reject H0; statement not credible 10.103 not reject H0; claim is credible 10.105 a 0.9505 b 0.8212 c 0.5753 d 0.2946 e 0.1020 10.107 b e.g., 0.005 c e.g., 0.02 d 0.024 10.109 yes; reject H0; p-value ϭ 0.007 10.111 no; not reject H0; p-value ϭ 0.059 10.113 not reject H0; p-value ϭ 0.282 7.9 a 0.5 b approx 0.683 c approx 0.8415 d e approx 0.4775 f approx 0.9775 7.11 a 0.5 b approx 0.955 c approx 0.683 d approx 0.9985 7.13 a approx 0.1585 b approx 0.683 c approx 0.0225 d approx 0.819 7.15 a approx 0.0015 b approx 0.1585 7.17 a Ϫ0.67, 0.67 b Ϫ1.28, 1.28 c Ϫ0.74, 0.74 7.19 a Ϫ2.00 b Ϫ0.80 c 0.00 d 3.40 e 4.60 7.21 a 0.3643 b 0.1357 c 0.9115 7.23 a 0.8730 b 0.2272 c 0.1091 7.25 a 0.52 b 2.05 c 0.10 d 0.52 7.27 a 0.0874 b 0.8790 c 0.8413 7.29 a 0.4207 b 0.3050 c 0.3446 7.31 $343,350 7.33 a 0.4207 b $9545 7.35 3.22 minutes 7.37 70,500 7.41 a 10.0, 2.739 b 0.1098, 0.2823, 0.3900, 0.1003 7.43 a 12.0, 2.19 b 0.1797 c 0.1820 d 0.8729 7.45 0.6198 7.47 0.6037 7.53 a 0.5488 b 0.4493 c 0.3679 d 0.3012 7.55 0.2865 7.57 0.619, 0.383, 8779.7 hours 7.61 0.1151 7.63 0.3911 7.65 no 7.67 0.0808 7.69 0.1970 7.71 0.8023 7.73 not credible 7.75 0.6604 7.77 0.4584, 0.2101 7.79 0.4307 7.81 0.0918, 0.0446 7.83 20.615 7.85 a 0.4628 b 0.7772 c 12,900 7.87 0.4724 Chapter 8.3 0.18, 0.15, 1000 8.7 a 25 b 10 c d 3.162 8.9 a 0.8413 b 0.6853 c 0.9938 8.11 0.8849, 0.8823 8.13 concern is justified 8.15 0.1056 8.17 0.1949 8.19 0.9881 8.21 ϭ 0.265, p Յ 0.079 8.23 a 0.9616 b 0.5222 c 0.9616 8.25 a 0.426 b 0.35 c 0.035 d 0.9850 8.27 0.9938 8.29 0.0000; district’s claim is much more credible 8.35 0.1170 8.37 0.8023 8.43 a 0.0183 b not credible 8.45 b 0.038 8.47 a 2.40 b 0.0082 c no 8.51 a 0.10 b 0.0062 c no 8.53 0.0643 8.55 0.0000 8.57 0.0228 Chapter 11 9.7 a 2.625 b 4.554 9.11 a 0.45 b (0.419, 0.481) c 95%, 0.95 9.15 90%: (236.997, 243.003); 95%: (236.422, 243.578) 9.17 90%: (82.92, 87.08); 95%: (82.52, 87.48) 9.19 (149.006, 150.994) 9.23 (99.897, 100.027); yes 9.27 1.313 9.29 a 1.292 b Ϫ1.988 c 2.371 9.31 95%: (46.338, 54.362); 99%: (44.865, 55.835) 9.33 a (22.16, 27.84) b yes 9.35 (28.556, 29.707) 9.37 (1520.96, 1549.04) 9.39 (10.15, 13.55); no 9.41 (21.92, 22.68); yes 9.43 (0.429, 0.491) 9.45 (0.153, 0.247); not credible 9.47 (0.37, 0.43) 9.49 (0.5794, 0.6246) 9.51 a (0.527, 0.613) 9.53 (0.925, 0.975) 9.55 (0.566, 0.634); no; may not succeed 9.57 (0.311, 0.489) 9.61 92 9.63 1068 9.65 863 9.67 601 9.71 95%: (0.522, 0.578); 99%: (0.513, 0.587) 9.73 458 9.75 92 9.77 462 9.79 1226 9.81 0.01 9.83 (135.211, 138.789) 9.85 246 9.87 90%: (44.91, 49.96); 95%: (44.39, 50.47) 9.89 411 9.91 $1200 9.93 95%: (0.347, 0.433); 99%: (0.334, 0.446) 9.95 4161 9.97 722 9.99 1068 9.101 1469 9.103 3745 9.105 90%: (0.375, 0.425); 95%: (0.370, 0.430) 9.107 (0.018, 0.062) 9.109 ($24.33, $25.67) 9.111 (64.719, 68.301); funds are not endangered 11.3 not reject H0, Ͼ 0.20 11.5 yes; reject H0 11.7 not reject H0; between 0.05 and 0.10; (Ϫ8.872, 0.472) 11.9 reject H0; between 0.10 and 0.05; 90% CI: (Ϫ1.318, Ϫ0.082) 11.11 claim could be valid; reject H0; between 0.01 and 0.025 11.13 yes; p-value ϭ 0.0000 11.15 (0.639, 3.961); no; yes 11.17 (Ϫ3.33, 31.20); yes; yes 11.19 not reject H0 11.21 Yes, not reject H0; Ͼ 0.20; 95% CI: (Ϫ23.43, 6.43) 11.23 not reject H0; 0.1400; (Ϫ117.18, 17.18); yes; yes 11.25 not reject H0; (Ϫ2.03, Ϫ1.37); yes; yes 11.27 reject H0; 0.000 11.29 not reject H0; 0.095 11.33 reject H0 11.35 Yes, not reject H0; 0.2538; 95% CI: (Ϫ23.10, 6.10) 11.37 not reject H0; 0.1967; (Ϫ7.77, 37.77) 11.39 reject H0; 0.0122 11.41 reject H0; 0.03 11.43 not reject H0; 0.12 11.45 dependent 11.47 reject H0 11.49 reject H0; between 0.005 and 0.01 11.51 not reject H0; 0.170; (Ϫ4.663, 1.774) 11.53 reject H0 11.55 no; not reject H0 11.57 not reject H0 11.59 yes; reject H0; 0.0607; (Ϫ0.150, Ϫ0.010) 11.61 reject H0; 0.0583 11.63 not reject H0; 0.077; (Ϫ0.032, 0.172) 11.65 not reject H0; 0.1866; (Ϫ0.055, 0.005) 11.67 not reject H0; no, no 11.69 yes; not reject H0; no 11.71 yes; reject H0 11.73 reject H0 11.75 suspicion confirmed; reject H0; between 0.025 and 0.05 11.77 not reject H0; (Ϫ505.32, 65.32) 11.79 not reject H0; Ͼ 0.10 11.81 yes; reject H0; 0.0141 11.83 reject H0; table: Ͻ 0.01; computer: 0.0075; (0.13, 6.67) 11.85 not reject H0; (Ϫ1.13, 0.13) 11.87 yes; reject H0 11.89 no; not reject H0 11.91 yes; reject H0; Ͻ 0.005 11.93 yes; reject H0; 0.001 11.95 yes; not reject H0; no 11.97 not reject H0; 0.140; (Ϫ1.95, 12.96) 11.99 not supported; not reject H0; 0.135 11.101 not reject H0; 0.414; (Ϫ0.1204, 0.0404) Chapter 10 Chapter 12 Chapter 10.3 a no b yes c yes d no e yes f no 10.5 type I 10.7 type II 10.13 type I 10.17 a numerically 12.7 designed 12.21 not reject H0; Ͼ 0.05 12.23 not reject H0; Ͼ 0.05 12.25 not reject H0 12.27 b reject Appendix B: Answers to Selected Odd-Numbered Exercises H0 12.29 b not reject H0 c 1: (48.914, 61.086); 2: (42.714, 54.886); 3: (39.514, 51.686) 12.35 1: (16.271, 19.009); 2: (13.901, 16.899); 3: (15.690, 18.060) 12.47 reject H0; between 0.025 and 0.05 12.49 not reject H0; not reject H0 12.51 not reject H0; not reject H0 12.53 reject H0 12.55 reject H0 12.59 yes; reject H0 12.69 Factor A, not reject H0; Factor B, reject H0; Interaction, reject H0 12.71 Factor A, not reject H0; Factor B, reject H0; Interaction, not reject H0 12.73 Factor A, reject H0; Factor B, reject H0; Interaction, reject H0 12.75 Assembly, not reject H0; Music, reject H0; Interaction, reject H0; Method 1, (35.846, 41.654); Method 2, (34.096, 39.904); Classical, (29.096, 34.904); Rock, (40.846, 46.654) 12.77 Bag, not reject H0; Dress, reject H0; Interaction, reject H0; Carry, (24.798, 30.536); Don’t Carry, (27.298, 33.036); Sloppy, (41.736, 48.764); Casual, (17.736, 24.764); Dressy, (16.736, 23.764) 12.79 Keyboard, reject H0; Wordpack, not reject H0; Interaction, reject H0 12.83 randomized block 12.85 independent: faceplate design; dependent: time to complete task; designed 12.87 no, randomized block procedure should be used 12.89 reject H0 12.91 not reject H0 12.93 reject H0 12.95 not reject H0 12.97 reject H0 12.99 Style, not reject H0; Darkness, reject H0; Interaction, not reject H0; Style 1, (25.693, 30.307); Style 2, (23.693, 28.307); Light, (27.425, 33.075); Medium, (22.175, 27.825); Dark, (22.925, 28.575) 12.101 Position, reject H0; Display, not reject H0; Interaction, reject H0; Position 1, (42.684, 47.982); Position 2, (47.129, 52.427); Display 1, (42.089, 48.577); Display 2, (43.923, 50.411); Display 3, (46.923, 53.411) 837 14.33 no; reject H0; 0.058 14.37 yes; not reject H0; between 0.10 and 0.90 14.39 no; not reject H0; 0.393 14.43 not reject H0 14.45 reject H0 14.49 not reject H0 14.51 not reject H0 14.53 not reject H0 14.55 0.83, yes 14.57 0.343, no 14.59 reject H0; 0.0287 14.61 no, 0.736 14.63 not reject H0 14.65 yes; reject H0; 0.01 14.67 no; not reject H0; between 0.05 and 0.10 14.69 yes; not reject H0; between 0.025 and 0.05 14.71 yes; not reject H0; between 0.025 and 0.05 14.75 claim not credible, reject H0; between 0.05 and 0.025 14.77 0.868, yes 14.79 no; not reject H0; 0.343 14.81 not reject H0; 0.210 14.83 they can tell the difference; reject H0; 0.016 14.85 not reject H0; 0.486 Chapter 15 13.7 a 3.490 b 13.362 c 2.733 d 15.507 e 17.535 f 2.180 13.9 a A ϭ 8.547, B ϭ 22.307 b A ϭ 7.261, B ϭ 24.996 c A ϭ 6.262, B ϭ 27.488 d A ϭ 5.229, B ϭ 30.578 13.13 13.15 a b 12.592 c not reject H0 13.17 yes; reject H0 13.19 not reject H0 13.21 not reject H0 13.23 not reject H0 13.25 reject H0 13.29 a b c 12 d e 12 f 13.31 a 12.833 b 15.507 c 16.812 d 7.779 13.33 no, reject H0; between 0.025 and 0.01 13.35 no; reject H0; between 0.05 and 0.025 13.37 no; reject H0; less than 0.01 13.39 yes; not reject H0 13.41 yes; reject H0 13.43 not reject H0 13.45 not reject H0 13.47 reject H0 13.49 yes; not reject H0 13.53 (15.096, 43.011) 13.55 (2.620, 9.664) 13.57 (0.1346, 0.2791) 13.59 not reject H0 13.61 not reject H0 ; between 0.05 and 0.025 13.63 (0.0329, 0.0775) 13.65 not reject H0 13.67 reject H0 13.69 yes; not reject H0 13.71 not reject H0 ; between 0.10 and 0.05 13.73 no; not reject H0; between 0.10 and 0.05 13.75 (0.02001, 0.05694) 13.77 reject H0 13.79 not reject H0 13.81 reject H0 13.83 reject H0 13.85 not reject H0 13.87 probably not; reject H0 15.5 second 15.7 second 15.9 a Shares ϭ 44.3 ϩ 38.756*Years b 431.9 15.11 Totgross ϭ 106.28 ϩ 1.474*Twowks; 253.7 15.13 Acres ϭ 349,550 Ϫ 7851*Rain; 208,223 acres 15.21 a yˆ ϭ 21.701 Ϫ 1.354x b 3.617 c (8.107, 16.339) d (4.647, 14.383) e interval d is wider 15.23 a Rating ϭ 48.59 ϩ 8.327*TD% b 90.22 c 1.765 d (85.228, 95.214) e (88.134, 92.308) 15.25 91.48; (Ϫ33.9, 510.1) 15.27 Revenue ϭ Ϫ1.66*108 ϩ 64,076,803*Dealers; CI: (5.88*109, 6.60*109); PI: (4.92*109, 7.57*109) 15.29 CI: (89,209; 295,832); PI: (Ϫ155,578; 540,619) 15.33 0.81 15.37 a yˆ ϭ 95.797 ϩ 0.09788x b 0.679, 0.461 c 106.56 15.39 a Millsocks ϭ Ϫ8845 ϩ 44.94*Population b 0.904 c 3737.2millionpairs 15.41 rϭ Ϫ0.2791; r2 ϭ 0.0779 15.43 Forgross ϭ Ϫ69.382 ϩ 1.5266*Domgross; r ϭ 0.7356; r2 ϭ 0.5411 15.45 no 15.47 > 0.10 15.49 a reject H0 b reject H0 c (0.023, 0.172) 15.51 a reject H0 b reject H0 c (0.392, 9.608) 15.53 61.2, 58.8 15.55 774.80, 536.01, 238.79; 0.692; not reject H0 15.57 Gallons ϭ 5,921,560.92 ϩ 10.44806*Hours; r ϭ 0.993; r2 ϭ 0.986; no; (8.746, 12.150) 15.59 NetIncome ϭ 59.8006 ϩ 0.0382*Revenue; r ϭ 0.7492; r2 ϭ 0.5612; no; (0.031, 0.045) 15.71 NetIncome ϭ 0.211 ϩ 0.0999*TotRev; $2.009 billion 15.73 a Retired ϭ 33.08 Ϫ 2.893*New b 12.83 c 7.04 15.75 NetIncome ϭ Ϫ0.1821 ϩ 0.0551*Revenue; $0.369 million 15.77 a Fuel ϭ 6.089 ϩ 77.412*Hours b r ϭ 0.995; r2 ϭ 0.99 c 160.9 15.79 a Fuel ϭ 15.084 ϩ 0.0367*Miles b 0.985, 0.971 c 73.131 billion gallons 15.81 a Rear ϭ 1855.9 Ϫ 0.3057*Front b Ϫ0.104, 0.011 c $1550 15.83 a Strength ϭ 60.02 ϩ 10.507*Temptime b 53.0% c 0.017 d 0.017 e (2.445, 18.569) 15.85 a Rolresis ϭ 9.450 Ϫ 0.08113*psi b 23.9% c 0.029 d 0.029 e (Ϫ0.15290, Ϫ0.00936) 15.87 CI: (8.26, 28.87); PI: (Ϫ6.66, 43.78) 15.89 CI: (9.389, 10.076); PI: (8.844, 10.620) 15.91 a GPA ϭ Ϫ0.6964 ϩ 0.0033282*SAT; 2.965 b 69.5% c CI: (2.527, 3.402); PI: (1.651, 4.278) 15.93 a Pay% ϭ 7.0202 ϩ 1.51597*Rate% b 98.0%; yes c CI: (18.9874, 19.3084); PI: (18.6159, 19.6799) 15.95 a Estp/e ϭ 52.56 Ϫ 0.0959*Revgrow%; 38.17 b 0.2%; no c CI: (Ϫ0.74, 77.07); PI: (Ϫ94.67, 171.01) Chapter 14 Chapter 16 14.7 reject H0; < 0.005 14.9 yes; reject H0; 0.017 14.11 not reject H0; 0.700 14.13 reject H0 14.15 yes, reject H0 14.17 not reject H0; between 0.05 and 0.10 14.19 no; not reject H0; 0.137 14.23 not reject H0 14.25 yes; not reject H0; 0.2443 14.29 reject H0 14.31 no; reject H0 16.9 a 300, 7, 13 b 399 16.11 a yˆ ϭ 10.687 ϩ 2.157x1 ϩ 0.0416x2 c 24.59 16.13 a yˆ ϭ Ϫ127.19 ϩ 7.611x1 ϩ 0.3567x2 c Ϫ17.79 16.15 b 454.42 16.17 a 130.0 b 3.195 c (98.493, 101.507) d (93.259, 106.741) 16.19 a CalcFin ϭ Ϫ26.6 ϩ 0.776*MathPro ϩ 0.0820*SATQ; Chapter 13 838 Appendix B: Answers to Selected Odd-Numbered Exercises 90% CI: (64.01, 73.46) b 90% PI: (59.59, 77.88) 16.21 a (79.587, 88.980) b (75.562, 93.005) 16.27 0.716 16.29 a yes b 1, reject H0; 2, not reject H0 d 1, (0.54, 3.77); 2, (Ϫ0.07, 0.15) 16.31 1, (0.0679, 0.4811); 2, (0.1928, 0.5596); 3, (0.1761, 0.4768) 16.33 a yes; both are significant 16.35 a yˆ ϭ Ϫ40,855,482 ϩ 44,281.6x1 ϩ 152,760.2x2 b yes c 1, reject H0; 2, not reject H0 d 1, (41,229.4, 47,333.8); 2, (Ϫ4,446,472, 4,751,992) 16.45 a yˆ ϭ Ϫ0.5617 ϩ 0.0003550x1 ϩ 0.011248x2 Ϫ 0.02116x3 b 1, (Ϫ0.0011, 0.0018); 2, (Ϫ0.0083, 0.0308); 3, (Ϫ0.0847, 0.0424) c 0.465 16.51 Speed ϭ 67.6 Ϫ 3.21*Occupants Ϫ 6.63*Seatbelt 16.53 a yˆ ϭ 99.865 ϩ 1.236x1 ϩ 0.822x2 b 125.816 lbs c (124.194,127.438) d (124.439,127.193) e 1, (0.92, 1.55); 2, (0.09, 1.56) 16.55 a gpa ϭ Ϫ1.984 ϩ 0.00372*sat ϩ 0.00658*rank b 2.634 c (1.594, 3.674) d (2.365, 2.904) e 1, (0.000345, 0.007093); 2, (Ϫ0.010745, 0.023915) 16.57 $38,699 17.73 RATING ϭ 1.955 ϩ 4.189*YDS/ATT Ϫ 4.1649*INT% ϩ3.3227*TD% ϩ 0.8336*COMP%; 100.00% (rounded) Chapter 17 17.3 negative, negative 17.5 positive, negative, positive 17.7 $Avgrate ϭ 286.094 Ϫ 8.239*%Occup ϩ 0.07709*%Occup2; 49.1% 17.9 0to60 ϭ 26.8119 Ϫ 0.153866*hp ϩ 0.0003083*hp2; 8.396 seconds; yes 17.11 Forgross ϭ 860.8 Ϫ 4.152*Domgross ϩ 0.007689*Domgross2; $430.4 million; yes 17.13 second-order with interaction 17.15 $percall ϭ 61.2 ϩ 25.63*yrs ϩ 6.41*score Ϫ1.82*yrs2 Ϫ 0.058*score2 ϩ 0.29*yrs*score; R2 ϭ 0.949; yes 17.17 a oprev ϭ Ϫ231.2 ϩ 0.129*employs ϩ 0.00565*departs; yes b oprev ϭ 399.2 ϩ 0.0745*employs ϩ 0.00087*departs ϩ 0.00000014*employs*departs; R2 increases from 0.958 to 0.986 17.19 0to60 ϭ 25.4 Ϫ 0.161*hp Ϫ 0.00030*curbwt ϩ 0.000028*hp*curbwt; R2 ϭ 0.734; yes 17.23 two 17.25 550 customers 17.27 price ϭ Ϫ30.77 ϩ 4.975*gb ϩ 54.20*highrpm; $54.20 17.29 productivity ϭ 75.4 ϩ 1.59*yrsexp Ϫ 7.36*metha ϩ 9.73*methb; R2 ϭ 0.741 17.31 yˆ ϭ 0.66(1.38)x 17.33 yˆ ϭ 14.5066(1.026016)x ; R2 ϭ 0.509; $87.57 17.35 log revenue ϭ Ϫ0.1285 ϩ 1.0040 log employs Ϫ 0.1121 log departs; revenue ϭ 0.7439*employs1.0040*departsϪ0.1121; $546.2 million 17.41 yes 17.43 multicollinearity may be present 17.45 multicollinearity may be present 17.49 a x5, x2, x9 b yˆ ϭ 106.85 Ϫ 0.35x5 Ϫ 0.33x2 c 0.05 level: x5, x2, x9; 0.01 level: x2 17.59 Pages ϭ Ϫ53.9 ϩ 313.99*x – 26.335*xSq; Ϫ422.6 pages 17.61 yˆ ϭ 10.705 ϩ 0.974x Ϫ 0.015x2; yes; 83.6% 17.63 appartmp ϭ Ϫ10.4 ϩ 1.06*roomtmp ϩ 0.0925*relhumid; appartmp ϭ 3.90 ϩ 0.847*roomtmp Ϫ 0.194*relhumid ϩ 0.00425*roomtmp*relhumid; R2 increases from 0.982 to 0.994 17.65 productivity ϭ 19.1 ϩ 0.211*backlog ϩ 0.577*female; R2 ϭ 0.676; yes 17.67 log appartmp ϭ Ϫ0.28048 ϩ 1.09898 log roomtmp ϩ 0.054483 log relhumid; appartmp ϭ 0.524228*roomtmp1.09898*relhumid0.054483; 71.3 degrees 17.69 yes; opcost/hr ϭ 697.8 ϩ 1.80*gal/hr; 87.69% 17.71 a yes b final ϭ 14.79 ϩ 0.885*test1; R2 ϭ 0.8568 Chapter 18 18.3 369,600 gallons 18.5 with x ϭ for 2001, Earnings ϭ 14.17 ϩ 0.382x; $17.99 18.7 a subs ϭ Ϫ12.4227 ϩ 15.3458x; 263.8 million b subs ϭ 6.5023 ϩ 7.2351x ϩ 0.6239x2; 338.9 million c quadratic 18.17 the 0.4 curve; the 0.7 curve 18.21 30% 18.23 b I , 74.720; II, 103.978; III, 123.761; IV, 97.540 18.25 J, 100.497; F, 94.334; M, 103.158; A, 103.596; M, 98.062; J, 100.547; J, 98.166; A, 98.385; S, 96.864; O, 108.841; N, 93.196; D, 104.353 18.27 $192.0 thousand, $201.6 thousand 18.29 1213.2; 1541.2 18.31 $61.8023 billion 18.33 36.2742 quadrillion Btu 18.35 189,000 gallons 18.39 quadratic, quadratic 18.41 quadratic, quadratic 18.45 0.58, positive autocorrelation 18.47 a inconclusive b inconclusive 18.49 yt ϭ Ϫ1.51 ϩ 1.119ytϪ1; 288.5 18.51 yt ϭ 525.6 ϩ 0.812ytϪ1; 2895.5; MAD ϭ 100.5 18.53 100 18.55 best: services; worst: mining 18.57 $104,213 18.59 Military ϭ 1.6086 Ϫ 0.0379x; 0.964 million 18.61 Restaurants ϭ Ϫ1408.05 ϩ 1206.50x Ϫ 9.92905x2; 23,249 restaurants 18.63 a AvgBill ϭ 16.8086 ϩ 1.15786x; $41.124 b AvgBill ϭ 16.6486 ϩ 1.26452x Ϫ 0.0133x2; $37.324 c quadratic, quadratic 18.67 I, 95.14; II, 89.69; III, 95.64; IV, 119.53 18.69 I, 182.240; II, 68.707; III, 34.210; IV, 114.843 18.71 I, 94.68; II, 93.54; III, 111.05; IV, 100.73 18.73 I, 72.50; II, 83.11; III, 111.16; IV, 132.77 18.75 I, 71; II, 91; III, 104; IV, 66 18.77 Cost ϭ 38.3268 ϩ 0.794341x ϩ 0.0538258x2; Cost ϭ 37.1427 ϩ 1.38642x; quadratic 18.83 DW statistic ϭ 0.48; positive autocorrelation 18.85 CPIt ϭ 2.1176 ϩ 1.01668*CPItϪ1; MAD ϭ 1.429; 194.17 18.87 5.65% increase Chapter 19 19.9 0, 2000, 2000 shirts 19.11 a maximax b maximax c maximin 19.13 purchase, not purchase, purchase 19.17 make claims, $400 19.19 design C, $5.6 million 19.21 direct, 21.5 minutes, 0.5 minutes 19.25 A or C, $12,800, $12,800 19.27 direct, 0.5 minutes; longer, 9.0 minutes 19.29 287 cod 19.31 3545 programs 19.33 units 19.35 current; Dennis; Dennis 19.37 DiskWorth; ComTranDat; ComTranDat Chapter 20 20.35 0.0026 20.39 in control 20.41 out of control 20.43 out of control 20.45 LCL ϭ 0, UCL ϭ 0.088 20.47 LCL ϭ 0, UCL ϭ 15.72 20.49 centerline ϭ 0.197; LCL ϭ 0.113; UCL ϭ 0.282; in control 20.51 centerline ϭ 4.450; LCL ϭ 0; UCL ϭ 10.779; out of control 20.53 in control 20.55 yes; mean chart failed test #1 at sample 20.57 yes; mean chart failed tests #1 and #5 at sample 20.63 c-chart 20.65 centerline ϭ 0.05225; LCL ϭ 0.00504; UCL ϭ 0.09946; in control 20.67 out of control 20.69 centerline ϭ 0.0988; LCL ϭ 0.0355; UCL ϭ 0.1621; in control