Ebook: Understandable Statistics: Concepts and Methods (10th edition), Part 2


Part 2 of Understandable Statistics: Concepts and Methods (10th edition) covers hypothesis testing, correlation and regression, chi-square and F distributions, and nonparametric statistics.

8 HYPOTHESIS TESTING

8.1 Introduction to Statistical Tests
8.2 Testing the Mean μ
8.3 Testing a Proportion p
8.4 Tests Involving Paired Differences (Dependent Samples)
8.5 Testing μ1 − μ2 and p1 − p2 (Independent Samples)

"Would you tell me, please, which way I ought to go from here?"
"That depends a good deal on where you want to get to," said the Cat.
"I don't much care where—" said Alice.
"Then it doesn't matter which way you go," said the Cat.
—LEWIS CARROLL, Alice's Adventures in Wonderland

Charles Lutwidge Dodgson (1832–1898) was an English mathematician who loved to write children's stories in his free time. The dialogue between Alice and the Cheshire Cat occurs in the masterpiece Alice's Adventures in Wonderland, written by Dodgson under the pen name Lewis Carroll. These lines relate to our study of hypothesis testing. Statistical tests cannot answer all of life's questions. They cannot always tell us "where to go," but after this decision is made on other grounds, they can help us find the best way to get there.

For online student resources, visit the Brase/Brase, Understandable Statistics, 10th edition web site at http://www.cengage.com/statistics/brase

PREVIEW QUESTIONS
Many of life's questions require a yes or no answer. When you must act on incomplete (sample) information, how do you decide whether to accept or reject a proposal? (SECTION 8.1)
What is the P-value of a statistical test? What does this measurement have to do with performance reliability? (SECTION 8.1)
How do you construct statistical tests for μ? Does it make a difference whether σ is known or unknown? (SECTION 8.2)
How do you construct statistical tests for the proportion p of successes in a binomial experiment? (SECTION 8.3)
What are the advantages of pairing data values? How do you construct statistical tests for paired differences? (SECTION 8.4)
How do you construct statistical tests for differences of independent random variables? (SECTION 8.5)

FOCUS PROBLEM: Benford's Law: The Importance of Being Number 1

Benford's Law states that in a wide variety of circumstances, numbers have "1" as their first nonzero digit disproportionately often. Benford's Law applies to such diverse topics as the drainage areas of rivers; properties of chemicals; populations of towns; figures in newspapers, magazines, and government reports; and the half-lives of radioactive atoms!
Specifically, such diverse measurements begin with "1" about 30% of the time, with "2" about 18% of the time, and with "3" about 12.5% of the time. Larger digits occur less often. For example, less than 5% of the numbers in circumstances such as these begin with the digit 9. This is in dramatic contrast to a random sampling situation, in which each of the digits 1 through 9 has an equal chance of appearing. The first nonzero digits of numbers taken from large bodies of numerical records such as tax returns, population studies, government records, and so forth, show the probabilities of occurrence displayed in the following table.

First nonzero digit:  1      2      3      4      5      6      7      8      9
Probability:          0.301  0.176  0.125  0.097  0.079  0.067  0.058  0.051  0.046

More than 100 years ago, the astronomer Simon Newcomb noticed that books of logarithm tables were much dirtier near the fronts of the tables. It seemed that people were more frequently looking up numbers with a low first digit. This was regarded as an odd phenomenon and a strange curiosity. The phenomenon was rediscovered in 1938 by physicist Frank Benford (hence the name Benford's Law). More recently, Ted Hill, a mathematician at the Georgia Institute of Technology, studied situations that might demonstrate Benford's Law. Professor Hill showed that such probability distributions are likely to occur when we have a "distribution of distributions." Put another way, large random collections of random samples tend to follow Benford's Law. This seems to be especially true for samples taken from large government data banks, accounting reports for large corporations, large collections of astronomical observations, and so forth. For more information, see American Scientist, Vol. 86, pp. 358–363, and Chance, American Statistical Association, Vol. 12, No. 3, pp. 27–31.

Can Benford's Law be applied to help solve a real-world problem? Well, one application might be accounting fraud! Suppose the first nonzero digits of the entries in the accounting records of a large corporation (such as Enron or WorldCom) do not follow Benford's Law. Should this set off an accounting alarm for the FBI or the stockholders? How "significant" would this be? Such questions are the subject of statistics. In Section 8.3, you will see how to use sample data to test whether the proportion of first nonzero digits of the entries in a large accounting report follows Benford's Law. Two problems in Section 8.3 relate to Benford's Law and accounting discrepancies. In one problem, you are asked to use sample data to determine if accounting books have been "cooked" by "pumping numbers up" to make the company look more attractive or perhaps to provide a cover for money laundering. In the other problem, you are asked to determine if accounting books have been "cooked" by artificially lowered numbers, perhaps to hide profits from the Internal Revenue Service or to divert company profits to unscrupulous employees.
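The excerpt gives the probabilities but not the formula behind them. They match the logarithmic form in which Benford's Law is usually stated, P(first nonzero digit = d) = log10(1 + 1/d). The short Python sketch below (an illustration of that formula, not code from the text) reproduces the table:

```python
import math

# Benford's Law: P(first nonzero digit = d) = log10(1 + 1/d).
# Printing d = 1..9 reproduces the table above: 0.301, 0.176, ..., 0.046.
for d in range(1, 10):
    print(d, round(math.log10(1 + 1 / d), 3))
```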
SECTION 8.1  Introduction to Statistical Tests

FOCUS POINTS
• Understand the rationale for statistical tests.
• Identify the null and alternate hypotheses in a statistical test.
• Identify right-tailed, left-tailed, and two-tailed tests.
• Use a test statistic to compute a P-value.
• Recognize types of errors, level of significance, and power of a test.
• Understand the meaning and risks of rejecting or not rejecting the null hypothesis.

In Chapter 1, we emphasized the fact that one of a statistician's most important jobs is to draw inferences about populations based on samples taken from the populations. Most statistical inference centers around the parameters of a population (often the mean or the probability of success in a binomial trial). Methods for drawing inferences about parameters are of two types: Either we make decisions concerning the value of the parameter, or we actually estimate the value of the parameter. When we estimate the value (or location) of a parameter, we are using methods of estimation such as those studied in Chapter 7. Decisions concerning the value of a parameter are obtained by hypothesis testing, the topic we shall study in this chapter.

Students often ask which method should be used on a particular problem—that is, should the parameter be estimated, or should we test a hypothesis involving the parameter?
The answer lies in the practical nature of the problem and the questions posed about it. Some people prefer to test theories concerning the parameters. Others prefer to express their inferences as estimates. Both estimation and hypothesis testing are found extensively in the literature of statistical applications.

Stating Hypotheses

Our first step is to establish a working hypothesis about the population parameter in question. This hypothesis is called the null hypothesis, denoted by the symbol H0. The value specified in the null hypothesis is often a historical value, a claim, or a production specification. For instance, if the average height of a professional male basketball player was 6.5 feet 10 years ago, we might use a null hypothesis H0: μ = 6.5 feet for a study involving the average height of this year's professional male basketball players. If television networks claim that the average length of time devoted to commercials in a 60-minute program is 12 minutes, we would use H0: μ = 12 minutes as our null hypothesis in a study regarding the average length of time devoted to commercials. Finally, if a repair shop claims that it should take an average of 25 minutes to install a new muffler on a passenger automobile, we would use H0: μ = 25 minutes as the null hypothesis for a study of how well the repair shop is conforming to specified average times for a muffler installation.

Any hypothesis that differs from the null hypothesis is called an alternate hypothesis. An alternate hypothesis is constructed in such a way that it is the hypothesis to be accepted when the null hypothesis must be rejected. The alternate hypothesis is denoted by the symbol H1. For instance, if we believe the average height of professional male basketball players is taller than it was 10 years ago, we would use an alternate hypothesis H1: μ > 6.5 feet with the null hypothesis H0: μ = 6.5 feet.

Null hypothesis H0: This is the statement that is under investigation or being tested. Usually the null hypothesis represents a statement of "no effect," "no difference," or, put another way, "things haven't changed."

Alternate hypothesis H1: This is the statement you will adopt in the situation in which the evidence (data) is so strong that you reject H0. A statistical test is designed to assess the strength of the evidence (data) against the null hypothesis.

EXAMPLE 1  Null and alternate hypotheses
A car manufacturer advertises that its new subcompact models get 47 miles per gallon (mpg). Let μ be the mean of the mileage distribution for these cars. You assume that the manufacturer will not underrate the car, but you suspect that the mileage might be overrated.
(a) What shall we use for H0?
SOLUTION: We want to see if the manufacturer's claim that μ = 47 mpg can be rejected. Therefore, our null hypothesis is simply that μ = 47 mpg. We denote the null hypothesis as H0: μ = 47 mpg.
(b) What shall we use for H1?
SOLUTION: From experience with this manufacturer, we have every reason to believe that the advertised mileage is too high. If μ is not 47 mpg, we are sure it is less than 47 mpg. Therefore, the alternate hypothesis is H1: μ < 47 mpg.

GUIDED EXERCISE 1  Null and alternate hypotheses
A company manufactures ball bearings for precision machines. The average diameter of a certain type of ball bearing should be 6.0 mm. To check that the average diameter is correct, the company formulates a statistical test.
(a) What should be used for H0? (Hint: What is the company trying to test?)
    If μ is the mean diameter of the ball bearings, the company wants to test whether μ = 6.0 mm. Therefore, H0: μ = 6.0 mm.
(b) What should be used for H1? (Hint: An error either way, too small or too large, would be serious.)
    An error either way could occur, and it would be serious. Therefore, H1: μ ≠ 6.0 mm (μ is either smaller than or larger than 6.0 mm).

COMMENT: NOTATION REGARDING THE NULL HYPOTHESIS
In statistical testing, the null hypothesis H0 always contains the equals symbol. However, in the null hypothesis, some statistical software packages and texts also include the inequality symbol that is opposite that shown in the alternate hypothesis. For instance, if the alternate hypothesis is "μ is less than 3" (μ < 3), then the corresponding null hypothesis is sometimes written as "μ is greater than or equal to 3" (μ ≥ 3). The mathematical construction of a statistical test uses the null hypothesis to assign a specific number (rather than a range of numbers) to the parameter μ in question. The null hypothesis establishes a single fixed value for μ, so we are working with a single distribution having a specific mean. In this case, H0 assigns μ = 3. So, when H1: μ < 3 is the alternate hypothesis, we follow the commonly used convention of writing the null hypothesis simply as H0: μ = 3.

Types of Tests

The null hypothesis H0 always states that the parameter of interest equals a specified value. The alternate hypothesis H1 states that the parameter is less than, greater than, or simply not equal to the same value. We categorize a statistical test as left-tailed, right-tailed, or two-tailed according to the alternate hypothesis.

TYPES OF STATISTICAL TESTS
A statistical test is:
left-tailed if H1 states that the parameter is less than the value claimed in H0;
right-tailed if H1 states that the parameter is greater than the value claimed in H0;
two-tailed if H1 states that the parameter is different from (or not equal to) the value claimed in H0.

TABLE 8-1  The Null and Alternate Hypotheses for Tests of the Mean μ
Null hypothesis (claim about μ or historical value of μ): H0: μ = k
Alternate hypotheses and type of test:
  You believe that μ is less than the value stated in H0:      H1: μ < k  (left-tailed test)
  You believe that μ is more than the value stated in H0:      H1: μ > k  (right-tailed test)
  You believe that μ is different from the value stated in H0: H1: μ ≠ k  (two-tailed test)
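The classification above is mechanical enough to encode. A small Python sketch (my own illustration; the function name and string encoding are not from the text):

```python
# Classify a test of H0: mu = k by the direction of its alternate hypothesis,
# following the box and Table 8-1 above. '<' means H1: mu < k, '>' means
# H1: mu > k, and '!=' means H1: mu != k.
def test_type(h1_relation: str) -> str:
    return {"<": "left-tailed", ">": "right-tailed", "!=": "two-tailed"}[h1_relation]

print(test_type("<"))   # left-tailed  (e.g., H1: mu < 47 mpg in Example 1)
print(test_type("!="))  # two-tailed   (e.g., H1: mu != 6.0 mm in Guided Exercise 1)
```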
In this introduction to statistical tests, we discuss tests involving a population mean μ. However, you should keep an open mind and be aware that the methods outlined apply to testing other parameters as well (e.g., p, σ, μ1 − μ2, p1 − p2, and so on). Table 8-1 shows how tests of the mean μ are categorized.

Hypothesis Tests of μ, Given x Is Normal and σ Is Known

Once you have selected the null and alternate hypotheses, how do you decide which hypothesis is likely to be valid? Data from a simple random sample and the sample test statistic, together with the corresponding sampling distribution of the test statistic, will help you decide. Example 2 leads you through the decision process.

First, a quick review of Section 6.4 is in order. Recall that a population parameter is a numerical descriptive measurement of the entire population. Examples of population parameters are μ, p, and σ. It is important to remember that for a given population, the parameters are fixed values. They do not vary! The null hypothesis H0 makes a statement about a population parameter.

A statistic is a numerical descriptive measurement of a sample. Examples of statistics are x̄, p̂, and s. Statistics usually vary from one sample to the next. The probability distribution of the statistic we are using is called a sampling distribution. For hypothesis testing, we take a simple random sample and compute a sample test statistic corresponding to the parameter in H0. Based on the sampling distribution of the statistic, we can assess how compatible the sample test statistic is with H0.

In this section, we use hypothesis tests about the mean to introduce the concepts and vocabulary of hypothesis testing. In particular, let's suppose that x has a normal distribution with mean μ and standard deviation σ. Then Theorem 6.1 tells us that x̄ has a normal distribution with mean μ and standard deviation σ/√n.

PROCEDURE: Sample test statistic for μ, given x normal and σ known
Requirements: The x distribution is normal with known standard deviation σ. Then x̄ has a normal distribution. The standardized test statistic is

    test statistic = z = (x̄ − μ)/(σ/√n)

where x̄ = mean of a simple random sample, μ = value stated in H0, and n = sample size.
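As a quick computational check of the procedure box, here is a minimal Python version (the function name is mine; this is a sketch, not code from the text):

```python
import math

# Standardized test statistic for a mean, with x normal and sigma known:
# z = (xbar - mu0) / (sigma / sqrt(n)), where mu0 is the value stated in H0.
def z_statistic(xbar: float, mu0: float, sigma: float, n: int) -> float:
    return (xbar - mu0) / (sigma / math.sqrt(n))

# Hypothetical numbers for illustration only:
print(z_statistic(10.2, 10.0, 1.0, 25))  # -> 1.0
```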
EXAMPLE 2  Statistical testing preview
Rosie is an aging sheep dog in Montana who gets regular checkups from her owner, the local veterinarian. Let x be a random variable that represents Rosie's resting heart rate (in beats per minute). From past experience, the vet knows that x has a normal distribution with σ = 12. The vet checked the Merck Veterinary Manual and found that for dogs of this breed, μ = 115 beats per minute. Over the past six weeks, Rosie's heart rate (beats/min) measured

    93  109  110  89  112  117

The sample mean is x̄ = 105.0. The vet is concerned that Rosie's heart rate may be slowing. Do the data indicate that this is the case?

SOLUTION:
(a) Establish the null and alternate hypotheses. If "nothing has changed" from Rosie's earlier life, then her heart rate should be nearly average. This point of view is represented by the null hypothesis H0: μ = 115. However, the vet is concerned about Rosie's heart rate slowing. This point of view is represented by the alternate hypothesis H1: μ < 115.

(b) Are the observed sample data compatible with the null hypothesis? Are the six observations of Rosie's heart rate compatible with the null hypothesis H0: μ = 115? To answer this question, we need to know the probability of obtaining a sample mean of 105.0 or less from a population with true mean μ = 115. If this probability is small, we conclude that H0: μ = 115 is not the case. Rather, H1: μ < 115 and Rosie's heart rate is slowing.

(c) How do we compute the probability in part (b)? Well, you probably guessed it! We use the sampling distribution for x̄ and compute P(x̄ < 105.0). Figure 8-1 shows the x̄ distribution and the corresponding standard normal distribution with the desired probability shaded.

CHECK REQUIREMENTS: Since x has a normal distribution, x̄ will also have a normal distribution for any sample size n and given σ (see Theorem 6.1). Note that using μ = 115 from H0, σ = 12, and n = 6, the sample x̄ = 105.0 converts to

    test statistic = z = (x̄ − μ)/(σ/√n) = (105.0 − 115)/(12/√6) ≈ −2.04

Using the standard normal distribution table, we find that

    P(x̄ < 105.0) = P(z < −2.04) = 0.0207

The area in the left tail that is more extreme than x̄ = 105.0 is called the P-value of the test. In this example, P-value = 0.0207. We will learn more about P-values later.

FIGURE 8-1  Sampling Distribution for x̄ and Corresponding z Distribution

(d) INTERPRETATION: What conclusion can be drawn about Rosie's average heart rate? If H0: μ = 115 is in fact true, the probability of getting a sample mean of x̄ ≤ 105.0 is only about 2%. Because this probability is small, we reject H0: μ = 115 and conclude that H1: μ < 115. Rosie's average heart rate seems to be slowing.

(e) Have we proved H0: μ = 115 to be false and H1: μ < 115 to be true? No! The sample data do not prove H0 to be false and H1 to be true!
We say that H0 has been "discredited" by a small P-value of 0.0207. Therefore, we abandon the claim H0: μ = 115 and adopt the claim H1: μ < 115.

The P-value of a Statistical Test

Rosie the sheep dog has helped us to "sniff out" an important statistical concept.

P-value: Assuming H0 is true, the probability that the test statistic will take on values as extreme as or more extreme than the observed test statistic (computed from sample data) is called the P-value of the test. The smaller the P-value computed from sample data, the stronger the evidence against H0.

The P-value, sometimes called the probability of chance, can be thought of as the probability that the results of a statistical experiment are due only to chance. The lower the P-value, the greater the likelihood of obtaining the same (or very similar) results in a repetition of the statistical experiment. Thus, a low P-value is a good indication that your results are not due to random chance alone.

The P-value associated with the observed test statistic takes on different values depending on the alternate hypothesis and the type of test. Let's look at P-values and types of tests when the test involves the mean and the standard normal distribution. Notice that in Example 2, part (c), we computed a P-value for a left-tailed test; a guided exercise later in the section asks you to compute a P-value for a two-tailed test.

P-VALUES AND TYPES OF TESTS
Let z_x̄ represent the standardized sample test statistic for testing a mean μ using the standard normal distribution; that is, z_x̄ = (x̄ − μ)/(σ/√n).
Left-tailed test: P-value = P(z < z_x̄). This is the probability of getting a test statistic as low as or lower than z_x̄.
Right-tailed test: P-value = P(z > z_x̄). This is the probability of getting a test statistic as high as or higher than z_x̄.
Two-tailed test: P-value/2 = P(z > |z_x̄|); therefore, P-value = 2P(z > |z_x̄|). This is the probability of getting a test statistic either lower than −|z_x̄| or higher than |z_x̄|.
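Assuming SciPy is available, a short sketch reproduces the Example 2 computation and shows how the other two tail types would be computed from the same statistic (the variable names are mine):

```python
import math
from scipy.stats import norm

# Example 2: H0: mu = 115, H1: mu < 115, sigma = 12 known, n = 6, xbar = 105.0.
mu0, sigma, n, xbar = 115.0, 12.0, 6, 105.0
z = (xbar - mu0) / (sigma / math.sqrt(n))  # about -2.04

p_left  = norm.cdf(z)            # left-tailed:  P(z' < z), ~0.0206; the text's
                                 # table lookup with z rounded to -2.04 gives 0.0207
p_right = norm.sf(z)             # right-tailed: P(z' > z)
p_two   = 2 * norm.sf(abs(z))    # two-tailed:   2 P(z' > |z|)
print(round(z, 2), round(p_left, 4))
```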
Types of Errors

If we reject the null hypothesis when it is, in fact, true, we have made an error that is called a type I error. On the other hand, if we accept the null hypothesis when it is, in fact, false, we have made an error that is called a type II error. Table 8-2 indicates how these errors occur.

For tests of hypotheses to be well constructed, they must be designed to minimize possible errors of decision. (Usually, we do not know if an error has been made, and therefore, we can talk only about the probability of making an error.) Usually, for a given sample size, an attempt to reduce the probability of one type of error results in an increase in the probability of the other type of error. In practical applications, one type of error may be more serious than another. In such a case, careful attention is given to the more serious error. If we increase the sample size, it is possible to reduce both types of errors, but increasing the sample size may not be possible.

Good statistical practice requires that we announce in advance how much evidence against H0 will be required to reject H0. The probability with which we are willing to risk a type I error is called the level of significance of a test. The level of significance is denoted by the Greek letter α (pronounced "alpha"). The level of significance α is the probability of rejecting H0 when it is true. This is the probability of a type I error.

TABLE 8-2  Type I and Type II Errors
                  Our Decision
Truth of H0       And if we do not reject H0      And if we reject H0
If H0 is true     Correct decision; no error      Type I error
If H0 is false    Type II error                   Correct decision; no error
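The level of significance has a direct frequency interpretation that a small simulation makes visible: if H0 is really true and we reject whenever the P-value falls below α, we commit a type I error in about a fraction α of experiments. A sketch assuming NumPy and SciPy are available (the scenario reuses Example 2's numbers; the simulation itself is my own illustration, not from the text):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu0, sigma, n, alpha, reps = 115.0, 12.0, 6, 0.05, 100_000

# Draw sample means under a TRUE H0 (the true mean really is mu0) and run
# the left-tailed test each time; the rejection rate estimates P(type I error).
xbars = rng.normal(mu0, sigma / np.sqrt(n), size=reps)
z = (xbars - mu0) / (sigma / np.sqrt(n))
p_values = norm.cdf(z)            # left-tailed P-values
print((p_values < alpha).mean())  # close to alpha = 0.05
```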
The probability of making a type II error is denoted by the Greek letter β (pronounced "beta"). Methods of hypothesis testing require us to choose α and β values to be as small as possible. In elementary statistical applications, we usually choose α first.

The quantity 1 − β is called the power of a test and represents the probability of rejecting H0 when it is, in fact, false.

For a given level of significance, how much power can we expect from a test? The actual value of the power is usually difficult (and sometimes impossible) to obtain, since it requires us to know the H1 distribution. However, we can make the following general comments (a sketch quantifying the first comment appears after Guided Exercise 2 below):
1. The power of a statistical test increases as the level of significance α increases. A test performed at the α = 0.05 level has more power than one performed at α = 0.01. This means that the less stringent we make our significance level α, the more likely we will be to reject the null hypothesis when it is false.
2. Using a larger value of α will increase the power, but it also will increase the probability of a type I error. Despite this fact, most business executives, administrators, social scientists, and scientists use small α values. This choice reflects the conservative nature of administrators and scientists, who are usually more willing to make an error by failing to reject a claim (i.e., H0) than to make an error by accepting another claim (i.e., H1) that is false.

Table 8-3 summarizes the probabilities of errors associated with a statistical test.

TABLE 8-3  Probabilities Associated with a Statistical Test
                  Our Decision
Truth of H0       And if we accept H0 as true                               And if we reject H0 as false
If H0 is true     Correct decision, with corresponding probability 1 − α    Type I error, with corresponding probability α, called the level of significance of the test
If H0 is false    Type II error, with corresponding probability β           Correct decision, with corresponding probability 1 − β, called the power of the test

COMMENT: Since the calculation of the probability of a type II error is treated in advanced statistics courses, we will restrict our attention to the probability of a type I error.

GUIDED EXERCISE 2  Types of errors
Let's reconsider Guided Exercise 1, in which we were considering the manufacturing specifications for the diameter of ball bearings. The hypotheses were
    H0: μ = 6.0 mm (manufacturer's specification)
    H1: μ ≠ 6.0 mm (cause for adjusting process)
(a) Suppose the manufacturer requires a 1% level of significance. Describe a type I error, its consequence, and its probability.
    A type I error is caused when sample evidence indicates that we should reject H0 when, in fact, the average diameter of the ball bearings being produced is 6.0 mm. A type I error will cause a needless adjustment of the manufacturing process; its probability is the level of significance, α = 0.01.
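To make comment 1 above concrete, power can be computed exactly for the left-tailed z test. This sketch is mine, and the "true" mean of 105 is a hypothetical alternative borrowed from Example 2's data, not a value the text asserts:

```python
import math
from scipy.stats import norm

# Left-tailed z test of H0: mu = mu0 at level alpha: reject when z <= norm.ppf(alpha).
# If the true mean is mu1, power = Phi(z_crit + (mu0 - mu1) / (sigma / sqrt(n))).
mu0, mu1, sigma, n = 115.0, 105.0, 12.0, 6
for alpha in (0.01, 0.05):
    z_crit = norm.ppf(alpha)
    shift = (mu0 - mu1) / (sigma / math.sqrt(n))
    print(alpha, round(norm.cdf(z_crit + shift), 3))
# 0.01 -> ~0.388, 0.05 -> ~0.654: raising alpha raises the power, as stated above.
```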
290 Percent 280 270 260 268 40 250 240 230 50 232 30 220 210 20 10 Days 10 Out-of-control signals I and III are present Chapter Review 10 20 30 40 50 60 70 80 Number of employees 20 (a) Hydraulic Pressure in Main Cylinder of Landing Gear of Airplanes (psi)—First Data Set 16 (a) Magnitude (Richter Scale) and Depth (km) of Earthquakes 888 Depth (km) 865 12 10 819 773 2 3 Landing number In control (b) Hydraulic Pressure in Main Cylinder of Landing Gear of Airplanes (psi)—Second Data Set Out of control signals I and III are present 890 880 870 860 850 840 830 820 810 800 790 780 770 760 750 740 730 720 888 865 2 Magnitude 18 (a) Student Enrollment (in thousands) versus Number of Burglaries Burglaries y 890 880 870 860 850 840 830 820 810 800 790 780 770 760 750 70 60 50 40 30 20 10 773 10 20 30 Student enrollment x 3 10 Landing number Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect the overall learning experience Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it A82 ANSWERS TO SELECTED EVEN-NUMBERED PROBLEMS Section 9.2 14 (a) Percent Change in Rate of Violent Crime and Percent Change in Rate of Imprisonment in U.S Population (a) Age and Weight of Healthy Calves x 220 Percent change in imprisonment y Weight (kg) 200 180 160 140 120 x (x, y ) 100 80 60 (x, y) –5 40 10 11 Percent change in violent crime x 10 15 20 25 30 35 40 Age (weeks) 16 (a) Number of Research Programs and Mean Number of Patents per Program Percent of wins 10 (a) Fouls and Basketball Wins y 2.0 y 50 X 40 X X (x, y) 1.5 (x, y) X 30 1.0 20 10 0.5 x Excess fouls x 10 11 12 13 14 15 16 17 18 19 20 Number of programs 18 (a) Chirps per Second and Temperature (ЊF) Temperature y Percent failing to yield y 12 (a) Age and Percentage of Fatal Accidents Due to Failure to Yield 40 30 90 (x, y) 20 80 (x, y) 10 70 40 50 60 70 80 90 Age x 14 15 16 17 18 19 20 Chirps per second x Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect the overall learning experience Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it A83 ANSWERS TO SELECTED EVEN-NUMBERED PROBLEMS Residual 20 (a) Residuals: 2.9; 2.1; Ϫ0.1; Ϫ2.1; Ϫ0.5; Ϫ2.3; Ϫ1.9; 1.9 Residual Plot (a) Number of Insurance Sales and Number of Visits 3.0 X 2.0 1.0 X –1.0 x –2.0 –3.0 20 30 40 50 60 Weight of car (hundreds of pounds) 10 (a) Percent Population Change and Crime Rate Crime rate (per 1000) 10 24 (a) Model with (xЈyЈ) Data Pairs y' 0.7 0.6 0.5 250 200 X 150 100 X (x, y) 50 0 0.4 10 20 30 40 % Population change 0.3 0.2 C H A P T E R 10 0.1 0.5 1.0 Section 10.1 1.5 14 (i) Percentage of Each Party Spending Designated Amount x' Chapter Review (a) Annual Salary (thousands) and Number of Job Changes Percentage Democrat 60 49% 50 40% 40 33% 30 20 Republican 34% 26% 18% 10 X X 10 billion Party spending x Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to electronic rights, some third party content 
may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect the overall learning experience Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it A84 ANSWERS TO SELECTED EVEN-NUMBERED PROBLEMS Section 10.5 (a) ␣ ϭ 0.05; H0: ␮1 ϭ ␮2 ϭ ␮3 ϭ ␮4; H1: Not all the means are equal (b–f) Source of Variation Between groups Sum of Squares Degrees of Freedom Mean Square F Ratio P-value Test Decision 1.573 Ͼ 0.100 Do not reject H0 Mean Square F Ratio P-value Test Decision 0.816 Ͼ 0.100 Do not reject H0 421.033 140.344 Within groups 1516.967 17 89.233 Total 1938.000 20 From TI-84, P-value Ϸ 0.2327 (a) ␣ ϭ 0.01; H0: ␮1 ϭ ␮2 ϭ ␮3; H1: Not all the means are equal (b–f) Source of Variation Between groups Sum of Squares Degrees of Freedom 215.680 107.840 Within groups 1981.725 15 132.115 Total 2197.405 17 From TI-84, P-value Ϸ 0.4608 (a) ␣ ϭ 0.05; H0: ␮1 ϭ ␮2 ϭ ␮3; H1: Not all the means are equal (b–f) Source of Variation Between groups Sum of Squares Degrees of Freedom Mean Square F Ratio P-value Test Decision 2.441 1.2207 2.95 between Do not reject H0 0.4138 Within groups 7.448 18 Total 9.890 20 0.050 and 0.100 From TI-84, P-value Ϸ 0.0779 (a) ␣ ϭ 0.05; H0: ␮1 ϭ ␮2 ϭ ␮3 ϭ ␮4; H1: Not all the means are equal (b–f) Source of Variation Sum of Squares Between groups Within groups Total Degrees of Freedom Mean Square F Ratio P-value Test Decision 18.965 6.322 14.910 Ͻ 0.001 Reject H0 5.517 13 0.424 24.482 16 From TI-84, P-value Ϸ 0.0002 Chapter 10 Review One-way ANOVA H0: ␮1 ϭ ␮2 ϭ ␮3; H1: Not all the means are equal Source of Variation Between groups Sum of Squares Degrees of Freedom Mean Square F Ratio P-value Test Decision 0.443 Ͼ 0.100 Fail to reject H0 1.002 0.501 Within groups 10.165 1.129 Total 11.167 11 TI-84 gives P-value Ϸ 0.6651 Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect the overall learning experience Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it Index Additive rules of probability, 147–150, 154 general, 149, 150, 154 mutually exclusive events, 149, 150, 154 Alpha (level of significance), 416 Alpha (probability of Type I error), 416 Alpha (population constant in least-squares line), 541 Alternate hypothesis H1, 411 for coefficients in multiple regression model, 566 for difference of several means (one-way ANOVA), 642, 648 for difference of several means (two-way ANOVA), 659 for difference of two means (paired difference), 453 for difference of two means (independent samples), 468 for difference of two proportions, 476 for left tailed test, 412 for rank-sum test, 688, 689 for right tailed tests, 412 for runs test, 706, 709 for sign test, 679, 681 for test of correlation coefficient, 518, 542 for test of goodness of fit, 611 for test of homogeneity, 601, 602 for test of independence, 598 for test of mean, 413 for test of proportion, 442 for test of slope of least-squares line, 550, 558 for test of Spearman rank correlation coefficient, 696, 698 for test of two variances, 631–632 for test of variance, 622 for two tailed test, 412 Analysis of variance (one-way ANOVA), 640–649 alternate hypothesis, 642, 648 degrees of freedom for 
denominator, 647, 649 degrees of freedom for numerator, 647, 649 F distribution, 647, 649 null hypothesis, 642, 648 Analysis of variance (two-way ANOVA), 656–664 alternate hypothesis, 659 column factor, 657, 658 degrees of freedom for denominator, 661 degrees of freedom for numerator, 661 F distribution, 661–662 interaction, 657, 659, 663 levels of a factor, 657 null hypothesis, 659 row factor, 657, 658 And (A and B), 143, 146, 147, 154 See also Probability Arithmetic mean, 85, 108, 292 See also Mean Averages, 82–89, 107–108 geometric mean, 93 grouped data, 107–108 harmonic mean, 93 mean, 85, 108, 185, 186, 188, 213, 223, 225, 238, 269 median, 83, 110, 707 mode, 82 moving, 109 population mean m, 85, 98, 185, 186, 188, 213, 223, 225, 238, 251, 264, 265, 269, 292, 296–297, 299, 301, 309, 313, 334, 347, 350–351, 352–353, 379–380, 413, 428, 467, 471, 642, 659, 679, 688, 713 sample mean x, 85, 108, 185, 292, 335, 350–351, 413, 426, 428, 504, 508, 524, 530 trimmed mean, 86 weighted, 88 b (slope of least squares line), 522, 524, 541, 549–550 Back-to-back stem plot, 70 Bar graph, 54, 55, 59, 601 Bayes’s Theorem, A1-A5 Benford’s Law, 409–410 Bernoulli, 195 Bernoulli experiment, 195 Best fitting line, 521 See also Least squares line Beta (probability of a type II error), 417 Beta (population coefficient of least squares equation), 541, 549–550 Bias, 25, 27 Bimodal histogram, 47 Binomial, 195, 210–213, 227–228, 309 approximated by normal, 309 approximated by Poisson, 227–228 coefficients, 198, 199 distribution, 197–200, 210, 213, 229 experiment, 195 formula for probabilities, 198, 229 histogram, 210 mean of binomial distribution, 213 negative, 224, 238 standard deviation of binomial distribution, 213 variable (r), 195, 227, 309, 313, 360, 380, 442, 475 Bivariate normal distribution, 508, 522 Black swan event, 138 Block, 24, 665 Blocking, 23, 665 Boundaries, class, 42 Box-and-whisker plot, 114 CV (coefficient of variation), 100 Categorical data, Cause and effect relations, 513 Cells, 593, 657 Census, 25 Central Limit Theorem, 299–300 Chebyshev’s Theorem, 101–102 Chi-square (x2), 592, 597, 602, 609–616, 619–621, 625–626 calculation of, 595–596, 598, 610, 611, 619, 622 confidence interval for variance, 625–626 degrees of freedom for confidence interval of variance, 625 degrees of freedom for goodness of fit test, 610, 611 degrees of freedom for homogeneity test, 598, 600–602 degrees of freedom for independence test, 597, 598 degrees of freedom for test of a variance, 619, 622 distribution of, 592, 619–621 test for goodness of fit, 608–611 test for homogeneity, 600–602 test for independence, 593–598 test for a single variance or standard deviation, 618–622 Circle graph, 57, 59 Class, 40–42 boundaries, 42 frequency, 40, 42 limits, 41 mark, 42 midpoint, 42 width, 41 Cluster sample, 16, 17 Coefficient, binomial, 198, 199 Coefficient of determination, r2, 530–531 Coefficient of linear correlation, r, 506–508 formula for, 508 testing, 518, 542 Coefficient of multiple determination, 565 Coefficient of variation, CV, 100 Column Factor, 657, 658 Combinations rule, 168, 198 Complement of event A, 136, 137, 154 Completely randomized experiment, 23, 665 Conclusions (for hypothesis testing), 420–421 using critical regions, 435–436 using P-values, 418–419 Conditional probability, 143, 144, 209, 237, 291, A1-A2 Confidence interval, 338, 339 for coefficients of multiple regression model, 567 I1 Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to 
electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect the overall learning experience Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it I2 INDEX Confidence interval (Continued) for difference of means, 374–375, 376–377, 379–380, 394 for difference of proportions, 380–381 for mean, 334–338, 339, 350–351, 352–353 alternate method when s unknown, 359 for paired data difference of means, 466 for predicted value of response variable, 547–548, 565 for proportion, 362 plus four method, 371–372 for slope of least-squares line, 549–550 for variance or standard deviation, 625–626 method of testing, 441 Confidence level, c, 335–337 Confidence prediction band, 549 Confounding variable, 24, 26 Contingency tables, 151, 593 Continuity correction for normal approximation to binomial, 311 Continuous random variable, 182 Control Chart for mean, 255–258 Control group, 23, 24 Convenience sample, 16, 17 Correlation Pearson product moment correlation coefficient r, 506–508 formula for, 508 interpretation of, 506, 508–509 testing, 518, 542 Spearman rank correlation rS , 694–698 formula for, 696 interpretation of, 696 testing, 696–697, 698 Covariance, 520 Criterion for least squares equation, 522, 560–561 Critical regions, 432–436, 441, 445, 459, 480–481, 706–707 Critical values, 335, 338, 433, 706–707 for Chi-square distribution, 592, 619, 625–626 for correlation coefficient r, 518 for normal distribution, 335–336, 362, 375, 381, 433 for runs test of randomness, 706–707 for t, 349, 351, 377, 441 Cumulative frequency, 48 Curve fitting, curvilinear, 573 exponential, 538–539 linear, 521–522 polynomial, 573 power, 540 Curvilinear regression, 573 Data continuous, 182 discrete, 182 paired (dependent samples), 373, 452 population, sample, qualitative, quantitative, Decision errors, types of, 416–417 Degrees of freedom (d.f.) 
for chi-square estimating a variance, 619, 622 for chi-square goodness of fit test, 610, 611 for chi-square test of homogeneity, 598, 599, 602 for chi-square test of independence, 597, 598 for chi-square test of variance, 619, 622 for F distribution denominator, test of two variances, 632, 635 one-way ANOVA, 647, 649 two-way ANOVA, 661 for F distribution numerator, test of two variances, 632, 635 one-way ANOVA, 647, 649 two-way ANOVA, 661 for Student’s t distribution, confidence interval for coefficient of multiple regression model, 567 difference of means, 377, 379–380, 394 mean, 350–351, 352–353 paired data difference of means, 466 prediction, 547–548, 565 slope of least-squares line, 550 for Student’s t distribution, test of coefficient of multiple regression, 566 correlation coefficient, 542 difference of means, 472, 487–488 mean, 428 paired difference, 454 test of slope, 549–550 Deming, W.E., 56 DeMoivre, 250 Density function, 252 Dependent events, 143 Dependent samples, 467 Descriptive Statistics, 10 Deviation population standard s, 98, 185, 186, 188, 213, 223, 225, 238, 251, 264, 265, 269, 292, 296–297, 298, 299, 301, 309, 313, 338, 347, 374, 426, 467, 619, 622, 625, 631, 635, 679, 688, 713 sample standard s, 94–95, 107–108, 292, 348, 351–352, 377, 428, 471, 619, 622, 625, 631, 635, 643 computation formula, 95, 108 Difference among several means (one-way ANOVA), 640–649 among several means (two-way ANOVA), 656–664 between two means, 374–375, 376–377, 380, 394, 468–469, 471–472, 475 between two proportions, 380–381, 475–477 paired difference test, 452–454, 468 Discrete probability distribution, 183–185 Discrete random variable, 183 Disjoint events, See Mutually exclusive events Distribution bell shaped, 251 bimodal, 47 binomial, 198, 210–213, 227, 229, 230 bivariate normal, 508, 522 chi-square, 592, 597, 602, 609–616, 619–621, 625–626 exponential, 264–265 F, 630, 632–633, 635, 647, 661–662 geometric, 222–223, 229 hypergeometric, A5-A6 negative binomial, 224, 238 normal, 250–252 Poisson, 224–225, 227–228, 229, 230 probability, 183, 185–186 sampling, 292, 294, 296–297, 299–300, 312–313 skewed, 47, 284 Student’s t, 347–349, 376–377, 428–429, 453, 466, 471, 487–488, 542, 547, 549-550, 566–567 symmetrical, 47, 251, 284 uniform, 47, 263–264 Distribution free tests, 678–681, 686–689, 696–698, 705–709 See also Nonparametric tests Dotplot, 54 Double blind experiment, 24 E, maximal margin of error, 337, 342, 350, 353, 361, 365, 375, 377, 380, 381 for least squares prediction, 547 for difference of proportions, 381 for difference of means, independent samples, 375, 377, 380 for difference of means, paired data, 466 for mean, 337, 350, 353 for proportion, 361, 365 for slope of least-squares line, 550 EDA, 63–64, 114 Empirical rule, 252 Equally likely outcomes, 132 Equation of least squares line, simple regression, 522 multiple regression model, 559 Error of estimate See margin of error Errors Type I, 416–417 Type II, 416–417 Error, sampling, 17, 18 Estimation, 334–338 difference of means, independent samples, 374–375, 376–377, 379–380, 394 difference of means, paired data, 466 Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect the overall learning experience Cengage Learning reserves the right to remove additional content at any time if subsequent 
rights restrictions require it I3 INDEX difference of proportions, 380–381 mean, 334–338, 339, 350–351, 352–353 predicted value in linear regression, 547–548, 565 proportion, 362 slope of least-squares line, 522 variance (or standard deviation), 625–626 Event, probability of, 132 Event, 134, 137 complement of, 136, 137, 154 dependent, 143 equally likely, 132 failure F, binomial, 195 independent, 145, 154 mutually exclusive, 149, 154 simple, 134, 137 success S, binomial, 195 Expected frequency for contingency table, 594–595 for goodness of fit, 608–609 Expected value, 185, 186 for binomial distribution, 213 for general discrete probability distribution, 185, 186 for geometric distribution, 223 for hypergeometric distribution, A6 for negative binomial, 238 for Poisson distribution, 224–225 Experiment binomial, 195 completely randomized, 23 double blind, 24 randomized block design, 24, 665 statistical, 22, 134, 137 Experimental Design, 21–27, 664–665 Explanatory variable in simple regression, 502, 521 in multiple regression, 559 Exploratory data analysis, 63–64, 114 Exponential growth model, 538–539 Exponential probability distribution, 264–265 Extrapolation, 526, 566 F distribution, 614 in one-way ANOVA, 647 in testing two variances, 632–633, 635 in two-way ANOVA, 661–662 F, failure on a binomial trial, 195 See also Binomial Factor (two-way ANOVA) column, 657, 658 row, 657, 658 Factorial, 166, 198 Fail to reject null hypothesis, 420 Fence, 116 F ratio, 632, 635, 647, 649, 661 Fisher, R.A., 322, 425, 630, 655 Five-number summary, 114 Frame, sampling, 17 Frequency, 40, 42 cumulative, 48 expected, 594–595, 608, 609 relative, 44, 132 Frequency distribution, 40–43 See also Histogram Frequency histogram, 40–45 See also Histogram Frequency table, 40, 43 Gauss, C.F., 250 Gaussian distribution See Normal distribution General probability rule for addition, 149, 154 for multiplication, 143, 154 Geometric distribution, 222–223 Geometric mean, 93 Goodness of fit test, 608–611 Gosset, W.S., 347–348 Graphs bar, 54, 55, 59, 601 circle, 57, 59 dotplot, 54 frequency histogram, 40–45, 47 histogram, 40–45, 47 ogive, 48–49 Pareto chart, 56, 59 Pie chart, 57, 59 relative frequency histogram, 44–45 residual plot, 537 scatter diagram, 502, 579–580 Stem-and-leaf display, 64–65, 68, 69, 70 time series graph, 58, 59 Grouped data, 107–108 Harmonic mean, 93 Hinge, 115 See also Quartile Histogram, 44, 47 bimodal, 47 frequency, 40–44 how to construct, 40–44 relative frequency, 44 skewed, 47 symmetric, 47 uniform, 47 Homogeneity test, 600–602 Hypergeometric distribution, A5–A6 Hypothesis test, in general, 410–413, 415, 418–419 alternate hypothesis H1, 411 conclusion, 420–421 conclusion based on critical regions, 435–436 conclusion based on P-value, 418–419 confidence interval method, 441 critical region, 432–434 critical value, 433 level of significance, 416 null hypothesis H0, 411 P-value, 415 Power of a test, 417 Hypothesis testing (types of tests) of coefficients of multiple regression, 566 of correlation coefficient, 518, 542 of difference of means, 467–469, 471–472, 475, 480–481, 488 of difference of proportions, 475–477, 480–481 of difference among several means one-way ANOVA, 640–649 two-way ANOVA, 656–664 of goodness of fit, 608–611 of homogeneity, 600–602 of independence, 593–598 of mean, 426–427, 428–429, 432–434 of nonparametric, 678–681, 686–689, 696–698, 705–709 of paired differences, 452–454 of proportion, 442–443 rank-sum test, 686–689 runs test for randomness, 705–709 sign test, 678–681 of slope, 550, 558 
of Spearman rank correlation coefficient, 696–698 of variance or standard deviation, 618–622 of two variances, 631–635 Independence test, 593–598 Independent events, 143, 154 Independent samples, 373, 467, 631, 658, 686 Independent trials, 195 Individual, Inference, statistical, 10 Inflection point, 251 Influential point, 525, 579–580 Interaction, 657, 659, 661, 663 Interpolation, 526 Interquartile range, 112 Interval, confidence, 334–338, 339, 350–351, 352–353, 362, 374–375, 376–377, 380–381, 466, 547–548, 549–550, 565, 567, 625–626 Interval level of measurement, 7, Inverse normal distribution, 279 Large samples, 299, 309, 313 Law of large numbers, 134 Leaf, 64, 65, 70 Least squares criterion, 521, 560–561 Least squares line calculation of simple, 521–522 calculation of multiple, 559–561 exponential transformation, 538–539 formula for simple, 522 power transformation, 540 predictions from multiple, 565 predictions from simple, 525–526, 547–548 slope of simple, 522, 524, 542 Level of confidence, c, 335, 336, 337 Level of significance, 416 Levels of measurement, 7–9, 87 interval, 7, nominal, 7, ordinal, 7, ratio 7, Likert scale, 25, 485 Limits, class, 41, 43 Linear combination of independent random variables, 188 of dependent random variables, 520 Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect the overall learning experience Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it I4 INDEX Linear function of a random variable, 187–188 Linear regression, 521–523 Logarithmic transformation exponential growth model, 538–539 power law model, 540 Lower class limit, 41, 43 Lurking variable, 26, 513 Mann-Whitney U test, 686 See also rank-sum test Margin of error, 335, 365, 403 Maximal error of estimate see, E, maximal margin of error Mean See also Estimation and Hypothesis testing for binomial distribution, 213 comparison with median, 56 defined, 85 discrete probability distribution, 173–174 exponential distribution, 264–265 formula for grouped data, 108 formula for ungrouped data, 85 geometric, 93 geometric distribution, 223 harmonic, 93 hypergeometric distribution, A5–A7 linear combination of independent random variables, 188 linear combination of dependent random variables, 519–520 linear function of a random variable, 188 moving, 109 negative binomial distribution, 238 Poisson distribution, 225 population m, 85, 98, 185, 186, 188, 213, 223, 225, 238, 251, 264, 265, 269, 292, 296–297, 299, 301, 309, 313, 334, 347, 350–351, 352–353, 379–380, 413, 428, 467, 471, 642, 659, 679, 688, 713 sample , 85, 108, 185, 292, 335, 350–351, 413, 426, 428, 507, 508, 524, 530 trimmed, 86 uniform distribution, 263–264 weighted, 88 Mean square MS, 645–646, 649, 661 Median, 83, 110, 707 Midpoint, class, 42 Mode, 82 Monotone relation, 695 decreasing, 695 increasing, 695 Moving average, 109 Mu, population mean, 85, 98, 185, 186, 188, 213, 223, 225, 238, 251, 264, 265, 269, 292, 296–297, 299, 301, 309, 313, 334, 347, 350–351, 352–353, 379–380, 413, 428, 467, 471, 642, 659, 679, 688, 713 Multinomial experiment, 603 Multiple regression, 559–561 coefficients in equation, 559, 564–565, 566–567 coefficient of multiple determination, 565 confidence interval for coefficients, 567 confidence interval for 
prediction, 565 curvilinear regression, 573 equation, 559, 564–565 explanatory variables, 559 forecast value of response variable, 559, 565 model, 559–561 polynomial regression, 573 residual, 561 response variable, 559 testing a coefficient, 566 theory, 560–561 Multiplication rule of counting, 163 Multiplication rule of probability, 143, 154 for dependent events, 143, 154 for independent events, 143, 154 Multistage sampling 16, 17 Mutually exclusive events, 149, 156 N, population size, 85 Negative binomial distribution, 224, 238 Negative correlation, 505 Nightingale, Florence, 38, 191 Nonparametric tests, 458, 678 rank-sum test, 686–689 runs test, 705–709 sign test, 678–681 Spearman correlation test, 696–698 Nonresponse, 25 Nonsampling error, 17, 18 Nominal level of measurement, 7, Normal approximation to binomial, 309 to pˆ , distribution, 312–313 Normal density function, 252 Normal distribution, 250–253, 269, 309, 296–297, 299–300 areas under normal curve, 269–273, 277 bivariate, 508, 532 normal curves, 250–253 standard normal, 269 Normal quantile plot, 284, 325–326 Normality, 283–284, 325–326 Null hypothesis, H0, 411 See also Alternate hypothesis, H1 Number of degrees of freedom See Degrees of freedom (d.f.) Observational study, 22 Observed frequency (O), 595, 598, 602, 608, 611 Odds against, 142 Odds in favor, 141 Ogive, 48–49 Or (A or B), 147, 148 Ordinal level of measurement, 7, Out of control, 256–257 Signal I, 357 Signal II, 357 Signal III, 357 Outlier, 48, 103, 116, 119, 284, 537 p (probability of success in a binomial trial), 195, 292, 360, 442 pˆ , point estimate of p, 292, 312–313, 360, 362, 442, 443, 475–476 p, pooled estimate of a proportion, 476, 477 ෂ p, plus four estimate of a proportion, 372 P-value, 415–416, 418, 419 Paired data, 452, 453, 502, 678 Paired difference confidence interval, 466 Paired difference test, 452–454 Parameter, 5, 291, 335, 413 Parametric test, 458 Pareto chart, 56, 59 Pearson, Karl, 506, 603 Pearson product moment correlation coefficient r, 506–508, 696 Pearson’s index for skewness, 284 Percentile, 110–112 Permutations rule, 167 Pie chart, 57, 59 Placebo, 23 effect, 23, 27 Plus four confidence interval for p, 372 Point estimate, for population mean, 335 for population proportion, 360 for population probability of success, 360 Poisson, S.D., 224 Poisson approximation to binomial, 227–228 Poisson, distribution, 224–228, 229–230 Pooled estimate of a proportion, p, 476, 477 of a standard deviation, s, 394, 488 of a variance, s2, 394, 488 Population defined, 5, 291 mean m, 85, 98, 185, 186, 188, 213, 223, 225, 238, 251, 264, 265, 269, 292, 296–297, 299, 301, 309, 313, 334, 347, 350–351, 352–353, 379–380, 413, 428, 467, 471, 642, 659, 679, 688, 713 standard deviation s, 98, 185, 186, 188, 213, 223, 225, 238, 251, 264, 265, 269, 292, 296–297, 298, 299, 301, 309, 313, 338, 347, 374, 426, 467, 619, 622, 625, 631, 635, 679, 688, 713 Population parameter, 5, 291 Positive correlation, 504 Power of a test, 417 Power law model, 540 Prediction for y given x multiple regression, 565 simple regression, 514 Probability addition rule (general events), 149, 154 addition rule (mutually exclusive events), 149, 154 binomial, 198 of the complement of an event, 136, 137, 154 Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect 
conditional, 143, 144, A1–A2; defined, 132, 137; of an event, 132; multiplication rule (general events), 143, 154; multiplication rule (independent events), 143, 154
Probability distribution, 47, 183, 185: continuous, 183, 250; discrete, 183, 185, 210; mean, 185–186; standard deviation, 185–186
Proportion, estimate of p̂, 292, 312–313, 360
Proportion, plus four estimate p̃, 372
Proportion, pooled estimate p̄, 476, 477
Proportion, test of, 442–443
q, probability of failure in a binomial trial, 195
Qualitative variable,
Quantitative variable,
Quartile, 112
Quota problems, 214–216
r, number of successes in a binomial experiment, 195, 309, 313, 360, 381, 442, 475
r, Pearson product moment correlation coefficient, 506–508
r², coefficient of determination, 530–531
r², coefficient of multiple determination, 565
r_s, Spearman rank correlation coefficient, 696
R, sum of ranks, 687
R, number of runs, 706
Random, 13–14, 706
Random number generator, 15, 35–37
Random number table, 13
Random sample, 13–14
Random variable, 182
Randomized block design, 24, 665
Randomized experiment, 23
Range, 93: interquartile, 112
Rank-sum test, 686–689
Ranked data, 687, 689–690
Ranks, 687: ties, 689–690, 700
Ratio level of measurement, 7
Raw score, 268
Rectangular distribution, 264
Region, rejection or critical, 432–436
Regression: curvilinear, 573; exponential, 538–539; polynomial, 573; power, 540; multiple, 559–561 (see also Multiple regression); simple linear, 521–523
Reject null hypothesis, 420
Rejection region. See Critical region
Relative frequency, 43, 132
Relative frequency table, 43
Replication, 24
Residual, 525, 530, 537, 544, 561
Residual plot, 537
Resistant measures, 86
Response variable: in simple regression, 502, 521; in multiple regression, 559
Rho (ρ), 512, 518, 541, 542
Row factor, 657, 658
Run, 706
Runs test for randomness, 705–709
s, pooled standard deviation, 394, 488
s, sample standard deviation, 94–95, 107–108, 292, 348, 351–352, 377, 428, 471, 619, 622, 625, 631, 635, 643
s², sample variance, 95, 96, 619, 622, 625, 631, 635, 643
S, success on a binomial trial, 195
Sample, 5, 22: cluster, 16, 17; convenience, 16, 17; large, 299, 309, 313; mean, 85, 108, 185, 292, 335, 350–351, 413, 426, 428, 507, 508, 524, 530; multistage, 16, 17; simple random, 13, 17; standard deviation s, 94–95, 107–108, 292, 348, 351–352, 377, 428, 471, 619, 622, 625, 631, 635, 643; stratified, 16, 17; systematic, 16, 17; variance s², 95, 96, 619, 622, 625, 631, 635, 643; voluntary response, 25
Sample size, determination of: for estimating a mean, 342; for estimating a proportion, 366; for estimating a difference of means, 392; for estimating a difference of proportions, 393
Samples: dependent, 452, 467; independent, 373, 467, 631, 658, 686; repeated with replacement, 15, 35, 196; repeated without replacement, 15, 35, 196
Sample space, 134, 137
Sample test statistic. See Test statistic
Sampling, 12–18: cluster, 16, 17; convenience, 16, 17; frame, 17; multistage, 16, 17; simple random, 13; stratified, 16, 17; systematic, 16, 17; with replacement, 15, 35
Sampling distribution: for proportion, 312–314; for mean, 292–295, 296–299, 299–300. See also Central Limit Theorem
Sampling frame, 17
Sampling error, 17–18
Satterthwaite's formula for degrees of freedom, 377, 393, 487–488
Scatter diagram, 502, 579–580
Sequence, 706
Sigma (σ), 98, 185, 186, 188, 213, 223, 225, 238, 251, 264, 265, 269, 292, 296–297, 298, 299, 301, 309, 313, 338,
347, 374, 426, 467, 619, 622, 625, 631, 635, 679, 688, 713
Σ (summation notation), 85
Sign test, 678–681
Significance level, 416
Simple event, 134, 137
Simple random sample, 13
Simulation, 15, 19, 22, 32, 179, 190, 326, 357, 404–405, 406–407, 497–498
Skewed distribution, 47, 284
Slope of least squares line, 522, 524, 541, 549–550
Spearman, Charles, 695
Spearman rank correlation, 696
Standard deviation: for binomial distribution, 213; for exponential distribution, 265; for geometric distribution, 223; for grouped data, 107–108; for hypergeometric distribution, A6; for negative binomial distribution, 238; for Poisson distribution, 225; for uniform distribution, 264; pooled, 394, 488; population σ, 98, 185, 186, 188, 213, 223, 225, 238, 251, 264, 265, 269, 292, 296–297, 298, 299, 301, 309, 313, 338, 347, 374, 426, 467, 619, 622, 625, 631, 635, 679, 688, 713; sample s, 94–95, 107–108, 292, 348, 351–352, 377, 428, 471, 619, 622, 625, 631, 635, 643; for distribution of sample proportion, 313; for distribution of sample mean, 296–297, 298, 299; for number of runs, 713; for rank R, 688; for testing and estimating a variance, 618–619, 625; for testing two variances, 631–632, 635
Standard error: of coefficient in multiple regression, 567; of mean, 298; of proportion, 298; of slope, 550
Standard error of estimate Se, 544–545
Standard normal distribution, 269
Standard score, z, 267, 269
Standard unit z, 267, 269
Statistic, 5, 291, 302, 413
Statistical experiment, 22, 134, 137
Statistical significance, 418
Statistics: definition; descriptive, 10; inferential, 10
Stem, 64, 68
Stem and leaf display, 64–65: back-to-back, 70; split stem, 68
Strata, 16
Stratified sampling, 16, 17
Student's t distribution, 347–349, 376–377, 428–429, 453, 466, 471, 487–488, 542, 547, 549–550, 566–567
Study sponsor, 26
Sum of squares SS, 95, 643, 644, 645, 649, 659–661
Summation notation Σ, 85
Survey, 25–27
Symmetrical distribution, 47, 283–284
Systematic sampling, 16, 17
t (Student's t distribution), 347–349, 376–377, 428–429, 453, 466, 471, 487–488, 542, 547, 549–550, 566–567
Taleb, Nassim Nicholas, 138
Tally, 42
Tally survey, 151
Test of hypotheses. See Hypothesis testing
Test statistic, 413: for ANOVA (one-way), 647, 649; for ANOVA (two-way), 661; for chi-square goodness of fit test, 608, 611; for chi-square test of homogeneity, 602; for chi-square test of independence, 596, 598; for chi-square test of variance, 619, 622; for correlation coefficient rho, 518, 542; for difference of means (dependent samples), 453, 454; for difference of means (independent samples), 469, 471–472, 475, 488; for difference of proportions, 476, 477; for mean, 413, 426–427, 428, 434; for proportion, 442, 443; for rank-sum test, 688, 689; for runs test for randomness, 706, 709; for sign test, 679, 681; for slope of least-squares line, 550, 558; for Spearman rank correlation coefficient, 696, 698; for two variances, 632, 635
Time series, 58, 59
Time series graph, 58, 59
Tree diagram, 163
Trial, binomial, 195
Trimmed mean, 86
Two-tailed test, 412
Two-way ANOVA, 656–664
Type I error, 416–417
Type II error, 416–417
Unbiased statistic, 302
Undercoverage, 17
Uniform distribution, 47
Uniform probability distribution, 263–264
Upper class limit, 41, 43
Variable: continuous, 182; discrete, 182; explanatory, 502, 521, 559; qualitative; quantitative; random, 182; response, 502, 521, 559; standard normal, 267, 269 (see also z value)
Variance, 94, 95, 96, 98: analysis of (one-way ANOVA), 640–649; analysis of (two-way ANOVA), 656–664; between samples, 645, 646; error (in two-way ANOVA), 660; estimate of pooled, 394, 488; estimation of single, 625–626; for grouped data s², 107–108; for ungrouped sample data s², 95, 96; population σ², 98, 188, 619, 622, 625, 631–635; sample s², 94–95, 107–108, 619, 622, 625, 631–632, 643; testing, 618–622; testing two, 631–635; treatment, 660; within samples, 645, 646
Variation: explained, 530–531, 565; unexplained, 530–531, 565
Voluntary response sample, 25
Welch approximation, 377
Weighted average, 88
Whisker, 114
x̄ (x bar), 85, 108, 292. See also Mean
z score, 267, 269, 297, 299
z value, 267, 269, 297, 299

FREQUENTLY USED FORMULAS
Notation: n = sample size; N = population size; f = frequency.

Chapter 2
Class width = (high − low)/(number of classes); increase the result to the next whole number.
Class midpoint = (upper limit + lower limit)/2
Lower boundary = lower boundary of previous class + class width

Chapter 3
Sample mean: $\bar{x} = \dfrac{\sum x}{n}$
Population mean: $\mu = \dfrac{\sum x}{N}$
Weighted average: $\dfrac{\sum xw}{\sum w}$
Range = largest data value − smallest data value
Sample standard deviation: $s = \sqrt{\dfrac{\sum (x - \bar{x})^2}{n-1}}$
Computation formula: $s = \sqrt{\dfrac{\sum x^2 - (\sum x)^2/n}{n-1}}$
Population standard deviation: $\sigma = \sqrt{\dfrac{\sum (x - \mu)^2}{N}}$
Sample variance: $s^2$; population variance: $\sigma^2$
Sample coefficient of variation: $CV = \dfrac{s}{\bar{x}} \cdot 100$
Sample mean for grouped data: $\bar{x} = \dfrac{\sum xf}{n}$
Sample standard deviation for grouped data: $s = \sqrt{\dfrac{\sum (x - \bar{x})^2 f}{n-1}} = \sqrt{\dfrac{\sum x^2 f - (\sum xf)^2/n}{n-1}}$

Chapter 4
Probability of the complement of event A: $P(A^c) = 1 - P(A)$
Multiplication rule for independent events: $P(A \text{ and } B) = P(A) \cdot P(B)$
General multiplication rules: $P(A \text{ and } B) = P(A) \cdot P(B \mid A) = P(B) \cdot P(A \mid B)$
Addition rule for mutually exclusive events: $P(A \text{ or } B) = P(A) + P(B)$
General addition rule: $P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B)$
Permutation rule: $P_{n,r} = \dfrac{n!}{(n-r)!}$
Combination rule: $C_{n,r} = \dfrac{n!}{r!\,(n-r)!}$

Chapter 5
Mean of a discrete probability distribution: $\mu = \sum x P(x)$
Standard deviation of a discrete probability distribution: $\sigma = \sqrt{\sum (x - \mu)^2 P(x)}$
Given $L = a + bx$: $\mu_L = a + b\mu$ and $\sigma_L = |b|\sigma$
Given $W = ax_1 + bx_2$ ($x_1$ and $x_2$ independent): $\mu_W = a\mu_1 + b\mu_2$ and $\sigma_W = \sqrt{a^2\sigma_1^2 + b^2\sigma_2^2}$
For binomial distributions (r = number of successes; p = probability of success; q = 1 − p):
Binomial probability distribution: $P(r) = C_{n,r}\, p^r q^{n-r}$
Mean: $\mu = np$; standard deviation: $\sigma = \sqrt{npq}$
Geometric probability distribution (n = number of the trial on which the first success occurs): $P(n) = p(1-p)^{n-1}$
Poisson probability distribution (r = number of successes; λ = mean number of successes over the given interval): $P(r) = \dfrac{e^{-\lambda}\lambda^r}{r!}$
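The Chapter 5 formulas translate directly into code. Below is a minimal Python sketch (an illustration, not part of the original card) that evaluates the binomial, geometric, and Poisson probability formulas exactly as written above; the parameter values n = 10, p = 0.3, and λ = 1.7 are made up for the example.

```python
from math import comb, exp, factorial, sqrt

def binomial_pmf(r, n, p):
    """P(r) = C(n, r) * p^r * q^(n - r), with q = 1 - p."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

def geometric_pmf(n, p):
    """P(n) = p(1 - p)^(n - 1): first success occurs on trial n."""
    return p * (1 - p)**(n - 1)

def poisson_pmf(r, lam):
    """P(r) = e^(-lambda) * lambda^r / r!."""
    return exp(-lam) * lam**r / factorial(r)

n, p = 10, 0.3
mu = n * p                      # binomial mean: mu = np
sigma = sqrt(n * p * (1 - p))   # binomial standard deviation: sigma = sqrt(npq)

# Sanity check: the binomial probabilities for r = 0..n sum to 1.
assert abs(sum(binomial_pmf(r, n, p) for r in range(n + 1)) - 1) < 1e-12

print(f"Binomial: P(3) = {binomial_pmf(3, n, p):.4f}, mu = {mu}, sigma = {sigma:.4f}")
print(f"Geometric: P(first success on trial 4) = {geometric_pmf(4, p):.4f}")
print(f"Poisson: P(2 successes | lambda = 1.7) = {poisson_pmf(2, 1.7):.4f}")
```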
Chapter Standard score z ϭ Raw score x ϭ z␴ ϩ ␮ xϪm s Mean of x distribution mx ϭ ␮ Standard deviation of x distribution sx ϭ Standard score for x zϭ s 1n xϪm s/ 1n Mean of pˆ distribution mpˆ ϭ p Standard deviation of pˆ distribution spˆ ϭ pq ;qϭ1Ϫp B n Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect the overall learning experience Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it Chapter for p (np Ͼ and nq Ͼ 5) Confidence Interval for ␮ xϪEϽ␮ϽxϩE s where E ϭ zc when ␴ is known 1n s E ϭ tc when ␴ is unknown 1n with d.f ϭ n Ϫ for p (np Ͼ and n(1 Ϫ p) Ͼ 5) pˆ Ϫ E Ͻ p Ͻ pˆ ϩ E where E ϭ zc D r pˆ ϭ n pˆ (1 Ϫ pˆ ) n pˆ Ϫ p 1pq/n where q ϭ Ϫ p; pˆ ϭ r/n for paired differences d tϭ d Ϫ md sd /1n ; d.f ϭ n Ϫ for difference of means, ␴1 and ␴2 known zϭ x1 Ϫ x2 s21 s22 ϩ n2 D n1 for difference of means, ␴1 or ␴2 unknown tϭ x1 Ϫ x2 s21 s22 ϩ n2 D n1 for ␮1 Ϫ ␮2 (independent samples) (x1 Ϫ x2) Ϫ E Ͻ ␮1 Ϫ ␮2 Ͻ (x1 Ϫ x2) ϩ E where E ϭ zc zϭ s21 s22 ϩ when ␴1 and ␴2 are known n2 D n1 s21 s22 ϩ E ϭ tc when ␴1 or ␴2 is unknown n2 D n1 with d.f ϭ smaller of n1 Ϫ and n2 Ϫ (Note: Software uses Satterthwaite’s approximation for degrees of freedom d.f.) for difference of proportions p1 Ϫ p2 (pˆ Ϫ pˆ 2) Ϫ E Ͻ p1 Ϫ p2 Ͻ (pˆ Ϫ pˆ 2) ϩ E pˆ 1qˆ pˆ 2qˆ ϩ n n2 D ˆp1 ϭ r1 /n1; pˆ ϭ r2 /n2 qˆ ϭ Ϫ pˆ 1; qˆ ϭ Ϫ pˆ where E ϭ zc d.f ϭ smaller of n1 Ϫ and n2 Ϫ (Note: Software uses Satterthwaite’s approximation for degrees of freedom d.f.) for difference of proportions zϭ pˆ Ϫ pˆ pq pq ϩ n2 D n1 r1 ϩ r2 where p ϭ and q ϭ Ϫ p n1 ϩ n2 pˆ ϭ r1/n1; pˆ ϭ r2 /n2 Chapter Regression and Correlation Pearson product-moment correlation coefficient rϭ ngxy Ϫ (gx)(gy) 2ngx Ϫ (gx)2 2ngy2 Ϫ (gy)2 Sample Size for Estimating means n ϭ a Least-squares line yˆ ϭ a ϩ bx zcs b E where b ϭ proportions zc n ϭ p(1 Ϫ p) a b with preliminary estimate for p E nϭ zc a b without preliminary estimate for p E Chapter Sample Test Statistics for Tests of Hypotheses for ␮ (␴ known) for ␮ (␴ unknown) zϭ xϪm xϪm s/ 1n ngx2 Ϫ (gx)2 a ϭ y Ϫ bx Coefficient of determination ϭ r2 Sample test statistic for r tϭ r 1n Ϫ 21 Ϫ r2 with d.f ϭ n Ϫ Standard error of estimate Se ϭ gy2 Ϫ agy Ϫ bgxy D nϪ2 Confidence interval for y yˆ Ϫ E Ͻ y Ͻ yˆ ϩ E s/ 1n tϭ ngxy Ϫ (gx)(gy) ; d.f ϭ n Ϫ where E ϭ tc Se D 1ϩ n(x Ϫ x)2 ϩ n ngx2 Ϫ (gx)2 with d.f ϭ n Ϫ Copyright 2010 Cengage Learning All Rights Reserved May not be copied, scanned, or duplicated, in whole or in part Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s) Editorial review has deemed that any suppressed content does not materially affect the overall learning experience Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it Sample test statistic for slope b tϭ b g x2 Ϫ (gx)2 with d.f ϭ n Ϫ n Se A Confidence interval for ␤ bϪEϽ␤ϽbϩE where E ϭ tc Se gx Ϫ (gx)2 n A with d.f ϭ n Ϫ (O Ϫ E)2 where E O ϭ observed frequency and E ϭ expected frequency x2 ϭ g For tests of independence and tests of homogeneity Eϭ (row total)(column total) sample size For goodness of fit test E ϭ (given percent)(sample size) Tests of independence d.f ϭ (R Ϫ 1)(C Ϫ 1) Test of homogeneity d.f ϭ (R Ϫ 1)(C Ϫ 1) 
Chapter 10
$\chi^2 = \sum \dfrac{(O - E)^2}{E}$, where O = observed frequency and E = expected frequency.
For tests of independence and tests of homogeneity: $E = \dfrac{(\text{row total})(\text{column total})}{\text{sample size}}$, with d.f. = (R − 1)(C − 1).
For the goodness-of-fit test: E = (given percent)(sample size), with d.f. = (number of categories) − 1.
Confidence interval for $\sigma^2$ (d.f. = n − 1): $\dfrac{(n-1)s^2}{\chi^2_U} < \sigma^2 < \dfrac{(n-1)s^2}{\chi^2_L}$
Sample test statistic for $\sigma^2$: $\chi^2 = \dfrac{(n-1)s^2}{\sigma^2}$, with d.f. = n − 1.
Testing two variances: sample test statistic $F = \dfrac{s_1^2}{s_2^2}$, where $s_1^2 \ge s_2^2$; d.f.N = $n_1 - 1$, d.f.D = $n_2 - 1$.
One-way ANOVA (k = number of groups, N = total sample size):
$SS_{TOT} = \sum x_{TOT}^2 - \dfrac{(\sum x_{TOT})^2}{N}$
$SS_{BET} = \sum_{\text{all groups}} \dfrac{(\sum x_i)^2}{n_i} - \dfrac{(\sum x_{TOT})^2}{N}$
$SS_W = \sum_{\text{all groups}} \left( \sum x_i^2 - \dfrac{(\sum x_i)^2}{n_i} \right)$
$SS_{TOT} = SS_{BET} + SS_W$
$MS_{BET} = \dfrac{SS_{BET}}{d.f._{BET}}$, where d.f.BET = k − 1
$MS_W = \dfrac{SS_W}{d.f._W}$, where d.f.W = N − k
$F = \dfrac{MS_{BET}}{MS_W}$, with numerator d.f. = k − 1 and denominator d.f. = N − k
Two-way ANOVA (r = number of rows, c = number of columns):
Row factor F = MS row factor / MS error
Column factor F = MS column factor / MS error
Interaction F = MS interaction / MS error
Degrees of freedom: row factor = r − 1; column factor = c − 1; interaction = (r − 1)(c − 1); error = rc(n − 1).

Chapter 11
Sample test statistic for the sign test (n ≥ 12), with x = proportion of plus signs among all signs: $z = \dfrac{x - 0.5}{\sqrt{0.25/n}}$
Sample test statistic for R = sum of ranks (rank-sum test): $z = \dfrac{R - \mu_R}{\sigma_R}$, where $\mu_R = \dfrac{n_1(n_1 + n_2 + 1)}{2}$ and $\sigma_R = \sqrt{\dfrac{n_1 n_2 (n_1 + n_2 + 1)}{12}}$
Spearman rank correlation coefficient: $r_s = 1 - \dfrac{6\sum d^2}{n(n^2 - 1)}$, where $d = x - y$
Sample test statistic for the runs test: R = number of runs in the sequence

Procedure for Hypothesis Testing
1. Use an appropriate experimental design and obtain random samples of data (see Sections 1.2 and 1.3).
2. In the context of the application, state the null hypothesis H0 and the alternate hypothesis H1, and set the level of significance α for the test.
3. Determine the appropriate sampling distribution and compute the sample test statistic.
4. Use the type of test (one-tailed or two-tailed) and the sampling distribution to compute the P-value of the sample test statistic.
5. Conclude the test: if P-value ≤ α, reject H0; if P-value > α, do not reject H0.
6. Interpret the conclusion in the context of the application.

Finding the P-Value Corresponding to a Sample Test Statistic
Use the appropriate sampling distribution, as described in the procedure display for each of the various tests.
Left-tailed test: P-value = area to the left of the sample test statistic.
Right-tailed test: P-value = area to the right of the sample test statistic.
Two-tailed test: if the sample test statistic lies to the left of center, P-value = twice the area to the left of the sample test statistic; if it lies to the right of center, P-value = twice the area to the right of the sample test statistic.
[Figures: shaded-area diagrams of the P-value regions for left-tailed, right-tailed, and two-tailed tests.]

Sampling Distributions for Inferences Regarding μ or p
For μ: when σ is known and x has a normal distribution, or n ≥ 30, use the normal distribution.
For μ: when σ is not known and x has a normal (or mound-shaped, symmetric) distribution, or n ≥ 30, use the Student's t distribution with d.f. = n − 1.
For p: when np > 5 and n(1 − p) > 5, use the normal distribution.
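To tie the Chapter 10 formulas to the hypothesis-testing procedure above, the following Python sketch (an illustration, not from the text; it assumes SciPy is available for the chi-square tail area) computes the goodness-of-fit statistic and applies the P-value decision rule. The observed counts and hypothesized percentages are hypothetical.

```python
from scipy import stats

observed = [38, 22, 18, 12, 10]            # O: observed frequencies (hypothetical)
percents = [0.30, 0.20, 0.20, 0.15, 0.15]  # hypothesized category proportions
n = sum(observed)
expected = [p * n for p in percents]       # E = (given percent)(sample size)

# Chi-square statistic: sum of (O - E)^2 / E over all categories
chi2_stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                     # d.f. = (number of categories) - 1
p_value = stats.chi2.sf(chi2_stat, df)     # right-tailed area under chi-square curve

alpha = 0.05
decision = "reject H0" if p_value <= alpha else "do not reject H0"
print(f"chi2 = {chi2_stat:.3f}, d.f. = {df}, P-value = {p_value:.4f}: {decision}")
```

Running the sketch carries out steps 3 through 5 of the procedure; stating the hypotheses and interpreting the result in context (steps 1, 2, and 6) remain the analyst's job.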
