EVALUATION OF THE REGRESSION EQUATION

Once the individual parameter estimates â and b̂ have been tested for statistical significance using t-tests, researchers often wish to evaluate the complete estimated regression equation, Ŷ = â + b̂X. Evaluation of the regression equation involves determining how well the estimated regression equation “explains” the variation

p-value

The exact level of significance for a test statistic, which is the probability of finding significance when none exists.

5Although this section discusses t-statistics, a p-value can be computed for any test statistic, and it gives the exact significance level for the associated test statistic.

Now try Technical Problem 5.

in Y. Two statistics are frequently employed to evaluate the overall acceptability of a regression equation. The first is called the coefficient of determination, normally denoted as “R2” and pronounced “R-square.” The second is the F-statistic, which is used to test whether the overall equation is statistically significant.

The Coefficient of Determination (R2)

The coefficient of determination (R2) measures the fraction of the total variation in the dependent variable that is explained by the regression equation. In terms of the example used earlier, it is the fraction of the variation in sales that is explained by variation in advertising expenditures. Therefore, the value of R2 can range from 0 (the regression equation explains none of the variation in Y) to 1 (the regression equation explains all the variation in Y). While the R2 is printed out as a decimal value by most computers, the R2 is often spoken of in terms of a percentage. For example, if the calculated R2 is 0.7542, we could say that approximately 75 percent of the variation in Y is explained by the model.
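To make this interpretation concrete, the following minimal sketch (in Python, using purely hypothetical numbers) computes R2 as 1 minus the ratio of unexplained to total variation; the helper function r_squared is our own and not part of any particular statistics package.

    import numpy as np

    def r_squared(y, y_hat):
        # Unexplained (residual) variation and total variation in the dependent variable
        sse = np.sum((y - y_hat) ** 2)
        sst = np.sum((y - np.mean(y)) ** 2)
        # Fraction of total variation explained by the regression
        return 1.0 - sse / sst

    # Hypothetical values of Y and the fitted values from an estimated regression line
    y = np.array([10.0, 12.0, 15.0, 19.0, 24.0])
    y_hat = np.array([10.5, 12.5, 15.5, 18.5, 23.0])

    # Printed as a decimal; multiply by 100 to speak of it as a percentage
    print(round(r_squared(y, y_hat), 4))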

If the value of R2 is high, there is high correlation between the dependent and independent variables; if it is low, there is low correlation. For example, in Figure 4.4, Panel A, the observations in the scatter diagram all lie rather close to the regression line. Because the deviations from the line are small, the correlation between X and Y is high and the value of R2 will be high. In the extreme case when all of the observations lie on a straight line, R2 will be equal to 1. In Panel B, the observations are scattered widely around the regression line. The correlation between X and Y in this case is much less than that in Panel A, so the value of R2 is rather small.

We must caution you that high correlation between two variables (or even a statistically significant regression coefficient) does not necessarily mean the

coefficient of determination (R2)

The fraction of total variation in the dependent variable explained by the regression equation.

[Figure 4.4  High and Low Correlation: Y plotted against X. Panel A shows observations tightly clustered around the regression line; Panel B shows observations widely scattered around it.]

variation in the dependent variable Y is caused by the variation in the independent variable X. It might be the case that variation in Y is caused by variation in Z, but X happens to be correlated to Z. Thus, Y and X will be correlated even though variation in X does not cause Y to vary. A high R2 does not prove that Y and X are causally related, only that Y and X are correlated. We summarize this discussion with a statistical relation:

Relation The coefficient of determination (R2) measures the fraction of the total variation in Y that is explained by the variation in X. R2 ranges in value from 0 (the regression explains none of the variation in Y) to 1 (the regression explains all the variation in Y). A high R2 indicates Y and X are highly correlated and the scatter diagram tightly fits the sample regression line.

The F-Statistic

Although the R2 is a widely used statistic, it gives only a subjective answer to the question of how much of the variation in the dependent variable must be explained by the regression equation for the equation to be viewed as statistically significant. An alternative is the F-statistic. In very general terms, this statistic provides a measure of the ratio of explained variation (in the dependent variable) to unexplained variation. To test whether the overall equation is significant, this statistic is compared with a critical F-value obtained from an F-table (at the end of this text). The critical F-value is identified by two separate degrees of freedom and the significance level. The first of the degrees of freedom is k − 1 (i.e., the number of independent variables) and the second is n − k. If the value of the calculated F-statistic exceeds the critical F-value, the regression equation is statistically significant at the specified significance level. The discussion of the F-statistic is summarized in a statistical relation:

Relation The F-statistic is used to test whether the regression equation as a whole explains a significant amount of the variation in Y. The test involves comparing the F-statistic to the critical F-value with k − 1 and n − k degrees of freedom and the chosen level of significance. If the F-statistic exceeds the critical F-value, the regression equation is statistically significant.

Rather than performing an F-test, which requires that you arbitrarily select a significance or confidence level, you may wish to report the exact level of significance for the F-statistic. The p-value for the F-statistic gives the exact level of significance for the regression equation as a whole. One minus the p-value is the exact level of confidence associated with the computed F-statistic.
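The following sketch shows both approaches in code, assuming Python with the scipy package is available; the R2, n, and k values are hypothetical, and the F-statistic is formed from R2 using the standard identity for a regression that includes an intercept (the ratio of explained to unexplained variation, each divided by its degrees of freedom).

    from scipy import stats

    # Hypothetical regression results: R-square, number of observations, number of parameters
    r_square, n, k = 0.60, 25, 2
    df1, df2 = k - 1, n - k

    # F-statistic: explained variation relative to unexplained variation, per degree of freedom
    F_stat = (r_square / df1) / ((1.0 - r_square) / df2)

    # Critical F-value at the 5 percent significance level
    F_crit = stats.f.ppf(0.95, df1, df2)

    # Exact significance level (p-value) for the computed F-statistic
    p_value = stats.f.sf(F_stat, df1, df2)

    print(F_stat > F_crit)        # True if the equation is significant at the 5 percent level
    print(p_value, 1 - p_value)   # exact significance level and the associated confidence level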

All the statistics you will need to analyze a regression—the coefficient estimates, the standard errors, the t-ratios, R2, the F-statistic, and the p-value—are automatically calculated and printed by most available regression programs. As mentioned earlier, our objective is not that you understand how these statistics are calculated. Rather, we want you to know how to set up a regression and interpret the results. We now provide you with a hypothetical example of a regression analysis that might be performed by a manager of a firm.
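Before turning to that example, here is a rough sketch of how such a printout can be produced; the choice of Python's statsmodels package and the small data set are our own illustrative assumptions, and any standard regression program reports the same items.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical observations on a dependent variable y and one explanatory variable x
    x = np.array([2.0, 4.0, 5.0, 7.0, 8.0, 10.0, 11.0, 13.0])
    y = np.array([30.0, 38.0, 41.0, 49.0, 52.0, 59.0, 63.0, 70.0])

    X = sm.add_constant(x)           # adds the intercept term a
    results = sm.OLS(y, X).fit()     # estimates y = a + b*x by least squares

    print(results.params)            # parameter estimates (a-hat, b-hat)
    print(results.bse)               # standard errors
    print(results.tvalues)           # t-ratios
    print(results.pvalues)           # p-values for the t-tests
    print(results.rsquared)          # R-square
    print(results.fvalue, results.f_pvalue)   # F-statistic and its p-value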

F-statistic

A statistic used to test whether the overall regression equation is statistically significant.

Controlling Product Quality at SLM: A Regression Example

Specialty Lens Manufacturing (SLM) produces contact lenses for patients who are unable to wear standard contact lenses. These specialty contact lenses must meet extraordinarily strict standards. The production process is not perfect, however, and some lenses have slight flaws. Patients receiving flawed lenses almost always detect the flaws, and the lenses are returned to SLM for replacement.

Returned lenses are costly, in terms of both redundant production costs and diminished corporate reputation for SLM. Every week SLM produces 2,400 lenses, and inspectors using high-powered microscopes have time to examine only a fraction of the lenses before they are shipped to doctors.

Management at SLM decided to measure the effectiveness of its inspection process using regression analysis. During a 22-week time period, SLM collected data each week on the number of lenses produced that week that were later returned by doctors because of flaws (F) and the number of hours spent that week examining lenses (H). The manager estimated the regression equation

F = a + bH

using the 22 weekly observations on F and H. The computer printed out the following output:

DEPENDENT VARIABLE: F       R-SQUARE     F-RATIO     P-VALUE ON F
OBSERVATIONS: 22            0.4527       16.54       0.001

VARIABLE     PARAMETER ESTIMATE     STANDARD ERROR     T-RATIO     P-VALUE
INTERCEPT    90.0                   28.13               3.20       0.004
H            -0.80                   0.32              -2.50       0.021

As expected, â is positive and b̂ is negative. If no inspection is done (H = 0), SLM’s management expects 90 lenses from each week’s production to be returned as defective. The estimate of b (b̂ = ΔF/ΔH = −0.80) indicates that each additional hour per week spent inspecting lenses will decrease the number of flawed lenses by 0.8. Thus it takes 10 extra hours of inspection to find eight more flawed lenses.

To determine if the parameter estimates â and b̂ are significantly different from zero, the manager can conduct a t-test on each estimated parameter. The t-ratios for â and b̂ are 3.20 and −2.50, respectively:

t_â = 90.0/28.13 = 3.20 and t_b̂ = −0.80/0.32 = −2.50

The critical t-value is found in the table at the end of the book. There are 22 observations and two parameters, so the degrees of freedom are n − k = 22 − 2 = 20. Choosing the 5 percent level of significance (a 95 percent level of confidence), the critical t-value is 2.086. Because the absolute values of t_â and t_b̂ both exceed 2.086, both â and b̂ are statistically significant at the 5 percent significance level.
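A brief sketch of these calculations, assuming scipy is available for the critical t-value (the estimates and standard errors are those shown in the printout above):

    from scipy import stats

    # Parameter estimates and standard errors from the SLM printout
    a_hat, se_a = 90.0, 28.13
    b_hat, se_b = -0.80, 0.32

    t_a = a_hat / se_a      # 3.20
    t_b = b_hat / se_b      # -2.50

    # Two-tailed critical t-value at the 5 percent level with n - k = 20 degrees of freedom
    t_crit = stats.t.ppf(0.975, 22 - 2)   # about 2.086

    print(abs(t_a) > t_crit, abs(t_b) > t_crit)   # True, True: both estimates are significant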

ILLUSTRATION 4.1  R&D Expenditures and the Value of the Firm

To determine how much to spend on research and development (R&D) activities, a manager may wish to know how R&D expenditures affect the value of the firm. To investigate the relation between the value of a firm and the amount the firm spends on R&D, Wallin and Gilmanᵃ used simple regression analysis to estimate the model

V = a + bR

where the value of the firm (V) is measured by the price-to-earnings ratio, and the level of expenditures on R&D (R) is measured by R&D expenditures as a percentage of the firm’s total sales.

Wallin and Gilman collected a cross-sectional data set on the 20 firms with the largest R&D expenditures in the 1981–1982 time period. The computer output from a regression program and a scatter diagram showing the 20 data points with the sample regression line are presented here:

[Scatter diagram: price-to-earnings ratio (V) plotted against R&D expenditures as a percent of sales (R) for the 20 firms, with the sample regression line V̂ = 6.0 + 0.74R.]

DEPENDENT VARIABLE: V       R-SQUARE     F-RATIO     P-VALUE ON F
OBSERVATIONS: 20            0.5274       20.090      0.0003

VARIABLE     PARAMETER ESTIMATE     STANDARD ERROR     T-RATIO     P-VALUE
INTERCEPT    6.00                   0.917              6.54        0.0001
R            0.74                   0.165              4.48        0.0003

First, we test to see if the estimate of a is statistically significant. To test for statistical significance, use the t-ratio for â, which the computer has calculated for you as the ratio of the parameter estimate to its standard error:

t_â = 6.00/0.917 = 6.54

and compare this value with the critical value of t. We use a 5 percent significance level (a 95 percent confidence level).

Because there are 20 observations and two parameters are estimated, there are 20 − 2 = 18 degrees of freedom. The table at the end of the text (critical t-values) gives us a critical value of 2.101. The calculated t-value for â is larger than 2.101, so we conclude that â is significantly different from 0. The p-value for â is so small (0.0001) that the probability of finding significance when none exists is virtually 0. In this case, the selection of a 5 percent significance level greatly underestimates the exact degree of significance associated with the estimate of a. The estimated value of a suggests that firms that spend nothing on R&D, on average, have price-to-earnings ratios of 6.

The estimate of b (0.74) is positive, which suggests V and R are directly related. The calculated t-ratio is 4.48, which is greater than the critical value of t. The p-value for b̂ indicates the significance level of the t-test could have been set as low as 0.0003, or 0.03 percent, and the hypothesis that b = 0 could be rejected. In other words, with a t-statistic equal to 4.48, the probability of incorrectly concluding that R&D expenditures significantly affect the value of a firm is just 0.03 percent. Or stated equivalently in terms of a confidence level, we can be 99.97 percent confident that the t-test would not indicate statistical significance if none existed. The value of b̂ implies that if a firm increases R&D expenditures by 1 percent (of sales), the firm can expect its value (as measured by the P/E ratio) to rise by 0.74.

The R2 for the regression equation indicates that about 53 percent of the total variation in the value of a firm is explained by the regression equation; that is, 53 percent of the variation in V is explained by the vari- ation in R. The regression equation leaves 47 percent of the variation in the value of the firm unexplained.

The F-ratio is used to test for significance of the entire equation. To determine the critical value of F (with a 5 percent significance level), it is necessary to determine the degrees of freedom. In this case, k − 1 = 2 − 1 = 1 and n − k = 20 − 2 = 18 degrees of freedom. In the table of values of the F-statistic at the end of the text, you can look down the k − 1 = 1 column until you get to the 18th row (n − k = 18) and read the value 4.41. Since the calculated F-value (20.090) exceeds 4.41, the regression equation is significant at the 5 percent significance level. In fact, the F-value of 20.090 is much larger than the critical F-value for a 5 percent level of significance, suggesting that the exact level of significance will be much lower than 0.05. The p-value for the F-statistic, 0.0003, confirms that the exact significance level is much smaller than 0.05.

ᵃC. Wallin and J. Gilman, “Determining the Optimal Level for R&D Spending,” Research Management 14, no. 5 (Sep./Oct. 1986), pp. 19–24.

Source: Adapted from a regression problem presented in Terry Sincich, A Course in Modern Business Statistics (Dellen/Macmillan, 1994), p. 432.

Instead of performing a t-test at a fixed level of significance, the manager could assess the significance of the parameter estimates by examining the p-values for â and b̂. The exact level of significance for â is 0.004, or 0.4 percent, which indicates that the t-statistic of 3.20 is just large enough to reject the hypothesis that a is 0 at a significance level of 0.004 (or a confidence level of 0.996). The p-value for â is so small that the manager almost certainly has avoided committing a Type I error (finding statistical significance where there is none). The exact level of significance for b̂ is 0.021, or 2.1 percent. For both parameter estimates, the p-values provide a stronger assessment of statistical significance than could be established by satisfying the requirements of a t-test performed at a 5 percent level of significance.
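A sketch of how these exact significance levels can be recovered from the t-ratios, assuming scipy is available (the t-ratios and degrees of freedom are those of the SLM regression):

    from scipy import stats

    df = 22 - 2   # n - k = 20 degrees of freedom

    # Two-tailed p-values for the t-ratios reported in the SLM printout
    p_a = 2 * stats.t.sf(abs(3.20), df)    # roughly 0.004
    p_b = 2 * stats.t.sf(abs(-2.50), df)   # roughly 0.021

    print(p_a, 1 - p_a)   # exact significance level and confidence level for a-hat
    print(p_b, 1 - p_b)   # exact significance level and confidence level for b-hat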

Overall, because R2 = 0.4527, the equation explains about 45 percent of the total variation in the dependent variable (F), with 55 percent of the variation in F remaining unexplained. To test for significance of the entire equation, the manager could use an F-test. The critical F-value is obtained from the table at the end of the book. Since k − 1 = 2 − 1 = 1 and n − k = 22 − 2 = 20, the critical F-value at the 5 percent significance level is 4.35. The F-statistic calculated by the computer, 16.54, exceeds 4.35, and the entire equation is statistically significant. The p-value for the F-statistic shows that the exact level of significance for the entire equation is 0.001, or 0.1 percent (a 99.9 percent level of confidence).
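The same check can be done in code, again assuming scipy (the F-ratio and degrees of freedom are those reported in the SLM printout):

    from scipy import stats

    F_stat = 16.54
    df1, df2 = 2 - 1, 22 - 2                # k - 1 = 1 and n - k = 20

    F_crit = stats.f.ppf(0.95, df1, df2)    # about 4.35
    p_value = stats.f.sf(F_stat, df1, df2)  # exact significance level (the printout shows 0.001)

    print(F_stat > F_crit)   # True: the equation is significant at the 5 percent level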

Using the estimated equation, F̂ = 90.0 − 0.80H, the manager can estimate the number of flawed lenses that will be shipped for various hours of weekly inspection. For example, if inspectors spend 60 hours per week examining lenses, SLM can expect 42 (= 90 − 0.8 × 60) of the lenses shipped to be flawed.
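A one-line helper makes such what-if calculations easy to repeat; the function name is our own, not part of the example:

    def predicted_flawed_lenses(hours):
        # Estimated SLM equation: F-hat = 90.0 - 0.80 * H
        return 90.0 - 0.80 * hours

    print(predicted_flawed_lenses(60))   # 42.0 flawed lenses expected with 60 hours of inspection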

Now try Technical Problems 6–7.
