George Smith, an analyst with Great Lakes Investments, has created a comprehensive report on the pharmaceutical industry at the request of his boss. The Great Lakes portfolio currently has significant exposure to the pharmaceutical industry through its large equity positions in the top two pharmaceutical manufacturers. His boss asked Smith to determine a way to accurately forecast pharmaceutical sales so that Great Lakes can identify further investment opportunities in the industry and minimize its exposure to downturns in the market. Smith realizes that many factors could have an impact on sales, and he must identify a method that can quantify their effects. Smith used a multiple regression analysis with five independent variables to predict industry sales. His goal is to identify relationships that are not only statistically significant but economically significant as well. The assumptions of his model are fairly standard: a linear relationship exists between the dependent and independent variables, the independent variables are not random, and the expected value of the error term is zero.
Smith is confident with the results presented in his report. He has already done some hypothesis testing for statistical significance, including calculating a t-statistic and conducting a two-tailed test where the null hypothesis is that the regression coefficient is equal to zero versus the alternative that it is not. He feels that he has done a thorough job on the report and is ready to answer any questions posed by his boss.
However, Smith’s boss, John Sutter, is concerned that Smith has ignored several potential problems with the regression model that may affect his conclusions. He knows that when any of the basic assumptions of a regression model are violated, any results drawn from the model are questionable. He asks Smith to go back and carefully examine the effects of heteroskedasticity, multicollinearity, and serial correlation on his model. Specifically, he wants Smith to make suggestions regarding how to detect these problems and how to correct any that he encounters.
Suppose that there is evidence that the residual terms in the regression are positively correlated. The most likely effect on the statistical inferences drawn from the regression results is for Smith to commit a:
A) Type I error by incorrectly rejecting the null hypothesis that the regression parameters are equal to zero.
B) Type II error by incorrectly failing to reject the null hypothesis that the regression parameters are equal to zero.
C) Type I error by incorrectly failing to reject the null hypothesis that the regression parameters are equal to zero.
One problem with positive autocorrelation (also known as positive serial correlation) is that the standard errors of the parameter estimates will be too small and the t-statistics too large. This may lead Smith to incorrectly reject the null hypothesis that the parameters are equal to zero. In other words, Smith will incorrectly conclude that the parameters are statistically significant when in fact they are not. This is an example of a Type I error: incorrectly rejecting the null hypothesis when it should not be rejected. (Study Session 3, LOS 12.i)
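To make this concrete, here is a minimal sketch in Python (using statsmodels and purely simulated data, not Smith's actual model) of how positive serial correlation inflates naive OLS t-statistics, and how Newey-West (HAC) standard errors correct for it:

```python
import numpy as np
import statsmodels.api as sm

# Simulated regression whose regressor and errors both follow positively
# autocorrelated AR(1) processes, mimicking the problem described above.
rng = np.random.default_rng(42)
n = 200
x = np.zeros(n)
errors = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()
    errors[t] = 0.8 * errors[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + errors

X = sm.add_constant(x)

# Plain OLS standard errors understate sampling variability when the
# residuals are positively autocorrelated, so the t-statistics look too large.
naive = sm.OLS(y, X).fit()

# Newey-West (HAC) standard errors correct for the serial correlation,
# producing larger standard errors and smaller, more honest t-statistics.
robust = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})

print("naive t-statistics:", naive.tvalues)
print("HAC t-statistics:  ", robust.tvalues)
```

In a run like this, the HAC t-statistics come out noticeably smaller than the naive ones, which is exactly the overstatement the explanation above describes.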
Sutter has detected the presence of conditional heteroskedasticity in Smith’s report. This is evidence that:
A) the variance of the error term is correlated with the values of the independent variables.
B) two or more of the independent variables are highly correlated with each other.
C) the error terms are correlated with each other.
Conditional heteroskedasticity exists when the variance of the error term is correlated with the values of the independent variables.
Multicollinearity, on the other hand, occurs when two or more of the independent variables are highly correlated with each other. Serial correlation exists when the error terms are correlated with each other. (Study Session 3, LOS 12.i)
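One standard way to detect conditional heteroskedasticity is the Breusch-Pagan test, which in effect regresses the squared residuals on the independent variables. A minimal sketch with simulated data (the variables are illustrative, not Smith's):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Simulated data where the error variance grows with the regressor,
# i.e. conditional heteroskedasticity by construction.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, size=n)
y = 2.0 + 0.3 * x + rng.normal(scale=x)

X = sm.add_constant(x)
results = sm.OLS(y, X).fit()

# Breusch-Pagan regresses the squared residuals on the regressors; a small
# p-value says the error variance is related to the independent variables.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(results.resid, results.model.exog)
print(f"Breusch-Pagan LM statistic: {lm_stat:.2f}, p-value: {lm_pvalue:.4f}")
```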
Suppose there is evidence that the variance of the error term is correlated with the values of the independent variables. The most likely effect on the statistical inferences Smith can make from the regression results is to commit a:
A) Type II error by incorrectly failing to reject the null hypothesis that the regression parameters are equal to zero.
B) Type I error by incorrectly rejecting the null hypothesis that the regression parameters are equal to zero.
C) Type I error by incorrectly failing to reject the null hypothesis that the regression parameters are equal to zero.
One problem with heteroskedasticity is that the standard errors of the parameter estimates will be too small and the t-statistics too large. This will lead Smith to incorrectly reject the null hypothesis that the parameters are equal to zero. In other words, Smith will incorrectly conclude that the parameters are statistically significant when in fact they are not. This is an example of a Type I error: incorrectly rejecting the null hypothesis when it should not be rejected. (Study Session 3, LOS 12.i)
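The usual correction is to keep the OLS coefficient estimates but compute heteroskedasticity-consistent (White-type) standard errors. A hedged sketch with the same kind of simulated heteroskedastic data as above:

```python
import numpy as np
import statsmodels.api as sm

# Same flavor of simulated heteroskedastic data as in the Breusch-Pagan sketch.
rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1, 10, size=n)
y = 2.0 + 0.3 * x + rng.normal(scale=x)

X = sm.add_constant(x)

# Plain OLS standard errors are unreliable when the error variance
# depends on the regressors.
naive = sm.OLS(y, X).fit()

# White-type heteroskedasticity-consistent covariance (the 'HC3' variant)
# corrects the standard errors without changing the coefficient estimates.
robust = sm.OLS(y, X).fit(cov_type="HC3")

print("naive standard errors: ", naive.bse)
print("robust standard errors:", robust.bse)
```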
Which of the following is most likely to indicate that two or more of the independent variables, or linear combinations of independent variables, may be highly correlated with each other? Unless otherwise noted, significant and insignificant mean significantly different from zero and not significantly different from zero, respectively.
A) The R² is low, the F-statistic is insignificant, and the Durbin-Watson statistic is significant.
B) The R² is high, the F-statistic is significant, and the t-statistics on the individual slope coefficients are insignificant.
C) The R² is high, the F-statistic is significant, and the t-statistics on the individual slope coefficients are significant.
Multicollinearity occurs when two or more of the independent variables, or linear combinations of the independent variables, are highly correlated with each other. The classic symptom of multicollinearity is a high R² and a significant F-statistic combined with t-statistics on the individual slope coefficients that are insignificant. (Study Session 3, LOS 12.j)
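Beyond that pattern of summary statistics, a common diagnostic is the variance inflation factor (VIF) of each regressor. A minimal sketch with two deliberately collinear simulated regressors (illustrative only):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Two simulated regressors that are nearly linear combinations of each other.
rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # almost identical to x1
y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))

# A VIF well above 10 is a common rule-of-thumb signal that a regressor is
# highly correlated with the other independent variables.
for i, name in enumerate(["const", "x1", "x2"]):
    print(name, variance_inflation_factor(X, i))
```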
Suppose there is evidence that two or more of the independent variables, or linear combinations of independent variables, may be highly correlated with each other. The most likely effect on the statistical inferences Smith can make from the regression results is to commit a:
A) Type II error by incorrectly failing to reject the null hypothesis that the regression parameters are equal to zero.
B) Type I error by incorrectly rejecting the null hypothesis that the regression parameters are equal to zero.
C) Type I error by incorrectly failing to reject the null hypothesis that the regression parameters are equal to zero.
One problem with multicollinearity is that the standard errors of the parameter estimates will be too large and the t-statistics too small. This will lead Smith to incorrectly fail to reject the null hypothesis that the parameters are equal to zero. In other words, Smith will incorrectly conclude that the parameters are not statistically significant when in fact they are. This is an example of a Type II error: incorrectly failing to reject the null hypothesis when it should be rejected. (Study Session 3, LOS 12.j)
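To see the mechanism, the sketch below (simulated, illustrative data again) fits the model with both collinear regressors and then with one dropped; removing the redundant variable, the most common practical remedy, sharply shrinks the remaining slope's standard error:

```python
import numpy as np
import statsmodels.api as sm

# Two highly collinear simulated regressors, as in the VIF sketch above.
rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)
y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(size=n)

# With both regressors, the slope standard errors are inflated and the
# individual t-statistics can look insignificant even though the variables
# clearly matter jointly (high R-squared, significant F-statistic).
both = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()

# Dropping one of the correlated variables is a common correction; the
# remaining coefficient is then estimated far more precisely.
one = sm.OLS(y, sm.add_constant(x1)).fit()

print("slope standard errors with both regressors:", both.bse[1:])
print("slope standard error after dropping x2:    ", one.bse[1])
```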
Using the Durbin-Watson test statistic, Smith rejects the null hypothesis suggested by the test (that no serial correlation is present). This is evidence that:
A) two or more of the independent variables are highly correlated with each other.
B) the error term is normally distributed.
C) the error terms are correlated with each other.
Serial correlation (also called autocorrelation) exists when the error terms are correlated with each other.
Multicollinearity, on the other hand, occurs when two or more of the independent variables are highly correlated with each other. One assumption of multiple regression is that the error term is normally distributed. (Study Session 3, LOS 12.i)
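For reference, the Durbin-Watson statistic is straightforward to compute from the residuals; values near 2 indicate no serial correlation, while values well below 2 point to positive serial correlation. A minimal sketch reusing the simulated AR(1)-error setup from the earlier serial-correlation example:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Regression whose errors follow a positively autocorrelated AR(1) process.
rng = np.random.default_rng(42)
n = 200
x = np.zeros(n)
errors = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()
    errors[t] = 0.8 * errors[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + errors

results = sm.OLS(y, sm.add_constant(x)).fit()

# The statistic is roughly 2 * (1 - rho_hat); with positive serial correlation
# it falls well below 2, leading to rejection of the null of no serial correlation.
print("Durbin-Watson statistic:", durbin_watson(results.resid))
```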