
Assume an analyst performs two simple regressions. The first regression analysis has an R-squared of 0.90 and a slope coefficient of 0.10. The second regression analysis has an R-squared of 0.70 and a slope coefficient of 0.25. Which one of the following statements is most accurate?

A)
The first regression has more explanatory power than the second regression.
B)
The influence on the dependent variable of a one unit increase in the independent variable is 0.9 in the first analysis and 0.7 in the second analysis.
C)
Results of the second analysis are more reliable than the first analysis.


The coefficient of determination (R-squared) is the percentage of variation in the dependent variable explained by the variation in the independent variable. The larger R-squared (0.90) of the first regression means that 90% of the variability in the dependent variable is explained by variability in the independent variable, while 70% of that is explained in the second regression. This means that the first regression has more explanatory power than the second regression. Note that the Beta is the slope of the regression line and doesn’t measure explanatory power.
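
To see numerically why this is so, here is a minimal Python sketch (simulated data chosen only to roughly mimic the two regressions in the question, not part of the original answer): a small slope can come with a high R-squared, and a larger slope with a lower one.

# Sketch with simulated data: R-squared (explanatory power) and the slope
# coefficient (sensitivity) are separate properties of a regression.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=200)

y1 = 0.10 * x + rng.normal(scale=0.03, size=200)   # small slope, little noise
y2 = 0.25 * x + rng.normal(scale=0.15, size=200)   # larger slope, more noise

for label, y in (("Regression 1", y1), ("Regression 2", y2)):
    slope, intercept = np.polyfit(x, y, 1)
    r_squared = np.corrcoef(x, y)[0, 1] ** 2       # for simple regression, R-squared = r^2
    print(f"{label}: slope = {slope:.2f}, R-squared = {r_squared:.2f}")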


Assume you perform two simple regressions. The first regression analysis has an R-squared of 0.80 and a beta coefficient of 0.10. The second regression analysis has an R-squared of 0.80 and a beta coefficient of 0.25. Which one of the following statements is most accurate?

A)
The influence on the dependent variable of a one-unit increase in the independent variable is the same in both analyses.
B)
Results from the first analysis are more reliable than the second analysis.
C)
Explained variability from both analyses is equal.


The coefficient of determination (R-squared) is the percentage of variation in the dependent variable explained by the variation in the independent variable. Because the R-squared (0.80) is identical for the two regressions, 80% of the variability in the dependent variable is explained by variability in the independent variable in both cases. This means that the first regression has the same explanatory power as the second regression.


An analyst performs two simple regressions. The first regression analysis has an R-squared of 0.40 and a beta coefficient of 1.2. The second regression analysis has an R-squared of 0.77 and a beta coefficient of 1.75. Which one of the following statements is most accurate?

A)
The first regression equation has more explanatory power than the second regression equation.
B)
The second regression equation has more explanatory power than the first regression equation.
C)
The R-squared of the first regression indicates that there is a 0.40 correlation between the independent and the dependent variables.


The coefficient of determination (R-squared) is the percentage of variation in the dependent variable explained by the variation in the independent variable. The larger R-squared (0.77) of the second regression means that 77% of the variability in the dependent variable is explained by variability in the independent variable, while only 40% is explained in the first regression. This means that the second regression has more explanatory power than the first regression. Note that the beta is the slope of the regression line and does not measure explanatory power.


Consider the following estimated regression equation:

ROEt = 0.23 - 1.50 CEt
The standard error of the coefficient is 0.40 and the number of observations is 32. The 95% confidence interval for the slope coefficient, b1, is:

A)
{0.683 < b1 < 2.317}.
B)
{-2.300 < b1 < -0.700}.
C)
{-2.317 < b1 < -0.683}.


The degrees of freedom are n - k - 1 = 32 - 1 - 1 = 30, and the two-tailed critical t-value for a 95% confidence level with 30 degrees of freedom is 2.042. The confidence interval is -1.50 ± 2.042 (0.40), or {-2.317 < b1 < -0.683}.


Consider the following estimated regression equation:

AUTOt = 0.89 + 1.32 PIt
The standard error of the coefficient is 0.42 and the number of observations is 22. The 95% confidence interval for the slope coefficient, b1, is:

A)
{0.444 < b1 < 2.196}.
B)
{-0.766 < b1 < 3.406}.
C)
{0.480 < b1 < 2.160}.


The degrees of freedom are found by n - k - 1, with k being the number of independent variables (1 in this case): df = 22 - 1 - 1 = 20. Looking up 20 degrees of freedom on the Student's t-distribution for a 95% confidence level and a two-tailed test gives a critical value of 2.086. The confidence interval is 1.32 ± 2.086 (0.42), or {0.444 < b1 < 2.196}.
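
Both confidence intervals above follow the same recipe: slope ± critical t-value × standard error, with n - k - 1 degrees of freedom. The following minimal Python sketch reproduces them (the helper name slope_ci is my own; the critical value comes from scipy's Student's t quantile function):

# Sketch: 95% confidence interval for a slope coefficient, b1 +/- t_crit * SE(b1).
from scipy.stats import t

def slope_ci(b1, se, n, k=1, conf=0.95):
    df = n - k - 1                           # observations - regressors - 1
    t_crit = t.ppf(1 - (1 - conf) / 2, df)   # two-tailed critical value
    return b1 - t_crit * se, b1 + t_crit * se

print(slope_ci(-1.50, 0.40, 32))   # ROE question:  roughly (-2.317, -0.683)
print(slope_ci(1.32, 0.42, 22))    # AUTO question: roughly (0.444, 2.196)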


Craig Standish, CFA, is investigating the validity of claims associated with a fund that his company offers. The company advertises the fund as having low turnover and, hence, low management fees. The fund was created two years ago with only a few uncorrelated assets. Standish randomly draws two stocks from the fund, Grey Corporation and Jars Inc., and measures the variances and covariance of their monthly returns over the past two years. The resulting variance covariance matrix is shown below. Standish will test whether it is reasonable to believe that the returns of Grey and Jars are uncorrelated. In doing the analysis, he plans to address the issue of spurious correlation and outliers.

 

        Grey    Jars
Grey    42.2    20.8
Jars    20.8    36.5

Standish wants to learn more about the performance of the fund. He performs a linear regression of the fund’s monthly returns over the past two years on a large capitalization index. The results are below:

ANOVA

                    df        SS          MS          F
Regression           1    92.53009    92.53009    28.09117
Residual            22    72.46625     3.293921
Total               23    164.9963

                   Coefficients   Standard Error   t-statistic   P-value
Intercept            0.148923        0.391669        0.380225    0.707424
Large Cap Index      1.205602        0.227467        5.30011     2.56E-05

Standish forecasts the fund’s return, based upon the prediction that the return to the large capitalization index used in the regression will be 10%. He also wants to quantify the degree of the prediction error, as well as the minimum and maximum sensitivity that the fund actually has with respect to the index.

He plans to summarize his results in a report. In the report, he will also include caveats concerning the limitations of regression analysis. He lists four limitations of regression analysis that he feels are important: relationships between variables can change over time, the decision to use a t-statistic or F-statistic for a forecast confidence interval is arbitrary, if the error terms are heteroskedastic the test statistics for the equation may not be reliable, and if the error terms are correlated with each other over time the test statistics may not be reliable.

Given the variance/covariance matrix for Grey and Jars, in a one-sided hypothesis test that the returns are positively correlated (H0: ρ = 0 vs. H1: ρ > 0), Standish would:

A)
reject the null at the 5% but not the 1% level of significance.
B)
reject the null at the 1% level of significance.
C)
need to gather more information before being able to reach a conclusion concerning significance.


First, we must compute the correlation coefficient: r = 20.8 / (42.2 × 36.5)^0.5 = 0.53.

The t-statistic is t = r × [(n - 2) / (1 - r²)]^0.5 = 0.53 × [(24 - 2) / (1 - 0.53 × 0.53)]^0.5 = 2.93. For df = 24 - 2 = 22, the one-tailed critical t-values at the 5% and 1% levels are 1.717 and 2.508, respectively. Since 2.93 > 2.508, the null is rejected at the 1% level of significance. (Study Session 3, LOS 11.g)
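
For readers who want to reproduce the test, here is a minimal Python check (it assumes, as in the vignette, 24 monthly observations over two years; the critical values come from scipy's t quantile function):

# Sketch: test H0: rho = 0 vs. H1: rho > 0 from the variance/covariance matrix.
from math import sqrt
from scipy.stats import t

cov, var_grey, var_jars = 20.8, 42.2, 36.5
n = 24                                    # two years of monthly returns

r = cov / sqrt(var_grey * var_jars)       # correlation coefficient, about 0.53
t_stat = r * sqrt((n - 2) / (1 - r**2))   # about 2.93
df = n - 2

for level in (0.05, 0.01):
    t_crit = t.ppf(1 - level, df)         # one-tailed critical value
    print(f"{level:.0%} level: t = {t_stat:.2f}, critical = {t_crit:.3f}, reject = {t_stat > t_crit}")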


In performing the correlation test on Grey and Jars, Standish would most appropriately address the issue of:

A)
spurious correlation but not the issue of outliers.
B)
neither outliers nor correlation.
C)
spurious correlation and the issue of outliers.


Both these issues are important in performing correlation analysis. A single outlier observation can change the correlation coefficient from significant to not significant and even from negative (positive) to positive (negative). Even if the correlation coefficient is significant, the researcher would want to make sure there is a reason for a relationship and that the correlation is not caused by chance. (Study Session 3, LOS 11.b)


If the large capitalization index has a 10% return, then the forecast of the fund’s return will be:

A)
13.5.
B)
12.2.
C)
16.1.


The forecast is 12.209 = 0.149 + 1.206 × 10, so the answer is 12.2. (Study Session 3, LOS 11.h)


The standard error of the estimate is:

A)
9.62.
B)
0.56.
C)
1.81.


SEE equals the square root of the MSE, which from the ANOVA table is 72.466 / 22 = 3.294. The SEE is (3.294)^0.5 = 1.81. (Study Session 3, LOS 11.i)
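
As a quick numeric check of the forecast and the standard error of estimate computed in the two items above, here is a short sketch that simply re-uses the coefficient and ANOVA figures reported earlier:

# Sketch: forecast and SEE from the regression output in the vignette.
from math import sqrt

intercept, slope = 0.148923, 1.205602
sse, df_resid = 72.46625, 22

forecast = intercept + slope * 10   # predicted fund return if the index returns 10%
mse = sse / df_resid                # mean squared error from the ANOVA table
see = sqrt(mse)                     # standard error of estimate

print(f"forecast = {forecast:.1f}, MSE = {mse:.3f}, SEE = {see:.2f}")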


A 95% confidence interval for the slope coefficient is:

A)
0.905 to 1.506.
B)
0.760 to 1.650.
C)
0.734 to 1.677.


With 22 degrees of freedom (the residual degrees of freedom in the ANOVA table), the two-tailed critical t-value at the 95% confidence level is 2.074. The 95% confidence interval is 1.2056 ± (2.074 × 0.2275), or 0.734 to 1.677. (Study Session 3, LOS 11.f)


Of the four caveats of regression analysis listed by Standish, the least accurate is:

A)
the choice to use a t-statistic or F-statistic for a forecast confidence interval is arbitrary.
B)
if the error terms are heteroskedastic the test statistics for the equation may not be reliable.
C)
the relationships of variables change over time.


The t-statistic is used for constructing the confidence interval for the forecast. The F-statistic is not used for this purpose. The other possible shortfalls listed are valid. (Study Session 3, LOS 11.i)



What does the R2 of a simple regression of two variables measure and what calculation is used to equate the correlation coefficient to the coefficient of determination?

R2 measures: | Correlation coefficient:

A)
percent of variability of the dependent variable that is explained by the variability of the independent variable | R2 = r2
B)
percent of variability of the independent variable that is explained by the variability of the dependent variable | R2 = r2
C)
percent of variability of the independent variable that is explained by the variability of the dependent variable | R2 = r × 2


R2, or the Coefficient of Determination, is the square of the coefficient of correlation (r). The coefficient of correlation describes the strength of the relationship between the X and Y variables. The standard error of the residuals is the standard deviation of the dispersion about the regression line. The t-statistic measures the statistical significance of the coefficients of the regression equation. In the response: "percent of variability of the independent variable that is explained by the variability of the dependent variable," the definitions of the variables are reversed.
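
A small numeric illustration of the R2 = r2 identity for a simple regression (simulated data, not part of the original answer):

# Sketch: for a one-variable regression, R-squared equals the squared correlation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)
r_squared = 1 - residuals.var() / y.var()   # 1 - SSE/SST
r = np.corrcoef(x, y)[0, 1]

print(np.isclose(r_squared, r**2))          # True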


The R2 of a simple regression of two factors, A and B, measures the:

A)
impact on B of a one-unit change in A.
B)
statistical significance of the coefficient in the regression equation.
C)
percent of variability of one factor explained by the variability of the second factor.


The coefficient of determination measures the percentage of variation in the dependent variable explained by the variation in the independent variable.

