
Reading 11: Correlation and Regression – LOS f Selected Practice Questions

Session 3: Quantitative Methods for Valuation
Reading 11: Correlation and Regression

LOS f: Calculate and interpret the standard error of estimate, the coefficient of determination, and a confidence interval for a regression coefficient.

 

 

Bea Carroll, CFA, has performed a regression analysis of the relationship between 6-month LIBOR and the U.S. Consumer Price Index (CPI). Her analysis indicates a standard error of estimate (SEE) that is high relative to total variability. Which of the following conclusions regarding the relationship between 6-month LIBOR and CPI can Carroll most accurately draw from her SEE analysis? The relationship between the two variables is:

A)
very weak.
B)
positively correlated.
C)
very strong.


 

The SEE is the standard deviation of the error terms in the regression and is an indicator of the strength of the relationship between the dependent and independent variables. The SEE will be low if the relationship is strong and high if the relationship is weak. Because Carroll's SEE is high relative to total variability, the relationship between 6-month LIBOR and CPI is very weak.

Craig Standish, CFA, is investigating the validity of claims associated with a fund that his company offers. The company advertises the fund as having low turnover and, hence, low management fees. The fund was created two years ago with only a few uncorrelated assets. Standish randomly draws two stocks from the fund, Grey Corporation and Jars Inc., and measures the variances and covariance of their monthly returns over the past two years. The resulting variance-covariance matrix is shown below. Standish will test whether it is reasonable to believe that the returns of Grey and Jars are uncorrelated. In doing the analysis, he plans to address the issues of spurious correlation and outliers.

 

        Grey    Jars
Grey    42.2    20.8
Jars    20.8    36.5

Standish wants to learn more about the performance of the fund. He performs a linear regression of the fund’s monthly returns over the past two years on a large capitalization index. The results are below:

ANOVA           df      SS          MS          F
Regression      1       92.53009    92.53009    28.09117
Residual        22      72.46625    3.293921
Total           23      164.9963

                    Coefficients    Standard Error    t-statistic    P-value
Intercept           0.148923        0.391669          0.380225       0.707424
Large Cap Index     1.205602        0.227467          5.30011        2.56E-05

Standish forecasts the fund’s return, based upon the prediction that the return to the large capitalization index used in the regression will be 10%. He also wants to quantify the degree of the prediction error, as well as the minimum and maximum sensitivity that the fund actually has with respect to the index.

He plans to summarize his results in a report. In the report, he will also include caveats concerning the limitations of regression analysis. He lists four limitations of regression analysis that he feels are important: relationships between variables can change over time, the decision to use a t-statistic or F-statistic for a forecast confidence interval is arbitrary, if the error terms are heteroskedastic the test statistics for the equation may not be reliable, and if the error terms are correlated with each other over time the test statistics may not be reliable.

Given the variance-covariance matrix for Grey and Jars, in a one-sided hypothesis test that the returns are positively correlated (H0: ρ = 0 vs. H1: ρ > 0), Standish would:

A)
reject the null at the 5% but not the 1% level of significance.
B)
reject the null at the 1% level of significance.
C)
need to gather more information before being able to reach a conclusion concerning significance.


First, compute the correlation coefficient: r = 20.8 / (42.2 × 36.5)^0.5 = 0.53.

The t-statistic is t = 0.53 × [(24 - 2) / (1 - 0.53^2)]^0.5 = 2.93. For df = 24 - 2 = 22, the one-tailed critical values at the 5% and 1% levels are 1.717 and 2.508, respectively, so the null is rejected at the 1% level of significance. (Study Session 3, LOS 11.g)
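For readers who want to verify the arithmetic, a minimal sketch in Python (scipy is assumed to be available; the inputs are the figures given in the vignette):

```python
import math

from scipy import stats

var_grey, var_jars, cov = 42.2, 36.5, 20.8   # entries from the variance-covariance matrix
n = 24                                       # two years of monthly returns

r = cov / math.sqrt(var_grey * var_jars)     # sample correlation, ~0.53
t = r * math.sqrt((n - 2) / (1 - r ** 2))    # test statistic with df = n - 2, ~2.93

t_crit_5 = stats.t.ppf(0.95, df=n - 2)       # one-tailed 5% critical value, ~1.717
t_crit_1 = stats.t.ppf(0.99, df=n - 2)       # one-tailed 1% critical value, ~2.508

print(round(r, 2), round(t, 2), t > t_crit_1)   # 0.53 2.93 True -> reject H0 at the 1% level
```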


In performing the correlation test on Grey and Jars, Standish would most appropriately address the issue of:

A)
spurious correlation but not the issue of outliers.
B)
neither spurious correlation nor outliers.
C)
spurious correlation and the issue of outliers.


Both these issues are important in performing correlation analysis. A single outlier observation can change the correlation coefficient from significant to not significant and even from negative (positive) to positive (negative). Even if the correlation coefficient is significant, the researcher would want to make sure there is a reason for a relationship and that the correlation is not caused by chance. (Study Session 3, LOS 11.b)


If the large capitalization index has a 10% return, then the forecast of the fund’s return will be:

A)
13.5.
B)
12.2.
C)
16.1.


The forecast is 0.149 + 1.206 × 10 ≈ 12.2, so the answer is 12.2. (Study Session 3, LOS 11.h)
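A minimal sketch of the forecast arithmetic, using the coefficient estimates from the regression output above (the 10% index return is the assumed scenario):

```python
intercept, slope = 0.148923, 1.205602   # estimates from the regression output above
index_return = 10.0                     # assumed 10% return on the large cap index

forecast = intercept + slope * index_return
print(round(forecast, 1))               # ~12.2
```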


The standard error of the estimate is:

A)
9.62.
B)
0.56.
C)
1.81.


The SEE equals the square root of the MSE, which from the ANOVA table is 72.466 / 22 = 3.294. The SEE is (3.294)^0.5 = 1.81. (Study Session 3, LOS 11.i)
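A quick way to check this from the ANOVA figures, as a sketch in Python:

```python
import math

ss_residual, df_residual = 72.46625, 22   # Residual row of the ANOVA table above
mse = ss_residual / df_residual           # mean squared error, ~3.294
see = math.sqrt(mse)                      # standard error of estimate, ~1.81
print(round(mse, 3), round(see, 2))
```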


A 95% confidence interval for the slope coefficient is:

A)
0.905 to 1.506.
B)
0.760 to 1.650.
C)
0.734 to 1.677.


The 95% confidence interval is 1.2056 ± (2.074 × 0.2275), or 0.734 to 1.677. (Study Session 3, LOS 11.f)
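A sketch of the interval calculation in Python (scipy is assumed to be available; the inputs are the slope estimate, its standard error, and the residual degrees of freedom from the output above):

```python
from scipy import stats

b1, se_b1, df = 1.205602, 0.227467, 22      # slope, its standard error, residual df
t_crit = stats.t.ppf(0.975, df)             # two-tailed 95% critical value, ~2.074
low, high = b1 - t_crit * se_b1, b1 + t_crit * se_b1
print(round(low, 3), round(high, 3))        # ~0.734 to 1.677
```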


Of the four caveats of regression analysis listed by Standish, the least accurate is:

A)
the choice to use a t-statistic or F-statistic for a forecast confidence interval is arbitrary.
B)
if the error terms are heteroskedastic the test statistics for the equation may not be reliable.
C)
the relationships of variables change over time.


The t-statistic is used for constructing the confidence interval for the forecast. The F-statistic is not used for this purpose. The other possible shortfalls listed are valid. (Study Session 3, LOS 11.i)



What does the R² of a simple regression of two variables measure, and what calculation equates the correlation coefficient to the coefficient of determination?

R² measures:  |  Correlation coefficient:

A)
percent of variability of the dependent variable that is explained by the variability of the independent variable;  R² = r²
B)
percent of variability of the independent variable that is explained by the variability of the dependent variable;  R² = r²
C)
percent of variability of the independent variable that is explained by the variability of the dependent variable;  R² = r × 2


R², or the coefficient of determination, is the square of the coefficient of correlation (r). The coefficient of correlation describes the strength of the relationship between the X and Y variables. The standard error of the residuals is the standard deviation of the dispersion about the regression line. The t-statistic measures the statistical significance of the coefficients of the regression equation. In the responses that read "percent of variability of the independent variable that is explained by the variability of the dependent variable," the roles of the dependent and independent variables are reversed.


The R² of a simple regression of two factors, A and B, measures the:

A)
impact on B of a one-unit change in A.
B)
statistical significance of the coefficient in the regression equation.
C)
percent of variability of one factor explained by the variability of the second factor.


The coefficient of determination measures the percentage of variation in the dependent variable explained by the variation in the independent variable.


Consider the following estimated regression equation:

ROE_t = 0.23 - 1.50 CE_t
The standard error of the coefficient is 0.40 and the number of observations is 32. The 95% confidence interval for the slope coefficient, b1, is:

A)
{0.683 < b1 < 2.317}.
B)
{-2.300 < b1 < -0.700}.
C)
{-2.317 < b1 < -0.683}.


With 32 - 2 = 30 degrees of freedom, the two-tailed 5% critical t-value is 2.042. The confidence interval is -1.50 ± 2.042(0.40), or {-2.317 < b1 < -0.683}.


Consider the following estimated regression equation:

AUTO_t = 0.89 + 1.32 PI_t
The standard error of the coefficient is 0.42 and the number of observations is 22. The 95% confidence interval for the slope coefficient, b1, is:

A)
{0.444 < b1 < 2.196}.
B)
{-0.766 < b1 < 3.406}.
C)
{0.480 < b1 < 2.160}.


The degrees of freedom are n - k - 1, where k is the number of independent variables (1 in this case), so df = 22 - 1 - 1 = 20. The two-tailed critical value from the Student's t-distribution for a 95% confidence level with 20 degrees of freedom is 2.086. The confidence interval is 1.32 ± 2.086(0.42), or {0.444 < b1 < 2.196}.
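The same calculation as a Python sketch (scipy assumed available):

```python
from scipy import stats

b1, se_b1 = 1.32, 0.42
n, k = 22, 1                                # observations, independent variables
df = n - k - 1                              # 22 - 1 - 1 = 20
t_crit = stats.t.ppf(0.975, df)             # two-tailed 95% critical value, ~2.086
print(round(b1 - t_crit * se_b1, 3), round(b1 + t_crit * se_b1, 3))   # ~0.444 to 2.196
```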


If X and Y are perfectly correlated, regressing Y onto X will result in which of the following:

A)
the regression line will be sloped upward.
B)
the standard error of estimate will be zero.
C)
the alpha coefficient will be zero.


If X and Y are perfectly correlated, all of the points will plot on the regression line, so the standard error of the estimate will be zero. Note that the sign of the correlation coefficient will indicate which way the regression line is pointing (there can be perfect negative correlation sloping downward as well as perfect positive correlation sloping upward). Alpha is the intercept and is not directly related to the correlation.
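A small illustration of this point, using hypothetical perfectly linear data and numpy's polyfit (the data values are made up for the example):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 3.0 + 2.0 * x                        # y is a perfect linear function of x

slope, intercept = np.polyfit(x, y, 1)   # fit y = intercept + slope * x
residuals = y - (intercept + slope * x)

print(np.allclose(residuals, 0.0))       # True: every point lies on the line, so SEE = 0
```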


A simple linear regression is run to quantify the relationship between the return on the common stocks of medium-sized companies (Mid Caps) and the return on the S&P 500 Index, using the monthly return on Mid Cap stocks as the dependent variable and the monthly return on the S&P 500 as the independent variable. The results of the regression are shown below:

             Coefficient    Standard Error of Coefficient    t-Value
Intercept    1.71           2.950                            0.58
S&P 500      1.52           0.130                            11.69

R² = 0.599

The strength of the relationship, as measured by the correlation coefficient, between the return on Mid Cap stocks and the return on the S&P 500 for the period under study was:

A)
0.130.
B)
0.774.
C)
0.599.


You are given R², the coefficient of determination, of 0.599 and are asked to find r, the coefficient of correlation. The square root of 0.599 is 0.774.
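As a one-line check (the sign of r follows the sign of the estimated slope, which is positive here):

```python
import math

r_squared = 0.599
r = math.sqrt(r_squared)   # ~0.774; positive because the slope on the S&P 500 is positive
print(round(r, 3))
```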


Assume an analyst performs two simple regressions. The first regression analysis has an R-squared of 0.90 and a slope coefficient of 0.10. The second regression analysis has an R-squared of 0.70 and a slope coefficient of 0.25. Which one of the following statements is most accurate?

A)
The first regression has more explanatory power than the second regression.
B)
The influence on the dependent variable of a one unit increase in the independent variable is 0.9 in the first analysis and 0.7 in the second analysis.
C)
Results of the second analysis are more reliable than the first analysis.


The coefficient of determination (R-squared) is the percentage of variation in the dependent variable explained by the variation in the independent variable. The larger R-squared (0.90) of the first regression means that 90% of the variability in the dependent variable is explained by variability in the independent variable, while the second regression explains only 70%. This means that the first regression has more explanatory power than the second regression. Note that beta is the slope of the regression line and does not measure explanatory power.


Assume you perform two simple regressions. The first regression analysis has an R-squared of 0.80 and a beta coefficient of 0.10. The second regression analysis has an R-squared of 0.80 and a beta coefficient of 0.25. Which one of the following statements is most accurate?

A)
The influence on the dependent variable of a one-unit increase in the independent variable is the same in both analyses.
B)
Results from the first analysis are more reliable than the second analysis.
C)
Explained variability from both analyses is equal.


The coefficient of determination (R-squared) is the percentage of variation in the dependent variable explained by the variation in the independent variable. The identical R-squared (0.80) for the first and second regressions means that 80% of the variability in the dependent variable is explained by variability in the independent variable in both regressions. This means that the first regression has the same explanatory power as the second regression.

