An analyst is examining the relationship between two random variables, RCRANTZ and GSTERN. He performs a linear regression that produces an estimate of the relationship:
RCRANTZ = 61.4 − 5.9 × GSTERN

Which interpretation of this regression equation is least accurate?
A)
The covariance of RCRANTZ and GSTERN is negative.
B)
The intercept term implies that if GSTERN is zero, RCRANTZ is 61.4.
C)
If GSTERN increases by one unit, RCRANTZ should increase by 5.9 units.



The slope coefficient in this regression is -5.9. This means a one-unit increase in GSTERN suggests a decrease of 5.9 units in RCRANTZ. The slope coefficient is the covariance divided by the variance of the independent variable. Since variance (a squared term) must be positive, a negative slope coefficient implies that the covariance is negative.
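A quick numerical check of that relationship, as a minimal Python sketch (the simulated data below are hypothetical, since the question gives no raw observations):

import numpy as np

rng = np.random.default_rng(0)
gstern = rng.normal(size=200)                          # hypothetical independent variable
rcrantz = 61.4 - 5.9 * gstern + rng.normal(size=200)   # hypothetical dependent variable

cov_xy = np.cov(gstern, rcrantz, ddof=1)[0, 1]   # sample covariance of the two variables
var_x = np.var(gstern, ddof=1)                   # sample variance of the independent variable
slope = cov_xy / var_x                           # OLS slope estimate = Cov(X, Y) / Var(X)

print(round(slope, 2))                 # close to -5.9
print(cov_xy < 0, slope < 0)           # both True: a negative slope implies a negative covariance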


Which of the following is least likely an assumption of linear regression? The:
A)
expected value of the residuals is zero.
B)
residuals are mean reverting; that is, they tend towards zero over time.
C)
residuals are independently distributed.



The assumptions regarding the residuals are that the residuals have a constant variance, have a mean of zero, and are independently distributed.


Which of the following is least likely an assumption of linear regression?
A)
The residuals are normally distributed.
B)
The variance of the residuals is constant.
C)
The independent variable is correlated with the residuals.



The assumption is that the independent variable is uncorrelated with the residuals.


The assumptions underlying linear regression include all of the following EXCEPT the:
A)
disturbance term is normally distributed with an expected value of 0.
B)
independent variable is linearly related to the residuals (or disturbance term).
C)
disturbance term is homoskedastic and is independently distributed.



The independent variable is uncorrelated with the residuals (or disturbance term).
The other statements are true. The disturbance term is homoskedastic because it has a constant variance. It is independently distributed because the residual for one observation is not correlated with that of another observation. Note: The opposite of homoskedastic is heteroskedastic. For the examination, memorize the assumptions underlying linear regression!


Linear regression is based on a number of assumptions. Which of the following is least likely an assumption of linear regression?
A)
Values of the independent variable are not correlated with the error term.
B)
There is at least some correlation between the error terms from one observation to the next.
C)
The variance of the error terms each period remains the same.



When correlation exists between the error terms from one observation to the next, autocorrelation is present, which violates the assumption that the residuals are independently distributed. This is inconsistent with linear regression.


Which of the following statements about linear regression analysis is most accurate?
A)
The coefficient of determination is defined as the strength of the linear relationship between two variables.
B)
An assumption of linear regression is that the residuals are independently distributed.
C)
When there is a strong relationship between two variables we can conclude that a change in one will cause a change in the other.



Even when there is a strong relationship between two variables, we cannot conclude that a causal relationship exists. The coefficient of determination is defined as the percentage of total variation in the dependent variable explained by the independent variable.


Assume you perform two simple regressions. The first regression analysis has an R-squared of 0.80 and a beta coefficient of 0.10. The second regression analysis has an R-squared of 0.80 and a beta coefficient of 0.25. Which one of the following statements is most accurate?
A)
Explained variability from both analyses is equal.
B)
The influence on the dependent variable of a one-unit increase in the independent variable is the same in both analyses.
C)
Results from the first analysis are more reliable than the second analysis.



The coefficient of determination (R-squared) is the percentage of variation in the dependent variable explained by the variation in the independent variable. The R-squared (0.80) being identical between the first and second regressions means that 80% of the variability in the dependent variable is explained by variability in the independent variable for both regressions. This means that the first regression has the same explaining power as the second regression.


Consider the following estimated regression equation:
ROE_t = 0.23 − 1.50 CE_t
The standard error of the coefficient is 0.40 and the number of observations is 32. The 95% confidence interval for the slope coefficient, b1, is:
A)
{0.683 < b1 < 2.317}.
B)
{-2.317 < b1 < -0.683}.
C)
{-2.300 < b1 < -0.700}.



With n = 32 observations, df = 32 − 2 = 30, so the two-tailed 5% critical t-value is 2.042. The confidence interval is -1.50 ± 2.042 × (0.40), or {-2.317 < b1 < -0.683}.
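The arithmetic behind that interval, as a short Python sketch (assuming the figures given in the question: slope estimate -1.50, standard error 0.40, and n = 32, so df = 30):

from scipy.stats import t

b1, se, n = -1.50, 0.40, 32
t_crit = t.ppf(0.975, df=n - 2)              # two-tailed 95% critical value, about 2.042
lower, upper = b1 - t_crit * se, b1 + t_crit * se
print(round(lower, 3), round(upper, 3))      # approximately -2.317 and -0.683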


What does the R² of a simple regression of two variables measure, and what calculation is used to equate the correlation coefficient to the coefficient of determination?
(Each choice states what R² measures, then the correlation coefficient calculation.)
A)
the percent of variability of the independent variable that is explained by the variability of the dependent variable; R² = r²
B)
the percent of variability of the independent variable that is explained by the variability of the dependent variable; R² = r × 2
C)
the percent of variability of the dependent variable that is explained by the variability of the independent variable; R² = r²



R², or the coefficient of determination, is the square of the coefficient of correlation (r). The coefficient of correlation describes the strength of the relationship between the X and Y variables. The standard error of the residuals is the standard deviation of the dispersion about the regression line. The t-statistic measures the statistical significance of the coefficients of the regression equation. In the response: "percent of variability of the independent variable that is explained by the variability of the dependent variable," the definitions of the variables are reversed.
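A short Python sketch with hypothetical simulated data, illustrating that in a simple regression the coefficient of determination equals the squared correlation coefficient:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)        # hypothetical data with a linear relationship

r = np.corrcoef(x, y)[0, 1]               # correlation coefficient
slope, intercept = np.polyfit(x, y, 1)    # simple OLS fit
resid = y - (intercept + slope * x)
r_squared = 1 - resid.var(ddof=0) / y.var(ddof=0)   # 1 - SSE/SST

print(round(r ** 2, 6), round(r_squared, 6))        # the two values match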


Craig Standish, CFA, is investigating the validity of claims associated with a fund that his company offers. The company advertises the fund as having low turnover and, hence, low management fees. The fund was created two years ago with only a few uncorrelated assets. Standish randomly draws two stocks from the fund, Grey Corporation and Jars Inc., and measures the variances and covariance of their monthly returns over the past two years. The resulting variance covariance matrix is shown below. Standish will test whether it is reasonable to believe that the returns of Grey and Jars are uncorrelated. In doing the analysis, he plans to address the issue of spurious correlation and outliers.

        Grey    Jars
Grey    42.2    20.8
Jars    20.8    36.5


Standish wants to learn more about the performance of the fund. He performs a linear regression of the fund’s monthly returns over the past two years on a large capitalization index. The results are below:

ANOVA

                   df          SS          MS           F
Regression          1    92.53009    92.53009    28.09117
Residual           22    72.46625    3.293921
Total              23    164.9963

                   Coefficients   Standard Error   t-statistic    P-value
Intercept              0.148923         0.391669      0.380225   0.707424
Large Cap Index        1.205602         0.227467      5.30011    2.56E-05

Standish forecasts the fund’s return, based upon the prediction that the return to the large capitalization index used in the regression will be 10%. He also wants to quantify the degree of the prediction error, as well as the minimum and maximum sensitivity that the fund actually has with respect to the index.
He plans to summarize his results in a report. In the report, he will also include caveats concerning the limitations of regression analysis. He lists four limitations of regression analysis that he feels are important: relationships between variables can change over time, the decision to use a t-statistic or F-statistic for a forecast confidence interval is arbitrary, if the error terms are heteroskedastic the test statistics for the equation may not be reliable, and if the error terms are correlated with each other over time the test statistics may not be reliable.

Given the variance/covariance matrix for Grey and Jars, in a one-sided hypothesis test that the returns are positively correlated (H0: ρ = 0 vs. H1: ρ > 0), Standish would:
A)
reject the null at the 5% but not the 1% level of significance.
B)
need to gather more information before being able to reach a conclusion concerning significance.
C)
reject the null at the 1% level of significance.



First, we must compute the correlation coefficient, which is 0.53 = 20.8 / (42.2 × 36.5)^0.5.
The t-statistic is 2.93 = 0.53 × [(24 − 2) / (1 − 0.53 × 0.53)]^0.5, and for df = 22 = 24 − 2, the one-tailed critical t-values at the 5% and 1% levels are 1.717 and 2.508, respectively. Since 2.93 exceeds 2.508, Standish rejects the null at the 1% level of significance. (Study Session 3, LOS 11.g)
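The same arithmetic as a short Python sketch, using the covariance-matrix entries above and assuming n = 24 monthly observations (two years of data):

from math import sqrt
from scipy.stats import t

cov, var_grey, var_jars, n = 20.8, 42.2, 36.5, 24
r = cov / sqrt(var_grey * var_jars)          # correlation coefficient, about 0.53
t_stat = r * sqrt((n - 2) / (1 - r ** 2))    # test statistic, about 2.93
t_05 = t.ppf(0.95, df=n - 2)                 # one-tailed 5% critical value, about 1.717
t_01 = t.ppf(0.99, df=n - 2)                 # one-tailed 1% critical value, about 2.508
print(round(r, 2), round(t_stat, 2), t_stat > t_01)   # reject H0 at the 1% level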


In performing the correlation test on Grey and Jars, Standish would most appropriately address the issue of:
A)
spurious correlation but not the issue of outliers.
B)
spurious correlation and the issue of outliers.
C)
neither outliers nor correlation.



Both these issues are important in performing correlation analysis. A single outlier observation can change the correlation coefficient from significant to not significant and even from negative (positive) to positive (negative). Even if the correlation coefficient is significant, the researcher would want to make sure there is a reason for a relationship and that the correlation is not caused by chance. (Study Session 3, LOS 11.b)

If the large capitalization index has a 10% return, then the forecast of the fund’s return will be:
A)
13.5.
B)
16.1.
C)
12.2.



The forecast is 12.209 = 0.149 + 1.206 × 10, so the answer is 12.2. (Study Session 3, LOS 11.h)

The standard error of the estimate is:
A)
1.81.
B)
9.62.
C)
0.56.



SEE equals the square root of the MSE, which on the ANOVA table is 72.466 / 22 = 3.294. The SEE is 1.81 = (3.294)^0.5. (Study Session 3, LOS 11.i)
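As a short Python sketch of that calculation, using the SS and df figures from the ANOVA table above:

from math import sqrt

sse, df_resid = 72.46625, 22
mse = sse / df_resid            # mean squared error, about 3.294
see = sqrt(mse)                 # standard error of estimate, about 1.81
print(round(mse, 3), round(see, 2))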

A 95% confidence interval for the slope coefficient is:
A)
0.905 to 1.506.
B)
0.760 to 1.650.
C)
0.734 to 1.677.



With df = 24 − 2 = 22, the two-tailed 5% critical t-value is 2.074, so the 95% confidence interval is 1.2056 ± (2.074 × 0.2275), or 0.734 to 1.677. (Study Session 3, LOS 11.f)

Of the four caveats of regression analysis listed by Standish, the least accurate is:
A)
if the error terms are heteroskedastic the test statistics for the equation may not be reliable.
B)
the choice to use a t-statistic or F-statistic for a forecast confidence interval is arbitrary.
C)
the relationships of variables change over time.



The t-statistic is used for constructing the confidence interval for the forecast. The F-statistic is not used for this purpose. The other possible shortfalls listed are valid. (Study Session 3, LOS 11.i)

