
An analyst performs two simple regressions. The first regression analysis has an R-squared of 0.40 and a beta coefficient of 1.2. The second regression analysis has an R-squared of 0.77 and a beta coefficient of 1.75. Which one of the following statements is most accurate?

A)
The first regression equation has more explanatory power than the second regression equation.
B)
The second regression equation has more explanatory power than the first regression equation.
C)
The R-squared of the first regression indicates that there is a 0.40 correlation between the independent and the dependent variables.


The coefficient of determination (R-squared) is the percentage of variation in the dependent variable that is explained by variation in the independent variable. The larger R-squared (0.77) of the second regression means that 77% of the variability in the dependent variable is explained by variability in the independent variable, versus only 40% in the first regression. This means the second regression has more explanatory power than the first. Note that beta is the slope of the regression line and does not measure explanatory power.
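
As an illustration only (made-up data, not from the question; all variable names and values are assumptions), the sketch below computes R-squared as the fraction of variation in y explained by a fitted simple regression:

```python
# Minimal sketch: R-squared of a simple linear regression (illustrative data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # independent variable
y = np.array([1.1, 2.3, 2.8, 4.2, 5.1])      # dependent variable

slope, intercept = np.polyfit(x, y, 1)        # beta (slope) and intercept
y_hat = intercept + slope * x                 # predicted values

ss_total = np.sum((y - y.mean()) ** 2)        # total variation in y
sse = np.sum((y - y_hat) ** 2)                # unexplained variation
r_squared = 1 - sse / ss_total                # share of variation explained

print(f"beta = {slope:.2f}, R-squared = {r_squared:.2f}")
```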


The standard error of estimate is closest to the:

A)
standard deviation of the residuals.
B)
standard deviation of the independent variable.
C)
standard deviation of the dependent variable.


The standard error of the estimate measures the uncertainty in the relationship between the actual and predicted values of the dependent variable. The differences between these values are called the residuals, and the standard error of the estimate helps gauge the fit of the regression line (the smaller the standard error of the estimate, the better the fit).
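
A minimal sketch (illustrative data, not from the question) of how the SEE can be computed as the standard deviation of the residuals, using n − 2 degrees of freedom for a simple regression:

```python
# Minimal sketch: SEE as the standard deviation of the residuals (illustrative data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 2.9, 4.2, 4.8, 6.1, 6.9])

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)        # actual minus predicted values

sse = np.sum(residuals ** 2)                   # sum of squared errors (SSE)
see = np.sqrt(sse / (len(y) - 2))              # standard error of the estimate

print(f"SSE = {sse:.4f}, SEE = {see:.4f}")
```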


The standard error of the estimate measures the variability of the:

A)
predicted y-values around the mean of the observed y-values.
B)
values of the sample regression coefficient.
C)
actual dependent variable values about the estimated regression line.


The standard error of the estimate (SEE) measures the uncertainty in the relationship between the independent and dependent variables and helps gauge the fit of the regression line (the smaller the standard error of the estimate, the better the fit).

Remember that the SEE is different from the sum of squared errors (SSE). SSE = the sum of (actual value − predicted value)². SEE is the square root of the SSE "standardized" by the degrees of freedom, or SEE = [SSE / (n − 2)]^(1/2).


Jason Brock, CFA, is performing a regression analysis to identify and evaluate any relationship between the common stock of ABT Corp and the S&P 100 index. He utilizes monthly data from the past five years, and assumes that the sum of the squared errors is 0.0039. The calculated standard error of the estimate (SEE) is closest to:

A)
0.0082.
B)
0.0080.
C)
0.0360.


The standard error of estimate of a regression equation measures the degree of variability between the actual and estimated Y-values. The SEE may also be referred to as the standard error of the residual or the standard error of the regression. The SEE is equal to the square root of the mean squared error. Expressed in a formula,

SEE = √(SSE / (n − 2)) = √(0.0039 / (60 − 2)) ≈ 0.0082
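
A quick check of the arithmetic (values taken from the question; five years of monthly data gives n = 60):

```python
# Verify the SEE calculation above.
from math import sqrt

sse = 0.0039
n = 60                          # 5 years x 12 monthly observations
see = sqrt(sse / (n - 2))
print(round(see, 4))            # 0.0082
```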


The standard error of the estimate in a regression is the standard deviation of the:

A)
differences between the actual values of the dependent variable and the mean of the dependent variable.
B)
residuals of the regression.
C)
dependent variable.


The standard error of the estimate is SEE = √[SSE / (n − 2)]. It is the standard deviation of the residuals.


Which of the following statements about the standard error of the estimate (SEE) is least accurate?

A)
The larger the SEE, the larger the R².
B)
The SEE will be high if the relationship between the independent and dependent variables is weak.
C)
The SEE may be calculated from the sum of the squared errors and the number of observations.


The R², or coefficient of determination, is the percentage of variation in the dependent variable explained by variation in the independent variable. A higher R² means a better fit, and the SEE is smaller when the fit is better, so a larger SEE is associated with a smaller R², not a larger one.
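
A small sketch of this inverse relationship (made-up data; the random seed, noise levels, and names are assumptions): a tighter fit produces a higher R² and a lower SEE.

```python
# Minimal sketch: a better fit means a higher R-squared and a smaller SEE.
import numpy as np

def r2_and_see(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    sse = np.sum(resid ** 2)
    r2 = 1 - sse / np.sum((y - y.mean()) ** 2)
    see = np.sqrt(sse / (len(y) - 2))
    return r2, see

x = np.arange(1.0, 11.0)
rng = np.random.default_rng(0)
strong = 2 * x + rng.normal(0, 0.2, 10)        # strong relationship, little noise
weak = 2 * x + rng.normal(0, 5.0, 10)          # weak relationship, lots of noise

print(r2_and_see(x, strong))                   # high R-squared, small SEE
print(r2_and_see(x, weak))                     # lower R-squared, larger SEE
```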


The most appropriate measure of the degree of variability of the actual Y-values relative to the estimated Y-values from a regression equation is the:

A)
sum of squared errors (SSE).
B)
coefficient of determination (R²).
C)
standard error of the estimate (SEE).


The SEE is the standard deviation of the error terms in the regression, and is an indicator of the strength of the relationship between the dependent and independent variables. The SEE will be low if the relationship is strong, and conversely will be high if the relationship is weak.


Which of the following statements about the standard error of estimate is least accurate? The standard error of estimate:

A)
is the square root of the sum of the squared deviations from the regression line divided by (n − 2).
B)
is the square of the coefficient of determination.
C)
measures the Y variable's variability that is not explained by the regression equation.


The SEE is the standard deviation of the residuals; it is not the square of the coefficient of determination. Note: the coefficient of determination (R²) is the square of the correlation coefficient in simple linear regression.
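
A minimal sketch (illustrative data, not from the question) confirming the note: in simple linear regression, R² equals the squared correlation between the two variables.

```python
# Minimal sketch: R-squared equals the squared correlation in simple regression.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.4, 3.9, 5.3])

slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
corr = np.corrcoef(x, y)[0, 1]

print(round(r_squared, 6), round(corr ** 2, 6))   # the two values agree
```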

