
Reading 12: Multiple Regression and Issues in Regression Analysis

Session 3: Quantitative Methods for Valuation
Reading 12: Multiple Regression and Issues in Regression Analysis

LOS j: Describe multicollinearity, and discuss its causes and effects in regression analysis.

 

 

An analyst is testing to see whether a dependent variable is related to three independent variables. He finds that two of the independent variables are correlated with each other, but that the correlation is spurious. Which of the following is most accurate? There is:

A)
evidence of multicollinearity but not serial correlation.
B)
no evidence of multicollinearity and serial correlation.
C)
evidence of multicollinearity and serial correlation.


 

Just because the correlation is spurious does not mean that the problem of multicollinearity will go away. However, there is no evidence of serial correlation.

A variable is regressed against three other variables, x, y, and z. Which of the following would NOT be an indication of multicollinearity? x is closely related to:

A)
3y + 2z.
B)
3.
C)
y².


If x is related to y², the relationship between x and y is not linear, so multicollinearity does not exist. If x is equal to a constant (3), it will be correlated with the intercept term.


Miles Mason, CFA, works for ABC Capital, a large money management company based in New York. Mason has several years of experience as a financial analyst, but is currently working in the marketing department developing materials to be used by ABC’s sales team for both existing and prospective clients. ABC Capital’s client base consists primarily of high net worth individuals and Fortune 500 companies. ABC invests its clients’ money in both publicly traded mutual funds and its own investment funds that are managed in-house. Five years ago, roughly half of its assets under management were invested in the publicly traded mutual funds, with the remaining half in the funds managed by ABC’s investment team. Currently, approximately 75% of ABC’s assets under management are invested in publicly traded funds, with the remaining 25% distributed among ABC’s private funds. The managing partners at ABC would like to shift more of the firm’s clients’ assets away from publicly traded funds into ABC’s proprietary funds, ultimately returning to a 50/50 split of assets between publicly traded funds and ABC funds. There are three key reasons for this shift in the firm’s asset base. First, ABC’s in-house funds have consistently outperformed other funds for the past five years. Second, ABC can offer its clients a reduced fee structure on funds managed in-house relative to other publicly traded funds. Lastly, ABC has recently hired a top fund manager away from a competing investment company and would like to increase his assets under management.

ABC Capital’s upper management requested that current clients be surveyed in order to determine the cause of the shift of assets away from ABC funds. Results of the survey indicated that clients feel there is a lack of information regarding ABC’s funds. Clients would like to see extensive information about ABC’s past performance, as well as a sensitivity analysis showing how the funds will perform in varying market scenarios. Mason is part of a team that has been charged by upper management with creating a marketing program to present to both current and potential clients of ABC. He needs to be able to demonstrate a history of strong performance for the ABC funds and, while not promising any measure of future performance, project possible return scenarios. He decides to conduct a regression analysis on all of ABC’s in-house funds, using 12 independent economic variables to predict each fund’s return. Mason is very aware of the many factors that could reduce the effectiveness of his regression model, and he knows that if any are present, he must determine whether corrective actions are necessary. Mason is using a sample size of 121 monthly returns.

In order to conduct an F-test, what would be the degrees of freedom used (numerator df; denominator df)?

A)
108; 12.
B)
12; 108.
C)
11; 120.


The degrees of freedom for the F-statistic are k for the numerator and n − k − 1 for the denominator.

k = 12

n − k − 1 = 121 − 12 − 1 = 108

(Study Session 3, LOS 12.e)
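
As a quick check, the degrees of freedom can be computed directly from the sample size and number of regressors; a minimal Python sketch using the n and k given in the vignette:

```python
# Degrees of freedom for the regression F-test.
n = 121   # monthly observations
k = 12    # independent (economic) variables

df_numerator = k            # 12
df_denominator = n - k - 1  # 121 - 12 - 1 = 108
print(df_numerator, df_denominator)  # 12 108
```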


In regard to multiple regression analysis, which of the following statements is most accurate?

A)
Adjusted R² is less than R².
B)
Adjusted R² always decreases as independent variables increase.
C)
R² is less than adjusted R².


Whenever there is more than one independent variable, adjusted R² is less than R². Adding a new independent variable will increase R², but may either increase or decrease adjusted R².

Adjusted R² = 1 − [((n − 1) / (n − k − 1)) × (1 − R²)]

Where:
n = number of observations
k = number of independent variables
R² = unadjusted R²

(Study Session 3, LOS 12.f)
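
A minimal Python sketch of the adjusted R² formula above; the unadjusted R² of 0.80 is purely illustrative and not taken from the vignette:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 = 1 - [((n - 1) / (n - k - 1)) * (1 - R^2)]."""
    return 1 - ((n - 1) / (n - k - 1)) * (1 - r2)

# Illustrative values only: unadjusted R^2 of 0.80 with n = 121 and k = 12.
print(adjusted_r2(0.80, n=121, k=12))  # ~0.778, below the unadjusted 0.80
```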


Which of the following tests is used to detect autocorrelation?

A)
Residual Plot.
B)
Breusch-Pagan.
C)
Durbin-Watson.


Durbin-Watson is used to detect autocorrelation. Breusch-Pagan and the residual plot are methods to detect heteroskedasticity. (Study Session 3, LOS 12.i)
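
A rough sketch of how both diagnostics might be run in practice, assuming the statsmodels library is available; the simulated data and variable names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import het_breuschpagan

# Simulated data standing in for a fitted regression (illustration only).
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(120, 3)))        # constant + 3 regressors
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(size=120)

results = sm.OLS(y, X).fit()

dw = durbin_watson(results.resid)                      # near 2 if no autocorrelation
bp_lm, bp_pvalue, _, _ = het_breuschpagan(results.resid, X)
print(f"Durbin-Watson: {dw:.2f}, Breusch-Pagan p-value: {bp_pvalue:.3f}")
```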


One of the most popular ways to correct heteroskedasticity is to:

A)
adjust the standard errors.
B)
use robust standard errors.
C)
improve the specification of the model.


Using generalized least squares and calculating robust standard errors are possible remedies for heteroskedasticity. Improving the specification of the model is a remedy for serial correlation, not heteroskedasticity. The standard error of estimate itself cannot simply be adjusted; only the coefficients’ standard errors can be corrected (i.e., made robust). (Study Session 3, LOS 12.i)
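
A small sketch (again assuming statsmodels) comparing conventional and White-corrected (robust) coefficient standard errors on data simulated with conditional heteroskedasticity; the data are made up for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)   # error variance grows with x

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()                    # conventional standard errors
robust_fit = sm.OLS(y, X).fit(cov_type="HC1")   # White-corrected (robust) standard errors

print("OLS SEs:   ", ols_fit.bse)
print("Robust SEs:", robust_fit.bse)
```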


Which of the following statements regarding the Durbin-Watson statistic is most accurate? The Durbin-Watson statistic:

A)
only uses error terms in its computations.
B)
can only be used to detect positive serial correlation.
C)
is approximately equal to 1 if the error terms are not serially correlated.


The formula for the Durbin-Watson statistic uses the error terms in its calculation. The Durbin-Watson statistic is approximately equal to 2 if there is no serial correlation. A Durbin-Watson statistic less than 2 indicates positive serial correlation, while a Durbin-Watson statistic greater than 2 indicates negative serial correlation. (Study Session 3, LOS 12.i)
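
For reference, the statistic is a simple function of the residuals, DW = Σ(e_t − e_{t−1})² / Σ e_t²; a minimal sketch with made-up residuals:

```python
import numpy as np

def durbin_watson_stat(residuals):
    # DW = sum of squared first differences of residuals / sum of squared residuals.
    e = np.asarray(residuals)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(2)
e_uncorrelated = rng.normal(size=500)
print(durbin_watson_stat(e_uncorrelated))   # close to 2 when errors are not serially correlated
```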


If a regression equation shows that no individual t-tests are significant, but the F-statistic is significant, the regression probably exhibits:

A)
multicollinearity.
B)
heteroskedasticity.
C)
serial correlation.


Common indicators of multicollinearity include: high correlation (>0.7) between independent variables, no individual t-tests are significant but the F-statistic is, and signs on the coefficients that are opposite of what is expected. (Study Session 3, LOS 12.j)
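
Two common ways to check for the problem are pairwise correlations among the independent variables and variance inflation factors (VIFs); a rough sketch on simulated data, where the 0.7 correlation cutoff follows the explanation above and VIF_j = 1 / (1 − R²_j) from regressing each variable on the others:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(size=300)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=300)   # nearly collinear with x1
x3 = rng.normal(size=300)
X = np.column_stack([x1, x2, x3])

print(np.corrcoef(X, rowvar=False))            # off-diagonal values > 0.7 flag a problem

# VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing x_j on the other regressors.
for j in range(X.shape[1]):
    others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ beta
    r2_j = 1 - resid.var() / X[:, j].var()
    print(f"VIF for x{j + 1}: {1 / (1 - r2_j):.1f}")
```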



Which of the following is a potential remedy for multicollinearity?

A)
Omit one or more of the collinear variables.
B)
Take first differences of the dependent variable.
C)
Add dummy variables to the regression.


Taking first differences of the dependent variable is not a remedy for multicollinearity, nor is adding dummy variables. The best potential remedy is to omit one or more of the highly correlated variables.


Which of the following statements regarding multicollinearity is least accurate?

A)
Multicollinearity may be a problem even if the multicollinearity is not perfect.
B)
If the t-statistics for the individual independent variables are insignificant, yet the F-statistic is significant, this indicates the presence of multicollinearity.
C)
Multicollinearity may be present in any regression model.


Multicollinearity is not an issue in simple regression, which has only one independent variable. It can therefore arise only in multiple regression models, not in any regression model.



An analyst runs a regression of portfolio returns on three independent variables. These independent variables are price-to-sales (P/S), price-to-cash flow (P/CF), and price-to-book (P/B). The analyst discovers that the p-values for each independent variable are relatively high. However, the F-test has a very small p-value. The analyst is puzzled and tries to figure out how the F-test can be statistically significant when the individual independent variables are not significant. Which violation of regression assumptions has occurred?

A)
conditional heteroskedasticity.
B)
multicollinearity.
C)
serial correlation.


An indication of multicollinearity is when the independent variables individually are not statistically significant but the F-test suggests that the variables as a whole do an excellent job of explaining the variation in the dependent variable.


When two or more of the independent variables in a multiple regression are correlated with each other, the condition is called:

A)
serial correlation.
B)
conditional heteroskedasticity.
C)
multicollinearity.


Multicollinearity refers to the condition when two or more of the independent variables, or linear combinations of the independent variables, in a multiple regression are highly correlated with each other. This condition distorts the standard error of estimate and the coefficient standard errors, leading to problems when conducting t-tests for statistical significance of parameters.
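
A small simulation sketch (illustrative only, assuming statsmodels) of the distortion described above: when two regressors are highly correlated, their coefficient standard errors inflate even though the overall fit remains strong:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200
for label, rho in [("nearly collinear", 0.99), ("uncorrelated", 0.0)]:
    x1 = rng.normal(size=n)
    x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
    y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n)
    fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
    # Slope standard errors inflate sharply in the collinear case, while R^2 stays high.
    print(label, "slope SEs:", np.round(fit.bse[1:], 3), "R^2:", round(fit.rsquared, 3))
```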
