Let's assume this is the linear model we are regressing:

y = B0 + B1*x1 + B2*x2 + e

Least squares is run and we get an equation with the estimates of the coefficients, the B_hats. It's the same equation as above, but y and the B's get a hat on them ("^") and the error term disappears, since its expected value is zero:

y_hat = B0_hat + B1_hat*x1 + B2_hat*x2
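
If it helps to make this concrete, here's a minimal sketch in Python with numpy and statsmodels; the simulated data and variable names are just made up for illustration, not from any real dataset:

import numpy as np
import statsmodels.api as sm

# Made-up data: B1 genuinely matters, B2 is truly zero
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 + 0.0 * x2 + rng.normal(size=n)

# Least squares fit; add_constant supplies the column of ones for B0
X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()
print(model.params)  # the B_hats: intercept, B1_hat, B2_hat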

The t-test looks at the significance of one B (with a hat); that is, the null hypothesis is H0: B1 = 0. If we reject it, that means B1 is significantly different from zero, so we keep that factor in our model: it helps explain the response we care about, y.
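
Continuing with the model object fitted in the sketch above, these per-coefficient t-tests are what statsmodels reports for each B_hat:

print(model.tvalues)  # t-statistic for each B_hat, testing H0: Bj = 0
print(model.pvalues)  # two-sided p-value for each of those t-tests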

The F-test looks at the significance of ALL the B's at once; the null hypothesis is H0: B1 = B2 = 0, i.e. ALL of them are equal to zero. If we reject it, then at least one of the B's is nonzero, so the predictors jointly provide a significant explanation of the dependent variable y.
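
Still using the same fitted model from the sketch above, the overall F-test looks like this:

print(model.fvalue)    # F-statistic for H0: B1 = B2 = 0 (all slopes zero at once)
print(model.f_pvalue)  # its p-value; small means at least one B is nonzero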

In short, the t-test is about the significance of one parameter (one B); the F-test is about the entire set of B's at once.


Sorry if I overexplained, I tend to do that.
