
2# | Posted on 2011-7-11 19:26
Let's assume this linear model we are regressing:
y = β0 + β1*x1 + β2*x2 + ε
Least squares is run and we get an equation for the estimates of the coefficients, the β-hats. Can't really type it out, but it's the same equation as above, except y and the β's get hats ("^") on them, and the error term disappears since its expected value is zero.
The t-test looks at the significance of ONE β (with a hat). That is, the null hypothesis is H0: β1 = 0. If we reject it, then β1 is significantly different from zero, so we keep that factor in our model: it helps explain the response we want, y.
The F-test looks at the significance of ALL the β's at once. That is, the null hypothesis is H0: β1 = β2 = 0, i.e. ALL of them are equal to zero. If we reject it, then at least one of the β's is significantly different from zero, so the predictors jointly explain the dependent variable y.
In short, the t-test is about the significance of one parameter β; the F-test is about the entire set of β's.
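To make this concrete, here's a minimal sketch in Python (NumPy only) that computes the β-hats, the per-coefficient t-statistics, and the overall F-statistic by hand. The data, coefficient values, and variable names are all made up for illustration: y truly depends on x1 but not on x2, so the t-test should flag β1 and not β2, while the F-test should reject the joint null.

```python
import numpy as np

# Hypothetical synthetic data: y depends on x1 but NOT on x2
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.0 * x2 + rng.normal(size=n)

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), x1, x2])
k = X.shape[1]  # number of estimated coefficients (intercept + 2 slopes)

# OLS estimates: beta_hat = (X'X)^-1 X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2 = (resid @ resid) / (n - k)  # residual variance estimate

# t-statistic for each coefficient: beta_hat_j / se(beta_hat_j), testing H0: beta_j = 0
se = np.sqrt(sigma2 * np.diag(XtX_inv))
t_stats = beta_hat / se

# F-statistic for H0: beta1 = beta2 = 0 (all slopes zero at once)
ss_total = np.sum((y - y.mean()) ** 2)
ss_resid = resid @ resid
f_stat = ((ss_total - ss_resid) / (k - 1)) / (ss_resid / (n - k))

print("t-statistics (intercept, x1, x2):", t_stats)
print("F-statistic:", f_stat)
```

With this setup the t-statistic for x1 comes out large (reject H0: β1 = 0) while the one for x2 stays small (fail to reject H0: β2 = 0), and the F-statistic is large because at least one slope is nonzero.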
Sorry if I overexplained, I tend to do that.