
That was a more or less complete exposition of the problem. Here's a simpler account.

All cases assume we want 95% confidence (i.e. a 5% significance level).

TEST 1: SIGNIFICANT OVERALL REGRESSION
Objective: determine whether the full model is any better than just using the average of the y values and calling it a day.
Ho: B1 = B2 = ... = Bk = 0
Ha: at least one of those slopes =/= 0
Test Stat (F stat): F = MSR/MSE
Critical Value: F(k, n-k-1, .95)

TEST 2: PARTIAL F TEST
Objective: determine whether a full model could be improved by lopping off one of the variables.
Ho: B* = 0
Ha: B* =/= 0
F Stat: Extra SS(B*)/MSE, or T Stat: Bhat*/SE(Bhat*)
Critical F Value: F(1, n-p-2, .95)
Critical T Value: T(n-p-2, .025), assuming a two-tailed test (where Ho says something = something else)

TEST 3: MULTIPLE PARTIAL F TEST
Objective: determine whether we can cut a bunch of stuff and still have a workable model.
First Assumption: the full model is B1, B2, ..., Bp, B*1, ..., B*k (along with the y-intercept, error, etc.)
Ho: B*1 = ... = B*k = 0
Ha: at least one of the * slopes is not zero.
F Stat: (combined extra sum of squares for the * terms / k) / MSE
Critical Value: F(k, n-p-k-1, .95)
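To make the bookkeeping concrete, here is a minimal sketch of all three tests in Python on made-up data. The data, the variable names (x1, x2, x3), and the choice of which slopes to drop are my own illustration, not part of these notes; it assumes only numpy and scipy.

    # Illustrative sketch only: hypothetical data and variable names.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 30
    X_full = np.column_stack([np.ones(n),          # intercept
                              rng.normal(size=n),  # x1
                              rng.normal(size=n),  # x2
                              rng.normal(size=n)]) # x3
    y = X_full @ np.array([1.0, 2.0, 0.5, 0.0]) + rng.normal(size=n)

    def sse(X, y):
        """Sum of squared errors from an OLS fit of y on X."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    k = X_full.shape[1] - 1                 # number of slopes (intercept excluded)
    sse_full = sse(X_full, y)
    sst = np.sum((y - y.mean()) ** 2)
    ssr = sst - sse_full
    mse = sse_full / (n - k - 1)

    # TEST 1: significant overall regression, F = MSR/MSE
    F1 = (ssr / k) / mse
    crit1 = stats.f.ppf(0.95, k, n - k - 1)

    # TEST 2: partial F for dropping x3 alone (extra SS / MSE)
    sse_reduced = sse(X_full[:, :-1], y)
    F2 = (sse_reduced - sse_full) / mse
    crit2 = stats.f.ppf(0.95, 1, n - k - 1)   # n-p-2 equals n-k-1 here, since p = k-1

    # TEST 3: multiple partial F for dropping x2 and x3 together
    sse_small = sse(X_full[:, :2], y)
    F3 = ((sse_small - sse_full) / 2) / mse
    crit3 = stats.f.ppf(0.95, 2, n - k - 1)

    print(F1 > crit1, F2 > crit2, F3 > crit3)   # reject Ho where True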

AN EASY WAY TO REMEMBER THE DEGREES OF FREEDOM FOR ALL THREE F TESTS:
1. Look at your full model. How many variables have a big letter B next to them? Write that number down. I'm including the y-intercept here: if it has a B, it gets counted.
2. Look at your n size. Write that number down.
3. Subtract (1) from (2). That's your DENOMINATOR DF.
4. Look at the variables listed in your null hypothesis, the ones that Ho claims equal zero. Count 'em. That number is your NUMERATOR DF.
5. Slap a .95 on the end of that thing for good measure. Done deal.
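A quick numeric check of the mnemonic, again just a sketch with made-up numbers (the n of 30, the four-B model, and the hypothesis are hypothetical):

    from scipy import stats

    # Hypothetical setup: full model y = B0 + B1*x1 + B2*x2 + B3*x3, n = 30,
    # and Ho: B2 = B3 = 0 (a multiple partial F test on two slopes).
    n = 30
    n_betas = 4                        # step 1: B0, B1, B2, B3 each get counted
    denominator_df = n - n_betas       # steps 2-3: 30 - 4 = 26
    numerator_df = 2                   # step 4: Ho names two slopes
    critical_value = stats.f.ppf(0.95, numerator_df, denominator_df)   # step 5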
