
The t-test

Inferences about Population Means


Questions
What is the main use of the t-test?
How is the distribution of t related to the unit normal?
When would we use a t-test instead of a z-test? Why might we prefer one to the other?
What are the chief varieties or forms of the t-test?
What is the standard error of the difference between means? What are the factors that influence its size?
More Questions
Identify the appropriate version of t to use for a given design.
Compute and interpret t-tests appropriately.
Given that
$H_0: \mu = 75;\ H_1: \mu \ne 75;\ s_y = 14;\ N = 49;\ t_{(.05,\,48)} = 2.01$
construct a rejection region. Draw a picture to illustrate.
Background
The t-test is used to test hypotheses about
means when the population variance is
unknown (the usual case). Closely
related to z, the unit normal.
Developed by Gosset for the quality control of beer.
Comes in 3 varieties:
Single sample, independent samples, and
dependent samples.
What kind of t is it?
Single sample t: we have only 1 group; want to test against a hypothetical mean.
Independent samples t: we have 2 means, 2 groups; no relation between groups, e.g., people randomly assigned to a single group.
Dependent t: we have two means. Either same people in both groups, or people are related, e.g., husband-wife, left hand-right hand, hospital patient and visitor.
Single-sample z test
For large samples (N > 100) we can use z to test hypotheses about means:

$z_M = \dfrac{\bar{X} - \mu}{\text{est. }\sigma_M}$, where $s_X = \sqrt{\dfrac{\sum (X - \bar{X})^2}{N - 1}}$ and $\text{est. }\sigma_M = \dfrac{s_X}{\sqrt{N}}$

Suppose $H_0: \mu = 10;\ H_1: \mu \ne 10;\ s_X = 5;\ N = 200$

Then $\text{est. }\sigma_M = \dfrac{s_X}{\sqrt{N}} = \dfrac{5}{\sqrt{200}} = \dfrac{5}{14.14} = .35$

If $\bar{X} = 11$, then $z = \dfrac{(11 - 10)}{.35} = 2.83$; since $2.83 > 1.96$, $p < .05$.
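As a quick check, the same arithmetic can be run in a SAS data step (a minimal sketch; the variable names here are illustrative, not from the slides):

* Sketch: the single-sample z computation above, checked in a data step;
data _null_;
   s_x = 5;  n = 200;  xbar = 11;  mu0 = 10;
   se = s_x / sqrt(n);                 * est. sigma_M = .35;
   z  = (xbar - mu0) / se;             * z = 2.83;
   p  = 2 * (1 - probnorm(abs(z)));    * two-tailed p, about .005;
   put se= z= p=;
run;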
The t Distribution
We use t when the population variance is unknown (the
usual case) and sample size is small (N<100, the usual
case). If you use a stat package for testing hypotheses
about means, you will use t.
The t distribution is a short, fat relative of the normal. The shape of t depends on
its df. As N becomes infinitely large, t becomes normal.
Degrees of Freedom
For the t distribution, degrees of freedom are always a
simple function of the sample size, e.g., (N-1).

One way of explaining df is that if we know the total or mean, and all but one score, the last score is not free to vary. It is fixed by the other scores: 4 + 3 + 2 + X = 10 forces X = 1.
Single-sample t-test
With a small sample size, we compute the same numbers
as we did for z, but we compare them to the t distribution
instead of the z distribution.
$H_0: \mu = 10;\ H_1: \mu \ne 10;\ s_X = 5;\ N = 25$

$\text{est. }\sigma_M = \dfrac{s_X}{\sqrt{N}} = \dfrac{5}{\sqrt{25}} = 1$. If $\bar{X} = 11$, then $t = \dfrac{(11 - 10)}{1} = 1$

$t_{(.05,\,24)} = 2.064$ (cf. z = 1.96). Since 1 < 2.064, n.s.

Interval = $\bar{X} \pm t(\text{est. }\sigma_M) = 11 \pm 2.064(1) = [8.936,\ 13.064]$

Interval is about 9 to 13 and contains 10, so n.s.
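For reference, this kind of single-sample test is what PROC TTEST with H0= computes. A minimal sketch (the data set and scores below are invented, not from the slides):

* One-sample t test against a hypothesized mean of 10;
data scores;
   input x @@;
   datalines;
12 9 11 13 10 8 14 11 7 15
;

proc ttest data=scores h0=10 alpha=0.05;
   var x;   * prints t, df = N-1, the p-value, and the 95% CI for the mean;
run;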
Review
How are the distributions of z and t related?
Given that
$H_0: \mu = 75;\ H_1: \mu \ne 75;\ s_y = 14;\ N = 49;\ t_{(.05,\,48)} = 2.01$
construct a rejection region. Draw a picture
to illustrate.
Difference Between Means (1)
Most studies have at least 2 groups
(e.g., M vs. F, Exp vs. Control)
If we want to know diff in population
means, best guess is diff in sample
means.
Unbiased: $E(\bar{y}_1 - \bar{y}_2) = E(\bar{y}_1) - E(\bar{y}_2) = \mu_1 - \mu_2$

Variance of the difference: $\text{var}(\bar{y}_1 - \bar{y}_2) = \sigma^2_{M_1} + \sigma^2_{M_2}$

Standard error: $\sigma_{\text{diff}} = \sqrt{\sigma^2_{M_1} + \sigma^2_{M_2}}$
Difference Between Means (2)
We can estimate the standard error of
the difference between means.
$\text{est. }\sigma_{\text{diff}} = \sqrt{\text{est. }\sigma^2_{M_1} + \text{est. }\sigma^2_{M_2}}$

For large samples, we can use z:

$z_{\text{diff}} = \dfrac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\text{est. }\sigma_{\text{diff}}}$

$H_0: \mu_1 - \mu_2 = 0;\ H_1: \mu_1 - \mu_2 \ne 0$
$\bar{X}_1 = 10;\ N_1 = 100;\ SD_1 = 2$
$\bar{X}_2 = 12;\ N_2 = 100;\ SD_2 = 3$

$\text{est. }\sigma_{\text{diff}} = \sqrt{\dfrac{4}{100} + \dfrac{9}{100}} = \sqrt{\dfrac{13}{100}} = .36$

$z_{\text{diff}} = \dfrac{(10 - 12) - 0}{.36} = \dfrac{-2}{.36} = -5.56$; $|z| > 1.96$, $p < .05$
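The same large-sample z for a difference can be checked in a data step (a sketch using the numbers above):

* Sketch: large-sample z test for the difference between two means;
data _null_;
   xbar1 = 10;  n1 = 100;  sd1 = 2;
   xbar2 = 12;  n2 = 100;  sd2 = 3;
   se_diff = sqrt(sd1**2/n1 + sd2**2/n2);   * sqrt(13/100) = .36;
   z = (xbar1 - xbar2) / se_diff;           * about -5.56;
   put se_diff= z=;
run;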
Independent Samples t (1)
Looks just like z: $t_{\text{diff}} = \dfrac{(\bar{y}_1 - \bar{y}_2) - (\mu_1 - \mu_2)}{\text{est. }\sigma_{\text{diff}}}$

df = (N1 - 1) + (N2 - 1) = N1 + N2 - 2

If the population SDs are equal, the estimate is:

$\sigma^2_{\text{diff}} = \dfrac{\sigma^2}{N_1} + \dfrac{\sigma^2}{N_2} = \sigma^2\left(\dfrac{1}{N_1} + \dfrac{1}{N_2}\right)$

Pooled variance estimate is a weighted average:

$s^2_{\text{pooled}} = \dfrac{(N_1 - 1)s_1^2 + (N_2 - 1)s_2^2}{N_1 + N_2 - 2}$

Pooled standard error of the difference (computed):

$\text{est. }\sigma_{\text{diff}} = \sqrt{\dfrac{(N_1 - 1)s_1^2 + (N_2 - 1)s_2^2}{N_1 + N_2 - 2} \cdot \dfrac{N_1 + N_2}{N_1 N_2}}$
Independent Samples t (2)
$\text{est. }\sigma_{\text{diff}} = \sqrt{\dfrac{(N_1 - 1)s_1^2 + (N_2 - 1)s_2^2}{N_1 + N_2 - 2} \cdot \dfrac{N_1 + N_2}{N_1 N_2}}$

$H_0: \mu_1 - \mu_2 = 0;\ H_1: \mu_1 - \mu_2 \ne 0$

$t_{\text{diff}} = \dfrac{(\bar{y}_1 - \bar{y}_2) - (\mu_1 - \mu_2)}{\text{est. }\sigma_{\text{diff}}}$

$\bar{y}_1 = 18;\ s_1^2 = 7;\ N_1 = 5$
$\bar{y}_2 = 20;\ s_2^2 = 5.83;\ N_2 = 7$

$\text{est. }\sigma_{\text{diff}} = \sqrt{\dfrac{4(7) + 6(5.83)}{5 + 7 - 2} \cdot \dfrac{12}{35}} = 1.47$

$t_{\text{diff}} = \dfrac{(18 - 20) - 0}{1.47} = \dfrac{-2}{1.47} = -1.36$; n.s.

$t_{\text{crit}} = t_{(.05,\,10)} = 2.23$
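In practice this pooled t is what PROC TTEST reports. A hedged sketch (the group labels and scores below are invented and will not reproduce the exact numbers above):

* Independent-samples t test: one grouping variable (IV), one outcome (DV);
data twogroups;
   input group $ y @@;
   datalines;
A 18 A 21 A 15 A 19 A 17
B 20 B 22 B 19 B 23 B 18 B 21 B 20
;

proc ttest data=twogroups;
   class group;   * the two independent groups;
   var y;         * output includes the pooled (equal-variance) and Satterthwaite t;
run;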
Review
What is the standard error of the
difference between means? What are
the factors that influence its size?
Describe a design (what IV? What
DV?) where it makes sense to use the
independent samples t test.
Dependent t (1)
Observations come in pairs: brother-sister, repeated measures on the same person.

$\sigma^2_{\text{diff}} = \sigma^2_{M_1} + \sigma^2_{M_2} - 2\,\text{cov}(\bar{y}_1, \bar{y}_2)$

The problem is solved by finding the difference for each pair, $D_i = y_{i1} - y_{i2}$:

$\bar{D} = \dfrac{\sum D_i}{N}$,  $s_D = \sqrt{\dfrac{\sum (D_i - \bar{D})^2}{N - 1}}$,  $\text{est. }\sigma_{M_D} = \dfrac{s_D}{\sqrt{N}}$

$t = \dfrac{\bar{D} - E(\bar{D})}{\text{est. }\sigma_{M_D}}$,  df = N(pairs) - 1
Dependent t (2)
Brother    Sister    Diff (D)    $(D - \bar{D})^2$
5          7         2           1
7          8         1           0
3          3         0           1
mean = 5   mean = 6  $\bar{D}$ = 1

(here D is taken as Sister - Brother)

$s_D = \sqrt{\dfrac{\sum (D - \bar{D})^2}{N - 1}} = \sqrt{\dfrac{2}{2}} = 1$;  $\text{est. }\sigma_{M_D} = \dfrac{s_D}{\sqrt{N}} = \dfrac{1}{\sqrt{3}} = .58$

$t = \dfrac{\bar{D} - E(\bar{D})}{\text{est. }\sigma_{M_D}} = \dfrac{1 - 0}{.58} = 1.72$
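The same dependent t can be run in SAS with the PAIRED statement (a sketch using the three pairs above):

* Dependent (paired) t test on the brother-sister pairs;
data sibs;
   input brother sister;
   datalines;
5 7
7 8
3 3
;

proc ttest data=sibs;
   paired sister*brother;   * tests H0: mean difference (sister - brother) = 0, df = pairs - 1 = 2;
run;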
Assumptions
The t-test is based on assumptions of
normality and homogeneity of variance.
You can test for both these (make sure
you learn the SAS methods).
As long as the samples in each group
are large and nearly equal, the t-test is
robust, that is, still good, even tho
assumptions are not met.
Review
Describe a design where it makes sense
to use a single-sample t.
Describe a design where it makes sense
to use a dependent samples t.
Strength of Association (1)
Scientific purpose is to predict or
explain variation.
Our variable Y has some variance that
we would like to account for. There are
statistical indexes of how well our IV
accounts for variance in the DV. These
are measures of how strongly or closely
associated our IVs and DVs are.
Variance accounted for:

$\dfrac{\sigma_Y^2 - \sigma_{Y|X}^2}{\sigma_Y^2} = \dfrac{(\mu_1 - \mu_2)^2}{4\,\sigma_Y^2}$
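A small numeric sketch of this formula for two equal-sized groups (the population means and within-group SD are hypothetical):

* Sketch: proportion of variance in Y associated with group membership;
* (assumes two equal-sized groups, as in the formula above);
data _null_;
   mu1 = 0;  mu2 = 2;  sd_within = 1;            * hypothetical population values;
   var_y = sd_within**2 + (mu1 - mu2)**2 / 4;    * total variance of Y;
   prop  = (mu1 - mu2)**2 / (4 * var_y);         * variance accounted for = .5 here;
   put var_y= prop=;
run;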
Strength of Association (2)
How much of the variance in Y is associated with the IV?

$\dfrac{\sigma_Y^2 - \sigma_{Y|X}^2}{\sigma_Y^2} = \dfrac{(\mu_1 - \mu_2)^2}{4\,\sigma_Y^2}$

[Figure: overlapping normal curves (density 0.0 to 0.4, score axis -4 to 6) showing group distributions with small and large mean differences.]

Compare the 1st (left-most) curve with the curve in the middle and the one on the right. In each case, how much of the variance in Y is associated with the IV, group membership? More in the second comparison. As the mean difference gets big, so does the variance accounted for.
Association & Significance
Power increases with association (effect
size) and sample size.
Effect size: $d = \dfrac{\bar{X}_1 - \bar{X}_2}{\sigma_p}$

Significance = effect size × sample size:

$t = \dfrac{\bar{X} - \mu}{s\sqrt{1/N}}$ (single sample)    $t = \dfrac{\bar{X}_1 - \bar{X}_2}{\sqrt{\sigma_p^2\left(\frac{1}{N_1} + \frac{1}{N_2}\right)}}$ (independent samples)    roughly, $t = d\sqrt{N}$

Increasing sample size does not increase effect size (strength of association). It decreases the standard error, so power is greater and |t| is larger.
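A one-line check of that point for the single-sample case (d is a hypothetical, fixed effect size):

* Sketch: same effect size d, growing N; |t| grows with sqrt(N);
data _null_;
   d = .40;                * effect size held constant;
   do n = 25, 100, 400;
      t = d * sqrt(n);     * single-sample relation t = d*sqrt(N);
      put n= t=;
   end;
run;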
Estimating Power (1)
If the null is false, the statistic is no
longer distributed as t, but rather as
noncentral t. This makes power
computation difficult.
Howell introduces the noncentrality parameter delta ($\delta$) to use for estimating power. For the one-sample t, $\delta = d\sqrt{n}$. Recall the relations between t and d on the previous slide.
Estimating Power (2)
Suppose (Howell, p. 231) that we have
25 people, a sample mean of 105, and a
hypothesized mean and SD of 100 and
15, respectively. Then
$d = \dfrac{105 - 100}{15} = \dfrac{1}{3} = .33$

$\delta = d\sqrt{n} = .33\sqrt{25} = 1.65$;  power = .38

Howell presents an appendix where delta is related to power. For power = .8 and alpha = .05, delta must be 2.80. To solve for N, we compute:

$\delta = d\sqrt{n} \;\Rightarrow\; n = \left(\dfrac{\delta}{d}\right)^2 = \left(\dfrac{2.8}{.33}\right)^2 = 8.48^2 = 71.91$
Estimating Power (3)
The dependent t can be cast as a single-sample t using difference scores.
Independent t: to use Howell's method, the result is n per group, so double it. Suppose d = .5 (a medium effect) and n = 25 per group.

$\delta = d\sqrt{\dfrac{n}{2}} = .50\sqrt{\dfrac{25}{2}} = .5\sqrt{12.5} = 1.77$

From Howell's appendix, a delta of 1.77 with alpha = .05 results in power of .43. For a power of .8, we need delta = 2.80:

$n = 2\left(\dfrac{\delta}{d}\right)^2 = 2\left(\dfrac{2.8}{.5}\right)^2 = 62.72$

Need 63 per group.
SAS Proc Power single sample example

proc power;
   onesamplemeans test=t
      nullmean = 100
      mean     = 105
      stddev   = 15
      power    = .8
      ntotal   = . ;
run;

The POWER Procedure
One-sample t Test for Mean

Fixed Scenario Elements
Distribution          Normal
Method                Exact
Null Mean             100
Mean                  105
Standard Deviation    15
Nominal Power         0.8
Number of Sides       2
Alpha                 0.05

Computed N Total
Actual Power   N Total
0.802          73

2 sample t Power
Calculate sample size
proc power;
   twosamplemeans
      meandiff = .5
      stddev   = 1
      power    = 0.8
      ntotal   = . ;
run;

Two-sample t Test for Mean Difference

Fixed Scenario Elements
Distribution          Normal
Method                Exact
Mean Difference       0.5
Standard Deviation    1
Nominal Power         0.8
Number of Sides       2
Null Difference       0
Alpha                 0.05

Computed N Total
Actual Power   N Total
0.801          128
2 sample t Power

proc power;
   twosamplemeans
      meandiff = 5     /* assumed difference */
      stddev   = 10    /* assumed SD */
      sides    = 1     /* 1 tail */
      ntotal   = 50    /* 25 per group */
      power    = . ;   /* tell me! */
run;

The POWER Procedure
Two-Sample t Test for Mean Difference

Fixed Scenario Elements
Distribution          Normal
Method                Exact
Number of Sides       1
Mean Difference       5
Standard Deviation    10
Total Sample Size     50
Null Difference       0
Alpha                 0.05

Computed Power
Power   0.539
Typical Power in Psych
Average effect size is about d=.40.
Consider power for effect sizes between
.3 and .6. What kind of sample size do
we need for power of .8?
proc power;
   twosamplemeans
      meandiff = .3 to .6 by .1
      stddev   = 1
      power    = .8
      ntotal   = . ;
   plot x=power min=.5 max=.95;
run;

Two-sample t Test for Mean Difference
Computed N Total

Index   Mean Diff   Actual Power   N Total
1       0.3         0.801          352
2       0.4         0.804          200
3       0.5         0.801          128
4       0.6         0.804          90

Typical studies are underpowered.
Power Curves

Why a whopper of an IV is helpful.


[Figure: power curves from the plot statement above. Total sample size (0 to 600) on the vertical axis, power (0.5 to 1.0) on the horizontal axis, one curve for each mean difference (0.3, 0.4, 0.5, 0.6). The required N drops sharply as the mean difference grows.]
Review
About how many people total will you
need for power of .8, alpha is .05 (two
tails), and an effect size of .3?
You can only afford 40 people per
group, and based on the literature, you
estimate the group means to be 50 and
60 with a standard deviation within
groups of 20. What is your power
estimate?
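For the second question, one way to set it up in PROC POWER (the means, SD, and N come straight from the question; leaving power = . asks SAS to solve for it):

proc power;
   twosamplemeans
      meandiff = 10   /* 60 - 50 */
      stddev   = 20
      ntotal   = 80   /* 40 per group */
      power    = . ;  /* solve for power */
run;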
