
Chapter 2

Getting Started with Completely Randomized Designs

Li-Shya Chen and Tsung-Chi Cheng

Department of Statistics
National Chengchi University
Taipei 11605, Taiwan

E-mail: chengt@nccu.edu.tw
Outline
• 2.1 Assembling the research design
• 2.2 How to randomize
• 2.3 Preparation of data files for the analysis
• 2.4 A statistical model for the experiment
• 2.5 Estimation of the model parameters with least squares
• 2.6 Sums of squares to identify important sources of variation
• 2.7 A treatment effects model
• 2.8 Degrees of freedom
• 2.9 Summaries in the analysis of variance table
• 2.10 Tests of hypotheses about linear models
• 2.11 Significance testing and tests of hypotheses
• 2.12 Standard errors and confidence intervals for treatment means
• 2.13 Unequal replication of the treatments
• 2.14 How many replications of the F test?
• 2A.1 Appendix: Expected values
• 2A.2 Appendix: Expected mean squares
Objectives

• Develop a statistical model whose parameters describe the experiment according to the research hypothesis.
• Apply the least squares method to estimate the parameters.
• Estimate the experimental error variance, which is needed for
  • standard errors
  • confidence intervals
  • hypothesis tests
• Derive the fundamental partition of the sum of squares.
  • ANOVA table
Research design for a study

• Research hypothesis
• Treatment design
• Experiment design or observational study design

=⇒ Treatments are developed to address specific research questions and hypotheses that arise in the research program.
Completely Randomized Design (CRD)

• This chapter focuses on research designs with a one-way classification of treatments in a CRD with equal replication of the treatments.
• One of the simplest designs is CRD.
• CRD can be implemented when there is
• only 1 factor with at least 2 levels or treatments
• no need to block
• treatments can be randomly assigned to EUs
Ex. 2.1
• Suppression of bacterial growth on stored meats
• Existing methods to suppress microbial growth are
• Plastic wrapping
• Vacuum packaging
• Possible alternatives are
• CO2
• Mixture of CO, O2, and N2.
• Research hypothesis: Some form of controlled gas atmosphere
would be more effective for meat storage.
• Treatment design: Considering 4 packaging conditions
1. commercial plastic wrap
2. vacuum package
3. 1% CO, 40% O2, and 59% N2
4. 100% CO2
• Experiment design: Completely randomized design
• 3 beef steaks of approximately the same size were
randomly assigned to each packaging condition.
• Response: number of psychrotrophic bacteria on the meat was
measured after 9 days of storage at 4°C.
Psychrotrophic bacteria
• Psychrotrophic bacteria are bacteria that are capable of
surviving or even thriving in a cold environment.
Psychrotrophic bacteria are of particular concern to the
dairy industry.
• Most are killed by pasteurization; however, they can be
present in milk as post-pasteurization contaminants due
to less than adequate sanitation practices.
• According to The Food Science Department at Cornell
University, psychrotrophs are bacteria capable of growth
at temperatures at or below 7°C (44.6°F).
• At freezing temperatures, growth of psychrotrophic bacteria becomes negligible or virtually stops.
• Psychrotrophic bacteria also fall under the more general
category of psychrophiles.
2.2 How to randomize?
I. Randomizing treatments in experimental designs

• Suppose there are t treatments and the i-th treatment has ri replications.
• N = Σ_{i=1}^t ri is the total number of experimental units (the sample size).
• How to randomly assign treatments to EUs?
  • Step 1: Assign the sequence of numbers 1 to N to the N EUs.
  • Step 2: Obtain a random permutation of the numbers 1 to N.
  • Step 3: Assign the first r1 numbers from Step 2 to the 1st treatment, the next r2 numbers from Step 2 to the 2nd treatment, and so on.
Methods to get random permutation

• random number table


• computer program
• drawing cards or paper slips
Ex. 2.1

• t=4, r=3 =⇒ 12 steaks


• Step 1: Assign 1 to 12 to 12 steaks.
• Step 2: suppose the permutation sequence is
• 1, 6, 7, 12, 5, 3, 10, 9, 2, 8, 4, 11
• Step 3:
• treatment A: steak 1, 6, 7
• treatment B: steak 12, 5, 3
• treatment C: steak 10, 9, 2
• treatment D: steak 8, 4, 11
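The same assignment can be generated in software. Below is a minimal R sketch of Steps 1–3 for this example; the seed and treatment labels are illustrative, not from the text.

  # CRD randomization for Ex. 2.1: t = 4 treatments, r = 3 replications each
  set.seed(123)                 # any seed works; 123 is arbitrary
  N <- 12
  perm <- sample(1:N)           # Step 2: a random permutation of 1..N
  # Step 3: first 3 numbers to treatment A, next 3 to B, and so on
  split(perm, rep(c("A", "B", "C", "D"), each = 3))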
II. (Randomly) Selecting units for comparative
observational studies

• The basic units of the investigation are either self-selected


into the treatment groups or they exist in their
characteristic group.
• A probability sample of units should be selected from
available members of each treatment population.
• Step 1: Identify the populations corresponding to
treatments, construct the list of all available units in each
population, and assign the numbers from 1, 2, . . . , Ni to the
i th population.
• Step 2: For each population, obtain ri distinct numbers from 1, 2, . . . , Ni as the sample to be studied.
2.4+2.7 The statistical model
I. Cell mean model
• Consider the following model

yij = µi + εij , j = 1, · · · , ri , i = 1, · · · , t (2.1)

• yij : the j th observation (or response) from the i th


treatment group
• µi : the mean of the i th treatment group (population)
• εij : the experiment error (or the error term) for the j th
observation from the i th treatment group.
• Note: in the balanced case, ri = r for i = 1, · · · , t.
• Let N = Σ_{i=1}^t ri be the total number of observations; in the balanced case N = t × r.
• Model assumption: the εij are iid with E(εij) = 0 and V(εij) = σ².
• Note: to make inferences, such as constructing C.I.s, P.I.s, or testing hypotheses, we further assume εij ~ iid N(0, σ²).
II. Treatment effects model (Section 2.7)
• Consider the following model

yij = µ + τi + εij , j = 1, · · · , ri ; i = 1, · · · , t (2.13)

• µ = µ̄· is the overall mean
• τi = µi − µ is the i-th treatment effect
• In the fixed effects model, µ and the τi's are fixed but unknown, which gives (t + 1) parameters for t means.
• To make them estimable, we can define

  µ = µ̄· = Σ_{i=1}^t ri µi / Σ_{i=1}^t ri ,

  which leads to the constraint Σ_{i=1}^t ri τi = 0.
Remarks
• In the balanced case, ri = r for i = 1, · · · , t, so that

  µ = µ̄· = Σ_{i=1}^t r µi / (rt) = Σ_{i=1}^t µi / t ,  τi = µi − µ̄· ,  and Σ_{i=1}^t τi = 0.

• Eqs (2.1) and (2.13) are special cases of the general linear model:

  y = β0 + β1 x1 + β2 x2 + · · · + βk xk + ε   (2.2)

• Hence, these models can be written in vector notation as

  Y = Xβ + ε

• Y is the column vector of responses


• X is the design matrix
• β is the parameter vector or vector of the regression
coefficients
• ε is the vector of errors
How to write (2.1) as (2.2)?

• Without intercept:
  • β0 = 0, β1 = µ1, · · · , βt = µt
  • xi = I(observation belongs to the i-th treatment group), i = 1, · · · , t.
• With intercept:
  • β0 ≠ 0
  • Let xi = I(observation belongs to the i-th treatment group), i = 1, · · · , t − 1,
  • then β0 = µt and βi = µi − µt.
• How to write (2.13) as (2.2)? (See the R sketch below.)
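A small R sketch of both parameterizations (a hypothetical layout with t = 4 and r = 3). Note that R's default treatment coding drops the first level, so its "with intercept" version has β0 = µ1 and βi = µ_{i+1} − µ1, whereas the slide (and SAS) uses the last level as the reference.

  grp <- factor(rep(1:4, each = 3))   # t = 4 groups, r = 3 replications
  model.matrix(~ grp - 1)             # without intercept: t indicator columns, beta_i = mu_i
  model.matrix(~ grp)                 # with intercept: R drops level 1, so
                                      # beta_0 = mu_1 and beta_i = mu_{i+1} - mu_1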
Statistical hypothesis
• H0 : µ1 = · · · = µt
  Ha : µi ≠ µk for some i ≠ k
• or
  H0 : τi = 0, i = 1, · · · , t
  Ha : τi ≠ 0 for some i
• Remarks:
  • H0 corresponds to the reduced model: yij = µ + εij .
  • Ha corresponds to the full model: yij = µi + εij or yij = µ + τi + εij .
• May include covariates measured prior to the experiment
in the model:
yij = µi + βxij + εij
or
yij = µ + τi + βxij + εij
• See Chapter 17 Analysis of Covariance.
2.5+2.6+2.13 Estimating parameters with least
squares and SS

• Least squares method
  • estimate the parameters of a linear model by minimizing the sum of squared errors
• Recall: under normal models, the LSEs coincide with the MLEs.
Estimators for full model
• yij = µi + εij =⇒ εij = yij − µi , ε̂ij = yij − µ̂i

  Q = Σ_{i=1}^t Σ_{j=1}^{ri} ε̂ij² = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − µ̂i)²

• Normal equations (or least squares equations):

  ∂Q/∂µi = (∂/∂µi) Σ_{j=1}^{ri} (yij − µi)² = −2 Σ_{j=1}^{ri} (yij − µi) = 0

  =⇒ ri µi = Σ_{j=1}^{ri} yij = yi. ,  i = 1, · · · , t

  =⇒ µ̂i = (1/ri) Σ_{j=1}^{ri} yij = ȳi. ,  i = 1, · · · , t

  which are the LSE's.
Estimators for full model (continued)
• Hence

  SSE (or SSEf) = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − µ̂i)²
                = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳi.)² = Σ_{i=1}^t (ri − 1)si²

• si² is an estimator of σ² based on the i-th group only.
• Therefore, a pooled estimator of σ² is

  σ̂² = s² = Σ_{i=1}^t (ri − 1)si² / Σ_{i=1}^t (ri − 1) = SSE/(N − t), or SSEf/(N − t)

• For the balanced case, σ̂² = s² = SSE/(t(r − 1)).
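A quick R check of the pooled estimator, using the Ex. 2.1 steak data listed later in these notes; it reproduces SSE = 0.9268 and MSE = 0.11585 from the SAS output.

  y <- list(A = c(7.66, 6.98, 7.80), B = c(5.26, 5.44, 5.80),
            C = c(7.41, 7.33, 7.04), D = c(3.51, 2.91, 3.66))
  s2 <- sapply(y, var)               # per-group variances s_i^2
  ri <- sapply(y, length)            # replications r_i
  SSE <- sum((ri - 1) * s2)          # 0.9268
  SSE / (sum(ri) - length(y))        # pooled sigma^2-hat = SSE/(N - t) = 0.11585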
Estimator for reduced model

• yij = µ + εij =⇒ εij = yij − µ, ε̂ij = yij − µ̂

  Q = Σ_{i=1}^t Σ_{j=1}^{ri} ε̂ij² = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − µ̂)²

  ∂Q/∂µ = (∂/∂µ) Σ_{i=1}^t Σ_{j=1}^{ri} (yij − µ)² = −2 Σ_{i=1}^t Σ_{j=1}^{ri} (yij − µ) = 0

  =⇒ µ̂ = (1/N) Σ_{i=1}^t Σ_{j=1}^{ri} yij = ȳ..
Estimator for reduced model (continued)

• Hence

  SSEr = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − µ̂)² = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳ..)²

  is the sum of squares for error from the reduced model.


• It is usually called the total sum of squares expressed as
deviations from the grand mean.
Partition of the total sum of squares

  SSE = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − µ̂i)² = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳi.)² = Σ_{i=1}^t (ri − 1)si²

  SSEr = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − µ̂)² = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳ..)²

  SSEr − SSE = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳ..)² − Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳi.)²
Partition of the total sum of squares (continued)

• Since yij − ȳ.. = (yij − ȳi.) + (ȳi. − ȳ..),
• squaring and summing gives

  Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳ..)² = Σ_{i=1}^t Σ_{j=1}^{ri} [(yij − ȳi.) + (ȳi. − ȳ..)]²

  and the cross-product term vanishes because Σ_{j=1}^{ri} (yij − ȳi.) = 0 for each i, so

  Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳ..)² = Σ_{i=1}^t Σ_{j=1}^{ri} (ȳi. − ȳ..)² + Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳi.)²

  SST = SSTR + SSE  ⇐= Partitioning the total sum of squares
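A numeric check of this partition with the Ex. 2.1 data (a minimal R sketch; the variable names are ours):

  y   <- c(7.66, 6.98, 7.80, 5.26, 5.44, 5.80,
           7.41, 7.33, 7.04, 3.51, 2.91, 3.66)
  grp <- rep(c("A", "B", "C", "D"), each = 3)
  ybari <- ave(y, grp)                  # treatment mean attached to each observation
  SST  <- sum((y - mean(y))^2)          # 33.7996
  SSTR <- sum((ybari - mean(y))^2)      # 32.8728
  SSE  <- sum((y - ybari)^2)            # 0.9268
  all.equal(SST, SSTR + SSE)            # TRUE: the partition holds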
Notes
• The sum of squares between the treatment means and the grand mean, the so-called treatment sum of squares, is denoted as

  SSTR = Σ_{i=1}^t Σ_{j=1}^{ri} (ȳi. − ȳ..)² = Σ_{i=1}^t ri (ȳi. − ȳ..)² = Σ_{i=1}^t ri τ̂i²

• SST = SSTR + SSE =⇒ SSE = SST − SSTR.
• Though the textbook uses SST to denote the treatment sum of squares, that usage is quite unconventional.
• So these notes keep the conventional terminology: "SST" means the total sum of squares and "SSTR" means the treatment sum of squares.
• The textbook writes nT for the total number of observations; these notes write N.
Notes (continued)

• The total sum of squares has been partitioned into two


parts
• One represents variation among the treatment means
• The other represents experimental error

                                Textbook               Notes
  Treatment sum of squares      SST or SS Treatment    SSTR
  Total sum of squares          SSEr or SS Total       SST
  Error terms of model          e                      ε
Estimating treatment effects

• Since µ = µ̄· and µ̂ = ȳ.., for i = 1, · · · , t:

  τi = µi − µ̄·
  τ̂i = µ̂i − µ̂ = ȳi. − ȳ..

• Equivalently, find the LSE directly for the model

  yij = µ + τi + εij

  under the constraint Σ_{i=1}^t ri τi = 0.
2.8 Degrees of Freedom

• The degrees of freedom may be thought of as the number


of statistically independent elements in the sums of
squares.
• The value of the degrees of freedom represents the
number of independent pieces of information in the sums
of squares.
• For ease of illustration, consider the balanced case.
I. Degrees of freedom for SST

• SST = SSEr = Σ_{i=1}^t Σ_{j=1}^{r} (yij − ȳ..)², which is composed of the (yij − ȳ..)'s.
• But Σ_{i=1}^t Σ_{j=1}^{r} (yij − ȳ..) = 0, which means the (yij − ȳ..)'s are linearly dependent: any one of them is the negative of the sum of the other (N − 1) values.
• Hence, df(SST) = N − 1.
• Equivalently, ȳ.. is used to estimate the grand mean µ (i.e. µ̄·), and 1 df is lost for the estimated parameter:
  df(SST) = (sample size) − (# of parameters to be estimated) = N − 1.
II. Degrees of freedom for SSTR

• SSTR = Σ_{i=1}^t r (ȳi. − ȳ..)² is composed of the (ȳi. − ȳ..)'s.
• But Σ_{i=1}^t (ȳi. − ȳ..) = 0, which means the (ȳi. − ȳ..)'s are linearly dependent: any one of them is the negative of the sum of the other (t − 1) values.
• Hence, df(SSTR) = t − 1.
• Equivalently, the treatment effects are τ1, · · · , τt with Σ_{i=1}^t τi = 0, so only t − 1 of them need to be estimated.
• Hence, df(SSTR) = t − 1.
III. Degrees of freedom for SSE

• SSE = Σ_{i=1}^t Σ_{j=1}^{r} (yij − ȳi.)² is composed of the (yij − ȳi.)'s.
• For each i, Σ_{j=1}^{r} (yij − ȳi.) = 0, which gives r − 1 degrees of freedom per group, i = 1, · · · , t.
• df(SSE) = t × (r − 1) = N − t.
• Equivalently, the full model has t unknown parameters µ1, · · · , µt to be estimated, so
  df(SSE) = (sample size) − (# of parameters to be estimated) = N − t.
• Also, df(SSE) = df(SST) − df(SSTR) = (N − 1) − (t − 1) = N − t.
• df(SST) = df(SSTR) + df(SSE) ⇐= Partitioning of d.f.
2.9-2.11 ANOVA Table and Tests of Hypotheses
• The analysis of variance table summarizes our knowledge about variability in the observations from the experiment.
• Mean square: MS = SS/d.f.
• ANOVA TABLE

  Source        SS     d.f.    MS     F              p-value
  Treatments    SSTR   t − 1   MSTR   F = MSTR/MSE
  Error         SSE    N − t   MSE
  Total         SST    N − 1

• Remarks:
  • E(MSE) = σ² =⇒ σ̂² = MSE, i.e. MSE is unbiased for σ².
  • E(MSTR) = σ² + Σ_{i=1}^t ri τi² / (t − 1). (Please refer to 2A.2 in the textbook.)
  • If there is no treatment effect, then MSTR is also unbiased for σ².
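The whole table can be reproduced with R's built-in ANOVA machinery (a sketch; the data-frame name is ours). The printed table matches the SAS output for Example 2.1 shown later, with F = 94.58.

  steak <- data.frame(
    method = factor(rep(1:4, each = 3)),
    y = c(7.66, 6.98, 7.80, 5.26, 5.44, 5.80,
          7.41, 7.33, 7.04, 3.51, 2.91, 3.66))
  anova(lm(y ~ method, data = steak))
  # method:    Df 3, Sum Sq 32.873, Mean Sq 10.958, F = 94.58
  # Residuals: Df 8, Sum Sq 0.927,  Mean Sq 0.116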
Remarks

• MSE is the within-treatments estimate of σ², and it is not affected by whether H0 is true or not.
• When there is no treatment effect, MSTR (the between-treatments estimator) is also unbiased for σ².
• However, if H0 is false, the between-treatments estimator overestimates σ², while the within-treatments approach still provides a good estimate of σ² whether H0 is true or false.
• Thus, if H0 is true, MSTR/MSE will be close to 1; if H0 is false, this ratio will tend to be large.
The F test for the equality of t population means

• To test the equality of t treatment means

  H0 : µ1 = · · · = µt
  Ha : the µi's are not all equal

• Test statistic:

  F = MSTR/MSE = [SSTR/(t − 1)] / [SSE/(N − t)] ~ F_{t−1, N−t} under H0

• Rejection region: F ≥ F_{α; t−1, N−t} (right-tailed test)
• p-value = P(F_{t−1, N−t} ≥ F0), where F0 is the observed value of the test statistic.
• If F0 ≥ F_{α; t−1, N−t}, or p-value ≤ α, then reject H0.
F test

• H0 : µ1 = · · · = µt =⇒ reduced model
• Ha : the µi's are not all equal =⇒ full model
• Hence, the test statistic is

  F = [(SSEr − SSEf)/(dfr − dff)] / [SSEf/dff]
    = [(SST − SSE)/(dfTotal − dfError)] / [SSE/dfError]
    = [SSTR/(t − 1)] / [SSE/(N − t)] = MSTR/MSE
Cochran's Theorem

• Let Zi ~ N(0, 1), i = 1, · · · , ν, be independent and suppose

  Σ_{i=1}^ν Zi² = Q1 + Q2 + · · · + Qs

  with s ≤ ν.
• Then Q1, Q2, · · · , Qs are independent χ² random variables with ν1, ν2, · · · , νs d.f., respectively, if and only if

  ν = ν1 + ν2 + · · · + νs.
Why F

Why does MSTR/MSE = [SSTR/(t − 1)] / [SSE/(N − t)] ~ F_{t−1, N−t} under H0?

• Since SST = SSTR + SSE, we have SST/σ² = SSTR/σ² + SSE/σ².
• Under H0 : µ1 = · · · = µt =⇒ yij ~ iid N(µ, σ²).
• Hence SST/σ² = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳ..)² / σ² ~ χ²_{N−1}.
• Since df(Total) = df(Treatment) + df(Error), i.e. N − 1 = (t − 1) + (N − t), by Cochran's Theorem,
  SSTR/σ² ~ χ²_{t−1}, SSE/σ² ~ χ²_{N−t}, and they are independent.
• Hence

  [ (SSTR/σ²)/(t − 1) ] / [ (SSE/σ²)/(N − t) ] = MSTR/MSE ~ F_{t−1, N−t} under H0.
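A small simulation illustrating this distributional result under H0 (all parameters here are arbitrary choices for the demonstration):

  set.seed(1)
  t <- 4; r <- 3; N <- t * r
  g <- rep(1:t, each = r)
  f <- replicate(10000, {
    y <- rnorm(N)                      # H0 true: all treatment means equal
    m <- ave(y, g)                     # fitted treatment means
    (sum((m - mean(y))^2) / (t - 1)) / (sum((y - m)^2) / (N - t))   # MSTR/MSE
  })
  # the simulated ratios should follow F with (t-1, N-t) d.f.
  qqplot(qf(ppoints(10000), t - 1, N - t), f); abline(0, 1)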
2.12 Standard errors and CIs for treatment means

• µi is the treatment mean for the i-th group.
• Estimator of µi: µ̂i = ȳi.
• Variance of ȳi.: σ²_{ȳi.} = σ²/ri
• Estimator of σ²: s² = MSE
• Standard error of ȳi.: s_{ȳi.} = √(s²/ri)
• 100(1 − α)% CI for µi: ȳi. ± t_{α/2; N−t} × s_{ȳi.}
• 100(1 − α)% CI for µi − µl: (ȳi. − ȳl.) ± t_{α/2; N−t} × √( MSE (1/ri + 1/rl) )
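For the Ex. 2.1 data, a short R sketch of the 95% CIs for the four treatment means (MSE and the means are taken from the SAS output later in these notes):

  MSE <- 0.11585; N <- 12; t <- 4; r <- 3
  ybari <- c(A = 7.48, B = 5.50, C = 7.26, D = 3.36)   # LS means
  half  <- qt(0.975, df = N - t) * sqrt(MSE / r)       # t_{.025; 8} x SE = 0.453
  cbind(lower = ybari - half, upper = ybari + half)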
2.14 How many replications?

• The number of replications is affected by:
  • σ²: the error variance
  • δ: the size of a physically significant difference between two treatments
  • α: the significance level
  • 1 − β: the power of the test.
The power of the F test
• The power of the F test when H0 is false is

  1 − β = P(F ≥ F_{α; t−1, N−t} | H0 is false)

• When H0 is false, F = MSTR/MSE has the noncentral F distribution with noncentrality parameter

  λ = Σ_{i=1}^t ri τi² / σ²

  and degrees of freedom t − 1 and N − t.
• Hence, the power (1 − β) depends on the noncentral F distribution.
• Appendix IX gives charts of the required number of replications for given α, 1 − β, σ², ν1, ν2, and

  Φ = √(λ/t) = √( Σ_{i=1}^t ri τi² / (tσ²) )

• Remark: when H0 is true, λ = Σ_{i=1}^t ri τi² / σ² = 0.
Ex 2.3
• On p. 64, Φ² = r × 0.58, ν1 = 2, ν2 = t(r − 1) = 3(r − 1).
• Using r = 5, Φ = √(r × 0.58) = √2.9 = 1.7 and ν2 = 12.
• From Appendix IX, the power is about 0.63.
Appendix IX (charts of required replications; not reproduced here)
Noncentral F distribution

• A noncentral F with noncentrality λ arises as the ratio of two independent chi-square random variables, (V1/ν1)/(V2/ν2), where the numerator V1 is a noncentral chi-square with noncentrality λ.
• Note: the noncentral chi-square distribution is a generalization of the chi-square distribution.
• If Wi ~ N(µ, σ²), i = 1, . . . , n, are independent, then

  (1/σ²) Σ_{i=1}^n Wi² ~ χ²(n; λ),

  i.e. the noncentral chi-square distribution with two parameters:
  • n: degrees of freedom
  • λ = nµ²/σ²: the noncentrality parameter.
• Note: λ = E(χ′²_ν) − E(χ²_ν), since the noncentral chi-square has mean ν + λ while the central one has mean ν.
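A short simulation check of the mean relation E[χ²(n; λ)] = n + λ (n, µ, and σ = 1 are arbitrary choices here):

  n <- 5; mu <- 2; lambda <- n * mu^2                # sigma = 1, so lambda = n*mu^2 = 20
  w <- replicate(1e5, sum(rnorm(n, mean = mu)^2))
  c(simulated = mean(w), theoretical = n + lambda)   # both approximately 25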
Example 2.1

                    Storage method
  Observation      A        B        C        D
       1         7.66     5.26     7.41     3.51
       2         6.98     5.44     7.33     2.91
       3         7.80     5.80     7.04     3.66
      yi.       22.440   16.500   21.780   10.080

• Factor: storage method
• 4 treatments, t = 4.
• r1 = · · · = r4 = 3, N = 4 × 3 = 12.
Example 2.1 (continued)

  SST = Σ_{i=1}^t Σ_{j=1}^{r} (yij − ȳ..)² = Σ_{i=1}^4 Σ_{j=1}^3 yij² − y..²/N
      = 451.5196 − 70.8²/12 = 33.7996

  SSTR = Σ_{i=1}^t Σ_{j=1}^{ri} (ȳi. − ȳ..)² = Σ_{i=1}^4 3(ȳi. − ȳ..)²
       = Σ_{i=1}^4 yi.²/3 − y..²/N = 1351.778/3 − 417.72 = 32.8728

• Note: also SSTR = Σ_{i=1}^4 3τ̂i².
Example 2.1 (continued)

• H0 :µ1 =. . . . . . . . . =µ4
• Ha : µj , µl , for some j , l
• Test statistic:

MSTR SSTR /(t − 1) 32.8728/3


F= = = = 94.58
MSE SSE /(N − t) 0.9268/8
• Critical value: Fα ;t−1,N −t = F0.05;3,8 = 4.07.
• Hence, reject H0 .
• Conclude the treatments differ w.r.t. the number of
psychrotrophic bacteria observed on the meat store under
different storage method.
Example 2.1 (continued)- SAS
• options ls=75 ps=60 nocenter nonumber;
• data steak;
• input method y@@;
• datalines;
• 1 7.66 1 6.98 1 7.80
• 2 5.26 2 5.44 2 5.80
• 3 7.41 3 7.33 3 7.04
• 4 3.51 4 2.91 4 3.66
• ;
• title1 ’Meat storage experiment’;
• proc print data=steak;
• run;
• symbol1 v=circle;
• title1 ’Scatter Plot of Y vs. Method’;
• proc gplot data=steak;
• plot y*method;
• run;
Example 2.1 (continued)- SAS
• options nocenter nodate nonumber;
• data steak;
• input method y@@;
• datalines;
• 1 7.66 1 6.98 1 7.80
• 2 5.26 2 5.44 2 5.80
• 3 7.41 3 7.33 3 7.04
• 4 3.51 4 2.91 4 3.66
• ;
• proc glm;
• class method;
• model y = method/solution e;
• lsmeans method;
• run;
Example 2.1 (continued)-Output

The SAS System


The GLM Procedure
Class Level Information
Class Levels Values
method 4 1 2 3 4
Number of Observations Read 12
Number of Observations Used 12

The GLM Procedure


Dependent Variable: y
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 3 32.87280000 10.95760000 94.58 <.0001
Error 8 0.92680000 0.11585000
Corrected Total 11 33.79960000

R-Square Coeff Var Root MSE y Mean


0.972580 5.768940 0.340367 5.900000

Source DF Type I SS Mean Square F Value Pr > F


method 3 32.87280000 10.95760000 94.58 <.0001

Source DF Type III SS Mean Square F Value Pr > F


method 3 32.87280000 10.95760000 94.58 <.0001

Standard
Parameter Estimate Error t Value Pr > |t|
Intercept y4• 3.360000000 B 0.19651124 17.10 <.0001
method 1 y1• - y4• 4.120000000 B 0.27790886 14.83 <.0001
method 2 y2• - y4• 2.140000000 B 0.27790886 7.70 <.0001
method 3 y3• - y4• 3.900000000 B 0.27790886 14.03 <.0001
method 4 0.000000000 B . . .

NOTE: The X'X matrix has been found to be singular, and a generalized inverse was
used to solve the normal equations. Terms whose estimates are followed by the
letter 'B' are not uniquely estimable.
Least Squares Means
method y LSMEAN
1 7.48000000 y1•
2 5.50000000 y2•
3 7.26000000 y3•
4 3.36000000 y4•
Example 2.2 Detecting the onset of Phlebitis

• CRD
• EU: rabbit
• Treatments: 3 intravenous treatments
• a vehicle solution + amiodarone
• a vehicle solution (as control)
• a saline solution (as placebo)
• Response: ear temperature difference
• Complications with the experiment protocol resulted in an unbalanced design.
Example 2.2 –SAS
• options nocenter nodate nonumber;
• data phlebitis;
• input method y@@;
• datalines;
• 1 2.2 1 1.6 1 0.8 1 1.8 1 1.4 1 0.4 1 0.6 1 1.5 1 0.5
• 2 0.3 2 0.0 2 0.6 2 0.0 2 -0.3 2 0.2
• 3 0.1 3 0.1 3 0.2 3 -0.4 3 0.3 3 0.1 3 0.1 3 -0.5
• ;
• proc glm;
• class method;
• model y = method/solution e;
• lsmeans method;
• run;
Example 2.2 (continued)-Output

The SAS System


The GLM Procedure
Class Level Information
Class Levels Values
method 3 1 2 3
Number of Observations Read 23
Number of Observations Used 23
General Form of Estimable Functions
Effect Coefficients
Intercept L1
method 1 L2
method 2 L3
method 3 L1-L2-L3

Dependent Variable: y
Sum of
Source DF Squares Mean Square F Value Pr > F
Model 2 7.21623188 3.60811594 16.58 <.0001
Error 20 4.35333333 0.21766667
Corrected Total 22 11.56956522

R-Square Coeff Var Root MSE y Mean


0.623725 92.50513 0.466548 0.504348

Source DF Type I SS Mean Square F Value Pr > F


method 2 7.21623188 3.60811594 16.58 <.0001
Source DF Type III SS Mean Square F Value Pr > F
method 2 7.21623188 3.60811594 16.58 <.0001

Standard
Parameter Estimate Error t Value Pr > |t|
Intercept y3• -0.000000000 B 0.16494949 -0.00 1.0000
method 1 y1• − y3• 1.200000000 B 0.22670139 5.29 <.0001
method 2 y2• − y3• 0.133333333 B 0.25196450 0.53 0.6025
method 3 0.000000000 B . . .

NOTE: The X'X matrix has been found to be singular, and a generalized inverse was
used to solve the normal equations. Terms whose estimates are followed by the
letter 'B' are not uniquely estimable.
Computations

• y.. = Σ_{i=1}^t Σ_{j=1}^{ri} yij
• yi. = Σ_{j=1}^{ri} yij
• SST = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳ..)² = Σ_{i=1}^t Σ_{j=1}^{ri} yij² − y..²/N
• SSTR = Σ_{i=1}^t ri (ȳi. − ȳ..)² = Σ_{i=1}^t ri τ̂i² = Σ_{i=1}^t yi.²/ri − y..²/N
• SSE = Σ_{i=1}^t Σ_{j=1}^{ri} (yij − ȳi.)² = Σ_{i=1}^t Σ_{j=1}^{ri} yij² − Σ_{i=1}^t yi.²/ri = SST − SSTR
• Note: for a balanced design (r1 = r2 = · · · = rt = r):
  • SST = Σ_{i=1}^t Σ_{j=1}^{r} (yij − ȳ..)² = Σ_{i=1}^t Σ_{j=1}^{r} yij² − y..²/(rt)
  • SSTR = Σ_{i=1}^t r(ȳi. − ȳ..)² = r Σ_{i=1}^t τ̂i² = Σ_{i=1}^t yi.²/r − y..²/(rt)
  • SSE = Σ_{i=1}^t Σ_{j=1}^{r} (yij − ȳi.)² = Σ_{i=1}^t Σ_{j=1}^{r} yij² − Σ_{i=1}^t yi.²/r = SST − SSTR
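These shortcut formulas coded directly in R for the unbalanced Ex. 2.2 data; the results match the SAS output above (SST = 11.5696, SSTR = 7.2162, SSE = 4.3533).

  y <- c(2.2, 1.6, 0.8, 1.8, 1.4, 0.4, 0.6, 1.5, 0.5,   # method 1
         0.3, 0.0, 0.6, 0.0, -0.3, 0.2,                 # method 2
         0.1, 0.1, 0.2, -0.4, 0.3, 0.1, 0.1, -0.5)      # method 3
  g  <- rep(1:3, times = c(9, 6, 8))
  N  <- length(y)
  yd <- sum(y)                          # y..
  yi <- tapply(y, g, sum)               # y_i.
  ri <- tabulate(g)                     # r_i
  SST  <- sum(y^2) - yd^2 / N           # 11.5696
  SSTR <- sum(yi^2 / ri) - yd^2 / N     # 7.2162
  SSE  <- SST - SSTR                    # 4.3533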
Remarks

• If r1 = r2 = · · · = rt = r, then

  MSTR = Σ_{i=1}^t r(ȳi. − ȳ..)² / (t − 1) = r S²_ȳ ,

  where S²_ȳ is the sample variance of the treatment means ȳ1., · · · , ȳt..
• If r1 = r2 = · · · = rt = r, then N = rt, N − t = t(r − 1), and

  MSE = Σ_{i=1}^t Σ_{j=1}^{r} (yij − ȳi.)² / (N − t) = Σ_{i=1}^t (r − 1)Si² / (N − t)
      = (r − 1) Σ_{i=1}^t Si² / (t(r − 1)) = (1/t) Σ_{i=1}^t Si² ,

  which is the average of the sample variances.
Remarks

• Proc GLM is the preferred procedure for doing univariate


analysis of variance (ANOVA), multivariate analysis of
variance (MANOVA), and most types of regression.
• Proc ANOVA can also be used for the analysis of variance when the design is balanced.
• The only advantage of proc anova is that it uses fewer computational resources.
• Its use is recommended only for large projects in which it has been ensured that the design is balanced.
Treatment effects model (Section 2.7)

• Consider the following model

yij = µ + τi + εij , j = 1, · · · , ri ; i = 1, · · · , t (2.13)

which can be written in vector and matrix notation as

Y = Xβ + ε
I. The estimator β̂ of Y = Xβ + ε

• Without the restriction Σ_{i=1}^t ri τi = 0.
• Then

  Y = (y11, · · · , y1r1, y21, · · · , y2r2, · · · , yt1, · · · , yt,rt)′   (N × 1)

  X = [ 1r1  1r1  0    · · ·  0
        1r2  0    1r2  · · ·  0
        1r3  0    0    · · ·  0
         ⋮    ⋮    ⋮    ⋱      ⋮
        1rt  0    0    · · ·  1rt ]   (N × (t + 1))

  where the first column corresponds to µ and, for i = 1, · · · , t, column i + 1 contains 1ri in the rows of the i-th treatment and zeros elsewhere.
I (continued)

  β = (µ, τ1, τ2, · · · , τt−1, τt)′   ((t + 1) × 1)

• 1ri and 0ri are ri × 1 vectors of 1's and 0's, respectively.
• Notice that the columns of X are linearly dependent: the last t columns sum to the first column.

  X′X = [ N   r1  r2  · · ·  rt
          r1  r1  0   · · ·  0
          r2  0   r2  · · ·  0
           ⋮   ⋮   ⋮   ⋱      ⋮
          rt  0   0   · · ·  rt ]   ((t + 1) × (t + 1))

  is not of full rank.
I (continued)

  X′y = (y.., y1., y2., · · · , yt−1,., yt.)′   ((t + 1) × 1)

• Hence β̂ = (X′X)⁻ X′y, where (X′X)⁻ is a generalized inverse of X′X.
Case 1

• Let

  (X′X)⁻ = [ 0  0     0     · · ·  0
             0  1/r1  0     · · ·  0
             0  0     1/r2  · · ·  0
             ⋮   ⋮     ⋮     ⋱      ⋮
             0  0     0     · · ·  1/rt ]

• Then

  β̂ = (X′X)⁻ X′y = (0, ȳ1., ȳ2., · · · , ȳt−1,., ȳt.)′
     = (µ̂, µ̂ + τ̂1, µ̂ + τ̂2, · · · , µ̂ + τ̂t−1, µ̂ + τ̂t)′ with µ̂ = 0.   (**)

• Notice that the β̂ derived in Case 1 is a biased estimator, because

  E(β̂) = (0, µ + τ1, µ + τ2, · · · , µ + τt−1, µ + τt)′ ≠ (µ, τ1, τ2, · · · , τt−1, τt)′ = β.
Case 2

• Let

  (X′X)⁻ = [ (X1′X1)⁻¹  0
             0           0 ]

  be another generalized inverse of X′X, where X1 denotes the first t columns of X (the columns for µ, τ1, · · · , τt−1).
• Then

  β̂ = (X′X)⁻ X′y = (ȳt., ȳ1. − ȳt., ȳ2. − ȳt., · · · , ȳt−1,. − ȳt., 0)′
     = (µ̂ + τ̂t, τ̂1 − τ̂t, τ̂2 − τ̂t, · · · , τ̂t−1 − τ̂t, 0)′.   (***)

• E(β̂) = (µ + τt, τ1 − τt, τ2 − τt, · · · , τt−1 − τt, 0)′ ≠ (µ, τ1, τ2, · · · , τt−1, τt)′ = β,

  so this estimator is also biased.
Example 2.2

  X′X = [ 23  9  6  8
           9  9  0  0
           6  0  6  0
           8  0  0  8 ] ,   X′y = (11.6, 10.8, 0.8, 0)′

• Case 1) Let

  (X′X)⁻ = [ 0  0    0    0
             0  1/9  0    0
             0  0    1/6  0
             0  0    0    1/8 ]

• Hence, by direct computation,

  β̂ = (X′X)⁻ X′y = (0, 1.2, 0.1333, 0)′.
Example 2.2 (continued)

• Also, according to (**), since ȳ1. = 1.2, ȳ2. = 0.1333, ȳ3. = 0.0, and ȳ.. = 0.5043,

  β̂ = (0, ȳ1., ȳ2., ȳ3.)′ = (0, 1.2, 0.1333, 0)′

  =⇒ τ̂1 = 1.2 − ȳ.. = 0.6957
      τ̂2 = 0.1333 − ȳ.. = −0.3710
      τ̂3 = 0 − ȳ.. = −0.5043.
Example 2.2 (continued) – Case 2

• Let

  (X′X)⁻ = [  1/8   −1/8   −1/8   0
             −1/8   17/72   1/8   0
             −1/8    1/8    7/24  0
              0      0      0     0 ]

• Hence, by direct computation, β̂ = (X′X)⁻ X′y = (0, 1.2, 0.1333, 0)′.
• On the other hand, since ȳ1. = 1.2, ȳ2. = 0.1333, ȳ3. = 0.0, according to (***),

  β̂ = (ȳ3., ȳ1. − ȳ3., ȳ2. − ȳ3., 0)′ = (0, 1.2 − 0, 0.1333 − 0, 0)′

  =⇒ µ̂ + τ̂3 = 0,  τ̂1 − τ̂3 = 1.2,  τ̂2 − τ̂3 = 0.1333.
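Both cases can be verified numerically in R; a sketch using the X′X and X′y above. (MASS::ginv would give yet another generalized inverse, the Moore–Penrose one, so the two slides' choices are coded explicitly.)

  XtX <- matrix(c(23, 9, 6, 8,
                   9, 9, 0, 0,
                   6, 0, 6, 0,
                   8, 0, 0, 8), 4, 4, byrow = TRUE)
  Xty <- c(11.6, 10.8, 0.8, 0)
  G1 <- diag(c(0, 1/9, 1/6, 1/8))                  # Case 1 g-inverse
  all.equal(XtX %*% G1 %*% XtX, XtX)               # TRUE: G1 is a generalized inverse
  drop(G1 %*% Xty)                                 # (0, 1.2, 0.1333, 0)
  G2 <- rbind(cbind(solve(XtX[1:3, 1:3]), 0), 0)   # Case 2 g-inverse
  drop(G2 %*% Xty)                                 # (0, 1.2, 0.1333, 0)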
SAS

• Remark: SAS PROC GLM does not impose the constraint Σ_{i=1}^t ri τi = 0 when forming the design matrix X; its estimates of β correspond to Case 2 (the last treatment level is set to 0).
• proc glm;
• class method;
• model y = method/solution e;
• lsmeans method;
• run;
SAS output
• General Form of Estimable Functions
• Effect       Coefficients
• Intercept    L1
• method 1     L2
• method 2     L3
• method 3     L1-L2-L3
Parameter Estimate Standard Error t Value Pr > |t|
Intercept -0.000000000 B 0.16494949 -0.00 1.0000
method 1 1.200000000 B 0.22670139 5.29 <.0001
method 2 0.133333333 B 0.25196450 0.53 0.6025
method 3 0.000000000 B . . .
⇐= PROC GLM set it to be 0
• NOTE: The X’X matrix has been found to be singular, and a generalized
inverse was used to solve the normal equations. Terms whose
estimates are followed by the letter ’B’ are not uniquely estimable.
• Least Squares Means
• method y LSMEAN
• 1 1.20000000
• 2 0.13333333
• 3 -0.00000000
II. The estimator β̂ of Y = Xβ + ε

• Under the restriction Σ_{i=1}^t ri τi = 0.
• Since τt = −Σ_{i=1}^{t−1} ri τi / rt, there is no need to estimate τt.
• Hence, for the t-th treatment,

  ytj = µ − (r1/rt)τ1 − (r2/rt)τ2 − · · · − (rt−1/rt)τt−1 + εtj ,  j = 1, · · · , rt

  X = [ 1r1   1r1           0r1           · · ·  0r1
        1r2   0r2           1r2           · · ·  0r2
        1r3   0r3           0r3           · · ·  0r3
         ⋮     ⋮             ⋮             ⋱      ⋮
        1rt  −(r1/rt)1rt   −(r2/rt)1rt    · · ·  −(rt−1/rt)1rt ]   (N × t)

  β = (µ, τ1, τ2, · · · , τt−1)′
II (continued)

  X′X = [ N   0                0                · · ·  0
          0   r1(1 + r1/rt)    r1r2/rt          · · ·  r1rt−1/rt
          0   r1r2/rt          r2(1 + r2/rt)    · · ·  r2rt−1/rt
          ⋮    ⋮                ⋮                ⋱      ⋮
          0   r1rt−1/rt        r2rt−1/rt        · · ·  rt−1(1 + rt−1/rt) ]   (t × t)

  X′y = (y.., r1(ȳ1. − ȳt.), r2(ȳ2. − ȳt.), · · · , rt−1(ȳt−1. − ȳt.))′   (t × 1)
II (continued)

• Note that the normal equations (or least squares equations) are X′X β̂ = X′y.
• Solving the first equation: N µ̂ = y.. =⇒ µ̂ = ȳ..
• Using Σ_{i=1}^t ri τi = 0, we obtain τ̂i = ȳi. − ȳ.. , i = 1, . . . , t.
• Or, applying the matrix operations X′X β̂ = X′y =⇒ β̂ = (X′X)⁻¹ X′y, we obtain

  β̂ = (µ̂, τ̂1, τ̂2, · · · , τ̂t−1)′ = (ȳ.., ȳ1. − ȳ.., ȳ2. − ȳ.., · · · , ȳt−1. − ȳ..)′

• Moreover, E(β̂) = β, i.e. β̂ is unbiased for β.
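A sketch verifying µ̂ = ȳ.. and τ̂i = ȳi. − ȳ.. for the Ex. 2.2 data by building X under the restriction (the column names are ours):

  ri <- c(9, 6, 8)
  x1 <- c(rep(1, 9), rep(0, 6), rep(-ri[1]/ri[3], 8))   # tau_1 column
  x2 <- c(rep(0, 9), rep(1, 6), rep(-ri[2]/ri[3], 8))   # tau_2 column
  X  <- cbind(mu = 1, tau1 = x1, tau2 = x2)
  y  <- c(2.2, 1.6, 0.8, 1.8, 1.4, 0.4, 0.6, 1.5, 0.5,
          0.3, 0.0, 0.6, 0.0, -0.3, 0.2,
          0.1, 0.1, 0.2, -0.4, 0.3, 0.1, 0.1, -0.5)
  drop(solve(crossprod(X), crossprod(X, y)))
  # mu = 0.5043 = ybar.., tau1 = 0.6957, tau2 = -0.3710, as on the earlier slide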


One-way ANOVA with unequal variances
• In SAS no tests are available for pairwise comparisons for
one-way anova when variances in groups are not equal.
• In SPSS, the following tests are available: Tamhane T2,
Dunnett’s T3, Games and Howell and Dunnett’s C.
• None of these tests is exact. T2, T3 and C are conservative
procedures, that is, for all of them the FWE does not exceed
ALPHA.
• T2 is more conservative than T3, for large samples they
are approximately equal.
• T3 is more conservative than C for large samples, while C is more conservative for smaller samples. The Games and Howell test is an extension of the Tukey–Kramer test to the case of unequal variances.
• It has higher power (narrower confidence intervals) than T2, T3 or C, but its FWE may exceed ALPHA.
• The Games and Howell test is most liberal (its FWE is most likely to exceed ALPHA) when the variances of the sample means, σi²/ni, are approximately equal.
• Recommendation: Dunnett’s T3 or Dunnett’s C should be
used for pairwise comparisons.
• T3 is recommended when sample sizes in groups are small
• C is recommended when sample sizes are large.
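In base R, a Welch-type overall test and unpooled pairwise t-tests are available; this is a sketch for the Ex. 2.2 data, not one of the SPSS procedures listed above (pairwise.t.test applies a Holm adjustment by default, not the T2/T3/C/Games–Howell critical values).

  phlebitis <- data.frame(
    method = factor(rep(1:3, times = c(9, 6, 8))),
    y = c(2.2, 1.6, 0.8, 1.8, 1.4, 0.4, 0.6, 1.5, 0.5,
          0.3, 0.0, 0.6, 0.0, -0.3, 0.2,
          0.1, 0.1, 0.2, -0.4, 0.3, 0.1, 0.1, -0.5))
  oneway.test(y ~ method, data = phlebitis, var.equal = FALSE)   # Welch one-way ANOVA
  with(phlebitis, pairwise.t.test(y, method, pool.sd = FALSE))   # unpooled pairwise t-tests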
Randomizations in SAS

• Randomly Assigning Subjects to Treatments


• Use the PLAN procedure to design a completely
randomized design. Suppose you have 12 experimental
units, and you want to assign one of two treatments to
each unit.
• Use a DATA step to store the unrandomized design in a
SAS data set, and then call PROC PLAN to randomize it by
specifying one factor with the default type of RANDOM,
having 12 levels.
Randomizations in SAS (continued)
• title ’Completely Randomized Design’;
• /* The unrandomized design */
• data Unrandomized;
• do Unit=1 to 12;
• if (Unit <= 6) then Treatment=1;
• else Treatment=2;
• output;
• end;
• run;
• /* Randomize the design */
• proc plan seed=27371;
• factors Unit=12;
• output data=Unrandomized out=Randomized;
• run;
• proc sort data=Randomized;
• by Unit;
• proc print;
• run;
Randomizations in SAS
• The following output shows that the 12 levels of the unit
factor have been randomly reordered and then lists the
new ordering.
• Completely Randomized Design
The PLAN Procedure
Factor Select Levels Order
Unit 12 12 Random
  ——————— Unit ———————
  8 5 1 4 6 2 12 7 3 9 10 11

  Obs   Unit   Treatment
    1     1        1
    2     2        1
    3     3        2
    4     4        1
    5     5        1
    6     6        1
    7     7        2
    8     8        1
    9     9        2
   10    10        2
Randomizations in SAS
• You can also generate the plan by using a TREATMENTS
statement instead of a DATA step.
• The following statements generate the same plan.
• proc plan seed=27371;
factors Unit=12;
treatments Treatment=12 cyclic (1 1 1 1 1 1 2 2 2 2 2 2);
output out=Randomized;
run;
• Use ranuni to generate random numbers:
• data randnum;
seed=date( );
do i = 1 to 12;
x=ranuni(seed);
output;
end;
drop seed;
run;
Randomizations in SAS (continued)

• proc sort data=randnum out=out;


• by x;
• proc print data=out;
• id i x;
• run;
   i       x
   2   0.07039
   5   0.21552
   3   0.25233
   1   0.27681
  11   0.29604
  10   0.35907
   4   0.58179
   6   0.61062
   9   0.77538
  12   0.82892
   8   0.85409
   7   0.92517
Randomizations in SAS (continued)
• data randnum;
• do i = 1 to 12;
• seed=9876543*i;
• x=ranuni(seed);
• output;
• end;
• run;
• proc sort data=randnum out=out;
• by x;
• proc print data=out;
• id i x;
• run;
• i x seed
11 0.09301 108641973
7 0.15171 69135801
8 0.16317 79012344
10 0.40925 98765430
6 0.42375 59259258
5 0.48920 49382715
12 0.73172 118518516
1 0.77424 9876543
9 0.84099 88888887
Noncentrality parameter

• Noncentrality parameters occur in distributions that are


transformations of a normal distribution.
• The “central” versions are derived from normal
distributions that have a mean of zero; the noncentral
versions generalize to arbitrary means.
• For example, the standard (central) chi-squared
distribution is the distribution of a sum of squared
independent standard normal distributions, i.e., normal
distributions with mean 0, variance 1.
• The noncentral chi-squared distribution generalizes this
to normal distributions with arbitrary mean and variance.
Example 2.3: Use R to determine power and sample
size
• t=3
• df1=t-1
• for(r in 4: 10){
df2=t*(r-1)
cv=qf(0.95,df1,df2)
ncp=r*.38/.22
power1=1-pf(cv,df1,df2,ncp)
print(r)
print(power1)}
• Output [1] 4
[1] 0.4970056
[1] 5
[1] 0.6338798
[1] 6
[1] 0.7418642
[1] 7
[1] 0.8228101
Example 2.3: Use SAS to determine power and sample
size
• data power;
• t=3;
• df1=t-1;
• do r= 2 to 20;
• df2=t*(r-1);
• cv=finv(0.95,df1,df2);
• ncp=r*.38/.22;
• power1=1-probf(cv,df1,df2,ncp);
• output;
• if power1 ge 0.9 then leave;
• end;
• proc print; title ’Sample size’; run;
• proc plot; plot power1*r;
• title ’Sample size’; run;
Example 2.3: Use SAS to determine power and sample
size–Output

Sample size
Obs t df1 r df2 cv ncp power1
1 3 2 2 3 9.55209 3.4545 0.16643
2 3 2 3 6 5.14325 5.1818 0.33545
3 3 2 4 9 4.25649 6.9091 0.49701
4 3 2 5 12 3.88529 8.6364 0.63388
5 3 2 6 15 3.68232 10.3636 0.74186
6 3 2 7 18 3.55456 12.0909 0.82281
7 3 2 8 21 3.46680 13.8182 0.88113
8 3 2 9 24 3.40283 15.5455 0.92184
Example 2.3 - SAS

• proc power ;
• onewayanova groupmeans = 0.8 |0.1 |0.0
• stddev = 0.469
• alpha = 0.05
• npergroup = .
• power = 0.9;
• run;
Example 2.3 - Output

The POWER Procedure


Overall F Test for One-Way ANOVA

Fixed Scenario Elements


Method Exact
Alpha 0.05
Group Means 0.8 0.1 0
Standard Deviation 0.469
Nominal Power 0.9

Computed N Per Group


Actual Power N Per Group
0.922 9
