
An Introduction to Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM)
Gavin T. L. Brown, PhD
Presentation to Research Development Office, Continuing Professional Development Programme, HKIEd, 10 February 2011

What is a CFA or SEM model?

A theoretically informed simplification of the complexities of reality, created to test or generate hypotheses about how various constructs are related.


Theory

We have theories that explain the way things are (not just descriptions).
Theory and data are inter-twined:
  We see phenomena and seek to explain them with theories
  We have theories and seek to test them with phenomena
Theories → Knowledge, but theories that do not explain phenomena are certainly false [Popper].

CFA/SEM is situated in hypothetico-deductive or abductive approaches to meaning.

Models

Everything is connected to everything in the real world, BUT it's messy and hard to make sense of.

In a model we select, for theoretical reasons, the important connections that we THINK explain most of what is going on in the phenomenon of interest.
It is not the real thing, but a simplification.
The arrangement of the connections between and among variables of interest constitutes a testable expression of our theories about how things go together.


Prediction, Causation, Association

CFA/SEM models assume linear (i.e., correlation and regression) relationships (paths) exist among constructs. For example:

  (A ↔ B): 2 things are correlated
  (A ↔ B) → C: 2 correlated things jointly influence a 3rd thing
  (A + B) → C: 2 things separately and/or jointly influence a 3rd thing
  A → B → C: 1 thing influences a 2nd, which influences a 3rd
  And so on: moderation, mediation, complex interrelationships

CFA/SEM Involves Mathematical Testing of Models

A sophisticated correlational-causal mathematical testing of a model against a data set: how close are they? Does the model fit the data?

Models are rejected if they do NOT have close fit to the data
  the data can't be wrong; it's the reality we are trying to model
Models are NOT accepted if they have close fit to the data
  They are NOT YET DISCONFIRMED (Popper)
  Multiple models can fit the same data equally well
  Fit could be attributable to chance factors in the data we collected


CFA/SEM: Extending Latent Trait Theory

Observed manifest behaviours
  e.g., test scores, attitude item responses, observed frequencies of behaviours, etc.
are shaped and influenced by invisible (LATENT) shared causes. For example:
  Answers to items [manifest, observed] on a test are caused (in part) by INTELLIGENCE [latent, unobserved] traits
  Student responses to Brown's Conceptions of Assessment inventory are shaped in part by the hypothesised beliefs that:
    ASSESSMENT IS FOR IMPROVEMENT;
    ASSESSMENT IS IRRELEVANT;
    ASSESSMENT HAS AFFECTIVE/SOCIAL BENEFITS;
    ASSESSMENT REFLECTS EXTERNAL CAUSES

Latent Trait Theory

Multiple manifest indicators are required for stable estimation of a latent trait's existence, strength, and direction.
  Hence, factor analysis expects 3 to 6 items per factor
  Hence, test scores rely on 5 to 30 test questions

WHY? CHANCE, ERROR, DEFICIENCIES IN STIMULI
  Observed behaviour is not perfectly controlled by, or reflective of, our TRUE intelligence, attitude, etc.
  Our response mechanism interferes:
    I chose B but I meant A; I chose response 3 but I meant 4
    I want 3.4 but I had to choose 3 or 4
  Hence, all values are ESTIMATES:
    A range of most likely values exists
    Multiple indicators reduce error/chance effects


Interpreting Output Values in CFA/SEM

CFA and SEM models use concepts of causation and prediction via linear regression: changes in X cause a linear change (increase or decrease) in Y.

Formula: Y = m*X + b
  m = slope [the standardised beta is a proportion of a standard deviation]
  b = intercept [starting point of the equation; represents all the unknown stuff]

[Figure: regression line of a Y variable on an X variable, with the intercept b marked]

Interpretations:
1. For every 1 SD change in X, you will get m*SD change in Y.
2. This relationship explains x% of the variance in Y.
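
To make those two interpretations concrete, here is a minimal sketch in plain Python (NumPy/SciPy; the data are simulated for illustration, not from the talk):

```python
# Minimal sketch: fit Y = m*X + b and read off the two interpretations above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(scale=0.8, size=200)   # simulated, true slope 0.6

res = stats.linregress(x, y)
beta = res.slope * x.std(ddof=1) / y.std(ddof=1)   # standardised beta

print(f"slope m = {res.slope:.2f}, intercept b = {res.intercept:.2f}")
print(f"1 SD change in X -> {beta:.2f} SD change in Y")
print(f"variance explained in Y: {res.rvalue**2:.1%}")
```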

Looking Under the Hood: Components of CFA and SEM models

Variables
  Manifest [observed behaviours, usually dependent; rectangles]
  Latent [unobserved, explanatory; ovals]
  Residual [unobserved, unexplained; ovals]

Manifest variables are predicted by both latent traits and residuals.
  Goal: to have a large proportion of the variance in the manifest variables explained by the latent traits rather than by residual disturbances.


Looking Under the Hood: Components of CFA and SEM models

Paths
  Fixed: equations require SEED values to solve; 1 is the conventional seed. All latent traits must have one path to their predicted manifest variables with a fixed value. All other values are estimated relative to the seed value.
  Free: All other paths are allowed to be estimated freely based on the data provided to the model; they may be stronger than the fixed path, but it is better to make the strongest path in a factor the fixed path.
  Zero: Paths not required by the model are forced to be non-existent. This contrasts with EFA, where all paths have some freely estimated value. (A sketch of how these three path types enter the mathematics follows.)
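
A minimal sketch of the three path types, assuming the standard factor-model equation Sigma = Lambda Phi Lambda' + Theta (NumPy only; the numbers are invented for illustration):

```python
# Model-implied covariance of a one-factor model: Sigma = L @ Phi @ L.T + Theta.
# FIXED: the first loading is pinned at the seed value 1 and never estimated.
# FREE:  remaining loadings, the factor variance, and the residual variances
#        are estimated relative to the seed (illustrative values, not data).
# ZERO:  with more than 1 factor, cross-loadings are simply absent from L.
import numpy as np

L = np.array([[1.0], [0.8], [0.7], [0.9]])   # 1.0 = fixed seed path
Phi = np.array([[1.2]])                      # factor variance (free)
Theta = np.diag([0.3, 0.4, 0.5, 0.35])       # residual variances (free)

Sigma = L @ Phi @ L.T + Theta                # what estimation compares to S
print(np.round(Sigma, 2))
```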

Example of Path Values

EFA indicated Grades was the strongest value; thus, the seed value goes on that path.
Residual terms exist and have a seed value of 1 because they are equal to each other.
Note: manifest variables ONLY have paths from the conceptual LATENT trait.
If there are 2 or more factors, items should have ZERO paths to the other factors (zero paths between each other).

[Path diagram: a latent trait (labelled Well-being/Evaluative) with manifest indicators Grades, Ticks, Praise, Stickers, and Answers; residuals e12 to e16 each carry a fixed value of 1]
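
In lavaan-style software this measurement model is a one-line string. The sketch below is hedged: it assumes the Python semopy package (the talk itself uses AMOS), and assumes that, like AMOS, semopy fixes the first listed loading to 1 by default, making Grades the seed indicator:

```python
# Hedged sketch assuming the semopy package (lavaan-style syntax); 'data'
# would be a pandas DataFrame whose columns match the item names below.
import semopy

desc = "Evaluative =~ Grades + Ticks + Praise + Stickers + Answers"

model = semopy.Model(desc)
# model.fit(data)          # free loadings estimated relative to Grades (seed)
# print(model.inspect())   # loadings, residual variances, factor variance
```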


Estimation

Maximum likelihood
  The parameter values in the data set (a sample) are treated as the most likely values in the population (not present, but to which we wish to generalise)
  Hence, the procedure attempts to maximise the likelihood of the input values (means, standard deviations, covariances) when estimating the solution
  Hence, it matters that the sample reflects the population and is sufficiently large that parameters are likely to apply to the population
    N ≥ 500; if N < 100, very problematic
  The number of cases influences the number of items per factor that can be estimated accurately:
    If N = 50, each factor must have 12 items
    If N = 100, each factor must have 6 items
    When N = 500, factors can have 3 items
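
For the mathematically curious, a sketch of the ML discrepancy function such software minimises (this is the standard covariance-structure formula, stated here as background rather than shown on the slides):

```python
# Minimal sketch of the ML fit function comparing the sample covariance S
# with the model-implied covariance Sigma (standard formula, assumed here):
# F_ML = ln|Sigma| + tr(S @ inv(Sigma)) - ln|S| - p
import numpy as np

def f_ml(S, Sigma):
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

# Estimation searches the free parameters for the Sigma minimising F_ML;
# the minimum rescales to the model chi-square as chi2 = (N - 1) * F_ML,
# which is one reason sample size N matters so much.
```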

Model Evaluation: Fit to Data

Because of MLE, it is possible to evaluate the fit of the model relative to the data by comparing the distributions.

The chi-squared (χ²) test is the fundament of model evaluation:
  χ² test: difference between Observed (model) and Expected (data), adjusted by the number of parameters and cases (degrees of freedom)
  However, χ² falsely penalises large N (i.e., >100) and large numbers of manifest variables
  So it is a poor test, notwithstanding vehement objections by some researchers


Evaluating Results: Which Fit Indices & What Values?

(CFI and gamma hat are goodness-of-fit indices; RMSEA and SRMR are badness-of-fit indices.)

Decision    | p of χ²/df | CFI, gamma hat | RMSEA | SRMR*
Good        | >.05       | >.95           | <.05  | <.06
Acceptable  | >.05       | >.90           | <.08  | <.08
Marginal    | >.01       | .85-.89        | <.10  |
Reject      | <.01       | <.85           | >.10  | >.08

Note. Report multiple indices, but beware:
  CFI punishes falsely complex models (i.e., >3 factors)
  RMSEA rewards falsely complex models with mis-specification
  See Fan & Sivo, 2007
*AMOS only generates SRMR if there are NO missing data; thus, it is important to clean up missing values prior to any analysis. The expectation maximization (EM) procedure is recommended.
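
A minimal sketch of how the chi-square-based indices in the table are computed (standard formulas, stated here as background; the inputs are hypothetical, and software may use slight variants):

```python
# Hypothetical inputs: model chi-square and df, sample size, and the
# chi-square and df of the null (independence) model that CFI uses.
import math

def fit_indices(chi2, df, n, chi2_null, df_null):
    ncp = max(chi2 - df, 0.0)                        # noncentrality estimate
    cfi = 1.0 - ncp / max(chi2_null - df_null, ncp, 1e-12)
    rmsea = math.sqrt(ncp / (df * (n - 1)))
    return {"chi2/df": chi2 / df, "CFI": cfi, "RMSEA": rmsea}

print(fit_indices(chi2=250.0, df=100, n=500, chi2_null=2400.0, df_null=120))
```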

More on the RMSEA Statistic

RMSEA is a point estimate in the middle of a range; the 90% confidence interval should be reported.
The PCLOSE statistic shows whether it is probable that RMSEA is <.05; its accuracy is affected by sample size.

Model              | RMSEA | LO 90 | HI 90 | PCLOSE
Default model      | .048  | .045  | .051  | .899
Independence model | .127  | .124  | .129  | .000

Comparison to the independence model is not terribly interesting. The real question should be: is there a better model to explain these responses than the model I have used?
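
PCLOSE can be reproduced from the model chi-square with the noncentral chi-square distribution. A sketch below, using the standard formulation (the inputs are hypothetical, not the AMOS table above):

```python
# PCLOSE = probability of a chi-square at least this large IF the true
# RMSEA were exactly .05 (standard formulation, assumed here).
from scipy.stats import ncx2

def pclose(chi2, df, n, rmsea0=0.05):
    nc = df * (n - 1) * rmsea0**2     # noncentrality implied by RMSEA = .05
    return ncx2.sf(chi2, df, nc)      # upper-tail probability

print(pclose(chi2=250.0, df=100, n=500))   # hypothetical model
```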


Distinguishing CFA from SEM

CFA = measurement model of a construct
  CFA models can have multiple dimensions and complex structures:
    An achievement score can be hierarchical
      total consists of surface AND deep cognitive processes
    An attitude or opinion can be multi-correlated
      total consists of correlations between 3 or more related dimensions

SEM = structural model of paths between constructs
  SEM models arrange predictive paths:
    Attitude towards X influences performance on Y
    Attitude towards X is related to attitude towards Y

Example: CFA + SEM (Brown & Hirschfeld, 2008)

CFA: Measurement model with 4 correlated factors
  Note. Accurate measurement models are also needed for reading score, year, sex, & ethnicity.
  Note. If measurements of each construct are NOT robust, do NOT use them for anything!!!

Structural model: multiple predictors of performance


Linear Models are Recursive (Brown et al., 2009)

CFA/SEM assume models are recursive:
  They have a beginning (origins) and an end (endings), which are not the same
  NOT circular

How to Test Reciprocal Models?

Make it longitudinal:
  Time 1: A1 → B1 → C1; Time 2: A2 → B2 → C2
Use 2 different methods of measuring construct A:
  AM1 → B → C → AM2
These approaches honour the reciprocal effects in theory without invalidating the linear regression equations.
Longitudinal analysis is an advanced topic in SEM and beyond today's talk.


Interpreting a Model

Statistical significance of paths
The weights & directions of each path
The proportion of variance explained (the effect size)

Evaluating Results

Statistically significant paths
  The strength of the path should exceed what might occur by chance
  Option: remove such paths or indicate them as ns
  If p > .05, the path is not statistically significant.
  Note. Fixed paths have no probability.


Evaluating Results

Variance explained (SMC)
  Equivalent to R²
  Effect size: f² = R² / (1 - R²)
    Small: .02 to .14
    Medium: .15 to .34
    Large: >.35
    (Cohen, 1992)

Example: f² = .19/.81 = .23 (medium)

[Path diagram: the Evaluation factor with indicators MQQ44, MQQ63, MQQ5, MQQ25, MQQ8, and MQQ23, residuals e37 to e42, and SMC values such as .19 shown above the indicators]

Note. SMC = beta squared.
The balance not explained is in the residual (goal: small residuals, so target SMC > .50).
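
The slide's arithmetic as a tiny helper (plain Python; cutoffs from Cohen, 1992):

```python
# Cohen's f-squared from the SMC (R-squared): f2 = R2 / (1 - R2).
def f_squared(r2):
    return r2 / (1 - r2)

f2 = f_squared(0.19)      # the slide's example: SMC = .19
print(f"f2 = {f2:.2f}")   # 0.23 -> medium (.02 small, .15 medium, .35 large)
```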

Developing a Model

Evidence from theory


Evidence from Exploratory Factor Analysis
Evidence from Regression Analysis


The Role of Theory in Designing Models to Test

My research question:
  Do conceptions of assessment influence performance?

Theoretical framework:
  Icek Ajzen: Reasoned or Planned Behaviour
    Beliefs & Intentions influence Behaviour & Outcomes
    Beliefs are inter-correlated
    Outcomes are the criterion of effectiveness

EFA to CFA

Statement                                                                  | 1     | 2     | 3     | 4     | 5     | 6     | 7
29. Assessment fosters students' character.                                | 0.556 | 0.023 | 0.11  | 0.154 | 0.097 | 0.047 | 0.072
22. Assessment cultivates students' positive attitudes towards life.      | 0.685 | 0.049 | 0.02  | 0.074 | 0.065 | 0.059 | 0.008
20. Assessment is used to provoke students to be interested in learning.  | 0.591 | 0.04  | 0.084 | 0.066 | 0.059 | 0.02  | 0.048
14. Assessment helps students succeed in authentic/real world experiences.| 0.446 | 0.085 | 0.105 | 0.216 | 0.092 | 0.14  | 0.124
13. Assessment ensures students pay attention during class.               | 0.533 | 0.066 | 0.131 | 0.012 | 0.007 | 0.22  | 0.224
34. Assessment measures students' higher order thinking skills.           | 0.509 | 0.167 | 0.007 | 0.03  | 0.176 | 0.11  | 0.077
27. Assessment allows different students to get different instruction.    | 0.487 | 0.017 | 0.102 | 0.128 | 0.011 | 0.15  | 0.213
24. Assessment stimulates students to think.                              | 0.678 | 0.061 | 0.074 | 0.008 | 0.001 | 0.12  | 0.105
49. Assessment forces teachers to teach in a way against their beliefs.   | 0.083 | 0.458 | 0.03  | 0.121 | 0.071 | 0.19  | 0.106
31. Assessment interferes with teaching.                                  | 0.102 | 0.54  | 0.08  | 0.06  | 0.086 | 0.13  | 0.066
10. Assessment has little impact on teaching.                             | 0.134 | 0.384 | 0.19  | 0.034 | 0.062 | 0.01  | 0.067
26. Assessment is an imprecise process.                                   | 0.004 | 0.629 | 0.034 | 0.008 | 0.021 | 0.057 | 0.094
23. Assessment results are filed & ignored.                               | 0.017 | 0.646 | 0.01  | 0.057 | 0.02  | 0.022 | 0.056
45. Teachers conduct assessments but make little use of the results.      | 0.019 | 0.493 | 0.045 | 0.003 | 0.193 | 0.008 | 0.012

Note. Non-zero values on other factors, but all weak.

NB. This is the SPSS pattern matrix of regressions.

EFA steps:
1. Run MLE, oblimin, allowing eigenvalues > 1.00
2. Remove items with cross-loadings > .30
3. Remove items with no loading > .30
4. Remove items which did not logically fit their factor
5. Remove items that seem literally repetitive in content
6. Remove factors that are repetitive in meaning to earlier factors
RESULT: Items kept fit conceptually and have strong unique loadings on 1 factor
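
The talk uses SPSS for these steps; for readers working in Python, a hedged sketch of step 1 with the third-party factor_analyzer package (the option names here are an assumption to verify against the package docs):

```python
# Hedged sketch assuming the factor_analyzer package; 'items' would be a
# pandas DataFrame of inventory responses.
from factor_analyzer import FactorAnalyzer

fa = FactorAnalyzer(n_factors=7, rotation="oblimin", method="ml")
# fa.fit(items)           # step 1: ML extraction with oblimin rotation
# pattern = fa.loadings_  # pattern matrix of regressions, as shown above
# Steps 2-6 are manual: inspect 'pattern', drop items with cross-loadings
# > .30 or with no loading > .30, then re-fit and repeat.
```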


Strategies for Evaluating a Model (Brown, Harris, & Harnett, 2010)

Check that the model is admissible.
The model is recursive.
Check that it is IDENTIFIED. OOOPS! Seed value omitted.

[Path diagram: the Teachers' Conceptions of Feedback (TCoF) model, with factors including Student Involvement, Well-being, Evaluation, Growth, Irrelevance, and Timeliness, each measured by MQQ items with residuals e1 to e49; an inset of the Evaluation factor (MQQ44, MQQ63, MQQ5, MQQ25, MQQ8, MQQ23) shows the identification error: its seed value was omitted]


Additional Causes of Broken Models

To be examined in the next session:
  negative error variances
  covariance matrices that are not positive definite
Recommended solutions to be discussed as well.

Testing Multiple Models

The analyst's job is to identify which model fits best and makes sense in terms of what we already know and believe about reality.
Instrument: Teachers' Conceptions of Feedback
  Theoretically expected 10 factors
Data: independent samples from Louisiana and New Zealand
Analysis: independent EFA and CFA for both samples, comparison of the 2 groups, re-analysis of the NZ sample
Results: multiple structures and many possible valid models could fit; a better model was found in a series of studies


[Three AMOS path diagrams are compared side by side (MQQ items with residuals and standardised loadings):
  NZ: 10 hierarchical factors, good fit; a higher-order Conceptions of Feedback factor imposes meaning on the mess of 10 factors.
  Louisiana: 7 hierarchical factors, marginal fit; but this is not what we really expected. Sample or model?
  NZ: 10 inter-correlated factors, acceptable fit; but it's a mess.]

Test Multiple Competing Alternatives

Model                                         | Factors | Items | χ²      | df   | χ²/df, p  | CFI | gamma hat | RMSEA | SRMR
1. LA hierarchical                            | 7       | 40    | 1758.12 | 733  | 2.40, .12 | .78 | .86       | .067  | .080
2. NZ inter-correlated                        | 10      | 48    | 2444.97 | 1035 | 2.36, .12 | .79 | .90       | .051  | .061
3. NZ inter-correlated, bifactor hierarchical | 10      | 46    | 2378.58 | 1019 | 2.33, .13 | .79 | .90       | .051  | .063

The reduction in χ², given the reduction in df, was statistically significant between the NZ models, so Model 3 is preferred.
Model 3 is better fitting, with acceptable values for RMSEA, SRMR, and χ²/df.
But note that the NZ model does NOT fit the Louisiana sample: populations matter.
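
The chi-square difference test reported here can be checked by hand; a minimal sketch with SciPy, using the figures from the table for the two nested NZ models:

```python
# Nested-model chi-square difference test: Model 2 vs Model 3 from the table.
from scipy.stats import chi2

d_chi2 = 2444.97 - 2378.58        # reduction in chi-square
d_df = 1035 - 1019                # reduction in degrees of freedom
p = chi2.sf(d_chi2, d_df)         # upper-tail p-value
print(f"delta chi2 = {d_chi2:.2f}, delta df = {d_df}, p = {p:.2g}")  # p < .001
```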


What is Confirmation in CFA?

Most studies follow this process:
  An inventory is developed using theory
  The validity of the questionnaire may be explored
  EFA identifies a plausible model within a data set
  CFA tests the fit of the EFA model to the data
  CFA refines the EFA model with the same data
  This process is better considered restrictive analysis, not CFA

True confirmation comes when an existing model is TESTED with an independent sample:
  No EFA needed
  Just run the model: does it fit?
  If NOT, then EFA must begin again

True Confirmatory Study (Brown & Michaelides, 2011)

TCoA: 9 factors in a 4-factor structure developed in NZ
New sample: Cyprus primary & secondary teachers
Tested:
  CFA of the NZ model (original & simplified);
  EFA Cyprus model;
  joint hierarchical model
Result: Model D fits both groups satisfactorily


Developing a structural model (SEM)

Identify possible structural paths between important variables in the measurement models:
  Correlation analysis
  Regression analysis
If theory suggests causal relations, use regressions.
If no idea, look at correlations.
Note. In SEM, a correlation and a regression will have the same value between any 2 variables (verified in the sketch below). You need to decide theoretically if there is cause or temporal precedent.
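
The note above (that between any 2 variables a correlation and a standardised regression weight have the same value) is easy to verify. A minimal sketch with simulated data (NumPy only):

```python
# Verify: the slope of Y on X, after standardising both, equals r(X, Y).
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=300)
y = 0.5 * x + rng.normal(size=300)   # simulated data

r = np.corrcoef(x, y)[0, 1]
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()
beta = np.polyfit(zx, zy, 1)[0]      # standardised regression weight

print(f"r = {r:.3f}, standardised beta = {beta:.3f}")   # identical values
```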

Why Use SEM instead of Multiple Regressions?

Limitations of multiple regressions:
  Only 1 construct can be predicted at a time; it's not simultaneous
  The joint correlations among predictor constructs are not taken into account
  The paths from origin to terminus cannot be accounted for
Thus, SEM is better able to test for the statistical significance of regressions,
  provided N is large enough.


Example: SCoA to Definitions of Assessment (Brown, Irving, et al., 2009)

Hypothesis
  Beliefs about the nature and purpose of assessment predict the type of practices students would define as assessment.

Multiple regression analysis
  2 latent traits were predicted by 8 latent traits in 2 separate analyses; only 4 predictors were statistically significant.
  Interactive-Informal assessment practices (R² = .02):
    Class Environment, β = .12, p = .01;
    Assessment is ignored (Ignore), β = .10, p = .06.
  Teacher-Controlled assessment practices (R² = .08):
    Teacher Improves Student Learning, β = .14, p = .02, and
    Personal Enjoyment, β = -.14, p = .003.

Example: SCoA to Definitions of Assessment (SEM)

Beta values are much higher than the regression values.
The proportion of variance explained is much higher than in the regression.


CFA/SEM: Belief to Belief (Brown, 2009)

CFA: A change in each latent trait predicts a large change in responses for each contributing variable.
  Range of loadings = .38 to .88; proportion of variance explained = loading², hence 13% to 77%.
  A relatively low proportion is unexplained; this is required for good measurement in CFA.
  Conclusion: Latent traits predict responses on observed variables.

SEM: Only statistically significant paths are kept in the model.

Consider This Model

Theory
  Inventory development
  Multiple studies, multiple versions, multiple samples
  Includes a measure of academic performance
  Self-regulation involves increasing adaptive beliefs & practices and decreasing maladaptive ones

N = 520; # manifest variables = 46; 9 factors; 3 measurement models; 2 models are hierarchical.
Fit: χ² = 2146.58; df = 970; χ²/df = 2.21 (p = .13); gamma hat = .91; RMSEA = .048; SRMR = .064; SMC = .20

What beliefs are adaptive or maladaptive to performance in mathematics? Does it matter?


Summary

Theories are used to devise models that attempt to explain how changes occur in various constructs and how various constructs are related to each other.
CFA/SEM mathematical equations are based on linear regressions to identify the strength of relationships among latent, manifest, and unexplained variables.
CFA/SEM models are used to establish the validity of measurements and answer substantive questions.
CFA/SEM are powerful because of their simultaneous properties and tighter specification of the model.


Summary

The same techniques are used to validate measurement models and explore relations between constructs.
Requires large N and sophisticated mathematical formulae.
Is powerful to test and generate hypotheses.
Logically depends on the notions of causation and prediction.
Can be done relatively easily with modern software, but many things can go wrong; see the 2nd part of this lecture in 2 weeks.

References: Studies used

Brown, G. T. L., Harris, L. R., & Harnett, J. (2010, July). Teachers' conceptions of feedback: Results from a national sample of New Zealand teachers. Paper presented at the International Test Commission biannual conference, Hong Kong.
Brown, G. T. L., Harris, L. R., O'Quinn, C., & Lane, K. E. (2011, April). New Zealand and Louisiana practicing teachers' conceptions of feedback: Impact of Assessment of Learning versus Assessment for Learning policies? Paper accepted for presentation to the Classroom Assessment SIG at the annual meeting of the American Educational Research Association, New Orleans, LA.
Brown, G. T. L., & Hirschfeld, G. H. F. (2008). Students' conceptions of assessment: Links to outcomes. Assessment in Education: Principles, Policy and Practice, 15(1), 3-17.
Brown, G. T. L., Irving, S. E., Peterson, E. R., & Hirschfeld, G. H. F. (2009). Use of interactive-informal assessment practices: New Zealand secondary students' conceptions of assessment. Learning & Instruction, 19(2), 97-111.
Brown, G. T. L., & Michaelides, M. (2011). Ecological rationality in teachers' conceptions of assessment across samples from Cyprus and New Zealand. European Journal of Psychology of Education. doi:10.1007/s10212-010-0052-3
Brown, G. T. L., Peterson, E. R., & Irving, S. E. (2009). Self-regulatory beliefs about assessment predict mathematics achievement. In D. M. McInerney, G. T. L. Brown, & G. A. D. Liem (Eds.), Student perspectives on assessment: What students can tell us about assessment for learning (pp. 159-186). Charlotte, NC: Information Age Publishing.


References: Authorities

Ajzen, I. (2005). Attitudes, personality and behavior (2nd ed.). New York: Open University Press.
Byrne, B. M. (2001). Structural equation modeling with AMOS: Basic concepts, applications, and programming. Mahwah, NJ: LEA.
Fan, X., & Sivo, S. A. (2007). Sensitivity of fit indices to model misspecification and model types. Multivariate Behavioral Research, 42(3), 509-529.
Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Structural Equation Modeling, 11(3), 320-341.
Marsh, H. W., Hau, K.-T., Balla, J. R., & Grayson, D. (1998). Is more ever too much? The number of indicators per factor in confirmatory factor analysis. Multivariate Behavioral Research, 33(2), 181-220.

Basic Readings on CFA/AMOS

. (2007). AMOS. Taipei, Taiwan: .
Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment Research & Evaluation, 10(7). Available online: http://www.pareonline.net/pdf/v10n17.pdf
Klem, L. (2000). Structural equation modeling. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding more multivariate statistics (pp. 227-260). Washington, DC: APA.
Kline, P. (1994). An easy guide to factor analysis. London: Routledge.
Kim, J.-O., & Mueller, C. W. (1978). Factor analysis: Statistical methods and practical issues (Vol. 14). Thousand Oaks, CA: Sage.
Thompson, B. (2000). Ten commandments of structural equation modeling. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding more multivariate statistics (pp. 261-283). Washington, DC: APA.
