Correlation:
These are quality-control analyses used to confirm whether all dimensions load onto their
respective construct.
For cross-loadings, use rotation.
This tells you whether the items you are using are actually measuring the same
thing.
Factor loading
Analysis
Check the scree plot and warnings to see whether there is more than one factor.
Negative signs in the component matrix show negatively coded variables (recode them via
Transform > Recode into Same Variables > select variable > Old and New Values > OK).
If KMO > 0.7 the data are adequate for FA (it will be 0.5 if there are only 2 questions), and Bartlett's sig < 0.05.
In the Total Variance Explained table, a component with an eigenvalue greater than 1 is considered a separate factor.
Reliability
Never compute your variables until you have clarified the construct, its dimensions, and its reliability.
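The reliability check here is usually Cronbach's alpha (Analyze > Scale > Reliability Analysis in SPSS). A minimal numpy sketch of the alpha formula, on made-up item data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: three items driven by one underlying trait -> high alpha
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
items = base + rng.normal(scale=0.3, size=(100, 3))
print(round(cronbach_alpha(items), 2))
```

An alpha of 0.7 or above is conventionally taken as acceptable reliability for a scale.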
Computing Variable
Analyze > Dimension Reduction > Factor > select variables > Scores > Save as variables (Regression) >
Continue > OK
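SPSS's default extraction in Factor is principal components, and "Save as variables" stores a score column per component in the data set. A rough numpy equivalent (standardize, eigen-decompose the correlation matrix, project), illustrative rather than SPSS's exact regression-method scores:

```python
import numpy as np

def component_scores(data: np.ndarray, n_components: int = 1) -> np.ndarray:
    """Standardize, extract principal components, return score columns."""
    z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
    corr = np.corrcoef(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
    top = eigvecs[:, order[:n_components]]
    return z @ top                               # one score column per component

# Made-up data: four items driven by one latent variable
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
data = latent + rng.normal(scale=0.5, size=(200, 4))
scores = component_scores(data)
print(scores.shape)   # (200, 1)
```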
Turning a variable into binary to do a chi-square test for two binary variables
Transform > Compute Variable > set new name > enter 1 > If > define the condition using the variable, e.g.
variable_name <= median value.
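The same median split can be sketched in numpy (the scores are made-up illustration data):

```python
import numpy as np

scores = np.array([12, 18, 25, 31, 7, 22, 15, 28])
median = np.median(scores)               # 20.0 for this data
binary = (scores <= median).astype(int)  # 1 = at or below the median, 0 = above
print(median, binary)
```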
Chi-square
Analyze > Descriptive Statistics > Crosstabs > select rows and columns > Statistics (Chi-square, Phi and
Cramér's V) > Continue > Cells (Percentages) > Continue > Display clustered bar charts > OK
Can add a layer: in Crosstabs add a variable under Layer > OK, and display the layer variable
(keep in mind the data are not split).
Best for two variables with two categories each (it can be done for more categories too).
Uses categorical (here binary) data only.
Analyze the significance level and the graphical representation.
Phi and Cramér's V are read the same way as correlation coefficients: strength of association.
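The same crosstab analysis can be sketched with scipy (the 2x2 table is made-up illustration data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Made-up 2x2 crosstab, e.g. rows = group, columns = pass/fail
table = np.array([[30, 10],
                  [20, 25]])
chi2, p, dof, expected = chi2_contingency(table)

n = table.sum()
phi = np.sqrt(chi2 / n)                                   # for 2x2 tables
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # general form
print(f"chi2={chi2:.2f}, p={p:.4f}, phi={phi:.2f}")
```

Note that for a 2x2 table `chi2_contingency` applies Yates' continuity correction by default, and phi and Cramér's V coincide.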
Paired T test
We use a paired t-test when you have two observations, in scale format, coming from the same
source (e.g. t1 and t2). It cannot handle more than two data points.
Eta-squared interpretation: small effect 0.01, moderate effect 0.06, large effect 0.14.
Result: A paired-samples t-test was conducted to evaluate the impact of the intervention on students'
scores on the Fear of Statistics
Test (FOST). There was a statistically significant decrease in FOST scores from Time 1 (M=40.17, SD=5.16)
to Time 2 [M=37.5, SD=5.15; t(29)=5.39, p<0.05]. The eta squared
statistic (0.5) indicates a large effect.
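The paired t-test and its eta squared (eta² = t² / (t² + N − 1)) can be sketched with scipy on made-up before/after scores:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
time1 = rng.normal(40, 5, size=30)            # made-up Time 1 scores
time2 = time1 - rng.normal(2.5, 2, size=30)   # made-up Time 2 scores (lower)

t, p = ttest_rel(time1, time2)
n = len(time1)
eta_sq = t**2 / (t**2 + n - 1)                # effect size for a paired t-test
print(f"t({n - 1})={t:.2f}, p={p:.4f}, eta squared={eta_sq:.2f}")
```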
Analyze > Compare Means > Independent-Samples T Test > select test variable (scale) and grouping
variable (nominal/categorical) > Define Groups
We use an independent t-test when you have two observations, in scale format,
coming from different sources. It cannot handle more than two data points.
p-value < 0.05 => significant
Formula (to test the magnitude of the significance): eta squared = t^2 / (t^2 + N1 + N2 - 2)
For significance, check Levene's test: if sig > 0.05, equal variances are assumed; then read the sig value in the corresponding row.
Significance (Levene's test and corresponding sig)
Effect size (eta squared = t^2 / (t^2 + N1 + N2 - 2))
o Small (0.01)
o Moderate (0.06)
o Large (0.14)
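The same steps (Levene's test, then the t-test, then eta squared) can be sketched with scipy on made-up groups:

```python
import numpy as np
from scipy.stats import levene, ttest_ind

rng = np.random.default_rng(3)
group1 = rng.normal(50, 8, size=35)   # made-up scores, group 1
group2 = rng.normal(45, 8, size=32)   # made-up scores, group 2

lev_stat, lev_p = levene(group1, group2)
equal_var = lev_p > 0.05              # equal variances assumed if Levene sig > 0.05
t, p = ttest_ind(group1, group2, equal_var=equal_var)

n1, n2 = len(group1), len(group2)
eta_sq = t**2 / (t**2 + n1 + n2 - 2)  # same formula as in the notes above
print(f"Levene p={lev_p:.3f}, t={t:.2f}, p={p:.4f}, eta squared={eta_sq:.3f}")
```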
Analyze > General Linear Model > Repeated Measures > name the within-subject factor > set number of levels > Define
To check significance, look at Wilks' Lambda in the Multivariate Tests table and whether it
is significant.
Eta squared (larger = stronger effect).
Check whether the relation is significant with respect to the group, and whether its eta is larger or smaller than for the
whole sample.
The graph will give you a clear and concise picture.
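SPSS reports Wilks' Lambda from the multivariate approach; the simpler univariate repeated-measures ANOVA can be sketched by hand in numpy, partitioning the sums of squares (the data are made up):

```python
import numpy as np

# rows = subjects, columns = the repeated conditions (made-up data)
rng = np.random.default_rng(4)
subjects = rng.normal(0, 2, size=(20, 1))
effects = np.array([0.0, 1.5, 3.0])     # condition means drift upward
data = subjects + effects + rng.normal(0, 1, size=(20, 3))

n, k = data.shape
grand = data.mean()
ss_total = ((data - grand) ** 2).sum()
ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between-subject variance
ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # effect of condition
ss_err = ss_total - ss_subj - ss_cond                    # residual

df_cond, df_err = k - 1, (n - 1) * (k - 1)
F = (ss_cond / df_cond) / (ss_err / df_err)
partial_eta_sq = ss_cond / (ss_cond + ss_err)
print(f"F({df_cond}, {df_err})={F:.2f}, partial eta squared={partial_eta_sq:.2f}")
```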
Split file
Analyze > Compare Means > One-Way ANOVA > Post Hoc (check Tukey) > Options (check Descriptive & Means plot) > OK
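The omnibus one-way ANOVA step can be sketched with scipy on made-up groups:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
low = rng.normal(20, 4, size=25)    # made-up scores for three groups
mid = rng.normal(24, 4, size=25)
high = rng.normal(28, 4, size=25)

F, p = f_oneway(low, mid, high)
print(f"F(2, 72)={F:.2f}, p={p:.4f}")
```

In recent scipy releases, `scipy.stats.tukey_hsd(low, mid, high)` gives the pairwise post-hoc comparisons that the Tukey option reports in SPSS.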
Correlation
Analyze > Correlate > Bivariate
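The bivariate (Pearson) correlation can be sketched with scipy on made-up variables:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
x = rng.normal(size=80)
y = 0.7 * x + rng.normal(scale=0.5, size=80)   # made-up related variable

r, p = pearsonr(x, y)
print(f"r={r:.2f}, p={p:.4f}")
```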
Regression
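For simple linear regression (Analyze > Regression > Linear in SPSS), a scipy sketch with made-up data:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
x = rng.normal(size=60)
y = 2.0 * x + 1.0 + rng.normal(scale=0.8, size=60)   # made-up outcome

result = linregress(x, y)
print(f"slope={result.slope:.2f}, intercept={result.intercept:.2f}, "
      f"R squared={result.rvalue**2:.2f}")
```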
Friedman Test
Analyze > Nonparametric Tests > Related Samples > select fields > Run
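The Friedman test (the nonparametric counterpart of repeated measures) can be sketched with scipy on made-up related samples:

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(8)
base = rng.normal(size=25)
t1 = base + rng.normal(scale=0.5, size=25)        # made-up related samples
t2 = base + 1.0 + rng.normal(scale=0.5, size=25)  # shifted at time 2
t3 = base + 2.0 + rng.normal(scale=0.5, size=25)  # shifted further at time 3

stat, p = friedmanchisquare(t1, t2, t3)
print(f"chi2={stat:.2f}, p={p:.4f}")
```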