Factor Correlation Matrix

The matrix of correlations among the factors. In orthogonal methods it is an identity matrix (diagonal elements equal 1 and all off-diagonal elements equal 0). Oblique rotations produce factors that are correlated.
factor loading
The coefficients used to express a standardized variable as a linear combination of the factors. If the factors are uncorrelated with each other, it is also the correlation between the variable and the factor. Also called the factor pattern matrix.
factor loading plot
A plot of the observed variables using the factor loadings as coordinates.
Factor Matrix
A matrix in which each row corresponds to an observed variable and each column corresponds to a common factor.
Each row contains the coefficients used to express the standardized observed variable in terms of the factors.
These coefficients are called factor loadings. Factors with large coefficients (in absolute value) for a variable are
closely related to the variable. It is also called the factor pattern matrix. When the factors are uncorrelated with
each other, the factor loadings are the correlations between the factors and the variables. In that case, it is also the
factor structure matrix.
Factor Score Coefficient
The matrix of coefficients used to calculate factor scores. Each column contains the coefficient for one of the
common factors. There is a coefficient for each variable. The factor score for each case is computed by
multiplying the standardized value of each observed variable by its respective coefficient and summing them up.
A separate factor score is generated for each common factor.
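For illustration, a minimal numpy sketch of this computation, assuming Z holds the standardized observed variables (cases by variables) and W the factor score coefficients (variables by factors); both names and the random values are placeholders, not SPSS output:

```python
import numpy as np

# Z: n cases x p standardized variables; W: p variables x m score coefficients.
# Both are stand-ins for the corresponding factor-analysis output.
rng = np.random.default_rng(0)
Z = rng.standard_normal((100, 4))
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)   # standardize each variable
W = rng.standard_normal((4, 2))            # illustrative score coefficients

# One factor score per case for each common factor:
# the weighted sum of that case's standardized values.
F = Z @ W                                  # shape (100 cases, 2 factors)
```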
factor structure matrix
A matrix in which each element is the correlation between an observed variable and a common factor. The common
factors are the columns; the observed variables are the rows. When the factors are uncorrelated (orthogonal), the
factor structure matrix and the factor pattern matrix are identical.
Anti-Image Correlation Matrix
A matrix of the negatives of the partial correlation coefficients. If the variables share common factors, the partial
correlation coefficients should be small, since the linear effects of other variables are eliminated. If the proportion
of large coefficients in this matrix is high, you should reconsider the use of a factor analysis model for the variables.
Measures of Sampling Adequacy
A measure that compares the magnitudes of the observed correlation coefficients involving a particular variable to the magnitudes of the partial correlation coefficients involving that variable. These measures are printed on the diagonal of the anti-image correlation matrix. Values range from 0 to 1, and reasonably large values are needed for a good factor analysis. You might consider eliminating variables with small values for this measure. See also Kaiser-Meyer-Olkin.
part correlation coefficient
The correlation between the dependent variable and an independent variable when the linear effects of the other
independent variables in the model have been removed from the independent variable. It is related to the change in
R squared when a variable is added to an equation.
Partial
Used to assess the linear relationship between two variables, controlling for the effects of other variables. In linear
regression it is the correlation between an independent variable and the dependent variable when the linear effects
of the other independent variables have been removed from both the dependent variable and that independent
variable.
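The two coefficients differ only in whether the control variable is removed from one variable or from both. A first-order sketch in Python/numpy for a single control variable z (the function name is illustrative):

```python
import numpy as np

def partial_and_part(x, y, z):
    """First-order partial and part (semipartial) correlations of x and y,
    controlling for a single variable z. A one-control-variable sketch."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    # Partial: the linear effect of z is removed from both x and y.
    partial = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
    # Part: the linear effect of z is removed from x only.
    part = (r_xy - r_xz * r_yz) / np.sqrt(1 - r_xz**2)
    return partial, part
```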
image covariance matrix
A matrix with the squares of images on the diagonal and adjusted correlation coefficients in the off-diagonals. The
adjusted correlations are based on the assumption that the observed variables are a sample from the universe of all
possible observed variables.
Kaiser-Meyer-Olkin
An index for comparing the magnitudes of the observed correlation coefficients to the partial correlation
coefficients. If the sum of the squared partial correlation coefficients between all pairs of variables is small when
compared to the sum of the squared correlation coefficients, it is close to 1. Small values indicate that a factor
analysis may not be a good idea, since correlations between pairs of variables cannot be explained by the other
variables. Kaiser describes values in the 0.90's as marvelous, in the 0.80's as meritorious, in the 0.70's as middling,
in the 0.60's as mediocre, in the 0.50's as miserable, and below 0.50 as unacceptable.
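A minimal numpy sketch of the index, assuming R is the observed correlation matrix and taking the partial correlations from its inverse (the usual anti-image construction); the function name is illustrative:

```python
import numpy as np

def kmo(R):
    """Kaiser-Meyer-Olkin index from a correlation matrix R."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                     # anti-image (partial) correlations
    off = ~np.eye(R.shape[0], dtype=bool)  # off-diagonal mask
    r2 = (R[off] ** 2).sum()               # sum of squared correlations
    p2 = (partial[off] ** 2).sum()         # sum of squared partial correlations
    return r2 / (r2 + p2)
```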
residual (FACTOR)
The difference between the observed and the estimated correlation coefficients for a pair of variables. Large
residuals indicate that the factor analysis model does not fit the data very well.
Bartlett method (FACTOR)
A method of estimating factor score coefficients. The scores produced have a mean of 0. The sum of squares of the
unique factors over the range of variables is minimized.
ALPHA (FACTOR)
A method for factor extraction that considers the variables in the analysis to be a sample from the universe of
potential variables. It maximizes Cronbach's Alpha for the factors.
STANDARDIZED ITEM ALPHA
The standardized form of Cronbach's Alpha. Standardized Alpha will result if Cronbach's Alpha is computed from
items that have been standardized to have a variance of 1.
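Equivalently, standardized alpha can be computed directly from the mean inter-item correlation, alpha = k * r_bar / (1 + (k - 1) * r_bar). A sketch, assuming R is the item correlation matrix:

```python
import numpy as np

def standardized_alpha(R):
    """Standardized Cronbach's alpha from a k x k item correlation matrix R."""
    k = R.shape[0]
    off = ~np.eye(k, dtype=bool)
    r_bar = R[off].mean()                  # mean off-diagonal correlation
    return k * r_bar / (1 + (k - 1) * r_bar)
```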
Anderson-Rubin method
A method of computing factor score coefficients; a modification of the Bartlett method of factor scores to ensure
orthogonality of the estimated factors. The scores produced have a mean of 0, a standard deviation of 1, and are
orthogonal.
IMAGE factoring
Factor extraction method developed by Guttman and based on image theory.
unweighted least squares (FACTOR)
A method of extracting factors that minimizes the sum of the squared differences between the observed and
reproduced correlation matrices ignoring the diagonals.
regression method
A method for estimating factor score coefficients. The scores produced have a mean of 0 and a variance equal to the
squared multiple correlation between the estimated factor scores and the true factor values. The sum of squared
discrepancies between true and estimated factors over individuals is minimized. The scores may be correlated even
when factors are orthogonal.
Bartlett Test
A statistic that can be used to test the hypothesis that the correlation matrix is an identity matrix (a matrix in which
all diagonal terms are 1 and off-diagonal terms 0). It requires that the data be a sample from a multivariate normal
population. If the null hypothesis that the population correlation matrix is an identity matrix cannot be rejected,
and the sample size is reasonably large, you should reconsider the use of multivariate analysis.
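A sketch of the usual chi-square approximation to this test, chi2 = -(n - 1 - (2p + 5) / 6) * ln|R| with p(p - 1)/2 degrees of freedom, assuming R is the p x p sample correlation matrix and n the sample size:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Bartlett's test that a p x p correlation matrix R is an identity
    matrix, given sample size n. Returns the statistic and its p-value."""
    p = R.shape[0]
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, df)
```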
Scree
A plot of the variance associated with each factor. It is used to determine how many factors should be kept.
Typically the plot shows a distinct break between the steep slope of the large factors and the gradual trailing of the
rest (the scree).
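A minimal matplotlib sketch of such a plot, using eigenvalues of a correlation matrix computed from synthetic data as a stand-in; the break ("elbow") suggests how many factors to keep:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in data; in practice use the observed correlation matrix.
X = np.random.default_rng(0).standard_normal((200, 6))
R = np.corrcoef(X, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]   # largest first

plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, "o-")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()
```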
rotation
A general method for making a factor solution easier to interpret. The axes of the factor loadings are rotated to
achieve a 'simple' structure. There are several different methods for rotation. Rotation does not affect the goodness
of fit for the solution.
varimax
An orthogonal rotation method that minimizes the number of variables that have high loadings on each factor. It
simplifies the interpretation of the factors.
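A textbook SVD-based sketch of the rotation in numpy (without the Kaiser row normalization a full implementation would apply), where L is a variables-by-factors loading matrix:

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix L (variables x factors)."""
    p, m = L.shape
    T = np.eye(m)        # accumulated orthogonal rotation
    d = 0.0
    for _ in range(max_iter):
        A = L @ T
        u, s, vt = np.linalg.svd(
            L.T @ (A**3 - A @ np.diag((A**2).sum(axis=0)) / p)
        )
        T = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):   # criterion stopped improving
            break
        d = d_new
    return L @ T
```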
VARIMAX rotated correlation
The correlations between the rotated principal components and the dependent variable.
VARIMAX rotation
An orthogonal rotation that minimizes the number of variables that have high loadings on each factor.
quartimax
A rotation method that minimizes the number of factors needed to explain each variable. It simplifies the
interpretation of the observed variables.
equamax
A method of rotation that minimizes both the number of variables that load highly on a factor and the number of
factors needed to explain a variable.
direct oblimin criterion
A general criterion that defines oblique rotation. When delta equals 0 (the default), solutions are most oblique. As
delta becomes more negative, the factors become less oblique.
oblique
A method of rotation that results in factors which are correlated.
Reproduced Correlation Matrix
A matrix of correlation coefficients between pairs of variables that is estimated from the results of factor analysis.
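A sketch tying this entry to the residuals defined earlier, assuming L is the extracted loading matrix, so the reproduced correlations are L @ L.T with the communalities on the diagonal:

```python
import numpy as np

def reproduced_and_residual(R, L):
    """Reproduced correlation matrix from loadings L (variables x factors)
    and the residuals against the observed correlation matrix R. Large
    residuals indicate a poorly fitting factor model."""
    R_hat = L @ L.T          # communalities appear on the diagonal
    residual = R - R_hat
    return R_hat, residual
```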
maximum likelihood
A method for factor extraction that results in the parameter estimates that are most likely to have produced the
observed correlation matrix if the sample is from a multivariate normal distribution. The correlations are weighted
by the inverse of the uniqueness of the variables.
maximum-likelihood estimation
An estimation technique which, for a given model and set of data, finds those parameter estimates which are most
likely to have produced the observed data.
principal axis factoring
A method of extracting factors from the original correlation matrix with squared multiple correlation coefficients
placed in the diagonal as initial estimates of the communalities. These factor loadings are used to estimate new
communalities that replace the old communality estimates in the diagonal. Iterations continue until the changes in
the communalities from one iteration to the next satisfy the convergence criterion for extraction.
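A compact numpy sketch of this iteration; the initial communalities are squared multiple correlations taken from the inverse of R, and real implementations add safeguards this sketch omits:

```python
import numpy as np

def principal_axis(R, n_factors, max_iter=100, tol=1e-6):
    """Principal axis factoring: iterate eigendecomposition of the reduced
    correlation matrix until the communalities stabilize."""
    Rw = R.copy()
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))   # initial communalities (SMCs)
    for _ in range(max_iter):
        np.fill_diagonal(Rw, h2)             # place communalities on diagonal
        vals, vecs = np.linalg.eigh(Rw)
        idx = np.argsort(vals)[::-1][:n_factors]
        L = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
        h2_new = (L**2).sum(axis=1)          # new communality estimates
        if np.max(np.abs(h2_new - h2)) < tol:
            break
        h2 = h2_new
    return L
```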
SS Loadings
Statistics provided in place of the eigenvalues in image and alpha factoring. Both of these models assume that the
observed variables are a sample from all possible variables.
Canonical Correlation
The canonical correlation for a discriminant function is the square root of the ratio of the between-groups sum of
squares to the total sum of squares. Squared, it is the proportion of the total variability explained by differences
between groups.
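A sketch of this ratio in numpy, assuming scores holds the discriminant scores for all cases and groups the matching group labels (names illustrative):

```python
import numpy as np

def canonical_correlation(scores, groups):
    """sqrt(between-groups SS / total SS) of the discriminant scores."""
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_between = 0.0
    for g in np.unique(groups):
        s = scores[groups == g]
        ss_between += len(s) * (s.mean() - grand) ** 2
    return np.sqrt(ss_between / ss_total)
```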
Discriminant score
A score computed for each case by multiplying the unstandardized discriminant coefficients by the values of the
independent variables, summing these products, and adding the constant. A separate discriminant score is
calculated for each discriminant function in the analysis. The mean score for all cases combined is 0 and the pooled
within-groups variance is 1.
Component loadings (Categories)
In optimal scaling, the projection of the quantified variable in the object space. When there are no missing data, the
component loading for dimension p is equivalent to the Pearson correlation between the quantified variable and the
object scores in dimension p. For multiple variables, the component loadings are given in a matrix. Because the
quantifications for a single variable are the same across dimensions, the component loadings for single variables
are given in a single row for each variable. Conceptually, for single variables the component loadings may be
thought of as analogous to the factor loadings given in Factor Analysis.
Controlling (PARTIAL)
Removing the effects of one or more variables from the relationship between sets of variables. In partial
correlation, the linear relationships between target variables and the control variable(s) are averaged out by
essentially holding the control variable(s) constant. This is distinct from controlling for a third variable by
analyzing a bivariate relationship separately within levels of the third variable.
Mahalanobis' distance
A measure of how much a case's values on the independent variables differ from the average of all cases. For a
single independent variable, it is simply the square of the standardized value of the independent variable. A large
Mahalanobis' distance identifies a case as having extreme values on one or more of the independent variables.
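A numpy sketch of the squared distance of each case from the centroid, using the sample covariance matrix, where X is a cases-by-variables array:

```python
import numpy as np

def mahalanobis_sq(X):
    """Squared Mahalanobis distance of each row of X from the mean of X."""
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    # d_i = diff_i' * cov_inv * diff_i, computed for all cases at once.
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
```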
Multivariate test for homogeneity
A test of the hypothesis that the variance-covariance matrices are equal.
Spearman Correlation Coefficient
A product-moment correlation coefficient suitable for ordinal data. It is equivalent to ranking the observations and computing a Pearson r on the ranks. It takes into account both the number of disagreements between pairs of ranks and the degree of those disagreements.
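Because it is a Pearson r on ranks, the coefficient can be sketched in a few lines (ties are averaged, as scipy's rankdata does by default):

```python
import numpy as np
from scipy.stats import rankdata

def spearman(x, y):
    """Spearman correlation: the Pearson r of the ranked observations."""
    return np.corrcoef(rankdata(x), rankdata(y))[0, 1]
```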