
What statistical analysis should I use?

Statistical analyses using SPSS


Introduction
This page shows how to perform a number of statistical tests using SPSS. Each section
gives a brief description of the aim of the statistical test, when it is used, an example
showing the SPSS commands and SPSS (often abbreviated) output with a brief
interpretation of the output. You can see the page Choosing the Correct Statistical Test for a
table that shows an overview of when each test is appropriate to use. In deciding which test
is appropriate to use, it is important to consider the type of variables that you have (i.e.,
whether your variables are categorical, ordinal or interval and whether they are normally
distributed); see What is the difference between categorical, ordinal and interval variables?
for more information on this.
What is the difference between categorical, ordinal and interval variables?

In talking about variables, sometimes you hear variables being described as categorical (or
sometimes nominal), or ordinal, or interval. Below we will define these terms and explain
why they are important.
Categorical

A categorical variable (sometimes called a nominal variable) is one that has two or more
categories, but there is no intrinsic ordering to the categories. For example, gender is a
categorical variable having two categories (male and female) and there is no intrinsic
ordering to the categories. Hair color is also a categorical variable having a number of
categories (blonde, brown, brunette, red, etc.) and again, there is no agreed way to order
these from highest to lowest. A purely categorical variable is one that simply allows you to
assign categories but you cannot clearly order them. If the variable has a clear
ordering, then that variable would be an ordinal variable, as described below.
Ordinal

An ordinal variable is similar to a categorical variable. The difference between the two is
that there is a clear ordering of the categories. For example, suppose you have a variable,
economic status, with three categories (low, medium and high). In addition to being able to
classify people into these three categories, you can order the categories as low, medium and
high. Now consider a variable like educational experience (with values such as elementary
school graduate, high school graduate, some college and college graduate). These also can
be ordered as elementary school, high school, some college, and college graduate. Even
though we can order these from lowest to highest, the spacing between the values may not
be the same across the levels of the variables. Say we assign scores 1, 2, 3 and 4 to these
four levels of educational experience and we compare the difference in education between
categories one and two with the difference in educational experience between categories
two and three, or the difference between categories three and four. The difference between
categories one and two (elementary and high school) is probably much bigger than the
difference between categories two and three (high school and some college). In this
example, we can order the people in level of educational experience but the size of the
difference between categories is inconsistent (because the spacing between categories one
and two is bigger than categories two and three). If these categories were equally spaced,
then the variable would be an interval variable.
Interval

An interval variable is similar to an ordinal variable, except that the intervals between the
values of the interval variable are equally spaced. For example, suppose you have a
variable such as annual income that is measured in dollars, and we have three people who
make $10,000, $15,000 and $20,000. The second person makes $5,000 more than the first
person and $5,000 less than the third person, and the size of these intervals is the same. If
there were two other people who make $90,000 and $95,000, the size of that interval
between these two people is also the same ($5,000).
Why does it matter whether a variable is categorical, ordinal or interval?

Statistical computations and analyses assume that the variables have specific levels of
measurement. For example, it would not make sense to compute an average hair color. An
average of a categorical variable does not make much sense because there is no intrinsic
ordering of the levels of the categories. Moreover, if you tried to compute the average of
educational experience as defined in the ordinal section above, you would also obtain a
nonsensical result. Because the spacing between the four levels of educational experience
is very uneven, the meaning of this average would be very questionable. In short, an
average requires a variable to be interval. Sometimes you have variables that are "in
between" ordinal and interval, for example, a five-point likert scale with values "strongly
agree", "agree", "neutral", "disagree" and "strongly disagree". If we cannot be sure that the
intervals between each of these five values are the same, then we would not be able to say
that this is an interval variable, but we would say that it is an ordinal variable. However, in
order to be able to use statistics that assume the variable is interval, we will assume that the
intervals are equally spaced.
Does it matter if my dependent variable is normally distributed?

When you are doing a t-test or ANOVA, the assumption is that the distribution of the
sample means is normal. One way to guarantee this is for the distribution of the
individual observations from the sample to be normal. However, even if the
distribution of the individual observations is not normal, the distribution of the sample
means will be normally distributed if your sample size is about 30 or larger. This is due to
the "central limit theorem" that shows that even when a population is non-normally
distributed, the distribution of the "sample means" will be normally distributed when the
sample size is 30 or more; for example, see the Central limit theorem demonstration.
If you are doing a regression analysis, then the assumption is that your residuals are
normally distributed. One way to make it very likely to have normal residuals is to have a
dependent variable that is normally distributed and predictors that are all normally
distributed, however this is not necessary for your residuals to be normally distributed.
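If you want to check how close to normal a variable looks before relying on these assumptions, one option (a sketch added here, not part of the original examples) is the examine command, which can produce a histogram, a normal probability plot and, with npplot, tests of normality for a variable such as write from the hsb2 data file described below.
examine variables = write
/plot histogram npplot
/statistics descriptives.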
About the hsb data file
Most of the examples in this page will use a data file called hsb2, high school and beyond.
This data file contains 200 observations from a sample of high school students with
demographic information about the students, such as their gender (female), socio-economic
status (ses) and ethnic background (race). It also contains a number of scores on
standardized tests, including tests of reading (read), writing (write), mathematics (math)
and social studies (socst). You can get the hsb data file by clicking on hsb2.
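If you have downloaded the file, you can open it in SPSS with a get file command; the path below is only a placeholder for wherever you saved hsb2.sav.
get file = 'C:\mydata\hsb2.sav'.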
One sample t-test
A one sample t-test allows us to test whether a sample mean (of a normally distributed
interval variable) significantly differs from a hypothesized value. For example, using the
hsb2 data file, say we wish to test whether the average writing score (write) differs
significantly from 50. We can do this as shown below.
t-test
/testval = 50
/variable = write.

The mean of the variable write for this particular sample of students is 52.775, which is
statistically significantly different from the test value of 50. We would conclude that this
group of students has a significantly higher mean on the writing test than 50.
One sample median test
A one sample median test allows us to test whether a sample median differs significantly
from a hypothesized value. We will use the same variable, write, as we did in the one
sample t-test example above, but we do not need to assume that it is interval and normally
distributed (we only need to assume that write is an ordinal variable). However, we are
unaware of how to perform this test in SPSS.
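One rough workaround (a sketch, not a dedicated median test) is to recode write into an indicator of whether each score exceeds the hypothesized median and then run the binomial test described in the next section on that indicator; the variable name above50 is made up for this illustration, and scores exactly equal to 50 would ideally be excluded first.
* Sign-test style check of whether the median of write differs from 50.
compute above50 = (write > 50).
execute.
npar tests
/binomial (.5) = above50.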
Binomial test
A one sample binomial test allows us to test whether the proportion of successes on a two-level categorical dependent variable significantly differs from a hypothesized value. For
example, using the hsb2 data file, say we wish to test whether the proportion of females
(female) differs significantly from 50%, i.e., from .5. We can do this as shown below.
npar tests
/binomial (.5) = female.

The results indicate that there is no statistically significant difference (p = .229). In other
words, the proportion of females in this sample does not significantly differ from the
hypothesized value of 50%.

Chi-square goodness of fit


A chi-square goodness of fit test allows us to test whether the observed proportions for a
categorical variable differ from hypothesized proportions. For example, let's suppose that
we believe that the general population consists of 10% Hispanic, 10% Asian, 10% African
American and 70% White folks. We want to test whether the observed proportions from
our sample differ significantly from these hypothesized proportions.
npar test
/chisquare = race
/expected = 10 10 10 70.

These results show that racial composition in our sample does not differ significantly from
the hypothesized values that we supplied (chi-square with three degrees of freedom =
5.029, p = .170).
Two independent samples t-test
An independent samples t-test is used when you want to compare the means of a normally
distributed interval dependent variable for two independent groups. For example, using the
hsb2 data file, say we wish to test whether the mean for write is the same for males and
females.
t-test groups = female(0 1)
/variables = write.

The results indicate that there is a statistically significant difference between the mean
writing score for males and females (t = -3.734, p = .000). In other words, females have a
statistically significantly higher mean score on writing (54.99) than males (50.12).
See also

SPSS Learning Module: An overview of statistical tests in SPSS

Wilcoxon-Mann-Whitney test
The Wilcoxon-Mann-Whitney test is a non-parametric analog to the independent samples t-test and can be used when you do not assume that the dependent variable is a normally
distributed interval variable (you only assume that the variable is at least ordinal). You will
notice that the SPSS syntax for the Wilcoxon-Mann-Whitney test is almost identical to that
of the independent samples t-test. We will use the same data file (the hsb2 data file) and the
same variables in this example as we did in the independent t-test example above and will
not assume that write, our dependent variable, is normally distributed.
npar test
/m-w = write by female(0 1).

The results suggest that there is a statistically significant difference between the underlying
distributions of the write scores of males and the write scores of females (z = -3.329, p =
0.001).
See also

FAQ: Why is the Mann-Whitney significant when the medians are equal?

Chi-square test
A chi-square test is used when you want to see if there is a relationship between two
categorical variables. In SPSS, the chisq option is used on the statistics subcommand of
the crosstabs command to obtain the test statistic and its associated p-value. Using the
hsb2 data file, let's see if there is a relationship between the type of school attended
(schtyp) and students' gender (female). Remember that the chi-square test assumes that the
expected value for each cell is five or higher. This assumption is easily met in the examples
below. However, if this assumption is not met in your data, please see the section on
Fisher's exact test below.
crosstabs
/tables = schtyp by female
/statistic = chisq.

These results indicate that there is no statistically significant relationship between the type
of school attended and gender (chi-square with one degree of freedom = 0.047, p = 0.828).
Let's look at another example, this time looking at the relationship between gender
(female) and socio-economic status (ses). The point of this example is that one (or both)
variables may have more than two levels, and that the variables do not have to have the
same number of levels. In this example, female has two levels (male and female) and ses
has three levels (low, medium and high).
crosstabs
/tables = female by ses
/statistic = chisq.

Again we find that there is no statistically significant relationship between the variables
(chi-square with two degrees of freedom = 4.577, p = 0.101).
See also

SPSS Learning Module: An Overview of Statistical Tests in SPSS

Fisher's exact test


The Fisher's exact test is used when you want to conduct a chi-square test but one or more
of your cells has an expected frequency of five or less. Remember that the chi-square test
assumes that each cell has an expected frequency of five or more, but the Fisher's exact test
has no such assumption and can be used regardless of how small the expected frequency is.
In SPSS, unless you have the SPSS Exact Test Module, you can only perform Fisher's
exact test on a 2x2 table, and these results are presented by default. Please see the results
from the chi-square example above.
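For reference, the crosstab below (the same 2x2 table from the chi-square example above) is the kind of command whose output includes Fisher's exact test by default for a 2x2 table.
crosstabs
/tables = schtyp by female
/statistic = chisq.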
One-way ANOVA

A one-way analysis of variance (ANOVA) is used when you have a categorical independent
variable (with two or more categories) and a normally distributed interval dependent
variable and you wish to test for differences in the means of the dependent variable broken
down by the levels of the independent variable. For example, using the hsb2 data file, say
we wish to test whether the mean of write differs between the three program types (prog).
The command for this test would be:
oneway write by prog.

The mean of the dependent variable differs significantly among the levels of program type.
However, we do not know if the difference is between only two of the levels or all three of
the levels. (The F test for the Model is the same as the F test for prog because prog was
the only variable entered into the model. If other variables had also been entered, the F test
for the Model would have been different from prog.) To see the mean of write for each
level of program type,
means tables = write by prog.

From this we can see that the students in the academic program have the highest mean
writing score, while students in the vocational program have the lowest.
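If you wanted to see which pairs of program types differ, one option (an added sketch, not part of the original example) is to request a post-hoc test on the oneway command, such as Tukey's HSD.
oneway write by prog
/posthoc = tukey alpha(0.05).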
See also

SPSS Textbook Examples: Design and Analysis, Chapter 7

SPSS Textbook Examples: Applied Regression Analysis, Chapter 8

SPSS FAQ: How can I do ANOVA contrasts in SPSS?

SPSS Library: Understanding and Interpreting Parameter Estimates in


Regression and ANOVA

Kruskal Wallis test


The Kruskal Wallis test is used when you have one independent variable with two or more
levels and an ordinal dependent variable. In other words, it is the non-parametric version of
ANOVA and a generalized form of the Mann-Whitney test, since it permits two or
more groups. We will use the same data file as the one way ANOVA example above (the
hsb2 data file) and the same variables as in the example above, but we will not assume that
write is a normally distributed interval variable.
npar tests
/k-w = write by prog (1,3).

If some of the scores receive tied ranks, then a correction factor is used, yielding a slightly
different value of chi-squared. With or without ties, the results indicate that there is a
statistically significant difference among the three types of programs.
Paired t-test
A paired (samples) t-test is used when you have two related observations (i.e., two
observations per subject) and you want to see if the means on these two normally
distributed interval variables differ from one another. For example, using the hsb2 data file
we will test whether the mean of read is equal to the mean of write.
t-test pairs = read with write (paired).

These results indicate that the mean of read is not statistically significantly different from
the mean of write (t = -0.867, p = 0.387).
Wilcoxon signed rank sum test
The Wilcoxon signed rank sum test is the non-parametric version of a paired samples t-test.
You use the Wilcoxon signed rank sum test when you do not wish to assume that the
difference between the two variables is interval and normally distributed (but you do
assume the difference is ordinal). We will use the same example as above, but we will not
assume that the difference between read and write is interval and normally distributed.
npar test
/wilcoxon = write with read (paired).

The results suggest that there is not a statistically significant difference between read and
write.
If you believe the differences between read and write were not ordinal but could merely be
classified as positive and negative, then you may want to consider a sign test in lieu of the signed
rank test. Again, we will use the same variables in this example and assume that this
difference is not ordinal.
npar test
/sign = read with write (paired).

We conclude that no statistically significant difference was found (p=.556).


McNemar test
You would perform McNemar's test if you were interested in the marginal frequencies of
two binary outcomes. These binary outcomes may be the same outcome variable on
matched pairs (like a case-control study) or two outcome variables from a single group.
Continuing with the hsb2 dataset used in several above examples, let us create two binary
outcomes in our dataset: himath and hiread. These outcomes can be considered in a
two-way contingency table. The null hypothesis is that the proportion of students in the himath
group is the same as the proportion of students in the hiread group (i.e., that the contingency
table is symmetric).
compute himath = (math>60).
compute hiread = (read>60).
execute.
crosstabs
/tables=himath BY hiread
/statistic=mcnemar
/cells=count.

McNemar's chi-square statistic suggests that there is not a statistically significant difference
in the proportion of students in the himath group and the proportion of students in the
hiread group.
One-way repeated measures ANOVA
You would perform a one-way repeated measures analysis of variance if you had one
categorical independent variable and a normally distributed interval dependent variable that
was repeated at least twice for each subject. This is the equivalent of the paired samples t-test, but allows for two or more levels of the categorical variable. This tests whether the
mean of the dependent variable differs by the categorical variable. We have an example
data set called rb4wide, which is used in Kirk's book Experimental Design. In this data set,
y is the dependent variable, a is the repeated measure and s is the variable that indicates the
subject number.

glm y1 y2 y3 y4
/wsfactor a(4).

You will notice that this output gives four different p-values. The output labeled "sphericity
assumed" is the p-value (0.000) that you would get if you assumed compound symmetry in
the variance-covariance matrix. Because that assumption is often not valid, the three other
p-values offer various corrections (the Huynh-Feldt (H-F), Greenhouse-Geisser (G-G), and
Lower-bound). No matter which p-value you use, our results indicate that we have a
statistically significant effect of a at the .05 level.

See also

SPSS Textbook Examples from Design and Analysis: Chapter 16

SPSS Library: Advanced Issues in Using and Understanding SPSS


MANOVA

SPSS Code Fragment: Repeated Measures ANOVA

Repeated measures logistic regression


If you have a binary outcome measured repeatedly for each subject and you wish to run a
logistic regression that accounts for the effect of multiple measures from single subjects,
you can perform a repeated measures logistic regression. In SPSS, this can be done using
the GENLIN command and indicating binomial as the probability distribution and logit as
the link function to be used in the model. The exercise data file contains 3 pulse
measurements from each of 30 people assigned to 2 different diet regimens and 3 different
exercise regimens. If we define a "high" pulse as being over 100, we can then predict the
probability of a high pulse using diet regimen.
GET FILE='C:\mydata\exercise.sav'.
GENLIN highpulse (REFERENCE=LAST)
BY diet (order = DESCENDING)
/MODEL diet
INTERCEPT=YES
DISTRIBUTION=BINOMIAL
LINK=LOGIT
/REPEATED SUBJECT=id.

These results indicate that diet is not statistically significant (Wald Chi-Square = 1.562, p =
0.211).
Factorial ANOVA
A factorial ANOVA has two or more categorical independent variables (either with or
without the interactions) and a single normally distributed interval dependent variable. For
example, using the hsb2 data file we will look at writing scores (write) as the dependent
variable and gender (female) and socio-economic status (ses) as independent variables, and
we will include an interaction of female by ses. Note that in SPSS, you do not need to have
the interaction term(s) in your data set. Rather, you can have SPSS create it/them
temporarily by placing an asterisk between the variables that will make up the interaction
term(s).
glm write by female ses.

These results indicate that the overall model is statistically significant (F = 5.666, p =
0.000). The variables female and ses are also statistically significant (F = 16.595, p = 0.000
and F = 6.611, p = 0.002, respectively). However, the interaction between female and ses
is not statistically significant (F = 0.133, p = 0.875).
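As a sketch of the asterisk notation mentioned above, the same model can be written with the interaction spelled out on a design subcommand; because glm fits the full factorial model by default, this should give the same results as the command shown earlier.
glm write by female ses
/design = female ses female*ses.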
See also

SPSS Textbook Examples from Design and Analysis: Chapter 10

SPSS FAQ: How can I do tests of simple main effects in SPSS?

SPSS FAQ: How do I plot ANOVA cell means in SPSS?

SPSS Library: An Overview of SPSS GLM

Friedman test
You perform a Friedman test when you have one within-subjects independent variable with
two or more levels and a dependent variable that is not interval and normally distributed
(but at least ordinal). We will use this test to determine if there is a difference in the
reading, writing and math scores. The null hypothesis in this test is that the distributions of
the ranks of each type of score (i.e., reading, writing and math) are the same. To conduct a
Friedman test, the data need to be in a long format. SPSS handles this for you, but in other
statistical packages you will have to reshape the data before you can conduct this test.
npar tests
/friedman = read write math.

Friedman's chi-square has a value of 0.645 and a p-value of 0.724 and is not statistically
significant. Hence, there is no evidence that the distributions of the three types of scores
are different.

Factorial logistic regression


A factorial logistic regression is used when you have two or more categorical independent
variables but a dichotomous dependent variable. For example, using the hsb2 data file we
will use female as our dependent variable, because it is the only dichotomous variable in
our data set; certainly not because it is common practice to use gender as an outcome
variable. We will use type of program (prog) and school type (schtyp) as our predictor
variables. Because prog is a categorical variable (it has three levels), we need to create
dummy codes for it. SPSS will do this for you by making dummy codes for all variables
listed after the keyword with. SPSS will also create the interaction term; simply list the
two variables that will make up the interaction separated by the keyword by.
logistic regression female with prog schtyp prog by schtyp
/contrast(prog) = indicator(1).

The results indicate that the overall model is not statistically significant (LR chi2 = 3.147, p
= 0.677). Furthermore, none of the coefficients are statistically significant either. This
shows that the overall effect of prog is not significant.
See also

Annotated output for logistic regression

SPSS Topics: Logistic Regression

Correlation
A correlation is useful when you want to see the relationship between two (or more)
normally distributed interval variables. For example, using the hsb2 data file we can run a
correlation between two continuous variables, read and write.
correlations
/variables = read write.

In the second example, we will run a correlation between a dichotomous variable, female,
and a continuous variable, write. Although it is assumed that the variables are interval and
normally distributed, we can include dummy variables when performing correlations.
correlations
/variables = female write.

In the first example above, we see that the correlation between read and write is 0.597. By
squaring the correlation and then multiplying by 100, you can determine what percentage of
the variability is shared. Let's round 0.597 to 0.6, which when squared is .36; multiplied
by 100, this is 36%. Hence read shares about 36% of its variability with
write. In the output for the second example, we can see the correlation between write and
female is 0.256. Squaring this number yields .065536, meaning that female shares
approximately 6.5% of its variability with write.
See also

Annotated output for correlation

SPSS Learning Module: An Overview of Statistical Tests in SPSS

SPSS FAQ: How can I analyze my data by categories?

Missing Data in SPSS

Simple linear regression


Simple linear regression allows us to look at the linear relationship between one normally
distributed interval predictor and one normally distributed interval outcome variable. For
example, using the hsb2 data file, say we wish to look at the relationship between writing
scores (write) and reading scores (read); in other words, predicting write from read.
regression variables = write read
/dependent = write
/method = enter.

We see that the relationship between write and read is positive (.552) and, based on the t-value (10.47) and p-value (0.000), we would conclude this relationship is statistically
significant. Hence, we would say there is a statistically significant positive linear
relationship between reading and writing.
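If you would like to visualize this relationship, one option (an added sketch, not part of the original example) is a simple scatterplot of write against read.
graph
/scatterplot(bivar) = read with write.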
See also

Regression With SPSS: Chapter 1 - Simple and Multiple Regression

Annotated output for regression

SPSS Topics: Regression

SPSS Textbook Examples: Introduction to the Practice of Statistics, Chapter 10

SPSS Textbook Examples: Regression with Graphics, Chapter 2

SPSS Textbook Examples: Applied Regression Analysis, Chapter 5

Non-parametric correlation
A Spearman correlation is used when one or both of the variables are not assumed to be
normally distributed and interval (but are assumed to be ordinal). The values of the
variables are converted into ranks and then correlated. In our example, we will look for a
relationship between read and write. We will not assume that both of these variables are
normal and interval.
nonpar corr
/variables = read write
/print = spearman.

The results suggest that the relationship between read and write (rho = 0.617, p = 0.000) is
statistically significant.
Simple logistic regression
Logistic regression assumes that the outcome variable is binary (i.e., coded as 0 and 1). We
have only one variable in the hsb2 data file that is coded 0 and 1, and that is female. We
understand that female is a silly outcome variable (it would make more sense to use it as a
predictor variable), but we can use female as the outcome variable to illustrate how the
code for this command is structured and how to interpret the output. The first variable
listed after the logistic command is the outcome (or dependent) variable, and all of the rest
of the variables are predictor (or independent) variables. In our example, female will be
the outcome variable, and read will be the predictor variable. As with OLS regression, the
predictor variables must be either dichotomous or continuous; they cannot be categorical.
logistic regression female with read.

The results indicate that reading score (read) is not a statistically significant predictor of
gender (i.e., being female), Wald = .562, p = 0.453. Likewise, the test of the overall model
is not statistically significant, LR chi-squared = 0.56, p = 0.453.
See also

Annotated output for logistic regression

SPSS Topics: Logistic Regression

SPSS Library: What kind of contrasts are these?

Multiple regression
Multiple regression is very similar to simple regression, except that in multiple regression
you have more than one predictor variable in the equation. For example, using the hsb2
data file we will predict writing score from gender (female), reading, math, science and
social studies (socst) scores.
regression variable = write female read math science socst
/dependent = write
/method = enter.

The results indicate that the overall model is statistically significant (F = 58.60, p = 0.000).
Furthermore, all of the predictor variables are statistically significant except for read.
See also

Regression with SPSS: Chapter 1 - Simple and Multiple Regression

Annotated output for regression

SPSS Topics: Regression

SPSS Frequently Asked Questions

SPSS Textbook Examples: Regression with Graphics, Chapter 3

SPSS Textbook Examples: Applied Regression Analysis

Analysis of covariance

Analysis of covariance is like ANOVA, except that in addition to the categorical predictors you
also have continuous predictors. For example, the one way ANOVA example used
write as the dependent variable and prog as the independent variable. Let's add read as a
continuous variable to this model, as shown below.
glm write with read by prog.

The results indicate that even after adjusting for reading score (read), writing scores still
significantly differ by program type (prog), F = 5.867, p = 0.003.
See also

SPSS Textbook Examples from Design and Analysis: Chapter 14

SPSS Library: An Overview of SPSS GLM

SPSS Library: How do I handle interactions of continuous and categorical


variables?

Multiple logistic regression


Multiple logistic regression is like simple logistic regression, except that there are two or
more predictors. The predictors can be interval variables or dummy variables, but cannot
be categorical variables. If you have categorical predictors, they should be coded into one
or more dummy variables. We have only one variable in our data set that is coded 0 and 1,
and that is female. We understand that female is a silly outcome variable (it would make
more sense to use it as a predictor variable), but we can use female as the outcome variable
to illustrate how the code for this command is structured and how to interpret the output.
The first variable listed after the logistic regression command is the outcome (or
dependent) variable, and all of the rest of the variables are predictor (or independent)

variables (listed after the keyword with). In our example, female will be the outcome
variable, and read and write will be the predictor variables.
logistic regression female with read write.

These results show that both read and write are significant predictors of female.
See also

Annotated output for logistic regression

SPSS Topics: Logistic Regression

SPSS Textbook Examples: Applied Logistic Regression, Chapter 2

SPSS Code Fragments: Graphing Results in Logistic Regression

Discriminant analysis
Discriminant analysis is used when you have one or more normally distributed interval
independent variables and a categorical dependent variable. It is a multivariate technique
that considers the latent dimensions in the independent variables for predicting group
membership in the categorical dependent variable. For example, using the hsb2 data file,
say we wish to use read, write and math scores to predict the type of program a student
belongs to (prog).
discriminant groups = prog(1, 3)
/variables = read write math.

Clearly, the SPSS output for this procedure is quite lengthy, and it is beyond the scope of
this page to explain all of it. However, the main point is that two canonical variables are
identified by the analysis, the first of which seems to be more related to program type than
the second.
See also

discriminant function analysis

SPSS Library: A History of SPSS Statistical Features

One-way MANOVA
MANOVA (multivariate analysis of variance) is like ANOVA, except that there are two or
more dependent variables. In a one-way MANOVA, there is one categorical independent
variable and two or more dependent variables. For example, using the hsb2 data file, say we
wish to examine the differences in read, write and math broken down by program type
(prog).
glm read write math by prog.

The students in the different programs differ in their joint distribution of read, write and
math.
See also

SPSS Library: Advanced Issues in Using and Understanding SPSS


MANOVA

GLM: MANOVA and MANCOVA

SPSS Library: MANOVA and GLM

Multivariate multiple regression

Multivariate multiple regression is used when you have two or more dependent variables that are to be
predicted from two or more predictor variables. In our example, we will predict write and
read from female, math, science and social studies (socst) scores.
glm write read with female math science socst.

These results show that all of the variables in the model have a statistically significant
relationship with the joint distribution of write and read.
Canonical correlation
Canonical correlation is a multivariate technique used to examine the relationship between
two groups of variables. For each set of variables, it creates latent variables and looks at
the relationships among the latent variables. It assumes that all variables in the model are
interval and normally distributed. SPSS requires that each of the two groups of variables
be separated by the keyword with. There need not be an equal number of variables in the
two groups (before and after the with).
manova read write with math science
/discrim.
* * * * * * A n a l y s i s   o f   V a r i a n c e -- design 1 * * * * * *

EFFECT .. WITHIN CELLS Regression
Multivariate Tests of Significance (S = 2, M = -1/2, N = 97)

Test Name         Value   Approx. F  Hypoth. DF  Error DF  Sig. of F
Pillais          .59783    41.99694        4.00    394.00       .000
Hotellings      1.48369    72.32964        4.00    390.00       .000
Wilks            .40249    56.47060        4.00    392.00       .000
Roys             .59728
Note.. F statistic for WILKS' Lambda is exact.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

EFFECT .. WITHIN CELLS Regression (Cont.)
Univariate F-tests with (2,197) D. F.

Variable  Sq. Mul. R  Adj. R-sq.  Hypoth. MS  Error MS          F  Sig. of F
READ          .51356      .50862  5371.66966  51.65523  103.99081       .000
WRITE         .43565      .42992  3894.42594  51.21839   76.03569       .000

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Raw canonical coefficients for DEPENDENT variables
                 Function No. 1
READ                       .063
WRITE                      .049

Standardized canonical coefficients for DEPENDENT variables
                 Function No. 1
READ                       .649
WRITE                      .467

Correlations between DEPENDENT and canonical variables
                 Function No. 1
READ                       .927
WRITE                      .854

Variance in dependent variables explained by canonical variables
CAN. VAR.  Pct Var DE  Cum Pct DE  Pct Var CO  Cum Pct CO
1              79.441      79.441      47.449      47.449

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Raw canonical coefficients for COVARIATES
                 Function No. 1
MATH                       .067
SCIENCE                    .048

Standardized canonical coefficients for COVARIATES
                 Function No. 1
MATH                       .628
SCIENCE                    .478

Correlations between COVARIATES and canonical variables
                 Function No. 1
MATH                       .929
SCIENCE                    .873

Variance in covariates explained by canonical variables
CAN. VAR.  Pct Var DE  Cum Pct DE  Pct Var CO  Cum Pct CO
1              48.544      48.544      81.275      81.275

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Regression analysis for WITHIN CELLS error term
--- Individual Univariate .9500 confidence intervals

Dependent variable .. READ    reading score

COVARIATE        B      Beta  Std. Err.  t-Value  Sig. of t  Lower -95%  CL- Upper
MATH        .48129    .43977       .070    6.868       .000        .343       .619
SCIENCE     .36532    .35278       .066    5.509       .000        .235       .496

Dependent variable .. WRITE   writing score

COVARIATE        B      Beta  Std. Err.  t-Value  Sig. of t  Lower -95%  CL- Upper
MATH        .43290    .42787       .070    6.203       .000        .295       .571
SCIENCE     .28775    .30057       .066    4.358       .000        .158       .418

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

EFFECT .. CONSTANT
Multivariate Tests of Significance (S = 1, M = 0, N = 97)

Test Name         Value    Exact F  Hypoth. DF  Error DF  Sig. of F
Pillais          .11544   12.78959        2.00    196.00       .000
Hotellings       .13051   12.78959        2.00    196.00       .000
Wilks            .88456   12.78959        2.00    196.00       .000
Roys             .11544
Note.. F statistics are exact.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

EFFECT .. CONSTANT (Cont.)
Univariate F-tests with (1,197) D. F.

Variable  Hypoth. SS    Error SS  Hypoth. MS  Error MS         F  Sig. of F
READ       336.96220  10176.0807   336.96220  51.65523   6.52329       .011
WRITE     1209.88188  10090.0231  1209.88188  51.21839  23.62202       .000

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Raw discriminant function coefficients
                 Function No. 1
READ                       .041
WRITE                      .124

Standardized discriminant function coefficients
                 Function No. 1
READ                       .293
WRITE                      .889

Estimates of effects for canonical variables
Parameter   Canonical Variable 1
1                          2.196

Correlations between DEPENDENT and canonical variables
                 Canonical Variable 1
READ                             .504
WRITE                            .959

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The output above shows the linear combinations corresponding to the first canonical
correlation. At the bottom of the output are the two canonical correlations. These results
indicate that the first canonical correlation is .7728. The F-test in this output tests the
hypothesis that the first canonical correlation is equal to zero. Clearly, F = 56.4706 is
statistically significant. However, the second canonical correlation of .0235 is not
statistically significantly different from zero (F = 0.1087, p = 0.7420).
Factor analysis
Factor analysis is a form of exploratory multivariate analysis that is used to either reduce
the number of variables in a model or to detect relationships among variables. All variables
involved in the factor analysis need to be interval and are assumed to be normally
distributed. The goal of the analysis is to try to identify factors which underlie the
variables. There may be fewer factors than variables, but there may not be more factors
than variables. For our example, let's suppose that we think that there are some common
factors underlying the various test scores. We will include subcommands for varimax
rotation and a plot of the eigenvalues. We will use a principal components extraction and
will retain two factors. (Using these options will make our results compatible with those
from SAS and Stata, but they are not necessarily the options that you will want to use.)
factor
/variables read write math science socst
/criteria factors(2)

/extraction pc
/rotation varimax
/plot eigen.

Communality (which is the opposite of uniqueness) is the proportion of variance of the
variable (i.e., read) that is accounted for by all of the factors taken together, and a very low
communality can indicate that a variable may not belong with any of the factors. The scree
plot may be useful in determining how many factors to retain. From the component matrix
table, we can see that all five of the test scores load onto the first factor, while all five tend
to load not so heavily on the second factor. The purpose of rotating the factors is to get the
variables to load either very high or very low on each factor. In this example, because all of
the variables loaded onto factor 1 and not on factor 2, the rotation did not aid in the
interpretation. Instead, it made the results even more difficult to interpret.
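If you wanted to keep the two factor scores for use in later analyses, one option (a sketch assuming the same extraction and rotation settings as above) is to add a save subcommand that writes regression-based factor scores back to the data file.
factor
/variables read write math science socst
/criteria factors(2)
/extraction pc
/rotation varimax
/save reg(all).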
See also

SPSS FAQ: What does Cronbach's alpha mean?
