
One-way ANOVA (For Age)

Table 1
Test of Homogeneity of Variances
Age

Levene Statistic    df1    df2    Sig.
.077                3      46     .972

Table 2
ANOVA
Age

                  Sum of Squares    df    Mean Square    F       Sig.
Between Groups    1.838             3     .613           .769    .517
Within Groups     36.642            46    .797
Total             38.480            49

Table 3
Robust Tests of Equality of Means
Age

         Statistic(a)    df1    df2       Sig.
Welch    .815            3      22.458    .499

a. Asymptotically F distributed.

Post Hoc Tests (Age)

Table 4

Multiple Comparisons
Attractiveness of offers
Tukey HSD

                          Mean Difference                           95% Confidence Interval
(I) Age      (J) Age      (I-J)        Std. Error    Sig.      Lower Bound    Upper Bound
18-25        26-35        .593         .570          .727      -.93           2.11
             36-45        1.000        .629          .395      -.68           2.68
             45 above     .778         .639          .620      -.93           2.48
26-35        18-25        -.593        .570          .727      -2.11          .93
             36-45        .407         .394          .730      -.64           1.46
             45 above     .185         .410          .969      -.91           1.28
36-45        18-25        -1.000       .629          .395      -2.68          .68
             26-35        -.407        .394          .730      -1.46          .64
             45 above     -.222        .489          .968      -1.53          1.08
45 above     18-25        -.778        .639          .620      -2.48          .93
             26-35        -.185        .410          .969      -1.28          .91
             36-45        .222         .489          .968      -1.08          1.53



Table 5
Homogeneous Subsets

Attractiveness of offers
Tukey HSD(a,b)

                         Subset for alpha = 0.05
Age           N          1
36-45         10         2.00
45 above      9          2.22
26-35         27         2.41
18-25         4          3.00
Sig.                     .249

Means for groups in homogeneous subsets are displayed.

INTERPRETATION
Table 1
The Test of Homogeneity of Variances output tests H0: σ²(18-25) = σ²(26-35) = σ²(36-45) = σ²(45 above). Equal variances across groups is an important assumption made by the analysis of variance. To interpret this output, look at the column labeled Sig. This is the p value. If the p value is less than or equal to your α level for this test, then you can reject the H0 that the variances are equal. If the p value is greater than the α level, then we fail to reject H0, which increases our confidence that the variances are equal and that the homogeneity of variance assumption has been met. Here the p value is .972. Because the p value is greater than the α level, we fail to reject H0, implying that there is little evidence that the variances are not equal and that the homogeneity of variance assumption may be reasonably satisfied.
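For readers who want to reproduce this check outside SPSS, a minimal sketch in Python follows; the four rating lists are hypothetical stand-ins, since the raw survey responses are not reproduced in this report.

from scipy import stats

# Hypothetical attractiveness ratings for the four age groups.
age_18_25 = [3, 4, 2, 3]
age_26_35 = [2, 3, 2, 2, 3, 1, 3, 2]
age_36_45 = [2, 1, 2, 3, 2]
age_45_up = [2, 3, 2, 2, 1]

# center='mean' gives the original Levene test that SPSS reports;
# scipy's default, center='median', is the Brown-Forsythe variant.
stat, p = stats.levene(age_18_25, age_26_35, age_36_45, age_45_up,
                       center='mean')
print(f"Levene statistic = {stat:.3f}, p = {p:.3f}")
# If p > .05, we fail to reject H0 of equal variances, as in Table 1.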

Table 2
The ANOVA output gives us the analysis of variance summary table. There are six
columns in the output:
Column / Description

Unlabeled (Source of variance)

The first column describes each row of the ANOVA summary table. It tells us that the first row corresponds to the between-groups estimate of variance (the estimate that measures the effect and error). The between-groups estimate of variance forms the numerator of the F ratio. The second row corresponds to the within-groups estimate of variance (the estimate of error). The within-groups estimate of variance forms the denominator of the F ratio. The final row describes the total variability in the data.

Sum of Squares

The Sum of Squares column gives the sum of squares for each of the estimates of variance. The sum of squares corresponds to the numerator of the variance ratio.
df

The third column gives the degrees of freedom for each estimate of variance. The degrees of freedom for the between-groups estimate of variance is one less than the number of levels of the age variable. In this example there are four levels of age, so there are 4 - 1 = 3 degrees of freedom for the between-groups estimate of variance.
The degrees of freedom for the within-groups estimate of variance is calculated by subtracting one from the number of people in each condition/category and summing across the conditions/categories.

Mean Square

The fourth column gives the estimates of variance (the mean squares). Each mean square is calculated by dividing the sum of squares by its degrees of freedom.
MS(between groups) = SS(between groups) / df(between groups)
MS(within groups) = SS(within groups) / df(within groups)

F

The fifth column gives the F ratio. It is calculated by dividing the mean square between groups by the mean square within groups.
F = MS(between groups) / MS(within groups)

Sig.

The final column gives the significance of the F ratio. This is the p value. If the p value is less than or equal to your α level, then you can reject H0 that all the means are equal. In this example, the p value is .517, which is greater than the α level, so we fail to reject H0. That is, there is insufficient evidence to claim that any of the means differ from each other.

We would write the F ratio as: The one-way, between-subjects analysis of variance failed to reveal a reliable effect of age on attractiveness of offers, F(3, 46) = 0.769, p = .517, MSerror = 0.797, α = .05.
The 3 is the between-groups degrees of freedom, 46 is the within-groups degrees of freedom, 0.769 is the F ratio from the F column, .517 is the value in the Sig. column (the p value), and 0.797 is the within-groups mean square estimate of variance.
Decide whether to reject H0: If the p value associated with the F ratio is less than or equal to the α level, then you can reject the null hypothesis that all the means are equal. In this case, the p value equals .517, which is greater than the α level (.05), so we fail to reject H0.
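As a sanity check, the F ratio and its p value can be recomputed from the sums of squares and degrees of freedom in Table 2 alone; a short Python sketch:

from scipy import stats

ss_between, df_between = 1.838, 3
ss_within, df_within = 36.642, 46

ms_between = ss_between / df_between    # 1.838 / 3   = .613
ms_within = ss_within / df_within       # 36.642 / 46 = .797
f_ratio = ms_between / ms_within        # .613 / .797 = .769

# The p value is the upper-tail area of the F(3, 46) distribution.
p_value = stats.f.sf(f_ratio, df_between, df_within)
print(f"F({df_between}, {df_within}) = {f_ratio:.3f}, p = {p_value:.3f}")
# Expected output, up to rounding: F(3, 46) = 0.769, p = 0.517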
When the F ratio is statistically significant, we need to look at the multiple
comparisons output. Even though our F ratio is not statistically significant, we will
look at the multiple comparisons to see how they are interpreted.
Table 4

The Multiple Comparisons output gives the results of the Post-Hoc tests that you
requested. In this example, I requested Tukey multiple comparisons, so the output
reflects that choice. Different people have different opinions about when to look at
the multiple comparisons output. One of the leading opinions is that the multiple
comparison output is only meaningful if the overall F ratio is statistically
significant. In this example, it is not statistically significant, so technically I should
not check the multiple comparisons output.
The output includes a separate row for each level of the independent variable. In this example, there are four rows corresponding to the four age groups. Let's consider the first row, the one with (I) Age equal to 18-25. There are three sub-rows within this row. Each sub-row corresponds to one of the other age groups. Thus, there are three comparisons described in this row:
Comparison             H0                            H1
18-25 vs 26-35         H0: μ(18-25) = μ(26-35)       H1: μ(18-25) ≠ μ(26-35)
18-25 vs 36-45         H0: μ(18-25) = μ(36-45)       H1: μ(18-25) ≠ μ(36-45)
18-25 vs 45 above      H0: μ(18-25) = μ(45 above)    H1: μ(18-25) ≠ μ(45 above)

The second column in the output gives the difference between the means. In this example, the difference between the mean attractiveness rating of respondents aged 18-25 and those aged 26-35 is 0.593. The third column gives the standard error of the difference. The fourth column is the p value for the multiple comparison. In this example, the p value for comparing the ratings of respondents aged 18-25 with those aged 26-35 is .727, meaning that it is unlikely that these means are different (as you would expect, given that the difference (0.593) is small). If the p value is less than or equal to the α level, then you can reject the corresponding H0. In this example, the p value is .727, which is larger than the α level of .05, so we fail to reject the H0 that the mean rating of respondents aged 18-25 equals the mean rating of those aged 26-35. The final two columns give the 95% confidence interval for the difference.
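A minimal sketch of how the same Tukey HSD comparisons could be produced with statsmodels; the data frame contents and the column names 'attractiveness' and 'age_group' are hypothetical, since the raw data are not shown in this report.

import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ratings, three per age group, for illustration only.
df = pd.DataFrame({
    'attractiveness': [3, 4, 2, 2, 3, 2, 1, 2, 3, 2, 2, 1],
    'age_group': (['18-25'] * 3 + ['26-35'] * 3 +
                  ['36-45'] * 3 + ['45 above'] * 3),
})

result = pairwise_tukeyhsd(endog=df['attractiveness'],
                           groups=df['age_group'], alpha=0.05)
print(result)  # one row per pair: mean diff, adjusted p, CI, reject flag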

Table 5

This part of the SPSS output (shown above) summarizes the results of the multiple
comparisons procedure. Often there are several subset columns in this section of
the output. The means listed in each subset column are not statistically reliably
different from each other. In this example, all four means are listed in a single
subset column, so none of the means are reliably different from any of the other
means. That is not to say that the means are not different from each other, but only
that we failed to observe a difference between any of the means. This is consistent
with the fact that we failed to reject the null hypothesis of the ANOVA.
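Table 3, the robust Welch test, is not walked through above. For reference, the sketch below computes Welch's F (which does not assume equal variances) from per-group summary statistics using the standard Welch formulas. The group means come from Table 5, but the variances passed in are assumed for illustration, since the report does not list them.

import numpy as np
from scipy import stats

def welch_f(means, variances, sizes):
    """Welch's robust test of equality of means from group summaries."""
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    n = np.asarray(sizes, dtype=float)
    k = len(m)
    w = n / v                                   # precision weights
    grand = np.sum(w * m) / np.sum(w)           # weighted grand mean
    num = np.sum(w * (m - grand) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    f = num / den
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
    return f, df1, df2, stats.f.sf(f, df1, df2)

# Means from Table 5; variances assumed for illustration.
f, df1, df2, p = welch_f([2.00, 2.22, 2.41, 3.00],
                         [0.7, 0.9, 0.8, 1.1],
                         [10, 9, 27, 4])
print(f"Welch F = {f:.3f}, df1 = {df1}, df2 = {df2:.3f}, p = {p:.3f}")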

T-Test (For Gender)


Table 1
Group Statistics

Attractiveness of offers
gender    N     Mean    Std. Deviation    Std. Error Mean
Male      46    2.28    1.068             .157
Female    4     3.00    .816              .408

Table 2
Independent Samples Test

Attractiveness of offers

Levene's Test for Equality of Variances:  F = 2.550, Sig. = .117

t-test for Equality of Means
                                             Sig.          Mean          Std. Error    95% CI of the Difference
                               t      df     (2-tailed)    Difference    Difference    Lower       Upper
Equal variances assumed      -1.306   48     .198          -.717         .549          -1.822      .387
Equal variances not assumed  -1.640   3.953  .177          -.717         .438          -1.938      .503

INTERPRETATION
TABLE 1
Table 1 provides useful descriptive statistics for the two groups being compared, including the mean and standard deviation of each group.

TABLE 2
Because Levene's test is not significant (F = 2.550, p = .117 > .05), we can assume equal variances and read the first row of the table. The t test is not significant, t(48) = -1.306, p = .198 > .05, and the 95% confidence interval for the mean difference (-1.822 to .387) includes zero, so there is insufficient evidence of a difference between male and female respondents in their attractiveness ratings.
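A minimal sketch of the same independent-samples t test in Python; the two rating lists are hypothetical, since only the summary statistics above are available.

from scipy import stats

male = [2, 3, 1, 2, 4, 2, 3, 1, 2, 3]   # illustrative values only
female = [3, 4, 2, 3]

# First row of Table 2: pooled-variance (equal variances assumed) test.
t_eq, p_eq = stats.ttest_ind(male, female, equal_var=True)
# Second row: Welch's correction (equal variances not assumed).
t_w, p_w = stats.ttest_ind(male, female, equal_var=False)
print(f"pooled: t = {t_eq:.3f}, p = {p_eq:.3f}")
print(f"Welch:  t = {t_w:.3f}, p = {p_w:.3f}")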

One-way ANOVA (For Occupation)


Table 1
Test of Homogeneity of Variances
Attractiveness of offers

Levene Statistic    df1    df2    Sig.
.263                2      47     .770

Table 2
ANOVA
Attractiveness of offers

                  Sum of Squares    df    Mean Square    F       Sig.
Between Groups    .071              2     .035           .030    .970
Within Groups     55.149            47    1.173
Total             55.220            49

Table 3

Robust Tests of Equality of Means
Attractiveness of offers

         Statistic(a)    df1    df2       Sig.
Welch    .028            2      24.753    .973

a. Asymptotically F distributed.

Table 4

Post Hoc Tests (occupation)

Multiple Comparisons
Attractiveness of offers
Tukey HSD

                                   Mean Difference                          95% Confidence Interval
(I) occupation   (J) occupation    (I-J)       Std. Error   Sig.      Lower Bound    Upper Bound
Student          Service           .102        .424         .968      -.92           1.13
                 Business          .027        .353         .997      -.83           .88
Service          Student           -.102       .424         .968      -1.13          .92
                 Business          -.075       .397         .980      -1.04          .89
Business         Student           -.027       .353         .997      -.88           .83
                 Service           .075        .397         .980      -.89           1.04

Table 5
Homogeneous Subsets

Attractiveness of offers
Tukey HSD(a,b)

                       Subset for alpha = 0.05
occupation    N        1
Service       11       2.27
Business      23       2.35
Student       16       2.38
Sig.                   .963

Means for groups in homogeneous subsets are displayed.

Interpretation
Table 1
The Test of Homogeneity of Variances output tests H0: σ²(service) = σ²(business) = σ²(student). Equal variances across groups is an important assumption made by the analysis of variance. To interpret this output, look at the column labeled Sig. This is the p value. If the p value is less than or equal to your α level for this test, then you can reject the H0 that the variances are equal. If the p value is greater than the α level, then we fail to reject H0, which increases our confidence that the variances are equal and that the homogeneity of variance assumption has been met. Here the p value is .770. Because the p value is greater than the α level, we fail to reject H0, implying that there is little evidence that the variances are not equal and that the homogeneity of variance assumption may be reasonably satisfied.

Table 2
The ANOVA output gives us the analysis of variance summary table. There are six
columns in the output:
Column / Description

Unlabeled (Source of variance)

The first column describes each row of the ANOVA summary table. It tells us that the first row corresponds to the between-groups estimate of variance (the estimate that measures the effect and error). The between-groups estimate of variance forms the numerator of the F ratio. The second row corresponds to the within-groups estimate of variance (the estimate of error). The within-groups estimate of variance forms the denominator of the F ratio. The final row describes the total variability in the data.

Sum of Squares

The Sum of Squares column gives the sum of squares for each of the estimates of variance. The sum of squares corresponds to the numerator of the variance ratio.

df

The third column gives the degrees of freedom for each estimate of variance. The degrees of freedom for the between-groups estimate of variance is one less than the number of levels of the occupation variable. In this example there are three levels of occupation, so there are 3 - 1 = 2 degrees of freedom for the between-groups estimate of variance.
The degrees of freedom for the within-groups estimate of variance is calculated by subtracting one from the number of people in each condition/category and summing across the conditions/categories.

Mean Square

The fourth column gives the estimates of variance (the mean squares). Each mean square is calculated by dividing the sum of squares by its degrees of freedom.
MS(between groups) = SS(between groups) / df(between groups)
MS(within groups) = SS(within groups) / df(within groups)

F

The fifth column gives the F ratio. It is calculated by dividing the mean square between groups by the mean square within groups.
F = MS(between groups) / MS(within groups)

Sig.

The final column gives the significance of the F ratio. This is the p value. If the p value is less than or equal to your α level, then you can reject H0 that all the means are equal. In this example, the p value is .970, which is greater than the α level, so we fail to reject H0. That is, there is insufficient evidence to claim that any of the means differ from each other.

We would write the F ratio as: The one-way, between-subjects analysis of variance failed to reveal a reliable effect of occupation on attractiveness of offers, F(2, 47) = 0.030, p = .970, MSerror = 1.173, α = .05.
The 2 is the between-groups degrees of freedom, 47 is the within-groups degrees of freedom, 0.030 is the F ratio from the F column, .970 is the value in the Sig. column (the p value), and 1.173 is the within-groups mean square estimate of variance.
Decide whether to reject H0: If the p value associated with the F ratio is less than or equal to the α level, then you can reject the null hypothesis that all the means are equal. In this case, the p value equals .970, which is greater than the α level (.05), so we fail to reject H0.
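Given the raw per-group ratings, the entire one-way ANOVA can also be reproduced with a single scipy call; the three lists below are hypothetical stand-ins for the survey responses.

from scipy import stats

# Hypothetical attractiveness ratings by occupation.
student = [2, 3, 2, 3, 2, 2]
service = [2, 3, 1, 3, 2]
business = [3, 2, 2, 3, 2]

f, p = stats.f_oneway(student, service, business)
print(f"F = {f:.3f}, p = {p:.3f}")  # compare with Table 2's F and Sig. columns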
When the F ratio is statistically significant, we need to look at the multiple
comparisons output. Even though our F ratio is not statistically significant, we will
look at the multiple comparisons to see how they are interpreted.
Table 4

The Multiple Comparisons output gives the results of the Post-Hoc tests that you
requested. In this example, I requested Tukey multiple comparisons, so the output
reflects that choice. Different people have different opinions about when to look at
the multiple comparisons output. One of the leading opinions is that the multiple
comparison output is only meaningful if the overall F ratio is statistically
significant. In this example, it is not statistically significant, so technically I should
not check the multiple comparisons output.
The output includes a separate row for each level of the independent variable. In this example, there are three rows corresponding to the three occupations. Let's consider the first row, the one with (I) occupation equal to Student. There are two sub-rows within this row. Each sub-row corresponds to one of the other occupations. Thus, there are two comparisons described in this row:
Comparison              H0                               H1
student vs service      H0: μ(student) = μ(service)      H1: μ(student) ≠ μ(service)
student vs business     H0: μ(student) = μ(business)     H1: μ(student) ≠ μ(business)

The second column in the output gives the difference between the means. In this example, the difference between the mean attractiveness rating of students and that of service workers is 0.102. The third column gives the standard error of the difference. The fourth column is the p value for the multiple comparison. In this example, the p value for comparing the ratings of students with those of service workers is .968, meaning that it is unlikely that these means are different (as you would expect, given that the difference (0.102) is small). If the p value is less than or equal to the α level, then you can reject the corresponding H0. In this example, the p value is .968, which is larger than the α level of .05, so we fail to reject the H0 that the mean rating of students equals the mean rating of service workers. The final two columns give the 95% confidence interval for the difference.

Table 5

This part of the SPSS output (shown above) summarizes the results of the multiple
comparisons procedure. Often there are several subset columns in this section of
the output. The means listed in each subset column are not statistically reliably
different from each other. In this example, all three means are listed in a single
subset column, so none of the means are reliably different from any of the other
means. That is not to say that the means are not different from each other, but only that we failed to observe a difference between the means. This is consistent with the fact that we failed to reject the null hypothesis of the ANOVA.
