
Lecture 9

Survey Research & Design in Psychology

James Neill, 2011

Overview:
- Summary of MLR I
- Partial correlations
- Residual analysis
- Interactions
- Analysis of change

Readings - MLR

1. Howell (2009). Correlation & regression [Ch 9]

2. Howell (2009). Multiple regression [Ch 15; not 15.14 Logistic regression]

3. Tabachnick & Fidell (2001). Standard & hierarchical regression in SPSS (includes example write-ups) [alternative chapter from eReserve]

Summary of MLR I

Check assumptions: levels of measurement (LOM), sample size (N), normality, linearity, homoscedasticity, collinearity, multivariate outliers (MVOs), residuals.

Methods of predictor entry: stepwise, forward, backward.

Partial correlation: the r between X and Y after controlling for a third variable.

Example research questions: Does years of marriage (IV1) predict marital satisfaction (DV) after number of children (IV2) is controlled for? Does time management (IV1) predict university student satisfaction (DV) after general life satisfaction (IV2) is controlled for?

When interpreting MLR coefficients, compare the 0-order and partial correlations for each IV in each model (or draw a Venn diagram). Partial correlations will be equal to or smaller than the 0-order correlations. If a partial correlation is the same as the 0-order correlation, the IV operates independently on the DV.

To the extent that a partial correlation is smaller than the 0-order correlation, the IV's explanation of the DV is shared with other IVs. An IV may have a significant 0-order correlation with the DV but a non-significant partial correlation; this indicates that the unique variance explained by the IV is not significant.


The sr² values indicate the percentage of variance in the DV which is uniquely explained by each IV. In PASW, the sr values are labelled "part"; you need to square these to get sr². For more info, see Allen and Bennett (2008), p. 182.
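As an illustrative sketch (not part of the lecture materials), zero-order, semi-partial ("part"), and partial correlations can be computed with NumPy on simulated data; all variable names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x2 = rng.normal(size=n)
x1 = 0.5 * x2 + rng.normal(size=n)            # IVs correlated with each other
y = 0.4 * x1 + 0.3 * x2 + rng.normal(size=n)  # DV depends on both IVs

def residualise(a, b):
    """Return the part of a that is not linearly predictable from b."""
    design = np.column_stack([np.ones_like(b), b])
    coef, *_ = np.linalg.lstsq(design, a, rcond=None)
    return a - design @ coef

r0 = np.corrcoef(y, x1)[0, 1]                    # zero-order r (X1 with Y)
sr = np.corrcoef(y, residualise(x1, x2))[0, 1]   # semi-partial ("part") r
pr = np.corrcoef(residualise(y, x2),
                 residualise(x1, x2))[0, 1]      # partial r
print(f"zero-order = {r0:.3f}, part (sr) = {sr:.3f}, "
      f"sr^2 = {sr**2:.3f}, partial = {pr:.3f}")
```

Squaring `sr` gives the proportion of DV variance uniquely explained by X1, matching the "part" column that PASW reports.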


In the Linear Regression - Statistics dialog box, check "Part and partial correlations".


Coefficients: correlations (Model 1)

Predictor            Zero-order   Partial
Worry                -.521        -.460
Ignore the Problem   -.325        -.178


[Venn diagram: shared and unique variance in the DV explained by X1 and X2, annotated with the correlations from the example (.52, .46, .34, .32, .18)]

Residual analysis


Three key assumptions can be tested using plots of residuals:
1. Linearity: IVs are linearly related to the DV
2. Normality of residuals
3. Equal variances (homoscedasticity)


Assumptions about residuals:
- Random noise
- Sometimes positive, sometimes negative, but on average 0
- Normally distributed about 0


The Normal P-P (Probability) Plot of Regression Standardized Residuals can be used to assess the assumption of normally distributed residuals. If the points cluster reasonably tightly along the diagonal line (as they do here), the residuals are normally distributed. Substantial deviations from the diagonal may be cause for concern.

Allen & Bennett, 2008, p. 183


The Scatterplot of standardised residuals against standardised predicted values can be used to assess the assumptions of normality, linearity and homoscedasticity of residuals. The absence of any clear patterns in the spread of points indicates that these assumptions are met.

Allen & Bennett, 2008, p. 183
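The same residual checks can be sketched outside PASW. The following snippet (illustrative only, simulated data) fits an OLS model, then inspects the standardised residuals: their mean should be approximately 0, and a Shapiro-Wilk test screens for non-normality:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 IVs
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)     # well-behaved errors

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
resid = y - pred
z_resid = (resid - resid.mean()) / resid.std(ddof=1)        # standardised residuals

print("mean of residuals:", round(resid.mean(), 6))          # ~0 by construction of OLS
w, p = stats.shapiro(z_resid)                                # normality of residuals
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")
# For the visual checks described above, plot z_resid against the
# standardised predicted values (scatter) and against a normal Q-Q/P-P line.
```

A large Shapiro-Wilk p-value gives no evidence against normality, consistent with points hugging the diagonal on the P-P plot.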

Standard error formulae (which are used for confidence intervals and significance tests) work when residuals are well-behaved. If the residuals don't meet assumptions, these formulae tend to underestimate coefficient standard errors, giving overly optimistic p-values and too-narrow CIs (i.e., more false positives).

Interactions


Additivity refers to the assumption that the IVs act independently, i.e., they do not interact. However, there may also be interaction effects: the magnitude of the effect of one IV on the DV varies as a function of a second IV. This is also known as a moderation effect.


Some drugs interact with each other to reduce or enhance each other's effects, e.g.:
- Pseudoephedrine → Arousal
- Caffeine → Arousal
- Pseudoephedrine × Caffeine → Arousal

Physical exercise in natural environments may provide multiplicative benefits in reducing stress, e.g.:
- Natural environment → Stress
- Physical exercise → Stress
- Natural environment × Physical exercise → Stress

Model interactions by creating cross-product term IVs, e.g.:
- Pseudoephedrine
- Caffeine
- Pseudoephedrine × Caffeine (cross-product)

In PASW syntax:

Compute PseudoCaffeine = Pseudo*Caffeine.

The model with an interaction term is:

Y = b1X1 + b2X2 + b12X1X2 + a + e

where X1X2 is the cross-product of the two IVs. b12 can be interpreted as the amount of change in the slope of the regression of Y on X1 when X2 changes by one unit.

Conduct hierarchical MLR:
- Step 1: Pseudoephedrine, Caffeine
- Step 2: Pseudoephedrine × Caffeine (cross-product)

Examine the change in R² to see whether the interaction term explains additional variance above and beyond the direct effects of pseudoephedrine and caffeine.
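A minimal sketch of this hierarchical procedure in Python (simulated data; the variable names are hypothetical stand-ins for the pseudoephedrine/caffeine example):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
pseudo = rng.integers(0, 2, n).astype(float)   # 0/1 dose indicator
caff = rng.integers(0, 2, n).astype(float)     # 0/1 dose indicator
# Built-in synergy: the drugs together add extra arousal (the 0.8 term)
arousal = 1 + 0.5*pseudo + 0.5*caff + 0.8*pseudo*caff + rng.normal(size=n)

def r_squared(X, y):
    """R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared(np.column_stack([pseudo, caff]), arousal)            # main effects
r2_step2 = r_squared(np.column_stack([pseudo, caff, pseudo*caff]), arousal)  # + interaction
print(f"R2 step 1 = {r2_step1:.3f}, step 2 = {r2_step2:.3f}, "
      f"change = {r2_step2 - r2_step1:.3f}")
```

A meaningful R² change at Step 2 indicates the interaction explains variance beyond the two main effects.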


Possible effects of pseudoephedrine and caffeine on arousal:
- None
- Pseudoephedrine only (increase/decrease)
- Caffeine only (increase/decrease)
- Pseudoephedrine + Caffeine (additive increase/decrease)
- Pseudoephedrine × Caffeine (synergistic increase/decrease)
- Pseudoephedrine × Caffeine (antagonistic increase/decrease)

Cross-product interaction terms may be highly correlated (multicollinear) with the corresponding simple IVs, creating problems for assessing the relative importance of main effects and interaction effects. An alternative approach is to run separate regressions for each level of the interacting variable.

Analysis of change

Example research question: In group-based mental health interventions, does the quality of social support from group members (IV1) and group leaders (IV2) explain changes in participants' mental health between the beginning and end of the intervention (DV)?


Hierarchical MLR:

DV = mental health after the intervention

Step 1:
- IV1 = mental health before the intervention

Step 2:
- IV2 = support from group members
- IV3 = support from group leader


Strategy: use hierarchical MLR to partial out pre-intervention individual differences from the DV, leaving only the variance of the changes in the DV between pre- and post-intervention for analysis in Step 2.
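A sketch of this analysis-of-change strategy on simulated data (all names hypothetical): baseline mental health enters at Step 1, support at Step 2, and the R² change reflects how much of the pre-to-post change the predictor explains:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 250
pre = rng.normal(size=n)                        # mental health before intervention
support = rng.normal(size=n)                    # support from group members
post = 0.7*pre + 0.4*support + rng.normal(scale=0.5, size=n)  # mental health after

def r2(X, y):
    """R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - ((y - X @ b)**2).sum() / ((y - y.mean())**2).sum()

step1 = r2(pre[:, None], post)                        # Step 1: baseline only
step2 = r2(np.column_stack([pre, support]), post)     # Step 2: baseline + support
print(f"R2 change attributable to support = {step2 - step1:.3f}")
```

With the baseline partialled out at Step 1, the Step 2 increment is attributable to the support predictor rather than to pre-existing individual differences.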


Results of interest:
- Change in R²: how much variance in change scores is explained by the predictors
- Regression coefficients for predictors in Step 2

Partial correlation

Unique variance explained by IVs; calculate and report sr².

Residual analysis

A way to test key assumptions.


Interactions

A way to model (rather than ignore) interactions between IVs.

Analysis of change

Use hierarchical MLR to partial out baseline scores in Step 1 in order to use IVs in Step 2 to predict changes over time.


Student questions


ANOVA I

Analysing differences: t-tests (one-sample, independent, paired)

Howell (2010):
- Ch 3: The Normal Distribution
- Ch 4: Sampling Distributions and Hypothesis Testing
- Ch 7: Hypothesis Tests Applied to Means


Analysing differences

- Correlations vs. differences
- Which difference test?
- Parametric vs. non-parametric

Correlation and regression techniques reflect the strength of association. Tests of differences reflect differences in the central tendency of variables between groups and measures.


In MLR we see the world as made of covariation. Everywhere we look, we see relationships. In ANOVA we see the world as made of differences. Everywhere we look we see differences.


LR/MLR, e.g.: What is the relationship between gender and height in humans?

t-test/ANOVA, e.g.: What is the difference between the heights of human males and females?


How many groups? (i.e., categories of the IV)
- 1 group = one-sample t-test
- 2 groups (independent or dependent) = independent samples or paired samples t-test
- More than 2 groups = ANOVA models

Parametric statistics: inferential tests that assume certain characteristics are true of an underlying population, especially the shape of its distribution.

Non-parametric statistics: inferential tests that make few or no assumptions about the population from which observations were drawn (distribution-free tests).

There is generally at least one non-parametric equivalent test for each type of parametric test. Non-parametric tests are generally used when assumptions about the underlying population are questionable (e.g., non-normality).


Parametric statistics are commonly used for normally distributed interval or ratio dependent variables. Non-parametric statistics can be used to analyse DVs that are non-normal or are nominal or ordinal. Non-parametric statistics are less powerful than parametric tests.

Consider non-parametric tests when (any of the following):
- Assumptions, like normality, have been violated
- Small number of observations (N)
- DVs have nominal or ordinal levels of measurement


Some commonly used parametric and non-parametric tests:

Parametric             Non-parametric                       Purpose
t-test (independent)   Mann-Whitney U; Wilcoxon rank-sum    Compares two independent samples
t-test (paired)        Wilcoxon matched-pairs signed-rank   Compares two related samples
One-way ANOVA          Kruskal-Wallis                       Compares three or more groups
Two-way ANOVA          Friedman                             Compares groups classified by two different factors
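The parametric/non-parametric pairings above can be illustrated with SciPy (a sketch on simulated data, not the lecture's PASW output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two independent samples differing in location
a = rng.normal(loc=0.0, size=40)
b = rng.normal(loc=0.8, size=40)
t, p_t = stats.ttest_ind(a, b)             # parametric: independent-samples t-test
u, p_u = stats.mannwhitneyu(a, b)          # non-parametric equivalent
print(f"t-test p = {p_t:.4f}; Mann-Whitney U p = {p_u:.4f}")

# Two related samples (same participants measured twice)
pre = rng.normal(size=30)
post = pre + 0.5 + rng.normal(scale=0.5, size=30)
t2, p_t2 = stats.ttest_rel(pre, post)      # parametric: paired t-test
w, p_w = stats.wilcoxon(pre, post)         # non-parametric equivalent
print(f"paired t p = {p_t2:.4f}; Wilcoxon signed-rank p = {p_w:.4f}")
```

With roughly normal data the two tests in each pair usually agree; the non-parametric version trades a little power for robustness.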

t-tests:
- One-sample t-tests
- Independent samples t-tests
- Paired samples t-tests

A t-test or ANOVA is used to determine whether a sample of scores is from the same population as another sample of scores. These are inferential tools for examining differences between group means: is the difference between two sample means real or due to chance?


Types of t-test:
- One-sample: one group of participants, compared with a fixed, pre-existing value (e.g., population norms)
- Independent: compares mean scores on the same variable across different populations (groups)
- Paired: same participants, with repeated measures

In general, t-tests and ANOVAs are robust to violation of assumptions, particularly with large cell sizes, but don't be complacent.


Use of t in t-tests

t reflects the ratio of between-group variance to within-group variance. Is t large enough that it is unlikely the two samples have come from the same population? Decision: is t larger than the critical value for t? (See t tables; the critical value depends on α and N.)
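The table lookup can be sketched with SciPy's t distribution (the obtained t here is a made-up value for illustration):

```python
from scipy import stats

alpha, df = 0.05, 28          # significance level and degrees of freedom
t_obtained = 2.40             # hypothetical obtained t from a study

t_critical = stats.t.ppf(1 - alpha/2, df)      # two-tailed upper critical value
print(f"critical t({df}) = {t_critical:.3f}")  # ≈ 2.048, as in a printed t table
print("reject H0" if abs(t_obtained) > t_critical else "retain H0")
```

This reproduces exactly what a printed t table gives: for df = 28 and two-tailed α = .05, any |t| above about 2.05 is deemed unlikely under the null.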


[Figure: normal distribution showing 68%, 95%, and 99.7% of scores falling within 1, 2, and 3 standard deviations of the mean]

A two-tailed test rejects the null hypothesis if the obtained t-value is extreme in either direction. A one-tailed test rejects the null hypothesis if the obtained t-value is extreme in one direction (you choose: too high or too low). One-tailed tests are more powerful than two-tailed tests (all of α is placed in one tail), but they can only identify differences in the chosen direction.


One-sample t-test: compare one group (a sample) with a fixed, pre-existing value (e.g., population norms).

Example: Do uni students sleep less than the recommended amount? Given a sample of N = 190 uni students who sleep M = 7.5 hrs/day (SD = 1.5), does this differ significantly from 8 hrs/day (α = .05)?
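Working the example through from its summary statistics (a sketch; SciPy supplies the two-tailed p-value):

```python
import numpy as np
from scipy import stats

# Summary statistics from the example above
n, m, sd, mu = 190, 7.5, 1.5, 8.0

t = (m - mu) / (sd / np.sqrt(n))             # one-sample t from summary stats
p = 2 * stats.t.sf(abs(t), df=n - 1)         # two-tailed p-value
print(f"t({n - 1}) = {t:.2f}, p = {p:.6f}")  # t ≈ -4.59: significantly less than 8
```

The difference of half an hour is small, but with N = 190 the standard error is tiny, so the test easily rejects the null at α = .05.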


Independent samples t-test: compares mean scores on the same variable across different populations (groups), e.g.:
- Do Americans vs. non-Americans differ in their approval of Barack Obama?
- Do males and females differ in the amount of sleep they get?


Assumptions (independent samples t-test):
- LOM: IV is ordinal/categorical; DV is interval/ratio
- Homogeneity of variance: if variances are unequal (Levene's test), an adjustment is made
- Normality: t-tests are robust to modest departures from normality; otherwise consider a Mann-Whitney U test
- Independence of observations (one participant's score is not dependent on any other participant's score)
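A sketch of this workflow in SciPy on simulated data (the group means and SDs are hypothetical): Levene's test decides whether the unequal-variances (Welch) adjustment is needed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
males = rng.normal(loc=7.2, scale=1.0, size=60)    # hypothetical sleep hours
females = rng.normal(loc=7.6, scale=1.4, size=60)  # hypothetical sleep hours

lev_stat, lev_p = stats.levene(males, females)     # homogeneity-of-variance check
equal_var = lev_p > .05                            # if violated, use Welch's t
t, p = stats.ttest_ind(males, females, equal_var=equal_var)
print(f"Levene p = {lev_p:.3f}; t = {t:.2f}, p = {p:.3f}")
```

This mirrors the two rows of SPSS output below: "equal variances assumed" vs. "equal variances not assumed", with Levene's Sig. deciding which row to read.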

Example: immediate recall (number correct, wave 1) by gender

Group Statistics
Gender    N      Mean   SD      SE Mean
Male      1189   7.34   2.109   .061
Female    1330   8.24   2.252   .062

Independent Samples Test
Levene's test for equality of variances: F = 4.784, Sig. = .029
                              t         df         95% CI of the difference
Equal variances assumed       -10.268   2517       [-1.067, -.725]
Equal variances not assumed   -10.306   2511.570   [-1.066, -.725]

Group Statistics (SSR)
Type of school   N     Mean     SD      SE Mean
Single sex       323   4.9995   .7565   .0421
Co-educational   168   4.9455   .7158   .0552

Independent Samples Test
Levene's test: Sig. = .897
                              t      df        95% CI of the difference
Equal variances assumed       .764   489       [-.0848, .1929]
Equal variances not assumed   .778   355.220   [-.0826, .1906]

Group Statistics (OSR)
Type of school   N     Mean     SD       SE Mean
Single sex       327   4.5327   1.0627   .0588
Co-educational   172   3.9827   1.1543   .0880

Independent Samples Test
Levene's test: Sig. = .897
                              t      df        95% CI of the difference
Equal variances assumed       .764   489       [-.0848, .1929]
Equal variances not assumed   .778   355.220   [-.0826, .1906]

Comparison between the means of 2 independent groups = independent samples t-test (e.g., what is the difference in educational satisfaction between male and female students?)

Comparison between the means of 3+ independent groups = 1-way ANOVA (e.g., what is the difference in educational satisfaction between students enrolled in four different faculties?)

Paired samples t-test: same participants, with repeated measures; data are sampled within subjects, e.g.:
- Pre- vs. post-treatment ratings
- Different factors, e.g., voters' approval ratings of candidates X and Y

Assumptions (paired samples t-test):
- LOM: the IV is two measures from the same participants (within subjects): a variable measured on two occasions, or two different variables measured on the same occasion
- Normal distribution of difference scores (robust to violation with larger samples)
- Independence of observations (one participant's score is not dependent on another's score)
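A minimal paired-samples sketch in SciPy (simulated pre/post scores, names hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
pre = rng.normal(loc=50, scale=10, size=25)        # hypothetical pretest scores
post = pre + rng.normal(loc=4, scale=6, size=25)   # same participants at posttest

t, p = stats.ttest_rel(pre, post)                  # paired samples t-test
diff = post - pre
print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.4f}, mean change = {diff.mean():.2f}")
```

Internally this is just a one-sample t-test on the difference scores, which is why the normality assumption applies to the differences rather than to each measure separately.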

There was no significant difference between pretest and posttest scores (t(19) = 1.78, p = .09).

Paired Samples Statistics
       Mean     N     SD       SE Mean
SSR    4.9787   951   .7560    .0245
OSR    4.2498   951   1.1086   .0360

Paired Samples Test (Pair 1: SSR - OSR)
Mean difference = .7289, 95% CI [.6675, .7903], t(950) = 23.305

Comparison between the means of 2 within-subject variables = paired samples t-test. Comparison between the means of 3+ within-subject variables = 1-way repeated measures ANOVA (e.g., what is the difference in campus, social, and education satisfaction?)


Summary (Analysing differences)

Non-parametric and parametric tests can be used for examining differences between the central tendency of two or more variables. Develop a conceptualisation of when to use each of the parametric tests, from the one-sample t-test through to MANOVA (e.g., via a decision chart).

Summary (Analysing differences): t-tests
- One-sample
- Independent samples
- Paired samples

1. 1-way ANOVA 2. 1-way repeated measures ANOVA 3. Factorial ANOVA 4. Mixed design ANOVA 5. ANCOVA 6. MANOVA 7. Repeated measures MANOVA


References

Allen, P., & Bennett, K. (2008). SPSS for the health and behavioural sciences. South Melbourne, Australia: Thomson.

Francis, G. (2007). Introduction to SPSS for Windows: v. 15.0 and 14.0 with notes for Studentware (5th ed.). Sydney: Pearson Education.

Howell, D. C. (2010). Statistical methods for psychology (7th ed.). Belmont, CA: Wadsworth.


This presentation was made using OpenOffice Impress, free and open source software.

http://www.openoffice.org/product/impress.html

