
Journal of School Psychology 49 (2011) 157–174

Contents lists available at ScienceDirect

Journal of School Psychology


journal homepage: www.elsevier.com/locate/jschpsyc

Delaware School Climate Survey–Student: Its factor structure, concurrent validity, and reliability

George G. Bear, Clare Gaskins, Jessica Blank, Fang Fang Chen
University of Delaware, DE, United States

ARTICLE INFO

Article history:
Received 29 May 2009
Received in revised form 15 January 2011
Accepted 21 January 2011
Keywords:
School climate
Program evaluation
Student perceptions
Schoolwide positive behavior supports

ABSTRACT
The Delaware School Climate Survey–Student (DSCS-S) was developed to provide schools with a brief and psychometrically sound student survey for assessing school climate, particularly the dimensions of social support and structure. Confirmatory factor analyses, conducted on a sample of 11,780 students in 85 schools, showed that a bifactor model consisting of five specific factors and one general factor (School Climate) best represented the data. Those five factors are represented in five subscales of the DSCS-S: Teacher–Student Relations, Student–Student Relations, Fairness of Rules, Liking of School, and School Safety. The factor structure was shown to be stable across grade levels (i.e., elementary, middle, and high school), racial–ethnic groups (i.e., Caucasian, African American, and Hispanic), and gender. As evidence of the survey's concurrent validity, scores for each of the five subscales and the total scale correlated moderately, across groups and at the school level, with academic achievement and suspensions and expulsions.
© 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

1. Introduction
During the past several decades, a rapidly growing number of schools have implemented schoolwide
programs for preventing behavior problems and promoting mental health. These include universal-level
prevention and promotion programs for social and emotional learning (Durlak, Weissberg, Dymnicki,

Corresponding author at: School of Education, Willard Hall Education Building, University of Delaware, Newark, DE 19709,
United States. Fax: +1 302 831 4110.
E-mail address: gbear@udel.edu (G.G. Bear).
ACTION EDITOR: Sara Bolt.
0022-4405/$ – see front matter © 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
doi:10.1016/j.jsp.2011.01.001


Taylor, & Schellinger, 2011; Zins & Elias, 2006) and character education (Berkowitz & Schwartz, 2006),
School-Wide Positive Behavior Support programs (SWPBS; Sailor, Dunlap, Sugai, & Horner, 2009), and
universal programs that focus on preventing more specific behavior problems, such as bullying
(Merrell, Gueldner, Ross, & Isava, 2008; Swearer, Espelage, Vaillancourt, & Hymel, 2010) and school
violence (American Psychological Association Zero Tolerance Task Force, 2008; Jimerson & Furlong,
2006). What many of these programs have in common is the aim of promoting a positive school
climate. Although a wide range of definitions of school climate exist, most refer to positive social
relationships. For example, Haynes, Emmons, and Ben-Avie (1997) define school climate as "the quality
and consistency of interpersonal interactions within the school community that influence children's
cognitive, social, and psychological development" (p. 322). Recognizing the importance of interpersonal
relationships but placing additional emphasis on safety, Cohen, McCabe, Michelli, and Pickeral
(2009) recently defined school climate as "the quality and character of school life, that includes
norms, values, and expectations that support people feeling socially, emotionally, and physically safe"
(p. 182).
School climate has been linked to a wide range of academic, behavioral, and socio-emotional outcomes
(Anderson, 1982; Haynes et al., 1997), including academic achievement (Brand, Felner, Shim, Seitsinger, &
Dumas, 2003; Griffith, 1999); student academic, social, and personal attitudes and motives (Battistich,
Solomon, Kim, Watson, & Schaps, 1995); attendance and school avoidance (Brand et al., 2003; Welsh,
2000); student delinquency (Gottfredson, Gottfredson, Payne, & Gottfredson, 2005; Welsh, 2000);
attitudes and use of illegal substances (Brand et al., 2003); bullying (Nansel et al., 2001); victimization
(Gottfredson et al., 2005; Welsh, 2000); depression and self-esteem (Brand et al., 2003; Way, Reddy, &
Rhodes, 2007); and general behavior problems (Battistich & Horn, 1997; Kuperminc, Leadbeater, & Blatt,
2001; Welsh, 2000). Although a positive school climate is a goal of most schoolwide programs for
preventing behavior problems, school climate is seldom evaluated in studies of program effectiveness.
The most common method of evaluating the effectiveness of programs for preventing behavior
problems in schools has been the use of teacher reports of student behavior (Wilson & Lipsey, 2007).
Likewise, in studies of SWPBS, office disciplinary referrals (ODRs) have been the most common outcome
measured (Horner & Sugai, 2007). Both teacher ratings and ODRs have their shortcomings. A major
shortcoming of teacher reports is reporter bias. That is, in rating student behavior, teachers in
intervention schools often are well aware that the interventions implemented are expected to improve
student behavior and that their negative ratings are likely to cast a negative light on their school's
effectiveness and, in some cases, their own effectiveness. This bias may largely explain why intervention
effect sizes tend to be larger when teacher reports, rather than student reports, are used in studies of
program effectiveness (Wilson & Lipsey, 2007). ODRs also have multiple shortcomings (Morrison,
Redding, Fisher, & Peterson, 2006). Perhaps chief among them is that decreases in ODRs may occur
without improvements in student behavior. Instead of improvement in behavior, reduced ODRs may
simply reflect normal fluctuations in ODRs from year to year and changes in referral policies and
practices (Wright & Dusek, 1998). To be sure, both teacher ratings and ODRs also have their advantages,
especially when used as part of a multimethod system of assessing program needs and effectiveness
(Irvin, Tobin, Sprague, Sugai, & Vincent, 2004; McIntosh, Frank, & Spaulding, 2010). However, in
addition to the disadvantages noted above, they do not assess, nor are they intended to assess, school
climate and student perceptions of their schools.
The primary purpose of the present study was to develop a brief and psychometrically sound
instrument for assessing student perceptions of school climate, the Delaware School Climate Survey–
Student version (DSCS-S). Initiated and supported by Delaware's SWPBS project, the DSCS-S was
intended to complement existing methods and measures that many schools use to indicate a school's
effectiveness, such as state-wide achievement tests, suspension/expulsion rates, teacher ratings of student
behavior, and ODRs. Of particular focus was the development of a valid and reliable self-report survey that
schools could use to assess student perceptions of those aspects of school climate related to the aims of two
program initiatives in Delaware: SWPBS, which is now implemented in approximately 60% of schools in
Delaware, and bullying prevention programs, which are mandated by state law and thus implemented to one
degree or another in all schools. Those program initiatives focus on improving relations among students and
between teachers and students, establishing clear and fair expectations and rules, reducing student conduct
problems, and increasing school safety.

1.1. Theoretical roots of the DSCS-S and supporting research


The development of the DSCS-S was guided by two theoretical frameworks: (a) authoritative discipline
theory (Baumrind, 1971, 1996; Bear, 2005; Brophy, 1996; Gregory & Cornell, 2009) and (b) Stockard and
Mayberry's (1992) theoretical framework of school climate.
Supported by research on childrearing (Baumrind, 1971, 1996; Lamborn, Mounts, Steinberg, &
Dornbusch, 1991) and research on school discipline and school climate (Brophy, 1996; Gregory et al.,
2010), authoritative discipline theory asserts that the most effective style of discipline, authoritative
discipline, is comprised of a balance of two broad components. Those two components are responsiveness
and demandingness (Baumrind, 1996), which also are called support and structure (Gregory & Cornell,
2009; Gregory et al., 2010). Responsiveness, or social support, refers to the extent to which adults (and
also peers) are responsive to children's social and emotional needs. Responsiveness is shown through
demonstrations of warmth, acceptance, and caring. Demandingness, or structure, refers to the extent to
which adults present clear behavioral expectations and fair rules, enforce those rules consistently and
fairly, and provide necessary supervision and monitoring of student behavior. A healthy balance of
responsiveness and demandingness fosters both willing compliance to rules and the social and
emotional competencies that underlie self-discipline (Bear, 2010; Brophy, 1996). This combination also
has been found to promote student perceptions of safety (Gregory et al., 2010) and liking of teachers and
schools (Osterman, 2000).
1.1.1. Stockard and Mayberry's Theoretical Framework of School Climate
An emphasis on responsiveness and demandingness is also seen in Stockard and Mayberry's
(1992) theoretical framework of school climate. Based on their comprehensive review of sociological,
psychological, and economic theories and research of organizations, which included the effective schools
and school climate literatures, Stockard and Mayberry concluded that school climate is best conceptualized
as consisting of two broad dimensions: social action and social order. Social action is similar to
responsiveness, or social support, in authoritative discipline theory, with its emphasis on the everyday
social interactions among teachers, staff, and students (i.e., the presence of caring, understanding, concern,
and respect). In contrast, social order is similar to demandingness, or structure, with its primary goal being
to curtail behavior problems and promote safety. Several studies by Griffith (1995, 1999) have supported
Stockard and Mayberry's framework, showing that elementary school students' perceptions of social action
and social order, and particularly the former, were related to their self-reports of academic performance
and satisfaction.
1.1.2. Related Student Surveys of School Climate
Other than the untitled school climate survey developed by Griffith (1995, 1999), which was
developed specifically for research purposes to test Stockard and Mayberry's (1992) framework, we
know of no school climate survey that was developed based specifically on the responsiveness–
demandingness, or support–structure, dimensions of school discipline and school climate. However, to
one extent or another, most surveys of school and classroom climate include these two dimensions
(Griffith, 1999; Stockard & Mayberry, 1992). These surveys include three with demonstrated reliability
and validity evidence: the Classroom Environment Scale (CES; Moos & Trickett, 1974, 2002; Trickett &
Quinlan, 1979), the Inventory of School Climate–Student version (ISC-S; Brand et al., 2003), and the
School Climate Survey (SCS; Emmons, Haynes, & Comer, 2002; Haynes, Emmons, & Comer, 1994). The
CES, widely used for the past three decades to assess climate at the classroom level, consists of six
subscales. Two subscales assess relationships, or social support (Affiliation and Teacher Support), and
two other subscales assess system maintenance and system change, or structure (Order and
Organization and Rule Clarity). The ISC-S, similar to the CES in theoretical framework, is designed
specifically for middle school students. It consists of 10 subscales, 4 of which are consistent with the
dimension of social support (Teacher Support, Positive Peer Interactions, Negative Peer Interactions, and
Support for Cultural Pluralism) and 3 with the dimensions of structure (Consistency and Clarity of
Rules and Expectations, Disciplinary Harshness, and Safety Problems). Finally, the SCS, which includes
an elementary and middle school version and a high school version, assesses both social support
and structure, with much greater emphasis on the former. Five of the six subscales of the most recent

elementary and middle school version (Emmons et al., 2002) focus on social support: Student
Interpersonal Relations, Student–Teacher Relations, Parent Involvement, Sharing of Resources, and
Fairness (defined as the equal treatment of students). A sixth subscale, Order and Discipline (which
assesses appropriate behavior) focuses on structure. The most recent high school version (Emmons et al.,
2002) has the same factors (with the exception of the Fairness factor being replaced by a School Building
factor, for reasons that are unclear).
Consistent with authoritative discipline theory and Stockard and Mayberry's (1992) theoretical
framework, each of the school climate surveys above, as well as the one presented in the current study,
assumes a social–ecological perspective in which an individual's perceptions of the social environment, and
especially social transactions, rather than objective reality per se, are viewed as most important in
understanding human behavior (Bandura, 1986, 1997; Bronfenbrenner, 1979). Both student and teacher
perceptions are important; thus, many surveys of school climate, including those above, have student and
teacher versions. However, student perceptions, which tend to be less positive than teacher perceptions
(Anderson, 1982; Fisher & Fraser, 1983), are used most often in research on school climate. This pattern is
not only to avoid possible teacher bias in program evaluations, as noted previously, but also because the
value of student perceptions of school climate is widely recognized in school reform efforts (Eccles et al.,
1993; Haynes et al., 1997). It also is consistent with research and theory indicating that students'
perceptions of their classroom or school environments are more important than objective indicators of
those environments in understanding student motivation, adjustment, and wellbeing (Connell & Wellborn,
1991; Eccles et al., 1993).
1.1.3. Research Supporting the Proposed Factors of the DSCS-S
Guided by authoritative discipline theory and Stockard and Mayberry's (1992) theoretical framework, the
DSCS-S was designed to assess components of social support and structure consistent with the primary goals
of SWPBS and bullying prevention programs. Two subscales were created to assess responsiveness, or social
support: Teacher–Student Relations and Student–Student Relations. Three subscales were created to assess
demandingness, or structure: Fairness of Rules, School Safety, and Student Conduct Problems. A sixth
subscale, Liking of School, also was included, for reasons described below.
1.1.3.1. Relations among teachers and students. Inclusion of the Teacher–Student Relations subscale was
supported by research showing that students feel more comfortable and supported in schools and classrooms in
which teachers are caring, respectful, and provide emotional support (e.g., Battistich, Solomon, Watson, & Schaps,
1997; Osterman, 2000). In those environments, students experience greater school completion (Croninger &
Lee, 2001), on-task behavior (Battistich et al., 1997), self-reported academic initiative (Danielsen, Wiium,
Wilhelmsen, & Wold, 2010), academic achievement (Fredricks, Blumenfeld, & Paris, 2004; Gregory & Weinstein,
2004), peer acceptance (Hughes, Cavell, & Wilson, 2001), and motivation to act responsibly and prosocially
(Wentzel, 1996). They also engage in less oppositional and antisocial behaviors (Bru, Stephens, & Torsheim,
2002; Hamre, Pianta, Downer, & Mashburn, 2008; Jessor et al., 2003), including bullying (Gregory et al., 2010).
1.1.3.2. Relations among students. In support of the Student–Student Relations subscale, research shows that
students who are rejected by their peers are at increased risk for disruptive behavior, poor achievement, disliking
of school, school avoidance, and not completing school (Buhs, Ladd, & Herald, 2006; Welsh, 2000). Students who
engage in negative peer interactions are more likely to show delinquent and aggressive behaviors and more
likely to report low self-esteem and depression (Brand et al., 2003). In contrast, social support from classmates
has been shown to be related to academic initiative (Danielsen et al., 2010), to moderate victimization and
distress for boys (Davidson & Demaray, 2007), and to predict externalizing and adaptive behaviors for girls
(Rueger, Malecki, & Demaray, 2008).
1.1.3.3. Fairness of rules. Research supporting the Fairness of Rules subscale shows that students' perceptions of
the fairness of school rules are related significantly to greater student engagement and academic achievement
and to less delinquent behavior, aggression, and student victimization (Arum, 2003; Brand et al., 2003;
Gottfredson et al., 2005). Research also shows that students engage in less offending and misconduct when
they perceive rules to be fair (Welsh, 2000, 2003). Multiple school climate surveys include a subscale designed
to assess fairness of rules (e.g., Brand et al., 2003; Furlong et al., 2005; Gottfredson, 1999).


1.1.3.4. School safety and student conduct problems. The Safety and Student Conduct Problems subscales were
included in light of research showing that students and teachers perceive school climate more favorably when
they feel safe (Kitsantas, Ware, & Martinez-Arias, 2004) and when aggression and victimization are not
common (Astor, Benbenishty, Zeira, & Vinokur, 2002; Goldstein, Young, & Boyd, 2008). Students who perceive
fewer safety problems at school tend to be more academically adjusted, engage in less delinquent and
aggressive behaviors, and report greater self-esteem and fewer depressive symptoms (Brand et al., 2003;
Horner et al., 2009). Whereas Teacher–Student Relations are commonly found on student surveys of school
climate, until recently (i.e., post-Columbine), school safety and student conduct problems have not generally
been included. When included, some surveys present school safety and student conduct problems items on
the same subscale (e.g., Barnett, Easton, & Israel, 2002; Brand et al., 2003), whereas others (e.g., Center for
Social and Emotional Education [CSEE], 2009; Emmons et al., 2002; Gottfredson, 1999) tend to include items
surveying student perception of either school safety or student conduct problems but not both. In the current
study, we developed items to tap both, expecting two distinct factors to emerge, as found on the California
Safety and School Climate Survey (Furlong et al., 2005).
1.1.3.5. Liking of school. Although liking of school does not fall clearly under either the social support or
structure category, it has nevertheless been shown to be an important aspect of school climate and was of
specific interest in the Delaware Department of Education's (DOE) evaluation of school climate. Thus, we
included a Liking of School subscale. Research shows that liking of school correlates with less delinquency,
alcohol and substance use, violence, suicidality, and emotional stress (Fredricks et al., 2004; Resnick et al.,
1997) and with higher levels of academic achievement (Ding & Hall, 2007; Thompson, Iachan, Overpeck, Ross,
& Gross, 2006). Whereas some studies have treated liking of school as a distinct construct measured separately
from school climate (e.g., Child Development Project, 1993; Ladd & Price, 1987), others have included liking of
school as one of several components of the school climate or environment (e.g., Ding & Hall, 2007).
1.2. Guiding pragmatic concerns and goals of the study
In addition to theory and research, the development of the DSCSS was guided by more pragmatic
concerns. As noted previously, the DOE desired a student survey of demonstrated validity and reliability
that assessed the social support and structure aspects of school climate targeted in SWPBS and bullying
prevention programs. The DOE also desired a survey that could be administered in less than 20 min (to
avoid teacher resistance to its use and to limit time taken from instruction) and one that could be used for
grades 312. The latter was to allow schools to monitor changes in scores on the same subscales from year
to year and across grade levels. Finally, preference was for a survey that was not commercially produced
and costly. We found no survey that fit all of those criteria. Given that 29 states currently provide or
mandate school climate assessments and that only one state (i.e., Rhode Island) offers evidence of the
validity and reliability of the assessment instruments used (Cohen et al., 2009), the need for a survey such
as the DSCSS is clearly evident not only in Delaware but also in most other states.
The goals of the present study were twofold. The first goal was to provide evidence of the DSCS-S's
construct validity using confirmatory factor analyses to test the proposed six-factor model of school
climate, as well as alternative models that might better fit the data. The second goal was to provide
evidence of the survey's concurrent validity. Consistent with the research reviewed previously, we
predicted that favorable student perceptions of school climate would correlate positively with academic
achievement and negatively with school suspensions and expulsions.
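Concurrent validity checks of this kind reduce to school-level Pearson correlations between aggregated survey scores and school outcome rates. The sketch below illustrates the computation only; all values (school-mean climate scores, test pass rates, and suspension rates) are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical school-level values (not from the study):
# mean DSCS-S total climate score, % passing the state test, % suspended/expelled.
climate = np.array([3.1, 2.8, 3.4, 2.6, 3.0, 3.3])
pass_rate = np.array([72.0, 61.0, 80.0, 55.0, 68.0, 77.0])
susp_rate = np.array([8.0, 14.0, 5.0, 18.0, 10.0, 6.0])

# np.corrcoef returns the 2x2 correlation matrix; [0, 1] is the off-diagonal r.
r_achievement = np.corrcoef(climate, pass_rate)[0, 1]
r_suspension = np.corrcoef(climate, susp_rate)[0, 1]

print(f"r(climate, achievement) = {r_achievement:.2f}")
print(f"r(climate, suspensions) = {r_suspension:.2f}")
```

With data patterned like the study's predictions, the first correlation comes out positive and the second negative.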
2. Method
2.1. Participants
The original sample consisted of 12,262 students enrolled in 85 schools in grades 312 in the state of
Delaware. Among those students, 482 (3.9% of the sample) were deleted because responses on demographic
items (i.e., gender, race–ethnicity, and grade) were either missing or unreadable by the Scantron computerized
scoring system used (i.e., bubbles were not filled in dark enough or more than one response was given to the
same item). Rather than estimating missing values, we deleted these cases because gender, race–ethnicity, and


Table 1
Demographic information of the sample.

                        Elementary      Middle          High           Full sample     Statewide
Gender
  Boys                  3873 (49.1%)    1240 (49.2%)    667 (48.7%)    5923 (50.3%)    51.8%
  Girls                 4016 (50.9%)    1282 (50.8%)    702 (51.3%)    5857 (49.7%)    48.4%
Race/ethnicity
  Caucasian             3962 (50.2%)    1203 (47.7%)    803 (58.7%)    5968 (50.7%)    53.9%
  African American      2355 (29.9%)    832 (33.0%)     341 (24.9%)    3528 (29.9%)    33.0%
  Hispanic              819 (10.4%)     251 (10.0%)     93 (6.8%)      1163 (9.9%)     8.8%
  Asian                 203 (2.6%)      54 (2.1%)       58 (4.2%)      315 (2.7%)      3.0%
  Other                 550 (7.0%)      182 (7.2%)      74 (5.4%)      806 (6.8%)      –a
Free/reduced lunch b    47%             45%             34%            44%             42%

a This category was not reported by the state.
b Average percentage of students across schools who qualified for free or reduced lunch, based on each school's report for the entire student body of the school.

grade level were used as grouping variables in the multigroup confirmatory factor analyses. In this final sample,
missing responses to individual items on the survey ranged from 0.1% to 2.1%. The sample, as used in all
statistical analyses that follow, included 11,780 students and 85 schools. There were 7889 students in 58
elementary schools, 2522 in 17 middle schools, and 1369 in 10 high schools.¹ The sample represented
approximately 50% of public schools in the state, excluding alternative education schools, special education
schools, and preschools, and consisted of approximately 10% of the state's public school population of 122,281
students.
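The listwise deletion described above (dropping cases with missing or unreadable grouping variables rather than imputing) can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual processing script; the column names and records are hypothetical.

```python
import pandas as pd

# Hypothetical raw survey records; None marks a missing or unreadable field.
raw = pd.DataFrame({
    "gender": ["M", "F", None, "F", "M"],
    "race_ethnicity": ["White", "Black", "Hispanic", None, "Other"],
    "grade": [3, 7, 10, 5, None],
    "item_1": [4, 3, 2, 4, 1],
})

# Listwise-delete any case missing a grouping variable needed for the
# multigroup confirmatory factor analyses (rather than estimating values).
clean = raw.dropna(subset=["gender", "race_ethnicity", "grade"])
print(len(raw) - len(clean), "cases deleted")  # 3 cases with missing demographics
```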
Table 1 provides student demographic information on the sample, as obtained from the surveys, and the
percentage of students in each category statewide as reported by the Delaware DOE. The Delaware DOE
website was used to obtain the percentage of students who qualified for free or reduced lunch within each of
the 85 participating schools (which included all students in each school and not just those in the study). As
seen in the table, the demographics for the final sample closely approximated those for the state. However, the
percentages of African Americans and Caucasians in the sample were slightly less (approximately 3% in each
category) than those reported by the state. We speculate that these differences in percentages can be
attributed to the differences in the racial–ethnic identification categories used in the current study and those
used by the DOE in its reporting of the racial–ethnic composition of each school. That is, in addition to White,
Black, Hispanic, and Asian, which are the four categories reported by the DOE based on parent registration of
their children upon entering school, students in the current study were given the option of self-identifying as
"Other," including mixed races. This "Other" category was chosen by 6.8% of the students. It is speculated that
the "Other" category included many multiracial students identified by the DOE as either Black or White.
2.2. Measures
2.2.1. Delaware School Climate Survey–Student (DSCS-S)
The proposed 29 items of the DSCS-S resulted from a two-year process during which multiple items
were drafted and refined, with reviews of each draft conducted by five members of Delaware's SWPBS
training team. This team consisted of two university professors in school psychology, a DOE supervisor of
special education, and the SWPBS project director and coordinator. Two years prior to the current study, an
initial draft of the survey consisting of 29 items was field-tested in two schools. The survey was completed
¹ For purposes of this study, and consistent with the classification used by the Delaware DOE, an elementary school was defined as
a school with students below the 6th-grade level, a middle school was defined as a school with grades no higher than the 8th grade
or lower than 5th grade, and a high school was defined as a school with grades 9 to 12. Among the elementary schools contributing
to this study's sample, 2 consisted of grades kindergarten (K) to 3, 11 consisted of grades K to 4, 1 consisted of grades 2 to 4, 32
consisted of grades K to 5, 5 consisted of grades 1 to 5, 3 consisted of grades K to 6, and 4 consisted of grades 4 to 6. Among the
middle schools, 2 included grades 5 to 6, 2 included grades 5 to 8, 9 included grades 6 to 8, and 4 included grades 7 to 8. All 10 high
schools included grades 9 to 12.


by 221 students in an elementary school (grades K–5) and 155 students in a middle school (grades 6–8).
Items were designed to assess four school climate factors: Caring Relationships, Fairness of Rules, School
Safety, and Liking of School. Following this administration, and based on feedback from the faculty at each
school, six items were deleted, and others were revised. As a result of the difficulty teachers observed
among younger students when completing the survey, it was recommended that the survey not be given to
students below third grade, even if read aloud.
Following the above revisions, and the addition of six items designed to assess student conduct problems
(e.g., fighting, stealing, and cheating), the survey was re-administered to a larger number of students the
following school year for purposes of conducting exploratory factor analyses. The revised 29 items were
administered to 4103 students in grades 3–12 enrolled in 21 elementary schools (n = 2896), 6 middle schools
(n = 899), and 2 high schools (n = 308). Results of exploratory factor analyses, conducted for all grades
combined using principal axis factor analyses, with both orthogonal (i.e., varimax) and oblique (i.e., oblimin)
rotations, indicated five factors: Teacher–Student Relations, Student–Student Relations, Student Conduct
Problems, School Safety, and Liking of School. A forced six-factor solution, with oblimin rotation, yielded the
expected sixth factor, Fairness of Rules, with an eigenvalue of 1.03 (and .945 for School Safety). These results
led to minor revisions in the wording of several items, with the intent of producing a more robust and
consistent six-factor solution across elementary, middle, and high schools. For this purpose, and for the
current study, the revised survey was administered the following year to a large number of students, and
confirmatory factor analyses were conducted, as explained later in the data analyses and results sections.
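As a rough illustration of rotated exploratory factor analysis, the sketch below uses scikit-learn's FactorAnalysis with varimax rotation on simulated data. Note the assumptions: scikit-learn fits a maximum-likelihood factor model, not the principal axis factoring used in the study, and the two-factor item data here are invented stand-ins for survey items.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 500 respondents: two latent factors, three items loading on each,
# plus item-specific noise (a toy analogue of two climate subscales).
n = 500
f1, f2 = rng.normal(size=(2, n))
items = np.column_stack([
    f1 + rng.normal(scale=0.5, size=n),
    f1 + rng.normal(scale=0.5, size=n),
    f1 + rng.normal(scale=0.5, size=n),
    f2 + rng.normal(scale=0.5, size=n),
    f2 + rng.normal(scale=0.5, size=n),
    f2 + rng.normal(scale=0.5, size=n),
])

# Fit a two-factor model with an orthogonal (varimax) rotation.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = fa.components_.T  # rows = items, columns = factors
print(np.round(loadings, 2))
```

With clean simulated structure like this, the rotated loading matrix shows the first three items loading on one factor and the last three on the other, mirroring the simple structure sought in the study's EFA.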
As used in the current study, and described later, the DSCS-S consisted of a total of 29 items designed to
assess those aspects of school climate reviewed previously: Teacher–Student Relations (8 items), Student–
Student Relations (4 items), Fairness of Rules (4 items), Student Conduct Problems (6 items), School Safety
(3 items), and Liking of School (4 items). Teacher–Student Relations refers to the perceived quality of students'
interactions with adults in the school; in particular, adults' demonstration of warmth, respect, recognition, and
fairness. Student–Student Relations captures students' beliefs about the quality of students' interactions with
other students, such as peers showing friendliness and respecting one another. Fairness of Rules assesses
students' perceived fairness of school rules and their consequences. Student Conduct Problems was intended
to assess the degree to which students perceive fighting, harming others, bullying, stealing, and cheating as
problems in their school (e.g., "Stealing [drugs, fighting, etc.] is a problem in this school."). School Safety taps
how students generally feel about the level of safety in their school. Liking of School assesses how students
generally feel about their school. Consistent with the goal of producing a survey appropriate for students as
early as the third grade, the readability level for the final version of the DSCS-S was 2.6 based on the
Flesch–Kincaid readability formula (Kincaid, Fishburne, Rogers, & Chissom, 1975). The survey and its
guidelines for interpreting scores are available from www.delawarepbs.org.
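The Flesch–Kincaid grade-level formula cited above is straightforward to compute: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. Below is a rough sketch with a naive vowel-group syllable counter; real readability tools use more careful syllabification, so this is an approximation for illustration only.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels; real tools use dictionaries.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Kincaid et al. (1975) grade-level formula.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

print(round(flesch_kincaid_grade("I like my school. My teachers care about me."), 2))
```

Short sentences of mostly one-syllable words, like survey items written for third graders, yield grade levels in the early-elementary range.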
Students responded to the 29 items using a 4-point Likert scale, with 1 = Strongly Disagree, 2 = Disagree,
3 = Agree, and 4 = Strongly Agree. For computing the total school climate score and all other factor and
subscale scores, items reflecting a negative school climate (e.g., "I wish I went to another school.") were
reverse scored. In the current study, the items and item responses were presented on a Scantron sheet in
which students filled in the bubble representing their selected responses to each item. Each response sheet
was scanned and scored by computer by the DOE.
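Reverse scoring on a 1–4 scale amounts to subtracting each response from 5 (the scale minimum plus the maximum), so that higher scores always indicate a more positive climate. A minimal sketch:

```python
import numpy as np

# Responses on the 1-4 scale (1 = Strongly Disagree ... 4 = Strongly Agree).
responses = np.array([1, 2, 3, 4])

# For negatively worded items (e.g., "I wish I went to another school."),
# reversed = (min + max) - score = 5 - score on a 1-4 scale.
reversed_scores = 5 - responses
print(reversed_scores)  # [4 3 2 1]
```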
2.2.2. Academic achievement
Academic achievement data were obtained at the school level from the school profiles website of each
school, which is maintained by the DOE. The data consisted of the percentage of students in each school
who passed the English-Language Arts (reading and writing; ELA) and Mathematics portions of the
Delaware Student Testing Program (DSTP) during the year that the DSCS-S was administered (March
2007). The DSTP is a state-wide standards-based assessment of student progress towards the Delaware
content standards in adherence to No Child Left Behind (NCLB) accountability requirements. Items, aligned
with state standards, were developed by teachers and external consultants with expertise in each content
area. Reliability exceeds .90 for each portion of the DSTP (Delaware Department of Education, 2009). A
passing score indicates that a student met or exceeded the state standards. Five criterion-based
performance scores are used: 1 = well below the standard, 2 = below the standard, 3 = meets the standard,
4 = exceeds the standard, and 5 = distinguished. Thus, a score of 3 or above is passing. (Note that, in this
study, only dichotomized data at the school level were employed, based on availability.)
G.G. Bear et al. / Journal of School Psychology 49 (2011) 157–174
2.2.3. Suspensions and expulsions
A school-level suspension and expulsion score consisted of the percentage of students in each school
(using a non-duplicated count) who were suspended from school or expelled during the school year of the
study. All schools in Delaware are required to report to the DOE the number of students suspended or
expelled, and those data were obtained from the DOE. The types of violations were not reported, but they
generally ranged from repeated acts of disruptive behavior to criminal offenses, such as possession of drugs
and fighting.
2.3. Procedures
All 85 schools participating in the study volunteered to do so upon an invitation from the Delaware DOE
in a letter sent to each school district office. In return for their participation, each school was promised a
report of the survey results. Because of cost limitations, each school was sent 200 surveys. The average
school return rate was 73%, with a range of 19.5% to 100%, indicating that each school returned an average
of 144 completed surveys out of the 200 provided. The average return rate was 70% for elementary
schools, 79% for middle schools, and 72% for high schools. Each school was directed to administer the
school climate surveys to classrooms selected at random and was given steps for doing so. In the
written directions for administering the surveys, schools were specifically instructed to either (a) enter all
regular classroom teachers' names into a hat and draw enough classes to complete the 200 surveys or (b)
select teachers systematically from the school roster (e.g., every third teacher's name, in alphabetical
order) to obtain a sufficient number of classrooms. If students changed classrooms, as is done in middle
and high schools, schools were told to select teachers in only one subject area that was required of all
students (e.g., math, English, or social studies). Written instructions also were given for the administration
of the survey; teachers were asked to read the survey aloud in elementary schools and to assure all
students that their responses were confidential. To ensure confidentiality, students were told not to record
their names or any identification numbers. Likewise, as requested by the DOE, no method was used to
identify the student's classroom or teacher. However, items on the survey asked students to identify their
grade, race (White, Black, Hispanic, Asian, or Other, including mixed races), and gender. All surveys were
completed in late February and March.
2.4. Data Analyses
In our analyses of the factor structure of the DSCS-S, we used confirmatory factor analysis because it is
the method of choice when the factorial structure of a scale is hypothesized on the basis of theory or
previous research, including previous exploratory factor analyses of the data (Thompson, 2004).
We first tested the proposed six-factor model, with each item specified as an indicator of the factor
corresponding to its assigned subscale. In addition, we estimated a bifactor model with a general factor
and six specific factors. After discovering that the Student Conduct Problems factor did not correlate
appreciably with the other factors and that items on this factor did not load on the general factor of school
climate, we eliminated this factor and examined three alternative CFA models that might better represent
the structure of the DSCS-S. These alternative models included a one-factor model, a five-factor model, and
a bifactor model with one general factor and five specific factors. The Satorra–Bentler scaled chi-square
difference test was used to compare nested models (Asparouhov & Muthén, 2010), and Akaike
Information Criterion (AIC) values were used to compare non-nested models.
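Because Satorra–Bentler scaled chi-squares cannot simply be subtracted, the scaled difference test rescales the difference using each model's scaling correction factor. A sketch of the standard computation follows; the fit statistics and correction factors in the example are hypothetical, not values from this study.

```python
def sb_scaled_diff(T0, df0, c0, T1, df1, c1):
    """Satorra-Bentler scaled chi-square difference for nested models.

    T0, df0, c0: scaled chi-square, degrees of freedom, and scaling
                 correction factor of the nested (more constrained) model.
    T1, df1, c1: the same quantities for the comparison model.
    Returns the rescaled difference statistic and its degrees of freedom.
    """
    ddf = df0 - df1
    cd = (df0 * c0 - df1 * c1) / ddf   # scaling factor for the difference
    trd = (T0 * c0 - T1 * c1) / cd     # rescaled chi-square difference
    return trd, ddf

# Hypothetical fit results, not values from the study:
trd, ddf = sb_scaled_diff(T0=250.0, df0=120, c0=1.2, T1=200.0, df1=110, c1=1.1)
print(round(trd, 2), ddf)  # 34.78 10
```

The resulting statistic is referred to a chi-square distribution with degrees of freedom equal to the difference in model degrees of freedom.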
For cross-validation purposes, the sample was randomly divided into two subsamples. The first subsample
was used to examine model fit for the hypothesized model and the three alternative models. The second
subsample was used to verify and replicate the final model derived from the first subsample.
Mplus 5.2 (Muthén & Muthén, 1998–2008) was used for conducting the confirmatory factor analyses.
We handled missing data using the full information maximum likelihood (FIML) estimator in Mplus, a
recommended procedure for estimating parameters with incomplete data. Because students were nested
within schools, we calculated intraclass correlations (ICCs) for each of the factor scores to assess the
degree to which variability in student responses could be accounted for at the school level. The ICCs on the
factor scores in elementary schools ranged from .06 to .15. In middle and high schools, ICCs ranged from
.04 to .12 and from .08 to .18, respectively. ICCs were much higher when calculated without controlling for
school type (i.e., elementary, middle, and high school). Because the ICCs indicated that student responses
were nonindependent and that a portion of the variance was accounted for at the school level, all
confirmatory factor analyses accounted for the nesting of students within schools, and individual item
responses were centered around the school mean by using the centering command in Mplus. Group-mean
centering addressed the clustering issue by removing school mean differences from the item responses,
thereby producing ICCs of zero for each item.
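The two steps described above, estimating how much response variance lies between schools (the ICC) and removing school means by group-mean centering, can be sketched for a balanced design as follows. The data are toy values, not the study's, and `icc1` assumes equal numbers of students per school.

```python
def mean(xs):
    return sum(xs) / len(xs)

def icc1(groups):
    """ICC(1) from a one-way ANOVA, assuming equal-sized groups (schools)."""
    k = len(groups[0])                      # students per school
    grand = mean([x for g in groups for x in g])
    # Between-schools and within-schools mean squares:
    msb = k * sum((mean(g) - grand) ** 2 for g in groups) / (len(groups) - 1)
    msw = (sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
           / (len(groups) * (k - 1)))
    return (msb - msw) / (msb + (k - 1) * msw)

def group_mean_center(groups):
    """Subtract each school's mean from its students' responses (ICC -> 0)."""
    return [[x - mean(g) for x in g] for g in groups]

schools = [[1, 1], [3, 3]]          # toy data: all variance lies between schools
print(icc1(schools))                 # 1.0
print(group_mean_center([[1, 3]]))   # [[-1.0, 1.0]]
```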
Chi-square difference tests were calculated using the Satorra–Bentler scaled chi-square difference test
(Asparouhov & Muthén, 2010). Given that chi-square fit statistics are sensitive to sample size and to
violations of the normality assumption, three other commonly used fit indices were also employed to assess
model fit: the Comparative Fit Index (CFI), the Root Mean-Square Error of Approximation (RMSEA), and the
Standardized Root Mean-Square Residual (SRMR). The CFI, an incremental fit index, assesses the degree to
which the tested model is superior to an alternative model in reproducing the observed covariance matrix.
Because this index evaluates goodness of fit, larger values indicate better model fit; generally, a value greater
than or equal to .95 is considered an adequate fit (Hu & Bentler, 1998). The RMSEA and SRMR are absolute fit
indices that assess the degree to which the model-implied covariance matrix matches the observed
covariance matrix. Because these indices estimate badness of fit, smaller values indicate better model fit.
Commonly applied rules of thumb suggest that values less than or equal to .08 reflect an adequate fit
(Hu & Bentler, 1998).
In order to investigate whether the survey had a comparable factor structure across different groups
of respondents (i.e., elementary, middle, and high school students; racial–ethnic groups; and boys and
girls), measurement invariance was tested in a hierarchical fashion by testing configural invariance, weak
factorial invariance, and strong factorial invariance (Meredith, 1993; Widaman & Reise, 1997).² The purpose
of testing configural invariance is to investigate whether groups share the same structure (i.e., whether the
same items load on the same latent factors) in the confirmatory factor analysis. When testing for this type
of invariance, the pattern of freed and fixed parameters is kept the same across groups; however, the
estimates for the parameters in each group are independent. Configural invariance is supported if the fit
indices for the groups are adequate. If configural invariance is not achieved, comparing groups on the same
scale would be similar to comparing apples with oranges (Chen, 2007; Chen & West, 2008).
If configural invariance between groups is found, the next step is to test for weak factorial invariance,
which examines whether the groups use an equal unit of measurement in their responses to the survey items.
This test is done by constraining the factor loadings of the groups to be equal, with all other parameters
estimated independently. Because the subsequent models are nested within one another, the differences
between the fit indices for the models were calculated and used to evaluate pattern invariance.
Stringent criteria have been recommended for evaluating weak factorial invariance with total sample sizes
greater than 300: a decrease in CFI of at least .010, supplemented by an increase in RMSEA of at least .015 or
an increase in SRMR of at least .030, indicates noninvariance (Chen, 2007). When groups differ greatly
in sample size, even more stringent criteria may be imposed, in which a decrease in CFI of at
least .010 alone indicates noninvariance. After weak factorial invariance is found, strong factorial invariance
is tested by constraining both the factor loadings and the intercepts to be equal across the groups. If strong
factorial invariance is found, it suggests that the point of origin for the scale is equal across groups. We used
the following criteria for evaluating strong factorial invariance: a decrease in CFI of at least .010, supplemented
by an increase in RMSEA of at least .015 or an increase in SRMR of at least .010, indicates noninvariance (Chen,
2007).
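The change-in-fit cutoffs above can be expressed as a small decision helper. This is our paraphrase of the criteria as stated in this section (Chen, 2007), not code from the study, and it does not implement the stricter ΔCFI-only rule mentioned for groups with very unequal sample sizes.

```python
def flags_noninvariance(delta_cfi, delta_rmsea, delta_srmr, step="loadings"):
    """Apply the change-in-fit criteria described in the text (Chen, 2007).

    delta_cfi:   change in CFI (negative = decrease) from the less to the
                 more constrained model.
    delta_rmsea: increase in RMSEA; delta_srmr: increase in SRMR.
    step: "loadings" (weak invariance; SRMR cutoff .030) or
          "intercepts" (strong invariance; SRMR cutoff .010).
    """
    srmr_cutoff = 0.030 if step == "loadings" else 0.010
    cfi_drop = delta_cfi <= -0.010
    return cfi_drop and (delta_rmsea >= 0.015 or delta_srmr >= srmr_cutoff)

print(flags_noninvariance(-0.012, 0.016, 0.000))                     # True
print(flags_noninvariance(-0.002, 0.020, 0.050))                     # False
print(flags_noninvariance(-0.012, 0.000, 0.012, step="intercepts"))  # True
```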
After conducting the confirmatory factor analyses, we determined the correlations between the factors
and the internal consistency of each factor. Next, using schoolwide-level data, we examined evidence of
concurrent validity by correlating the mean schoolwide scores on each factor with the percentage of
students earning passing scores in English-Language Arts (ELA) and Mathematics and with suspension
and expulsion rates. For purposes of examining differences in correlations among variables as a function of
the school's grade levels (i.e., 58 elementary versus 27 middle and high schools), the middle and high school
samples were combined because of the small number of middle schools (n = 17) and high schools (n = 10).
² Strict factorial invariance, in which the residuals are constrained to be equal, is the most rigorous test of measurement
invariance and is seldom found, particularly in developmental research. Thus, strict factorial invariance was not examined in this
study.
3. Results
3.1. Confirmatory Factor Analysis
3.1.1. Exclusion of Student Conduct Problems subscale
The proposed six-factor model yielded adequate fit indices, χ² = 2766.47 (362, N = 5907), p < .001;
CFI = .934, RMSEA = .034, and SRMR = .042. However, the Student Conduct Problems factor did not
correlate significantly with two of the factors: Teacher–Student Relations, r = .02 (p = .390), and Fairness
of Rules, r = .02 (p = .450). It was only weakly correlated with the three other factors: Student–Student
Relations, r = .24 (p < .001); Liking of School, r = .10 (p < .001); and School Safety, r = .24 (p < .001). These
weak correlations were particularly striking given that the estimated correlations between the five
other factors ranged from .50 to .75. A bifactor model with six specific factors and a general factor was
also estimated, χ² = 2513.61 (348, N = 5907), p < .001; CFI = .942, RMSEA = .032, and SRMR = .037.
Notably, two of the six items from the proposed Student Conduct Problems factor did not load
significantly onto the general factor (with loadings of .02 and .03), and although the loadings for the
other four items were statistically significant (p < .05), their values were low, ranging from .05 to .26. The
absence of strong relations between the Student Conduct Problems factor and the other factors,
together with the low factor loadings of its items on the general factor, provided strong evidence that
the Student Conduct Problems factor did not measure the same construct measured by the other five
factors. Thus, the Student Conduct Problems subscale was excluded from subsequent analyses.
3.1.2. Testing alternative models
As shown in Table 2, a one-factor model, the first and most parsimonious of the three alternative
models, yielded poor fit statistics. When compared with the nested five-factor model, which yielded better
fit statistics, the Satorra–Bentler scaled chi-square difference test indicated that the five-factor model fit
significantly better than the one-factor model, Δχ² = 340.71 (df = 14), p < .001. Finally, a bifactor model
with one general factor (i.e., School Climate) and five specific factors was estimated. Each of the fit indices
for this model exceeded the recommended cutoff criteria. The Akaike Information Criterion (AIC) values
for the five-factor model (AIC = 284,305.73) and the bifactor model (AIC = 284,047.44) were then
compared. Because the bifactor model had the lower AIC value, it was selected as the final model.
3.1.3. Confirming fit of final model
Confirmatory factor analyses on the second randomly selected half of the sample also generated robust
fit statistics for the bifactor model with one general factor and five specific factors, χ² = 1179.08 (207,
N = 5873), p < .001; CFI = .965, RMSEA = .028, and SRMR = .028. The completely standardized factor
loadings were also compared to ensure that there were no large differences across the randomly selected
samples. As shown in Table 3, the 23 items had similar factor loadings on the five specific factors and on
the general factor in both halves of the sample. Because no appreciable differences in the fit indices or
factor loadings were found for the two halves of the sample, all subsequent analyses were run with the
full sample. A summary of the fit statistics for the bifactor model with the full sample and subsamples is
presented in Table 4.

Table 2
Fit statistics for models tested.

Model                χ²         df    CFI    SRMR   RMSEA
One-factor model     7226.12⁎   234   .765   .064   .071
Five-factor model    1547.41⁎   220   .955   .029   .032
Bifactor model       1359.48⁎   207   .961   .028   .031

Models were tested on one half of the sample, randomly selected. N = 5907. χ² = chi-square statistic; df = degrees of freedom;
CFI = Comparative Fit Index; SRMR = Standardized Root Mean-Square Residual; RMSEA = Root Mean-Square Error of Approximation.
⁎ p < .001.
Table 3
Confirmatory factor analysis of the DSCS-S: bifactor model.

                                                          Sample 1               Sample 2
Item                                                      Loading  SE    z       Loading  SE    z

General factor
I am proud of my school.                                  .70      .01   51.60   .69      .01   50.11
I like this school.                                       .69      .01   55.54   .66      .01   57.93
I feel safe in this school.                               .62      .02   40.68   .63      .01   46.47
Adults in this school treat students fairly.              .62      .01   46.24   .61      .01   49.09
Teachers listen to you when you have a problem.           .59      .01   48.64   .59      .01   46.99
The school's Code of Conduct is fair.                     .57      .01   39.27   .55      .01   38.15
School rules are fair.                                    .55      .02   36.70   .55      .01   37.50
Adults who work in this school care about the students.   .54      .01   40.15   .55      .01   38.91
This school feels like a prison.                          .53      .02   34.23   .51      .02   26.84
I like my teachers.                                       .53      .02   31.54   .51      .02   31.45
This school is safe.                                      .53      .02   35.56   .53      .01   45.03
Students feel safe in this school.                        .53      .02   32.67   .55      .01   38.85
Teachers treat students with respect.                     .52      .01   37.78   .52      .02   31.08
Teachers are fair when correcting misbehavior.            .52      .02   31.70   .51      .02   31.41
Teachers care about their students.                       .51      .01   35.41   .51      .02   31.96
Students treat each other with respect.                   .50      .02   29.65   .51      .01   41.64
Students really care about each other.                    .48      .02   31.10   .48      .01   33.54
I wish I went to another school.                          .48      .02   27.01   .48      .02   29.28
Students are friendly towards most other students.        .45      .02   29.62   .42      .02   26.59
Teachers let you know when you are doing a good job.      .41      .02   22.24   .40      .02   25.29
Students get along with one another.                      .40      .02   23.74   .38      .02   23.06
Consequences of breaking rules are fair.                  .35      .02   20.98   .32      .02   16.03
The rules are too harsh.                                  .32      .02   13.92   .33      .02   14.55

Factor 1: Teacher–Student Relations
Teachers care about their students.                       .53      .02   27.35   .53      .02   23.81
Adults who work in this school care about the students.   .40      .02   18.01   .40      .02   17.62
Teachers treat students with respect.                     .38      .02   18.00   .40      .02   16.51
I like my teachers.                                       .34      .02   15.12   .33      .02   15.43
Teachers listen to you when you have a problem.           .28      .02   11.81   .31      .02   13.65
Adults in this school treat students fairly.              .26      .03   10.44   .29      .02   13.32
Teachers let you know when you are doing a good job.      .22      .03   8.57    .25      .03   10.11
Teachers are fair when correcting misbehavior.            .21      .02   9.11    .26      .02   13.46

Factor 2: Student–Student Relations
Students really care about each other.                    .56      .02   29.60   .53      .02   26.98
Students get along with one another.                      .54      .02   29.67   .56      .02   30.95
Students treat each other with respect.                   .52      .02   25.54   .47      .02   30.14
Students are friendly towards most other students.        .46      .02   27.04   .45      .02   26.38

Factor 3: Fairness of Rules
School rules are fair.                                    .54      .03   18.39   .47      .03   16.42
The rules are too harsh.                                  .33      .03   12.12   .32      .03   11.94
The school's Code of Conduct is fair.                     .28      .03   8.65    .35      .03   12.72
Consequences of breaking rules are fair.                  .21      .03   6.28    .28      .03   9.77

Factor 4: School Safety
Students feel safe in this school.                        .49      .02   20.89   .43      .02   19.58
This school is safe.                                      .47      .02   22.16   .47      .02   21.66
I feel safe in this school.                               .47      .02   20.65   .45      .02   19.97

Factor 5: Liking of School
I wish I went to another school.                          .54      .03   18.39   .52      .03   19.76
I like this school.                                       .50      .02   20.42   .55      .03   23.78
I am proud of my school.                                  .30      .02   13.49   .30      .02   15.58
This school feels like a prison.                          .17      .02   7.79    .12      .02   6.13

Loading = standardized factor loading; SE = standard error; z = robust z score.
Table 4
Fit statistics between groups for bifactor model.

Model              n        χ²        df    CFI    SRMR   RMSEA
Full sample        11,780   2250.69   207   .963   .027   .029
Elementary         7889     1493.26   207   .965   .026   .028
Middle             2522     816.46    207   .955   .038   .034
High               1369     589.85    207   .950   .037   .037
Caucasian          5968     1298.51   207   .964   .028   .030
African American   3528     803.68    207   .963   .028   .029
Hispanic           1163     472.05    207   .952   .034   .033
Boys               5923     1242.63   207   .965   .028   .029
Girls              5857     1244.32   207   .963   .027   .029

χ² = chi-square statistic; df = degrees of freedom; CFI = Comparative Fit Index; SRMR = Standardized Root Mean-Square Residual;
RMSEA = Root Mean-Square Error of Approximation.

3.1.4. Measurement invariance across grade level
A model testing the configural invariance of the confirmatory factor analysis across elementary, middle,
and high school grade levels yielded fit statistics suggesting adequate model fit (see Table 5). The difference
between test statistics for the weak factorial (Model 2) and configural (Model 1) invariance models indicated
that there was weak factorial invariance across grade levels, Satorra–Bentler scaled Δχ² = 137.70 (df = 46),
p < .001; ΔCFI = .000, ΔRMSEA = .001, and ΔSRMR = .004. When the test statistics for the strong factorial
(Model 3) and weak factorial (Model 2) invariance models were compared, invariance in the point of origin
for the subscales was found across grade levels, Satorra–Bentler scaled Δχ² = 748 (df = 34), p < .001;
ΔCFI = .002, ΔRMSEA = .000, and ΔSRMR = .000.
3.1.5. Measurement invariance across racial–ethnic groups
A model testing the configural invariance of the confirmatory factor analysis across three racial–ethnic
groups (i.e., Caucasian, African American, and Hispanic) yielded fit statistics suggesting adequate model fit
(see Table 5). Reports from students who indicated Asian or Other racial–ethnic identity were excluded from
these analyses due to small sample sizes. The difference between test statistics for the weak factorial
(Model 2) and configural (Model 1) invariance models indicated that there was weak factorial invariance
across racial–ethnic groups, Satorra–Bentler scaled Δχ² = 82.43 (df = 46), p < .001; ΔCFI = .001,
ΔRMSEA = .001, and ΔSRMR = .003. When the test statistics for the strong factorial (Model 3) and weak
factorial (Model 2) invariance models were compared, invariance in the point of origin for the subscales was
found across racial–ethnic groups, Satorra–Bentler scaled Δχ² = 61.93 (df = 34), p = .002; ΔCFI = .002,
ΔRMSEA = .000, and ΔSRMR = .000.
Table 5
Fit statistics for confirmatory factor analysis of bifactor model testing measurement invariance across grade level, race, and gender.

Model          χ²        df    CFI    SRMR   RMSEA
Grade level
  Model 1      3280.40   655   .958   .030   .032
  Model 2      3340.96   701   .958   .034   .031
  Model 3      3502.17   735   .956   .034   .031
Race
  Model 1      2747.42   655   .961   .029   .030
  Model 2      2738.29   701   .962   .032   .029
  Model 3      2870.23   735   .960   .032   .029
Gender
  Model 1      2588.45   431   .963   .028   .029
  Model 2      2540.06   454   .964   .029   .028
  Model 3      2634.62   471   .962   .029   .028

Model 1: configural invariance. Model 2: weak factorial invariance. Model 3: strong factorial invariance. χ² = chi-square statistic;
df = degrees of freedom; CFI = Comparative Fit Index; SRMR = Standardized Root Mean-Square Residual; RMSEA = Root Mean-Square
Error of Approximation.
Table 6
Internal consistency coefficients and correlation coefficients between subscale and total scale scores for the full sample.

                                 1.      2.      3.      4.      5.      6.
1. Teacher–Student Relations    (.88)
2. Student–Student Relations     .51    (.81)
3. Fairness of Rules             .64     .42    (.70)
4. School Safety                 .61     .57     .49    (.83)
5. Liking of School              .66     .52     .59     .65    (.83)
6. Total Scale                   .89     .73     .77     .79     .85    (.94)

Values in parentheses are coefficients of internal consistency (Cronbach's alpha) for each subscale. All correlations are significant at
p < .001.

3.1.6. Measurement invariance across gender
The test statistics for configural invariance (Model 1) across gender indicated adequate model fit (see
Table 5). The weak factorial invariance model (Model 2) was nested within Model 1. The difference between
test statistics for the two models indicated that there was weak factorial invariance across gender,
Satorra–Bentler scaled Δχ² = 28.99 (df = 23), ns; ΔCFI = .001, ΔRMSEA = .001, and ΔSRMR = .001.
The strong factorial invariance model (Model 3) was nested within Model 2. The difference between test
statistics for the two models indicated that there was strong factorial invariance across gender,
Satorra–Bentler scaled Δχ² = 40.38 (df = 17), p = .001; ΔCFI = .002, ΔRMSEA = .000, and ΔSRMR = .000.
3.2. Correlations among Factors and Internal Consistency
To examine the relative independence of scores on the five subscales supported by the confirmatory
factor analyses, and the extent to which they assess the school climate construct, correlations between
subscale scores were computed. For these analyses, and all analyses that follow, we used manifest
indicators of the factors (i.e., sums of raw scores of items on the derived subscales and total scale). As
shown in Table 6, for all students combined, correlation coefficients ranged in absolute value from .42 to
.66, with a median of .58. These results indicate that 56% (1 − .66² = .56) to 83% (1 − .42² = .83) of the
variance in each subscale score is independent of the scores on the other subscales. Correlations between
scores on the five subscales also were computed for each subgroup across race–ethnicity, gender, and
grade level, resulting in 80 correlations (10 pairwise correlations among the 5 subscales for each of 3
racial–ethnic groups, 2 genders, and 3 grade levels). Coefficients among subscales ranged from .26 to .68,
with a median of .53. All correlation coefficients were statistically significant at the .001 level.
With respect to the reliability of DSCS-S scores (see Table 6), for all students combined across grade levels,
internal consistency coefficients ranged from .70 to .88. The reliability of scores for each of the five subscales
also was computed for each subgroup (3 racial–ethnic groups, 2 genders, and 3 grade levels). Among the 40
reliability analyses computed, the median coefficient was .81, with coefficients ranging from .63 (Fairness of
Rules for elementary students) to .89 (Teacher–Student Relations for Caucasian students). There were
negligible differences between the alpha coefficients for elementary school (range .63 to .84, median = .80),
middle school (range .70 to .87, median = .80), and high school (range .70 to .86, median = .79) students;
between Caucasian (range .73 to .89, median = .84), African American (range .67 to .87, median = .79), and
Hispanic (range .66 to .88, median = .81) students; and between boys (range .71 to .88, median = .82) and
girls (range .69 to .89, median = .84). Across all subgroups, the lowest alpha coefficients were for Fairness of
Rules and the highest for Teacher–Student Relations.³
For the total score of the DSCS-S, consisting of the sum of raw scores on all items of the five subscales
(with items reflecting a negative climate reverse scored), high reliability was found across racial–ethnic,
gender, and grade-level groups (range .91 to .94, with an overall alpha of .94 for all students combined).
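Cronbach's alpha, the internal consistency coefficient reported throughout this section, can be computed directly from a respondents-by-items matrix. A minimal sketch with made-up responses, not the study's data:

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondent rows (equal-length item lists)."""
    k = len(rows[0])                  # number of items

    def var(xs):                      # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var([row[i] for row in rows]) for i in range(k))
    total_var = var([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Perfectly consistent toy responses give alpha = 1.0:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```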
³ We also examined reliability coefficients at each grade level, 3–12, with a particular concern about reliability at the lowest
grades. Compared to all higher grades, alpha coefficients at grades 3 and 4 were lower for the Teacher–Student Relationships,
Fairness of Rules, and School Safety subscales. However, for grades 3 and 4, respectively, those coefficients were .81 and .83 for
Teacher–Student Relationships and .78 and .77 for School Safety. It was only for Fairness of Rules that coefficients fell below .70 (.57
for grade 3 and .62 for grade 4), but, as noted previously, coefficients on this subscale tended to be low at all grade levels (below .70
at all but two grade levels). Thus, there was little evidence that the subscales were less reliable in the lower grades.
Table 7
Aggregated means and standard deviations of school climate, suspensions/expulsions, and achievement data by school level.

                                Elementary a      Middle and high b   Full sample
                                M       SD        M       SD          M       SD
School climate scores
  Teacher–Student Relations     26.52   1.30      21.54   1.56        24.93   2.71
  Student–Student Relations     10.38   1.07      8.95    .64         9.93    1.16
  Fairness of Rules             12.13   .69       9.96    .77         11.44   1.24
  School Safety                 9.60    .88       7.85    .79         9.04    1.18
  Liking of School              12.63   1.21      10.31   .93         11.89   1.56
  Total School Climate          71.56   4.89      58.79   4.09        67.50   7.56
Suspensions/expulsions          5.62    5.90      23.16   10.60       11.19   11.22
Academic achievement
  ELA achievement               79.66   8.45      72.56   9.25        77.40   9.27
  Math achievement              78.86   9.65      60.63   12.56       73.00   13.57

ELA = English-Language Arts.
a n = 58 schools.
b n = 27 schools.

3.3. Evidence of Concurrent Validity

3.3.1. Correlations with academic achievement and suspensions/expulsions
At the schoolwide level, using aggregated scores across all students within each school, we examined
correlations between DSCS-S scores, suspension and expulsion rates, and academic achievement. Means
and standard deviations are reported in Table 7. Due to small sample sizes, middle (n = 17) and high schools
(n = 10) were combined. As shown in Table 8, scores on the five subscales and the total scale tended to
correlate moderately, and in the expected directions, with academic achievement and suspension and
expulsion rates. With the exception of scores on the Student–Student Relations subscale correlating .24
with ELA at the middle and high school level, scores on all five subscales and the total scale correlated
significantly with academic achievement and with suspensions and expulsions.
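The school-level analysis described here aggregates student responses to school means and then correlates those means with school outcome rates. A sketch with hypothetical schools and scores; the inverse climate–suspension relation in the toy data mirrors the direction, though not the size, of the study's effects.

```python
from collections import defaultdict

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical student records: (school, total climate score).
students = [("A", 70), ("A", 74), ("B", 60), ("B", 62), ("C", 66), ("C", 68)]

by_school = defaultdict(list)
for school, score in students:
    by_school[school].append(score)

# Aggregate to school means, then correlate with a school-level outcome.
climate_means = [sum(v) / len(v) for v in by_school.values()]  # [72.0, 61.0, 67.0]
suspension_rates = [4.0, 25.0, 12.0]   # hypothetical % suspended/expelled

r = pearson_r(climate_means, suspension_rates)
print(round(r, 2))  # -1.0: higher climate, fewer suspensions in this toy data
```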
4. Discussion
The aim of this study was to develop a brief and psychometrically sound survey for school psychologists
and other educators to use in assessing school climate. In achieving this aim, multigroup confirmatory factor
analyses showed that the DSCS-S is best represented by a bifactor model consisting of five specific factors and
one general factor. The five specific factors are Teacher–Student Relations, Student–Student Relations, Fairness
of Rules, School Safety, and Liking of School. Results demonstrated configural, weak, and strong factorial
invariance across three grade levels (i.e., elementary, middle, and high school), racial–ethnic groups (i.e.,
Caucasian, African American, and Hispanic), and gender. Scores for each of the five subscales and the total score
also were found to be reliable across grade-level, racial–ethnic, and gender groups, with internal consistency
coefficients across those groups ranging from .63 to .89 for the subscales and from .91 to .94 for the total scale.
In support of the DSCS-S's concurrent validity, at the schoolwide level school climate scores on nearly all
subscales correlated positively with academic achievement and negatively with suspension and expulsion
rates. At the elementary level and for all grades combined, all correlations were appreciable and statistically
significant. All but one correlation (i.e., Student–Student Relations with ELA) was significant at the middle and
high school level. Most impressive were the correlation coefficients for the full sample, which ranged in
absolute value from .56 to .85, and particularly the correlations between the school climate subscales and
suspension and expulsion rates. For the full sample, Total School Climate scores correlated −.85 with
suspension and expulsion rates, reflecting a strong inverse relation between these two variables. This
correlation is confounded by the tendency of middle and high schools to have lower school climate scores and
higher suspension and expulsion rates than elementary schools. However, when this confound is controlled by
examining elementary schools separately from middle and high schools, the correlation coefficients decrease
in magnitude but remain appreciable (i.e., −.73 for elementary schools and −.66 for middle and high schools).

Table 8
Correlations between school climate and academic achievement and suspensions/expulsions.

                             Elementary schools a      Middle and high schools b   Full sample
                             ELA    Math   S/E         ELA    Math   S/E           ELA    Math   S/E
Teacher–Student Relations    .59    .53    −.72        .40    .64    −.48          .56    .77    −.83
Student–Student Relations    .72    .63    −.67        .24    .40    −.68          .66    .70    −.75
Fairness of Rules            .63    .55    −.67        .42⁎   .58    −.42          .59    .76    −.81
School Safety                .76    .68    −.74        .45    .66    −.70          .69    .81    −.84
Liking of School             .72    .66    −.71        .44    .69    −.60          .68    .80    −.81
Total School Climate         .72    .65    −.73        .47    .72    −.66          .65    .81    −.85

ELA = English-Language Arts. S/E = suspensions and expulsions.
a n = 58 schools.
b n = 27 schools.
⁎⁎ p < .01.
⁎ p < .05.
Results did not support the inclusion of the proposed sixth subscale, Student Conduct Problems, with items
assessing bullying, fighting, harming others, stealing, cheating, and drugs. Not only did a bifactor model with
five specific factors (excluding Student Conduct Problems) and a general factor provide the strongest model
fit, but the proposed Student Conduct Problems factor also correlated poorly with the other five factors
(with r's ranging from .02 to .24), indicating that the proposed subscale did not measure the same construct
measured by the other five subscales. Thus, we discourage schools from including the items on the original
DSCS-S that were intended to measure student perceptions of schoolwide student conduct problems as a
dimension of school climate. We plan to delete those items in a future revision of the survey and to use a
scale separate from school climate to assess student conduct problems when such assessment is desired.
4.1. Limitations and Future Research
Whereas the limited number of items (29) on the DSCS-S is a practical strength, requiring only about
15–20 min for most students to complete, it also limits the survey's scope and reliability. The DSCS-S
assesses only five aspects of school climate, primarily the aspects of social support and structure that are
of particular focus in SWPBS and bullying prevention programs. It was not designed to tap aspects of school
climate included in some other surveys. As discussed previously, social support and structure are the two
dimensions of school climate found most commonly on school climate surveys. However, two other dimensions
also are often found: (a) teaching and learning and (b) environmental–structural (Cohen et al., 2009). With
respect to providing a comprehensive survey of school climate, which was not the intention of the current
study, the exclusion of those two dimensions may be viewed as a limitation of the DSCS-S.
The small number of items also limits the DSCS-S's internal reliability, especially for the subscales with the
fewest items (e.g., one subscale has three items and three subscales have four items). Despite the small
number of items, for the total sample, internal reliability coefficients ranged from .70 to .88 across the five
subscales, and the coefficient was .94 for the total scale. Internal reliabilities above .60 are considered
marginal, above .70 adequate, and above .80 high (Cicchetti & Sparrow, 1981). Although it consists of only
three items, the School Safety subscale yielded a coefficient of .83. When examining coefficients for each of
the five subscales across subgroups based on grade level, gender, and race, coefficients fell below .70 only
for the Fairness of Rules subscale (.63 for elementary students, .67 for African Americans, .66 for Hispanics,
and .69 for girls). We plan to add items to the Fairness of Rules subscale, including several that tap clarity
of expectations and rules, to strengthen its internal reliability.
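For readers who wish to compute such internal consistency estimates from their own raw item responses, coefficient (Cronbach's) alpha can be obtained as in the following minimal sketch. The item data here are invented for illustration and are not from the DSCS-S:

```python
# Sketch of computing coefficient alpha from raw item responses
# (invented data, not from the DSCS-S).

def cronbach_alpha(items):
    """items: one list of scores per item; all lists equal length."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs):              # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(item) for item in items)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Three hypothetical items rated 1-4 by six students
items = [
    [4, 3, 2, 4, 3, 1],
    [4, 3, 3, 4, 2, 1],
    [3, 4, 2, 4, 3, 2],
]
alpha = cronbach_alpha(items)  # roughly .89 for these invented data
```

Because alpha grows with the number of items, adding items to a short subscale, as planned for Fairness of Rules, is a direct way to raise it.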
Another limitation of the study was the sampling procedure employed. Schools were not randomly
selected; they volunteered to participate. Moreover, within-school sampling occurred at the classroom
level, not at the individual student level. Although schools were directed to select classrooms randomly, the
extent to which this occurred is unknown. Because each school was given 200 surveys to administer,
irrespective of school enrollment, a larger percentage of students participated in smaller schools than in
larger schools. Thus, it is unlikely that the study included a truly random sample of students in Delaware.
However, as noted previously, the demographics of the final sample closely matched those of the general
student population in the state.
Other limitations of the DSCS-S, recommendations for its use, and suggestions for future research are the
same ones that apply to many other studies and surveys of school climate. Caution is warranted in generalizing


the results to other populations, and one should not interpret the correlations among variables as implying
causation. Longitudinal studies are needed to examine causal relations as well as to establish the survey's
stability over time. Likewise, additional reliability and validity studies are needed to support the DSCS-S's
psychometric properties, including test–retest reliability, convergent validity (e.g., correlations with other
surveys of school climate), predictive validity (e.g., predicting changes in achievement over time),
discriminant validity (e.g., examining whether school climate scores discriminate between schools with high
and low achievement or school suspensions), and additional studies of concurrent validity (e.g., correlations
with other social and emotional outcomes such as bullying, friendships, and self-concept). Studies employing
multilevel analyses are especially needed to examine differences in variance in school climate across the
individual, classroom, and school levels. Finally, the survey's sensitivity to changes in school climate should
be explored. Related to this recommendation, another possible limitation of the current study, which applies
to most other studies of school climate, is that it is unclear to what extent scores were affected by prevention
programs being implemented in the participating schools. Although many schools in Delaware were
implementing SWPBS, and all had some type of bullying prevention program, we do not believe that
prevention efforts differed from those found in most other states, with schools varying greatly in types of
programs (including variations within SWPBS schools) and in the fidelity of implementation.
4.2. Implications for School Psychologists
The DSCS-S offers researchers and practitioners in school psychology and education a brief survey of
school climate that is valid, reliable, and free to the public. The survey should be particularly useful in
evaluating the effectiveness of schoolwide prevention and intervention programs designed to improve
relationships among teachers and students, school safety, the fairness of school rules, and students' liking of
school. These programs include ones for preventing bullying and school violence, as well as more general
programs for school and self-discipline (Bear, 2010), social and emotional learning (see CASEL.org, the
website for the Collaborative for Academic, Social, and Emotional Learning), schoolwide positive behavioral
interventions and supports (see PBIS.org, the website for the Technical Assistance Center for Positive
Behavioral Interventions and Supports), and character education (see character.org, the website for the
Character Education Partnership). Each of those programs targets school climate in its interventions and
also views school climate as an important measurable outcome.
With training in prevention, consultation, child and adolescent development, and program evaluation, school
psychologists should be active members of schoolwide teams charged with implementing those programs. As
members of those teams, school psychologists are in an excellent position to advocate for evidence-based
interventions that improve school climate as a means of achieving other positive outcomes, such as decreased
bullying and violence and enhanced social, emotional, and academic learning. Likewise, they are in an excellent
position to argue that, in many school prevention programs, student perceptions of school climate are as
important as, or more important than, other outcomes more commonly evaluated in the schools, such as office
disciplinary referrals, suspensions and expulsions, and teacher ratings of student behavior. That is, regardless of
the number of behavior problems reported by the school, students may or may not like their school or perceive it
favorably with respect to safety, fairness, and relationships with teachers and students (Arum, 2003; Bear, 2010),
perceptions that have been shown to be related to a number of important academic, social, and emotional outcomes.
References
American Psychological Association Zero Tolerance Task Force (2008). Are zero tolerance policies effective in the schools? An evidentiary review and recommendations. American Psychologist, 63, 852–862.
Anderson, C. S. (1982). The search for school climate: A review of the research. Review of Educational Research, 52, 368–420.
Arum, R. (2003). Judging school discipline: The crisis of moral authority. Cambridge, MA: Cambridge University Press.
Asparouhov, T., & Muthén, B. (2010). Computing the strictly positive Satorra–Bentler chi-square test in Mplus. Mplus Web Notes: No. 12. Retrieved November 20, 2010, from http://www.statmodel.com/examples/webnotes/webnote12.pdf
Astor, R. A., Benbenishty, R., Zeira, A., & Vinokur, A. (2002). School climate, observed risky behaviors, and victimization as predictors of high school students' fear and judgments of school violence as a problem. Health Education & Behavior, 29, 716–736.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Upper Saddle River, NJ: Prentice-Hall.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Barnett, R. V., Easton, J., & Israel, G. D. (2002). Keeping Florida's children safe in school: How one state designed a model safe school climate survey. School Business Affairs, 68, 31–38.


Battistich, V., & Horn, A. (1997). The relationship between students' sense of their school as a community and their involvement in problem behaviors. American Journal of Public Health, 87, 1997–2001.
Battistich, V., Solomon, D., Kim, D., Watson, M., & Schaps, E. (1995). Schools as communities, poverty levels of student populations, and students' attitudes, motives, and performance: A multilevel analysis. American Educational Research Journal, 32, 627–658.
Battistich, V., Solomon, D., Watson, M., & Schaps, E. (1997). Caring school communities. Educational Psychologist, 32, 137–151.
Baumrind, D. (1971). Current patterns of parental authority. Developmental Psychology Monographs, 4, 1–103.
Baumrind, D. (1996). The discipline controversy revisited. Family Relations, 45, 405–414.
Bear, G. G. (2010). School discipline and self-discipline: A practical guide to promoting prosocial student behavior. New York: Guilford Press.
Bear, G. G. (with A. Cavalier & M. Manning) (2005). Developing self-discipline and preventing and correcting misbehavior. Boston, MA: Allyn & Bacon.
Berkowitz, M. W., & Schwartz, M. (2006). Character education. In G. G. Bear, & K. M. Minke (Eds.), Children's needs III: Development, prevention, and intervention (pp. 15–27). Bethesda, MD: National Association of School Psychologists.
Brand, S., Felner, R., Shim, M., Seitsinger, A., & Dumas, T. (2003). Middle school improvement and reform: Development and validation of a school-level assessment of climate, cultural pluralism, and school safety. Journal of Educational Psychology, 95, 570–588.
Bronfenbrenner, U. (1979). The ecology of human development. Cambridge, MA: Harvard University Press.
Brophy, J. E. (1996). Teaching problem students. New York: Guilford Press.
Bru, E., Stephens, P., & Torsheim, T. (2002). Students' perceptions of class management and reports of their own misbehavior. Journal of School Psychology, 40, 287–307.
Buhs, E. S., Ladd, G. W., & Herald, S. L. (2006). Peer exclusion and victimization: Processes that mediate the relation between peer group rejection and children's classroom engagement and achievement? Journal of Educational Psychology, 98, 1–13.
Center for Social and Emotional Education (2009). Comprehensive school climate inventory. Retrieved September 11, 2009, from http://www.schoolclimate.org/programs/csci.php
Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14, 464–504.
Chen, F. F., & West, S. G. (2008). Measuring individualism and collectivism: The importance of considering differential components, reference groups, and measurement invariance. Journal of Research in Personality, 42, 259–294.
Child Development Project (1993). Liking for school. Oakland, CA: Developmental Studies Center.
Cicchetti, D. V., & Sparrow, S. S. (1981). Developing criteria for establishing interrater reliability of specific items: Applications to assessment of adaptive behavior. American Journal of Mental Deficiency, 86, 127–137.
Cohen, J., McCabe, E. M., Michelli, N. M., & Pickeral, T. (2009). School climate: Research, policy, practice, and teacher education. Teachers College Record, 111, 180–213.
Connell, J. P., & Wellborn, J. G. (1991). Competence, autonomy, and relatedness: A motivational analysis of self-system processes. In M. R. Gunnar, & L. A. Sroufe (Eds.), Self processes and development. The Minnesota symposium on child psychology (pp. 43–77). Mahwah, NJ: Erlbaum.
Croninger, R. G., & Lee, V. E. (2001). Social capital and dropping out of high school: Benefits to at-risk students of teachers' support and guidance. Teachers College Record, 103, 548–581.
Danielsen, A. G., Wiium, N., Wilhelmsen, B. U., & Wold, B. (2010). Perceived support provided by teachers and classmates and students' self-reported academic initiative. Journal of School Psychology, 48, 247–267.
Davidson, L. M., & Demaray, M. K. (2007). Social support as a moderator between victimization and internalizing–externalizing distress from bullying. School Psychology Review, 36, 383–405.
Delaware Department of Education (2009). Delaware Student Testing Program. Retrieved May 4, 2009, from http://www.doe.k12.de.us/aab/
Ding, C., & Hall, A. (2007). Gender, ethnicity, and grade differences in perceptions of school experiences among adolescents. Studies in Educational Evaluation, 33, 159–174.
Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students' social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82, 474–501.
Eccles, J. S., Midgley, C., Buchanan, C. M., Wigfield, A., Reuman, D., & Mac Iver, D. (1993). Development during adolescence: The impact of stage/environment fit. American Psychologist, 48, 90–101.
Emmons, C., Haynes, N. M., & Comer, J. P. (2002). School climate survey: Elementary and middle school version (Revised edition). New Haven, CT: Yale University Child Study Center.
Fisher, D. L., & Fraser, B. J. (1983). A comparison of actual and preferred classroom environments as perceived by science teachers and students. Journal of Research in Science Teaching, 20, 55–61.
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74, 59–109.
Furlong, M. J., Greif, J. L., Bates, M. P., Whipple, A. D., Jimenez, T. C., & Morrison, R. (2005). Development of the California School Climate and Safety Survey-Short Form. Psychology in the Schools, 42, 137–149.
Goldstein, S. E., Young, A., & Boyd, C. (2008). Relational aggression at school: Associations with school safety and social climate. Journal of Youth and Adolescence, 37, 641–654.
Gottfredson, G. D. (1999). User's manual for the Effective School Battery. Ellicott City, MD: Gottfredson Associates.
Gottfredson, G. D., Gottfredson, D. C., Payne, A. A., & Gottfredson, N. C. (2005). School climate predictors of school disorder: Results from a national study of delinquency prevention in schools. Journal of Research in Crime and Delinquency, 42, 412–444.
Gregory, A., & Cornell, D. (2009). Tolerating adolescent needs: Moving beyond zero tolerance policies in high school. Theory into Practice, 48, 106–113.
Gregory, A., Cornell, D., Fan, X., Sheras, P., Shih, T., & Huang, F. (2010). High school practices associated with lower student bullying and victimization. Journal of Educational Psychology, 102, 483–496.
Gregory, A., & Weinstein, R. S. (2004). Connection and regulation at home and in school: Predicting growth in achievement for adolescents. Journal of Adolescent Research, 19, 405–427.
Griffith, J. (1995). An empirical examination of a model of social climate in elementary schools. Basic and Applied Social Psychology, 17, 97–117.
Griffith, J. (1999). School climate as social order and social action: A multi-level analysis of public elementary school student perceptions. Social Psychology of Education, 2, 339–369.
Hamre, B. K., Pianta, R. C., Downer, J. T., & Mashburn, A. J. (2008). Teachers' perceptions of conflict with young students: Looking beyond problem behaviors. Social Development, 17, 115–136.


Haynes, N. M., Emmons, C., & Ben-Avie, M. (1997). School climate as a factor in student adjustment and achievement. Journal of Educational and Psychological Consultation, 8, 321–329.
Haynes, N. M., Emmons, C., & Comer, J. P. (1994). School Climate Survey: Elementary and Middle School Version. New Haven, CT: Yale University Child Study Center.
Horner, R., & Sugai, G. (2007). Is School-wide Positive Behavior Support an evidence-based practice? Retrieved March 18, 2008, from http://www.pbis.org
Horner, R. H., Sugai, G., Smolkowski, K., Eber, L., Nakasato, J., Todd, A. W., et al. (2009). A randomized, wait-list controlled effectiveness trial assessing School-wide Positive Behavior Support in elementary schools. Journal of Positive Behavior Interventions, 11, 133–144.
Hu, L., & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods, 3, 424–453.
Hughes, J. N., Cavell, T. A., & Wilson, V. (2001). Further support for the developmental significance of the quality of the teacher–student relationship. Journal of School Psychology, 39, 289–301.
Irvin, L. K., Tobin, T. J., Sprague, J. R., Sugai, G., & Vincent, C. G. (2004). Validity of office discipline referral measures as indices of school-wide behavioral status and effects of school-wide behavioral interventions. Journal of Positive Behavior Interventions, 6, 131–147.
Jessor, R., Turbin, M. S., Costa, F. M., Dong, Q., Zhang, H., & Wang, C. (2003). Adolescent problem behavior in China and the United States: A cross-national study of psychosocial protective factors. Journal of Research on Adolescence, 13, 329–360.
Jimerson, S. R., & Furlong, M. (Eds.). (2006). Handbook of school violence and school safety: From research to practice. Mahwah, NJ: Erlbaum.
Kincaid, J. P., Fishburne, R. P., Jr., Rogers, R. L., & Chissom, B. S. (1975). Derivation of new readability formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy enlisted personnel (Research Branch Report 8-75). Millington, TN: Naval Technical Training, U.S. Naval Air Station, Memphis.
Kitsantas, A., Ware, H. W., & Martinez-Arias, R. (2004). Students' perceptions of school safety: Effects by community, school environment, and substance use variables. The Journal of Early Adolescence, 24, 412–430.
Kuperminc, G. P., Leadbeater, B. J., & Blatt, S. J. (2001). School social climate and individual differences in vulnerability to psychopathology among middle school students. Journal of School Psychology, 39, 141–159.
Ladd, G. W., & Price, J. M. (1987). Predicting children's social and school adjustment following the transition from preschool to kindergarten. Child Development, 58, 1168–1189.
Lamborn, S. D., Mounts, N. S., Steinberg, L., & Dornbusch, S. M. (1991). Patterns of competence and adjustment among adolescents from authoritative, authoritarian, indulgent, and neglectful families. Child Development, 62, 1049–1065.
McIntosh, K., Frank, J. L., & Spaulding, S. A. (2010). Establishing research-based trajectories of office discipline referrals for individual students. School Psychology Review, 39, 380–394.
Meredith, W. (1993). Measurement invariance, factor analysis, and factorial invariance. Psychometrika, 58, 525–543.
Merrell, K. W., Gueldner, B. A., Ross, S. W., & Isava, D. M. (2008). How effective are school bullying intervention programs? A meta-analysis of intervention research. School Psychology Quarterly, 23, 26–42.
Moos, R. H., & Trickett, E. (1974). The Classroom Environment Scale manual. Palo Alto, CA: Consulting Psychologists Press.
Moos, R. H., & Trickett, E. (2002). The Classroom Environment Scale (CES) manual (3rd ed.). Menlo Park, CA: Mind Garden.
Morrison, G. M., Redding, M., Fisher, E., & Peterson, R. (2006). Assessing school discipline. In S. R. Jimerson, & M. J. Furlong (Eds.), Handbook of school violence and school safety: From research to practice (pp. 211–220). Mahwah, NJ: Erlbaum.
Muthén, L. K., & Muthén, B. O. (1998–2008). Mplus Version 5.2 [Computer software]. Los Angeles, CA: Muthén & Muthén.
Nansel, T. R., Overpeck, M., Pilla, R. S., Ruan, W. J., Simons-Morton, B., & Scheidt, P. (2001). Bullying behaviors among US youth: Prevalence and associations with psychosocial adjustment. Journal of the American Medical Association, 285, 2094–2100.
Osterman, K. F. (2000). Students' need for belonging in the school community. Review of Educational Research, 70, 323–367.
Resnick, M. D., Bearman, P. S., Blum, R. W., Bauman, K. E., Harris, K. M., Jones, J., et al. (1997). Protecting adolescents from harm: Findings from the National Longitudinal Study on Adolescent Health. Journal of the American Medical Association, 278, 823–832.
Rueger, S. Y., Malecki, C. K., & Demaray, M. K. (2008). Gender differences in the relationship between perceived social support and student adjustment during early adolescence. School Psychology Quarterly, 23, 496–514.
Sailor, W., Dunlap, G., Sugai, G., & Horner, R. (Eds.). (2009). Handbook of positive behavior support. New York: Springer.
Stockard, J., & Mayberry, M. (1992). Effective educational environments. Newbury Park, CA: Corwin.
Swearer, S. M., Espelage, D. L., Vaillancourt, T., & Hymel, S. (2010). What can be done about school bullying? Linking research to educational practice. Educational Researcher, 39, 38–47.
Thompson, B. (2004). Exploratory and confirmatory factor analysis: Understanding concepts and applications. Washington, DC: American Psychological Association.
Thompson, D. R., Iachan, R., Overpeck, M., Ross, J. G., & Gross, L. A. (2006). School connectedness in the Health Behavior in School-Aged Children study: The role of student, school, and school neighborhood characteristics. Journal of School Health, 76, 379–386.
Trickett, E. J., & Quinlan, D. M. (1979). Three domains of classroom environment: Factor analysis of the Classroom Environment Scale. American Journal of Community Psychology, 7, 279–291.
Way, N., Reddy, R., & Rhodes, J. (2007). Students' perceptions of school climate during the middle school years: Associations with trajectories of psychological and behavioral adjustment. American Journal of Community Psychology, 40, 194–213.
Welsh, W. N. (2000). The effects of school climate on school disorder. Annals of the American Academy of Political and Social Science, 567, 88–107.
Welsh, W. N. (2003). Individual and institutional predictors of school disorder. Youth Violence and Juvenile Justice, 1, 346–368.
Wentzel, K. R. (1996). Social and academic motivation in middle school: Concurrent and long-term relations to academic effort. Journal of Early Adolescence, 16, 390–406.
Widaman, K. F., & Reise, S. P. (1997). Exploring the measurement invariance of psychological instruments: Applications in the substance use domain. In K. J. Bryant, M. Windle, & S. G. West (Eds.), The science of prevention: Methodological advances from alcohol and substance abuse research (pp. 281–324). Washington, DC: American Psychological Association.
Wilson, S. J., & Lipsey, M. W. (2007). School-based interventions for aggressive and disruptive behavior: Update of a meta-analysis. American Journal of Preventive Medicine, 33(Suppl. 2S), 130–143.
Wright, J. A., & Dusek, J. B. (1998). Compiling school base rates for disruptive behaviors from student disciplinary referral data. School Psychology Review, 27, 138–147.
Zins, J. E., & Elias, M. J. (2006). Social and emotional learning. In G. G. Bear, & K. M. Minke (Eds.), Children's needs III: Development, prevention, and intervention (pp. 1–13). Bethesda, MD: National Association of School Psychologists.
