
Practice: Get quality data into your evaluator’s hands

Key Action: Troubleshoot data issues as they emerge

TOOL: Anticipating Data Collection Issues for Rigorous Evaluation

Purpose: As you plan and begin your data collection, it is important to anticipate issues common to rigorous evaluations of magnet programs. Use this tool to think about and address specific issues that may negatively impact your evaluation findings.

Instructions: 1. Review the list to learn about common data collection issues that may negatively impact your evaluation findings.

2. Based on the context of your particular district and your understanding of the data collection issues, identify the issues most likely to affect your evaluation that require proactive problem solving.

3. Determine how your evaluation team will take measures to address these issues and/or build in checkpoints to routinely monitor them throughout the process of data collection.

Anticipating Data Collection Issues for Rigorous Evaluation

For each issue below, use the worksheet columns to record: Rigorous evaluation issue | How this may impact our evaluation | Ways to address the issue
Lack of treatment fidelity: To accurately assess the impact of the magnet program, you must determine the extent to which key elements (or “treatment”) of a magnet program were implemented as originally intended. Before you begin data collection, figure out how you will measure “treatment fidelity,” or the degree to which planned program activities are conducted and participants are reached. In other words, be clear about how you will know if you are doing what you said you would do.
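One simple way to put a number on treatment fidelity is the share of planned activities actually delivered and the share of intended participants actually reached. The sketch below is illustrative only; the figures and the helper function are hypothetical, not part of the source guide.

```python
# Illustrative fidelity calculation with hypothetical numbers.
def fidelity_rate(delivered, planned):
    """Fraction of planned units (activities or participants) realized."""
    return delivered / planned

# Hypothetical example: 18 of 24 planned activities were held,
# and 310 of 400 intended students were reached.
activities = fidelity_rate(18, 24)
reach = fidelity_rate(310, 400)
print(round(activities, 2), round(reach, 2))  # 0.75 0.78
```

Tracking these two rates separately can reveal whether a shortfall comes from activities not happening or from activities missing their intended audience.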

Treatment “cross-over” in control or comparison group: The control or comparison group may engage in some or all of the elements of the magnet program being evaluated. Indeed, few schools or classrooms are intervention-free, and it is difficult to anticipate all the ways best educational practices may reach students. It is also difficult to predict ahead of time which schools will adopt what programs within the time frame of the evaluation study. For this reason, it is important to be clear up front with control and comparison schools about the purpose of the evaluation and the need to limit treatment crossover. You will also need to keep regular, close contact with all groups to document what they are doing.

Selection bias: In a quasi-experimental design, where students are not selected through random assignment, statistical techniques may account for some differences between the treatment and comparison groups. Evaluators will try to make the best matches between schools and individual students to control for variables. However, when students choose to participate in magnet programs, it can be difficult to eliminate the effect of this selection bias on student outcomes. You need to minimize the effect of difficult-to-measure differences between your treatment and comparison group (e.g., level of motivation, parent involvement).
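One common matching technique evaluators may use is nearest-neighbor matching on a baseline covariate, such as a prior test score. The sketch below is a simplified illustration with hypothetical data, not the specific method the guide prescribes; real evaluations typically match on many covariates at once (e.g., via propensity scores).

```python
# Illustrative greedy nearest-neighbor matching on one baseline covariate.
def nearest_neighbor_match(treatment_scores, comparison_scores):
    """Pair each treatment student with the comparison student whose
    baseline score is closest, without reusing comparison students."""
    available = dict(enumerate(comparison_scores))  # index -> score
    matches = {}
    for t_idx, t_score in enumerate(treatment_scores):
        # Pick the unused comparison student with the smallest score gap.
        c_idx = min(available, key=lambda i: abs(available[i] - t_score))
        matches[t_idx] = c_idx
        del available[c_idx]
    return matches

# Hypothetical baseline test scores
treatment = [82, 74, 90]
comparison = [70, 85, 88, 73, 95]
print(nearest_neighbor_match(treatment, comparison))  # {0: 1, 1: 3, 2: 2}
```

Matching like this can balance measured characteristics, but as the paragraph above notes, it cannot remove differences on unmeasured factors such as motivation or parent involvement.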

Adapted from: U.S. Department of Education, Office of Safe and Drug-Free Schools. (2007). Mobilizing for evidence-based character education (p. 36). Washington, DC: Author. The entire guide can be downloaded at www.ed.gov/programs/charactered/mobilizing.pdf (last accessed December 10, 2008).
Attrition: The loss of students, parents, and teachers, as well as entire schools, can threaten an experimental or quasi-experimental design by leaving evaluators with too few participants to detect statistically significant effects. It may be difficult to predict whether students and schools will continue to participate for the full period over which the evaluation needs data. Attrition of students is common for many magnet programs, which tend to serve urban areas that traditionally experience high rates of student mobility. Urban districts also face school closures because of program improvement policies or declining enrollment, which may mean the loss of original comparison or control schools. It is important to “over-recruit” (or “over-sample”) students and schools at the beginning of the study to anticipate attrition.
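The over-recruiting arithmetic is straightforward: inflate the target sample so that, after the expected share drops out, enough participants remain for analysis. The numbers in the sketch below are hypothetical.

```python
# Illustrative over-sampling calculation with hypothetical numbers.
import math

def recruits_needed(final_sample_needed, expected_attrition_rate):
    """Number of students to recruit so that, after the expected share
    drops out, at least final_sample_needed remain for analysis."""
    return math.ceil(final_sample_needed / (1 - expected_attrition_rate))

# e.g., the analysis needs 200 students and 25% attrition is expected
print(recruits_needed(200, 0.25))  # 200 / 0.75 -> 267
```

In practice the expected attrition rate would come from the district's historical mobility data, and the final sample size from the evaluator's power analysis.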

Design breakdown: Execution problems—such as the failure to collect data appropriately or within the set time frame—will interfere with even the best-laid plans. Well-researched planning will help ensure your data collection plan is feasible, and frequent check-ins with the evaluation team will make sure you stay on schedule and troubleshoot any issues that emerge.

Consent bias: Some people will decline to participate in a study, and those who do not participate may have different characteristics from the people who consent to take part. For this reason, consent forms should include the option to decline and an accompanying request for minimal background information relevant to the study’s objectives. You will then be able to document differences between those who decline and those who participate. The best way to reduce this bias is to encourage everyone’s participation in the study and to conduct random assignment after a sufficient percentage of consents has been obtained.
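Documenting differences between consenters and decliners can be as simple as comparing their average background characteristics. The sketch below uses hypothetical prior test scores as the minimal background measure; the data and comparison are illustrative, not drawn from the source guide.

```python
# Illustrative comparison of consenters vs. decliners (hypothetical data).
from statistics import mean

# Prior test scores collected via the consent form's background questions
consented_prior_scores = [78, 85, 72, 90, 81]
declined_prior_scores = [65, 70, 74]

gap = mean(consented_prior_scores) - mean(declined_prior_scores)
print(round(gap, 1))  # 11.5
```

A sizable gap like this would signal that consenters differ systematically from decliners, which the evaluator should report when interpreting the study's findings.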

