
Practice: Take action in response to evaluation results

Key Action: Discuss your findings with different stakeholders

SAMPLE MATERIAL: Formats Used to Display Data Results

Purpose: This excerpt demonstrates the ways in which different data displays (tables, charts, graphs) can highlight different types of evaluation findings by making the data accessible and their meaning clear. It provides a framework that can help you determine the most effective ways to display data from your evaluation.

Note: The questions below may help you determine the most effective format for sharing evaluation findings with your stakeholders.

1. What are the key data points, statements, or findings from your evaluation?

2. Which data display format from the article do you think would be the most effective to display each finding? What information will you need to include in the chart, table, or graph to ensure the audience has sufficient information to correctly interpret the findings?

3. Will each stakeholder group need the same information? If they do, will the same data display work equally well for each group?

Source: Building Evaluation Capacity: Guide 2: Collecting and Using Data in Cross-Project Evaluations (pp. 21-25) by P. B. Campbell and B. C. Clewell (2008). Washington, DC: The Urban Institute. The entire guide can be downloaded at http://www.urban.org/publications/411651.html (last accessed on December 10, 2008).


Displaying Data

Reporting and displaying data are two sides of the same coin. Even minor changes in how data are displayed can have implications for the conclusions drawn. The following provides examples of the impact of presenting data in different ways.

As the following example indicates, the interpretation of the data can vary based on whether mean change is displayed versus the percentage of people changing. Table 4 shows that after participating in the program, teachers and professors were more apt to use student-centered pedagogical techniques.

Table 4. Change in Individual Instructor Use of Student-Centered Pedagogical Techniques

          Mean   SD
  Pre     2.3    0.5
  Post    2.2    0.5

Note: Scale: 1 = almost always to 4 = never; N = 547.

However, even though the data are the same, table 5 shows a different interpretation of the results. A majority of the teachers are changing in the desired direction, but more than a third are changing in undesired ways.

Table 5. Number and Percentage of Individual Instructors Changing Their Use of Student-Centered Pedagogical Techniques

            Positive change   No change   Negative change
  Number    301               54          192
  Percent   55%               10%         35%
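To make the contrast concrete, here is a minimal sketch of how both summaries can be computed from the same set of individual pre/post scores. It is an illustration only: the scores are made-up placeholders, not the study's data, and the guide does not prescribe any particular tool.

```python
# Minimal sketch: two summaries of the same pre/post data.
# The scores below are hypothetical stand-ins
# (scale: 1 = almost always to 4 = never).
import numpy as np

pre = np.array([3, 2, 2, 3, 1, 4, 2, 3])    # hypothetical pre scores
post = np.array([2, 2, 3, 2, 1, 3, 3, 2])   # hypothetical post scores

# Summary 1 (table 4 style): mean and standard deviation before and after.
print(f"Pre:  mean={pre.mean():.1f}, SD={pre.std(ddof=1):.1f}")
print(f"Post: mean={post.mean():.1f}, SD={post.std(ddof=1):.1f}")

# Summary 2 (table 5 style): how many individuals moved in each direction.
# On this scale a lower post score means more frequent use, so a drop in
# score counts as positive change.
diff = post - pre
n = len(diff)
print(f"Positive change: {np.sum(diff < 0) / n:.0%}")
print(f"No change:       {np.sum(diff == 0) / n:.0%}")
print(f"Negative change: {np.sum(diff > 0) / n:.0%}")
```

The two printouts can tell noticeably different stories even though they describe the same individuals, which is the point the tables above illustrate.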

Means of Individuals versus Means of Individuals by Project

The earlier data were presented by computing means and frequency counts for individual instructors. Since the number of participating instructors varied across projects, the results may be different when they are computed based on a project's means or summaries rather than individual means or summaries, although in this case the results were very similar. Tables 6 and 7 report the data averaged across projects rather than individually.

The question is not whether one method is better than the other; the question is which method better suits your needs. Regardless of which computation is used, it is important to indicate the choice in the table, graph, or text.

Table 6. Change in Instructor Use of Student-Centered Pedagogical Techniques, Averaged across Projects

          Mean   SD
  Pre     2.3    0.5
  Post    2.2    0.5

Note: Scale: 1 = almost always to 4 = never; N = 547 instructors in 30 projects.

Table 7. Change in the Percentage of Instructors Changing Their Use of Student-Centered Pedagogical Techniques, Averaged across Projects

            Positive change   No change   Negative change
  Percent   59%               8%          33%

Note: N = 30 projects.
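A short sketch may help show the two computations this section contrasts: averaging over all individual instructors versus first averaging within each project and then averaging the project means. The column names and records below are hypothetical placeholders, not the guide's dataset.

```python
# Sketch of the two aggregation choices, using made-up records.
import pandas as pd

df = pd.DataFrame({
    "project": ["A", "A", "A", "B", "B", "C"],   # hypothetical project IDs
    "pre":     [2.0, 3.0, 2.0, 3.0, 2.0, 4.0],
    "post":    [2.0, 2.0, 1.0, 3.0, 2.0, 3.0],
})

# Option 1: means computed over individual instructors.
print("Individual-level means:")
print(df[["pre", "post"]].mean())

# Option 2: compute each project's mean first, then average the project means.
# Small projects now carry as much weight as large ones.
project_means = df.groupby("project")[["pre", "post"]].mean()
print("Project-level means (averaged across projects):")
print(project_means.mean())
```

When instructor counts differ sharply across projects, the two options can diverge; reporting which one was used, as the text advises, lets readers interpret the table correctly.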


Effect Size

Including the statistical significance of reported results is useful. However, statistical significance represents the probability that an observed difference exists and is not due to chance. It does not say anything about the size or meaningfulness of a result. Another measure might be included: effect size. The effect size, which can only be computed over statistically significant differences, shows how big the difference is. Effect sizes greater than 0.4 are considered large; between 0.2 and 0.4, moderate; and less than 0.2, small (Glass, McGaw, and Smith 1981). More on effect sizes and how to compute them can be found at http://www.coe.tamu.edu/~bthompson/effect.html (Thompson 1997).
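As a rough illustration of the idea, the sketch below computes a standardized mean-difference effect size and labels it with the thresholds cited above. The pooled-standard-deviation formula is one common choice assumed here for illustration, not necessarily the computation the guide or Thompson (1997) prescribes, and the two groups are made-up numbers.

```python
# Rough sketch: a standardized mean-difference effect size, classified
# with the thresholds cited above (Glass, McGaw, and Smith 1981).
# The pooled-SD formula and the data are illustrative assumptions.
import numpy as np

group_pre = np.array([2.1, 2.5, 2.4, 2.2, 2.6, 2.3])    # hypothetical scores
group_post = np.array([2.0, 2.2, 2.1, 2.0, 2.3, 2.1])

mean_diff = group_pre.mean() - group_post.mean()
pooled_sd = np.sqrt((group_pre.var(ddof=1) + group_post.var(ddof=1)) / 2)
effect_size = mean_diff / pooled_sd

if effect_size > 0.4:
    label = "large"
elif effect_size >= 0.2:
    label = "moderate"
else:
    label = "small"

print(f"Effect size = {effect_size:.2f} ({label})")
```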
Level of Disaggregation

Looking at overall cross-project impact is useful for seeing the big picture, but it doesn't allow you to see the variability by project. As figures 8 and 9 indicate, even though the data are the same, the interpretation is quite different based on the level of disaggregation. Figure 9 shows that some projects are more effective than others in producing positive change. (On a side note, including the number of instructors in each project is important, as we're disaggregating data into groups with small Ns.)

[Figure 8. Instructor Change in Pedagogical Approach: a single stacked bar showing the percentage of negative change, no change, and positive change across all 14 projects (176 instructors).]

[Figure 9. Instructor Change in Pedagogical Approach, by Project: stacked bars of the percentage of positive, no, and negative change for Projects A through N, with instructor counts of 10, 12, 5, 12, 2, 23, 15, 10, 18, 3, 8, 18, 1, and 39, respectively.]


Notice that although Project M reports 100 percent positive change, it also had only one participating instructor.

The same data from the previous disaggregated chart can also be presented in a table (table 8). In general, tables are more space effective but do not have the same visual impact as a well-constructed chart.

Table 8. Number and Percentage of Instructors Changing Their Use of Student-Centered Pedagogical Techniques, by Project

              Number of     % Positive   % No     % Negative
              instructors   change       change   change
  Project A   10            40           0        60
  Project B   12            75           0        25
  Project C   5             60           20       20
  Project D   12            67           0        33
  Project E   2             50           0        50
  Project F   23            52           13       35
  Project G   15            53           7        40
  Project H   10            80           10       10
  Project I   18            50           0        50
  Project J   3             67           0        33
  Project K   8             75           0        25
  Project L   18            44           0        56
  Project M   1             100          0        0
  Project N   39            44           15       41
  Totals      176           54           7        39

The previous examples show data disaggregated by project; disaggregating them by other variables may also prove useful. In table 9, cross-project data are reported based on the type of professional development that instructors received. Presenting data in this way allows the reader to draw conclusions about the effectiveness of the type of intervention received.

The most important consideration when choosing how to present data is the story you would like to tell; tables and graphs are only as useful as the thought behind them.
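If it helps to see the mechanics, the sketch below shows one way to produce a table 8-style breakdown from individual change records with a group-by. The `project` and `change` column names and the records themselves are hypothetical placeholders; disaggregating by another variable (as in table 9) would simply mean grouping on that column instead.

```python
# Sketch: per-project breakdown of change direction (table 8 style),
# built from hypothetical individual-level records.
import pandas as pd

records = pd.DataFrame({
    "project": ["A", "A", "A", "B", "B", "B", "B"],      # placeholder IDs
    "change":  ["positive", "negative", "negative",
                "positive", "positive", "no change", "positive"],
})

# Count instructors per project, then the percentage in each change category.
counts = records.groupby("project")["change"].value_counts().unstack(fill_value=0)
percentages = counts.div(counts.sum(axis=1), axis=0) * 100

summary = counts.sum(axis=1).to_frame("Number of instructors").join(
    percentages.round(0).add_prefix("% "))
print(summary)
```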

Line versus Bar Graph

The type of graph used can make a difference in how data are understood, as the following example shows. In figures 10 and 11, the same data are reported in a bar graph and a line graph.

In general, when you want to emphasize how data change over time, a line graph is the most appropriate choice. Bar graphs are better suited to highlighting differences between groups.

Notice how the bar graph in figure 10 looks very crowded. The 10 years represented by the bars are probably too many. Crowded bars make it difficult to see the story being told, especially when your document is printed in black and white.
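For readers who build their charts in code, here is a minimal matplotlib sketch that draws the same series both ways. The series name and values are placeholders, not the enrollment data shown in figures 10 and 11.

```python
# Minimal sketch: one time series drawn as a bar graph and as a line graph.
# The years and counts are made-up placeholders.
import matplotlib.pyplot as plt

years = ["96-97", "97-98", "98-99", "99-00", "00-01"]
enrollees = [120, 135, 150, 160, 175]   # hypothetical counts

fig, (ax_bar, ax_line) = plt.subplots(1, 2, figsize=(10, 4))

# Bar graph: better for comparing discrete groups than for showing a trend.
ax_bar.bar(years, enrollees)
ax_bar.set_title("Bar graph")

# Line graph: emphasizes change over time.
ax_line.plot(years, enrollees, marker="o")
ax_line.set_title("Line graph")

for ax in (ax_bar, ax_line):
    ax.set_xlabel("School year")
    ax.set_ylabel("Number of students")

plt.tight_layout()
plt.show()
```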

Table 9. Percentage Change in Instructors’ Use of Student-Centered Pedagogical Techniques Based on Project Professional Development

  Pedagogical approach                            % Positive change   % No change   % Negative change
  Professional development only                   46                  11            44
  Professional development & curriculum change    66                  8             26
  Curriculum change only                          70                  10            19


[Figure 10. Underrepresented Student Enrollment Status: bar graph of the number of students (0-250) at each enrollment stage (Admits, New Enrollees, Advance to Candidacy, PhD Recipients, All Enrollees) for school years 96-97 through 04-05.]

[Figure 11. Underrepresented Student Enrollment Status: line graph of the number of students (0-250) for Admits, All Enrollees, PhD Recipients, Advance to Candidacy, and New Enrollees across school years 1996-97 through 2004-05.]


[Figure 12. Percentage of Students at or above Grade Level in Math: line graph (10%-50%) comparing Highly Effective Schools with Typical Schools across school years 1995-96 through 2000-01.]

Context versus Confusion


The context, or conditions surrounding a project, can also be helpful in understanding volatile data. For example, examine the chart above (figure 12). Highly effective schools were making substantial math gains until a large dip in 1999–2000. What might explain this dip? When there are particular circumstances in a given year, either positive, like a new program, or negative, such as a large budget cut, including this information with the graph can help readers not just see the trend but understand what might be behind it.

In this case, following the 1998–99 school year, the district faced a number of challenges, including multiple changes of superintendents, threatened state takeover, privatization of some district schools, threatened strikes, and threatened removal of teacher certifications. As figure 12 indicates, these changes appear to have had a greater impact on student achievement in the highly effective schools than in their typical peers.
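One practical way to put that context on the graph itself is an annotation at the year in question. The matplotlib sketch below is a generic illustration with placeholder values and a placeholder event label; it is not the district data behind figure 12.

```python
# Sketch: annotating a trend line with the contextual event behind a dip.
# Values and the event label are placeholders for illustration.
import matplotlib.pyplot as plt

years = ["1995-96", "1996-97", "1997-98", "1998-99", "1999-2000", "2000-01"]
pct = [28, 32, 38, 44, 30, 34]   # hypothetical percentages
x = list(range(len(years)))

fig, ax = plt.subplots()
ax.plot(x, pct, marker="o")
ax.set_xticks(x)
ax.set_xticklabels(years)
ax.set_xlabel("School year")
ax.set_ylabel("% of students at or above grade level")

# Flag the year with unusual circumstances so readers see the trend and
# the context behind it together.
dip = years.index("1999-2000")
ax.annotate("District leadership turnover,\nthreatened state takeover",
            xy=(dip, pct[dip]), xytext=(1, 22),
            arrowprops=dict(arrowstyle="->"))

plt.tight_layout()
plt.show()
```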

Using Cross-Project Data


The use of data varies with the type of user. Typical users of cross-project data fall into
three categories:

● funding agencies
● program areas
● individual projects
