
Reading Psychology, 30:89–118, 2009
Copyright © Taylor & Francis Group, LLC
ISSN: 0270-2711 print / 1521-0685 online
DOI: 10.1080/02702710802274903

READING ACHIEVEMENT AND SCIENCE PROFICIENCY: INTERNATIONAL COMPARISONS FROM THE PROGRAMME ON INTERNATIONAL STUDENT ASSESSMENT

JENNIFER G. CROMLEY
Temple University, Philadelphia, Pennsylvania, USA

Students need to develop scientific literacy in order to participate fully as citizens, community members, and in the globalized economy. But what is the relationship between scientific literacy and reading literacy? Three international data sets from the Programme on International Student Assessment (PISA) were used to calculate correlations between scientific literacy and reading literacy for 15-year-old students. Mean correlations at the individual student level across countries were .840 for the PISA 2000 data set, .805 for the PISA 2003 data set, and .819 for the PISA 2006 data set. In all three data sets, this correlation varied among countries, and the reading-science relationship was weakest in countries with low country mean reading scores. Three possible interpretations are discussed, favoring the interpretation that knowledge and skills that drive higher reading comprehension also drive higher science achievement.

Science proficiency assessments are now required annually in the United States as of fall 2007¹ under the No Child Left Behind Act (2001). Students are likely to show a range of scores on
these assessments (as they have on recent National Assessment
of Educational Progress [NAEP] and Trends in International
Math and Science Study [TIMSS] assessments), and different
interventions are likely to be proposed. These might include
some combination of inquiry and hands-on learning (especially
in cooperative learning groups), use of technology, or reading-
and writing-to-learn (Guthrie et al., 2004; Linn & Eylon, 2006;
Yore & Treagust, 2006). Science proficiency assessments typi-
cally require students to read text. Though a limited amount
of research in science education today concerns reading as a

Address correspondence to Jennifer G. Cromley, Ph.D., Department of Psychological Studies in Education, Temple University, Ritter Annex 201, 1301 Cecil B. Moore Avenue, Philadelphia, PA 19122-6091. E-mail: jcromley@temple.edu


means to increase science proficiency, a new body of research is emerging that documents the effectiveness of using literate
activity as a science intervention (Hapgood & Palincsar, 2006–
2007; Purcell-Gates, Duke, & Martineau, 2007; Ritchie, Rigano,
& Duane, 2008; Zmach et al., 2007).

Why Science?

Learning science is critical, not just for youth who wish to pursue
scientific careers but also for an informed citizenry (National Re-
search Council, 2007; Trefil, 2007). Students who wish to pursue
careers in fields as diverse as biology, medicine, forensics, aero-
nautics, and pharmaceutical research need a solid foundation in
scientific knowledge and the scientific method. Even high school
students who do not have a goal of entering a scientific field need
basic scientific knowledge in order to make decisions about per-
sonal health and to make voting decisions that are well informed
by scientific knowledge (e.g., about stem cell research, oil drilling,
and global warming; Trefil, 2007).
The need for science literacy has changed in recent decades
as national economies worldwide have become more intercon-
nected; both American and international employers seek em-
ployees across the globe who have advanced scientific and tech-
nical knowledge. For this reason, international comparisons of
students’ math and science skills and knowledge—such as TIMSS
and PISA—receive widespread attention. A small body of research
suggests that performance on science measures is relatively de-
pendent on reading skills (e.g., Dempster & Reddy, 2007). If this is
true, then one important prerequisite for raising science achieve-
ment may be to increase reading comprehension. That is, science
education reform efforts may need to intensively target reading
skills in order to increase science achievement.
Few researchers have reported data specifically on the rela-
tionship of reading comprehension to science proficiency, but
three new data sets from the Programme on International Stu-
dent Assessment (PISA) include data from three samples, each
with hundreds of thousands of 15-year-old students in more than
40 countries who completed both reading literacy and science
literacy assessments in 2000, 2003, and 2006 (Organisation for
Economic Co-operation and Development [OECD], 2007). These

data present a unique opportunity to quantify the relationship between reading comprehension and performance on a science as-
sessment.
PISA is an international research project sponsored by
the Organisation for Economic Co-operation and Development
(OECD), of which the United States is a member. In PISA 2000,
almost 175,000 students in 43 countries completed measures of
mathematics literacy, science literacy, and reading literacy, as well
as demographic and attitudinal questionnaires. In PISA 2003, a
new sample of more than one quarter of a million 15-year-old stu-
dents from more than 40 countries completed similar measures.
In PISA 2006, a new sample of almost 400,000 fifteen-year-old stu-
dents from 57 countries completed the measures. The PISA assess-
ments are designed to measure literacy, “the capacity of students
to apply knowledge and skills in key subject areas and to anal-
yse, reason and communicate effectively as they pose, solve and
interpret problems in a variety of situations” (OECD, 2005a, p.
13). The measures were developed by OECD and translated into
the languages of instruction for the participating countries. Thus,
even though these nations differ on what is taught and assessed in
science classrooms, and at what ages different subjects are taught,
the PISA program attempts to measure general scientific literacy
required to be a fully participating member of society.

Reading and Science Proficiency

The relationship between reading and science proficiency has been discussed in three bodies of literature: a handful of correla-
tional studies, a small body of research on language-based science
instruction, and several summaries of the literature on compre-
hension of science text, each of which I discuss in turn.
There is a paucity of literature specifically examining the re-
lationship between reading and science proficiency; I was only
able to locate a handful of published studies. O’Reilly and
McNamara (2007) measured science proficiency, reading com-
prehension, background knowledge, and reading strategies with
694 high school students from two Virginia high schools. They
found a correlation of .58 between scores on the Gates-MacGinitie
reading comprehension test and scores on the statewide sci-
ence proficiency test. A regression of science proficiency on

reading comprehension, background knowledge, reading strategies, and gender explained a significant 46% of the variance in science proficiency. The beta weight for reading comprehension was a significant .40; background knowledge (β = .35) and gender (β = −.15, favoring males) also made significant contributions, but reading strategies did not. Medina and Mishra (1994)
compared the performance of native-Spanish-speaking Mexican
American children on Spanish-language reading comprehension
and Spanish-language science tests from the La Prueba battery.
Collapsing scores across second- to eighth-grade students, they
found a significant correlation of .61 between reading compre-
hension and science proficiency. However, the comprehension
test for the second- to third-grade students may tap knowledge
more than comprehension because tests at the early reading lev-
els are so dependent on decoding.
Two studies with English-speaking high school students
showed similarly high correlations. Nolen (2003) found a cor-
relation of .60 between reading comprehension scores on a
state-mandated Test of Academic Proficiency and a district-wide
curriculum-based science achievement test for 377 ninth-grade
students. Demps and Onwuegbuzie (2001) correlated reading test
scores on the Iowa Test of Basic Skills given in eighth grade with
scores on a mandatory high school graduation tests for 102 stu-
dents. They found a correlation of .80 between eighth-grade ITBS
reading and the high school science graduation test, and a cor-
relation of .78 between the high school language arts graduation
tests and the high school science graduation test.
During medical school orientation, Haught and Walls (2004) collected prior Medical College Admission Test (MCAT) scores and measured vocabulary, reading fluency, and reading comprehension with the Nelson-Denny reading test for 730 students. The
MCAT includes verbal reasoning, physical science, and biologi-
cal science sections. The correlation between MCAT biological
science and Nelson-Denny comprehension was a small but sta-
tistically significant .13. In a regression of MCAT verbal reason-
ing on the predictors, all three explained a significant amount
of variance. In regressions of physical science and biological sci-
ence, only vocabulary was a significant predictor. Two years later,
the same students took the U.S. Medical Licensing Examination
(USMLE) Step 1 exam. There was a nonsignificant correlation between Nelson-Denny comprehension scores and USMLE Step 1 scores. In a regression of USMLE Step 1 scores on the predictors, only vocabulary was a significant predictor.
In summary, the four studies with K–12 students show con-
sistently large relationships between reading proficiency and sci-
ence test performance, and one study with medical students shows
weak relationships.
From a theoretical standpoint, reading texts—journal arti-
cles, results of computer analyses, equipment manuals, etc.—is a
central activity in science (Norris & Phillips, 2003; Yore, Bisanz,
& Hand, 2003), so reading comprehension should be strongly
related to science proficiency. Likewise, assigned textbook read-
ing should constitute a substantial portion of the learning time
for high school and especially undergraduate students, and text-
books are an important source of information in science learning.
On the other hand, much of classroom science teaching in the
United States in the K–12 system takes place in lectures and lab-
oratory activities that may bypass textbooks and other texts (Roth
et al., 2006). Therefore, at this time the role of comprehension in
science achievement is unclear.
A second relevant theoretical perspective on the
comprehension-science proficiency relationship is a family
of constructionist or knowledge-building theories, including
Kintsch’s (1998) construction-integration model of compre-
hension, Linn’s (Linn & Eylon, 2006) knowledge integration
framework for science learning, Graesser and colleagues’
(Graesser, Singer, & Trabasso, 1994) constructionist model of
inference in reading comprehension, and van den Broek’s
Landscape Model of reading comprehension (van den Broek,
Rapp, & Kendeou, 2005). All four models describe the process of
forming a richly interconnected mental representation through
an effortful process of combining the learner’s (or reader’s)
prior knowledge together with what is read or other to-be-learned
material. This family of theories suggests that the underlying
processes required to comprehend complex text and the under-
lying processes required to demonstrate proficiency in science
are the same: namely, the ability to form a coherent mental
model of the domain, one in which large amounts of information
are linked via key concepts from the domain rather than the
fractured knowledge that results from typical science instruction.
This family of theories may explain why interventions to improve
reading comprehension sometimes yield increased science test

scores (e.g., Romance & Vitale, 2008)—when students engage in practices such as making concept maps that build their content
knowledge and result in integrated knowledge structures, both
reading and science scores should improve.
In addition to research specifically testing the relation-
ship between reading and science proficiency, numerous read-
ing researchers have tested reading strategy interventions within
content-area classes designed to increase students’ reading com-
prehension of scientific texts. These effective interventions range
from instruction in single reading comprehension strategies (e.g.,
concept maps; Liu, 2004) to multiple strategies (Best, Rowe,
Ozuru, & McNamara, 2005) and large-scale professional devel-
opment initiatives (e.g., Braunger, Donahue, Evans, & Galguera,
2005). The dependent variables in these studies are reading com-
prehension, though, rather than science proficiency. Together,
these studies support a strong relationship between one compo-
nent of reading comprehension—strategy use—and performance
on text-based science measures. It remains an open question
whether those text-based reading comprehension measures that
use science text are in fact the same as science proficiency mea-
sures. Indeed, that is the very question we are testing in the
present research.
Given the large correlations found in previous research with
one sample of elementary/middle school students, two samples
of high school students, and the small correlation found with one
sample of beginning medical school students as well as additional
supporting evidence from the strategy instruction literature, I was
interested in whether the reading-science relationship holds for
a larger sample of high school students and also whether results
were similar in the United States and in other countries. I there-
fore used three separate PISA data sets (2000, 2003, and 2006)
to analyze the relationship between reading comprehension and
science proficiency across more than 40 countries in each year.

Study 1

Method

PARTICIPANTS
Participants were 174,896 fifteen-year-old students from 43
countries across the world.² They were sampled using a two-stage

process. Within each country, schools were selected via proportional probability sampling, and then within most schools 35 students were randomly selected.³ The sample size in each country
had a median of 4,709 students (with a range of 314 for Liecht-
enstein to 29,687 for Canada). The weighted sample size (the stu-
dent population in each country that the sample was designed to
represent) had a median of 118,501 students per country (with a
range of 325 for Liechtenstein to 3,121,874 for the United States),
for a total of 15,959,166 students. Participants and sampling pro-
cedures are described in detail in the PISA 2000 Technical Report
(OECD, 2002).

MEASURES
Students completed demographic questionnaires and pencil-
and-paper measures of science literacy and reading literacy. Each
measure was written by a different expert panel and was designed
around domain-specific content, process, and situation dimen-
sions. In each of these measures, about one half of the questions
were multiple-choice and one half were short answer or essay
questions. Different combinations of items were grouped together
in nine different assessment booklets such that the entire test
lasted 2 hours for each student.
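As a concrete picture of what such a booklet design looks like, the sketch below rotates item clusters across booklets so that every student sees only part of the item pool. The nine-booklet, four-cluster layout and the rotation rule are purely illustrative assumptions, not the actual PISA 2000 assignment.

```python
# Minimal sketch of a rotated booklet design (matrix sampling).
# Cluster names, booklet count, and clusters per booklet are hypothetical.
def rotate_booklets(clusters, n_booklets=9, clusters_per_booklet=4):
    """Assign item clusters to booklets in a simple rotation, so each
    cluster appears in several booklets but no booklet holds the
    whole item pool and no student answers every item."""
    n = len(clusters)
    return [[clusters[(b + k) % n] for k in range(clusters_per_booklet)]
            for b in range(n_booklets)]

# Nine booklets drawn from nine hypothetical clusters C1..C9:
for i, booklet in enumerate(rotate_booklets([f"C{j}" for j in range(1, 10)]), 1):
    print(f"Booklet {i}: {booklet}")
```

A design like this is what makes the plausible-value scaling described under Data Analysis necessary: no student has a score on the full item pool.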
One hundred and forty-one reading prompts were created,
representing 270 minutes of testing time; however, each student
only completed a subset of items during a 60-minute reading
testing period. The prompts included narrative, expository, ar-
gumentative, instructional/injunctive, and descriptive passages,
charts, diagrams, advertisements, maps, tables, matrices, graphs,
and forms. The tests were designed to measure information re-
trieval (20%), broad understanding of information (20%), in-
terpretation development (30%), content reflection (15%), and
form reflection (15%). Therefore, the first three aspects account
for understanding and use of information (70% of items), while
the remaining aspects require interpretation and evaluation (30%
of items). The documents were categorized according to their intended use, including reading for private use (letters, fiction, biographies; 28%), public use (official documents, public events; 28%), workplace use (16%), and educational use (28%). Forty-five percent of items required students to construct their own responses, 45% were multiple choice, and the remaining 10% required students
to construct their response from within a limited range of acceptable answers.
The 35 science questions, representing 60 minutes of testing
time, covered key concepts in science such as biological diversity,
motion, and physiological change drawn from biology, chemistry,
Earth science, physics, and technology. The test was designed to
measure the ability to think scientifically on the following three di-
mensions: grasping scientific concepts (biodiversity, forces, phys-
iological change, etc.), engaging in scientific processes (draw-
ing and communicating conclusions), and application of science
(life, health, technology, environment, etc.; OECD, 2001). Ques-
tion prompts ranged from one sentence to several paragraphs,
and some included graphics such as a photograph, graph, or
diagram. Two thirds of the questions could be answered unam-
biguously, and the remaining questions allowed for full or partial
credit.
Internal consistency reliability estimates for the measures
across all countries are .90 for science and .93 for reading. In-
terrater reliability for coding responses to open-ended questions
on a sample of 41,796 reading answers was 91.5%. Evidence for
the validity of the measures includes review by national and inter-
national expert subject-matter panels, pilot testing of items, and
reliability evidence (OECD, 2002).

PROCEDURE
Participants in 32 countries completed measures in
February–October 2000, and participants in an additional
11 countries completed identical measures in 2002. Within
each country, measures had to be completed within a 6-week
period. Measures were given by trained test administrators in
each country. The reading and science assessments each took
one hour (OECD, 2002).

DATA ANALYSIS
PISA 2000 used many measurement methods similar to those of the U.S. National Assessment of Educational Progress (NAEP). PISA 2000
used a cluster rotation measurement design (sometimes called
matrix sampling), in which each student answered a subset of
items from a pool of questions. That is, not all students answered
all possible questions. All measures were Rasch scaled (an Item

Response Theory [IRT] method). The items varied in difficulty, and scores ranging from 200 to 800 take into account the diffi-
culty levels of the items each student answered. Because of this,
for each measure PISA reports five “plausible values” for each stu-
dent, and provides a set of weights to use in analyzing these plau-
sible value scores via resampling (see the PISA 2000 Technical Re-
port; OECD, 2002).
All analyses were conducted using the SPSS syntax supplied
by PISA for analyzing these two-stage matrix sampled Rasch scale
score data. For each analysis within each country, 80 independent
subsamples of data are drawn for each plausible value, and each
subsample has its own resampling weight. The mean of these 80
subsamples is an unbiased estimate of the mean for each plausi-
ble value score. The resulting test statistics (e.g., five means for
the five science plausible values) and their errors are averaged to
produce the best unbiased estimate of each parameter in each
country (OECD, 2005b). For example, after drawing 80 subsam-
ples, the five mean plausible values for reading for the United
States were 494.87, 494.28, 495.34, 496.59, and 494.85. Although
these weighting and resampling methods for the plausible values
are relatively complex, the resulting means, standard deviations,
and correlations for each plausible value and combinations are
not widely divergent, as shown in the example above.
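To make this machinery concrete, here is a minimal sketch of one standard way to combine plausible values with replicate weights: the point estimate uses the full student weight, the 80 replicate weights give a Fay-adjusted BRR sampling variance, and the between-plausible-value (imputation) variance is added in the usual multiple-imputation fashion. The data layout, the PISA-style column names in the usage comment, and the Fay factor of 0.5 are assumptions to be checked against the cited technical report; this is not a transcription of the official SPSS syntax.

```python
import numpy as np

def weighted_mean(x, w):
    """Weighted mean of one plausible-value column."""
    return np.average(x, weights=w)

def pv_estimate(df, pv_cols, full_wt, rep_wts, fay=0.5):
    """Combine plausible values (PVs) with replicate weights.

    For each PV: point estimate with the full student weight, and a
    Fay-adjusted BRR sampling variance from the replicate weights.
    Estimates are then averaged over PVs and the between-PV
    (imputation) variance is added, Rubin-style."""
    ests, samp_vars = [], []
    for pv in pv_cols:
        est = weighted_mean(df[pv], df[full_wt])
        reps = np.array([weighted_mean(df[pv], df[w]) for w in rep_wts])
        samp_vars.append(((reps - est) ** 2).sum()
                         / (len(rep_wts) * (1 - fay) ** 2))
        ests.append(est)
    ests = np.asarray(ests)
    m = len(pv_cols)
    total_var = np.mean(samp_vars) + (1 + 1 / m) * ests.var(ddof=1)
    return ests.mean(), np.sqrt(total_var)

# Hypothetical usage with PISA-style column names (to be verified
# against the actual public data files):
# mean, se = pv_estimate(country_df,
#                        [f"pv{i}read" for i in range(1, 6)],
#                        "w_fstuwt",
#                        [f"w_fstr{i}" for i in range(1, 81)])
```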

Results

Descriptive statistics for each measure in each of the 43 countries are shown in Table 1 (all statistics were calculated using the
weighted resampling method described above). The overall mean
performance for reading was 473.70 (standard error [SE] = .71, with a range across the 43 countries from 328.63 to 546.48 and a range of standard errors from .24 to 2.04). The overall mean per-
formance for science was 474.80 (SE = 3.79, with a range of 333.34
to 552.12 and a range of standard errors from 1.57 to 9.01).
Correlations between reading scores and science scores com-
puted at the student level within each country are shown in
Table 2; all correlations are statistically significant at the .001 level.
The overall mean correlation between reading and science was
.840 (with a range across the 43 countries from .675 to .916).
PISA reported correlations between latent reading factors and a

TABLE 1 Country-Specific Sample Sizes and Descriptive Statistics for the Measures, PISA 2000

Country, Unweighted N, Weighted N, Reading Plausible Values (M, SE), Science Plausible Values (M, SE)

Albania 5,176 229,152 353.183 .705 376.453 2.886


Argentina 4,745 71,547 417.538 1.253 396.167 8.564
Australia 6,670 110,095 529.643 .682 527.502 3.470
Austria 4,893 2,402,280 509.420 .266 518.641 2.548
Belgium 29,687 348,481 512.536 .436 495.735 4.288
Bulgaria 5,365 125,639 432.191 .678 448.285 4.581
Brazil 4,235 47,786 397.499 .812 375.169 3.264
Canada 4,864 62,826 533.687 .282 529.360 1.571
Chile 4,673 730,494 412.440 .506 414.853 3.438
Czech Republic 5,073 826,816 494.816 .511 511.415 2.430
Denmark 3,644 111,363 494.165 .675 481.005 2.809
Finland 4,887 107,460 546.477 .928 537.741 2.477
France 3,372 3,869 510.256 .924 500.486 3.180
Germany 3,854 56,209 484.384 .344 487.106 2.433
Greece 4,984 510,792 468.451 .598 460.554 4.894
Hong Kong (China) 5,256 1,446,596 525.755 .580 540.809 3.012
Hungary 4,982 579,109 481.355 .237 496.076 4.167
Iceland 3,920 30,063 505.605 .718 495.914 2.172
Indonesia 314 325 375.364 .860 393.330 3.937
Ireland 3,528 4,138 523.525 .721 513.368 3.181
Israel 4,600 960,011 449.983 1.662 434.136 9.012
Italy 2,503 157,327 490.910 .566 477.603 3.051
Japan 3,667 46,757 524.407 .476 550.404 5.477
Korea 4,147 49,579 522.941 .534 552.119 2.691
Latvia 3,654 542,005 461.019 .984 460.061 5.619
Liechtenstein 4,585 99,998 477.383 2.041 476.100 7.093
Luxembourg 6,701 1,968,131 441.483 1.126 443.071 2.318
Macedonia 6,214 399,055 377.932 .870 400.715 2.097
Mexico 4,416 94,338 420.797 .951 421.537 3.180
The Netherlands 6,100 72,010 529.182 .688 529.058 4.013
New Zealand 9,340 643,041 529.766 .470 527.687 2.400
Norway 3,846 3,121,874 498.447 .582 500.339 2.749
Peru 5,176 229,152 328.630 .813 333.341 3.985
Poland 4,745 71,547 483.125 .664 483.122 5.123
Portugal 6,670 110,095 469.777 .802 458.996 3.996
Romania 4,893 2,402,280 435.634 .430 441.159 3.373
Russian Federation 29,687 348,481 459.742 .434 460.314 4.735
Spain 5,365 125,639 491.671 .731 490.939 2.950
Sweden 4,235 47,786 514.280 .968 512.128 2.510


Switzerland 4,864 62,826 495.841 .594 495.667 4.445
Thailand 4,673 730,494 428.374 .320 436.379 3.062
United Kingdom 5,073 826,816 523.742 .469 532.024 2.687
United States 3,644 111,363 503.987 .925 499.460 7.311

Note. All statistics are calculated at the student level within countries, use student-level
weights, and are calculated using BSS resampling weights on 80 replications.

TABLE 2 Correlation Within Each Country Between Reading and Science Scores, PISA 2000

Correlation Between Reading and Science

Country r SE

Australia .756 .015


Austria .816 .004
Belgium .894 .004
Brazil .893 .004
Canada .880 .003
Czech Republic .806 .006
Denmark .736 .007
Finland .882 .002
France .804 .006
Germany .870 .004
Greece .872 .002
Hong Kong (China) .825 .004
Hungary .845 .004
Iceland .875 .001
Indonesia .802 .007
Ireland .871 .003
Italy .851 .004
Japan .817 .005
Korea .675 .007
Latvia .894 .002
Liechtenstein .789 .005
Luxembourg .856 .003


Macao (China) .849 .003
Mexico .815 .007
The Netherlands .786 .003
New Zealand .891 .009
Norway .887 .002
Poland .808 .006
Portugal .826 .005
Russian Federation .916 .002
Serbia .901 .004
Slovakia .869 .002
Spain .679 .013
Sweden .859 .005
Switzerland .878 .006
Thailand .819 .008
Tunisia .815 .003
Turkey .839 .003
United Kingdom .875 .002
United States .884 .002
Uruguay .814 .004

Note. All statistics are conducted at the student level within countries, use student-
level weights, and are calculated using BSS resampling weights on 80 replications.

latent science factor of .889 for retrieving information, .890 for interpreting texts, and .840 for reflection and evaluation (after
accounting for measurement error; OECD, 2002).
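A weighted student-level correlation of the kind reported in Table 2 can be sketched along the same lines as the estimation example above. The sketch pairs each reading plausible value with the corresponding science plausible value, computes a weighted Pearson correlation for each pair, and averages the five results; the column names are hypothetical, and the standard errors in Table 2 would additionally require the replicate weights, as in the earlier sketch.

```python
import numpy as np

def weighted_corr(x, y, w):
    """Weighted Pearson correlation between two score columns."""
    x, y, w = map(np.asarray, (x, y, w))
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

def pv_correlation(df, read_pvs, sci_pvs, wt):
    """Average the weighted correlation over the five pairs of
    reading/science plausible values (one pair per draw)."""
    rs = [weighted_corr(df[r], df[s], df[wt])
          for r, s in zip(read_pvs, sci_pvs)]
    return float(np.mean(rs))

# Hypothetical usage:
# r = pv_correlation(country_df,
#                    [f"pv{i}read" for i in range(1, 6)],
#                    [f"pv{i}scie" for i in range(1, 6)],
#                    "w_fstuwt")
```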
The correlation between reading and science plausible value
scores is very large, but it does vary considerably among the
43 countries. Correlations are highest in The Netherlands, the
United States, the United Kingdom, New Zealand, and Ireland
and are lowest in Indonesia, Peru, Brazil, Albania, and Latvia. I no-
ticed that the countries with high correlations are also in the top
one half on mean reading achievement, and the countries with
low correlations are among the six lowest achieving in reading of
the 43 countries (except for Latvia, which is the 15th lowest). I
therefore regressed the reading-science correlation on the mean

reading score. Mean country reading score accounted for a large, statistically significant proportion of variance in the correlations (F [1, 41] = 59.823, MSE = .001, p < .001, R2 = .593, adjusted R2 =
.583). Countries with higher mean reading performance showed
a stronger relationship between reading scores and science scores
than did countries with lower mean reading performance (see
Figure 1).
Because lower scoring countries might have higher standard
errors, I ran a second regression of reading-science correlation on
the mean reading score and the standard error of that mean. Stan-
dard errors correlated nonsignificantly with mean reading score
(r [41] = −.221, p = .155) and did not contribute significantly to
the overall regression (F [2, 40] = 33.302, MSE = .001, p < .001,
R2 = .625, adjusted R2 = .606, the beta weight for SE was nonsignif-
icant, t < 1).
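The country-level regressions described here are ordinary least squares with the 43 correlations as the outcome. A minimal sketch follows, assuming the correlations, mean reading scores, and their standard errors have already been collected into arrays; statsmodels is one library choice, not necessarily what was used for the original analysis.

```python
import numpy as np
import statsmodels.api as sm

def regress_corr_on_reading(corrs, mean_reading, se_reading=None):
    """Regress country-level reading-science correlations on mean
    country reading score; optionally add the SE of that mean as a
    second predictor, mirroring the follow-up regression above."""
    predictors = [np.asarray(mean_reading)]
    if se_reading is not None:
        predictors.append(np.asarray(se_reading))
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(np.asarray(corrs), X).fit()

# fit = regress_corr_on_reading(corrs, mean_reading)
# fit.rsquared, fit.fvalue, fit.pvalues  # R^2, F, and p-values
```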
FIGURE 1 Scatterplot of reading-science correlation vs. mean country reading score, PISA 2000. [Figure: y-axis, correlation between reading and science at the student level (0.65–0.95); x-axis, mean country reading score (300–550).]

Discussion

Given that these reading and science tests are both written as-
sessments, it is perhaps not surprising that students’ scores on
the two measures are highly correlated. The magnitude of the
correlation—a mean of .840—however, may be surprising. This
correlation did vary across the 43 countries, with a tighter linking
of reading and science proficiency in the countries with higher
mean reading scores.

Study 2

In Study 2, I replicated these results with the PISA 2003 data, col-
lected with an entirely new set of 15-year-old participants, in a sim-
ilar set of countries, using very similar measures and procedures.

Method

PARTICIPANTS
Participants were 276,192 fifteen-year-old students from 41
countries across the world,⁴ sampled using the same two-stage process as in PISA 2000.⁵ The sample size in each country had a me-
dian of 4,704 (with a range of 332 for Liechtenstein to 29,983 for
Mexico). The weighted sample size had a median of 111,831 per
country (with a range of 338 for Liechtenstein to 3,147,089 for the
United States), for a total of 19,155,864 students. Participants are
described in detail in Learning for Tomorrow’s World: First Results from
PISA 2003 (OECD, 2004). Sampling procedures are described in
detail in the PISA 2003 Technical Report (OECD, 2005a).
MEASURES
Students completed demographic questionnaires and mea-
sures of mathematics literacy, science literacy, and reading liter-
acy, with content (but not format) slightly modified from those
used in PISA 2000. The 28 reading prompts included narrative,
expository, and descriptive passages, charts, one map, graphs, and
forms. The tests were designed to measure searching for informa-
tion (literal comprehension; 29% of items) and interpreting and
evaluating texts (inferential comprehension; 71% of items). The
documents included recreational, workplace, citizenship, and
J. G. Cromley 103

academic writing. The 35 science questions covered key concepts


in science such as biological diversity, motion, and physiological
change drawn from biology, chemistry, Earth science, physics, and
technology. Topics include those related to personal, community,
and global concerns, as well as historical issues (OECD, 2003,
2004).
Internal consistency reliability estimates for the measures are
given with the results below. Interrater reliability for coding re-
sponses to open-ended questions on a sample of 24,600 read-
ing answers was 90.5% and on a sample of 23,570 science an-
swers was 90.1%. Evidence for validity includes that used for PISA
2000 as well as pilot studies asking students to think aloud as they
answered questions and screening for gender and country bias
(OECD, 2005a).

PROCEDURE
Procedures were almost identical to those used in PISA 2000.
Participants completed measures in February–October 2003, and
within each country measures had to be completed within a 30-
day period (OECD, 2005a).

DATA ANALYSIS
As in the earlier study, PISA 2003 used a cluster rotation mea-
surement design with Rasch scaled items. Resampling weights spe-
cific to the PISA 2003 data set were used to analyze the plausi-
ble value scores (see the PISA 2003 Data Analysis Manual; OECD,
2005b). All analyses were conducted using the SPSS syntax sup-
plied by PISA for analyzing these two-stage matrix sampled Rasch
scale score data. Data analyses using resampling were the same as those described for Study 1.
Results

Descriptive statistics for each measure in each of the 41 countries are shown in Table 3 (all statistics were calculated using the
weighted resampling method described above). The overall mean
performance for reading was 480.9 (standard error [SE] = 3.1,
with a range across the 41 countries from 374.6 to 543.5 and a
range of standard errors from 1.5 to 5.8). The overall mean per-
formance for science was 488.5 (SE = 3.2, with a range of 384.7

TABLE 3 Country-Specific Sample Sizes and Descriptive Statistics for the Measures, PISA 2003

Country, Unweighted N, Weighted N, Reading Plausible Values (M, SE, Reliability), Science Plausible Values (M, SE, Reliability)
Australia 12,551 235,591 525.43 2.13 .83 525.05 2.10 .84
Austria 4,597 85,931 490.69 3.76 .87 490.99 3.44 .88
Belgium 8,796 111,831 506.99 2.58 .86 508.83 2.48 .85
Brazil 4,452 1,952,253 402.80 4.58 .76 389.62 4.35 .73
Canada 27,953 330,436 527.91 1.75 .83 518.75 2.02 .83
Czech Republic 6,320 121,183 488.54 3.46 .84 523.25 3.38 .82
Denmark 4,218 51,741 492.32 2.82 .82 475.22 2.97 .82
Finland 5,796 57,884 543.46 1.64 .80 548.23 1.92 .80
France 4,300 734,579 496.19 2.68 .84 511.23 2.99 .84
Germany 4,660 884,358 491.36 3.39 .88 502.34 3.64 .88
Greece 4,627 105,131 472.27 4.10 .78 481.02 3.82 .76
Hong Kong (China) 4,478 72,484 509.54 3.69 .85 539.50 4.26 .85
Hungary 4,765 107,044 481.87 2.47 .81 503.28 2.77 .81
Iceland 3,350 3,928 491.75 1.56 .83 494.75 1.47 .82
Indonesia 10,761 1,971,476 381.59 3.38 .70 395.04 3.21 .68
Ireland 3,880 54850 515.48 2.63 .87 505.39 2.69 .85
Italy 11,639 481,521 475.66 3.04 .85 486.45 3.13 .84
Japan 4,704 1,240,054 498.11 3.92 .84 547.64 4.14 .85
Korea 5,444 533,504 534.09 3.09 .83 538.43 3.54 .84
Latvia 4,627 33,643 490.56 3.67 .80 489.13 3.89 .78
Liechtenstein 332 338 525.08 3.58 .84 525.18 4.33 .84
Luxembourg 3,923 4,080 479.42 1.48 .86 482.76 1.50 .85
Macao (China) 1,250 6,546 497.64 2.16 .78 524.68 3.03 .79
Mexico 29,983 1,071,650 399.72 4.09 .78 404.90 3.49 .72
The Netherlands 3,992 184,943 513.12 2.85 .87 524.37 3.15 .88
New Zealand 4,511 48,638 521.55 2.46 .86 520.90 2.35 .86
Norway 4,064 52,816 499.74 2.78 .82 484.18 2.87 .81
Poland 4,383 534,900 496.61 2.88 .82 497.78 2.86 .81
Portugal 4,608 96,857 477.57 3.73 .84 467.74 3.46 .81
Russian Federation 5,974 2,153,373 442.20 3.94 .76 489.29 4.14 .74
Serbia 4,405 68,596 411.74 3.56 .78 436.37 3.50 .76
Slovakia 7,346 77,067 469.16 3.12 .83 494.86 3.71 .82
Spain 10,791 344,372 480.54 2.60 .81 487.09 2.61 .80
Sweden 4,624 107,104 514.27 2.42 .83 506.12 2.72 .81
Switzerland 8,420 86,491 499.12 3.28 .84 512.98 3.69 .84
Thailand 5,236 637,076 419.92 2.81 .76 429.06 2.70 .74
Tunisia 4,721 150,875 374.62 2.81 .72 384.68 2.56 .69
Turkey 4,885 481,279 440.97 5.79 .82 434.22 5.89 .83
United Kingdom 9,535 698,579 507.01 2.46 .86 518.40 2.52 .87
United States 5,456 3,147,089 495.19 3.22 .87 491.26 3.08 .85
Uruguay 5,835 33,775 434.15 3.43 .77 438.37 2.90 .75

Note. All statistics are calculated at the student level within countries, use student-level
weights, and are calculated using BSS resampling weights on 80 replications.

to 548.2 and a range of standard errors from 1.5 to 5.9). Mean within-country reliability of the reading measure was .82 (range
.70–.88) and for the science measure was .81 (range .68–.88) (see
OECD, 2005a, for different approaches to scale reliability).
Correlations between reading scores and science scores com-
puted at the student level within each country are shown in Ta-
ble 4; all correlations are statistically significant at the .001 level.
The overall mean correlation between reading and science was
.805 (SE = .017, with a range across the 41 countries from .599 to
.892).
The correlation between reading and science plausible value
scores was again very large, and again varied considerably among
the 41 countries. Correlations were highest in The Netherlands,
the United Kingdom, Japan, the United States, and Germany
and were lowest in Tunisia, Uruguay, Brazil, Mexico, and Greece.
Again, I noticed that the countries with high correlations are also
in the top one half on mean reading achievement, and the coun-
tries with low correlations are all among the lowest achieving. I
therefore regressed the reading-science correlation on the mean
reading score. Mean country reading score accounted for a large,
statistically significant proportion of variance in the correlations
(F [1, 39] = 52.985, MSE = .002, p < .001, R2 = .576, adjusted R2 =
.565). Countries with higher mean reading performance showed
a stronger relationship between reading scores and science scores
than did countries with lower mean reading performance (see
Figure 2).
As before, since lower scoring countries might have higher
standard errors, I ran a second regression of reading-science cor-
relation on the mean reading score and the standard error of that
mean. Standard errors correlated significantly with mean reading
score (r [39] = −.454, p = .001) but did not contribute significantly to the overall regression (F [2, 38] = 26.965, MSE = .002, p < .001, R2 = .587, adjusted R2 = .565, the beta weight for SE was nonsignificant, t < 1).

Discussion

As in Study 1, there was a high correlation between reading comprehension and science proficiency, with a mean of .805. This
correlation varied across the 41 countries, with a tighter linking

TABLE 4 Correlation Within Each Country Between Reading and Science Scores, PISA 2003

Correlation Between Reading and Science

Country r SE
Australia .860 .017
Austria .871 .016
Belgium .784 .016
Brazil .813 .012
Canada .845 .018
Czech Republic .881 .012
Denmark .708 .017
Finland .836 .028
France .837 .017
Germany .718 .023
Greece .847 .015
Hong Kong (China) .820 .011
Hungary .779 .018
Iceland .878 .021
Indonesia .822 .020
Ireland .851 .030
Italy .866 .009
Japan .801 .016
Korea .745 .021
Latvia .707 .018
Liechtenstein .892 .020
Luxembourg .800 .014
Macao (China) .858 .011
Mexico .788 .014
The Netherlands .839 .019
New Zealand .736 .015
Norway .809 .020
Poland .849 .015
Portugal .754 .019
Russian Federation .599 .021
Serbia .796 .014
Slovakia .836 .043
Spain .641 .017
Sweden .876 .011
Switzerland .781 .017
Thailand .860 .017
Tunisia .871 .016
Turkey .784 .016
United Kingdom .796 .014
United States .813 .012
Uruguay .845 .018

Note. All statistics are conducted at the student level within countries, use student-
level weights, and are calculated using BSS resampling weights on 80 replications.
FIGURE 2 Scatterplot of reading-science correlation vs. mean country reading score, PISA 2003. [Figure: y-axis, correlation between reading and science at the student level (0.55–0.90); x-axis, mean country reading score (360–540).]

of reading and science proficiency in the countries with higher mean reading scores.

Study 3

In Study 3, I replicated these results with the PISA 2006 data, also
collected with a new set of 15-year-old participants, in a similar set
of countries with similar measures and procedures.

Method

PARTICIPANTS
Participants were 398,750 fifteen-year-old students from 57 countries across the world,⁶ sampled using the same two-stage process as in PISA 2000 and 2003.⁷ Reading data for the United States
were not usable because of an error in printing the instructions
for the test booklets, leaving a sample of 393,139 students from
56 countries. The sample size in each of these countries had a
median of 4,909 (with a range of 339 for Liechtenstein to 30,971
for Mexico). The weighted sample size had a median of 101,211 per country (with a range of 353 for Liechtenstein to 2,248,313
for Indonesia), for a total of 18,718,551 students. Participants and
sampling procedures are described in detail in PISA 2006: Science
Competencies for Tomorrow’s World (OECD, 2007).

MEASURES
Students completed demographic questionnaires and mea-
sures of mathematics literacy, science literacy, and reading liter-
acy, with content and format slightly modified from those used
in PISA 2000 and PISA 2003. The PISA 2006 measures were de-
signed to require less reading to more clearly distinguish between
scientific literacy and reading literacy.

PROCEDURE
Procedures were almost identical to those used in PISA 2000
and 2003. Participants completed measures in March–November
2006 (OECD, 2007).

DATA ANALYSIS
As in the earlier studies, PISA 2006 used a cluster rotation
measurement design with Rasch scaled items. Data analyses using
resampling were the same as those described for Study 1.

Results

Descriptive statistics for each measure in each of the 56 countries are shown in Table 5 (all statistics were calculated using the
weighted resampling method described above). The overall mean
performance for reading was 459.6 (standard error [SE] = 3.4,
with a range across the 56 countries from 284.7 to 556.0 and a
range of standard errors from 1.0 to 7.2). The overall mean per-
formance for science was 472.9 (SE = 2.9, with a range of 322.0 to
563.3 and a range of standard errors from 0.9 to 6.1).
Correlations between reading scores and science scores com-
puted at the student level within each country are shown in
Table 6; all correlations are statistically significant at the .001 level.
The overall mean correlation between reading and science was
.819 (SE = .008, with a range across the 56 countries from .603 to
.902).

TABLE 5 Country-Specific Sample Sizes and Descriptive Statistics for the Measures, PISA 2006

Country, Unweighted N, Weighted N, Reading Plausible Values (M, SE), Science Plausible Values (M, SE)

Argentina 4,339 523,048 373.72 7.17 391.24 6.08


Australia 14,170 234,940 512.89 2.06 526.88 2.26
Austria 4,927 89,925 490.19 4.08 510.84 3.92
Azerbaijan 5,184 122,208 352.89 3.12 382.33 2.75
Belgium 8,857 123,161 500.90 3.04 510.36 2.48
Bulgaria 4,498 74,326 401.93 6.91 434.08 6.11
Brazil 9,295 1,875,461 392.89 3.74 390.33 2.79
Canada 22,646 370,879 527.01 2.44 534.47 2.03
Switzerland 12,192 89,634 499.28 3.06 511.52 3.16
Chile 5,233 233,458 442.09 4.99 438.18 4.32
Colombia 4,478 537,262 385.31 5.08 388.04 3.37
Czech Republic 5,932 128,827 482.72 4.18 512.86 3.48
Germany 4,891 903,512 494.94 4.41 515.65 3.80
Denmark 4,532 57,013 494.48 3.18 495.89 3.11
Spain 19,604 381,686 460.83 2.23 488.42 2.57
Estonia 4,865 18,662 500.75 2.93 531.39 2.52
Finland 4,714 61,387 546.87 2.15 563.32 2.02
France 4,716 739,428 487.71 4.06 495.22 3.36
United Kingdom 13,152 732,004 495.08 2.26 514.77 2.29
Greece 4,873 96,412 459.71 4.04 473.38 3.23
Hong Kong (China) 4,645 75,145 536.07 2.42 542.21 2.47
Croatia 5,213 46,523 477.36 2.81 493.20 2.45
Hungary 4,490 106,010 482.37 3.28 503.93 2.68
Indonesia 10,647 2,248,313 392.93 5.92 393.48 5.73
Ireland 4,585 55,114 517.31 3.54 508.33 3.19
Iceland 3,789 4,624 484.45 1.95 490.79 1.64
Israel 4,584 93,347 438.67 4.58 453.90 3.71
Italy 21,773 520,055 468.52 2.43 475.40 2.02
Jordan 6,509 90,267 400.58 3.27 421.97 2.84
Japan 5,952 1,113,701 497.96 3.65 531.39 3.37
Kyrgyzstan 5,904 80,674 284.71 3.48 322.03 2.93
Korea 5,176 576,669 556.02 3.81 522.15 3.36
Liechtenstein 339 353 510.44 3.91 522.16 4.10
Lithuania 4,744 50,329 470.07 2.98 487.96 2.76
Luxembourg 4,567 4,733 479.37 1.28 486.32 1.05
Latvia 4,719 29,232 479.49 3.73 489.54 2.97
Macao (China) 4,760 6,417 492.29 1.10 510.84 1.06
Mexico 30,971 1,190,420 410.50 3.06 409.65 2.71
Montenegro 4,455 7,734 391.98 1.22 411.79 1.06
The Netherlands 4,871 189,576 506.75 2.92 524.86 2.74


Norway 4,692 59,884 484.29 3.18 486.53 3.11
New Zealand 4,823 53,398 521.03 2.99 530.38 2.69
Poland 5,547 515,993 507.64 2.79 497.81 2.34
Portugal 5,109 90,079 472.30 3.56 474.31 3.02
Qatar 6,265 7,271 312.21 1.20 349.31 0.86
Romania 5,118 223,887 395.93 4.69 418.39 4.20
Russian Federation 5,799 1,810,856 439.86 4.32 479.47 3.67
Serbia 4,798 73,907 401.03 3.46 435.64 3.04
Slovak Republic 4,731 76,201 466.35 3.06 488.43 2.59
Slovenia 6,595 20,595 494.41 0.99 518.82 1.11
Sweden 4,443 126,393 507.31 3.44 503.33 2.37
Chinese Taipei 8,815 293,513 496.24 3.38 532.47 3.57
Thailand 6,192 644,125 416.75 2.59 421.01 2.14
Tunisia 4,640 138,491 380.34 4.02 385.51 2.96
Turkey 4,942 665,477 447.14 4.21 423.83 3.84
Uruguay 4,839 36,011 412.52 3.43 428.13 2.75

Note. All statistics are calculated at the student level within countries, use student-level
weights, and are calculated using BSS resampling weights on 80 replications.

The correlation between reading and science plausible value scores was again very large, and again it varied considerably
among the 56 countries. Correlations were highest in The Nether-
lands, New Zealand, Luxembourg, Switzerland, and Liechtenstein
and were lowest in Colombia, Kyrgyzstan, Azerbaijan, Uruguay,
and Tunisia. I noticed again that the countries with high correla-
tions are also in the top one half on mean reading achievement,
and the countries with low correlations are all among the low-
est achieving. I therefore regressed the reading-science correla-
tion on the mean reading score. Mean country reading score ac-
counted for a large, statistically significant proportion of variance
in the correlations (F [1, 54] = 37.240, MSE = .002, p < .001, R2 =
.408, adjusted R2 = .397). As before, countries with higher mean
reading performance showed a stronger relationship between

TABLE 6 Correlation Within Each Country Between Reading and Science Scores, PISA 2006

Correlation Between Reading and Science

Country r SE

Argentina .749 .015


Australia .878 .004
Austria .848 .010
Azerbaijan .679 .019
Belgium .876 .006
Bulgaria .843 .010
Brazil .737 .012
Canada .837 .006
Switzerland .888 .004
Chile .747 .010
Colombia .603 .016
Czech Republic .857 .007
Germany .856 .009
Denmark .856 .006
Spain .800 .006
Estonia .849 .006
Finland .809 .006
France .853 .007
United Kingdom .870 .005
Greece .820 .008
Hong Kong (China) .840 .007
Croatia .860 .006
Hungary .853 .007
Indonesia .785 .017
Ireland .860 .007
Iceland .847 .005
Israel .833 .008
Italy .764 .009
Jordan .801 .009
Japan .857 .006
Kyrgyzstan .675 .018
Korea .863 .008
Liechtenstein .902 .010
Lithuania .847 .007
Luxembourg .886 .004
Latvia .818 .008
Macao (China) .770 .007
Mexico .754 .010
Montenegro .873 .004
The Netherlands .878 .006
Norway .835 .008


New Zealand .879 .004
Poland .837 .005
Portugal .856 .007
Qatar .862 .003
Romania .791 .014
Russian Federation .731 .011
Serbia .832 .007
Slovak Republic .845 .007
Slovenia .853 .004
Sweden .842 .007
Chinese Taipei .868 .005
Thailand .796 .009
Tunisia .725 .015
Turkey .790 .015
Uruguay .699 .012

Note. All statistics are conducted at the student level within countries, use
student-level weights, and are calculated using BSS resampling weights on 80
replications.

reading scores and science scores than did countries with lower
mean reading performance (see Figure 3).
As before, I ran a second regression of reading-science corre-
lation on the mean reading score and the standard error of that
mean. Standard errors correlated significantly with mean reading
score (r [54] = −.275, p = .041) but did not contribute signifi-
cantly to the overall regression (F [2, 53] = 19.872, MSE = .002,
p < .001, R2 = .429, adjusted R2 = .407, the beta weight for SE was
nonsignificant, t < 1).

Discussion

As in Studies 1 and 2, there was a high correlation between reading comprehension and science proficiency, with a mean of .819,
even though the test designers deliberately required less read-
ing on the PISA 2006 science test. As in Studies 1 and 2, the
FIGURE 3 Scatterplot of reading-science correlation vs. mean country reading score, PISA 2006. [Figure: y-axis, correlation between reading and science at the student level (0.55–1.00); x-axis, mean country reading score (250–550).]

correlations varied across the 56 countries, with a tighter linking of reading and science proficiency in the countries with higher
mean reading scores.

General Discussion

What are we to make of this large and robust correlation between reading comprehension and science proficiency? Three possible explanations present themselves: reading comprehension causes science proficiency, science proficiency causes reading comprehension, or a third factor causes both reading comprehension and science proficiency. I consider these in turn.
Could reading comprehension cause science proficiency?
This is perhaps the most intuitively appealing interpretation. If
comprehension is a relatively domain-general skill, then having
good reading comprehension should lead to good understanding
of science texts and tests. Since proficient readers engage in more
reading (Bray, Pascarella, & Pierson, 2004) and read more widely

(Smith, 2000), they would be expected to have more exposure to science text. Furthermore, the results of the regression analyses support this interpretation: though there is some “science-
specific” proficiency captured by the test, the higher the reading
comprehension of a country, the larger the correlation between
reading and science proficiency. This interpretation is also con-
sistent with the results of cognitive strategy instruction programs
in science classrooms. Teaching students specific reading strate-
gies is associated with better performance on science text-based
measures.
Could science proficiency cause reading comprehension?
This seems implausible for several reasons: not all 15-year-old stu-
dents have had equal exposure to science learning experiences in
or out of school; science proficiency should be rooted—at least
to some extent—in observational/hands-on experiences, whereas
reading comprehension derives from reading experience; science
proficiency should be domain specific, whereas reading compre-
hension should be relatively domain general.
Could a third factor (or factors) cause both reading com-
prehension and science proficiency? Several current theories of
comprehension that have been applied at the secondary level
suggest components of comprehension that might be relevant for
science proficiency as well. Background knowledge, vocabulary,
and inference all play important roles in reading comprehension,
but they may also play an important role in science proficiency,
as discussed above with regard to constructionist theories of com-
prehension and learning. To paraphrase Walter Kintsch (1998),
comprehension is a paradigm for cognition. Research suggests
that science domain-specific background knowledge affects
comprehension of scientific text, as does science domain-specific
vocabulary (Otero, León, & Graesser, 2002). A large body of
research also suggests that extensive reading across a wide variety
of domains and text types increases knowledge, vocabulary, and
reading comprehension (Stanovich, 2000). Inference is a key
process for comprehension of scientific text (Otero et al., 2002)
and was heavily weighted in both the PISA reading literacy and
science literacy measures.
The emphasis in constructionist theories on forming a well-
integrated mental model for a domain (Graesser et al., 1994;
Kintsch, 1998; Linn & Eylon, 2006; van den Broek et al., 2005)

could be the common variable underlying both reading and science achievement. This might explain why a number of class-
room science interventions appear to increase both reading com-
prehension and science proficiency. For example, Romance and
Vitale (2008) found that building science content knowledge
through reading, writing, inquiry, and discussion—but no explicit
instruction in, e.g., comprehension strategies or vocabulary—led
to increases on both the Iowa Test of Basic Skills comprehension
subtest and the MAT science achievement test.
There are other plausible third factors that I could not in-
vestigate with these data; the high correlations between reading
comprehension and science proficiency could be due to artifacts
of measurement, instructional practices, students’ familiarity with
standardized tests, alignment of home and school culture or prac-
tices (especially with regard to science practices; e.g., Lee, Fradd,
& Sutman, 1995), or other causes.
Although I am not able to test the three primary hypotheses
with these data, I lean toward the third explanation: that back-
ground knowledge, reading comprehension strategies, general
vocabulary, inference, and other products of extensive reading ex-
perience also drive higher science proficiency. This is consistent
with several reading-related initiatives in science education, which
have found increases in science achievement using broad read-
ing interventions—ones that instruct students in strategies, vocab-
ulary, background knowledge, and writing (e.g., Guthrie et al.,
2004; Shymansky, Yore, & Anderson, 2004). Results of this study
support the notion that in order to increase scores on science pro-
ficiency tests, one productive avenue might be to increase compre-
hension of written science texts. Although strategy instruction is
probably the most popular approach to increasing content area
literacy, it remains an empirical question which aspect or aspects
of science text comprehension ought to be emphasized in such
instruction, and we have begun a series of studies to investigate
this (Cromley, Snyder, Luciw, & Tanaka, 2009).

Acknowledgements

I thank the office of the Temple University Vice President for Re-
search for funds to support this research and Ulana Luciw and
Lindsey Snyder for editorial assistance with the manuscript.

Notes

1. Under NCLB, students must be tested in science at least once in each of three grade ranges (once in late elementary school, grades 3–5; once in middle school, grades 6–9; and once in high school, grades 10–12).
2. The year and month of birth are included in the data set, but for privacy
reasons neither the exact age nor the date of data collection are given.
3. In schools with fewer than 35 eligible students, all 15-year-old students participated.
4. As in PISA 2000, the year and month of birth are included in the data set, but
for privacy reasons neither the exact age nor the date of data collection are
given.
5. As in PISA 2000, in schools with fewer than 35 eligible students, all 15-year-old students participated.
6. As in PISA 2000, the year and month of birth are included in the data set, but
for privacy reasons neither the exact age nor the date of data collection are
given.
7. As in PISA 2000, in schools with fewer than 35 eligible students, all 15-year-old students participated.

References

Best, R., Rowe, M., Ozuru, Y., & McNamara, D. (2005). Deep-level comprehension of science texts: The role of the reader and the text. Topics in Language Disorders, 25(1), 65–83.
Braunger, J., Donahue, D. M., Evans, K., & Galguera, T. (2005). Rethinking prepa-
ration for content area teaching: The reading apprenticeship approach. San Francisco:
Jossey-Bass.
Bray, G. B., Pascarella, E. T., & Pierson, C. T. (2004). Postsecondary education
and some dimensions of literacy development: An exploration of longitudinal
evidence. Reading Research Quarterly, 39(3), 306–330.
Cromley, J. G., Snyder, L. E., Luciw, U. A., & Tanaka, J. (2009, June). Testing the
fit of the DIME model of reading comprehension with biology text. Paper presented at
the meeting of the Society for the Scientific Study of Reading, Boston, MA.
Demps, D. L., & Onwuegbuzie, A. J. (2001). The relationship between eighth-
grade reading scores and achievement on the Georgia High School Gradua-
tion Test. Research in the Schools, 8(2), 1–9.
Dempster, E. R., & Reddy, V. (2007). Item readability and science achievement
in TIMSS 2003 in South Africa. Science Education, 91(6), 906–925.
Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during
narrative text comprehension. Psychological Review, 101(3), 371–395.
Guthrie, J. T., Wigfield, A., Barbosa, P., Perencevich, K. C., Taboada, A., Davis,
M. H., et al. (2004). Increasing reading comprehension and engagement
through Concept-Oriented Reading Instruction. Journal of Educational Psychology, 96(3), 403–423.
Hapgood, S., & Palincsar, A. S. (2006–2007). Where literacy and science inter-
sect. Educational Leadership, 64(4), 56–61.
Haught, P. A., & Walls, R. T. (2004). Relationships of reading, MCAT, and
USMLE Step 1 Test results for medical students. Reading Psychology: An In-
ternational Quarterly, 25(2), 83–92.
Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge, England:
Cambridge University Press.
Lee, O., Fradd, S. H., & Sutman, F. X. (1995). Science knowledge and cogni-
tive strategy use among culturally and linguistically diverse students. Journal
of Research in Science Teaching, 32(8), 797–816.
Linn, M. C., & Eylon, B.-S. (2006). Science education: Integrating views of learn-
ing and instruction. In P. A. Alexander & P. H. Winne (Eds.), Handbook of ed-
ucational psychology (2nd ed.; pp. 511–544). Mahwah, NJ: Lawrence Erlbaum.
Liu, X. (2004). Using concept mapping for assessing and promoting relational conceptual change in science. Science Education, 88(3), 373.
Medina, M., & Mishra, S. (1994). Relationships among Spanish reading achieve-
ment and selected content areas for fluent- and limited-Spanish-proficient
Mexican Americans. Bilingual Review/Revista Bilingue, 19(2), 134–141.
National Research Council. (2007). Rising above the gathering storm: Energizing and
employing America for a brighter future. Washington, DC: Author.
No Child Left Behind Act of 2001. Pub. L. No. 107-110, 20 U.S.C. (2001).
Nolen, S. B. (2003). Learning environment, motivation, and achievement in
high school science. Journal of Research in Science Teaching, 40(4), 347–368.
Norris, S. P., & Phillips, L. M. (2003). How literacy in its fundamental sense is
central to scientific literacy. Science Education, 87(2), 224–240.
Organisation for Economic Co-operation and Development. (2001). Knowledge
and skills for life: First results from the OECD programme for international student
assessment (PISA) 2000. Paris: Author.
Organisation for Economic Co-operation and Development. (2002). PISA 2000
technical report. Paris: Author.
Organisation for Economic Co-operation and Development. (2003). The PISA
2003 assessment framework: Mathematics, reading, science and problem solving knowl-
edge and skills. Paris: Author.
Organisation for Economic Co-operation and Development. (2004). Learning for
tomorrow’s world: First results from PISA 2003. Paris: Author.
Organisation for Economic Co-operation and Development. (2005a). PISA 2003
technical report. Paris: Author.
Organisation for Economic Co-operation and Development. (2005b). PISA 2003
data analysis manual. Paris: Author.
Organisation for Economic Co-operation and Development. (2007). PISA 2006:
Science competencies for tomorrow’s world. Paris: Author.
O’Reilly, T., & McNamara, D. S. (2007). The impact of science knowledge, read-
ing skill, and reading strategy knowledge on more traditional “high-stakes”
measures of high school students’ science achievement. American Educational
Research Journal, 44(1), 161–196.

Otero, J., León, J. A., & Graesser, A. C. (Eds.). (2002). The psychology of science text
comprehension. Hillsdale, NJ: Lawrence Erlbaum.
Purcell-Gates, V., Duke, N. K., & Martineau, J. A. (2007). Learning to read and write genre-specific text: Roles of authentic experience and explicit teaching. Reading Research Quarterly, 42(1), 8–45.
Ritchie, S., Rigano, D., & Duane, A. (2008). Writing an ecological mystery in class: Merging genres and learning science. International Journal of Science Education, 30(2), 143–166.
Romance, N. R., & Vitale, M. R. (2008, March). Science IDEAS: A knowledge-based
model for accelerating reading/literacy through in-depth science learning. Paper pre-
sented at the annual meeting of the American Educational Research Associa-
tion, New York.
Roth, K. J., Drucker, S. L., Garnier, H. E., Lemmens, M., Chen, C., Kawanaka,
T., et al. (2006). Teaching science in five countries: Results from the TIMSS
1999 video study statistical analysis report. Washington, DC: U.S. Department
of Education. Retrieved June 16, 2006, from http://www.nces.ed.gov
Shymansky, J. A., Yore, L. D., & Anderson, J. O. (2004). Impact of a school dis-
trict’s science reform effort on the achievement and attitudes of third- and
fourth-grade students. Journal of Research in Science Teaching, 41(8), 771–790.
Smith, M. C. (2000). The real-world reading practices of adults. Journal of Literacy
Research, 32(1), 25–52.
Stanovich, K. (2000). Progress in understanding reading: Scientific foundations and
new frontiers. New York: Guilford.
Trefil, J. (2007). Why science? New York: Teachers College Press.
van den Broek, P., Rapp, D. N., & Kendeou, P. (2005). Integrating memory-
based and constructionist processes in accounts of reading comprehension.
Discourse Processes, 39(2), 299–316.
Yore, L. D., Bisanz, G. L., & Hand, B. M. (2003). Examining the literacy com-
ponent of science literacy: 25 years of language arts and science research.
International Journal of Science Education, 25(6), 689–725.
Yore, L. D., & Treagust, D. F. (2006). Current realities and future possibilities:
Language and science literacy—Empowering research and informing instruc-
tion. International Journal of Science Education, 28(2–3), 291–314.
Zmach, C. C., Sanders, J., Patrick, J. D., Dedeoglu, H., Charbonnet, S., Henkel, M., et al. (2007). Infusing reading into science learning. Educational Leadership, 64(4), 62–66.
