
Assessing Students' Motivation and Learning Strategies in the Classroom Context:
The Motivated Strategies for Learning Questionnaire

Teresa Garcia and Paul R. Pintrich

In M. Birenbaum et al. (eds.), Alternatives in Assessment of Achievements, Learning
Processes and Prior Knowledge. © Kluwer Academic Publishers, 1996.

Current research on student classroom learning stresses the importance of
considering both motivational and cognitive components of academic performance
(Garcia & Pintrich, 1994; Pintrich & De Groot, 1990). Motivational components
include students' perceptions of the classroom environment as well as their self-
related beliefs such as personal goals, self-efficacy, interest, and value beliefs.
Cognitive components include students' content knowledge as well as various
cognitive learning strategies such as rehearsal, elaboration, and organization, and
metacognitive strategies such as planning, monitoring, and regulating learning
(Garcia & Pintrich, 1994). Research in both experimental and field settings has
consistently shown that positive motivational beliefs such as perceptions of high self-
efficacy, a focus on mastery goals, high value and interest in the task or content, and
low levels of test anxiety are positively related to greater cognitive engagement in
terms of the use of cognitive and metacognitive strategies as well as actual academic
performance (see Pintrich & Schrauben, 1992 for a review).

Given that both motivational and cognitive components are important for
classroom learning, how can we assess them in the classroom context? Of course, in
laboratory studies there are a number of techniques that can be used, including
reaction time or think-aloud protocols to measure strategy use, and actual
experimental manipulations to induce certain types of motivational goals (cf. Ericsson
& Simon, 1993; Graham & Golan, 1991). Although these types of techniques can
provide good construct validity and high internal validity, they do sacrifice some
external validity and generalizability to the classroom setting. For example, students
have self-efficacy beliefs for most academic tasks based on their past history of
success and failure with similar tasks, while the laboratory task may be relatively
unfamiliar to them. Students also have differing levels of personal interest and value
for classroom academic tasks. In contrast, most laboratory tasks probably have low
value for the typical student, as the artificiality of certain tasks (e.g., ring-tossing;
puzzle completion) may make the activity seem unimportant or not meaningful to
"real" life (however, the novelty of certain laboratory tasks can make them very
interesting to students). Accordingly, given that students' motivational beliefs for
laboratory tasks may be qualitatively different from their motivational beliefs for
classroom academic work, and that experimental settings may not adequately tap into the complex
and multifaceted nature of students' beliefs regarding their classroom academic work,
how might we realistically assess these beliefs in the classroom and link them to
students' cognitive and metacognitive learning strategies? For researchers who aspire
to study the interaction of motivation and cognition in a classroom setting, this
presents a serious problem with no one "correct" solution, just tradeoffs among the
strengths and weaknesses of various methods.

ASSESSING MOTIVATION AND LEARNING STRATEGIES IN THE CLASSROOM

As regards the methods that can be used in a classroom setting, reaction time and
think-aloud protocols are rather difficult to use in terms of pragmatic concerns and
may also limit ecological validity. However, observations, stimulated recall,
interviews, and questionnaires can all be used in classroom settings. Of course,
observations (both high and low inference quantitative observational schemes as well
as more qualitative and ethnographic techniques) can be used to assess students'
motivation and cognition. In fact, many indicators of motivation are behavioral in
nature, such as choice of tasks, level of effort on tasks, and persistence at tasks.
These three behaviors are all good indicators of a student who is motivated for the
task. However, in most current motivational models, simple observation of the
behaviors of choice, effort, and persistence is considered inadequate for
characterizing student motivation. Both attribution theory (Weiner, 1986) and goal
theory (Ames, 1992) suggest that students' perceptions of the task and themselves, as
well as their achievement behaviors, have implications for future cognition,
motivation, and affect. For example, in goal theory, two students may both
demonstrate high levels of effort and persistence on an academic task, but if one is
mastery-oriented and the other is performance-oriented, then these qualitative
differences in goal orientation can have a dramatic effect on subsequent cognitions,
attributions, motivation, and affect (Ames, 1992).

Accordingly, from this general constructivist perspective, it is very important to
also collect data on students' perceptions and beliefs about the task and their
behavior, not just the behavioral indices that can be generated from observational
data. Of course, if the observational data include student speech and discourse in the
classroom, the actual statements could be coded for indicators of student motivation
(e.g., Thorkildsen & Nicholls, 1991) or actual strategy use (e.g., Corno, 1989).
However, this type of observational data is not easy to collect or use. The collection
and transcription of this type of data is very time-consuming. Moreover, the coding
of the data is fraught with reliability and validity issues. In particular, it is not clear
how to characterize the representativeness of the discourse and codes for other
students or classrooms besides the ones actually in the sample (Carter, 1993).

Stimulated recall methods can provide the same type of rich descriptive data that
is generated by observational methods, and can also supply measures of students'
beliefs and perceptions of their behavior. By having students respond to a videotaped
replay of their behavior from an earlier classroom session, researchers can ground
students' self reports of their motivation and cognition in actual classroom behavior.
This approach assists in limiting some of the validity problems with self-report data,
provided that the time lag between actual behavior and stimulated recall of the
behavior is not too lengthy (Ericsson & Simon, 1993). In particular, stimulated
recalls may allow the researcher to assess students' beliefs and cognitions at a fairly
small "grain size" (Howard-Rose & Winne, 1993), thereby providing data about
microlevel cognitive and motivational processes that influence learning.
Additionally, stimulated recall techniques can be used as classroom events unfold
over time, unlike think-aloud protocols which could not be used in an actual
classroom setting during instruction. Stimulated recall techniques also can be used
with large numbers of students, although data collection is time-consuming.
Stimulated recall methods may provide the best data in terms of reliability and
validity, but have the disadvantages of being expensive, time-consuming, and
impractical for researchers with limited resources.

SELF-REPORT MEASURES IN THE CLASSROOM

Other self-report methods such as interviews and questionnaires are often the
most practical and easy to use in classroom settings. They can be administered
relatively easily, and in the case of self-report questionnaires with closed-ended items,
scored and prepared for complex data analyses fairly quickly. Most importantly,
given their ease of use, questionnaires can be used with large and diverse samples
which can increase the generalizability of the findings. Of course, there are reliability
and validity problems with these types of self-report instruments as well.

Reliability of Self-Reports

In terms of reliability over time, Assor and Connell (1992) suggest that students'
self-assessments of their competence may not be stable when they are quite young
because in fact, children's perceptions of competence are changing quite rapidly as a
function of development and experience. Nevertheless, these researchers note that
children's perceptions of competence can show moderate stability, enough to meet
basic psychometric requirements that allow for valid assessment (Assor & Connell,
1992). Indeed, this general issue of stability is pertinent not only to competence
beliefs, but also to other motivational factors and to the use of cognitive strategies for
learning. That is, it may be that the most adaptive or self-regulated learners do
modify and change their beliefs and strategies as a function of the task or context. In
this case, traditional estimates of stability over time are difficult to use.

Besides the stability issue, other researchers suggest that the internal consistency
or coherence of factor structures generated from self-report questionnaires may vary
with age (Pintrich & De Groot, 1990; Wigfield & Eccles, 1992). For example, in our
own work, somewhat different factor structures emerge from our questionnaire data
with junior high school and college students (cf. Pintrich & De Groot, 1990; Pintrich,
Smith, Garcia, & McKeachie, 1993), but the results still fit within our general
conceptual model. Future research needs to address whether these developmental
differences in factor structures are a function of method variance or actually reflect
developmental differences in cognition and motivation. Although these types of
developmental differences make for interesting problems in building and elaborating
theoretical models, they do not necessarily invalidate the use of self-report
instruments as long as developmentally and methodologically appropriate factor
structures are used.

Validity of Self-Reports

Besides these reliability issues, the overall validity of self-report questionnaires
or interviews has been questioned throughout the history of empirical psychology.
There are often concerns about the social desirability of students' responses.
Although this is always a concern to keep in mind when using questionnaires or
interviews, in our own work, when we have included measures of social desirability
(e.g., the Crowne-Marlowe social desirability scale), these measures of response bias
did not account for any significant amount of variance and did not change our final
results. In terms of motivational beliefs, there also is concern about the inaccuracy of
competence beliefs, in particular that students often overestimate their competence.
However, Assor and Connell (1992) suggest that, although these beliefs may not
reflect "reality" in terms of agreement with grades or achievement test scores, their
longitudinal data shows that two years later, these inflated perceptions of competence
actually do relate to achievement. Hence, these "overestimates" of competence are
adaptive and help students who are not doing that well cope with the demands of the
tasks and maintain their effort. In general, if one adopts a constructivist perspective
regarding student motivation, then it is crucial to assess students' perceptions of their
own motivation, not just "objective" measures of motivation such as observations and
teachers' or parents' reports.

The validity of self-reports of cognitive strategy use is not so easily resolved,
however. In this case, students' self-reports from interviews or questionnaires may
not reflect actual strategy use. Actual observations or some behavioral indicator of
strategy use provide better construct validity. In addition, these behavioral measures
can be used to assess the smaller grain size of the more basic or microlevel cognitive
processes that make up cognitive and metacognitive strategy use (Howard-Rose &
Winne, 1993). These measures are very useful in helping us understand which of the
many possible cognitive processes or strategies contribute most to self-regulated
learning. However, at the larger grain size of more global indicators of strategy use or
metacognition, self-reports such as interviews or questionnaires can be quite useful.
These more global measures can help us decide if any cognitive or metacognitive
strategy use is taking place. For example, in our own work with the Motivated
Strategies for Learning Questionnaire (MSLQ), we have found over and over again
that the three general aspects of metacognition (planning, monitoring, and regulating)
do not load onto separate factors in factor analyses, but rather load together onto one factor.
From this questionnaire data we would not want to conclude that the theoretical
distinctions between planning, monitoring, and regulating are not useful. We would
leave the explication of the relations between the three aspects to more experimental
studies where the processes could be examined in more microlevel detail. However,
our results do suggest that when students engage in some aspects of metacognition,
they tend to report doing all three aspects and they also do better in terms of actual
achievement, which is in line with our general assumptions about self-regulated
learning.
In addition, self-reports of strategy use can be improved when students are asked
to report on concrete behaviors that they could engage in, not abstract cognitive
operations. Accordingly, in our instrument development, we try to have items that
ask students about actual behaviors they might use as they study their course material.
For example, we ask students if they outline their course material or write short
summaries of their readings and lecture notes to assess their use of cognitive
strategies. For metacognition, we ask them if they reread course material when they
can't understand it, not if they "monitor and regulate" their reading comprehension
(see Appendix). Of course, some of our items are more global than those, but most
students should be able to report if they engage in certain types of behaviors.
However, there may be a lower developmental limit to young children's ability to use
these types of self-report items. We have had success with the MSLQ with children as
young as fifth and sixth graders, but in the early elementary grades these items may
be difficult for them. It may be that they are not able to understand the items or lack
the metacognitive awareness to even report on their own behavior. At the same time,
it may not be just developmental; it may be that the nature of the context and the types
of academic tasks in the early elementary grades do not provide affordances for the
use of these strategies. In this case, lacking the opportunities to develop strategies,
we would not expect young children to be able to report on their use of strategies very
well.

In any event, carefully designed self-report instruments can probably be used
with students in the upper elementary grades and beyond. They can provide a
relatively efficient and practical measure of students' motivation and use of learning
strategies. In addition, depending on how they are constructed, they can be used to
assess motivation and learning strategies in a manner that is ecologically valid for the
classroom setting. In the remainder of this paper, we outline the development of one
such instrument - the Motivated Strategies for Learning Questionnaire. We discuss
the development of the questionnaire, present some data on the reliability and validity
of the scales, and discuss how we have used it in our own research program on
motivation and cognition in the classroom.

DESCRIPTION AND DEVELOPMENT OF THE MOTIVATED STRATEGIES FOR LEARNING QUESTIONNAIRE

The Motivated Strategies for Learning Questionnaire (MSLQ) is a self-report
instrument designed to assess college students' motivational orientation and their use
of different learning strategies for a college course. The MSLQ is based on a general
social-cognitive view of motivation and learning strategies, with the student
represented as an active processor of information whose beliefs and cognitions are
important mediators of instructional input and task characteristics. By focusing on
the roles of both motivation and cognition in the classroom, the MSLQ also addresses
recent advances in self-regulated learning, which emphasizes the interface between
motivation and cognition (Schunk & Zimmerman, 1994; Zimmerman & Schunk,
1989). This theoretical framework distinguishes the MSLQ from many of the older
study skill inventories (e.g., Brown & Holtzman, 1967; Christensen, 1968; Goldman
& Warren, 1973), which have been criticized for being atheoretical (e.g., Weinstein &
Underwood, 1985), and measures of learning styles, which proceed from an
individual differences framework (e.g., Lockhart & Schmeck, 1984; Torrance,
Reynolds, Riegel, & Ball, 1977). In contrast to another widely used self-report
instrument, the Learning and Study Strategies Inventory (the LASSI, Weinstein,
Palmer, & Schulte, 1987), the MSLQ takes a more detailed view of the motivational
processes involved in self-regulated learning, and contextualizes motivation and
learning strategies by assessing them at the course level, rather than at a general level.

The MSLQ has been under formal development since 1986, when the National
Center for Research to Improve Postsecondary Teaching and Learning
(NCRIPTAL) at the University of Michigan was funded, and under informal
development since 1982.
During 1982-1986, self-report instruments to assess students' motivation and use of
learning strategies (varying from 50 to 140 items) were used to evaluate the
effectiveness of the "Learning to Learn" course offered at the University of Michigan
(see McKeachie, Pintrich, & Lin, 1985; Pintrich, McKeachie & Lin, 1987). These
measures were used with over 1000 University of Michigan undergraduates enrolled
in the course. These early instruments were subjected to the usual statistical and
psychometric analyses, including internal reliability coefficient computation, factor
analysis, and correlations with academic performance and aptitude measures (e.g.,
SAT scores). The items have undergone continuous revisions on the basis of these
results. The formal development of the MSLQ began in earnest when NCRIPTAL
was founded in 1986. NCRIPTAL was funded for research on college populations
excluding major research institutions like the University of Michigan. Accordingly,
the MSLQ was administered at three collaborating institutions in the Midwest: a four-
year public, comprehensive university; a small liberal arts college; and a community
college. There were three major waves of data collection with previous versions of
the MSLQ used with students from these three institutions: 1986, 1987, and 1988.
The items on these previous versions of the MSLQ also underwent the usual
statistical and psychometric analyses including internal reliability coefficient
computation, factor analyses, and correlations with academic performance measures.
The first wave of data collected in 1986 included 326 students; the second wave in
1987 included 687 students; and the third wave in 1988 included 758 students. After
each of these waves the data were analyzed and items revised as the conceptual model
underlying the instrument was refined.

The final version of the MSLQ presented in this paper reflects the past dozen
years of work on these various waves of data collection (see the Appendix for a copy
of the most current version of the MSLQ). The instrument is designed to be given in
class and takes approximately 20-30 minutes to administer. There are two sections to
the MSLQ, a motivation section and a learning strategies section. The 81 items on
this version of the MSLQ are scored on a seven-point Likert scale, from 1 (not at all
true of me) to 7 (very true of me). The motivation section consists of 31 items that
assess students' goals and value beliefs for a course, their beliefs about their skills to
succeed in a course, and their anxiety about tests in a course (see Figure 1). The
learning strategy section includes 50 questions: 31 items regarding students' use of
different cognitive and metacognitive strategies and 19 items concerning student
management of different learning resources (see Figure 2).

The questionnaire as a whole and all items are designed to be answered in terms
of the students' motivation and use of learning strategies for a specific course.
Students usually take the questionnaire during the actual meeting time of the class and
are asked to respond about their motivation and use of learning strategies for that
specific course. By having them respond to the MSLQ while physically sitting in the
classroom for the course with the instructor, the other students, and course books and
materials actually present, we hope that these cues will stimulate the respondents to
think about their actual beliefs and behavior for that course, thereby increasing
accuracy.

Figure 1. Measurement Model of Student Motivation Based on the MSLQ (Pintrich et al., 1993). Note: Numbers on arrows represent lambda-ksi estimates of the standardized solution. [Path diagram not reproduced.]

Figure 2. Measurement Model of Student Learning Strategies Based on the MSLQ (Pintrich et al., 1993). Note: Numbers on arrows represent lambda-ksi estimates of the standardized solution. [Path diagram not reproduced.]

In addition, our theoretical model assumes that students' motivation and
learning strategies are contextualized and situation-specific, not generalized
individual differences or learning styles. Accordingly, we did not operationalize the
MSLQ at the general level of having students respond in terms of their general
approach to all learning situations or all classroom learning (cf. the LASSI,
Weinstein, Zimmerman & Palmer, 1988). We assume that students' motivation
varies for different courses (e.g., more interest or value in an elective course vs. a
required course; more efficacy for an easier course in psychology in comparison to a
difficult math or physics course) and that their strategy use might vary as well
depending on the nature of the academic tasks (e.g., multiple choice vs. essay exams).
At the same time, in terms of practical utility, we did not think it useful to
operationalize the questionnaire items in terms of all the various tasks and situations
that a student might confront in one course (e.g., studying for a test; trying to
understand one lecture; reading one chapter in a textbook; writing a final paper or
studying for the comprehensive final exam). We chose the course level as an
appropriate level for our items as a reasonable compromise between the very general
and global level of all learning situations and the impractical and unwieldy level of
every specific situation within one course.

Scale scores are constructed by taking the mean of the items that make up that
scale. For example, intrinsic goal orientation has four items (see Table 1 and
Appendix). An individual's score for intrinsic goal orientation would be computed by
summing the four items and taking the average. There are some negatively worded
items and the ratings should be reversed before an individual's score is computed, so
that the statistics reported represent the positive wording of all the items and higher
scores indicate greater levels of the construct of interest. The 15 different scales on
the MSLQ can be used together or singly. The scales are designed to be modular and
can be used to fit the needs of the researcher or instructor. The motivational scales
are based on a broad social-cognitive model of motivation that proposes three general
motivational constructs (Pintrich, 1988a, 1988b, 1989): expectancy, value, and affect.
Expectancy components refer to students' beliefs that they can accomplish a task, and
two MSLQ subscales are directed towards assessing perceptions of self-efficacy and
control beliefs for learning. Our definition and measurement of self-efficacy is a bit
broader than other measures (e.g., the LASSI, Weinstein, Zimmerman & Palmer,
1988), in that both expectancy for success (which is specific to task performance) and
judgments of one's ability to accomplish a task and confidence in one's skills to
perform a task are collapsed within the general term self-efficacy. Control beliefs for
learning refer to students' beliefs that outcomes are contingent upon their own effort,
rather than external factors such as the teacher or luck. Value components focus on
the reasons why students engage in an academic task. Three subscales are included in
the MSLQ to measure value beliefs: intrinsic goal orientation (a focus on learning and
mastery), extrinsic goal orientation (a focus on grades and approval from others), and
task value beliefs (judgments of how interesting, useful, and important the course
content is to the student). The third general motivational construct is affect, and has
been operationalized in terms of responses to the test anxiety scale, which taps into
students' worry and concern over taking exams.
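The scoring procedure described above (reverse-code negatively worded items on the 7-point scale, then average the items in each scale) can be sketched in a few lines. The item numbers come from Table 1; the function name and the sample ratings are our own illustration, not part of the instrument:

```python
def score_scale(responses, items, reversed_items=()):
    """Mean of the listed items, reverse-coding negatively worded ones.

    responses: dict mapping item number -> rating on the 7-point scale
    items: item numbers that make up the scale (see Table 1)
    reversed_items: subset of `items` that are negatively worded
    """
    total = 0.0
    for item in items:
        rating = responses[item]
        if item in reversed_items:
            rating = 8 - rating  # flip a 1..7 Likert rating
        total += rating
    return total / len(items)

# Intrinsic goal orientation is items 1, 16, 22, and 24 (hypothetical ratings):
print(score_scale({1: 6, 16: 5, 22: 7, 24: 6}, [1, 16, 22, 24]))  # -> 6.0
```

Effort regulation, for example, would pass items 37, 48, 60, and 74 with items 37 and 60 flagged as reversed, matching the r markers in Table 1.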

The learning strategies section of the instrument is based on a general cognitive


model of learning and information processing (see Weinstein & Mayer, 1986). There
are three general types of scales: cognitive, metacognitive, and resource
management. Cognitive strategies include students' use of basic and complex
strategies for the processing of information from texts and lectures. The most basic
cognitive strategy subscale provides a measure of the use of rehearsal by students
(e.g., repeating the words over and over to oneself to help in the recall of
information). The use of more complex strategies is measured by two subscales
concerning the use of elaboration strategies (e.g., paraphrasing, summarizing) and
organization strategies (e.g., outlining, creating tables). In addition, a subscale on
critical thinking is included, which refers to students' use of strategies to apply
previous knowledge to new situations or make critical evaluations of ideas. The
second general category is metacognitive control strategies, which is measured by one
large subscale concerning the use of strategies that help students control and regulate
their own cognition. This subscale includes planning (setting goals), monitoring (of
one's comprehension), and regulating (e.g., adjusting reading speed depending on the
task). The third general strategy category is resource management, which includes
four subscales on students' regulatory strategies for controlling other resources
besides their cognition. These strategies include managing one's time and study
environment (e.g., using one's time well, having an appropriate place to study), as
well as regulation of one's effort (e.g., persisting in the face of difficult or boring
tasks). Finally, the remaining two subscales, peer learning (e.g., using a study group
or friends to help learn) and help-seeking (e.g., seeking help from peers or instructors
when needed) focus on the use of others in learning.

Table 1
Coefficient Alphas and Items Comprising the Fifteen MSLQ Scales

Scale                                       Items Comprising the Scale          Alpha

Motivation Scales
Intrinsic Goal Orientation                  1, 16, 22, 24                        .74
Extrinsic Goal Orientation                  7, 11, 13, 30                        .62
Task Value                                  4, 10, 17, 23, 26, 27                .90
Control of Learning Beliefs                 2, 9, 18, 25                         .68
Self-Efficacy for Learning & Performance    5, 6, 12, 15, 20, 21, 29, 31         .93
Test Anxiety                                3, 8, 14, 19, 28                     .80

Learning Strategies Scales
Rehearsal                                   39, 46, 59, 72                       .69
Elaboration                                 53, 62, 64, 67, 69, 81               .75
Organization                                32, 42, 49, 63                       .64
Critical Thinking                           38, 47, 51, 66, 71                   .80
Metacognitive Self-Regulation               33r, 36, 41, 44, 54, 55, 56, 57r,
                                            61, 76, 78, 79                       .79
Time & Study Environment Management         35, 43, 52r, 65, 70, 73, 77r, 80r    .76
Effort Regulation                           37r, 48, 60r, 74                     .69
Peer Learning                               34, 45, 50                           .76
Help-Seeking                                40r, 58, 68, 75                      .52

Note. Items marked r are reverse-coded.
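The alphas in Table 1 are internal-consistency estimates. For reference, Cronbach's coefficient alpha can be computed from raw item scores as below; this is a minimal pure-Python sketch with toy data, not the authors' analysis code:

```python
def cronbach_alpha(item_scores):
    """Coefficient alpha from per-item score lists (one rating per respondent).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(item_scores)          # number of items on the scale
    n = len(item_scores[0])       # number of respondents

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(pvar(it) for it in item_scores) / pvar(totals))

# Two perfectly correlated items yield the maximum (toy data, three respondents):
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # -> 1.0
```

Higher intercorrelation among a scale's items drives alpha toward 1; relatively weak intercorrelation shows up as values like the .52 reported for Help-Seeking.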
PSYCHOMETRIC PROPERTIES OF THE MSLQ

Construct Validity

In order to test the utility of the theoretical model and its operationalization in the
final version of the MSLQ scales, we used data gathered from 380 Midwestern
college students enrolled in 37 classrooms (spanning 14 subject domains and five
disciplines: natural science, humanities, social science, computer science, and foreign
language) to perform two confirmatory factor analyses: one for the set of motivation
items and another for the set of cognitive and metacognitive strategy items (Pintrich,
Smith, Garcia, & McKeachie, 1993). Structural equation modeling was used to
estimate parameters and test the models. In contrast to exploratory factor analysis,
confirmatory factor analysis requires the identification of which items (indicators)
should fall onto which factors (latent variables). Parameter estimates for the model
specified were generated using maximum likelihood, and tests for goodness-of-fit
were made. The goodness-of-fit tests assessed how well the correlations
reproduced under the specified model "matched up" with the input set of
correlations. In other words, confirmatory factor analysis allowed for a quantitative
test of the theoretical model. For example, we have four items that are assumed to be
indicators of a construct called Intrinsic Goal Orientation. The confirmatory factor
analysis tested how closely the input correlations could be reproduced given the
constraints that Items 1, 16, 22, and 24 fall onto one specific factor (Intrinsic Goal
Orientation); that Items 7, 11, 13, and 30 fall onto another factor (Extrinsic Goal
Orientation); that Items 4, 10, 17, 23, 26, and 27 fall onto another (Task Value), and
so forth. Each item on the MSLQ was constrained to fall on one specific latent factor.
The 31 motivation items were tested to see how well they fit six correlated latent
factors: (1) intrinsic goal orientation, (2) extrinsic goal orientation, (3) task value, (4)
control beliefs about learning, (5) self-efficacy for learning and performance, and (6)
test anxiety (see Figure 1 for the measurement model). The 50 cognitive strategy
items were tested to see how well they fit nine correlated latent factors: (1) rehearsal,
(2) elaboration, (3) organization, (4) critical thinking, (5) metacognitive self-
regulation, (6) time and study environment management, (7) effort regulation, (8)
peer learning, and (9) help seeking (see Figure 2 for the measurement model).
Therefore, the measurement models tested in the analyses followed the theoretical
framework, and the structural models freely estimated the covariances between the
latent constructs.
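The reproduced-correlation logic behind these goodness-of-fit tests can be sketched directly: under a confirmatory factor model the implied item correlations are Lambda Phi Lambda-transpose (loadings times factor correlations), and the residual between implied and observed correlations is what an index such as the RMR summarizes. The loadings and "observed" values below are invented for illustration, not the published MSLQ estimates.

```python
import numpy as np

# Hypothetical standardized loadings for four items on two correlated
# factors (illustrative values, NOT the published MSLQ estimates).
lam = np.array([[0.7, 0.0],
                [0.6, 0.0],
                [0.0, 0.8],
                [0.0, 0.5]])          # Lambda: one column per latent factor
phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])          # Phi: factor correlations

# Model-implied item correlations: Lambda Phi Lambda' off the diagonal,
# with unit diagonals (the uniquenesses absorb the remaining variance).
implied = lam @ phi @ lam.T
np.fill_diagonal(implied, 1.0)

# Pretend "observed" correlations, each lower-triangle entry off by .02,
# to show how the root mean residual (RMR) summarizes the discrepancy.
observed = implied.copy()
idx = np.tril_indices_from(observed, k=-1)
observed[idx] += 0.02
rmr = np.sqrt(np.mean((observed[idx] - implied[idx]) ** 2))
print(round(rmr, 3))  # 0.02
```

In the actual analyses the parameters were estimated by maximum likelihood rather than fixed in advance, but the comparison of reproduced and input correlations works the same way.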

The goodness of fit indices generated by the LISREL program suggested that the
general model of motivational components with six scales and the general model of
cognitive components with nine scales were indeed reasonable representations of the
data (Pintrich et al., 1993; cf. Garcia & Pintrich, 1991). Several omnibus fit statistics
were calculated: the chi-square to degrees of freedom ratio (χ²/df); the goodness-of-
fit and adjusted goodness-of-fit indices (GFI and AGFI); and the root mean residual
(RMR). A χ²/df ratio of less than 5 is considered indicative of a good fit
between the observed and reproduced correlation matrices (Hayduk, 1987); a GFI or
AGFI of .9 or greater and an RMR of .05 or less are heuristic values that indicate that
the model "fits" the input data well. The motivation model (see Figure 1) resulted in
a GFI of .77, an AGFI of .73, an RMR of .07, and generated a χ²/df ratio of 3.49
(Pintrich et al., 1993). The six correlated latent factors model appears to be the best
fitting representation of the input data, as the largest modification index provided by
LISREL VI was 50.2, and making the modification did not substantively "improve"
the overall fit indices for the motivation model (e.g., the GFI increased from .773 to
.784; the RMR decreased from .074 to .072). Constraining the 50 learning strategies
items to fall onto nine correlated latent factors generated a χ²/df ratio of 2.26, a GFI
of .78, an AGFI of .75, and an RMR of .08. The nine correlated latent factors model
appears to be the best fitting representation of the input data, as the largest
modification index provided by LISREL VI was 91.59, and modifying the model did
not substantively "improve" the overall fit indices for the learning strategies model
(e.g., the GFI increased from .779 to .789; the RMR decreased from .078 to .076).
These results provide support for the soundness of the measurement and theoretical
models for the two sections of the MSLQ.
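The heuristic cutoffs just described are easy to encode. The helper below is our own convenience wrapper (not part of LISREL); the fit statistics plugged into it are the values reported above for the two measurement models.

```python
def fit_verdict(chi2_df, gfi, agfi, rmr):
    """Apply the heuristic cutoffs described in the text:
    chi-square/df < 5, GFI and AGFI >= .90, RMR <= .05."""
    return {"chi2/df": chi2_df < 5,
            "GFI": gfi >= 0.90,
            "AGFI": agfi >= 0.90,
            "RMR": rmr <= 0.05}

# Fit statistics reported above for the two MSLQ measurement models:
motivation = fit_verdict(chi2_df=3.49, gfi=0.77, agfi=0.73, rmr=0.07)
strategies = fit_verdict(chi2_df=2.26, gfi=0.78, agfi=0.75, rmr=0.08)
print(motivation)  # only the chi2/df criterion is met
```

As the output makes clear, both models satisfy only the χ²/df criterion outright, which is why the modification-index check reported above carries much of the argument for the factor structure.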

On a more basic level, the correlations among the MSLQ scales suggest that the
scales are valid measures of the motivational and cognitive constructs. The value and
expectancy scales, intrinsic goal orientation, extrinsic goal orientation, task value,
control of learning beliefs, and self-efficacy, were all positively correlated with one
another, with rs ranging from .14 to .68 (see Table 2). Test anxiety was modestly
correlated with the value and expectancy scales in the expected directions. Test
anxiety was negatively correlated with the "positive" motivational beliefs of intrinsic
goal orientation, task value, control of learning beliefs, and self-efficacy as would be
expected theoretically. It was positively correlated with extrinsic goal orientation, a
motivational belief that focuses on getting good grades and performing well, so it is
not surprising that students who are concerned about grades would show more test
anxiety (Garcia & Pintrich, 1991; Pintrich et al., 1993). As expected, all the cognitive
strategy and resource management scales were positively related to one another, with
rs ranging from .10 to .70 (see Table 2). Peer learning and help-seeking were
generally more weakly correlated with the other scales: their correlations with
cognitive strategies and other resource management strategies range from .10 to .28.

Table 2
Correlations Among MSLQ scales

Intr Extr Tskv Cont Slfef Tanx Reh Elab Org Crit Mcg Tstdy Efft Prlrn
Extr .15
Tskv .68 .18
Cont .29 .14 .30
Slfef .59 .15 .51 .44
Tanx -.15 .23 -.14 -.10 -.37
Reh .10 .23 .12 .02 .10 .11
Elab .48 .13 .44 .22 .35 -.13 .36
Org .27 .09 .19 .02 .21 -.05 .49 .52
Crit .58 .06 .39 .18 .42 -.11 .15 .57 .31
Mcg .50 .07 .45 .17 .46 -.24 .39 .67 .55 .53
Tstdy .32 .13 .37 .00 .32 -.17 .38 .44 .44 .25 .58
Efft .43 .11 .47 .07 .44 -.21 .26 .44 .36 .25 .61 .70
Prlrn .13 .20 .09 -.03 .05 .10 .21 .19 .23 .25 .15 .10 .05
Hsk .10 .08 .16 .00 .08 .08 .18 .28 .22 .19 .25 .21 .18 .55
Note. Intr: Intrinsic Goal Orientation; Extr: Extrinsic Goal Orientation; Tskv: Task
Value; Cont: Control of Learning Beliefs; Slfef: Self-Efficacy for Learning and
Performance; Tanx: Test Anxiety; Reh: Rehearsal; Elab: Elaboration; Org:
Organization; Crit: Critical Thinking; Mcg: Metacognitive Self-Regulation; Tstdy:
Time and Study Environment Management; Efft: Effort Regulation; Prlrn: Peer
Learning; Hsk: Help-Seeking.

Finally, the motivational and learning strategies scales were correlated in the expected
directions. The positive motivational beliefs of intrinsic goal orientation, task value,
self-efficacy, and control of learning were positively associated with the use of
cognitive, metacognitive, and resource management strategies. At the same time, test
anxiety was negatively related to the use of cognitive, metacognitive, and resource
management strategies. Although some of the correlations are low, as a whole the
relationships between the constructs are in the directions predicted by theory. Indeed,
the low correlations may be interpreted as evidence for the orthogonality of these
constructs: that the motivation scales are measuring different aspects of motivation,
and that the learning strategies scales are measuring different classroom learning
tactics.

With regard to the interface between motivation and cognition, we have found
consistent patterns of relationships between students' motivational beliefs and their
cognitive engagement. Self-efficacy, task value, and an intrinsic goal orientation are
typically the motivational factors most highly correlated with higher-order strategies
such as elaboration, organization, critical thinking and metacognitive regulation
(average r = .39), as well as strategies involving the management of one's time, study
space, and effort (average r = .38). These three factors are less strongly related to the
use of rehearsal strategies (average r = .10) and to peer learning or help-seeking
(average r = .09). The weak relationships to rehearsal strategies may be due to the
widespread use of memorization across college courses (i.e., so that students at all
levels of motivation would use rehearsal strategies to the same extent). In terms of
peer learning and help-seeking, the low correlations with self-efficacy, task value, and
intrinsic goal orientation may be due to students' lack of opportunities to engage in
collaboration with their peers, and to a commonly-seen reluctance to seek assistance
for college-level courses. An extrinsic goal orientation and internal control beliefs for
learning are less strongly related to cognitive engagement (average r = .12), but are in
the expected directions. Finally, we have found that test anxiety is consistently
negatively related to higher-order strategies such as elaboration, organization, critical
thinking, and metacognitive regulation, as well as to the management of one's time,
study space, and effort (average r = -.15). In contrast, test anxiety is positively related
(albeit weakly) to the use of rehearsal strategies, peer learning, and help seeking
(average r = .10). As a whole, these patterns of correlations indicate that positive,
more desirable motivational beliefs (i.e., self-efficacy, task value, intrinsic goal
orientation) are related to higher levels of cognitive engagement, whereas less
desirable motivational beliefs (e.g., extrinsic goal orientation, test anxiety) are either
weakly or negatively related to cognitive engagement.
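The averages quoted in this section can be checked directly against Table 2. For instance, test anxiety's correlations with the four higher-order strategies and the two time/effort management scales average to about -.15 (the values below are copied from the table; we assume the text uses plain arithmetic means of the correlations rather than Fisher's z-transformed means).

```python
# Test anxiety's correlations from Table 2 with elaboration, organization,
# critical thinking, metacognitive self-regulation, time/study environment
# management, and effort regulation:
tanx_rs = [-.13, -.05, -.11, -.24, -.17, -.21]
avg_r = sum(tanx_rs) / len(tanx_rs)
print(round(avg_r, 2))  # -0.15
```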

Internal Consistency and Reliability

Internal consistency estimates of reliability (coefficient alphas) lend additional
support for the strength of the psychometric properties of the MSLQ subscales (see
Table 1). The coefficient alphas for the motivational scales are robust, demonstrating
good internal consistency (Pintrich & Garcia, 1991; Pintrich, Smith, Garcia, &
McKeachie, 1991; Pintrich et al., 1993). Task value beliefs concerning students'
ratings about how interesting, useful, and important the course material is to them
typically have a very high alpha (averaging .90 across our datasets), as do students'
judgments of their self-efficacy for learning (averaging .93). The Test Anxiety and
Intrinsic Goal Orientation subscales yielded good internal consistency estimates
(generally .80 and .74 respectively). Extrinsic goal orientation and control of learning
beliefs tend to show more variability in students' responses, with coefficient alphas
averaging at about .65. Similarly, the alphas for the learning strategies scales are
reasonable, with most of the coefficient alphas averaging above .70. However, help-
seeking typically has the lowest alpha (below .60). This scale asks about seeking help
from both peers and instructors and it may be that students tend to seek help from
only one of these sources. Taken together, however, the confirmatory factor analyses
discussed above and alphas of each of the fifteen scales suggest that the general
model of motivational components with six scales and cognitive components with
nine scales are a reasonable representation of the data.
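Coefficient alpha, the internal consistency estimate used throughout, follows the standard formula α = (k/(k−1))(1 − Σ item variances / variance of the total score). A minimal sketch, with deliberately consistent fabricated responses (so the resulting alpha comes out high):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents-by-items matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fabricated 1-7 responses to a four-item scale (rows = students):
resp = np.array([[5, 6, 5, 6],
                 [3, 3, 4, 3],
                 [6, 7, 6, 7],
                 [2, 3, 2, 2],
                 [4, 4, 5, 4]])
print(round(cronbach_alpha(resp), 2))  # 0.98
```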

Predictive Validity

We have examined predictive validity in terms of the relations between the
MSLQ scales and standardized course grades (course grades were standardized to
control for instructor grading differences). The motivational subscales showed
significant correlations with final grade, and were in the expected directions, adding
to the validity of the scales. Students who approached their course with an intrinsic
goal for learning, who believed that the material was interesting and important, who
had high self-efficacy beliefs for accomplishing the tasks, and who rated themselves
as in control of their learning were more likely to do well in terms of course grade
(average r = .29). At the same time, students who reported being anxious about tests
were less likely to do well in the course (average r = -.26; e.g., Pintrich & Garcia,
1991; Pintrich et al., 1993).

Most of the learning strategy subscales also showed the expected correlations
with course grade. Students who relied on deeper processing strategies like
elaboration, organization, critical thinking, and metacognitive self-regulation were
more likely to receive higher grades in the course (average r = .21). Students who
successfully managed their own time and study environment, as well as their own
efforts (persistence at difficult tasks) were more likely to perform better in their
courses (average r = .30). Surprisingly, the use of rehearsal, peer learning, and help-
seeking strategies was not significantly related to grades; this may be due to the fact
that both high and low achieving students engage in these strategies to the same
extent.

Multivariate analyses have lent further support for the predictive utility of the
MSLQ. For students in the computer and natural sciences, the fifteen subscales
accounted for a total of 39% of the variance in final course grade; self-efficacy and
time and study environment management were the strongest predictors, with betas of
.35 and .49, respectively. For students in the social sciences, humanities, and foreign
language classes, the fifteen subscales accounted for a total of 17% of the variance in
final course grade; however, the two strongest predictors, test anxiety and effort
management, were only marginally significant (p < .10), with betas of -.12 and .16,
respectively. In other studies (e.g., Pintrich & De Groot, 1990) we found that a subset
of these variables accounted for 22% of the variance in final course grade. Given that
many factors can account for the variance in the grades that teachers assign, these
modest amounts of explained variance seem reasonable.
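The multivariate analyses described here are ordinary multiple regressions of standardized course grade on the subscale scores. A minimal sketch with fabricated data (the .35 and .49 weights echo the betas reported above for self-efficacy and time/study management, but the scores and sample are invented, and the real analyses entered all fifteen subscales):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Fabricated standardized subscale scores for two predictors:
self_eff = rng.normal(size=n)
time_mgmt = rng.normal(size=n)
# Fabricated standardized grades with a built-in dependence on both:
grade = 0.35 * self_eff + 0.49 * time_mgmt + rng.normal(scale=0.8, size=n)

X = np.column_stack([np.ones(n), self_eff, time_mgmt])  # intercept + predictors
beta, *_ = np.linalg.lstsq(X, grade, rcond=None)        # OLS estimates
pred = X @ beta
r2 = 1 - ((grade - pred) ** 2).sum() / ((grade - grade.mean()) ** 2).sum()
print(round(r2, 2))
```

R² here is the proportion of grade variance the predictors account for, the same quantity as the 39%, 17%, and 22% figures cited above.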
Practical Utility

It has been our policy to provide students feedback on the MSLQ as a form of
compensation for their participation in our studies. We have chosen nine scales of the
MSLQ (Task Value, Self-Efficacy for Learning and Performance, Test Anxiety,
Rehearsal, Elaboration, Organization, Metacognition, Time and Study Environment
Management, and Effort Regulation) on which to give students feedback. The
student's individual scores, the class scale means, and quartile information for that
class are included in the feedback form. We provide descriptions of each scale and
also offer suggestions to students on how to increase their levels of motivation and
strategy use. Although we have not done any formal research on the effects of this
feedback on students' motivation, use of learning strategies, and performance,
students do tell us that they find the feedback quite helpful and informative. We have
also provided instructors with feedback on their students' motivation and use of
learning strategies (at the group level, not the individual student level), and instructors
too have found this information helpful in adapting the content and pace of the class.
Of course, the amount and type of feedback may be adapted to the researcher's or
instructor's needs.
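Assembling that feedback is straightforward: given a class's scores on one scale, the class mean and the quartile a student's score falls into can be computed as follows (all scores fabricated; the real forms report this for each of the nine selected scales).

```python
import numpy as np

# Fabricated class scores on one MSLQ scale (1-7 response range):
class_scores = np.array([3.2, 4.1, 5.5, 4.8, 6.0, 3.9, 5.1, 4.4])
student = 5.1

class_mean = class_scores.mean()
q1, q2, q3 = np.percentile(class_scores, [25, 50, 75])  # quartile cut points
quartile = 1 + (student > q1) + (student > q2) + (student > q3)
print(f"class mean {class_mean:.2f}; student falls in quartile {quartile}")
```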

We have not provided norms for the MSLQ and have no plans to do so given our
theoretical assumptions of situation-specificity. It is designed to be used at the course
level. As noted previously, we assume that students' responses to the items might
vary as a function of different courses, so that the same individual might report
different levels of motivation or strategy use depending on the course. If the user
desires norms for comparative purposes over time, we suggest the development of
local norms for the different courses or instructors at the local institution. The 15
different scales on the MSLQ can be used together or singly. The scales are designed
to be modular and can be used to fit the needs of the researcher or instructor. The
instrument is designed to be given in class and takes approximately 20-30 minutes to
administer. Because of its modularity, flexibility, ease of administration, and sound
psychometric properties, the MSLQ has proven to be a practical and useful means for
assessing college students' motivation and learning strategies.

CONCLUSIONS

Although the validity of self-report measures has been questioned (e.g., Nisbett
& Wilson, 1977), the criticisms made are themselves flawed, as they stem from data
on respondents' misattributions (inaccurate ascriptions to "why did X happen?")
rather than respondents' reports about their behaviors (e.g., "I do X when I study") or
attitudes (Ericsson & Simon, 1993). That is, direct articulation of information stored
in memory (such as a behavior in which one engages or an attitude which one holds)
has been shown to be accurate and veridical, whereas verbalizations which are
products of intermediate processing, such as abstractions, inferences, or attributions,
are more subject to distortion (Ericsson & Simon, 1993). According to Ericsson &
Simon (1993), the issue for researchers then becomes one of methodology, and they
provide ample evidence for the utility of verbal reports as data. Similarly, survey
researchers, who use self-report methods almost exclusively, have an entire literature
on question-writing (e.g., Converse & Presser, 1986) and response effects (e.g.,
Bradburn, 1983; Wentland & Smith, 1993). This body of work suggests that while
certain information may be unavailable in memory, the degree of response accuracy
and consistency (even to sensitive questions) varies according to specific attributes of
questions such as wording or length, indicating that particular conditions and stimuli
facilitate information retrieval (Wentland & Smith, 1993). Admittedly, the use of
self-report questionnaires does trade some internal validity for external validity, but
we are confident that given careful construction of questions and conscientious
administration of the instrument, relatively high levels of accuracy may be
maintained.

The results suggest that the Motivated Strategies for Learning Questionnaire has
relatively good reliability in terms of internal consistency. The general theoretical
framework and the scales that measure it seem to be valid given the results of the two
confirmatory factor analyses. The six motivational subscales and the nine learning
strategies subscales represent a coherent conceptual and empirically validated
framework for assessing student motivation and use of learning strategies in the
college classroom (Pintrich et al., 1993). The six motivational scales measure three
general components of college student motivation that seem to be distinct factors. In
addition, the learning strategy scales represent an array of different cognitive,
metacognitive, and resource management strategies that can be reliably distinguished
from one another on both conceptual and empirical grounds. Finally, the subscales
seem to show reasonable predictive validity. The motivational scales were related to
academic performance in the expected directions. In the same fashion, the learning
strategies scales were positively related to course grade. These significant, albeit
modest relations with course grade are reasonable, given the many other factors that
are related to college course grade that are not measured by the MSLQ (individual
course grades themselves are not very reliable measures of performance or learning).
The MSLQ seems to represent a useful, reliable, and valid means for assessing
college students' motivation and use of learning strategies in the classroom.

REFERENCES

Ames, C. (1992). Classrooms: Goals, structures, and student motivation. Journal
of Educational Psychology, 84, 261-271.
Assor, A., & Connell, J. (1992). The validity of students' self-reports as
measures of performance affecting self-appraisals. In D.H. Schunk & J. Meece,
(Eds.), Student perceptions in the classroom (pp. 25-47). Hillsdale, NJ: Erlbaum.
Bradburn, N. (1983). Response effects. In P. H. Rossi, J. D. Wright, & A. B.
Anderson (Eds.), Handbook of survey research (pp. 289-328). New York: Academic
Press.
Brown, W., & Holtzman, W. (1967). Survey of study habits and attitudes. New
York: Psychological Corporation.
Carter, K. (1993). The place of story in the study of teaching and teacher
education. Educational Researcher, 22(1), 5-12.
Christensen, F. A. (1968). College adjustment and study skills inventory. Berea,
OH: Personal Growth Press.
Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the
standardized questionnaire. Newbury Park, CA: Sage.
Corno, L. (1989). Self-regulated learning: A volitional analysis. In B.J.
Zimmerman & D.H. Schunk (Eds.), Self-regulated learning and academic
achievement: Theory, research, and practice (pp. 111-141). New York: Springer-
Verlag.
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as
data (revised edition). Cambridge, MA: MIT Press.
Garcia, T., & Pintrich, P. R. (1991, April). Student motivation and self-regulated
learning: A LISREL model. Paper presented at the annual meeting of the American
Educational Research Association, Chicago, IL.
Garcia, T., & Pintrich, P. R. (1994). Regulating motivation and cognition in the
classroom: The role of self-schemas and self-regulatory strategies. In D.H. Schunk &
B.J. Zimmerman (Eds.), Self-regulation of learning and performance: Issues and
educational applications (pp. 127-153). Hillsdale, NJ: Erlbaum.
Goldman, R., & Warren, R. (1973). Discriminant analysis of study strategies
connected with college grade success in different major fields. Journal of
Educational Measurement, 10, 39-47.
Graham, S., & Golan, S. (1991). Motivational influences on cognition: Task
involvement, ego involvement, and depth of information processing. Journal of
Educational Psychology, 83, 187-194.
Hayduk, L. A. (1987). Structural equation modeling with LISREL: Essentials
and advances. Baltimore, MD: Johns Hopkins University Press.
Howard-Rose, D., & Winne, P. (1993). Measuring component and sets of
cognitive processes in self-regulated learning. Journal of Educational Psychology,
85, 591-604.
Jöreskog, K. G., & Sörbom, D. (1986). LISREL: Analysis of linear structural
relationships by the method of maximum likelihood: User's guide. Mooresville, IN:
Scientific Software.
Lockhart, D., & Schmeck, R. (1984). Learning styles and classroom evaluation
methods: Different strokes for different folks. College Student Journal, 17, 94-100.
McKeachie, W. J., Pintrich, P. R., & Lin, Y. G. (1985). Teaching learning
strategies. Educational Psychologist, 20, 153-160.
Pintrich, P. R. (1988a). A process-oriented view of student motivation and
cognition. In J. Stark & L. Mets (Eds.), Improving teaching and learning through
research: New directions for institutional research (Vol. 57, pp. 65-79). San
Francisco: Jossey-Bass.
Pintrich, P. R. (1988b). Student learning and college teaching. In R. E. Young &
K. E. Eble (Eds.), College teaching and learning: Preparing for new commitments.
New directions for teaching and learning (Vol. 33, pp. 71-86). San Francisco:
Jossey-Bass.
Pintrich, P. R. (1989). The dynamic interplay of student motivation and
cognition in the college classroom. In C. Ames & M.L. Maehr (Eds.), Advances in
motivation and achievement: Motivation-enhancing environments (Vol. 6, pp. 117-
160). Greenwich, CT: JAI Press.
Pintrich, P. R., & De Groot, E. (1990). Motivational and self-regulated learning
components of classroom academic performance. Journal of Educational Psychology,
82, 33-40.
Pintrich, P. R., & Garcia, T. (1991). Student goal orientation and self-regulation
in the college classroom. In M. L. Maehr & P. R. Pintrich (Eds.), Advances in
motivation and achievement: Goals and self-regulatory processes (Vol. 7, pp. 371-
402). Greenwich, CT: JAI Press.
Pintrich, P. R., McKeachie, W. J., & Lin, Y. G. (1987). Teaching as a course in
learning to learn. Teaching of Psychology, 14, 81-86.
Pintrich, P.R., & Schrauben, B. (1992). Students' motivational beliefs and their
cognitive engagement in classroom academic tasks. In D.H. Schunk & J. Meece
(Eds.), Student perceptions in the classroom (pp. 149-183). Hillsdale, NJ: Erlbaum.
Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A
manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor,
MI: University of Michigan, National Center for Research to Improve Postsecondary
Teaching and Learning.
Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993).
Reliability and predictive validity of the Motivated Strategies for Learning
Questionnaire (MSLQ). Educational and Psychological Measurement, 53, 801-813.
Schunk, D., & Zimmerman, B. (1994). Self-regulation of learning and
performance: Issues and educational application. Hillsdale, NJ: Erlbaum.
Thorkildsen, T., & Nicholls, J. (1991). Students' critiques as motivation.
Educational Psychologist, 26, 347-368.
Torrance, E. P., Reynolds, C. R., Riegel, T., & Ball, O. (1977). Your style of
learning and thinking: Forms A and B. The Gifted Child Quarterly, 21, 563-573.
Weiner, B. (1986). An attributional theory of motivation and emotion. New
York: Springer-Verlag.
Weinstein, C. E., & Mayer, R. E. (1986). The teaching of learning strategies. In
M. Wittrock (Ed.), Handbook of research on teaching (pp. 315-327). New York:
Macmillan.
Weinstein, C. E., Palmer, D. R., & Schulte, A. C. (1987). Learning and study
strategies inventory. Clearwater, FL: H&H Publishing.
Weinstein, C. E., & Underwood, V. L. (1985). Learning strategies: The how of
learning. In J. W. Segal, S. F. Chipman, & R. Glaser (Eds.), Thinking and learning
skills: Relating instruction to research (Vol. I, pp. 241-258). Hillsdale, NJ:
Erlbaum.
Weinstein, C. E., Zimmerman, S. A., & Palmer, D. R. (1988). Assessing learning
strategies: The design and development of the LASSI. In C. E. Weinstein, E. T.
Goetz, & P. A. Alexander (Eds.), Learning and study strategies: Issues in assessment,
instruction, and evaluation (pp. 25-40). New York: Academic Press.
Wentland, E. J., & Smith, K. W. (1993). Survey responses: An evaluation of
their validity. New York: Academic Press.
Wigfield, A., & Eccles, J. (1992). The development of achievement task values:
A theoretical analysis. Developmental Review, 12, 265-310.
Zimmerman, B. J., & Schunk, D. H. (1989). Self-regulated learning and academic
achievement: Theory, research, and practice. New York: Springer-Verlag.
APPENDIX

The MSLQ Items

Part A. Motivation

1. In a class like this, I prefer course material that really challenges me so I can
learn new things.
2. If I study in appropriate ways, then I will be able to learn the material in this
course.
3. When I take a test I think about how poorly I am doing compared with other
students.
4. I think I will be able to use what I learn in this course in other courses.
5. I believe I will receive an excellent grade in this class.
6. I'm certain I can understand the most difficult material presented in the
readings for this course.
7. Getting a good grade in this class is the most satisfying thing for me right now.
8. When I take a test I think about items on other parts of the test I can't answer.
9. It is my own fault if I don't learn the material in this course.
10. It is important for me to learn the course material in this class.
11. The most important thing for me right now is improving my overall grade
point average, so my main concern in this class is getting a good grade.
12. I'm confident I can learn the basic concepts taught in this course.
13. If I can, I want to get better grades in this class than most of the other students.
14. When I take tests I think of the consequences of failing.
15. I'm confident I can understand the most complex material presented by the
instructor in this course.
16. In a class like this, I prefer course material that arouses my curiosity, even if it
is difficult to learn.
17. I am very interested in the content area of this course.
18. If I try hard enough, then I will understand the course material.
19. I have an uneasy, upset feeling when I take an exam.
20. I'm confident I can do an excellent job on the assignments and tests in this
course.
21. I expect to do well in this class.
22. The most satisfying thing for me in this course is trying to understand the
content as thoroughly as possible.
23. I think the course material in this class is useful for me to learn.
24. When I have the opportunity in this class, I choose course assignments that I
can learn from even if they don't guarantee a good grade.
25. If I don't understand the course material, it is because I didn't try hard enough.
26. I like the subject matter of this course.
27. Understanding the subject matter of this course is very important to me.
28. I feel my heart beating fast when I take an exam.
29. I'm certain I can master the skills being taught in this class.
30. I want to do well in this class because it is important to show my ability to my
family, friends, employer, or others.
31. Considering the difficulty of this course, the teacher, and my skills, I think I
will do well in this class.
Part B. Learning Strategies

32. When I study the readings for this course, I outline the material to help me
organize my thoughts.
33. During class time I often miss important points because I'm thinking of other
things. (REVERSED)
34. When studying for this course, I often try to explain the material to a classmate
or friend.
35. I usually study in a place where I can concentrate on my course work.
36. When reading for this course, I make up questions to help focus my reading.
37. I often feel so lazy or bored when I study for this class that I quit before I finish
what I planned to do. (REVERSED)
38. I often find myself questioning things I hear or read in this course to decide if I
find them convincing.
39. When I study for this class, I practice saying the material to myself over and
over.
40. Even if I have trouble learning the material in this class, I try to do the work on
my own, without help from anyone. (REVERSED)
41. When I become confused about something I'm reading for this class, I go back
and try to figure it out.
42. When I study for this course, I go through the readings and my class notes and
try to find the most important ideas.
43. I make good use of my study time for this course.
44. If course readings are difficult to understand, I change the way I read the
material.
45. I try to work with other students from this class to complete the course
assignments.
46. When studying for this course, I read my class notes and the course readings
over and over again.
47. When a theory, interpretation, or conclusion is presented in class or in the
readings, I try to decide if there is good supporting evidence.
48. I work hard to do well in this class even if I don't like what we are doing.
49. I make simple charts, diagrams, or tables to help me organize course material.
50. When studying for this course, I often set aside time to discuss course material
with a group of students from the class.
51. I treat the course material as a starting point and try to develop my own ideas
about it.
52. I find it hard to stick to a study schedule. (REVERSED)
53. When I study for this class, I pull together information from different sources,
such as lectures, readings, and discussions.
54. Before I study new course material thoroughly, I often skim it to see how it is
organized.
55. I ask myself questions to make sure I understand the material I have been
studying in this class.
56. I try to change the way I study in order to fit the course requirements and the
instructor's teaching style.
57. I often find that I have been reading for this class but don't know what it was
all about. (REVERSED)
58. I ask the instructor to clarify concepts I don't understand well.
59. I memorize key words to remind me of important concepts in this class.
60. When course work is difficult, I either give up or only study the easy parts.
(REVERSED)
61. I try to think through a topic and decide what I am supposed to learn from it
rather than just reading it over when studying for this course.
62. I try to relate ideas in this subject to those in other courses whenever possible.
63. When I study for this course, I go over my class notes and make an outline of
important concepts.
64. When reading for this class, I try to relate the material to what I already know.
65. I have a regular place set aside for studying.
66. I try to play around with ideas of my own related to what I am learning in this
course.
67. When I study for this course, I write brief summaries of the main ideas from
the readings and my class notes.
68. When I can't understand the material in this course, I ask another student in this
class for help.
69. I try to understand the material in this class by making connections between
the readings and the concepts from the lectures.
70. I make sure that I keep up with the weekly readings and assignments for this
course.
71. Whenever I read or hear an assertion or conclusion in this class, I think about
possible alternatives.
72. I make lists of important items for this course and memorize the lists.
73. I attend this class regularly.
74. Even when course materials are dull and uninteresting, I manage to keep
working until I finish.
75. I try to identify students in this class whom I can ask for help if necessary.
76. When studying for this course I try to determine which concepts I don't
understand well.
77. I often find that I don't spend very much time on this course because of other
activities. (REVERSED)
78. When I study for this class, I set goals for myself in order to direct my
activities in each study period.
79. If I get confused taking notes in class, I make sure I sort it out afterwards.
80. I rarely find time to review my notes or readings before an exam. (REVERSED)
81. I try to apply ideas from course readings in other class activities such as lecture
and discussion.
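Scoring a scale from these items amounts to averaging a student's responses after reflecting the items marked (REVERSED); on the MSLQ's 7-point response format (1 = not at all true of me, 7 = very true of me) a reversed response becomes 8 minus the rating. The sketch below uses the Effort Regulation items (37, 48, 60, and 74, with 37 and 60 reverse-coded, per the published manual); the responses are fabricated.

```python
def score_scale(responses: dict, item_nums: list, reversed_items: set) -> float:
    """Average 1-7 Likert responses for one scale, reflecting reversed items."""
    vals = [8 - responses[i] if i in reversed_items else responses[i]
            for i in item_nums]
    return sum(vals) / len(vals)

# Fabricated responses to the Effort Regulation items:
answers = {37: 2, 48: 6, 60: 1, 74: 5}
print(score_scale(answers, [37, 48, 60, 74], reversed_items={37, 60}))  # 6.0
```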
