Abstract: Asking questions is an important skill for teachers to practice and develop, as
being able to ask the right questions may be considered the key to interactive learning
that maximizes student involvement. This paper focuses on the development and
implementation of an observation instrument designed to record the types of questions a
teacher asks. The instrument was modified and tested during 10 class observations of
Japanese undergraduate students studying English. The classes comprised on average
20 undergraduate learners aged between 18 and 20. The lessons observed had a variety
of focuses, so the data recorded may be considered very context-specific. The conclusions
drawn examine the usefulness of the instrument and speculate on what may be learnt
from the collected data. In summary, the data collected was sample-specific and
dependent on the observer's interpretation of questions. However, it may be
considered useful for informing the observer's own teaching practice and questioning
technique, as well as helpful for the teacher being observed in assisting reflection on the
questions they asked during a lesson.
Introduction
Literature Review
Questions play an important role in classroom interaction and can serve a variety of
purposes. For example, 'Questions can be used for checking for understanding,
starting a discussion, inviting curiosity, beginning an inquiry, determining students'
prior knowledge and stimulating critical thinking' (Harris 2000:25). In essence,
questions play a vital role in the learning process, and arguably 'Effective use of
questioning arouses curiosity, stimulates interest, and motivates students to seek new
information' (Caram and Davis 2005:20). This view is also supported by Vogler
(2005:102).
The first stage in the instrument's design was to include space to record how the teacher
initiated the question, and whether the student was nominated or not, as this may affect their
ability to respond. It may be claimed that 'The distribution of questions should
include all students, yet be unpredictable so that all students know that their attention is
required' (Caram and Davis 2005:20). It would also have been possible to record whether
students were selected randomly or in a pattern. Both have benefits: random selection
ensures all students are kept alert, while pattern selection ensures that no student is left out
(Feldman 2003). However, at this stage this information was considered unnecessary
to record and would have overcomplicated the instrument.
The next step in the design was to record the overarching question type by identifying
the two main forms of question: convergent or divergent. Convergent questions focus
the respondent's answer towards something that the teacher usually already knows the
answer to, and require a short response or a simple yes or no answer to something
factual. Divergent questions, on the other hand, are open-ended, allowing for a range of
possible responses that are varied and unpredictable (House, Chassie and Spohn 1990).
Divergent questioning arguably gives students more scope for critical thinking,
problem solving and creativity in their answers. It may also help stimulate further
questions and answers from students.
The next facet of the instrument's design was to ascertain what level of thinking or
response the question requires from the student. As Wajnryb (1992:46) suggests:
'While teachers often plan their questions in terms of the lesson's content, they
seem to place less emphasis on considering their questions in terms of the cognitive and
linguistic demands placed on the learner.'
Therefore, the instrument was designed to help the teacher consider the cognitive
demand of questions on students. The instrument utilizes Bloom's Taxonomy (Bloom
1956), which ranks questioning style in order of the challenge or complexity of the
response required. The system orders thinking into the hierarchical stages of 1)
knowledge, 2) comprehension, 3) application, 4) analysis, 5) synthesis and finally 6)
evaluation.
evaluation. This ranking system suggests the latter stage questions may be more
difficult but activate higher level thinking and learning skills in students. This
125
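The taxonomy's ordering can be sketched as a simple lookup. The sketch below is illustrative only and is not part of the original instrument; in particular, the grouping of levels 1-3 as lower-order and 4-6 as higher-order is a common reading of Bloom's Taxonomy, not a distinction the instrument itself records.

```python
# Illustrative sketch: the six stages of Bloom's Taxonomy (Bloom 1956)
# as numbered on the observation instrument.
BLOOM_LEVELS = {
    1: "knowledge",
    2: "comprehension",
    3: "application",
    4: "analysis",
    5: "synthesis",
    6: "evaluation",
}

def describe_level(level: int) -> str:
    """Name the Bloom category for a recorded level number, with a
    conventional lower-/higher-order grouping (an assumption here)."""
    name = BLOOM_LEVELS[level]
    order = "lower-order" if level <= 3 else "higher-order"
    return f"{name} ({order})"
```

For example, `describe_level(1)` yields "knowledge (lower-order)", matching the claim that later stages demand more of the learner.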
Finally, the instrument was designed to record whether the question successfully
achieved a response or not. It would be possible to add
further detail here regarding whether the question was correctly answered, but the
primary purpose of the instrument was to record how effective questions are at
engaging students to respond, not how successfully students answer.
There were some areas explored in researching the instrument that were not included in
its final design. It would have been possible to record question sequencing to see how
teachers developed their line of questioning to activate higher-order thinking skills.
For example, it may be considered a good strategy to start with knowledge-level
questions and graduate to higher levels of thinking skills (Caram and Davis 2005).
Follow-up questions could also have been recorded to consider whether probing questions
were asked: 'Probes follow earlier, planned questions and are used to expand or alter
information presented in a previous response' (House, Chassie and Spohn 1990:197).
These questions attempt to gain further insight and explanation from a student's initial
answer and get them to extend, justify or clarify. They are used by the teacher to
challenge ideas and obtain further support for responses. This information will still be
recorded in capturing the questions but will not be explicitly identified using the
instrument. Finally, Wajnryb (1992) recommends recording question and answer sets
to see the impact of the type of question on the complexity of the response. However,
again it was decided that this level of detail would be overly complicated and
unnecessary to include on the observation instrument.
Another area considered but not included was the length of time the teacher waits for a
response after asking a question. House, Chassie and Spohn (1990) suggest that increasing
the wait time between questions improves the quality of student responses, as it allows
students to further consider and reflect on their answers. However, it was considered
that recording wait time would be more intrusive for the teacher being
observed, and would require accurate timing and attention that would not be
possible whilst recording other data. Therefore, this is an area that would require its
own instrument and focus.
Finally, a spatial observation schedule (Wallace 1998) was also considered for
recording the pattern and distribution of questions around the class, but this was
considered practical only if using a tally system, and still too complex if applied at the
level of detail this observation instrument also aims to record. The number of
options on the instrument was kept to a minimum to prevent it from becoming
too complex or involving so much writing that it would distract from concentration on
actually observing. Space was left at the bottom of the page for the teacher to write
important notes they felt relevant that could not be directly captured by the instrument.
The coding system was designed as follows. The columns for student-nominated or
whole-class questions were combined: (s) for a specified student or (w) for whole class,
where any student may answer. Convergent or divergent could equally be recorded as (c) or (d)
respectively. The stages of Bloom's Taxonomy were now numbered and recorded
from 1-6. Finally, whether a response was given was coded as (+), or (-) if there was no
response. This made recording data easier, as there were now fewer column
alternatives. For example, a question that specified a student, was convergent in nature
with the purpose of retrieving knowledge, and was responded to by the student
would be recorded in the table as SC1+. Although coding reduced the space
needed on the instrument, space still remained for recording the complete question.
This increases the potential accuracy and reliability of the data, as it allows the
teacher to write the question down and consider its categorization later if they are unable to make a
snap decision on how it should be coded.
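The four-character codes described above can be decoded mechanically. The following sketch is the editor's illustration of the scheme, not part of the study; the field names (`nomination`, `form`, and so on) are invented for readability.

```python
# Hypothetical decoder for the instrument's question codes, e.g. "SC1+":
# nomination (s/w), form (c/d), Bloom level (1-6), response (+/-).
from typing import NamedTuple

class QuestionRecord(NamedTuple):
    nomination: str   # "s" = specified student, "w" = whole class
    form: str         # "c" = convergent, "d" = divergent
    bloom_level: int  # 1 (knowledge) .. 6 (evaluation)
    answered: bool    # True if a response was given ("+")

def parse_code(code: str) -> QuestionRecord:
    """Decode one four-character entry from the instrument."""
    code = code.lower()
    return QuestionRecord(
        nomination=code[0],
        form=code[1],
        bloom_level=int(code[2]),
        answered=code[3] == "+",
    )
```

So `parse_code("SC1+")` recovers a specified-student, convergent, knowledge-level question that received a response.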
After observation 7, the instrument was once again re-evaluated and adapted to create
the final version. A new category was added for how the teacher dealt with the response.
If no response was given (-), the question could then be categorized as intercepted
by a student other than the one specified (i), rephrased for the student (r), or
redirected and passed on to another student (p); finally, the teacher may decide simply to move
on (m) to something else. Any other strategies the teacher may use could be recorded
in the comments section and added to the instrument at a later time. Because
this categorization was a late addition to the instrument, its data has not been included
in the table of results. Finally, a new box was added to tally procedural questions such
as 'Have you done your homework?' or 'Do you have a partner?', as these were
considered unsuitable for categorization under Bloom's Taxonomy and could otherwise
take up useful space on the instrument. (See Appendix A for the final design.)
Results
Figure 1. Figure 2. [Bar charts: number of questions recorded in each Bloom's Taxonomy category (Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation); vertical axis: number of questions, 0-120. Charts not reproduced.]
Analysis of Results
The results are highly specific to the content and context of each individual lesson,
so generalizations cannot easily be made. However, from examining the overall data it
can be seen that teachers generally split questions evenly between the whole class and
specified students. They also tended to ask more convergent than divergent questions.
Furthermore, the frequency of questions asked was highest for lower-order questions as
classified by Bloom's Taxonomy. The data shows that many more knowledge-based
questions were asked, and illustrates a steady decline with each tier until stage 6,
when evaluation questions were overall more frequent than the previous categories of
analysis and synthesis. Finally, it may also be noted that very few questions failed to
achieve a response from students. From examining the specific questions that did not
achieve a response, it can be claimed that divergent, higher-order questions often prove to
be the most difficult for students to respond to, and also occur less frequently in class.
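Summaries like those above can be tallied mechanically from the coded entries. The sketch below uses a small invented sample of codes purely for illustration; it is not the study's data.

```python
# Tallying coded entries (invented sample, not the study's data).
from collections import Counter

codes = ["SC1+", "WC1+", "WD6+", "SC2+", "WD6-", "SC1+"]

# Questions per Bloom level: the third character of each code.
by_level = Counter(int(code[2]) for code in codes)

# Divergent questions that drew no response, the pattern noted above.
divergent_no_response = sum(
    1 for code in codes
    if code[1].lower() == "d" and code.endswith("-")
)
```

On this sample, `by_level` shows knowledge-level questions dominating, and the single unanswered question is a divergent, evaluation-level one.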
The reliability of the results was greatest when the question itself was
accurately recorded, as it could be re-examined later. However, how the instrument
is applied may vary from teacher to teacher, as teachers may differ in opinion about where exactly
a question fits into the taxonomy. Applying Bloom's Taxonomy evenly and
consistently proved difficult; this problem would be magnified if different teachers
attempted to use the instrument and compare results. The lack of questions recorded
under synthesis may partly be due to the observer's lack of understanding of what
questions might fit there. Furthermore, even when questions were recorded,
classifying them out of context can be difficult, so great weight must be placed on the
observer's initial reaction to the question type. This mirrors previous
research findings. For example, in Morse and Davis's (1970) study, which also utilized
Bloom's Taxonomy, they found variation in how teachers applied their instrument
to the same class, particularly in the knowledge and comprehension categories.
Overall, the modifications made were successful in improving the instrument. They
allowed the responses to be coded, making them clearer for the teacher to process.
Coding also has the added benefit that data can be more easily displayed visually. In
the future, as the observer becomes more experienced at coding, it would make the
recording of information quicker and easier, and would allow a higher volume of
data to be collected without the question needing to be transcribed.
In conclusion, the instrument proved an effective tool for recording and analyzing the
style of teacher questioning. The research has demonstrated the importance of a
teacher considering how they select a respondent, ask a question and react to the
student's response or lack of response, as all of this can impact the effectiveness of their
questioning technique. However, the limitations of the instrument include that it does
not examine question sequencing or record wait time, both of which may influence the
student's ability to respond effectively. In particular, the length of time a teacher waits
after asking a question would be an interesting area for future research. Teachers
often feel uncomfortable leaving too much silence after a question, so it would be
interesting to learn whether providing the student with additional time increases the quality of
their response.
The scope of the instrument was limited to looking at the types of questions teachers
asked and how students responded; it did not consider in detail the students' answers or
how sequences of questions developed. Nor does it consider the accuracy of students'
responses, or whether they gave a correct answer. Finally, it did not consider the
distribution and pattern of questioning around the class. In future, if the teacher's
ability to code the questions improved, this would allow less space on the instrument
for writing and more possibility to record other useful data such as proximity patterns
or wait time. Overall, the process of designing the instrument has enabled the teacher
to reflect on many important areas of questioning that could be reviewed and observed
during lessons. This undoubtedly can have a positive impact on how the teacher or
observer approaches classroom questioning in the future and considers what, how and to
whom questions should be asked.
Acknowledgements
I would like to thank my colleagues at Kanda University of International Studies for
their kindness in allowing me to observe their classes.
References
Belland, C., Belland, A. and Price, T. (1971). Analysing Teacher Questions: A
Comparative Evaluation of Two Observation Systems. New York: American
Educational Research Association.
Caram, Chris A. and Davis, Patsy B. (2005). Inviting Student Engagement with
Questioning. Kappa Delta Pi Record, 42(1), 18-23.
Feldman, Sandra (2003). The Right Line of Questioning. Teaching Pre K-8, 33(4), 8.
House, Beverley M., Chassie, Marilyn B. and Spohn, Betty Bowling (1990). Questioning:
An Essential Ingredient in Effective Teaching. The Journal of Continuing
Education in Nursing, 21(5), 196-201.
Morse, Kevin R. and Davis, O. L., Jr. (1970). The Questioning Strategies Observation
System (QSOS). Texas: Austin University Research and Development Center
for Teacher Education.
Vogler, Kenneth E. (2005). Improve Your Verbal Questioning. The Clearing House: A
Journal of Educational Strategies, Issues and Ideas, 79(2), 98-103.
Appendix A: Final observation instrument
[The instrument is a table with 30 numbered rows, one per question, and the following columns:]
Question (written out in full)
Nomination: Specified (S) / Whole class (W)
Form: Convergent (C) / Divergent (D)
Bloom level: (1) Knowledge, (2) Comprehension, (3) Application, (4) Analysis, (5) Synthesis, (6) Evaluation
Response: Response (+) / No response (-)
Follow-up: Intercepted (I), Rephrased (R), Passed on (P), Moved on (M)
[Boxes at the bottom of the page:] Comments; Procedural Questions (tally)