Any Questions? Designing an Observation Instrument to Record Teacher Questioning


Peter Harrold, Kanda University of International Studies, Japan

Abstract: Asking questions is an important skill for teachers to practice and develop, as being able to ask the right questions may be considered the key to interactive learning that maximizes student involvement. This paper focuses on the development and implementation of an observation instrument designed to record the types of questions a teacher asks. The instrument was modified and tested during 10 class observations of Japanese undergraduate students studying English. The classes comprised on average 20 undergraduate learners aged between 18 and 20. The lessons observed had a variety of focuses, so the data recorded may be considered very context-specific. The conclusions drawn examine the usefulness of the instrument and speculate on what may be learnt from the collected data. In summary, the data collected was sample-specific and dependent on the observer's interpretation of questions. However, the instrument may be considered useful for informing the observer's own teaching practice and questioning technique, as well as helpful to the teacher being observed in assisting reflection on the questions they asked during a lesson.

Keywords: Observation instrument, Questions.

Introduction

This is my first year teaching at Kanda University of International Studies in Chiba, Japan, so this research was carried out in response to a specific area I found challenging in my initial weeks. I found students were often reluctant to voice an answer to questions pitched at the entire class, and classes would move at a slow pace as I attempted to entice answers from students. I therefore planned to develop an instrument to assist observations of experienced teachers and record the types, approaches and styles of questions they use to engage students into responding positively. The focus of the instrument's design was thus to record the types of questions a teacher asks and whether they were successful at achieving a response from the students.

Literature Review

Questions play an important role in classroom interaction, and can serve a variety of purposes. For example, "Questions can be used for checking for understanding, starting a discussion, inviting curiosity, beginning an inquiry, determining students' prior knowledge and stimulating critical thinking" (Harris 2000:25). In essence, questions play a vital role in the learning process, and arguably "Effective use of questioning arouses curiosity, stimulates interest, and motivates students to seek new information" (Caram and Davis 2005:20). This is supported by Vogler (2005:102), who claims that:

Asking questions and leading classroom discussions can have a positive impact on student learning. They can monitor student comprehension, help make connections to prior knowledge, and stimulate cognitive growth. But good questions and classroom discussions don't just happen. Verbal questioning is a skill, and like any skill, it must be practiced before it is mastered.

Therefore, in order to master questioning techniques, teachers need to be aware of the questions they ask in class; one method of achieving this is through classroom observation. In observing questioning, three key areas are often considered and recorded: who is called to answer, what they are asked and how they respond. For example, Morse and Davis (1970) designed an observation instrument that focused on three main stages of questioning: teacher initiation, student response and the teacher's reaction to the response. They considered this to cover a complete questioning interchange. Similarly, Belland, Belland and Price (1971) evaluated two observation systems for coding teacher questions and student responses that followed comparable categories of interaction. Based on these categorizations, the data collected from an observation instrument can be used to inform the teacher's own questioning technique, as it allows review of the form and style of questions, enabling us to consider how effective they are at involving all students in actively responding. Ultimately, the importance of effective questioning should not be underestimated, as "teaching is as much about what we ask students as what we tell them" (Feldman 2003:8).

Designing the Instrument

The first stage in the instrument's design was to include space to record how the teacher initiated the question, and whether the student was nominated or not, as this may affect their ability to respond. It may be claimed that 'The distribution of questions should include all students, yet be unpredictable so that all students know that their attention is required' (Caram and Davis 2005:20). It would also have been possible to record whether students were selected randomly or in a pattern. Both have benefits, as random selection ensures all students are kept alert, while pattern selection ensures no students will be left out (Feldman 2003). However, at this stage this information was considered unnecessary to record and would have overcomplicated the instrument.

The next step in the design was to record the overarching question type by identifying the two main forms of question: convergent or divergent. Convergent questions focus the respondent's answer towards something that the teacher usually already knows the answer to, and require a short response or a simple yes or no answer to something factual. Divergent questions, on the other hand, are open-ended, allowing for a range of possible responses that are varied and unpredictable (House, Chassie and Spohn 1990). Divergent questioning arguably gives students more scope for critical thinking, problem solving and creativity in their answers. It may also help stimulate further questions and answers from students.

The next facet of the instrument's design was to ascertain what level of thinking or response the question requires from the student. As Wajnryb (1992:46) suggests:

While teachers often plan their questions in terms of the lesson's content, they seem to place less emphasis on considering their questions in terms of the cognitive and linguistic demands placed on the learner.

Therefore, the instrument was designed to help the teacher consider the cognitive demand of questions on students. The instrument utilizes Bloom's Taxonomy (Bloom 1956), which ranks questioning style in order of the challenge or complexity of the response required. The system orders thinking into the hierarchical stages of 1) knowledge, 2) comprehension, 3) application, 4) analysis, 5) synthesis and finally 6) evaluation. This ranking suggests the later-stage questions may be more difficult but activate higher-level thinking and learning skills in students. The taxonomy was considered suitable to include on the instrument to provide a reference point for the type of response and thinking required from students. Krathwohl's (2002) revised taxonomy, which renames the initial two stages, knowledge and comprehension, as remember and understand respectively, was also considered, but essentially the stages remain the same. The system was applied using the following guidelines:
Stage and characteristics:
1) Knowledge (Remember): recall specific set information.
2) Comprehension (Understand): compare, organize, describe, summarize.
3) Application: solve problems using acquired knowledge.
4) Analysis: consider motives, causes, inferences.
5) Synthesis: compile information in different ways.
6) Evaluation: make judgements and defend ideas.
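
For readers who later wish to tally coded observations, these guidelines can be represented as a simple lookup table keyed by the stage numbers used in the coding described below. The Python sketch that follows is purely illustrative (the name BLOOM_STAGES is my own); it is not part of the published instrument.

```python
# Illustrative only: the six stages of Bloom's Taxonomy as applied on the
# instrument, keyed by the stage numbers (1-6) used later in the coding.
BLOOM_STAGES = {
    1: "Knowledge (Remember): recall specific set information",
    2: "Comprehension (Understand): compare, organize, describe, summarize",
    3: "Application: solve problems using acquired knowledge",
    4: "Analysis: consider motives, causes, inferences",
    5: "Synthesis: compile information in different ways",
    6: "Evaluation: make judgements and defend ideas",
}

print(BLOOM_STAGES[6])  # Evaluation: make judgements and defend ideas
```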

Finally, the instrument was designed to record whether or not the question successfully achieved a response. It would be possible to add further detail here regarding whether the question was correctly answered, but the primary purpose of the instrument was to record how effective the questions are at engaging the students to respond, not how successful the students are at answering correctly.

There were some areas explored in researching the instrument that were not included in its final design. It would have been possible to record question sequencing to see how teachers developed their line of questioning to activate higher-order thinking skills. For example, it may be considered a good strategy to start with knowledge-level questions and graduate to higher levels of thinking skills (Caram and Davis 2005). Follow-up questions could also have been recorded to consider whether probing questions were asked: "Probes follow earlier, planned questions and are used to expand or alter information presented in a previous response" (House, Chassie and Spohn 1990:197). These questions attempt to gain further insight and explanation from a student's initial answer and get them to extend, justify or clarify. They are used by the teacher to challenge ideas and gain further proof to support responses. This information will still be recorded in capturing the questions but will not be explicitly identified using the instrument. Finally, Wajnryb (1992) recommends recording question and answer sets to see the impact of the type of question on the complexity of the response. However, again it was decided that this level of detail would be overly complicated and unnecessary to include on the observation instrument.

Another area considered but not included was the length of time the teacher waits for a response after asking a question. House, Chassie and Spohn (1990) suggest that increasing the wait time between questions improves the quality of student responses, as it allows students to further consider and reflect on their answer. However, it was considered that recording wait time would be more intrusive on the teacher being observed, and would require accurate recording and attention that would not be possible whilst recording other data. Therefore, this is an area which would require its own instrument and focus.

Finally, a spatial observation schedule (Wallace 1998) was also considered for recording the pattern and distribution of questions around the class, but this was considered practical only if using a tally system, and still too complex if applied at the level of detail this observation instrument also wishes to record. The number of options on the instrument was kept to a minimum in order to prevent it from becoming too complex or involving so much writing that it would distract from concentration on actually observing. Space was left at the bottom of the page for the teacher to write any important notes they felt relevant that could not be directly captured by the instrument.

Modifications to the Instrument


The instrument was reviewed and redrafted between observations. The most significant change was to reduce the number of columns on the instrument and code the data. This made the data easier to read, display and analyze for trends after the observation than the existing system of ticks allowed. Ticks were also harder to read when processing the information, and made it difficult to see whether any categories had been inadvertently missed.

The coding system was designed as follows. The columns for student-nominated or whole-class questions were combined: (s) for a specified student or (w) for any student may answer. Convergent or divergent could equally be recorded as (c) or (d) respectively. The stages of Bloom's Taxonomy were now numbered and recorded from 1 to 6. Finally, whether a response was given was coded as (+), or (-) if there was no response. This made recording data easier, as there were now fewer column alternatives. For example, a question that specified a student, was convergent in nature with the purpose of retrieving knowledge, and was responded to by the student would be recorded in the table as SC1+. Despite coding reducing the space needed on the instrument, room still remained for recording the complete question, as this increases the potential accuracy and reliability of the data: it allows the teacher to write down the question and consider its categorization later if they are unable to make a snap decision on how it should be coded.
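
To illustrate how a coded entry such as SC1+ might be unpacked after an observation, the following sketch parses the four symbols back into the categories described above. This is a hypothetical illustration: the instrument itself is a paper form, and the function and type names here are my own.

```python
import re
from typing import NamedTuple

# One coded question, e.g. "SC1+" = specified student, convergent,
# Bloom stage 1 (knowledge), response achieved.
class CodedQuestion(NamedTuple):
    respondent: str   # "specified" or "whole class"
    form: str         # "convergent" or "divergent"
    bloom_stage: int  # 1-6
    responded: bool

CODE_PATTERN = re.compile(r"^([SW])([CD])([1-6])([+-])$", re.IGNORECASE)

def parse_code(code: str) -> CodedQuestion:
    """Parse an instrument code like 'SC1+' into its four categories."""
    match = CODE_PATTERN.match(code.strip())
    if match is None:
        raise ValueError(f"Unrecognized question code: {code!r}")
    who, form, stage, response = match.groups()
    return CodedQuestion(
        respondent="specified" if who.upper() == "S" else "whole class",
        form="convergent" if form.upper() == "C" else "divergent",
        bloom_stage=int(stage),
        responded=(response == "+"),
    )

print(parse_code("SC1+"))
# CodedQuestion(respondent='specified', form='convergent', bloom_stage=1, responded=True)
```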

After observation 7, the instrument was once again re-evaluated and adapted to create the final version. A new category was added for how the teacher dealt with the response. If no response was given (-), the question could then be categorized as intercepted by a student other than the one specified (I), rephrased for the student (r), or redirected and passed on to another student (p); finally, the teacher may decide simply to move on (m) to something else. Any other strategies the teacher may use could be recorded in the comments section and added to the instrument at a later time. Due to the late addition of this categorization, its data has not been included in the table of results. Finally, a new box was added to tally procedural questions such as "Have you done your homework?" or "Do you have a partner?", as these were considered unsuitable for categorization under Bloom's Taxonomy and could otherwise take up useful space on the instrument. (See Appendix A for the final design.)
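
The final version of the coding can be captured in the same style, with an optional fifth symbol for how the teacher dealt with a non-response. The sketch below assumes a combined string such as "WD4-r" (whole class, divergent, analysis, no response, rephrased); this concatenated format is my own illustrative convention rather than the paper's notation, which records the follow-up strategy in a separate column.

```python
import re

# Follow-up strategies recorded only when no response (-) is achieved.
FOLLOW_UPS = {
    "I": "intercepted by another student",
    "R": "rephrased for the student",
    "P": "passed on to another student",
    "M": "moved on",
}

FINAL_PATTERN = re.compile(r"^([SW])([CD])([1-6])([+-])([IRPM]?)$", re.IGNORECASE)

def describe_code(code: str) -> str:
    """Expand a final-version code such as 'WD4-r' into plain English."""
    match = FINAL_PATTERN.match(code.strip())
    if match is None:
        raise ValueError(f"Unrecognized question code: {code!r}")
    who, form, stage, response, follow_up = match.groups()
    parts = [
        "specified student" if who.upper() == "S" else "whole class",
        "convergent" if form.upper() == "C" else "divergent",
        f"Bloom stage {stage}",
        "response" if response == "+" else "no response",
    ]
    if follow_up:
        parts.append(FOLLOW_UPS[follow_up.upper()])
    return ", ".join(parts)

print(describe_code("WD4-r"))
# whole class, divergent, Bloom stage 4, no response, rephrased for the student
```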

Results

Lesson   S    W    C    D    1    2    3    4    5    6   (+)  (-)
1        4    8    7    5    2    1    4    2    0    2   12    0
2        8    6    7    7    3    3    1    3    0    4   13    1
3       12   12   17    7   10    5    7    0    0    2   24    0
4        6    5    6    5    4    1    1    1    0    4   10    1
5       18   16   31    3   28    3    0    0    0    3   34    0
6        9   10   11    8   10    4    2    1    1    1   19    0
7       13    3    8    8    7    5    3    0    0    1   14    2
8        9    7    9    7    1    6    8    0    0    1   13    3
9        0   10    9    1    8    1    0    0    0    1    9    1
10       9   21   25    5   23    6    1    0    0    0   30    0
TOTAL   88   98  130   56   96   35   27    7    1   19  178    8

Key: (S) specified student; (W) whole class; (C) convergent; (D) divergent; (1) Knowledge; (2) Comprehension; (3) Application; (4) Analysis; (5) Synthesis; (6) Evaluation; (+) response; (-) no response.
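
As a simple consistency check, the per-lesson rows above can be totalled programmatically to reproduce the TOTAL row. The data below is transcribed directly from the table; the variable names are my own.

```python
# Per-lesson counts from the table above, in column order:
# S, W, C, D, Bloom stages 1-6, response (+), no response (-).
COLUMNS = ["S", "W", "C", "D", "1", "2", "3", "4", "5", "6", "+", "-"]
LESSONS = [
    [4, 8, 7, 5, 2, 1, 4, 2, 0, 2, 12, 0],
    [8, 6, 7, 7, 3, 3, 1, 3, 0, 4, 13, 1],
    [12, 12, 17, 7, 10, 5, 7, 0, 0, 2, 24, 0],
    [6, 5, 6, 5, 4, 1, 1, 1, 0, 4, 10, 1],
    [18, 16, 31, 3, 28, 3, 0, 0, 0, 3, 34, 0],
    [9, 10, 11, 8, 10, 4, 2, 1, 1, 1, 19, 0],
    [13, 3, 8, 8, 7, 5, 3, 0, 0, 1, 14, 2],
    [9, 7, 9, 7, 1, 6, 8, 0, 0, 1, 13, 3],
    [0, 10, 9, 1, 8, 1, 0, 0, 0, 1, 9, 1],
    [9, 21, 25, 5, 23, 6, 1, 0, 0, 0, 30, 0],
]

# Transpose the rows and sum each column to rebuild the TOTAL row.
totals = [sum(column) for column in zip(*LESSONS)]
print(dict(zip(COLUMNS, totals)))
# {'S': 88, 'W': 98, 'C': 130, 'D': 56, '1': 96, '2': 35, '3': 27,
#  '4': 7, '5': 1, '6': 19, '+': 178, '-': 8}
```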
Figure 1. Respondent: proportion of questions directed to a specified student versus the whole class. [chart not reproduced]

Figure 2. Question Type: proportion of convergent versus divergent questions. [chart not reproduced]

Figure 3. Graph Showing Total Number of Questions in Each Category of Bloom's Taxonomy (Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation; vertical axis: number of questions, 0-120). [chart not reproduced]

Analysis of Results

The results are very sample-specific to the content and context of each individual lesson, so generalizations cannot easily be made. However, from examining the overall data it can be seen that teachers generally split questions fairly evenly between the whole class and specified students. They also tended to ask more convergent than divergent questions. Furthermore, the frequency of questions asked was highest for lower-order questions as classified by Bloom's Taxonomy. The data shows many more knowledge-based questions were asked, and illustrates a steady decline with each tier until stage 6, where evaluation questions were overall more frequent than the previous categories of analysis and synthesis. Finally, it may also be noted that very few questions failed to achieve a response from students. From examining the specific questions that did not achieve a response, it can be claimed that divergent higher-order questions often prove the most difficult for students to respond to, and also occur less frequently in class.

Evaluation of the Instrument


The main limitation of the design was that the time taken to write down questions and categorize them potentially distracted from the observer's principal aim of focusing on the teacher. If a high frequency of questions were to occur, this could lead to the recorder being unable to transcribe them in time. Furthermore, the space available to write questions was inadequate for some, meaning only the main idea of particularly wordy questions could be recorded.

The reliability of the results was greatest when the question was accurately recorded, as it could then be re-examined later. However, how the instrument is applied may vary from teacher to teacher, as teachers may differ in opinion on where exactly a question fits into the taxonomy. Applying Bloom's Taxonomy evenly and consistently proved difficult; this problem would be magnified if different teachers attempted to use the instrument and compare results. The lack of questions recorded under synthesis may partly be due to the observer's lack of understanding of what questions might fit there. Furthermore, even when questions were recorded, classifying them out of context can be difficult; great weight therefore falls on the observer's initial reaction to the question type. This mirrors previous research findings. For example, in Morse and Davis's (1970) study, which also utilized Bloom's Taxonomy, they found differentiation in how teachers applied their instrument to the same class, particularly in the knowledge and comprehension categories.

Overall, the modifications made were successful in improving the instrument. They allowed the responses to be coded, making them clearer for the teacher to process, with the added benefit that the data can be more easily displayed visually. In future, as the observer becomes more experienced at coding, recording information would become quicker and easier, allowing a higher volume of data to be collected without the questions needing to be transcribed.

Conclusion and Recommendations

In conclusion, the instrument proved an effective tool for recording and analyzing the style of teacher questioning. The research has demonstrated the importance for a teacher of considering how they select a respondent, ask a question and react to the student's response or lack of response, as all of this can impact the effectiveness of their questioning technique. However, the limitations of the instrument include that it does not examine question sequencing or record wait time, both of which may influence the students' ability to respond effectively. In particular, the length of time a teacher waits after asking a question would be an interesting area for future research. Teachers often feel uncomfortable leaving too much silence after a question, so it would be interesting to learn whether providing students with additional time increases the quality of their responses.

The scope of the instrument was limited to looking at the types of questions teachers asked and how students responded; it did not consider in detail the students' answers or how a series of questions developed. It also does not consider the accuracy of students' responses, or whether they give a correct answer. Finally, it did not consider the distribution and pattern of questioning around the class. In future, if the teacher's ability to code the questions improved, less space would be needed on the instrument for writing, making it possible to record other useful data such as proximity patterns or wait time. Overall, the process of designing the instrument has enabled the teacher to reflect on many important areas of questioning that could be reviewed and observed during lessons. This undoubtedly can have a positive impact on how the teacher or observer approaches classroom questioning in the future and considers what, how and to whom questions should be asked.

Acknowledgements
I would like to thank my colleagues at Kanda University of International Studies for
their kindness in allowing me to observe their classes.

References
Belland, C., Belland, A. and Price, T. (1971). Analysing Teacher Questions: A Comparative Evaluation of Two Observation Systems. New York: American Educational Research Association.

Bloom, Benjamin S. (ed.) (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals, Handbook I, Cognitive Domain. New York: David McKay.

Caram, Chris A. and Davis, Patsy B. (2005). Inviting Student Engagement with Questioning. Kappa Delta Pi Record, 42 (1), 18-23.

Feldman, Sandra (2003). The Right Line of Questioning. Teaching Pre K-8, 33 (4), 8.

Harris, Robin Lee (2000). Batting 1,000: Questioning Techniques in Student-Centred Classrooms. The Clearing House, 74 (1), 25-26.

House, Beverley M., Chassie, Marilyn B. and Spohn, Betty Bowling (1990). Questioning: An Essential Ingredient in Effective Teaching. The Journal of Continuing Education in Nursing, 21 (5), 196-201.

Krathwohl, David R. (2002). A Revision of Bloom's Taxonomy: An Overview. Theory into Practice, 41 (4), 212-218.

Morse, Kevin R. and Davis, O. L., Jr. (1970). The Questioning Strategies Observation System (QSOS). Texas: Austin University Research and Development Center for Teacher Education.

Vogler, Kenneth E. (2005). Improve Your Verbal Questioning. The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 79 (2), 98-103.

Wajnryb, Ruth (1992). Classroom Observation Tasks. Cambridge: Cambridge University Press.

Wallace, Michael J. (1998). Action Research for Language Teachers. Cambridge: Cambridge University Press.

Appendix A

Data Capture Instrument: Teacher Questioning

Teacher No: _____ Class/Topic: __________________________ No. of Students: _____ Time: _______ Date: ________

[The instrument is a one-page form with the following column headings: Question; Student specified (S) / Whole class (W); Convergent (C) / Divergent (D); (1) Knowledge, (2) Comprehension, (3) Application, (4) Analysis, (5) Synthesis, (6) Evaluation; Response (+) / No response (-); Intercepted (I) / Rephrased (R) / Passed on (P) / Moved on (M). Rows are numbered 1) to 30) for recording each question. Boxes at the bottom of the page are provided for Comments and for tallying Procedural Questions.]
