
Reaction to Chris Sangwin’s paper

John Monaghan
University of Leeds

Introduction
Chris Sangwin has provided CAME participants with an interesting overview of STACK, a computer aided assessment (CAA) system which includes a computer algebra system (CAS) to perform a variety of tasks, including syntax checks on students' input. Chris is very knowledgeable in his field and is respected in the UK and internationally; indeed, the Mathematics Department of my university recently consulted him when they were considering introducing CAA. Chris' paper is an informative overview, but it does not have a central focus other than description. I therefore select some issues, both ones Chris addresses and ones he does not, which CAME 5 participants may wish to pursue in the discussion part of our meeting.
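To make concrete what a CAS inside a CAA system buys, here is a minimal sketch of the kind of check such a system can run on a student's typed answer. It is written in Python with SymPy purely for illustration (STACK itself is built on the Maxima CAS), and the function name and feedback messages below are my own inventions, not STACK's interface: the student's input is first parsed (the syntax check) and then tested for algebraic equivalence with the teacher's model answer rather than compared as a string.

```python
# Illustrative sketch only: STACK is built on the Maxima CAS; SymPy is used here
# simply to show the idea of a syntax check followed by an equivalence check.
import sympy as sp
from sympy.parsing.sympy_parser import parse_expr

x = sp.symbols('x')

def check_answer(student_input: str, model_answer: sp.Expr):
    """Return (correct, feedback) for a student's typed expression."""
    try:
        student_expr = parse_expr(student_input)        # the syntax check
    except (sp.SympifyError, SyntaxError, TypeError):
        return False, "Your answer could not be read; please check the syntax."
    # Algebraic equivalence, not string matching: (x+1)**2 and x**2+2*x+1 both pass.
    if sp.simplify(student_expr - model_answer) == 0:
        return True, "Correct."
    return False, "Your answer is valid syntax but is not equivalent to the expected answer."

print(check_answer("(x+1)**2", x**2 + 2*x + 1))    # (True, 'Correct.')
print(check_answer("x**2 + 2*x", x**2 + 2*x + 1))  # (False, ...)
```

The equivalence test is what lets different but correct algebraic forms of an answer be accepted, which is the feature that distinguishes CAS-backed assessment from simple string or multiple-choice marking.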

On CAME and research
A major concern of CAME is with research on the use of computer algebra in mathematics
education. We rely on development work by mathematicians, computer scientists and teachers
but our focus is on education research. I take a broad view of what research is, namely systematic enquiry, a view that includes teacher-research and action research. Research is, however, more than just description. I do think there is much that can be researched on the use of CAA, and a question I would like to put to Chris and CAME participants is
What research agendas are contained in this paper?

Chris raises important questions on p.5 but then states “these larger issues are not addressed in
this paper”. Perhaps we should address them in the discussion group.

On assessment
Chris introduces formative assessment and a paper by Wiliam and Black in paragraph 2.
Black and Wiliam have written a number of groundbreaking papers on assessment, especially
on formative assessment. Black and Wiliam (1998) report on the work of Butler on teacher feedback to students, which compared grade-only, comment-only and grade-and-comment feedback. If you are not aware of this work, then take a minute to think: which of these is likely to be the most effective in improving learning? The answer requires a number of qualifications (see Black and Wiliam, 1998, for details and further references) but, put simply, it is 'comment only'. My question regarding
STACK’s feedback to students is
How might STACK’s feedback to students be improved and how might we research this?

On context
In the conclusion of his paper Chris states that CAA/CAS can support a pedagogy of practice
and can encourage informal group work and that virtually immediate feedback can provide
students with an incentive to reflect upon their answers. Chris does not elaborate on this but it
is something we might address in the discussion group. There are at least two aspects to
consider: how these 'can' statements might be realised, e.g. putting some detail on how CAA/CAS can encourage informal group work, and how we might research these issues.

To address these aspects I feel we need to consider CAA with regard to the wider student
experience: teaching, learning with regard to the module CAA supports, learning with regard
to students’ degree (as the centrality of mathematics to their studies undoubtedly matters) and
other forms of assessment. Chris does not provide us with details of these.
Without these details I fail to see how we can appreciate or assess the value of STACK to
student learning, which leads to the question
What do we need to know about students’ experience in order to assess the value of STACK?

On tasks and design
I think the basic structure of questions given on p.6 has been well thought out. I particularly
like the set of stationary point questions given at the bottom of p.5 which are clearly designed
to get students thinking deeply about the algebraic form of functions with specific properties.
But privileging certain question types inevitably means that other question types are not privileged.
This will happen with any assessment (computer aided or not) or, indeed, as Kendal and
Stacey (1999) point out, with teaching. With CAA, however, the question type is not the
choice of the teacher but of the CAA designer who is likely to be greatly influenced by what
the computer can do. With regard to stationary points, when I teach this topic I often include graphic work, e.g. sketching a function and asking for sketches of the first and second derivative, and ask students, in groups, to state what the second derivative tells us about the
original function. In recapping the topic I often set a trick question to catch out those who
think that if the second derivative at a point is zero, then the function has a point of inflection
at that point. Whether I set these tasks depends on my interaction with the group. Questions which arise with students working with STACK are
How do STACK questions interrelate with other tasks students are set?
What are teachers' opinions of the STACK questions?
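Since the p.5 questions ask for functions with specified properties, and since my trick question turns on the fact that a zero second derivative does not guarantee a point of inflection (f(x) = x^4 has f''(0) = 0 but a minimum, not an inflection, at x = 0), a short sketch may help to show how a CAS could check such properties of a student-supplied function. Again this is SymPy for illustration with hypothetical names, not STACK's actual question code.

```python
# Sketch: the kind of property check a CAS-backed stationary point question might
# run on a student-supplied function. SymPy is used for illustration only.
import sympy as sp

x = sp.symbols('x')

def classify_stationary_point(f: sp.Expr, a) -> str:
    """Classify the behaviour of f at x = a using the first and second derivatives."""
    if sp.diff(f, x).subs(x, a) != 0:
        return "not a stationary point"
    f2 = sp.diff(f, x, 2).subs(x, a)
    if f2 > 0:
        return "local minimum"
    if f2 < 0:
        return "local maximum"
    # f''(a) = 0 is inconclusive: it does NOT by itself give a point of inflection.
    return "second derivative zero: test inconclusive"

print(classify_stationary_point((x - 3)**2, 3))  # local minimum
print(classify_stationary_point(x**4, 0))        # inconclusive: x**4 has a minimum at 0
```

A check of this kind captures the algebraic property being assessed but not, of course, the sketching and group discussion described above, which is part of the point about privileging.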

Lagrange (2005) describes research on students using Casyopée, software with a CAS kernel.
This software is designed for teaching and learning rather than assessment. In Casyopée, parameters in the software can be manipulated by the user. A question for Chris in the first instance is
Is there scope for user manipulation of parameters in STACK?

Lagrange (ibid., 144) notes that “‘Traditional’ software design in mathematics educational
research often put a strong emphasis on the analysis of mathematical content, and take
teaching practices and curriculum into account only as a second dimension.” Last December
saw the 17th ICMI study conference on the theme 'technology revisited' (see
http://www.math.msu.edu/~nathsinc/ICMI/). One of the four conference themes was Design
of Learning Environments and Curricula, and Chris was a member of that group. There are
many schools of thought on software design. Something that is not clear to me from Chris' paper, and on which I would value his opinion, is
Where does the design of STACK feature in the spectrum of design approaches?

The anthropological and the instrumental approach
CAME 5 is the first CAME symposium not to include input on the anthropological and/or the
instrumental approach. I partially remedy this omission by briefly noting areas of the work
Chris introduces for which these approaches could provide insight. My CAME 4 paper
(Monaghan, 2005) provides an introduction to and critique of both of these approaches.

An important construct in the instrumental approach is instrumental genesis, a process over time which links the affordances and constraints of the tool to the user's prior understandings
and activity. A study of students’ STACK instrumental geneses would be of undoubted
interest.
A central feature of the anthropological approach is that educational practices are described in
terms of Chevallard’s ‘tasks, techniques, technology and theory’ construct. I focus on ‘task-
technique’ as this relates to the previous section. Researchers in this field note that tasks and
techniques are linked, that tasks are artefacts constructed in educational settings and that
certain techniques are institutionally privileged. These researchers differentiate between
pragmatic and epistemic values of techniques (pragmatic values concern breadth of
application of a technique; epistemic values concern the role of techniques in facilitating
mathematical understanding). A consideration of Chris’ p.5 stationary point task with regard
to these constructs may be illuminating.

Computer vs paper assessment
Do students do equivalent items in the same way when the items are presented and answered
on a computer and on paper? NB Let us assume that, as with STACK, the student cannot harness
the processing power of the computer, e.g. the student cannot directly use, say, a CAS. This
question is addressed in a recent paper (Threlfall et al., 2007). They work with younger
students than Chris does but there may be issues for us to consider in their work. I focus on
one of the questions they report on, Circles, to illustrate issues arising.

Circles states “Here is a grid with eight circles on it”. The paper-based item states “Draw two
more circles to make a symmetrical pattern” and the computer-based item states “Move the
two extra circles on to the grid to make a symmetrical pattern”.

Threlfall et al. (pp. 8-9) comment:

On paper, the circles cannot actually be drawn until after a decision has been made about where they
should go, because of the mess resulting from a change of mind. The pupil needs to decide that it will
look right without being able to try it properly, so has either to be able to visualise, or be analytic – for
example by matching pairs across possible lines of symmetry. Either of these involves dealing with
elements simultaneously, with high demands on working memory. On computer, the pupil can put the
two circles on and make a judgement by recognition – does this arrangement look symmetrical? If not,
he or she can move them elsewhere (or, if he or she cannot remember which circles were placed and
which were already there, can “start again”). The sequential affordance of the computer medium
brings a smaller ‘cognitive load’ and enables easier success – by recognition of symmetry rather than
through visualisation or by analysis. If pupils are willing to try things out, the question is assessing
whether they recognise symmetry when they see it. The implicit aspect of the paper assessment is that
a desirable understanding of symmetry is more than just the ability to recognise it when one sees it,
but also should incorporate elements of visualisation and/or analysis. If that is accepted, then the
activity afforded by the computer is not legitimate for the assessment, and the computer question is
less valid as an assessment item.

I think it is very difficult to anticipate differences in how students answer questions in different media and I would not like to speculate about potential differences in the work Chris
presents. I do, however, think that research on CAA into this area would be very useful.

Endnote
CAA is in its infancy but it is clearly one aspect of the future of teaching, learning and
assessing mathematics with ICT/CAS. Any criticisms of Chris' paper are made with this 'in its infancy' comment as his defence. It is, however, important that we step into this future
with a critical mind towards CAA.

References
Black, P. and Wiliam, D. (1998) 'Assessment and classroom learning'. Assessment in Education, 5(1), 7-75.

Kendal, M. and Stacey, K. (1999) ‘Varieties of teacher privileging for teaching calculus with
computer algebra systems’. The International Journal for Computer Algebra in Mathematics
Education, 6(4), 233-247.

Lagrange, J.-B. (2005) 'Curriculum, classroom practices and tool design in the learning of functions through technology-aided experimental approaches'. International Journal of Computers for Mathematical Learning, 10(2), 143-189.
Monaghan, J. (2005) ‘Computer algebra, instrumentation and the anthropological approach’.
Available from http://www.lonklab.ac.uk/came/events/CAME4/index.html
Threlfall, J., Pool, P., Homer, M. and Swinnerton, B. (2007) ‘Implicit aspects of paper and
pencil mathematics assessment that come to light through the use of the computer’. Accepted
for Educational Studies in Mathematics and available at the time of writing via
http://www.springerlink.com/content/t61556153k435553/fulltext.pdf
