
GCSE

Examiners' Report

GCSE Mathematics A (1387)

June 2003

Edexcel is one of the leading examining and awarding bodies in the UK and throughout
the world. We provide a wide range of qualifications including academic, vocational,
occupational and specific programmes for employers.
Through a network of UK and overseas offices, Edexcel's centres receive the support
they need to help them deliver their education and training programmes to learners.
For further information please call our Customer Response Centre on 0870 240 9800,
or visit our website at www.edexcel.org.uk.

June 2003
Publications Code UG014129
All the material in this publication is copyright
© London Qualifications Ltd 2003

Contents

Principal Examiner's Report Paper 5501 .......... 1
Principal Examiner's Report Paper 5502 .......... 6
Principal Examiner's Report Paper 5503 .......... 11
Principal Examiner's Report Paper 5504 .......... 17
Principal Examiner's Report Paper 5505 .......... 22
Principal Examiner's Report Paper 5506 .......... 27
Principal Moderator's Report 5507 ............... 32
Statistics and Grade Boundaries ................. 44

PRINCIPAL EXAMINER'S REPORT PAPER 5501 (FOUNDATION)

1.1 GENERAL POINTS

1.1.1 This paper was accessible and gave candidates the opportunity to demonstrate positive achievement. As intended, the greater number of grade G marks (33) gave weaker candidates more chance to show what they knew and there were relatively few single figure scores. The proportion of high scores (over 65) was also low, although it is likely that this was the result of centres' entry policies rather than any intrinsic difficulty of the paper itself.

1.1.2 Several questions were very well answered, full marks probably being gained most often on Question 8 (Pictogram), but the success rates on the first four questions, Question 13 (Directed numbers), Question 14 (Number machine) and Question 17 (Two-way table) were also high. At the other extreme, full marks were rare on Question 16 (Frequency table) and, for all but the strongest candidates, the final seven questions on the paper.

1.1.3 Inevitably, by failing to show working, many candidates sacrificed the chance to be given credit for their methods, even when their final answers were wrong. This applied particularly to Question 12 (Shopping) and Question 20 (Teddy bears). Of course, when working is shown, it must be comprehensible to examiners. A mass of figures, sometimes at a variety of inclinations to the horizontal, gives candidate and examiner little chance.

1.1.4 The only equipment needed for this paper was a ruler and a small minority did not have one. This was a serious handicap on the first two parts of Question 4 (Line and midpoint) but, in the rest of the questions which required straight lines to be drawn, careful freehand lines were accepted. Some candidates used their protractors to measure the angles in Question 23, even though it was clearly stated that the diagram was not accurately drawn.

1.2 REPORT ON INDIVIDUAL QUESTIONS

1.2.1 Question 1
This proved to be a straightforward start to the paper. 40.6 and 6 appeared occasionally in part (a) and the decimal point was sometimes omitted in part (b). 330 and 340 were seen with some regularity in part (c) but, overall, this question caused few problems and many candidates gained full marks.

1.2.2 Question 2
The majority of candidates scored both marks by writing the amounts of money in an acceptable form, e.g. £1.60, £1.60p, £1-60, £1,60 and £1 60, but not £1:60. Errors were rare but, when made, were usually either £1.06 for one pound sixty pence or £2.5 or £2.50 for two pounds five pence.

1
UG014129

1.2.3 Question 3
Many candidates gained full marks. In part (a), the most common mistakes were giving a fraction which was not in its simplest form, or giving the fraction of the shape which was unshaded. 1 mark out of 2 was awarded for the former but, for the latter to earn a mark, it had to be in its simplest form. If the mark for part (b) was lost, it was usually because 1/2 or 1/3 of the shape was shaded.

1.2.4 Question 4
The majority of candidates were able to draw a line 12 cm long, mark its midpoint, and draw a rectangle with the specified dimensions.

1.2.5 Question 5
Most candidates scored at least one mark, usually for 'litres', but only a minority managed all three. 'Grams' were quite popular for the weight of a turkey. The fact that millimetres and metres were frequently suggested as sensible imperial units for the width of a page, while ounces and pints appeared as metric units, indicated some confusion about the terms 'metric' and 'imperial'. It was not unusual for candidates to write numbers, instead of units, in the table.

1.2.6 Question 6
Many candidates demonstrated their knowledge of parallel lines and right angles but the first two parts still proved far from trivial and, for a substantial number of candidates, exposed misunderstanding of at least one of these basic geometrical concepts. Most candidates gave the answer 'acute' for part (c)(i) but, in part (ii), 'obtuse' appeared much more often than the correct answer.

1.2.7 Question 7
The quality of answers was almost as variable as the spelling. In general, any recognisable attempt received credit. Thus, for example, 'Sophia' was awarded the mark in the first part, for which 'ball' was the popular wrong answer. Part (ii) had the highest success rate of the three, although 'tube' was often seen and, in part (iii), 'prism' appeared frequently. The names of 2-D shapes, e.g. 'circle', 'triangle' and 'trapezium', made regular appearances as well as more intriguing names such as 'overall' (oval?) and 'tunes' (tons?).

1.2.8 Question 8
This was probably the best answered question on the paper and a high proportion of candidates achieved full marks, some of them because separate half and quarter symbols were accepted as three quarters of a symbol.

1.2.9 Question 9
The majority had little trouble ordering the natural numbers in the first part but all the remaining parts proved much more demanding. In the second part, only a minority appreciated that 0.067 was the smallest number and, of those who did, many thought that 0.605 was greater than 0.65. The most common error in the third part was to reverse the order of the negative numbers. A large number of candidates scored 1 mark out of 2 in the final part as three of the numbers were in the correct order in their list, e.g. 1/2, 2/5, 2/3, 3/4, but there was rarely any indication that equivalent fractions or conversion to decimals had been used.
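Assuming the four fractions set were 1/2, 2/5, 2/3 and 3/4, a common-denominator comparison (the method the examiners looked for evidence of) runs as follows, using 60 = lcm(2, 5, 3, 4):

```latex
\frac{2}{5}=\frac{24}{60},\qquad
\frac{1}{2}=\frac{30}{60},\qquad
\frac{2}{3}=\frac{40}{60},\qquad
\frac{3}{4}=\frac{45}{60}
\quad\Longrightarrow\quad
\frac{2}{5}<\frac{1}{2}<\frac{2}{3}<\frac{3}{4}
```

Conversion to decimals (0.4, 0.5, 0.667…, 0.75) gives the same ordering.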



1.2.10 Question 10
Few marks were lost in the first three parts but only stronger candidates had the knowledge of algebra needed for the formula in part (d), for which m n, m = n and n + 6 were popular wrong answers.
1.2.11 Question 11
Answers to this varied widely both within and between centres. As far as any
pattern was discernible, candidates appeared to be most familiar with multiples
and factors and least familiar with cube numbers.
1.2.12 Question 12
Full marks were awarded quite regularly and much more credit could have been given if candidates had shown their working. Even if the final answer were wrong, 3 of the 4 marks could be scored for the costs of the separate items; if one of these were incorrect, the problem was usually with finding 1/2 of 72. A substantial number of candidates simply added 72p, 24p and 25p and then found the change from £5.
1.2.13 Question 13
This was very well answered, full marks often being gained. It was noticeable that, after answering part (a)(i) correctly, some candidates gave 13 as the answer to part (ii), even when they had drawn a number line. Presumably, they had counted the numbers instead of the steps. The difference between two numbers is regarded as a strictly positive number and so an answer of −12 received no credit.
1.2.14 Question 14
This was another very well answered question with many candidates scoring full
marks. If one error were made, it was most likely to be with the final entry,
which was an input.
1.2.15 Question 15
In part (a), there appeared to be some doubts about the meaning of 'vertices'. From the labelling, some candidates seemed to confuse vertices with edges, while others seemed to confuse them with faces. The success rate on part (b) was very low, wrong answers being seen much more often than the correct answer. The most frequent one was 28, the perimeter of the net, with 38, the sum of the perimeter and the lengths of the internal lines, and 24, the surface area, also appearing regularly. 6, which was occasionally given as the answer, may have been an unsuccessful attempt to evaluate 2 × 3.
1.2.16 Question 16
A minority of candidates achieved some success with the first two parts but very few made any headway with the last part. In the second part, some of the candidates who had some knowledge of range, albeit imperfect, gave answers such as 29-32 and 5 1 4 while, in part (c), the majority of candidates either gave 30 with no working or found the sum of 29, 30, 31 and 32 and then divided their result, usually by 4 and sometimes by 10. Apart from the usual confusion of mean, mode and median, interpreting the table was an additional stumbling block. In recognition of this, one mark was awarded for listing the ten numbers, even if the candidate subsequently attempted to find the median.
1.2.17 Question 17
Few candidates failed to complete at least the two correct entries in the two-way
table needed to score one mark and many scored 2 or 3 marks.
1.2.18 Question 18
In part (a), most success was achieved on (iii) (3g × 5g), closely followed by (i) (c + c + c + c), for which c⁴ also had considerable support. The other two parts proved more difficult. 4p and 4 p were often seen in part (ii) while 7rp and 2r5p were popular in part (iv). Part (b) was poorly answered; 7y3 and 10y3, sometimes simplified to 7y, were the most common wrong answers.
1.2.19 Question 19
This question discriminated well between candidates. A significant proportion shaded both grids correctly and drew the correct conclusion. Correct shading followed by an incorrect conclusion occurred more frequently than might have been expected. Some shaded one of the grids correctly, usually nine squares to represent 3/5, while the weakest candidates shaded all fifteen squares, 3 by 5, to represent 3/5 and six squares, 2 by 3, to represent 2/3.

1.2.20 Question 20
Overall, only a minority of candidates were able to make a meaningful attempt at either part. Those who could attempted a wide variety of methods in the first part but many made too many computational errors to earn any marks. Methods for evaluating 48 × 9.95 instead of 48 × 9.55 appeared regularly, as did the answer 696, transferred from part (b). In the second part, a small minority completed formal long division accurately but trial and improvement methods were more common. As always, these received credit only if they led to a correct answer. Some misinterpreted the information in the question, thinking the price had remained unchanged at £9.55.
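The report does not reproduce the question itself, but the multiplication it cites, 48 × 9.55, can be completed without a calculator by partitioning (one possible non-calculator method, for reference):

```latex
48 \times 9.55 \;=\; (48 \times 9) + (48 \times 0.55)
              \;=\; 432 + 26.40
              \;=\; 458.40
```

The common misread 48 × 9.95 gives 477.60 by the same method, which is why the two calculations had to be distinguished when awarding method marks.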
1.2.21 Question 21
Only the strongest Foundation candidates can realistically be expected to have the competence in basic algebra needed for a question of this type, and so it proved. Thus, the answers x 12 and x 12 were often given for part (a), with y 10 and x y 22 popular for part (b). Every effort was, however, made to reward genuine evidence of understanding. For example, in part (b), one mark was awarded for the appearance of 10y in the expression 12x + 10y.
1.2.22 Question 22
Hardly any candidates made any headway with this question. 2/7, an attempt to add 1/3 and 1/4, was common, as was 1/2, both of them generally unaccompanied by working. Candidates who gave 1/5 as the answer were presumably continuing the sequence of fractions.


1.2.23 Question 23
In parts (a) and (b)(i), a small minority found the size of each angle correctly. Even if part (a) was wrong, candidates who showed how they had used their incorrect x to find y could still gain 2 marks in part (b)(i), but few showed the necessary working. The mark for the reasons in part (b)(ii) was rarely awarded; a statement mentioning the equality of the sides and the equality of the angles was required. If an attempt were made to give reasons, it usually related to the angle sum of a triangle or to the sum of the angles on a straight line. Quite often, the reason given was some variant of either 'I used my protractor.' or, with disarming honesty, 'I guessed.' The regular appearance of 'parallel' in explanations suggested some misinterpretation of the symbols indicating two equal lengths on the diagram.
1.2.24 Question 24
Very few candidates had the algebraic skills to tackle successfully either part of
this question. Some were so suspicious of question setters that, in part (a), they
answered that neither Tayub nor Bryani was right.
1.2.25 Question 25
In the first part, a sizeable minority drew a rectangle with the correct
dimensions, which earned 2 marks out of 3, but the hidden detail line required
for the final mark was rarely present. In the second part, considerable tolerance
was exercised by examiners. Some candidates produced excellent drawings but
both marks were awarded for a perspective drawing showing the two key
features of the sloping face and the cutout, even if there were errors in the
drawing. Consequently, a substantial number of candidates received full marks.
Sketches of triangular prisms and, to a lesser extent, pyramids appeared
regularly but were not rewarded. Interestingly, there was little correlation between candidates' marks on this question, especially the 3-D sketch, and their performance on the paper as a whole.
1.2.26 Question 26
The answer 20, read directly from the travel graph, appeared much more often
than the correct answer in part (a). In part (b), both marks were occasionally
awarded but many candidates scored one mark for a line from (45, 20) to the
time axis, often (60, 0) or for a line of the correct gradient, usually from (60, 20)
to (80, 0).


PRINCIPAL EXAMINER'S REPORT PAPER 5502 (FOUNDATION)

2.1 GENERAL POINTS

2.1.1 The questions on this paper were mostly well understood, with the vast majority of candidates scoring between 20% and 80% of the marks available. Questions on the new specification content were often the most poorly attempted.

2.1.2 There was evidence that not all candidates had access to a ruler, protractor or pair of compasses. In fact it was more common to see the circle drawing question attempted freehand!

2.1.3 The majority of candidates attempted all of the questions.

2.1.4 Questions 2, 3, 5, 8, 9 and 13 were answered with the most success.

2.1.5 Questions 15, 16, 20, 24, 25 and 26 were rarely successfully completed.

2.1.6 The pie chart question (No. 21) and the stem and leaf question (No. 24) were poorly answered; in particular there were very few reasonable attempts at the stem and leaf diagram, a new topic in the specification for the first time this year.

2.2 REPORT ON INDIVIDUAL QUESTIONS

2.2.1 Question 1
Few candidates gave completely correct answers to this question. About 20% of candidates gained no marks, though the majority of candidates were able to answer one or two parts successfully. In part (a) common incorrect answers given were 7/10, 1/7 and 7/59. In part (b) incorrect answers of −18 and 18 were often seen. Part (c), however, was correctly answered by about 25% of candidates.
2.2.2 Question 2
This question was well understood by most candidates and candidates scored well on this easily accessible question. It is still a great shame that candidates lost marks because they did not have the necessary equipment for the examination and so found it difficult to draw a circle and find the midpoint and length of the line.

2.2.3 Question 3
About 95% of candidates gave a fully correct answer to this question and the other 5% scored 2 out of the 3 marks.

2.2.4 Question 4
Candidates had variable success with this question. Part (a) was mostly well understood by all candidates. Part (b) proved difficult for a lot of candidates, with 3.4 or 0.34 often seen as incorrect answers. Part (c) was usually correct in about 75% of cases, whilst part (d) was completed fairly well by 90% of candidates.
2.2.5 Question 5
As usual, time questions are often misunderstood by Foundation Tier candidates. About 75% of candidates were able to give correct answers to parts (a) and (c). In part (b), candidates often found the time taken to travel from Coventry to London, rather than Crewe to London. Many candidates did not appreciate that a length of time should be written in hours and minutes rather than using the same notation as that used in stating a time, i.e. 2 hours 45 minutes rather than 2:45 or 2.45.

2.2.6 Question 6
Full marks on this question were rarely seen. Part (a) was generally more successfully answered than part (b). Candidates generally lost marks on part (a) by writing 384 or 38.40. In part (b) the absence of working cost marks, as about 75% of candidates obtained the answer of £3.28 but failed to work out the difference. Difficulties often arose with the use of decimal points and the concept of writing money correctly.

2.2.7 Question 7
Part (a) was usually correctly answered, though 50,000 was a common incorrect answer. In part (b) answers of 50,000 or 10,000 were accepted and often seen. This part was not answered as well as part (a). About 25% of candidates gave fractional answers such as 'thousandths'.

2.2.8 Question 8
This question was well understood by all candidates and about 50% of candidates were generally successful and scored full marks. The correct reflection was nearly always seen. The confusion between perimeter and area still exists and answers to parts (a) and (b) were often transposed.

2.2.9 Question 9
For their answers to (a), most candidates could correctly and clearly explain that there was a label missing on the horizontal axis, but fewer were able to give a lucid explanation of what was wrong with the frequency axis. Many candidates gave the reason that there was no title; this answer was not accepted. Nearly all candidates successfully completed the bar chart in part (b) and went on to give the correct answers of 'blue' for the mode in (c) and 14 for the number of teachers in (d). There were very few successful attempts at part (e). The incorrect answer of one third was frequently seen.

2.2.10 Question 10
Over half of the candidates found this question difficult. Candidates were more successful in explaining Barry's pattern than Kath's pattern. A common wrong answer for Kath's pattern was that the numbers add up to 7. Some candidates understood the method, but found it very difficult to explain.
2.2.11 Question 11
About a quarter of candidates gave entirely numerical answers to this question; most answers given were in terms of n. About half of candidates who sat the paper were able to give 2n or an acceptable equivalent (i.e. 2 × n, n × 2 or n2) in answer to (a). About 25% of candidates wrote n = 2n and scored no marks. A significant minority of candidates were able to use 'their expression in (a) + 15' as their answer in part (b), although n + 15 was often seen. Part (c) was successfully answered by about a third of the candidates, and by some that had been unsuccessful in the previous two parts. Some candidates confused their answers by trying to include words or a 'p' (presumably denoting pence) in their answers.
2.2.12 Question 12
Part (a) was well answered by about 75% of candidates. Correct answers for part (b) appeared in only about 25% of cases. Candidates had difficulty relating their answers to central tendency, with the highest frequency often mistaken for the mode. There was also clear confusion between frequency and the number of cups.
2.2.13 Question 13
Parts (a) and (c) of this question were successfully answered by about 50% of candidates. There were a sizeable number who were apparently confused by the placement of the shapes on the diagram, despite the fact that the diagrams were clearly labelled, and gave 'isosceles' for their answer to part (a) and 'trapezium' for part (c). Correct answers were usually given to parts (b) and (d). Sometimes (2, 3) was given as (3, 2) and, more commonly, (4, 2) was plotted instead of the (2, 4) given.
2.2.14 Question 14
Part (a) was generally more successfully answered than part (b). In part (a) the most common wrong answer was 25000. In part (b) candidates lost a mark because they failed to notice the word 'million' written on the answer line and wrote 7000000. At least 20% of candidates failed to obtain a follow-through mark on part (b) because they did not show their working.
2.2.15 Question 15
In part (a) almost all candidates gave the correct answer. In part (b) less than half the candidates wrote down an answer within the acceptable tolerance of 5.5 ± 0.3 pounds. Many either misread the graph or, more probably, did not use the graph but applied the rough conversion factor of 1 kg to 2 pounds to obtain the answer 5. In part (c) more candidates gave the unacceptable answer of 55 kg rather than the 50 kg (or thereabouts) gained by accurate use of the graph or by using the conversion factor 1 kg ≈ 2.2 pounds. Centres are reminded that this fact is stated in the specification as one which candidates are expected to know.
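The exact graph readings are not reproduced in this report, but readings of 2.5 kg in part (b) and 110 pounds in part (c) would be consistent with the accepted answers; with the conversion factor 1 kg ≈ 2.2 pounds the working is:

```latex
2.5\ \text{kg} \times 2.2\ \tfrac{\text{pounds}}{\text{kg}} = 5.5\ \text{pounds},
\qquad
110\ \text{pounds} \div 2.2\ \tfrac{\text{pounds}}{\text{kg}} = 50\ \text{kg}
```

The rough factor of 2 gives 5 pounds and 55 kg respectively, which matches the common wrong answers the examiners report.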
2.2.16 Question 16
Many candidates found this multi-step question difficult. Lack of working limited the marks that most candidates scored. In part (a) 5 was often seen, which scored 1 mark. In part (b) relatively few candidates scored any marks. Most found it difficult to find 5% of £269.30; of those candidates who did obtain the correct answer, a significant number did not round the answer correctly and lost the final accuracy mark.
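For reference, the percentage calculation the report cites runs as follows (the rounding target, the nearest penny, is assumed from the context of a money answer):

```latex
5\% \text{ of } \pounds 269.30 \;=\; 0.05 \times 269.30 \;=\; 13.465 \;\approx\; \pounds 13.47
```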
2.2.17 Question 17
There were surprisingly few totally or partially correct responses to this question. The vast majority of candidates doubled rather than squared 4.1 and went on to give the incorrect answer 8.774. It seemed that few candidates were using the brackets and squaring features often available on calculators. Most candidates did write down all the figures from their calculator display as requested.
2.2.18 Question 18
The vast majority of candidates did not understand this question and few correct
answers were seen. Part (b) was generally better answered than part (a). Part (c)
was the most successful part with about 10% of candidates giving the correct
answer. All too often candidates tried to measure the angles.
2.2.19 Question 19
About 90% of candidates worked out the income of the club in (a)(i) correctly. However, few went on to write down the correct fraction for (a)(ii). In part (b) a significant proportion of answers or working shown indicated the successful calculation of 60% of £1000, for which 2 marks were awarded, yet only a small minority (less than 5%) were able to give the correctly and fully simplified ratio 12:5.
2.2.20 Question 20
The vast majority of candidates did not understand this question, and many made no attempt at it. In part (a) 2x was often seen rather than x + 2. Little attempt was made at part (b). A very small minority managed to gain the correct answer in part (c), even though they had not completed parts (a) and (b).
2.2.21 Question 21
There was some evidence of candidates not having a ruler and/or protractor or, where a protractor was available, of it not being used accurately to draw sectors within the 2° tolerance required. This question was often not attempted. Few candidates recorded sector angles in the table or showed any working. The question was attempted less successfully than similar questions set in previous years, with only a small percentage of candidates gaining more than one mark.
2.2.22 Question 22
Algebraic manipulation is not well understood by candidates at the Foundation Tier, and this question confirmed it again this year; it proved too difficult for most candidates. Part (a) was better answered than part (b). Quite often candidates simplified the term in p correctly; simplifying q was more difficult. Relatively few candidates made any attempt at part (b).
2.2.23 Question 23
A good proportion of candidates were able to complete rows 4 and 8 in the table to gain the first two marks in this question. Few candidates were able to identify the numerical expression needed to answer (c) and, of those who successfully did, hardly any worked out its value, which was necessary to gain the mark available here.
2.2.24 Question 24
This was a new topic to be tested on the specification. Most candidates had no idea how to answer this question and success was very centre dependent. Most candidates actually drew very artistic plants with stems and leaves. Those candidates who could answer this question usually scored 2 out of the 3 marks available, most losing the mark for not writing a key.
2.2.25 Question 25
This question was not well understood and poorly attempted.
It was rare to see candidates attempting to work out the total surface area of the
tank or, for that matter, the area of any of the faces. Usually the volume of the
tank was calculated, or the three lengths added together. Between 10 and 20%
of candidates were awarded one of the five marks given for using their surface
area to calculate the cost of paint needed. Only about 1% of the answers seen
scored more than one mark.
2.2.26 Question 26
This question was again not understood by candidates entered for this tier and it
was very poorly answered. Less than 1% of candidates got this question correct.
The most commonly seen incorrect answer was 250. No method was shown.


PRINCIPAL EXAMINER'S REPORT PAPER 5503 (INTERMEDIATE)

3.1 GENERAL POINTS

3.1.1 This was the first year of a new specification. In preparation for the written papers, centres have been provided with much information, including a set of specimen and mock papers. It was therefore disappointing to find that candidates were relatively unprepared for questions on the topics that are new to the 2003 specification. Inequalities, geometry of the circle and box & whisker diagrams were questions in which candidates gained very few marks, and there was clear evidence that this was centre-dependent. Centres need to spend more time preparing candidates on the topics that are new to this specification.

3.1.2 The general standard of arithmetic remains weak, even from those who gained the highest marks on this paper. This was most evident in questions 14 and 15, where candidates had great difficulty in multiplying and dividing by factors of 10, and also in questions 6, 11 and 24(iii).

3.1.3 The increasing numbers of candidates who work solely in pencil run the risk of their work being illegible, which frequently it is. It is these candidates, far more so than those who work in ink, who also rub out their working, perhaps denying them method marks if their answer is not correct. Many candidates show a desire to round answers and interim calculations quite drastically, rendering a final answer inaccurate. On many occasions this can be compensated, but only if the candidate shows the accurate figure before rounding. It is also disappointing when candidates give quite absurd answers, such as probabilities more than 1, or monetary answers completely out of the context of the question. Centres are advised to consider how they can help candidates avoid such errors.

3.1.4 Work associated with percentages and fractions showed some improvement this year. There was clear evidence of sound percentage calculation in question 15, whilst many candidates used conversion of fractions (frequently to percentages) to assist in ordering in question 3. Indeed, the manipulation of fractions in questions 3, 5 and 9 was encouraging.

3.1.5 The concept of explanation and proof was expected to be a good discriminator, and it was. Questions 10 and 25 required the use of geometrical theorem and property, yet many candidates were unable to express their explanation in these terms.

3.1.6 There was a continuing improvement in algebraic manipulation. This was evidenced in question 1 and attempts at finding the solutions to equations in question 16. Many candidates showed their misunderstanding of algebraic formulae in question 11, and it was disappointing how few Intermediate candidates could correctly multiply out the bracket in question 27(a). There were some very encouraging responses to a problem on questionnaire design (question 22), demonstrating that some centres have placed a greater emphasis on Data Handling, which is to be encouraged.


3.2 REPORT ON INDIVIDUAL QUESTIONS

3.2.1 Question 1
The first part to this question was usually well attempted, with most candidates gaining the marks. The common errors were in giving 8g² and 7rp as the answers, respectively. The second part was also well answered, but the weaker candidates spoilt their answer by writing it as 5y. Whilst most candidates could multiply out the brackets, very few correctly gave the final term as +15. Only partial credit was therefore earned. The collection of terms also caused many candidates some difficulty, far more so than the expansion of the brackets.

3.2.2 Question 2
Centres need to ensure that candidates are aware of the difference between giving a formula and giving an expression. Many candidates omitted the 'm =', and could not therefore be given full marks. n + 6 was a common incorrect answer.

3.2.3 Question 3
In general this question was well answered, with part (ii) usually correct. In part (i) some candidates had difficulty in placing 0.605 since they thought 0.605 > 0.65. In part (iii) most candidates gained at least 1, and many 2, marks. Methods varied, with some using equivalent fractions, and others making the conversion to decimals or percentages. It was very encouraging to see this done with much success.

3.2.4 Question 4
Most candidates gained full marks in this question. The only errors appeared to occur in the bottom right of the table.

3.2.5 Question 5
Overall many gained full marks. This question enabled most candidates to demonstrate their understanding of fraction size, usually by use of the grids for shading, though more able candidates converted to equivalent fractions or decimals. In the latter case this usually led to approximate answers since candidates prematurely rounded the 2/3 to a limiting decimal. A common error in use of the grids was to shade 2 out of 3 squares on one grid, and 3 out of 5 on the other.
3.2.6 Question 6
This question proved to be a good discriminator. Some candidates wrote out all the values in an attempt to find the median, whilst others stated the mode. A significant number found 302, but then failed to undertake a division. The most common error was the calculation 122 ÷ 4 or 122 ÷ 10. It was disappointing when candidates exhibited correct method, flawed by poor summation of fx values.
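The intended method is the mean of a frequency distribution, Σfx ÷ Σf. The individual frequencies are not reproduced in this report, so those below are illustrative only, chosen to be consistent with the figure of 302 and an attempted division by 10:

```latex
\bar{x} \;=\; \frac{\sum fx}{\sum f}
\;=\; \frac{(29\times 3)+(30\times 3)+(31\times 3)+(32\times 1)}{3+3+3+1}
\;=\; \frac{302}{10} \;=\; 30.2
```

The reported error of 122 ÷ 4 corresponds to summing the four values 29, 30, 31 and 32 while ignoring the frequencies altogether.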

3.2.7 Question 7
Most candidates undertook a rotation in part (a), though there were some errors in the final positioning of the triangle. Those candidates who used tracing paper had much greater success in the correct positioning of the triangle. There were very few correct solutions to part (b). Most candidates chose to draw a triangle of scale factor 2 rather than ½; there were, however, many errors in these attempts, since not all three sides were doubled in length, with a significant number adding the same amount to each side. It is clear that of all the transformations, enlargement is the one in which candidates are the weakest.
3.2.8 Question 8
This question was well answered by most candidates; the only common error in
part (a) was 12 + x, given by a minority of candidates. There were many correct
solutions in part (b), but a significant number of candidates spoilt their final answer
by incorrectly simplifying further, sometimes giving the response 22xy.

3.2.9

Question 9
The weaker candidates thought that 1/3 + 1/4 was 2/7, but then usually went on to
give 5/7 as their answer. Candidates who attempted a conversion into decimals
gave approximated conversions for 1/3, thereby losing any accuracy in their final
answer. There were some candidates who attempted a conversion to equivalent
fractions, and those who did so correctly usually went on to gain full marks.
Overall it was encouraging to see most candidates make an attempt at this question,
and despite the fact that many struggled with the fraction work, the quality of
fraction work in this question surpassed similar attempts at such questions in the
past.
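The correct working, and the add-the-tops-and-bottoms error described above, can both be checked exactly with Python's fractions module:

```python
from fractions import Fraction

correct = Fraction(1, 3) + Fraction(1, 4)   # common denominator 12
wrong = Fraction(1 + 1, 3 + 4)              # the 2/7 error

assert correct == Fraction(7, 12)
assert wrong == Fraction(2, 7)
```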
3.2.10 Question 10
The numerical work in this question was usually well done, with full marks being
gained. A few candidates persist in thinking there are 360° in a triangle. The final
mark in (b)(ii) was not usually awarded since many candidates merely described
the process of calculation followed; in many cases this had already been
demonstrated. The marks on the two equal sides of the triangle were frequently
misinterpreted to mean parallel lines, though this type of geometrical notation is
universal. Candidates gained the final mark for using references to isosceles
triangles, or establishing the link between the two equal sides and the two equal
angles. Centres need to be aware that geometrical reasoning and explanation will in
future require sound mathematical communication and use of geometrical properties,
and that candidates need to be better prepared to give such explanations and
justification of numerical work undertaken.
3.2.11 Question 11
The choice between Tayub and Bryani was evenly split, but in both cases full
explanations were usually given. It was disappointing to see a significant number
of able candidates gaining full marks in part (a), to then make a common error in
part (b) of calculating (4(x + 1))². Overall correct answers to part (b) were rare,
with 169 a common incorrect answer.
3.2.12 Question 12
Responses to this question were centre-dependent. Most candidates obtained 2
marks for a correctly drawn rectangle outline. It was very rare to see the hidden
(usually dashed) line shown. It is hoped that candidates will have greater success
once this becomes a more familiar topic. It was encouraging to see the many
attempts at the 3-D sketch, most earning full marks. Common errors included a
failure to show the depression on both sides of the sketch, or a failure to show a
sloping edge. A minority of candidates drew 3-D shapes that failed to relate to the
elevation, such as cylinders, triangular prisms or pyramids.
3.2.13 Question 13
In part (a) the conversion to km/h was beyond most candidates, with many merely
multiplying 20 by 30. This resulted in some answers given as 600km/h or 60km/h.
Few candidates related 30 minutes to 0.5 hour in order to perform the correct
calculation. In part (b) most candidates realised that their line had to arrive back on
the horizontal axis, but only a minority attempted to calculate the necessary
gradient.
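The part (a) conversion is simply distance divided by time expressed in hours:

```python
distance_km = 20
time_h = 30 / 60               # 30 minutes = 0.5 hour
speed_kmh = distance_km / time_h

assert speed_kmh == 40.0       # the correct 40 km/h
assert 20 * 30 == 600          # multiplying instead gives the wrong 600
```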
3.2.14 Question 14
Many candidates tried to use long multiplication and division methods. The three
parts discriminated well, in that most candidates obtained the first part, but then
met decreasing success through parts (ii) and (iii). There was little evidence that
candidates understood the relationship between place value and the position of the
decimal points.
3.2.15 Question 15
A significant number of candidates merely assumed this could be calculated using
simple interest methods. Whilst many knew that they had to calculate 10% of
12000, and even wrote this out as 12000 × 10/100, it was disappointing how many
such calculations resulted in an answer of 120. When both of these errors
occurred, no credit was earned. Some credit could have been earned if candidates
demonstrated, by their working out, that a compound, rather than simple, interest
method was being used. More able candidates had little difficulty in obtaining the
correct answer.
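A compound-interest method applies the 10% to the growing balance each year. The two-year term below is an assumption for illustration only, since the report does not restate the question:

```python
balance = 12000
years = 2                            # assumed term, not quoted in the report
for _ in range(years):
    balance += balance * 10 // 100   # 10% of the current balance each year

# Simple interest would instead add 1200 every year: 12000 + 2 * 1200 = 14400.
```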
3.2.16 Question 16
Weaker candidates attempted to use trial and improvement methods; inevitably
these failed to lead to a correct answer. Those candidates who could perform some
manipulation of algebra gained some credit, and usually arrived at the correct
answer. In part (b) some credit was gained when candidates multiplied out the
bracket, but the majority then failed to perform the correct manipulation, usually
giving 2r = -22, or 2r = 18. Negative signs were frequently lost in the course of
the manipulation. There was little evidence that candidates took the time to check
their answers to the equations.
3.2.17 Question 17
Many able candidates gained full marks. The most common incorrect answer was
n + 5. Some candidates extended the sequence, giving 31 as their answer. It was
discouraging to see candidates spoiling their answer by writing the incorrect
statement n = n + 5.
3.2.18 Question 18
Most candidates gained at least 1 mark in part (a), though attempts were not quite
as good as in previous years. There was clear confusion between ≤ and <. In part
(b) there was little evidence of mathematical reasoning. Few candidates drew lines
to assist them, even fewer indicating a region. As a result few marks were gained.
This is clearly a topic on which centres are advised to spend more time.


3.2.19 Question 19
The first part of this question was well attempted, with nearly all candidates
gaining some credit. The most common error was in placing the corner of the
triangle at (1, 1). In part (b) it was encouraging to see far fewer candidates using
turn instead of rotation. Most gave a realistic attempt at a detailed description,
resulting in the award of marks, far better than in previous years. The most common
omission was the mention of the centre of rotation. There was clear evidence that
candidates who used tracing paper achieved greater success in this question.

3.2.20 Question 20
There was evidence that far more candidates are using compasses for this type of
question, and as a result there was more success at achieving marks. Arcs from A
were usually within tolerance, but many candidates lost this mark when they chose
to draw a line instead of an arc. The angle bisector at A was also usually accurate,
but candidates rarely constructed this line, choosing instead to draw it in by eye.
Most candidates shaded in the correct region having drawn two boundary lines.
3.2.21 Question 21
This is a question that is rarely done well from year to year, and this year was no
exception, with few candidates showing any real understanding of dimensions.
This question is also a test of whether candidates can follow instructions, and sadly
many failed in this respect also, ticking many squares in the table.
3.2.22 Question 22
Part (a) asked for a question that was to be part of a questionnaire. Questionnaires
do not include tally charts or data collection tables for several individuals, yet
many candidates appear to think that they do. Centres are advised to ensure that
candidates understand the difference between techniques for collecting data from
several individuals, and those (as in questionnaires) for just one individual. Many
candidates gave appropriate responses to their chosen question, which was then
credited. In part (b) candidates gave wide-ranging reasons, most of them relating
to the context. Unfortunately many candidates merely repeated their first reason
for the second reason, if only by changing the wording slightly.
3.2.23 Question 23
Many misconceptions were demonstrated in this question. Firstly there was
considerable misunderstanding about what operation was required, with many
candidates incorrectly choosing to use division in part (a), or multiplication in part
(b). Extra zeros were common, and a significant number of able candidates left
their final answer to part (a) as 48 × 10⁶. It was disappointing to see so many
candidates who were unable to perform the calculation 6 × 8 correctly.
3.2.24 Question 24
There was quite a selection of answers to this question, and candidates who
achieved success in one part sometimes made errors in the other parts; there
appeared little consistency to their approach. Part (i) was answered best of all. In
part (ii) √3 = 1.5 was a common error, as was 1.5 × 2 = 3. In part (iii) many
candidates got as far as 144, but then halved the 144 rather than finding the square
root. It was disappointing to find so many (able) candidates unable to perform the
calculation 8 × 9 correctly.


3.2.25 Question 25
Many weaker candidates failed to understand the three-letter notation for
describing angles. Many candidates were able to give the numerical values of the
two angles requested. Very few candidates gave appropriate reasons for their
deductions. As with question 10, merely stating the calculations undertaken is not
a reason for the properties used. As candidates use the circle theorems in both parts
to obtain the angles, their reasons should make some reference to those theorems, if
not by name, then by description.
3.2.26 Question 26
Responses to this question were centre-dependent. In part (a) candidates showed
little understanding of the term quartile, often giving the lowest (132) and
highest (182) number in the list. In part (b) it was very rare that any candidate
demonstrated what a box & whisker diagram is. Most merely plotted the numbers
as a series of points (crosses) across the bottom of the scale. Of those who did
show some understanding, there were usually errors in their diagram, perhaps
missing off the whiskers, or leaving the diagram as a series of vertical lines rather
than a box; rarely was there any indication of the median. This is a topic which
centres are advised to spend more time on in the future.
3.2.27 Question 27
It is disappointing that so few candidates were able to make a reasonable attempt at
the expansion of the brackets. Predictably x² + y² was the most common
(incorrect) answer seen. No candidates appeared to make the connection between
the two parts. Instead there were many long attempts at quite complex long
multiplication solutions, most unsuccessful.


PRINCIPAL EXAMINERS REPORT PAPER 5504 (INTERMEDIATE)

4.1

GENERAL POINTS

4.1.1

Candidates generally found this a difficult paper. Although many candidates had
been carefully prepared there were a significant number who did not appear to
have studied the topics targeting Grade B. There were more candidates than
usual who could not even cope adequately with the most basic questions at this
level of entry.

4.1.2

This paper contained several new topics, some of which were clearly unfamiliar
to certain centres and groups of candidates. Many, for example, showed no
understanding of stem and leaf diagrams.

4.1.3

Questions 1(a), 2, 3(a)-(d), 4, 5(c), 7(a) and (b), 10, 15(b) and 22(a) were
answered with the most success.

4.1.4

It was pleasing to note the confidence with which the majority of candidates
handled percentages, but questions involving the use of algebra were poorly
attempted by many candidates.

4.1.5

A surprising number of candidates displayed a poor understanding of units of
area. 2 cm² was confused with 2² (in question 3) and 2.5 m² with 2.5² (in question
8). Very few candidates were able to change 2.5 m² to cm² in question 9 and the
area units were frequently omitted from the answer to question 16.

4.1.6

Many candidates made insufficient use of a calculator in this paper. Those
choosing non-calculator methods frequently arrived at an incorrect answer.
Centres are advised to give candidates guidance on sensible use of a calculator,
emphasising particularly its use for percentages (as in question 2) and standard
form (as in question 20).

4.1.7

There are still too many candidates disinclined to display any working out which
means that the available method marks cannot be earned when the final answer
is incorrect.

4.2

REPORT ON INDIVIDUAL QUESTIONS

4.2.1

Question 1
Part (a) was usually answered well. Incorrect responses often resulted from
(2.3 + 1.8)² being evaluated as 2.3² + 1.8² and some candidates tried to square by
doubling. There were fewer successful attempts in part (b) with some
candidates failing to put any brackets at all in the expression. Some of those
giving a correct answer inserted superfluous brackets around 3.8 × 2.4.
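The part (a) error is quickly demonstrated: squaring the bracket is not the same as summing the squares.

```python
a, b = 2.3, 1.8

correct = (a + b) ** 2    # evaluate the bracket first: 4.1 squared
wrong = a**2 + b**2       # the common mis-evaluation

assert abs(correct - 16.81) < 1e-9
assert abs(wrong - 8.53) < 1e-9
```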

4.2.2

Question 2
This was a well-answered question. In part (a) most candidates subtracted
56.80 and then divided the result by 42.50 to get 5 extra hours. A significant
minority forgot to add on the first hour and so lost the second mark. Repeated
addition was a common approach used by candidates who scored low marks
overall on the paper. Part (b) was a good discriminator and the better candidates
gained full marks. Those finding 95% of the total were usually successful
although some gave the answer as 255.835 and lost the accuracy mark. Some
took a more complicated route by first finding 5% of each hourly rate and often
failed to gain the accuracy mark due to premature rounding or truncation of
values. A number of non-calculator methods were seen (e.g. finding 5% by
halving 10%) but this solution process was inappropriate on a calculator paper
and candidates using such an approach usually made errors of approximation in
their working.
4.2.3

Question 3
Many candidates gave the correct answer of 60° in part (a) although answers of
45° and 120° were not uncommon. In part (b) the angle marked y was usually
calculated by using the sum of angles at a point or by dividing the sum of the
angles of a hexagon by 6. However, many candidates used 360° as the angle
sum of a hexagon. The majority of candidates appreciated that six triangles
were needed in part (c) and attempted to evaluate 6 × 2 cm² but it was
disappointing that a significant number of them then calculated 6 × 2², leading
to an answer of 24.

4.2.4

Question 4
The total income was found correctly by most candidates in part (i) of (a), but
part (ii) was very poorly attempted. Many wrote 50/1200 or 1250/50, leading to an
incorrect answer. Finding the amount spent on the hall in (b) proved
straightforward for most candidates but many were then unable to give the ratio
in its simplest form. A ratio of 6:2.5 was seen often and some gave an answer in
unitary form.

4.2.5

Question 5
This question proved a good discriminator. Parts (a) and (b) highlighted
weaknesses in algebra. 2x and x² were often seen instead of x + 2, and 4x was
confused with x⁴. Few candidates gave a correct expression in part (b) and a
large proportion of those who did failed to simplify it correctly. Incorrect
methods seen included finding the area, (x + 2) × (x + 5), and finding half the
perimeter, 2x + 7. Many candidates gained full marks in part (c), even when
they had been unable to write correct algebraic expressions in the previous two
parts of the question.

4.2.6

Question 6
It was disappointing that more candidates did not gain both marks in part (a).
One mark was often achieved for 2p but many gave the answer in the form
2p + q. 8p - 5q was a common incorrect response. In part (b) many of those
who correctly substituted y = -4 were unable to solve the equation correctly and it
was common to see -4 = 5x - 3 followed by -4 - 3 = 5x. Some candidates
used trial and improvement to solve the equation without any reference to
algebraic techniques.


4.2.7

Question 7
Nearly all candidates answered parts (a) and (b) correctly. Part (c) was less well
attempted. Many candidates wrote (100 × 101)/2 but failed to evaluate it. A fully
correct expression was not often given in part (d) although quite a few
candidates gained one mark for either n(n + 1)/2 or some other quadratic in n.
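Parts (c) and (d) are both instances of the triangle-number formula:

```python
def triangle(n):
    # Sum of the first n whole numbers: n(n + 1)/2.
    return n * (n + 1) // 2

assert triangle(100) == 100 * 101 // 2 == 5050
assert triangle(100) == sum(range(1, 101))
```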

4.2.8

Question 8
Many candidates calculated the volume of the cuboid instead of the surface area
and some simply added the three dimensions together. Some of those who
gained marks for finding the areas of faces omitted to include the base. Many of
the candidates with an incorrect surface area were then able to divide by 2.5 and
multiply by 2.99 to find the cost of the paint, often rounding up to a whole
number of litres in the process. Some showed insufficient working to gain this
mark but a significant number divided by 2.5² instead of 2.5.

4.2.9

Question 9
Candidates rarely gained any marks on this question with the majority
multiplying 2.5 by 100 to give an answer of 250. Other incorrect answers seen
included 0.025 and 2500. Some candidates confused 2.5 m² with 2.5².
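Area units scale with the square of the length factor, which is why multiplying by 100 alone gives the wrong 250:

```python
area_m2 = 2.5

area_cm2 = area_m2 * 100**2   # 1 m = 100 cm, so 1 m^2 = 100^2 cm^2
wrong = area_m2 * 100         # scaling by 100 only, the majority's error

assert area_cm2 == 25000.0
assert wrong == 250.0
```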

4.2.10 Question 10
There were a large number of candidates who gained full marks in this question.
Many accurate pie charts were drawn in part (a) and they were almost always
correctly labelled. Some candidates calculated the correct angles but were not
able to draw them accurately, perhaps because they did not have a protractor.
Candidates using a percentage approach were usually unsuccessful. Part (b) was
nearly always correct.
4.2.11 Question 11
This question was answered well by those candidates who knew how to draw a
stem and leaf diagram although the key was frequently omitted. Unfortunately
many candidates were not familiar with stem and leaf diagrams and there were
numerous attempts at frequency tables and box plots as well as drawings of
plants. Some candidates did not attempt to answer this question.
4.2.12 Question 12
Less than half the candidates answered part (a) correctly. This was a
straightforward question but a significant number failed to recall the correct
formula. Many incorrect methods were seen. Often these started with π × 4 or
2 × π × 4 but some candidates did not use π at all. Part (b) was answered very
poorly indeed. Few candidates thought of placing the pencil in the cylinder at an
angle and even fewer recognised it to be a question in which they could use
Pythagoras.
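Part (b) needed the pencil placed on the slant, so its greatest length is the hypotenuse of a right triangle formed by the cylinder's height and base diameter. The dimensions below are hypothetical, since the question's values are not restated here:

```python
import math

height_cm = 12.0    # assumed internal height of the cylinder
diameter_cm = 5.0   # assumed base diameter

# Pythagoras on the height and the diameter gives the longest pencil.
longest_pencil = math.hypot(height_cm, diameter_cm)

assert longest_pencil == 13.0   # a 5-12-13 triangle for these assumed values
```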
4.2.13 Question 13
In part (a) some did attempt repeated division by prime numbers (and often by
factors rather than primes) or drew factor trees. Answers were often given as a
list of factors or prime factors. Many candidates, however, did not know what
was required in this question. As few candidates achieved full marks in part (a)
the connection between parts (a) and (b) was rarely noticed. However, lack of
success in part (a) was sometimes followed by the correct answer in part (b) and,
to a lesser extent, part (c). More candidates seemed familiar with the HCF than
the LCM. Some confused the two.
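Repeated division by primes, and the relationship between the HCF and the LCM, can be sketched as follows; the numbers 60 and 72 are illustrative, as the question's values are not given in the report:

```python
import math

def prime_factors(n):
    """Factorise n by repeated division by primes, as in a factor tree."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

a, b = 60, 72
hcf = math.gcd(a, b)    # highest common factor
lcm = a * b // hcf      # lowest common multiple

assert prime_factors(60) == [2, 2, 3, 5]
assert (hcf, lcm) == (12, 360)
```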
4.2.14 Question 14
The correct class interval was often given in part (a), but some candidates chose
the interval containing the number 20 (or 20.5), i.e. 0 < C ≤ 50. Various incorrect
methods were used. Some candidates wrote down 40 ÷ 5 = 8 and gave the
interval 0 < C ≤ 50 and a few attempted to calculate an estimate of the mean and
used this to write down the class interval. Part (b) was completed poorly. Most
of those who gained a mark did so for explaining that 1000 was too large to be
included in the table rather than for identifying the position of the median as
being near the bottom of the class interval. Candidates usually gained either
three marks or no marks in part (c). A pleasing number gave the correct answer
of 6500 but many obtained an answer of 6240 by finding 120% of 5200.
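The answers 6500 and 6240 imply that 5200 is a value after a 20% decrease, making part (c) a reverse percentage: divide by the multiplier rather than adding 20% back on.

```python
reduced = 5200

original = reduced / 0.80   # undo a 20% decrease by dividing by the multiplier
wrong = reduced * 1.20      # finding 120% of 5200, the common error

assert round(original) == 6500
assert round(wrong) == 6240
```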
4.2.15 Question 15
Few candidates did any work in part (a) that related to the problem and many
attempted to substitute numbers at this stage. Those who realised that the height
of the cuboid was x + 1 usually gained at least one mark. Trial and improvement
was generally popular and most candidates had some success in part (b). A
surprising number did not evaluate a trial in the range 5.8 < x ≤ 5.85, losing the
final mark. Some did not fully evaluate their trials and gained no marks. A few
candidates incorrectly evaluated expressions such as 5³ + 6², i.e. they used x and
x + 1.
4.2.16 Question 16
Answers to this question proved to be disappointing. Many candidates failed to
recall the correct circle formula and the circumference was often calculated in
error, although π × 15² and (π × 7.5)² were also quite common. Some of those
who used the correct formula failed to halve the area of the circle to get the area
of the semi-circle. Candidates were expected to give units with their answer.
Many neglected to do so but when units were given cm² was often seen.
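The radius 7.5 (half a diameter of 15) is implied by the wrong answers quoted above; the intended method halves the area of the full circle:

```python
import math

radius = 15 / 2                            # diameter 15 => radius 7.5
semicircle_area = math.pi * radius**2 / 2  # half of pi * r^2

assert 88.35 < semicircle_area < 88.36     # roughly 88.4 cm^2
```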
4.2.17 Question 17
In part (a) many candidates correctly substituted y = 5 into the equation but were
then unable to solve this correctly. Some substituted 5 for x instead of y. Part
(b) was answered poorly. Many tried to rearrange the equation or simply wrote
it in a different way, e.g. y = 0.5x + 1. Dealing with the ½ proved difficult in
part (c), and even successful candidates tended to write (y - 1)/2 rather than
2(y - 1). Few candidates rearranged the equation correctly and often no working
was shown so no mark could be awarded for a correct step. Some candidates
simply interchanged x and y in the equation.
4.2.18 Question 18
It was pleasing that many candidates used an algebraic approach to solve the
simultaneous equations. A surprising number multiplied the equations correctly
but then failed to choose the appropriate operation to eliminate an unknown.


Those who chose to eliminate y tended to be more successful as eliminating x
left candidates with (-15y) - (4y), which was often incorrectly evaluated as
11y or -11y. The weaker candidates who used trial and improvement methods
usually earned no marks.
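A sketch of the elimination method with a hypothetical pair of equations (the question's coefficients are not restated in the report), chosen so that eliminating x leaves a (-15y) and (4y) step of the kind described:

```python
# Hypothetical system:
#   3x + 2y = 12
#   2x - 5y = -11
# Multiply by 2 and by 3 so both contain 6x, then subtract to eliminate x:
#   (6x + 4y) - (6x - 15y) = 24 - (-33)
#   4y - (-15y) = 19y = 57   =>   y = 3
y = (2 * 12 - 3 * -11) / (2 * 2 - 3 * -5)
x = (12 - 2 * y) / 3

assert (x, y) == (2.0, 3.0)
# Always check by substituting back into both equations:
assert 3 * x + 2 * y == 12 and 2 * x - 5 * y == -11
```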
4.2.19 Question 19
This was a very poorly attempted question. Those candidates who recognised
similar triangles were usually unable to identify the correct scale factor, with 6/4
often being used. Some candidates gained one mark in part (b) for correctly
calculating the length of BC but many assumed the trapezium to be isosceles
with BC = ED.
4.2.20 Question 20
Full marks were rarely awarded in this question as candidates either failed to
write the answer in standard form or gave the value of y². Many did gain two
marks and some weaker candidates gained one mark for evaluating the
numerator or denominator correctly, most commonly the numerator. An inability
to use a scientific calculator efficiently was evident in the manner many
candidates attempted the question. Some candidates wrote 6¹⁵ instead of 6 × 10¹⁵
and a significant minority replaced 10¹⁵/10⁸ with 10²³.
4.2.21 Question 21
Many candidates realised that this question involved trigonometry and it was
answered well by the more able candidates. Many were able to identify the need
to use tangent but some were unable to write down tan 35° = AB/8.5 and others
who did could not rearrange this correctly to find AB. A significant minority
used cosine rather than tangent. There was evidence that some candidates had
their calculator set in the wrong mode for this question.
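The rearrangement candidates struggled with, tan 35° = AB/8.5, gives AB directly once the calculator is in degree mode:

```python
import math

# tan(35 deg) = AB / 8.5  =>  AB = 8.5 * tan(35 deg)
AB = 8.5 * math.tan(math.radians(35))   # radians() stands in for degree mode

assert abs(AB - 5.95) < 0.01   # AB is approximately 5.95
```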
4.2.22 Question 22
Part (a) was answered well by candidates of all abilities. Acceptable
explanations often mentioned 100 as the expected number of sixes. The first
mark in part (b) for writing

5
on the Not Six branch was gained by many
6

candidates but the tree diagram was often not completed correctly. Candidates
commonly forgot labels, gave incorrect probabilities, or added only one more
branch to the diagram.


PRINCIPAL EXAMINERS REPORT PAPER 5505 (HIGHER)

5.1

GENERAL POINTS

5.1.1

Candidates generally were able to attempt all the questions within the available
time and to display a good understanding of many of the topics that were being
tested. Very few candidates this year failed to cope with even the grade C topics.

5.1.2

Although presentation was generally good it is worth recording that some
candidates still use pencil only. The use of red biro throughout was also seen. In
the drawing questions, candidates should not use a non-HB pencil. Candidates
should also be aware that supplementary answer sheets may be used. Those who
attempted Question 22 by use of a large tree diagram would possibly have
benefited by using a supplementary answer sheet.

5.1.3

Candidates should continue to be reminded to check their basic arithmetic. The
following illustrate some of the more common arithmetical errors that were seen
by examiners: 12000 - 1200 = 11800, 6 × 8 = 42, 200000 + 30000 = 203000.

5.1.4

The error in the grid printed in Question 14(b) is regrettable. Although
candidates who found the correct quartiles are unlikely to have used the interval
in which the error occurs, an increased tolerance was given to those who, for
example, needed to plot at 144.5.

5.2

REPORT ON INDIVIDUAL QUESTIONS

5.2.1

Question 1
Many candidates gained full credit for this place value question. The most
common problems occurred in part (iii) where a significant number of
candidates gave the answer as either 1.1931 or as 0.0123.

5.2.2

Question 2
On this non-calculator paper most candidates successfully considered each year
separately rather than using the multiplication factor (0.9)². Weaker candidates
frequently obtained the wrong answer 9600 by assuming that the depreciation
was 1200 in each year. Although careless subtractions resulted in the loss of the
accuracy mark it is pleasing to report that, in general, candidates showed
sufficient working to enable the examiners to award the method marks.
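The contrast the report draws, between the multiplication factor (0.9)² and the wrong assumption of a fixed 1200 loss each year, in a few lines:

```python
value = 12000
for _ in range(2):
    value = value * 90 // 100      # lose 10% of the *current* value each year

assert value == 9720               # same as 12000 * 0.9 * 0.9
assert 12000 - 2 * 1200 == 9600    # the wrong answer from fixed depreciation
```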

5.2.3

Question 3
The vast majority of candidates successfully expanded the bracket but then a
significant minority failed to carry out the rearrangement of the terms correctly.
Common wrong answers are illustrated as follows:
2r = 18, r = 9; 2r + 2 = 20, 2r = 18, r = 9;
7r = 5r - 22, 2r = -22, r = -11.

5.2.4

Question 4

Almost all candidates gained some credit in this inequality question. In part (a)
the main error was the omission of 0. Even some high-grade candidates believed
that 0 was not an integer. Many candidates gained two of the three marks in part
(b). The best candidates frequently shaded out the unwanted areas and then
plotted the 6 correct points.
5.2.5

Question 5
Most candidates gave the correct answer to this familiar style question. The most
common wrong answer was n + 5.

5.2.6

Question 6
Most candidates correctly marked the position of triangle D and described the
correct single transformation in part (b) although some descriptions were not full
enough or involved a wrong angle of rotation. Fewer descriptions involved more
than one transformation, which is an improvement on previous years.

5.2.7

Question 7
Most candidates gained some credit in this question although a straight line
segment instead of an arc was sometimes drawn from the relevant line from A to
the side AB. Those who correctly drew the two boundaries almost always went
on to shade the correct region.

5.2.8

Question 8
The first two expressions were normally correctly identified as length and
volume but the expression xy + yz + xz was frequently not identified as area.
It was frequently linked to None of these.

5.2.9

Question 9
In part (a), although many candidates provided a valid question and many of
those also gave a list of relevant choices, a significant minority saw this part of
the question as a data collection exercise. In part (b) the vast majority of
candidates scored at least one of the two marks. Those who failed to gain the
second mark normally gave more information based on their first reason rather
than providing a second reason.

5.2.10 Question 10
In part (a) the vast majority of candidates realised that they had to calculate
6 × 10² × 8 × 10⁴ but a significant minority could then not evaluate this correctly.
Some candidates who had correctly evaluated the product left their final answer
in non-standard form as 48 × 10⁶. In part (b) credit was awarded for writing
either of the given numbers as an ordinary number but subsequent wrong
calculations or wrong operations gained no further credit. Those who had a basic
understanding of standard form notation and who showed working in part (b)
normally gained at least the first mark.
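The part (a) product and the normalisation step that the 48 × 10⁶ answers omitted:

```python
product = (6 * 10**2) * (8 * 10**4)   # 48 * 10**6

assert product == 48_000_000
# Standard form needs a mantissa m with 1 <= m < 10, i.e. 4.8 x 10^7.
assert f"{product:.1e}" == "4.8e+07"
```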

5.2.11 Question 11

Many candidates were able to correctly expand the double brackets but some
then had problems simplifying the middle two terms. Only a minority saw the
connection between the two parts and frequently embarked on three long
multiplications with very little success. Some of those who spotted the
connection unfortunately made a subsequent basic error and wrote
(3.47 + 1.53)² = 6² = 36.
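The intended connection: part (a)'s expansion (x + y)² = x² + 2xy + y² collapses part (b)'s arithmetic, presumably 3.47² + 2 × 3.47 × 1.53 + 1.53², to (3.47 + 1.53)² = 5² = 25 rather than 6² = 36.

```python
a, b = 3.47, 1.53

long_way = a**2 + 2 * a * b + b**2   # term-by-term evaluation
short_way = (a + b) ** 2             # (3.47 + 1.53) ** 2 = 5 ** 2

assert abs(long_way - 25) < 1e-9
assert abs(short_way - 25) < 1e-9
```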
5.2.12 Question 12
Although most candidates obtained the correct numerical values for the two
angles only a minority of these candidates could give valid reasons for their
answers. Some explanations were vague and were not awarded the mark, for
example, "the angle between the chord AC and the tangent EC is 90°". A
minority of the weaker candidates indicated that they did not fully understand
the three-letter angle notation.
5.2.13 Question 13
Although the correct answer p⁹ was often stated, a very common wrong answer
was p⁶. Candidates generally were less successful in part (ii). The common wrong
approaches are indicated by these common wrong answers:
6q⁹/q³, or 3q² × 2q³ = 6q⁶, or 5q⁹/q³ = 5q⁶.
5.2.14 Question 14
The response to this question on box plots was very mixed and tended to be very
centre-based. In part (a) the most common wrong answers were 114.5 and 169.5.
Although candidates were able to plot these values a significant minority did not
draw the relevant (follow through) box. Those who drew the box frequently also
drew the correct whiskers but some then failed to indicate the median or
indicated its value at 157.
5.2.15 Question 15
Although many candidates had a good understanding of the required method it
was not uncommon to see others using the wrong formula for the circumference
of the circle. Again, basic numerical slips were in evidence, for example
360 ÷ 40 = 8, but the most common loss of a mark was due to forgetting to add the
two radii to the arc length to find the perimeter. A small minority of candidates
insisted on using a value for π and generally got lost in a maze of numbers.
5.2.16 Question 16
Most candidates gave the correct value 1 in part (i) although 0 and 4 were
common wrong answers. In part (ii) the answer was sometimes left in an
incomplete form as 1/4², with other common wrong answers 0.04 and 16
appearing. The final part was, as expected, only generally answered correctly by
the more able candidates although some others simplified the given expression
to 4³ but then evaluated this to 32 (or 48).
5.2.17 Question 17


This question tested a topic which has regularly appeared in past papers. Such
questions have to be read very carefully. It was disappointing to find that a
significant number of candidates could not get started.
The better candidates, provided they got the initial algebra correct involving
F, x and a constant k, generally had no problems picking up the marks in parts (a)
and (b). Generally only a minority of candidates were able to correctly rearrange
the formula to obtain the value for x in a form not involving a square root.
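The final rearrangement has this general shape; F = k√x is an assumption reconstructed from the mention of F, x, k and a square root, not necessarily the question's actual formula:

```latex
F = k\sqrt{x}
\quad\Longrightarrow\quad
\sqrt{x} = \frac{F}{k}
\quad\Longrightarrow\quad
x = \frac{F^{2}}{k^{2}}
```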
5.2.18 Question 18
Although many candidates gained partial credit for their solution to this surds
question, it was only a minority of candidates who went beyond 2/√2 by
rationalising the denominator to reach the final correct answer.


5.2.19 Question 19
Many candidates gained full marks for this question. All but the very weakest
generally drew the first bar correctly. The common wrong values in the table
were 40 and 80.
5.2.20 Question 20
In part (a) most candidates gained credit for correctly obtaining at least 3 of the
4 terms but sign errors and numerical errors were seen far too frequently. In part
(b) a significant number of candidates left x without any power. In part (c)
those who used the difference of two squares were generally successful but, for
many, this became a question where everything cancelled.
5.2.21 Question 21
Negative enlargement is a topic which does not seem to be understood by many
candidates. Some just drew a triangle of the same size somewhere on the grid;
better attempts had a triangle of the correct size. Those who displayed an
understanding of the topic drew a triangle with the correct orientation and in
the correct quadrant, but often with at least one side too long or too short.
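The underlying calculation can be sketched as follows (the triangle, centre and scale factor are invented for illustration; they are not the question's values):

```python
# Illustrative sketch of enlargement by a negative scale factor k about a
# centre (cx, cy): each image point lies on the opposite side of the centre,
# |k| times as far away, which reverses the orientation of the shape.

def enlarge(points, centre, k):
    cx, cy = centre
    return [(cx + k * (x - cx), cy + k * (y - cy)) for (x, y) in points]

triangle = [(1, 1), (3, 1), (1, 2)]      # hypothetical object vertices
image = enlarge(triangle, (0, 0), -2)    # hypothetical centre and factor
print(image)   # [(-2, -2), (-6, -2), (-2, -4)]
```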
5.2.22 Question 22
Some of the most able candidates presented precise, elegant solutions within a
few lines of working. The vast majority of the non-A* candidates drew a tree
diagram and proceeded to calculate the probabilities of all the possible
combinations. Those who showed the results as double products of fractions
generally scored more than half the marks for the question, but those who
evaluated without any evidence generally scored poorly. Many candidates did
not go on to add the 18 relevant probabilities from their tree diagram, and a
high proportion of those who did attempt the correct sum made arithmetical errors.
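The tree-diagram arithmetic the report describes can be sketched as follows (the bag contents below are invented and are not the numbers from the exam question):

```python
from fractions import Fraction

# Multiply along each branch of the tree, then add the relevant branch
# products: here, P(exactly one red) for two draws without replacement.
red, blue = 5, 3                 # assumed bag contents
total = red + blue

p_exactly_one_red = (Fraction(red, total) * Fraction(blue, total - 1)
                     + Fraction(blue, total) * Fraction(red, total - 1))
print(p_exactly_one_red)   # 15/28
```

Working in exact fractions avoids the arithmetical slips the report mentions when many branch products must be summed.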
5.2.23 Question 23
Many candidates gained credit for a correct vector in part (a), but a common
error among below-average candidates was to assume that, since all the sides of
the hexagon were of equal length, each side could be represented by the same
vector. Those candidates who wrote down a relevant vector journey normally
gained credit in part (b). Part (c) was generally only completed correctly by the
A* candidates, although many other candidates gained some credit for correctly
writing, for example, the vector BY in terms of a and b.
5.2.24 Question 24
It was not uncommon to see grade A and A* candidates picking up at least two
marks in part (a) although (iv) was frequently incorrect. Part (b), as expected,
proved to be a test even for the A* candidates although many such candidates
were awarded partial credit. Those who had a thorough understanding of the
topic had no problem writing down the answer.

PRINCIPAL EXAMINER'S REPORT PAPER 5506 (HIGHER)



6.1 GENERAL POINTS

6.1.1 The paper was found to be challenging.

6.1.2 The combination of increased algebraic demands and a greater number of
unstructured multi-step questions produced difficulties for many candidates.
There was a lack of rigour when dealing with algebraic expressions.

6.1.3 Many candidates could not begin to solve a quadratic equation or expand a
perfect square.

6.2 REPORT ON INDIVIDUAL QUESTIONS

6.2.1 Question 1
Part (a) was usually well done. There were some students who thought that the
correct formula was 2πr², presumably because of the two ends.
Part (b) caused no difficulty for candidates who realised that the crucial idea was
to consider the pencil lying diagonally in the can, so making a right-angled
triangle. Once that was realised most candidates scored full marks, although
there was a significant number who used 10² + 4².
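The intended right-angled-triangle calculation can be sketched as follows (the dimensions are assumed for illustration, not taken from the paper):

```python
import math

# The longest pencil lies along the internal diagonal of the can: the
# hypotenuse of a right-angled triangle whose other sides are the height
# and the full diameter (using the radius instead is the common error).
height, diameter = 10.0, 8.0            # assumed can dimensions
longest_pencil = math.hypot(height, diameter)
print(round(longest_pencil, 1))   # 12.8
```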

6.2.2 Question 2
Part (a) proved straightforward for those who used a systematic method, either
systematic division by small primes or some sort of factor tree. Some candidates
thought that 1 is a prime.
Many candidates did not understand the term 'product of its prime factors' and
gave as an answer a list of all of the factors of the numbers.
Part (b) was done well by those who understood the term 'highest common
factor', although the lowest common multiple in part (c) proved to be more
difficult to find. Many candidates showed some awareness of what was required
by multiplying 60 and 96 together.
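The systematic method described can be sketched as follows, using the numbers 60 and 96 mentioned above (the question's exact wording is not reproduced here):

```python
import math

# Repeated division by small primes gives the prime factorisation; the HCF
# and LCM then follow. Note the LCM is not simply 60 x 96.

def prime_factors(n):
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

print(prime_factors(60))      # [2, 2, 3, 5]
print(prime_factors(96))      # [2, 2, 2, 2, 2, 3]
hcf = math.gcd(60, 96)
lcm = 60 * 96 // hcf
print(hcf, lcm)               # 12 480
```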

6.2.3 Question 3
Competent candidates did part (a) well, by finding the location of the 20th value
or interpolating between the 20th and 21st values. Part (b) proved to be more
difficult, as candidates had to give a clear explanation for their answer. The most
successful were those who referred to the 20.5th or 21st values, but other
candidates gained the mark by commenting that the old median was at the start
of the interval, so the class interval would not change.
Part (c) was a standard reverse percentage, which many candidates recognised.
However, many did not, and gave the answer as 6240.
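The reverse-percentage idea can be sketched with invented numbers (not the values from the paper):

```python
# If a quantity is 624 AFTER a 4% increase, the original is recovered by
# dividing by 1.04, not by subtracting 4% of 624 (the standard error).
final_value = 624.0    # assumed final value
rate = 0.04            # assumed percentage increase
original = final_value / (1 + rate)
print(round(original, 2))   # 600.0
```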

6.2.4 Question 4
Part (a) was the first question on the paper where an equation had to be derived
and then subsequently solved. Many candidates had the correct idea of
multiplying length by width by height, but lost marks through poor algebra,
such as x × x × x + 1. Some candidates tried to identify the cube term as a
volume and the square term as an area, and then argued that adding them together
gave the total volume.
The second part was well done, with no evidence that candidates on this paper
were put off by the squared term.
6.2.5 Question 5
Most candidates made a good go of this question, although some forgot to add
the units or, incredibly, put in the wrong units (typically cm³). There were a few
who used the formula for a circumference.

6.2.6 Question 6
The presence of the half as the coefficient of x caused more problems than it
should have. A common answer to part (a) was 9, which was obtained by
multiplying 5 by 2 and then subtracting 1. A similar process was carried out in
many cases for part (c), where the answer x = 2y − 1 was very common.
There were many correct answers to part (b), although some candidates thought
that they had to write the same equation in an alternative fashion, giving, for
example, the response 2y = x + 2.

6.2.7 Question 7
This was a standard simultaneous-equations question, so it was disappointing to
see so many poor attempts. The principal error was one of method, where the
wrong operation was used to eliminate a variable: for example, subtraction when
the coefficients were equal but opposite in sign.
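The point about choosing the operation can be sketched as follows (the pair of equations is invented, not the pair set on the paper):

```python
from fractions import Fraction

# With y-coefficients equal but opposite in sign, the correct operation is
# to ADD the equations, which eliminates y:
#   3x + 2y = 8
#   5x - 2y = 8
# Adding: 8x = 16, so x = 2; substituting back gives y = 1.
x = Fraction(16, 8)
y = (8 - 3 * x) / 2
assert 3 * x + 2 * y == 8 and 5 * x - 2 * y == 8   # check in both equations
print(x, y)   # 2 1
```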

6.2.8 Question 8
Candidates who realised that this was the standard question on similar triangles,
or enlargement, had little trouble with it. However, there was a great deal of
confusion over which sides to use in order to find the scale factor. Few
candidates opted for the expedient of drawing the two triangles separately and
specifically identifying the corresponding sides.
Part (b) was a more unusual question. Many candidates tried to find the
perimeter of the triangle, and there was a great deal of confusion about what to
use as scale factors.

6.2.9 Question 9
Most candidates managed to gain one mark. Many gained two by leaving the
answer as 1.875 × 10⁷, overlooking the need to take a square root.

6.2.10 Question 10
This was a standard trigonometry question, and good candidates had little
difficulty with it. However, a sizeable number took advantage of the formula
sheet and used the sine rule. Often this led to finding the length of the
hypotenuse. (Methods such as this gain no marks unless they form a complete
method leading to, in this case, the finding of the opposite side.)
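The direct method can be sketched with assumed values (not the paper's):

```python
import math

# In a right-angled triangle the opposite side follows directly from the
# tangent ratio; the sine rule is unnecessary, and finding the hypotenuse
# alone is not a complete method.
angle_deg, adjacent = 38.0, 12.0        # assumed angle and adjacent side
opposite = adjacent * math.tan(math.radians(angle_deg))
print(round(opposite, 2))   # 9.38
```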
6.2.11 Question 11
Part (a) required candidates to comment on a statement about a probability. Most
thought that the dice was unfair, maintaining that they would have expected 100
sixes; a few used the phrase 'about 100 sixes'. Some did say that the dice was
fair, because it is possible to get 200 sixes out of 600 throws of a fair dice.
Part (b) required candidates to complete a probability tree diagram. Most did so
by drawing two more sets of two branches, labelling them correctly and gaining full
marks. A few candidates thought that they should just draw 2 of the 4 branches.
A few candidates drew the 4 branches but with probabilities on pairs of branches
that did not add up to 1.
Part (c) was a standard task and was well done by many candidates. The main
error of good candidates was in (ii), where they interpreted the task as finding
exactly one six. However, there were a sizeable number who thought that
1 × 1 = 2 when multiplying the fractions together.
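The fraction work the report refers to can be sketched for two rolls of a fair dice (probabilities along tree branches are multiplied, not added):

```python
from fractions import Fraction

p_six = Fraction(1, 6)
p_two_sixes = p_six * p_six                # 1/6 x 1/6 = 1/36
p_exactly_one = 2 * p_six * (1 - p_six)    # six-then-not plus not-then-six
print(p_two_sixes, p_exactly_one)   # 1/36 5/18
```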
6.2.12 Question 12
The first part was competently done with many candidates scoring full marks.
Some thought they could take a short cut by using 20 cm as the height.
Answers for part (b) varied considerably, but the general standard of algebra was
poor. Common errors were as follows:

√(h² + d²) = h + d

S² = 2d(h² + d²)

Some candidates produced a correct formula for h, but went on to 'simplify' the
square root, writing √(S²/(2d)² − d²) as S/(2d) − d.

Part (c) was poorly answered except by candidates who knew that the scale
factor for areas was the square of the scale factor for lengths, or used the
corresponding result for the areas of similar shapes.
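Assuming a formula of the general shape S = 2d√(h² + d²) (suggested by, but not fully recoverable from, the errors quoted above), the correct rearrangement for h runs:

```latex
S = 2d\sqrt{h^2 + d^2}
\;\Rightarrow\; \frac{S}{2d} = \sqrt{h^2 + d^2}
\;\Rightarrow\; \frac{S^2}{4d^2} = h^2 + d^2
\;\Rightarrow\; h = \sqrt{\frac{S^2}{4d^2} - d^2}
```

Each step applies one operation to the whole of both sides, which is exactly what the erroneous versions fail to do.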
6.2.13 Question 13
Part (a) required candidates to write down an appropriate expression for the area
of the trapezium, set it equal to 230 and simplify. Again, the standard of
algebra was poor, with brackets often omitted. Some candidates did not realise
that they had to derive an equation, and instead tried to find the value of x in part (a).
Part (b) was also disappointing, in that many candidates did not recognise the
equation as a quadratic. There were many attempts involving incorrect algebra, as
well as trial and error.
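Recognising and solving the quadratic formally can be sketched as follows (the coefficients are invented, not those of the equation derived from the trapezium):

```python
import math

# The quadratic formula solves a*x^2 + b*x + c = 0 directly, which is more
# reliable than the trial-and-error approaches the report describes.

def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c          # discriminant
    root = math.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(1, 3, -130))   # (10.0, -13.0)
```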
6.2.14 Question 14
Part (a) was generally well done, as candidates simply had to substitute into the
formula. Some candidates thought that using the cosine rule to find AB
answered the question.
Part (b) proved to be something of a challenge for many students. Essentially the
question involved two stages. The first entailed finding the length of AB using the
cosine rule. The most direct way to proceed was then to use the half-base-times-height
formula for the area, with AB as the base and CX as the height. Many good
students instead found one of the angles and then used trigonometry in the
right-angled triangle.
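The first stage can be sketched with assumed side lengths and included angle (not the paper's values):

```python
import math

# Cosine rule: AB^2 = CA^2 + CB^2 - 2*CA*CB*cos(angle ACB)
ca, cb, angle_c_deg = 7.0, 9.0, 40.0    # hypothetical values
ab = math.sqrt(ca**2 + cb**2
               - 2 * ca * cb * math.cos(math.radians(angle_c_deg)))
print(round(ab, 2))   # 5.79
```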
6.2.15 Question 15
Part (a) once again exposed weaknesses in algebra. There were, first of all, the
candidates who thought that expanding the square meant just squaring the first
term and then squaring the second term. Further, the square of 2a was often
given as 2a², when the expansion of the left-hand side yields
4a² + 4a + 1 − 4b² − 4b − 1.
Candidates also found difficulty with the right-hand side. Some could not deal
with the 4 correctly, but the major error, from those who knew something, was to
get the sign of the b term wrong.
A few candidates saw the connection between this part and part (b). Only the
very best were able to reason why the expression on the right-hand side should
be a multiple of 8, but many managed to gain partial credit by arguing that it was
a multiple of 4. An interesting approach, which appeared on occasions, was to
consider the difference between consecutive odd squares. This also gained
partial credit; by arguing that any two odd squares are connected by a
sequence of consecutive odd square numbers, a candidate could have obtained
full marks.
It was disappointing to see so many candidates writing (2r + 1)² − (2r + 1)²,
where clearly they had not understood the basic rule of algebra that the same
letter in an algebraic expression always carries the same value.
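A sketch of the divisibility argument, assuming the identity implied by the expansion above (a generic factorised form is used for illustration):

```latex
(2a+1)^2 - (2b+1)^2 = 4a^2 + 4a - 4b^2 - 4b = 4(a-b)(a+b+1)
```

Since (a − b) + (a + b + 1) = 2a + 1 is odd, exactly one of a − b and a + b + 1 is even; their product is therefore even, and the whole expression is a multiple of 8.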
6.2.16 Question 16
Many candidates were able to get half marks by identifying the upper and lower
bounds on the individual variables correctly. Very few went on to score full
marks for the correct combinations of the individual upper and lower bounds.
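The combination step can be sketched with invented measurements (not the paper's quantities):

```python
# Each value, given to 1 decimal place, has bounds of +/- 0.05. For a
# quotient, the greatest value pairs the numerator's UPPER bound with the
# denominator's LOWER bound, and vice versa for the least value.
a, b = 5.4, 2.6                      # assumed measurements, to 1 d.p.
a_lower, a_upper = a - 0.05, a + 0.05
b_lower, b_upper = b - 0.05, b + 0.05

greatest = a_upper / b_lower
least = a_lower / b_upper
print(round(greatest, 3), round(least, 3))   # 2.137 2.019
```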
6.2.17 Question 17
Most candidates managed to score at least one mark for part (a). However, there
were problems in interpreting the powers of 2 in terms of the variables x and y.
A common answer to (i) was xy/2 and a common answer to (iii) was x⁻¹.
Part (b) was found to be difficult by most candidates, although some reflection
on the form of the two equations quickly gives the result that 2y = 1, so that
y = 2⁻¹. Many candidates gave the answer as p = 2 and q = 3, which satisfies the
first equation but not the second.
6.2.18 Question 18
This proved to be too hard for the candidature. Many candidates were let down
in part (a) by poor algebra, in particular (x − m)² = x² − 2mx − m² or
(x − m)² = x² − m². However, many did not realise the nature of the task. Very
few were able to complete part (b).
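For reference (a generic example, not the expression set on the paper), completing the square rewrites a quadratic around a perfect square:

```latex
x^2 + 6x + 2 = (x + 3)^2 - 9 + 2 = (x + 3)^2 - 7
```

since (x + 3)² = x² + 6x + 9, so 9 must be subtracted again to keep the value unchanged.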
6.2.19 Question 19
Many candidates did not realise that the numerical values given in the stem of
the question had some relevance to the answer. Many tried to argue that the
events may or may not be dependent, according to whether they came to school
together or not.
Some candidates did realise that for events to be independent
P(both A and B) = P(A) × P(B), and were able to use the given information to
come to the correct conclusion.
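The independence test can be sketched with assumed probabilities (the paper's actual values are not reproduced here):

```python
from fractions import Fraction

# Events A and B are independent exactly when P(A and B) = P(A) x P(B).
p_a = Fraction(3, 10)       # assumed P(A)
p_b = Fraction(2, 5)        # assumed P(B)
p_both = Fraction(3, 25)    # assumed P(A and B)

independent = (p_both == p_a * p_b)
print(independent)   # True
```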
6.2.20 Question 20
This question proved to be too challenging. There were essentially two possible
approaches that candidates could take. One involves a basic knowledge of the
trigonometric curve: realising that the constant b is given by the amplitude of
the curve, that the constant a can be found by noticing that at t = 0,
y = a − b = 0, so that a = b, and that the value of the constant k can be found
from the observation that k × 90 = 360. The second approach involves the use of
transformations: a translation parallel to the y-axis, a stretch along the y-axis
and a stretch parallel to the x-axis. Many candidates thought that a = 100,
b = 100 and k = 90.
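Assuming the curve had the form y = a − b cos(kt)° (inferred from the reasoning above, not quoted from the paper), the first approach runs:

```latex
y = a - b\cos(kt)^{\circ}:\qquad
y(0) = a - b = 0 \;\Rightarrow\; a = b,\qquad
k \times 90 = 360 \;\Rightarrow\; k = 4
```

with b read off as the amplitude, i.e. half the vertical range of the curve.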

PRINCIPAL MODERATOR'S REPORT 5507 (COURSEWORK)


7.1 GENERAL POINTS

7.1.1 This year there has been a change to the coursework requirement for all GCSE
Mathematics specifications in that the two tasks are now mandatory. The
previous practice of marking the two pieces against the assessment criteria and
selecting the better mark in each of the three strands no longer applies and all six
strand marks now count (there are new criteria for the Handling Data task).

7.1.2 Previously the agreed notional grade boundaries for grades A, C and F were 20,
14 and 8 respectively out of 24 and this would suggest revised boundaries of 40,
28 and 16 out of 48 for the current year. In order to alleviate any initial
problems encountered by the changes to the coursework requirement all
Awarding Bodies have agreed to set reduced boundaries for the June 2003
examinations with A, C and F being fixed at 37, 26 and 14 respectively. These
boundaries have been agreed for 2003 and it should not be assumed that they
will remain at this reduced level in future years.

7.2 REPORT ON 5507A (TEACHER-ASSESSED COURSEWORK)

7.2.1 GENERAL ADMINISTRATION
Most centres were able to send their marks and the requested coursework
portfolios to their allocated moderator within the deadlines; a minority of
centres experienced difficulty in meeting them.
Generally, each candidate's work was fastened securely (although many centres
stapled the work, enclosed it in a plastic wallet and then placed it in a card
folder!), a task form was attached for each student, and the teacher responsible
for that candidate signed the relevant section. The majority of centres, however,
had used out-of-date task forms, which resulted in a comment (the U9) on their
centre report. The correct task form has a space for the marks of both the AO1
and AO4 components, a space for teacher comments and two separate sections for
signatures. It is a requirement of QCA that both teacher and candidate sign to
authenticate the work as their own.
A few centres had submitted only one piece of work, the AO1 task. The AO1 and
AO4 components each contribute 10% to the overall mark, so failure to submit an
AO4 project resulted in the loss of the marks for that component. Work which
uses predominantly AO4 skills, i.e. is drawn from the Handling Data section of
the National Curriculum, cannot be used for AO1. AO1 tasks must draw
predominantly from AO2 and AO3 of the National Curriculum.

7.2.2 ASSESSMENT OF AO1: COURSEWORK
Once again, a wide variety of AO1 tasks was seen, with the overwhelming
majority taken from the Edexcel materials. Centres did, however, submit
their own tasks. This is not discouraged, but centres using any tasks not
listed in the Edexcel coursework guidance folder are encouraged to send
their performance criteria. Where this had occurred, moderators found them
helpful.

Popular tasks were, once again, Beyond Pythagoras, Fencing Problem, T-Totals,
The Open Box Problem, Borders, Hidden Faces and Opposite Corners. A
significant number of centres had replaced T-Totals with Number Stairs, with
success. A large number of centres had persisted with Emma's Dilemma,
although the assessment of this task has caused problems in the past and did so
once again this year.
Centres should be assessing AO1 using the AO1 Assessment Criteria sent
to all schools by QCA and the awarding bodies. In addition, the revised
assessment guidance for each task was provided in the folder Teacher
Support Materials for GCSE Mathematics, sent to centres in September
2001. There was evidence that neither the elaboration document nor the
revised guidelines had been followed.
There were more discrepancies this year between the marks awarded by the
centre and those allocated by the moderator. Many centres that had, in
previous years, awarded marks within tolerance were out of tolerance on
AO1 this year. Centres are reminded that internal standardisation is a
requirement and should be carried out rigorously on both components.
There were a significant number of centres that were out of tolerance
because of one task or one teacher's assessment. There are no mechanisms to
allow for either of these situations: the whole centre's marks are adjusted.

BORDERS
Candidates were capable of producing a systematic list of results, tabulating
them and spotting patterns. Their understanding of why the pattern worked was
weak, with few demonstrating an understanding of the structure of the
patterns that they had drawn. Many were able to symbolise their pattern, but
once again this tended to rely upon a mechanism such as differencing rather
than an awareness of the link between the symbolic and the physical
situation. Consequently, many centres over-awarded marks in strand 3.
Symbolism was often poorly defined, referring (if at all) to a position
number in the sequence rather than a dimension. Consequently, candidates
could not evaluate how many squares there would be in any shape without
knowing its position in their sequence: this is not a general solution.
THE FENCING PROBLEM
Candidates produced some fine examples of the use of Pythagoras and
trigonometry to evaluate the areas of their shapes. However, central to this
piece is establishing that the regular case, for a given number of sides, will
give the greatest area for a fixed length of perimeter. All too often this was
not derived or stated. Similarly, stating this after exploring only one
classification, i.e. a quadrilateral, is not convincing enough to build a reasoned
argument. Far too many students produced the limiting case of a circle
without any justification as to why it is so.
Production of a graph asymptotic to the area of a circle does not convince,
especially when based upon only 4 or 5 sets of regular shapes.
The best candidates were able to adopt an argument based upon the
development of the general equation for the area as the number of sides
increased. The very best moved away from a numerical argument, which can
never be convincing, towards a general symbolic argument.

NUMBER STAIRS, T-TOTALS
These two tasks proved very popular once again. Both enabled candidates to
produce a systematic list of results, tabulate and spot patterns. Many
followed rote procedures to arrive at a linear expression, but few were
capable of explaining why the expressions worked. Numerical substitution
does not suffice for a general proof.
Most were able to extend their approach to another feature, grid size or
template size, but repeated earlier approaches rather than building upon their
findings. Consequently, many reproduced far too many numerical
calculations, when a structural argument linking the number of cells to one
coefficient, and exploring the other in relation to their added feature, would
have proved more fruitful.
Symbolism was often poorly defined, with capital letters and lower case
letters used to describe the same variable, different letters used to describe
the same variable etc.
LINES, CROSSOVERS AND REGIONS
This task did not prove as popular as in previous years. In general,
candidates were capable of producing a systematic list of results, tabulating
their results and spotting patterns. Their understanding of why the pattern
worked was weak.
It is essential that candidates state that the maximum number of crossovers
occurs when every line crosses all the others, as this leads easily to the
generalisation. Most missed this point and relied upon mechanisms to achieve
the generalisation, with little subsequent understanding or development.
HIDDEN FACES
This task remains very popular with those whose target grades in AO1 are 5
or 6. Candidates were capable of producing a systematic list of results,
tabulating and spotting patterns. Generally, they could explain where both of
the coefficients in their linear expression came from, although this was
often added on as justification for an expression found through differencing.
Having established a structural link, few used it to make progress, regressing
to previous numerical methods to generate further expressions.
Symbolism tended to be well defined.
BEYOND PYTHAGORAS
Many candidates attempted to make progress with this task through
differencing, without an understanding of the structure. Any expression
obtained from a finite list of numbers cannot be a generalisation without an
explanation of why it works beyond this list. As with other tasks where the
candidate relies upon a mechanical technique to obtain a quadratic
expression, definitions of variables were often missing. There was often
confusion between different letters representing the same variable. There
were many examples of tasks where candidates produced expressions
without any development. Teachers must ensure that work is authenticated.
Genuine algebraic development for the triple a, b, b + n was rare,
although the techniques required are well within the capability of a genuine
A/A* candidate. Numerical substitution into derived expressions was often
offered as a proof, and often accepted by the teacher as sufficient. A purely
numerical argument cannot be convincing.
THE GRADIENT FUNCTION
This task remains particularly popular with candidates on higher tier.
Generally the standard achieved was high. There were examples where
candidates with lower GCSE target grades had attempted this task with little
success.
The key area in the convincing reasoned argument remains how the
candidate deals with the limiting case as the base of the triangle tends to 0.
All too often this was ignored. Too many candidates offered explanations
from an A level textbook without applying or extending their reasoning.
At the highest level, it was common to see candidates deriving a general
function based on the binomial expansion.
THE OPEN BOX PROBLEM
This task remains popular and offers an opportunity to candidates at all
levels.
The best work used ICT to develop their numerical results and search for a
linking characteristic.
Symbolic results for the optimum cut sizes for squares were seen regularly.
These candidates could then extend this reasoning to rectangular sheets with
different ratios. It was common for these ratios to be chosen randomly, rather
than controlled. At the highest level, candidates controlled the ratio, 1:n,
exploring the behaviour of the optimum cut size as n tended towards a large
number.
TUBES
Candidates who attempted this task achieved more success than on the Fencing
Problem. Although the techniques used are similar, there is a greater
opportunity for development at the top of the mark range as this is a much
more demanding problem.
It was common to see candidates adopting a symbolic reasoned approach to
the behaviour of the tubes as the dimensions vary.
LAYERS
Although not popular, those that did attempt this task could produce a
systematic list of results, tabulate and spot patterns. Their understanding of
why the pattern worked, however, was weak. Definition of variables was
often missing.
Symbolic expressions were often produced with little attempt to relate to the
physical situation. Unfortunately, it was common for candidates to place
cubes on the next layer above spaces on the layer below. Sadly, this
illustrated the general point that candidates treat their investigation in
isolation from the physical situation that created the problem.
OPPOSITE CORNERS
Candidates were capable of producing a systematic list of results, tabulating
their results and spotting patterns. However, the use of algebra was generally
weak. Missing brackets and errors in the expansions were very common, and
were often given credit. Consistent symbolism must be accurate and largely
free from error.
EMMA'S DILEMMA
There were a significant number of centres whose candidates had attempted
this task. The majority of these candidates relied heavily upon listing the
combinations and finding patterns from their tables. Numerical pattern
spotting was rarely expanded to include the structure. Factorial notation was
introduced without justification or explanation. Subsequent work relied upon
further listing of letters.
To progress into the higher marks, candidates must understand why the
combinations produce the patterns they do. Rarely was a structural argument
developed.
The revised assessment criteria printed in the Teacher Support Materials
for GCSE Mathematics place restrictions upon the marks that can be awarded
for this task. Too few centres had applied these revised criteria when
assessing their candidates' work. Consequently, adjustment of centres' marks
was most commonly associated with this task.

7.2.3 AO4: HANDLING DATA PROJECT
This component was introduced for the first time this year. Four project
suggestions and data had been provided in Teacher Support Materials for
GCSE Mathematics, but centres were not restricted to these. A small
minority had introduced their own project starters, often with pre-written
data sets. There were no significant differences in the attainment or
assessment where centres had opted to do this.
The candidates who produced the best projects had produced work that was
well planned, succinct and well presented. Candidates who stated what they
expected to find, used and justified appropriate skills only and gave full
reasoned results invariably achieved the better marks at their level.
Assessment of this component varied considerably across centres, although
the assessment within centres showed a greater degree of standardisation.
There was a significant number of centres where the marks awarded
differed from those of the moderator. There were no significant discrepancies
between the marks obtained on any of the four choices of project.
It was clear that candidates did not generally understand the requirements of
this project. There was an (incorrect) assumption that marks would be
awarded for the use of skills, resulting in far too many diagrams and
calculations occurring rather than candidates selecting the most appropriate
and effective skill. It was common for candidates to list many hypotheses
which were unrelated and then to explore each in isolation. This is not what
is expected. Candidates need only investigate one hypothesis, which could
be divided into smaller inter-related statements. Separate, unrelated
hypotheses were treated by moderators as separate mini projects and were
moderated accordingly.
There was strong evidence that teachers' assessment had taken into
account only the skills used, and had not considered whether the candidates had
satisfied their original objective. Candidates were clearly encouraged to
over-produce, with many projects running to over 100 sides!
It should be noted that many of the techniques used by the candidates were
either inappropriate or incorrectly carried out. It was very common to see
the same result displayed in many different forms, e.g. range, inter-quartile
range and standard deviation all used when only one would be sufficient to
indicate spread in a particular case.
The best work came from candidates who had spent time producing a clear
plan, with clear statements of expectation, full pre-analysis of what they
expected to do and why. Sampling (when it occurred) was well thought
through and justified. The techniques were accurately carried out. Their
results were discussed thoroughly and possible inconsistencies discussed.

MAYFIELD HIGH SCHOOL


This was the most popular choice of candidates. The data was well used and
the statements explored were satisfactory. Too many candidates combined
unrelated hypotheses, e.g. 'males are taller than females' with 'people watch
less television as they get older'; only one of these would have sufficed.
Candidates developed effective strategies to overcome the erroneous data in
the database. The best projects concentrated on male and female
comparisons, splitting by year group for a fuller analysis. More imaginative
investigations split by hair colour and IQ to investigate whether blondes
were more intelligent than other hair colours, although this choice rarely led
to a complex problem.
There were a worryingly large number of able pupils who limited themselves
to a straightforward problem and hence restricted their marks.
USED CARS
It was disappointing to find candidates restricted by this small data set,
especially when some of the car makes have few entries in them. The best
candidates made use of other sources to expand their data set. The very best
were able to investigate the factors affecting the second hand price for
different makes of car.
Sampling from such a small database was not advisable. A significant
number sampled from each car make resulting in, for instance, scatter
diagrams drawn with two points! Sampling should be carried out for a
purpose, not as a matter of course.
NEWSPAPERS
Some of the best work seen used this starting point. Candidates used a
variety of sources, but controlled their data collection well. Articles on the
same topic were compared on the same day across different newspapers,
using measures such as word length or sentence length or, in the very best
projects, readability measures.
A worrying number of centres sent the full newspapers with the project. This
was not necessary.
GOAL
Candidates were capable of collecting appropriate data but seemed unsure of
what they were trying to show. It was clear that candidates did not know
what to expect. A common hypothesis was that 'goals are scored at all times
in a game'. The candidate then drew a cumulative frequency curve and
calculated the inter-quartile range, without thinking what the distribution of
goals scored against time should look like. This inability to think what their
results should look like, and then to use this to set up comparisons, hindered
many students on this task.
7.2.4 CONCLUDING REMARKS
Generally, too few candidates had any idea of the processes involved in the
data-handling cycle outlined in the National Curriculum and the National
Numeracy framework. The assessment criteria published by QCA stress the
need for candidates to set up an investigation, plan the appropriate approach and
analyse the results. It is essential that centres practise this approach with their
candidates before they attempt the project. It was very evident that the majority
of candidates were attempting this type of project for the first time.

7.3 REPORT ON 5507B (EDEXCEL-MARKED COURSEWORK)

7.3.1 This was the inaugural year for this particular specification where Edexcel
marked the coursework completed by the candidates. Approximately 5% of the
total entry opted for this option.

7.3.2 ADMINISTRATION PROBLEMS
Overall the administration by most of the centres was first class. However, there
were some problems experienced in this first year, which will hopefully be
avoided in all future examinations:
Some centres missed the entry deadline date. Hopefully the date will be a little
later next year but the deadline is imperative if the marking is to be
completed on time.
Some centres did not send the correct Candidate Record Form with the
work. It is now necessary for the teacher and the candidate to sign the task
form to authenticate the work submitted.
The work was submitted in a variety of ways to the marker. Centres should
keep the work of each candidate together with a treasury tag and avoid
stabling, plastic folders etc.
There were some errors on the attendance registers, which created extra
work for the markers in chasing up the problems raised.

7.3.3

REPORT ON THE CANDIDATES' WORK


For this option Edexcel offered the candidates a choice of tasks. For the AO1
aspect of the work they could choose from "The Fencing Problem", "Borders" or
"Number Stairs", and for the AO4 aspect "Newspapers" or "Mayfield High
School".

7.3.4

AO1 TASKS
Each of the tasks appeared to be equally popular. Each task is itemised
below, looking at the various areas where candidates gained or lost marks.

THE FENCING PROBLEM


- The candidates must have a clear process to establish the regular case as the one which gives rise to the maximum area. Without this, the higher marks are difficult to achieve.
- Most candidates approached the task through a consideration of the quadrilateral family, recognising the square as having the maximum area. However, this needs to be justified by reference to the symmetry or by using a graphical approach. Many candidates failed to achieve this, as they did not recognise the symmetry of the situation. There were far too many candidates who assumed the square to be the maximum area without sufficient justification.
- The candidates then moved on to consider other families, but again the lack of justification for the regular case left their arguments flawed, as they were working on assumptions rather than justification.
- Many candidates did not achieve the move into the higher marks as they failed to introduce new techniques when analysing the relationship between the number of sides and the increase in the area of the polygons. This requires more than merely repeating the use of trigonometry as used in the earlier work. The use of ICT, with the fields clearly defined, would have sufficed, or the use of the general formula for an n-sided polygon. This aspect was poor in the majority of the work.
- The idea of the limiting case was not really attempted by the vast majority of candidates. Very few looked at the general formula and tried to interpret what would happen as n approached infinity.
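The general formula and the limiting case mentioned above can be checked numerically. Below is a minimal Python sketch, assuming the common version of the task with a fixed perimeter of 1000 m; the function name and the chosen perimeter are illustrative, not taken from the task itself:

```python
import math

def regular_polygon_area(n, perimeter=1000.0):
    """Area of a regular n-sided polygon with a fixed perimeter:
    A = P^2 / (4 * n * tan(pi / n))."""
    return perimeter ** 2 / (4 * n * math.tan(math.pi / n))

# The area grows with the number of sides...
areas = [regular_polygon_area(n) for n in (3, 4, 5, 6, 10, 100)]
assert all(a < b for a, b in zip(areas, areas[1:]))

# ...and approaches the limiting case, a circle of circumference P,
# whose area is P^2 / (4 * pi), approximately 79577.5 square metres.
circle_area = 1000.0 ** 2 / (4 * math.pi)
assert abs(circle_area - regular_polygon_area(100)) < 30
```

Evaluating the function for increasing n is exactly the kind of new technique (ICT or the general formula) that would have justified the regular case and supported a discussion of the limit.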
NUMBER STAIRS
- Generally well done up to mark 5/6. Most candidates could work systematically and obtain the generalisation T = 6n + 44, or equivalent.
- Many candidates moved on to changing the grid size and obtained other generalisations. This gained marks of 5, 5, 4.
- Several candidates moved the task on further and obtained many other linear generalisations of the type T = an + b, or T = 6n + 4g + 4 as a generalisation for a particular grid size. If all variables were defined this gained mark 6 in strand 2. A common mark for this task was 5, 6, 4.
- It remains a problem at mark 6 in strand 2 that candidates fail to define their variables. To gain this mark they MUST define the variables.
- Not many candidates used the structure of the task to obtain the generalisations, and hence limited the marks available in strand 3. The dominant way of obtaining the generalisation was the technique of differencing.
- Many candidates failed to recognise the role of triangular numbers in the task, even though the structure was inherent in the diagram. The failure to look at the structure of the task limited many candidates in the marks that they gained. Too often they relied solely upon the differencing technique.
- Several candidates, at the higher tier, were able to obtain an overall generalisation for any number stair, but few used the structure. Failure to use the structure limited their mark, as mark 8 required them to understand how and why the final results arose through a summation of the triangular numbers in the task.
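The generalisations quoted above are easy to verify by direct computation. A short Python sketch follows, assuming the usual convention in which n is the number in the bottom-left cell of a 3-step stair and the grid is numbered in rows of width g; the function name is illustrative:

```python
def stair_total(n, g=10):
    """Total of a 3-step stair whose bottom-left cell is numbered n,
    on a grid g cells wide numbered in rows: the stair covers
    n, n+1, n+2 (bottom row), n+g, n+g+1 (middle row) and n+2g (top)."""
    return sum(n + k for k in (0, 1, 2, g, g + 1, 2 * g))

# On the standard 10-wide grid the total is T = 6n + 44:
assert all(stair_total(n) == 6 * n + 44 for n in range(1, 70))

# For a general grid width g, T = 6n + 4g + 4:
assert all(stair_total(n, g) == 6 * n + 4 * g + 4
           for n in range(1, 20) for g in range(4, 13))
```

A check of this kind uses the structure of the stair (the six cell offsets) rather than differencing, which is precisely the approach rewarded in strand 3.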


BORDERS
- The use of the differencing technique was evident again in this task, with the result that any real understanding of the task was lost.
- Candidates were not always systematic in their approach and often did not start with the correct first shape, resulting in an expression that did not always work, or that required the substitution of a negative number to obtain the correct results.
- As with Number Stairs, the popular mark of 5, 6, 4 was evident where candidates used diagrams and differencing together with some predicting and testing. The weakness was in strand 3, where the use of structure had to be used to justify the expressions obtained.
- It is important, again, that the candidates define the variables at mark 6 and above. In this task the variable has to be defined as a physical structure of the shape and not the nth term.
- Many candidates, at the higher tier, obtained the correct general formula by differencing but limited themselves to marks of 7 in strands 1 and 2 by not looking at the structure and the way in which the shape developed in 3-D.
7.3.5

AO4 TASKS
The centres were given a choice between the tasks "Newspapers" and "Mayfield High School" for this part of the option.
The most popular task was "Mayfield High School", mainly because the secondary data was provided and the candidates had to choose appropriate samples from it. The candidates had numerous different hypotheses for whichever task they chose. However, schools should give a little more guidance in terms of:
- The number of hypotheses given. Many candidates had three or more, whereas one would have been appropriate.
- The suitability of the hypothesis. Many candidates had hypotheses that limited the possible outcomes and hence the final marks. It is difficult to see how a hypothesis along the lines of "A person's IQ depends upon their shoe size" is really going to allow suitable skills to be used.
The essence of a good data-handling project is based upon a well thought out
plan, clearly setting out the aims together with the sample size to be used and the
techniques to be used. There should also be some thought given as to the
possible outcome. The nature of the task is also important, in that it has to be
appropriate for the candidates to access certain marks.
At the foundation tier the task can be very simple in nature, merely looking at
the relationship between two variables. This did not really create any problems
at this tier where candidates often set up such a task.
At the Intermediate tier the task had to be substantial in nature where the
candidates were comparing several features across different variables. Again,
many candidates were able to do this by looking at boys/girls, heights/weights.
In the newspaper task they compared newspapers across areas such as number of
words per sentence, etc.
At the Higher tier they had to set up a more complex task with some creative
thinking involved. This aspect proved a little more difficult for the candidates


and often they only set up a substantial task, which did not warrant a mark of
7 in this strand.
7.3.5.1 STRAND 1
An important part of the planning aspect in strand 1 was the size/make up of the
sample. Collecting a suitable sample of an appropriate size was often well done
by the candidates at all tiers. The main error was where the candidates decided
to use stratified sampling. Whilst this method certainly has its merits, the
candidates often used the technique inappropriately or unnecessarily, and this gave
rise to sample sizes that were far too small. Often candidates, in the Mayfield
task, would use this technique and then use the results to compare each year
group. This gave sample sizes of 4 or 5 in Year 11, which were too small for any
techniques to be used. This did not deter some candidates, and we often saw
lines of best fit or cumulative frequency curves drawn through 3 or 4 points.
It was pleasing to note that many candidates did try to plan their work, with an
appropriate sample, and set out some of the techniques that they were going to
use. There were, however, several candidates who had no plan at all and
simply presented a variety of statistical techniques.
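The weakness described above is easy to demonstrate with proportional allocation. Below is a Python sketch; the year-group sizes and the total sample of 30 are illustrative figures, not taken from the data set:

```python
def stratified_sizes(strata, sample_size):
    """Allocate a total sample across strata in proportion to stratum size."""
    population = sum(strata.values())
    return {name: round(sample_size * count / population)
            for name, count in strata.items()}

# Illustrative year-group sizes for a school of 1183 pupils:
years = {"Year 7": 282, "Year 8": 270, "Year 9": 261,
         "Year 10": 200, "Year 11": 170}
sizes = stratified_sizes(years, 30)

# A total sample of 30 leaves only 4 pupils in Year 11, which is far
# too few for a cumulative frequency curve or a line of best fit.
assert sizes["Year 11"] == 4
```

Stratification is appropriate when the strata themselves matter to the hypothesis; comparing year groups requires an adequate sample within each stratum, not just a proportionate one.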
7.3.5.2 STRAND 2
The second strand requires the candidates to use appropriate techniques at a
certain level to fulfil their plan. This was often a weak point: candidates,
even with a good plan in strand 1, used simple techniques or produced an
indiscriminate number of techniques without any thought, or reasoning, as to
their suitability.
The main areas of concern in this strand were:
- Techniques just done and not used within the task. Skills have to be used to gain credit. Often candidates produced diagrams and calculations that merely covered all aspects of the specification, without any thought as to their appropriateness in the work.
- Candidates only using limited skills, well below the level at which they should have been working. It was common to see a well thought out plan at mark 5 and above, but then the candidate using skills at a grade well below this mark.
- Candidates producing pages of repetitive data and calculations, often not linked to their hypothesis in any way.
- At mark 5 and above candidates would often produce several diagrams relating to the work, but they then found some difficulty in seeing the results as a global picture. It would have helped if they had produced a table of results at the end, bringing all their results together and helping them to make comparisons more quickly and easily.
- At mark 7 and above the candidates are expected to select the best skill to demonstrate their work. This can be modified as the task progresses if required. Far too often, at this level, the candidates merely completed every data-handling skill they knew without any thought as to appropriateness.
- There were many occasions when graphs were used to display the results. However, these need to have a title and scales marked on the axes. Without these it is difficult for the examiner to piece together the work and hence give it the appropriate credit.


- When using box plots in comparisons, the candidates need to display the diagrams together on a suitably scaled diagram so that comparisons are easy to make. It was rare to see box plots relating to different categories drawn on the same scaled diagram.
- Candidates must be aware that correlation and lines of best fit are not possible for non-numerical data. Often a bar chart relating, for example, shoe size to frequency would be drawn and then a line of best fit attempted, together with a comment that there was a good/bad correlation.
- Where candidates are applying techniques beyond the National Curriculum, they must be applied correctly and used within the context of the work. Several candidates performed calculations relating to such techniques but often never used them within the work.
It is worth noting that many candidates were able to use appropriate
techniques within their work at the correct level and hence they gained
marks accordingly.
7.3.5.3 STRAND 3
The third strand was poorly done. The interpretation of the results, linked with
some analysis, is certainly an area that will need concentrating upon in the
future.
Some of the major concerns here were:
- A failure by the candidates to use their results and diagrams to come to a conclusion. Often the candidates would base their comments upon what they thought should happen, without any reference to their work.
- Comments were far too general and not specific enough, given the results obtained.
- The candidates must, at mark 5 and above, relate their results back to the original hypothesis. The candidates at these levels did not always do this.
- There should be some evaluation of the work done and the results obtained at mark 5 and above. Any evaluation was rare.
- Candidates working at mark 7 and above are expected to look at the significance of their results, recognising any limitations in the work and being prepared to do something about it. These aspects were very poor at this level.
- Techniques are used for a purpose, and the interpretation should echo this. For example, where box plots were used they should be for a comparison between two or more variables, linking the median AND the spread of the data together. It is not sufficient to say that one box is bigger than the other, therefore...
- Where higher-level skills are used, the interpretation has to link in with the work and not just be a mere quote of a numerical result. It was often the case, when standard deviation had been used, for the candidate to say "the SD for A is x and the SD for B is y", with no interpretation or meaning given within the context of the work.
7.3.6

CONCLUDING REMARKS
It was obvious that the candidates had the ability to carry out the skills at AO4,
but they generally lacked an understanding of the appropriateness of such skills
and of what the results meant in terms of their initial aims.


It was pleasing to note that the candidates choosing this option achieved marks
similar to those of candidates opting for the centre-moderated option.
In conclusion, may I thank all of the centres for their hard work throughout the
year in getting the work done by the candidates, and for the efficiency with
which the work was forwarded to the relevant examiners.


STATISTICS

8.1

1387 OVERALL CUMULATIVE % OF CANDIDATES AT EACH GRADE
(314,016 CANDIDATES)

Grade                       A*      A      B      C      D      E      F      G      U
Cumulative % of
candidates at grade        3.6   13.7   33.0   53.1   71.1   86.2   93.9   96.6   100

8.2

MARK RANGES AND AWARD OF GRADES

Unit/Component            5501   5502   5503   5504   5505   5506   5507a   5507b
Maximum Mark (Raw)         100    100    100    100    100    100     48      48
Mean Mark (Adjusted)      69.1   69.9   77.5     77   79.9     82   26.9    26.2
Standard Deviation        15.5     15   11.8   11.9    8.6     10    9.3     8.3
% Contribution to Award     40     40     40     40     40     40     20      20

8.3

Grade Boundaries

The table below gives the lowest (adjusted) raw marks for the award of the stated
uniform marks (UMS).

GCSE Mathematics 1387, Summer 2003

Paper     A*     A      B      C      D      E      F      G
5501       -     -      -      -     62     46     31     16
5502       -     -      -      -     57     41     26     11
5503       -     -     62     45     30     15      -      -
5504       -     -     58     38     23      8      -      -
5505      73    55     37     20      -      -      -      -
5506      60    45     30     15      -      -      -      -
5507      43    37     31     26     22     18     14     10

Marks for papers 5501-5506 are each out of 100; marks for coursework (5507) are out of 48.


GCSE Modular Mathematics 1388, Summer 2003

Paper                               A*     A      B      C      D      E      F      G
Foundation Stage 3/1 (5514)          -     -      -      -     35     26     17      -
Foundation Stage 3/2 (5515)          -     -      -      -     35     26     17      -
Intermediate Stage 3/1 (5516)        -     -     34     25     16      -      -      -
Intermediate Stage 3/2 (5517)        -     -     34     25     15      -      -      -
Higher Stage 3/1 (5518)             35    26     17      -      -      -      -      -
Higher Stage 3/2 (5519)             37    27     17      -      -      -      -      -
Coursework (5507)                   43    37     31     26     22     18     14     10

(Marks for papers 5514-5519 are each out of 62; marks for coursework (5507) are out of 48)

GCSE Modular Mathematics 1388, March 2003

Foundation Stage 1 (Paper 5508)
Intermediate Stage 1 (Paper 5509): 27
Higher Stage 1 (Paper 5510): 31, 24, 16
Foundation Stage 2 (Paper 5511)
Intermediate Stage 2 (Paper 5512): 29
Higher Stage 2 (Paper 5513): 30, 22, 14
Further boundary marks as printed (grade columns not recoverable): 24, 17, 10, 13, 27, 20, 13, 13, 21, 13, 9, 26, 19, 22, 13

(Marks for papers 5508-5513 are each out of 38)

GCSE Modular Mathematics 1388, January 2003

Foundation Stage 1 (Paper 5508)
Intermediate Stage 1 (Paper 5509): 29
Higher Stage 1 (Paper 5510): 33, 26, 18
Foundation Stage 2 (Paper 5511)
Intermediate Stage 2 (Paper 5512): 29
Higher Stage 2 (Paper 5513): 33, 25, 17
Further boundary marks as printed (grade columns not recoverable): 22, 14, 11, 27, 20, 21, 12, 10

(Marks for papers 5508-5513 are each out of 38)


GCSE Modular Mathematics 1388, March 2002

Foundation Stage 1 (Paper 5508)
Intermediate Stage 1 (Paper 5509): 29
Higher Stage 1 (Paper 5510): 31, 24, 17
Further boundary marks as printed (grade columns not recoverable): 28, 21, 14, 28, 21, 15, 22, 14, 10

(Marks for papers 5508-5510 are each out of 38)

GCSE Modular Mathematics 1388, January 2002

Foundation Stage 1 (Paper 5508)
Intermediate Stage 1 (Paper 5509): 28
Higher Stage 1 (Paper 5510): 30, 23, 16
Further boundary marks as printed (grade columns not recoverable): 21, 13, 10

(Marks for papers 5508-5510 are each out of 38)


The grade boundaries suggested by the awarding committee for GCSE Mathematics were:
Unit    5501   5502   5503   5504   5505   5506   5514   5515   5516   5517   5518   5519
A*        -      -      -      -     73     60      -      -      -      -     35     37
A         -      -      -      -     55     45      -      -      -      -     26     27
B         -      -     62     58     37     30      -      -     34     34     17     17
C         -      -     45     38     20     15      -      -     25     25      9      8
D        62     57     30     23      -      -     35     35     16     15      -      -
E        46     41     15      8      -      -     26     26      7      5      -      -
F        31     26      -      -      -      -     17     17      -      -      -      -
G        16     11      -      -      -      -      8      8      -      -      -      -

These values could not be converted directly to uniform marks since they gave very dramatic
changes in the rates of conversion from raw mark to uniform mark just above and just below the
highest and lowest grade boundaries within the tier. The adjusted raw marks which were used to
determine uniform marks are shown below. These adjusted raw marks more closely resemble
the percentages of marks required for the award of notional grades on the uniform mark scale.
These are the boundaries that appear on the component mark listings.
Unit    5501   5502   5503   5504   5505   5506   5514   5515   5516   5517   5518   5519
A*        -      -      -      -     90     90      -      -      -      -     56     56
A         -      -      -      -     80     80      -      -      -      -     50     50
B         -      -     88     88     70     70      -      -     55     55     43     43
C         -      -     75     75     60     60      -      -     47     47     37     37
D        84     84     63     63      -      -     52     52     39     39      -      -
E        67     67     51     51      -      -     41     41     31     31      -      -
F        50     50      -      -      -      -     31     31      -      -      -      -
G        33     33      -      -      -      -     21     21      -      -      -      -

The reason for making these statistical adjustments is given below.

1387: Mathematics A
1388: Mathematics B

This is the first year in which the above specifications have been awarded using a uniform mark
scale system. On the uniform mark scale, A* is given 90% of the available uniform mark, A
80%, B 70%, C 60%, D 50%, E 40%, F 30% and G 20%.
Normally, the raw component mark boundaries are translated directly on to the uniform mark
scale. However, because these specifications are tiered, there are complications because of the
limited range of grades that is available for the tier.
When the actual raw mark boundaries were translated on to the uniform mark scale, there were
sudden, sharp changes in the rate of exchange of raw mark to uniform mark above and below
the boundary mark for the highest and the lowest targeted grade. This meant that candidates
were not receiving sufficient credit for achievement above the minimum required for the award
of the highest targeted grade. Also, candidates were being severely penalised for failing to
achieve the minimum required mark for the award of the lowest targeted grade.
To counteract these effects, it was decided to adjust the raw marks to give a more equitable rate
of exchange from raw mark to uniform mark across the whole mark range. In other words, the
raw marks were adjusted to fall more closely in line with the percentages of the uniform mark
range required for the award of the grade.
This simple statistical procedure has resulted in a more acceptable rate of exchange across the
whole of the mark range. There are no sudden and severe changes in the rate of exchange.
The rank order of candidates has not been interfered with. The vast majority of candidates will
notice no difference in the grade they have achieved. However, those few candidates just above
the maximum grade threshold or just below the minimum grade threshold are now being treated
fairly.

49
UG014156

A worked example is shown.


1387 Mathematics A, Component 5506/06
This is the higher tier Paper 6. The total raw mark for the paper is 100.
Raw mark grade boundaries:

Grade    Raw Mark    Adjusted Raw Mark    Uniform Mark
Max        100             100                240
A*          60              90                216
A           45              80                192
B           30              70                168
C           15              60                144

This shows that the maximum raw mark of 100 translates to a uniform mark of 240.

Between a raw mark of 60 (grade A*) and 100 (maximum), two raw marks are needed to gain
approximately one uniform mark.
Between a raw mark of 45 (grade A) and 60 (grade A*), one raw mark is needed to gain
approximately one and a half uniform marks.
A similar abrupt change in the conversion occurs below grade C. It is these sudden changes in
the rate of exchange that have been removed by the statistical adjustment of the raw mark.
If the adjusted raw mark is used, 10 adjusted raw marks always equate to 24 uniform marks.
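The conversion described in this worked example is piecewise linear between boundary marks. A short Python sketch of the adjusted-raw-mark conversion, using the 5506 figures above (the function name is illustrative):

```python
def raw_to_uniform(raw, boundaries):
    """Convert an adjusted raw mark to a uniform mark by linear
    interpolation between ascending (raw, uniform) boundary pairs."""
    for (r0, u0), (r1, u1) in zip(boundaries, boundaries[1:]):
        if r0 <= raw <= r1:
            return u0 + (raw - r0) * (u1 - u0) / (r1 - r0)
    raise ValueError("raw mark outside the boundary table")

# Adjusted raw mark boundaries for 5506, from grade C up to the maximum:
table = [(60, 144), (70, 168), (80, 192), (90, 216), (100, 240)]

# With the adjusted marks, 10 raw marks always equate to 24 uniform marks:
assert raw_to_uniform(90, table) == 216
assert raw_to_uniform(75, table) == 180.0
```

Because the adjusted boundaries are evenly spaced, the rate of exchange is constant across the whole mark range, which is exactly the effect the statistical adjustment was designed to achieve.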

Dr Jim Sinclair
General Manager, Standards & Awards


Further copies of this publication are available from


Edexcel Publications, Adamsway, Mansfield, Notts, NG18 4LN
Telephone 01623 467467
Fax 01623 450481
Order Code (insert code) June 2003
For more information on Edexcel qualifications please contact our
Customer Response Centre on 0870 240 9800
or email: enquiries@edexcel.org.uk
or visit our website: www.edexcel.org.uk
London Qualifications Limited. Registered in England and Wales no.4496750
Registered Office: Stewart House, 32 Russell Square, London WC1B 5DN
