August 2017

Elevation (Hotseat: STEP I 2015)

Background

STEP (short for Sixth Term Examination Paper) is a somewhat difficult Maths exam that is used by several UK universities (including Cambridge, Warwick and Imperial) as part of a conditional offer for courses in Mathematics and Computer Science. My conditional offer for Imperial included a 2 in any STEP paper of my choice, though this was paired with a relatively awkward 35 points for the IB – perhaps because the rest of my portfolio was pretty strong.

There are three papers – I, II and III; III includes A Level Further Mathematics content, while I and II remain within the A Level Mathematics syllabus. I is also somewhat easier than II; that said, I think both papers exist because Cambridge does sometimes want students who didn’t take Further Mathematics to get a pair of grades in these exams. Nonetheless, STEP I itself is certainly no pushover. Students are graded on a scale of S, 1, 2, 3, U; the 2015 STEP I paper had 73.1 percent of students scoring at least ‘3’ (the lowest pass grade), and just 42.6 percent scoring at least ‘2’ (the lowest grade many universities would consider). This may be compared with A Level mathematics in 2015, where the analogous metrics of A*-E and A*-C respectively are 98.7 and 80.8 percent; and this is even before we factor in selection bias.

Each paper consists of 13 questions, but candidates are only required to pick six of them; their highest-scoring six questions will be used to determine their final score. Questions have equal weight (and each is marked with integers out of 20, which seems suspiciously similar to how I’ve seen this done at many universities!). Eight of the 13 questions are classified as “pure mathematics” and include tasks testing concepts like calculus, trigonometry, algebraic manipulation, series and even number theory. Three are classified as “mechanics”, typically requiring calculations on Newtonian mechanics, and two as “probability and statistics”. I almost always do 4/0/2 or 3/1/2. Note that it is possible to attempt seven or even more questions as a form of “insurance”, though given the strict time constraints this is likely to be difficult.

Performance

I had a fairly decent run, picking up 114 out of 120 points. The marks I lost were mainly down to minor slips, or cases where an important statement was not explicitly asserted; a good chunk went on question 7, where I did not clearly handle a case that was expected to be shown to bear no fruit (I thought it was rather obvious that it was not needed, and dismissed it in one line).

The last row indicates the order in which I attempted the problems; it seems this was generally consistent with how long I actually took on them (problems 2 and 13 were fast; 1 and 8 were somewhat in-between, and I struggled for a bit with 12 and messed up 7 while using up a fair bit of the time). Note that the “break-even” time if one wants to answer all questions would be 30 minutes per question.

Selected Problems in Depth

Problem 8: Series Division

First, prove that 1 + 2 + \ldots + n = \frac{n(n+1)}{2}, and that (N-m)^k + m^k is divisible by N for positive odd integers k. Then, for such k, consider

S = 1^k + 2^k + 3^k + \ldots + n^k

Show that if n is a positive odd integer, then S is divisible by n, and if n is even then S is divisible by n/2. Show further that S is divisible by 1 + 2 + 3 + \ldots + n.

The two lead-ins were simple. Induction does work, but in both cases there were much better methods available (write the series twice and pair terms up; and pair terms in the binomial expansion, respectively). Later parts involved pairing S with a zero term, and the general theme of “pairing terms” persisted throughout the question. I think the toughest part was, knowing that one had to show divisibility by \frac{n(n+1)}{2} at the very end, figuring out that it was safe to split this into two factors and show divisibility by each separately. This works because the highest common factor of n and n + 1 is 1. My number theory was a bit rusty, so I wasn’t sure if that was correct, and proved it during the paper.
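The divisibility claims are easy to sanity-check numerically. A minimal sketch (assuming, as the claims require, that k is a positive odd integer; the function name is mine):

```python
def power_sum(n, k):
    # S = 1^k + 2^k + ... + n^k
    return sum(i ** k for i in range(1, n + 1))

# Spot-check the claims for small odd exponents k; they rely on k being odd,
# since (N - m)^k + m^k ≡ (-m)^k + m^k ≡ 0 (mod N) needs k odd.
for k in (1, 3, 5, 7):
    for n in range(1, 50):
        s = power_sum(n, k)
        if n % 2 == 1:
            assert s % n == 0           # odd n: n divides S
        else:
            assert s % (n // 2) == 0    # even n: n/2 divides S
        # final part: the triangular number n(n+1)/2 divides S, shown by
        # splitting it into coprime factors (hcf of n and n + 1 is 1)
        assert s % (n * (n + 1) // 2) == 0
```

Of course, a finite check is no substitute for the proof the question demands, but it is a quick guard against misremembering which parity of k the result needs.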

Problem 12: On Fish Distributions

The number X of casualties arriving at a hospital each day follows a Poisson distribution with mean 8. Casualties require surgery with probability 1/4. The number of casualties arriving on each day is independent of the number arriving on any other day, as are the casualties’ requirements for surgery. (After some initial work) Prove that the number requiring surgery each day also follows a Poisson distribution and state its mean. Given that in a particular randomly chosen week 12 casualties require surgery on Monday and Tuesday, find the probability that 8 casualties require surgery on Monday (as a fraction, in its lowest terms).

This one really wasn’t too bad, though it involved a lot of algebraic manipulation and it seems I took quite a long time on it when doing the paper. Essentially, the independence condition should hint that, conditional on the number of casualties X, the number S needing surgery is binomially distributed. X itself is a random variable, but that’s fine; the law of total probability gives us

P(S = s) = \displaystyle \sum_{t = s}^{\infty} P(S = s | X = t) P (X = t)

and a suitable substitution yields this:

P(S = s) = \displaystyle \sum_{t = s}^{\infty} \left( \frac{t!}{s! (t - s)!} \times \left( \frac{1}{4} \right)^s \times \left( \frac{3}{4} \right)^{t-s} \times \frac{e^{-8} 8^t}{t!}\right)

After some fairly involved algebraic manipulation, one can indeed recover a Poisson form for the pmf of S. Using this, the last part is actually relatively simple; we want P(S_1 = 8 | S_1 + S_2 = 12), relying on the fact that a sum of independent Poisson variables is itself Poisson (the means of S_1 and S_2 are 2 each, so S_1 + S_2 is Poisson with mean 4).
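Both the “thinning” result and the final conditional probability lend themselves to a numeric sanity check; a quick sketch (the helper names are my own, and the infinite sum is truncated where the tail is negligible):

```python
import math

def poisson_pmf(lam, t):
    return math.exp(-lam) * lam ** t / math.factorial(t)

def binomial_pmf(t, s, p):
    return math.comb(t, s) * p ** s * (1 - p) ** (t - s)

def p_surgery(s, lam=8.0, p=0.25, t_max=100):
    # P(S = s) via the total-probability sum, truncating the vanishing tail
    return sum(binomial_pmf(t, s, p) * poisson_pmf(lam, t)
               for t in range(s, t_max))

# the sum collapses to a Poisson pmf with mean λp = 8 × 1/4 = 2
for s in range(12):
    assert abs(p_surgery(s) - poisson_pmf(2.0, s)) < 1e-12

# last part: S1, S2 iid Poisson(2), so S1 + S2 ~ Poisson(4), and
# P(S1 = 8 | S1 + S2 = 12) = P(S1 = 8) P(S2 = 4) / P(S1 + S2 = 12)
answer = poisson_pmf(2.0, 8) * poisson_pmf(2.0, 4) / poisson_pmf(4.0, 12)
assert abs(answer - 495 / 4096) < 1e-12   # i.e. C(12, 8) / 2^12
```

The last check reflects a standard fact: conditional on the total, each of the 12 surgeries falls on Monday or Tuesday with equal probability, giving a Binomial(12, 1/2) distribution, hence the fraction 495/4096.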

Problem 13: Probability and State Machines

Already covered in a previous post. I dispatched this one very quickly, though I was already familiar with the Markov process model that I used here.

Meta-Analysis

The main data source we have available here is an Examiner’s Report that discusses to some extent what happened (though we don’t have full details). The grade boundary for an S is 96, so a 114 is comfortably in that range; 3.5 percent of candidates scored that. Question-level data isn’t published beyond the comments in the Examiner’s Report.

Focusing on the questions I attempted: my opener, Q1, a test of curve sketching, was also the highest-scoring question on the paper, with students scoring an average mark of over 13.5 (with the caveat that some students were worried about curve sketching and thus avoided it). This was the second “easiest” question as far as I was concerned.

The other pure maths questions I attempted (2, 7 and 8) were also popular, and attempted with varying degrees of success (questions 2 and 7 were the second and third highest scoring questions). Probability and Statistics for some reason seems to always cause issues for students attempting these papers, with mean scores in the 2-3 range, though having done Olympiads in the past (and specialising in combinatorics and probability) I understandably found these easier.

Generally, STEP I for me tends to be fairly tough but certainly comfortable, II is solidly difficult, and doing III often makes me feel like a dog behind a chemistry set (“I have no idea what I’m doing”), especially since I didn’t take Further Maths in high school.

On Teaching and Teaching Assistants

I read an article in the Guardian a few days ago on the Finnish education system which is often cited in the western world as one example of a successful education system (in spite of some rather angry bold negative deltas in PISA 2015 – however reliable said tests are as a form of assessment). I’ve been through the table-topping Singapore education system, and while it certainly is rigorous (especially in mathematics – I recently looked at an A level Thinking Skills problem-solving question that, chillingly, wouldn’t be too out of place on the Mathematics PSLE in Singapore) there are valid concerns regarding excessive stress levels, teaching not being perceived as a high-profile job and a lack of time for students to explore things on their own. I would certainly understand a desire not to bring some of these elements into an education system.

The headline message being trust your teachers is something I can appreciate to some extent, even though I was never explicitly a teacher, at least in terms of profession. I had the privilege of being an undergraduate teaching assistant during my third and fourth years at Imperial, and I like to think that the professors and lecturers who were supervising me placed a lot of trust in me; they certainly gave me a fair bit of latitude in the content I could cover (perhaps not the “unfettered flexibility” mentioned in the article, but I was supposed to teach rather specific modules – Logic, Discrete Mathematics, Reasoning about Programs, and Data Structures and Algorithms).

I was given high-ability groups in both years, and this resulted in advanced tutorials that introduced students to slightly more advanced topics they would see soon (concurrency, temporal logics, optimisation algorithms), as well as stretching their abilities in applying the concepts and knowledge learnt (okay, you know what a loop invariant is – how can we extend it to nested loops, or to functions containing loops that could be recursive?). I believe these were appreciated, and I did collect feedback on them (though, of course, it’s difficult to be sure how much negative feedback was swept under the rug with these kinds of questionnaires).

Unfortunately, I did also indulge in some “teaching to the test”, explaining strategies for tackling the various exams that were certainly not a part of the syllabus. Thankfully, Imperial’s examinations don’t have too much exploitability here, as far as I can recall; I think much effort was spent identifying common pitfalls and explaining how to avoid them (e.g. unnecessary quantifier movement in Logic, and telling students to clearly demonstrate their thought processes even if they couldn’t answer the question). Some of this was certainly backed by popular demand, and it did pay off in that my students did win multiple prizes. I certainly wasn’t in a position to change the assessment system at Imperial!

I did encounter minimal bureaucracy, mainly around marking the attendance of students (some of this is part of the Home Office’s “expected contacts” requirement for non-EU students). I can’t remember if a DBS check was necessary, though I already had one from year 1, in any case. Thankfully, there was nothing along the scale of what was being described in the article:

Contrast this with the UK, where schools have data managers, where some teachers are told which colour pens to use for marking, and where books are periodically checked to ensure that learning intentions are neatly stuck in place.

Not necessarily sure that the existence of data managers is a bad thing (after all, I do work for a company that helps others make data-driven decisions!) – but that said, drawing conclusions from data that doesn’t truly reflect students’ abilities (if that is what is going on) is very unlikely to be effective (“garbage in, garbage out” springs to mind).

I did do a stint as a volunteer student helper with a few schools near Imperial as part of the Pimlico Connection programme. Although I didn’t witness said book checks, I certainly did notice teachers explicitly referencing curricular objectives and the level system (this was before the September 2014 changes). Obviously, I’m not sure how representative this is of schools in general, though. I think the only time I recall encountering this in Singapore was when preparing for the Computer Science IB HL exams.

The article concludes with an expansion of this notion of trusting individual teachers to societal trends towards trust in general, though not much evidence or data is presented on this. I guess some connections can be drawn to a point raised earlier on relative economic homogeneity. Looking at the issue of trust in the UK specifically, there is interestingly a series of studies that attempts to collect data on this. Slide 19 on the linked page suggests that 55% of British people trust the British people to “do the right thing”, whatever that entails.

In terms of trusting individual teachers, I’d probably be comfortable with that only if there was a good process for selecting teachers. That’s a difficult problem – simply going for the “best and brightest” in terms of students’ academic results certainly isn’t enough, as the Finnish process acknowledges. We did do that at Imperial, though in some sense the stakes are lower there as there is a supervisor monitoring the process and it is, still, one of the “least worst” indicators one can use. However, I think once one acquires confidence and skill such that one will not struggle with the concepts one is teaching, and one can answer students’ questions comfortably (within reason), there are many other more important traits. My knowledge of tricky automata theory or familiarity with theoretical complexity classes, or for that matter ability to knock in a 96% in the Logic and Reasoning about Programs exams (as opposed to say an 85 or 90) were generally not directly relevant to doing my job as a teaching assistant!

Running the Gauntlet (Hotseat: OCR Mathematics C1-C4)

Background

The GCE A Level is a school-leaving qualification that students in the UK take at the end of high school. Students usually take exams for 3-4 subjects. The exams are graded on a scale from A* to U (though not every letter in between is used); typically an A* is awarded to roughly the top 8-9 percent of students.

This is a rather different type of challenge – previous installments of this series have featured especially difficult exams (or rather, competitions; only the MAT is realistically speaking an exam there). I’ve usually struggled to finish in the time limit (I didn’t finish the AIME and barely finished the MAT; I had some spare time on the BMO R1, but still not that much). I could of course do this in the same way as the other tests, though the score distribution would likely be close to the ceiling, with random variation simply down to careless mistakes.

Interestingly, the UK has multiple exam boards, so for this discussion we’ll be looking at OCR, which here stands not for Optical Character Recognition, but for Oxford, Cambridge and RSA (the Royal Society of Arts). The A level Maths curriculum is split into five strands: core (C), further pure (FP), mechanics (M), statistics (S) and decision (D); each strand features between two and four modules, which generally are part of a linear dependency chain – apart from FP, where FP3 is not dependent on FP2 (though it still is dependent on FP1). For the Mathematics A level, students need to take four modules from the core strand, and two additional “applied” modules; Further Mathematics involves two of the FP strand modules plus any four additional modules (but these cannot overlap with the mathematics A level ones). Thus, a student pursuing a Further Mathematics A level will take 12 distinct modules, including C1 – C4 and at least two FP modules, for example C1-4, FP{1,3}, S1-4, D1 and M1.

(In high school I took the IB diploma programme instead, which did have Further Mathematics (FM), though I didn’t take it as I picked Computer Science instead. That was before Computer Science became a group 4 subject; even then, I think I would still have wanted to do Physics, and thus would not have taken FM in any case.)

Setup

I attempted the June 2015 series of exams (C1 – C4). Each of these papers is set for 90 minutes, and is a problem set that features between about seven and ten multi-part questions. The overall maximum mark is 72 (a bit of a strange number; perhaps to give 1 minute and 15 seconds per mark?). To make things a little more interesting, we define a performance metric

P = \dfrac{M^2}{T}

where M is the proportion of marks scored, and T is the proportion of time used. For example, scoring 100 percent in half of the time allowed results in a metric of 2; scoring 50 percent of the marks using up all of the time yields a metric of 0.25. The penalty is deliberately harsher than proportional, to limit the benefit of gaming the system (i.e. finding the easiest marks and only attempting those questions).
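As a tiny sketch, the metric can be computed directly from raw marks and minutes (the function name is mine):

```python
def performance(marks_scored, marks_total, minutes_used, minutes_allowed):
    # P = M^2 / T, where M and T are the proportions of marks and time
    M = marks_scored / marks_total
    T = minutes_used / minutes_allowed
    return M * M / T

# the two worked examples from the text (72-mark, 90-minute paper)
assert performance(72, 72, 45, 90) == 2.0    # full marks in half the time
assert performance(36, 72, 90, 90) == 0.25   # half the marks, all of the time
```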

Most of the errors were the result of arithmetical or algebraic slips (there weren’t any questions I didn’t know how to answer, though I did make a rather egregious error on C3, and stumbled a little on C4 by trying a complex substitution for an integral rather than preprocessing the term). There are a few things I noted:

  • The scores for the AS-level modules (C1, C2) were considerably higher than that for the A-level modules (C3, C4). This is fairly expected, given that students only taking AS Mathematics would still need to do C1 and C2. Furthermore, from reading examiners’ reports the expectation in these exams is that students should have enough time to answer all of the questions.
  • The score for C1 was much higher than that for C2. I think there are two reasons for this – firstly, C1 is meant to be an introductory module; and secondly, no calculators are allowed in C1, meaning that examiners have to allocate time for students to perform calculations (which as far as I’m aware is something I’m relatively quick at).
  • The score for C4 was actually slightly higher than that for C3 (contrary to a possibly expected consistent decrease). While there is meant to be a linear progression, I certainly found the C3 paper notably tougher than that for C4 as well. That said, this may come from a perspective of someone aiming to secure all marks as opposed to the quantity required for a pass or an A.

We also see the penalty effect of the metric kicking in; it might be down to mental anchoring, but observe that perfect performances on C1 and C2 in the same amount of time would have yielded performance numbers just above 5 and 3, respectively.

Selected Problems in Depth

C3, Question 9

Given f(\theta) = \sin(\theta + 30^{\circ}) + \cos(\theta + 60^{\circ}), show that f(\theta) = \cos(\theta) and that f(4\theta) + 4f(2\theta) \equiv 8\cos^4\theta - 3. Then determine the greatest and least values of \frac{1}{f(4\theta) + 4f(2\theta) + 7} as \theta varies, and solve the equation, for 0^{\circ} \leq \alpha \leq 60^{\circ},

\sin(12\alpha + 30^{\circ}) + \cos(12\alpha + 60^{\circ}) + 4\sin(6\alpha + 30^{\circ}) + 4\cos(6\alpha + 60^{\circ}) = 1

This might have appeared a little intimidating, though it isn’t too bad if worked through carefully. The first expression is derived fairly quickly by using the addition formulas for sine and cosine. I then wasted a bit of time on the second part by trying to be cheeky and applying De Moivre’s theorem (so, for instance, \cos(4\theta) is the real part of e^{i(4\theta)} which is the binomial expansion of (\cos \theta + i \sin \theta)^4), subsequently using \sin^2 x = 1 - \cos^2 x where needed. This of course worked, but yielded a rather unpleasant algebra bash that could have been avoided by simply applying the double angle formulas multiple times.

The “range” part involved substitution and then reasoning on the range of \cos^4\theta (to be between 0 and 1). The final equation looked like a mouthful; using the result we had at the beginning yields

f (12 \alpha) + 4 f (6 \alpha) = 1

and then using the substitution \beta = 3 \alpha, we can reduce the equation to 8 \cos^4 \beta - 3 = 1. We then get \cos \beta = \pm \left( \frac{1}{2} \right)^{1/4}, and we can finish by dividing the values of \beta by 3 to recover \alpha.
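Both identities and the final roots can be checked numerically; a minimal sketch (working in radians, so 30° becomes π/6 and so on):

```python
import math

def f(theta):
    # f(θ) = sin(θ + 30°) + cos(θ + 60°), with angles in radians
    return math.sin(theta + math.pi / 6) + math.cos(theta + math.pi / 3)

# check f(θ) = cos θ and f(4θ) + 4 f(2θ) = 8 cos^4 θ − 3 on a grid
for i in range(1000):
    t = -10.0 + 0.02 * i
    assert abs(f(t) - math.cos(t)) < 1e-9
    assert abs(f(4 * t) + 4 * f(2 * t) - (8 * math.cos(t) ** 4 - 3)) < 1e-9

# final equation: f(12α) + 4 f(6α) = 1 reduces to 8 cos^4(3α) − 3 = 1,
# i.e. cos(3α) = ±(1/2)^(1/4); both roots give α within [0°, 60°]
for c in (0.5 ** 0.25, -(0.5 ** 0.25)):
    alpha = math.acos(c) / 3
    assert abs(f(12 * alpha) + 4 * f(6 * alpha) - 1) < 1e-9
    assert 0 <= math.degrees(alpha) <= 60
```

This kind of spot check would not earn marks, but it is a fast way to catch a sign slip in the double-angle expansions before committing to the algebra.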

C4, Question 6

Using the quotient rule, show that the derivative of \frac{\cos x}{\sin x} is \frac{-1}{\sin^2x}. Then show that

\displaystyle \int_{\frac{1}{6}\pi}^{\frac{1}{4}\pi} \dfrac{\sqrt{1 + \cos 2x}}{\sin x \sin 2x}\,dx = \dfrac{1}{2}\left(\sqrt{6} - \sqrt{2}\right)

The first part is easy (you’re given the answer, and even told how to do it). The second was more interesting; my first instinct was to attempt to substitute t = \sqrt{1 + \cos 2x} which removed the square root, but it was extremely difficult to rewrite the resulting expression in terms of t as opposed to x. I then noticed that there was a nice way to eliminate the square root with \cos 2x = 2 \cos^2 x - 1. The integrand then simplifies down into a constant multiple of \frac{-1}{\sin^2x}; using the first result and simplifying the resultant expression should yield the result. That said, I wasted a fair bit of time here with the initial substitution attempt.
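The closed form can be sanity-checked with a basic numeric quadrature; a sketch using composite Simpson's rule (my own quick implementation, not anything from the paper):

```python
import math

def integrand(x):
    # √(1 + cos 2x) / (sin x · sin 2x)
    return math.sqrt(1 + math.cos(2 * x)) / (math.sin(x) * math.sin(2 * x))

# composite Simpson's rule on [π/6, π/4]
a, b, n = math.pi / 6, math.pi / 4, 1000   # n must be even
h = (b - a) / n
total = integrand(a) + integrand(b)
for i in range(1, n):
    total += (4 if i % 2 else 2) * integrand(a + i * h)
approx = total * h / 3

exact = (math.sqrt(6) - math.sqrt(2)) / 2
assert abs(approx - exact) < 1e-10
```

The agreement follows from the simplification in the text: on this interval the integrand equals \frac{\sqrt{2}}{2} \cdot \frac{1}{\sin^2 x}, whose antiderivative is -\frac{\sqrt{2}}{2} \cot x by the first part.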

Meta-Analysis

To some extent this is difficult, because students don’t generally do A levels in this way (for very good reasons), and I’m sure there must be students out there who could similarly blast through the modules in less than half the time given, or better (but there is no data about this). Nonetheless, the A level boards usually publish Examiners’ Reports, which can be fairly interesting to read through, though they are generally lacking in data. The C3 report was fairly rich in detail, though; and the 68/72 score was actually not too great (notice that “8% of candidates scored 70 or higher”). Indeed, the aforementioned question 9 caused difficulties, though the preceding question 8 on logarithms was the hardest in terms of having the lowest proportion of candidates recording full marks.