
Algorithmic Modelling – Codenames

Codenames is a board game designed by Vlaada Chvátil that tests communication. The game is played with two teams; each team splits itself into clue-givers and guessers.

There is a board of 25 cards, each with one (or more) words on it. Of these cards, 9 are owned by the first team, 8 by the second team, 7 are neutral and one is the ‘assassin’.

The clue-givers attempt to communicate to the guessers which cards are owned by their team. This is done by giving a one-word clue followed by the number of cards N corresponding to the clue. The rules establish some bounds on allowed associations (for example, ‘sounds like’ clues are not allowed).

I don’t know how much time went into selecting the words to appear on individual cards, but there are certainly many words in the deck that can be interpreted in many ways, which makes the game fun. For example, I can think of quite a number of ways to clue BOND (COVALENT for unambiguity; DURATION, ASSET or DEBT for the financial instrument; JAMES, AGENT or SEVEN or even at a stretch something like CASINO for the character; FRIEND or FRIENDSHIP; STREET or TUBE; CONTRACT, PROMISE or WORD for the idea of a promise; ADHERE or CONNECT, among others). Which one I might pick would depend on the other words on the board.

Guessers then identify up to N+1 cards they think their team owns. This can be based on the entire history of clues given, not just the previous clue. These cards are guessed one at a time; a team is only allowed to make further guesses if it correctly guesses one of its own team’s cards. Teams may also stop guessing before using all of their guesses.

The game ends either when all cards owned by a team are revealed (that team wins), or when a team reveals the assassin (that team loses).

In practice, clue-givers will need to consider all cards, not just the ones their team owns. The penalty for ‘failure’ varies; the assassin is an instant loss, and revealing a card owned by the other team is also bad. For example, with the cards APPLE, BANANA and GRAPE it would be very tempting to declare (FRUIT, 3) as a clue; yet, if KIWI is owned by the other team (or worse, is the assassin) it might be dangerous.

However, if the other team has already revealed that KIWI is theirs (e.g. with another clue, maybe SOUTH or BIRD) then the FRUIT clue becomes safe. Thus, pre-computing a strategy at the beginning of the game (e.g. by partitioning the eight or nine cards owned into several logical groups while avoiding other words) may not be optimal.

I tend to consider making an effort to win to be a key part of playing games, and thus which moves may be considered good also often depends on the state of the game. For example, if the opposing team has just one card left, I will give a broader clue that may have more tenuous links in a ‘do or die’ effort to finish on this turn. A less extreme example would be attempting weaker links if behind, or perhaps playing slightly more conservatively if ahead.

Mathematically modelling Codenames can be tough. We can try modelling the game state as a tuple (O, E, N, x, h, f_O, f_E), where O, E and N are the sets of remaining words owned by our own team, owned by the enemy team, and neutral respectively; x is the assassin; h = (t,w,n)^+ is the history of the game, in which each clue (t, w, n) records that team t gave the clue word w and hinted that n cards were relevant; and f_O and f_E are preference functions of the form (h, w, n, O, E, N, x) \rightarrow P, returning an ordered list over O \cup E \cup N \cup \lbrace x \rbrace. This already abstracts two difficult parts away – ordering the clues, and determining how many of the top preferences to pick.
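As a concrete reference point, here is a minimal Python sketch of that tuple; the class and field names are my own invention, purely illustrative, and not from any real implementation.

```python
# A minimal sketch of the game-state tuple (O, E, N, x, h, f_O, f_E) described above.
from dataclasses import dataclass, field
from typing import Callable, FrozenSet, List, Tuple

Clue = Tuple[str, str, int]  # (team, clue word, number of cards hinted)

# A preference function takes the history plus the current clue and board,
# and returns an ordered list of candidate words (most preferred first).
PreferenceFn = Callable[[List[Clue], str, int, FrozenSet[str], FrozenSet[str],
                         FrozenSet[str], str], List[str]]

@dataclass
class GameState:
    own: FrozenSet[str]       # O: remaining words our team owns
    enemy: FrozenSet[str]     # E: remaining words the enemy team owns
    neutral: FrozenSet[str]   # N: remaining neutral words
    assassin: str             # x
    history: List[Clue] = field(default_factory=list)  # h = (t, w, n)^+
    pref_own: PreferenceFn = None     # f_O
    pref_enemy: PreferenceFn = None   # f_E
```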

I think the preference functions need to take into account previous clues from both teams; if a previous clue could clearly have corresponded to two words and I picked the wrong one, the other word might remain fairly high on my list even if it is unrelated to the current clue. Similarly, if this scenario happens to the opposing team, I would probably avoid the word that would blatantly be theirs.

The notion of degree of confidence also isn’t captured well by our ordering; going back to the fruit example, having a clear ordering would imply that clues for fewer than 4 items could reliably result in correct guesses (if we knew that APPLE was by a small margin the best pick, we could safely and confidently give (FRUIT, 1) to clue it, which seems wrong). One could imagine modelling this with selection probabilities, though things would be even more complex.

The above representation still seems computationally difficult to work with. The evolution of the preference functions as one moves from one state to another is unclear (in that representation they are fully general), making lookahead difficult. It doesn’t seem like a greedy choice is always best; for example, given eight words that divide reasonably into four pairs, a clue for 4 words taking one element from each pair might be a bad idea if the remaining words can’t easily be linked.

We can proceed by simplifying the preference functions; a simple approach is that for each (w, n), each team has a persistent preference function that returns an ordered list over the initial board O_0 \cup E_0 \cup N_0 \cup \lbrace x \rbrace. The in-game preference functions then return the subsequence of that list containing exactly the words still left in the game, as in the sketch below. This of course doesn’t take into account past information or clues from the other team.
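A sketch of this restriction in Python might look like the following; the function name and the example words are mine, purely for illustration.

```python
# Persistent preferences: for each (clue word, number) the team fixes an ordering
# over the initial board; in-game, we keep only the words still in play.
from typing import Dict, List, Set, Tuple

def restrict_preferences(
    persistent: Dict[Tuple[str, int], List[str]],  # (w, n) -> ordering over the full board
    clue: Tuple[str, int],
    remaining: Set[str],
) -> List[str]:
    """Return the persistent ordering filtered to the words still on the board."""
    return [word for word in persistent[clue] if word in remaining]

prefs = {("FRUIT", 2): ["APPLE", "KIWI", "GRAPE", "BOND", "AGENT"]}
print(restrict_preferences(prefs, ("FRUIT", 2), {"APPLE", "GRAPE", "BOND"}))
# -> ['APPLE', 'GRAPE', 'BOND']
```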

With this, we can attempt to solve the game using standard techniques; assuming that the vocabulary of clues is bounded (let’s say it must be from the Linux dictionary), a game state is winning for the current team if there exists some clue word for which the preference function returns everything in O as a prefix (an immediate win), or more generally if some move leads to a state that is losing for the opposing team. A state is losing if all moves in that state produce a state which is winning for the opposing team.

We can generalise this to a simple probabilistic model as well; the preference ‘functions’ instead return a discrete random variable that indicates either guessing some word or passing. A simplified model could then, at the start of the game, assign weights to each candidate indicating the relative probability that candidate would be selected. These can be normalized to account for a pass probability. As words are removed from the board, the probability of the remaining words being selected scales (we can imagine a simple rejection-sampling where we discard words that aren’t actually on the board any more).
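A sketch of such a guesser, with weights simply renormalised over the words still on the board (which is equivalent to the rejection-sampling picture above), could look like this; the weight values are made-up numbers for illustration.

```python
# Probabilistic guesser: fixed weights over the initial board plus a pass weight.
# Restricting to the remaining words and renormalising is equivalent to rejection
# sampling (discarding draws of words no longer on the board).
import random
from typing import Dict, Optional, Set

PASS = None  # sentinel meaning "stop guessing"

def sample_guess(weights: Dict[Optional[str], float],
                 remaining: Set[str]) -> Optional[str]:
    candidates = [w for w in weights if w is PASS or w in remaining]
    total = sum(weights[w] for w in candidates)
    r = random.uniform(0, total)
    for w in candidates:
        r -= weights[w]
        if r <= 0:
            return w
    return PASS  # only reachable through floating-point rounding

weights = {"APPLE": 5.0, "KIWI": 3.0, "GRAPE": 1.0, PASS: 1.0}
print(sample_guess(weights, remaining={"APPLE", "GRAPE"}))
```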

The algorithm for the probability that we get a win from a given state is then slightly more complex (though I think still reasonably covered by standard techniques).

Likelihood Estimation

One of my team-mates introduced another interesting question over lunch. Working through it reminded me of some of the statistics problems I struggled with at Imperial, specifically during Intelligent Data and Probabilistic Inference. It reinforced that in spite of scoring 90 for that course I’m still not confident I knew what I was doing then (or now).

Suppose you have a symmetric triangular distribution of unknown width and mean. Given that the distribution has yielded three independent samples of 2, 4 and 5, what is the expectation of the mean?

The triangular distribution can be used as an estimate for something which is known to be bounded both above and below, and which has a value known to be the most probable. Some argue that it is the simplest distribution satisfying these requirements (though one could argue that a cosine or some kind of truncated normal might be more applicable).

The instinctive answer I have is simply the mean of the samples, or \frac{11}{3}, though I was suspicious as probability and statistics often yield non-intuitive results.

The distribution is named as such because it has a triangular probability density function; because of the laws of probability (the area under the function must be 1), specifying the minimum, maximum and mode is enough to uniquely identify it. Supposing we have a minimum a, a maximum b and a mode c (which coincides with the mean in the symmetric case we will consider), this yields a pdf of:

f(x) =\begin{cases} \dfrac{2(x-a)}{(b-a)(c-a)} & a \leq x \leq c \\ \dfrac{2(b-x)}{(b-a)(b-c)} & c \leq x \leq b \\ 0 & \text{otherwise} \end{cases}

We are only dealing with a symmetric case, so we can set c = \frac{a+b}{2} which cleans things up a little:

f(x) =\begin{cases} \dfrac{4(x-a)}{(b-a)^2} & a \leq x \leq c \\ \dfrac{4(b-x)}{(b-a)^2} & c \leq x \leq b \\ 0 & \text{otherwise} \end{cases}

Based on our observation of the three samples 2, 4 and 5, we can construct the likelihood that a given triangular distribution gave rise to them. While the probability that a continuous distribution yields exactly a given value is zero, we can still compare candidate distributions in relative terms using their density functions. We can write this as

P(2,4,5;a,b) = f(2)f(4)f(5)

Expanding this term will depend on where exactly 2, 4 and 5 fall in our triangle. Let’s work out the most promising case (where 2 falls on the left of c while 4 and 5 fall on its right); the rest are left as an exercise to the reader. In this case, we have

P(2,4,5;a,b) = \dfrac{4(2-a)}{(b-a)^2} \times \dfrac{4(b-4)}{(b-a)^2} \times \dfrac{4(b-5)}{(b-a)^2} = \dfrac{64(2-a)(b-4)(b-5)}{(b-a)^6}

At this point, we notice the original question needs a bit more specification. We aren’t given the distribution of possible values of a and b. One way of getting around this is just to pick a uniform distribution; however, a uniform distribution isn’t well-defined over the whole real line. We can for now simply find the maximum likelihood estimate for a and b.

Alternatively, if we gave prior probability distributions for a and b, we could use the samples as information to obtain a posterior distribution. Usually we would pick a conjugate prior, whose functional form doesn’t change after conditioning on the sample; I didn’t find one for the triangular distribution, though.

If we want to find the most likely distribution, we seek an extreme point; this can be done by taking partial derivatives (and this expression actually lines up quite well with the quotient rule). There is a fairly standard ‘trick’ for handling these, though: since the logarithm is a strictly increasing function, we can compute the log-likelihood instead, and its maximum will also be the maximum of the original function. Using the laws of logarithms, we get something a lot more tractable:

\log P(2,4,5;a,b) = \log 64 - 6 \log(b-a) + \log(2-a) + \log(b-4) + \log(b-5)

Computing the partial derivatives is then straightforward; setting them to zero gives \frac{6}{b-a} = \frac{1}{2-a} and \frac{6}{b-a} = \frac{1}{b-4} + \frac{1}{b-5}, and solving these simultaneously yields

a, b = \left( \dfrac{1}{3} \left( 4 + \sqrt{\frac{2}{5}} \right), \dfrac{1}{3} \left( 16 - \sqrt{10} \right) \right), \left( \dfrac{1}{3} \left( 4 - \sqrt{\frac{2}{5}} \right), \dfrac{1}{3} \left( 16 + \sqrt{10} \right) \right)

We’ll want the second pair of solutions; the first actually has b \approx 4.279, which is no good (we need b > 5). Interestingly, the mean \frac{a+b}{2} of the resulting triangular distribution is then \dfrac{2}{15} \left(25 + \sqrt{10} \right) \approx 3.75497, which is not quite 11/3.

The log-likelihood we get with these values of a and b is about -4.74053. By comparison, if we look at the family of distributions with  a = \frac{11}{3} - \alpha and  b = \frac{11}{3} + \alpha (i.e. those with mean exactly \frac{11}{3}), the best we get is about -4.7473.
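To double-check these numbers, here is a small numerical sketch using scipy; the samples and the symmetric-triangular log-likelihood are as derived above, while the function names and starting point are my own.

```python
# Numerically maximise the symmetric-triangular log-likelihood for samples 2, 4, 5
# and compare the resulting mean against the closed-form answer above.
import numpy as np
from scipy.optimize import minimize

samples = np.array([2.0, 4.0, 5.0])

def neg_log_likelihood(params):
    a, b = params
    if not (a < samples.min() and b > samples.max()):
        return np.inf                    # all samples must lie strictly inside [a, b]
    c = (a + b) / 2.0                    # symmetric case: the peak is at the midpoint
    left, right = samples[samples <= c], samples[samples > c]
    ll = (np.log(4 * (left - a)).sum() + np.log(4 * (b - right)).sum()
          - 2 * len(samples) * np.log(b - a))
    return -ll

res = minimize(neg_log_likelihood, x0=[1.0, 6.0], method="Nelder-Mead")
a_hat, b_hat = res.x
print(a_hat, b_hat, (a_hat + b_hat) / 2)   # mean comes out near 3.755, not 11/3
print(-res.fun)                            # log-likelihood around -4.7405
```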

On Challenges that Build

On my return flight from Singapore to London, I listened to quite a few hours of music. Two of the songs I listened to and enjoyed at least partially for similar reasons were It’s Gonna Be Me (by NSync), and I Can’t Be Mad (by Nathan Sykes). It’s a bit of a strange pairing, as the former seems to be an upbeat, relaxed pop song while the latter is a fairly moody piano ballad. However, the common element I latched on to here was that both songs feature sections that are repeated multiple times, with the vocals developing additional complexity on each iteration (thinking about it, this is fairly common in songs that are reviewed well critically, and also in songs I like). For example, in It’s Gonna Be Me there is a line in the chorus which is sung four times over the course of the song, and its complexity develops with each pass.

The corresponding repetitions in I Can’t Be Mad have a couple of changed notes, but also (if trying to reproduce the original) demand different productions of the notes (falsetto vs not, belts, etc.). There’s always a risk of adding too many embellishments, though I find expanding upon base melodies can be quite interesting. Singing these, and considering what would be reasonable for my voice (adding a closing run to the last syllable of that chorus line, for instance) and what would not be (adding a +1 semitone key change after the second chorus in I Can’t Be Mad – the original is already awfully hard), can be enjoyable too.

Generalising this, I quite like the idea of “increasingly complex variations on the same theme” when learning concepts and when teaching them. This already seems to happen for many concepts in mathematics. Over the course of an A-level student’s mathematics education, he/she might understand how to write a quadratic expression as a product of linear factors (e.g. converting 6x^2 - 19x - 7 into (2x-7)(3x+1)). This could first begin with expressions where inspection works feasibly. However, students should also be presented with some examples where inspection is extremely difficult or even impossible (though probably only after gaining some confidence with the cases where inspection is plausible). For general expressions, one could try to use both the quadratic formula and factor theorem to factorise something like 6x^2 - 19x - 8 into -\frac{1}{24}(-12x + \sqrt{553} + 19)(12x + \sqrt{553} - 19). However, there will be some expressions like 6x^2 - 19x + 16 where the solutions to the quadratic are not real; later, with some understanding of complex numbers, these would make sense. Students will also learn about problems which may not obviously be quadratics but can be written as such (like x^4 + 2x^2 + 1); the ability to synthesise the various techniques can then be tested with something like 7x^8 - 10x^4.
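For what it’s worth, the factorisation examples above can be checked quickly with sympy; this is just a verification aid, not part of any curriculum.

```python
# Checking the quadratic (and quadratic-in-disguise) examples above with sympy.
import sympy as sp

x = sp.symbols('x')

print(sp.factor(6*x**2 - 19*x - 7))    # (2*x - 7)*(3*x + 1): inspection works
print(sp.roots(6*x**2 - 19*x - 8, x))  # irrational roots (19 +/- sqrt(553))/12
print(sp.roots(6*x**2 - 19*x + 16, x)) # complex roots: the discriminant is negative
print(sp.factor(x**4 + 2*x**2 + 1))    # (x**2 + 1)**2: a quadratic in x**2
print(sp.factor(7*x**8 - 10*x**4))     # x**4*(7*x**4 - 10): a quadratic in x**4
```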

To some extent my Masters project also had this theme – linear time logic, adding knowledge, adding dynamic modalities, generalising that to full branching time logic, and then switching out the infinite traces for finite traces. I haven’t written a course or a book on a computer science topic yet, but I can imagine that there might at least be sections that follow this kind of sequence.

This pattern also occurs a fair bit in many technical interviews I’ve seen, where problems start easy, but additional and progressively more challenging constraints are repeatedly introduced. The purposes here could include testing for a breaking point, seeing how candidates react to problems without an obvious solution, or seeing whether they are able to synthesise additional information to come to a solution.

I find that I often learn best by practicing on smaller examples at first, and then (attempting to) generalise their conclusions to larger models, considering when these conclusions hold and when they fail. Having multiple variations of progressive difficulty can be useful, as they give a sense of achievement as partial progress is made towards an overall goal. Furthermore, I find that understanding how changes in the problem scenario make the base solution method applicable or inapplicable is a key part of understanding as well; there is a clear need to reason about this when considering incremental variations. Going back to It’s Gonna Be Me, for example, aiming downwards at the word ‘love’ and not conserving sufficient air or energy for it might work for the first three passes, but it’s unlikely to on the last round.

There is a risk that the method can be frustrating in that it seems like it is consistently ‘moving the goalposts’, especially if one forgets that the partial goals are partial goals (and starts to think of them as complete ends in and of themselves). The standard I’m using for understanding (ability to critically evaluate applicability in novel contexts) may be seen as a little high. I also haven’t covered how to bootstrap the method (that is, how to develop an understanding of how to attack the base problem before any variations are introduced). Nonetheless I think there are some contexts where this works well. I’ve found it to be useful in singing, mathematics and interviewing at least!

Making Heads of Tail Risks

I remember that I was fairly anxious at the beginning of my fourth year at Imperial. I was concerned about securing work after university. Looking back, this seemed patently ridiculous; I had topped my class for the third time and already had a return offer in hand from Palantir. However, owing to sweeping government rhetoric about controlling post-study work visas at the time, I saw “not being able to get a work visa” as the primary risk then, even if it was remote. That statement in and of itself was probably correct, though the time I spent to monitor and mitigate that risk (reading up on government committee reports, and considering alternatives like a H1B1, EU blue card or doing a Tier-2 ICT after a year) was excessive.

Of course, this never materialised; and even if it had, the only likely impact would have been that I’d have to fly home to Singapore in between finishing uni and starting work (I did not; though in hindsight that might have been a good thing to do).

I’m not sure when I first became aware of the concept of probability density functions (or, for that matter, continuous random variables). These functions take on nonnegative values and integrate (across all variables) to 1. In the case of single-variable functions, one can plot them on a two-dimensional graph; one may get results looking somewhat like the return distributions of the three investments described below.

Areas of regions underneath the graph are proportional to the probability that a value falls in that region. For example, a uniform distribution would have a probability function that’s just a horizontal line. The graphs for the return of investments 1 and 2 in the example above follow what’s called a normal distribution; investment 3 follows a Student’s t distribution which has fatter tails.

Since areas are proportional, a simple technique for generating random values from an arbitrary (bounded) distribution is called rejection sampling; if one draws a box around the distribution and throws darts randomly at it, one can take the x-coordinate of the first dart that lands underneath the function as a representative random sample.
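A minimal sketch of the dartboard idea, assuming a density that is zero outside a known interval and bounded above by a known constant:

```python
# Rejection sampling: throw uniform darts into a box enclosing the density and
# keep the x-coordinate of the first dart that lands under the curve.
import random

def rejection_sample(pdf, lo, hi, pdf_max):
    """One sample from pdf, assumed zero outside [lo, hi] and at most pdf_max."""
    while True:
        x = random.uniform(lo, hi)       # the dart's x-coordinate
        y = random.uniform(0, pdf_max)   # the dart's y-coordinate
        if y <= pdf(x):                  # first dart under the curve wins
            return x

# Example: the symmetric triangular density on [0, 2], peaking at height 1.
triangular = lambda x: x if x <= 1 else 2 - x
print([round(rejection_sample(triangular, 0, 2, 1), 3) for _ in range(5)])
```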

That’s a basic mathematical introduction. If we had to rank the quality of the return profiles above (remember: right means higher returns), a lot would depend on what we were trying to do. I would personally rank investment 2 (the green curve) on top; it has a considerably higher mean return than investment 1 (blue) and adds only a small amount of variability. We can calculate what’s known as the standard deviation of a given distribution; this is a measure of how much variability there is with respect to the mean. In fact, the blue curve has a standard deviation of 0.6; this is 0.7 for the green curve.

Ranking investments 1 and 3 is more difficult; the mean of 3 is higher, but you add a lot of uncertainty. I’d probably rank them 2, 1, 3. However, there is also an argument in favour of investment 3 – if one is only interested in whether the returns exceed a certain level. It’s a similar line of argument to how, if you asked me to double a large sum of money (nominally) in 20 years, I’d pick a bond; in 10 years, a general stock index fund; and in 10 minutes, probably blackjack or aggressive forex speculation.

Whichever investment we pick, it’s possible that we may get unexpectedly awful (or excellent!) results. The standard deviation could give us some measure of what to expect, but there is still a non-zero probability that we get an extreme result. For the normal distributions (the blue and green curves), there is a 99.7% probability that a single observation will be within three standard deviations of the mean; this does also mean that there’s a 0.3% probability it is not, and about a 0.15% probability it’s more than three standard deviations below the mean.
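The three-sigma figures quoted here can be checked in a couple of lines (assuming a normal distribution):

```python
# Verifying the quoted three-sigma probabilities for a normal distribution.
from scipy.stats import norm

within = norm.cdf(3) - norm.cdf(-3)
print(round(within, 4))        # ~0.9973: within three standard deviations
print(round(1 - within, 4))    # ~0.0027: outside
print(round(norm.cdf(-3), 5))  # ~0.00135: more than three sigma below the mean
```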

Tail risk refers to the risk of events that may have severe impact but are low-probability; considering them is important. Going back to the work visa situation, I think I correctly identified visa policy changes as a tail risk, though in hindsight controlling the amount of time spent mitigating them was done poorly – akin to spending $10 to insure against a 1% probability of $100 loss (provided the $100 loss wasn’t crippling – which it wouldn’t have been).

I also spent a lot of time focusing on mitigating this specific tail risk, when perhaps a better solution could be developing resilience to general tail risks that may affect my employment. The obvious routes at the time would have been continuing to do well academically and develop my skills, though others exist too – such as having a greater willingness to relocate, living below one’s means and building up an emergency fund. There are still further tail risks that the above wouldn’t address (e.g. a scenario where computers and automation are universally condemned, all countries practice strict closed-border policies and the global fiat money system collapses) but the costs in mitigating those risks seem untenably high. I haven’t read Antifragile yet (what I describe here is weaker, as it doesn’t demonstrate benefiting from low-probability events), though that’s planned to be on my reading list at some point in the future.

Quantitative Challenges

I spent about two to three hours studying and then working through mathematical problems on quantitative finance today. Specifically, these were questions from Blyth’s An Introduction to Quantitative Finance and dealt with interest rate swaps. These are contracts where one party typically pays fixed payments (e.g. 5%) and receives floating payments, which are dependent on market rates (e.g. LIBOR + 1.5%), though float-float swaps exist too (e.g. across currencies). Swaps can be used to mitigate interest rate risk (or gain exposure!); they can also be mutually beneficial depending on companies’ borrowing characteristics.

If I had to classify the mathematics involved in the problems I did, beyond “applied” I’m not sure how else I could label it. For many of these problems, the first steps involved figuring out how to mathematically model the financial products involved. There then followed some elementary algebra, along with proofs that required some intuition to pick the right approach (for lack of a better word). The exercises I did this time around relied less on probability than normal; much of this probably stemmed from a result that forward interest rates (i.e. the interest rate you’d lock in today for the period from future time T1 to later future time T2) could be valued independently of the distribution of possible future rates.
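As a rough illustration of the kind of result involved (this is my own toy example with made-up discount factors, not an exercise from Blyth), both the forward rate over [T1, T2] and the par rate of a vanilla swap drop out of zero-coupon bond prices via standard no-arbitrage arguments, with no distributional assumptions:

```python
# Toy illustration: forward rates and a par swap rate from zero-coupon bond prices.
def forward_rate(z1: float, z2: float, t1: float, t2: float) -> float:
    """Simply-compounded rate locked in today for the period [t1, t2]."""
    return (z1 / z2 - 1) / (t2 - t1)

def par_swap_rate(zcb: dict, payment_times: list, accrual: float) -> float:
    """Fixed rate making a vanilla swap worth zero at inception."""
    annuity = sum(accrual * zcb[t] for t in payment_times)
    return (1.0 - zcb[payment_times[-1]]) / annuity

zcb = {1.0: 0.97, 2.0: 0.94, 3.0: 0.90}            # hypothetical discount curve
print(forward_rate(zcb[1.0], zcb[2.0], 1.0, 2.0))  # the 1y-2y forward rate
print(par_swap_rate(zcb, [1.0, 2.0, 3.0], 1.0))    # 3-year annual-pay par swap rate
```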

I’ve struggled a fair bit with the book, both with the reading material and with the exercises. Today’s chapter was relatively easier, though that might have been because I was reading through it for the second time. It was my first time doing many of the exercises, though they went relatively smoothly today.

Some of this might be because I work through the chapters at a very relaxed clip of about one per month. Like many other mathematical domains, this one has many dependencies on previous topics. The earlier result I mentioned on forward interest rates, for example, was from the previous chapter; yet it was instrumental in computing the valuation of a swap. There are certain fundamental ideas that I first encountered way back in 422 (Computational Finance at Imperial), and I also think my mathematical knowledge and logical intuition have (hopefully) mostly stayed with me. Furthermore, I like to think that I remember the concepts at a high level. However, many proofs require recognising that expressions are in certain forms and can thus be rewritten; I’ve yet to develop that level of familiarity – or shall I say intimacy – with the content.

This might also be partially self-created, especially where the reading material is concerned. When I see theorems, I tend to try my hand at proving them on my own first. These often prove to be rather tricky endeavors; the aforementioned lack of keen familiarity with the material certainly doesn’t help. Typically, I can understand the proofs fairly easily when reading them. However, I usually expect myself to figure out the intuition behind the proof (including reproducing it, at least at a high level), which isn’t always so forthcoming. That actually reminds me of what I used to do at Imperial for certain modules, especially the (in my opinion) two hardest of the course: 438 Complexity and 493 Intelligent Data and Probabilistic Inference. I would make the effort to understand why many of the proofs in those courses worked. I’d also try to figure out how the author might have come up with the proof, or at least what the core intuitions might have been. This included relatively nasty ones (e.g. SAT being NP-complete via direct argument, or ELBO results in variational inference), and I think it paid off in terms of understanding.

It could be argued that finding the material difficult is expected, because the subject matter is itself complex. I tried to get a sense of how others rated the difficulty of the material covered by the book, but didn’t find much data; there were only a handful of reviews on Amazon, which offered a wide spectrum of views (from “[o]ne of the best introductory treatise (sic)” to “I would hardly call it an ‘Introduction’ to quantitative finance”). I’m not sure how the book could start more simply, though; there is a fair bit of assumed mathematical knowledge, but this is at least partly spelled out in the introduction.

While the book has proved challenging at times, I’ve not found it too hard to follow. Discussing the problems with a friend has also helped a fair bit, especially since the book doesn’t have solutions!

What Does “Likely” Mean? (Estimative Probability)

I typically take some time out on Saturday mornings to do some reading. This often begins with developments in personal finance or computer science, though there’s a tendency to branch out from that; I generally don’t restrict myself from going off on tangents.

For some reason, this week I came across a Masters thesis discussing communication of probabilities, specifically in the intelligence domain. It seems that I found it via A Wealth of Common Sense, a blog concerning personal finance that I read occasionally; there was a link there to a Library of Economics blogpost discussing issues with mapping qualitative descriptions to quantitative probabilities. For example, if I say that something is highly likely to happen, I would be implying that I believe it would happen with probability at least N; however, the numerical value of N is uncertain, and differs among different people.

For me at least, N would be around 80 percent (and, incidentally, the author of that post agrees); that said, I can certainly envisage people assigning values of N as low as 0.6 or as high as 0.95. Note that the problem can also be two-tailed (e.g. about even, might, possible, not inconceivable). The LoE author’s son proposes a reasonable scheme, which is to have authors outline their own mapping in a glossary. This is probably (well, I mean P >= 0.7) a good first step, though there are implementation challenges in terms of length as well as completeness.

It turns out that the concept of words of estimative probability is treated very seriously in the intelligence domain. It is understandably important, as briefs are typically prepared in natural language, and often need to be communicated to audiences that may not be entirely comfortable with mathematical notation. To quote a CIA officer:

Most consumers of intelligence aren’t particularly sophisticated when it comes to probabilistic analysis. They like words and pictures, too. My experience is that [they] prefer briefings that don’t center on numerical calculation. That’s not to say we can’t do it, but there’s really not that much demand for it.

Furthermore, deriving highly precise (though possibly not highly accurate) estimates for probabilities is almost certainly (*cough* I mean P <= 0.03) pointless, and is likely (P >= 0.7) damaging in that it tends to (P >= 0.6) give a false sense of security and accuracy when that does not actually exist.

The original proposal divides probabilities into a seven-point scale (arguably five, as the ends are meant to reflect absolute certainties): certain, almost certain, probable, chances about even, probably not, almost certainly not, impossible. I think most people would agree that the above ordering is in decreasing order of probability. Of course, strictly adhering to the above labels would impart a degree of awkwardness to writing, and a group of variants for each level (such as highly doubtful for almost certainly not) soon developed.

Interestingly, the author also gives possible a fairly specific meaning; he defines it to mean “greater than zero and less than one” (which makes sense; of course, something always happening is certainly possible – but it seems pointless to not use the more precise word), but also adds the restriction that “no numerical odds (can) be assigned”. This seems like a useful construct, especially in the early stages of knowing things when uncertainty tends to be high; the other descriptive terms were conceived with uncertainty ranges of about 10% on each side.

The author of the Masters thesis also considers how words of estimative probability are used in other fields. I found the comparison to weather forecasting particularly interesting, as the author rightly points out that that is one field in which the general public is given numeric estimates (of the probability of precipitation). Weather forecasters typically add their own prose when reporting as well, which allowed some comparison. That said, a major difference is that in forecasting, these estimates can be derived with fair precision (and, as it turns out, accuracy) as they can (and, in the UK, do) originate from predictive models of atmospheric conditions. There seem to be standardised terms as far as the US National Weather Service is concerned, though I wasn’t able to find comparable guidance from the UK Met Office.

The clarity required from words of estimative probability depends on the consequences of miscommunication, as well. This is of course important in intelligence, with some claiming that there was miscommunication regarding warnings related to the September 11 terrorist attacks. Incorrectly reporting a weather forecast is almost certainly (ugh, P >= 0.95) less consequential, though people may make bad decisions concerning taking umbrellas when going out or hanging clothes out to dry. I can imagine contexts where this would also be very important (such as experimental trials in medicine or financial planning), though it seems for the most part some degree of ambiguity or even unknown disagreement is probably okay.

Elevation (Hotseat: STEP I 2015)

Background

STEP (short for Sixth Term Examination Paper) is a somewhat difficult Maths exam that is used by several UK universities (including Cambridge, Warwick and Imperial) as part of conditional offers for courses in Mathematics and Computer Science. Part of my conditional offer for Imperial was a 2 in any STEP paper of my choice, though it was paired with a relatively awkward 35 points for IB – perhaps because the rest of my portfolio was pretty strong.

There are three papers – I, II and III; III includes A Level Further Mathematics content, while I and II remain within the A Level Mathematics syllabus. I is also somewhat easier than II; that said, I think both papers exist because Cambridge does sometimes want students who didn’t take Further Mathematics to get a pair of grades in these exams. Nonetheless, STEP I itself is certainly no pushover. Students are graded on a scale of S, 1, 2, 3, U; the 2015 STEP I paper had 73.1 percent of students scoring at least ‘3’ (the lowest pass grade), and just 42.6 percent scoring at least ‘2’ (the lowest grade many universities would consider). This may be compared with A Level mathematics in 2015, where the analogous metrics of A*-E and A*-C respectively are 98.7 and 80.8 percent; and this is even before we factor in selection bias.

Each paper consists of 13 questions, but candidates are only required to pick six of them; their highest-scoring six questions will be used to determine their final score. Questions have equal weight (and each is marked with integers out of 20, which seems suspiciously similar to how I’ve seen this done at many universities!). Eight of the 13 questions are classified as “pure mathematics” and include tasks testing concepts like calculus, trigonometry, algebraic manipulation, series and even number theory. Three are classified as “mechanics”, typically requiring calculations on Newtonian mechanics, and two as “probability and statistics”. I almost always do 4/0/2 or 3/1/2. Note that it is possible to attempt seven or even more questions as a form of “insurance”, though given the strict time constraints this is likely to be difficult.

Performance

I had a fairly decent run, picking up 114 out of 120 points; the marks lost were mainly down to minor slips and cases where an important statement was not explicitly asserted, plus a good chunk on question 7 for not clearly handling a case which was expected to be shown to bear no fruit (I thought it rather obvious that it was not needed, and dismissed it in one line).

The order in which I attempted the problems was generally consistent with how long I actually took on them (problems 2 and 13 were fast; 1 and 8 were somewhat in-between, and I struggled for a bit with 12 and messed up 7 while using up a fair bit of the time). Note that the “break-even” time, if one wants to give each of one’s six submitted questions equal attention, is 30 minutes per question.

Selected Problems in Depth

Problem 8: Series Division

First, prove that 1 + 2 + \ldots + n = \frac{n(n+1)}{2} and that (N-m)^k + m^k is divisible by N (for k a positive odd integer). Then, consider

S = 1^k + 2^k + 3^k + \ldots + n^k

Show that if n is a positive odd integer, then S is divisible by n, and that if n is even then S is divisible by n/2. Show further that S is divisible by 1 + 2 + 3 + \ldots + n.

The two lead-ins were simple. Induction does work, but in both cases there were much better methods that could be used (write the series twice and pair terms up, and pair terms in the binomial expansion, respectively). Later parts involved padding S with a zero term (0^k) so that everything could be paired off, but the general theme of “pairing terms” persisted throughout the question. I think the toughest part of this one was, knowing that one had to show divisibility by \frac{n(n+1)}{2} at the very end, figuring out that it was safe to split this into two coprime factors and show divisibility by each separately. This works because the highest common factor of n and n + 1 is 1. My number theory was a bit rusty, so I wasn’t sure if that was correct, and proved it during the paper.
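A quick brute-force check of these divisibility claims (for odd k, as in the question) is easy to run:

```python
# Numerically checking that S = 1^k + ... + n^k is divisible by n (n odd),
# by n/2 (n even), and by 1 + 2 + ... + n, for odd k.
def power_sum(n: int, k: int) -> int:
    return sum(i**k for i in range(1, n + 1))

for k in (1, 3, 5, 7):
    for n in range(2, 40):
        s = power_sum(n, k)
        assert s % (n * (n + 1) // 2) == 0        # divisible by 1 + 2 + ... + n
        assert s % (n if n % 2 else n // 2) == 0  # by n (odd n) or n/2 (even n)
print("all checks passed")
```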

Problem 12: On Fish Distributions

The number X of casualties arriving at a hospital each day follows a Poisson distribution with mean 8. Each casualty requires surgery with probability 1/4. The number of casualties arriving on each day is independent of the number arriving on any other day, as are the casualties’ requirements for surgery. (After some initial work) Prove that the number requiring surgery each day also follows a Poisson distribution, and state its mean. Given that in a particular randomly chosen week 12 casualties require surgery on Monday and Tuesday combined, find the probability that 8 casualties require surgery on Monday (as a fraction, in its lowest terms).

This one really wasn’t too bad, though it involved a lot of algebraic manipulation, and it seems I took quite a long time on it when doing the paper. Essentially, the independence condition should hint that, given that X = t casualties arrive, the number S needing surgery is binomially distributed. X itself is a random variable, but that’s fine; the law of total probability gives us

P(S = s) = \displaystyle \sum_{t = s}^{\infty} P(S = s | X = t) P (X = t)

and a suitable substitution yields this:

P(S = s) = \displaystyle \sum_{t = s}^{\infty} \left( \frac{t!}{s! (t - s)!} \times \left( \frac{1}{4} \right)^s \times \left( \frac{3}{4} \right)^{t-s} \times \frac{e^{-8} 8^t}{t!}\right)

After some fairly involved algebraic manipulation, one can indeed recover a Poisson form for the pmf of S. Using this, the last part is actually relatively simple; we want P(S_1 = 8 | S_1 + S_2 = 12), relying on the fact that a sum of independent Poissons is itself Poisson (S_1 and S_2 have mean 2 each, so S_1 + S_2 is Poisson with mean 4). Conditional on the total of 12, the number on Monday is then binomial with 12 trials and success probability 1/2, giving \binom{12}{8} / 2^{12} = \frac{495}{4096}.
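To sanity-check both the thinning result and the conditional probability, here is a short scipy-based verification (my own check, not the mark scheme’s method):

```python
# Check that the "thinned" surgery count is Poisson(2), and compute
# P(S1 = 8 | S1 + S2 = 12) for independent S1, S2 ~ Poisson(2).
from math import comb
from scipy.stats import poisson

lam, p = 8, 0.25

def thinned_pmf(s: int, terms: int = 60) -> float:
    """Evaluate the total-probability sum above, truncated far into the tail."""
    return sum(comb(t, s) * p**s * (1 - p)**(t - s) * poisson.pmf(t, lam)
               for t in range(s, terms))

print(thinned_pmf(3), poisson.pmf(3, lam * p))   # both ~0.1804, i.e. S ~ Poisson(2)

numerator = poisson.pmf(8, 2) * poisson.pmf(4, 2)
denominator = poisson.pmf(12, 4)
print(numerator / denominator, 495 / 4096)       # both ~0.12085
```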

Problem 13: Probability and State Machines

Already covered in a previous post. I dispatched this one very quickly, though I was already familiar with the Markov process model that I used here.

Meta-Analysis

The main data source we have available here is an Examiner’s Report that discusses to some extent what happened (though we don’t have full details). The grade boundary for an S is 96, so 114 is comfortably in that range; 3.5 percent of candidates scored an S. Question-level data isn’t published beyond the comments in the Examiner’s Report.

Focusing on the questions I attempted, my opener Q1, a test of curve-sketching, was also the highest-scoring question on the paper, with students scoring an average mark of over 13.5 (with the caveat that some students were worried about curve sketching and thus avoided it). This was the second “easiest” question as far as I was concerned.

The other pure maths questions I attempted (2, 7 and 8) were also popular, and attempted with varying degrees of success (questions 2 and 7 were the second and third highest scoring questions). Probability and Statistics for some reason seems to always cause issues for students attempting these papers, with mean scores in the 2-3 range, though having done Olympiads in the past (and specialising in combinatorics and probability) I understandably found these easier.

Generally, STEP I for me tends to be fairly tough but certainly comfortable, II is solidly difficult, and doing III often makes me feel like a dog behind a chemistry set (“I have no idea what I’m doing”), especially since I didn’t take Further Maths in high school.

Probability and State Machines

Suppose you have a fair six-sided die, and repeatedly toss it. What is the probability that you throw exactly one 4 before your first 6, or you throw exactly one 5 before your first 6 (inclusive or)?
(STEP I Q13, 2015)

There are several ways to approach this problem. There’s a “conventional” method that involves careful use of binomial series (and that was the approach taken in the mark scheme). However, the approach I used for this was somewhat different; some of it may be attributable to my computer science (or perhaps even logic/verification) background.

We can model this event using a finite state machine, where each state has a transition distribution that indicates the probability of transitioning to each other state in the state machine. For example, we can consider a simpler problem: what is the probability that you throw exactly one 4 before your first 6?

Well, we can be in multiple “states”:

  • We could not have rolled anything of significance yet. In this case, we have a 4 in 6 probability of staying in this state (1, 2, 3 or 5); a 1 in 6 probability of immediate failure (rolling a 6), and a 1 in 6 probability of registering a 4.
  • We could have already registered a 4. In this case, we again have a 4 in 6 probability of staying in this state (1, 2, 3 or 5), a 1 in 6 probability of success (rolling a 6), and a 1 in 6 probability of failure (rolling a 4).
  • We could already have succeeded or failed. Additional rolls won’t change anything in these cases.

This can be modelled as a state machine with four states: Start, 4, Pass and Fail, with the transitions described above.

We can then compute a success probability for each state – that is, “given I’m in this state, what’s the probability I succeed”?

  • The success probability PS of the state Pass is 1, and that of Fail is 0; we write this as PS(Pass) = 1 and PS(Fail) = 0.
  • PS(4) is a bit more interesting; it is \frac{4}{6}PS(4) + \frac{1}{6}PS(Pass) + \frac{1}{6}PS(Fail) and solving this yields half. This is intuitively expected; at this point, we’re counting on rolling a 6 before rolling a 4, and on a fair die the probability of that is half.
  • We can now determine PS(Start); this is \frac{4}{6}PS(Start) + \frac{1}{6}PS(4) + \frac{1}{6}PS(Fail). Solving the equation yields the answer of \frac{1}{4}.

Note that we could only use this approach because, with the way we’ve defined the states, the process is memoryless; that is, the transition probabilities depend only on the current state. We could try to directly construct a state machine for the original question (either exactly one 4 or exactly one 5 before the first 6, or both), though it seems that the requirement of memorylessness makes things somewhat complex; we would need states tracking whether we experienced zero, one, or two-plus of each number.

We can use a well-known probability identity here, though: P(A \cup B) = P(A) + P(B) - P(A \cap B), defining rolling exactly one 4 before the first 6 as event A, and analogously with a 5 for event B. Furthermore, our initial exploration already yielded a value of a quarter for P(A), and P(B) is also a quarter by symmetry. We thus, instead, construct a state machine for the intersection, where both need to be satisfied.

  • Throughout, a 1, 2 or 3 does nothing and simply puts us back in the state we were in.
  • As we start, we can also roll a 4 or 5, which mean we’ve got the one 4 or 5 we want; or we can roll a 6, which is an immediate failure.
  • Once we have a 4 registered, we want to roll a 5 which gets us into the (4, 5) state. 6 remains an immediate failure, but 4 also now becomes one (remember that we want to have exactly one 4 and exactly one 5 before our 6).
  • The above logic also holds for 5s, except with 4 and 5 swapped.
  • Finally, in the (4, 5) state, rolling a 6 yields success, but either a 4 or a 5 would become immediate failures.

The resulting machine has six states: Start, 4, 5, (4, 5), Pass and Fail.

We could use the same method of success probabilities used earlier; I’ll take a little shortcut.

  • PS(4,5) must be \frac{1}{3} as we’re counting on a 6 being rolled before either a 4 or a 5, and with a fair die each of these has equal probability of being the first of the set to be rolled.
  • PS(5) is \frac{1}{6}PS(4,5) + \frac{2}{6}PS(Fail) + \frac{3}{6}PS(5), which gives us PS(5) = \frac{1}{9}. By symmetry, we can be sure that PS(4) = \frac{1}{9} too.
  • PS(Start) is \frac{3}{6}PS(Start) + \frac{1}{6}PS(4) + \frac{1}{6}PS(5) + \frac{1}{6}PS(Fail). Solving that yields PS(Start) = \frac{2}{27}.

We’re not done yet, though, as this isn’t what we wanted. We wanted P(A \cup B); but now we can apply the identity and get

P(A \cup B) = P(A) + P(B) - P(A \cap B) = \dfrac{1}{4} + \dfrac{1}{4} - \dfrac{2}{27} = \dfrac{23}{54}

which is the same answer obtained by the STEP problem-setters, though they used binomial series instead.
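Both machines are small enough to solve mechanically; the following sketch (names and structure my own) re-derives the success probabilities exactly with fractions, exploiting the fact that apart from self-loops the transitions only ever move “forward”.

```python
# Solve PS(state) for both state machines above. Since the only cycles are
# self-loops, PS(state) = (sum of p * PS(successor), over other successors) / (1 - p_self),
# provided successors are processed before the states that reach them.
from fractions import Fraction as F

def solve(chain, terminals):
    """chain maps state -> {successor: probability}, listed successors-first."""
    ps = dict(terminals)
    for state, moves in chain.items():
        p_self = moves.get(state, F(0))
        ps[state] = sum(p * ps[s] for s, p in moves.items() if s != state) / (1 - p_self)
    return ps

d = F(1, 6)  # probability of any single face of a fair die
terminals = {"Pass": F(1), "Fail": F(0)}

# Exactly one 4 before the first 6.
machine_a = {
    "4":     {"4": 4 * d, "Pass": d, "Fail": d},
    "Start": {"Start": 4 * d, "4": d, "Fail": d},
}
# Exactly one 4 and exactly one 5 before the first 6.
machine_ab = {
    "(4,5)": {"(4,5)": 3 * d, "Pass": d, "Fail": 2 * d},
    "4":     {"4": 3 * d, "(4,5)": d, "Fail": 2 * d},
    "5":     {"5": 3 * d, "(4,5)": d, "Fail": 2 * d},
    "Start": {"Start": 3 * d, "4": d, "5": d, "Fail": d},
}

p_a = solve(machine_a, terminals)["Start"]    # 1/4
p_ab = solve(machine_ab, terminals)["Start"]  # 2/27
print(p_a, p_ab, p_a + p_a - p_ab)            # 1/4 2/27 23/54
```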

The above approach can of course be generalised to handle expected values (on the basis that there can be multiple distinct terminal states, each assigned a value). For example, if a player won a consolation prize of $21 if he ‘bust out’ (rolled a 6 before rolling either a 4 or a 5), and a full $60 if he won, and we wanted to determine the value of the game, we could draw the graph with three terminal states instead.

We would then perform our computations on the expected value (EV) of each state, with terminals being exactly their value.

Also, though our examples had no cycles (other than self-loops) in their graphs, it’s possible that in general there could be cycles. Of course, these can be handled by solving the multiple PS or EV equations simultaneously.

A Metric for Speedrunning Exams

In gaming, a speedrun is a playthrough of a game where one attempts to complete it as quickly as possible, perhaps subject to some constraints (such as completing a game with 100% completion of all sidequests). I wouldn’t say I’m skilled/fast enough for aggressively focusing on speed in many of the tests I’ve covered; I didn’t finish the AIME, and while I did pretty much “complete” the BMO and MAT within their time limits it wasn’t with much time to spare at all.

Typically when taking exams where one expects to finish comfortably (and even in cases when one does not) I find it’s rarely a good idea to go as fast as possible; mistakes are likely to be made and one typically wants to secure the marks for questions one is able to answer. I’d almost always use up almost all of the time as well; there’s typically no reason not to go back and check one’s past work, and/or improve the quality of one’s existing answers. Generally up to and including the IB exams I took at the end of high school, for most subjects there was time left over at the end; once at Imperial most exams tended to be very fast-paced – I would either finish them with typically fewer than ten minutes on the clock, or not finish them at all. (There were exceptions, such as first-year Logic, second-year Concurrency and fourth-year Modal Logic.)

A speedrun of an exam would involve completing the questions as quickly as possible; perhaps using much less time than is given. In constructing a metric for this, it becomes apparent that incorrect answers need to be penalised in some way (it would otherwise be feasible to leave all questions blank and immediately stop the clock). However, ideally the penalties would not be too harsh; for example, invalidating performances with any answers incorrect would address our first concern, but would not be enjoyable at all. It also seems awkward to invalidate an entire run on the basis of not knowing, say, a single fact or definition.

There are two obvious raw metrics for performance of a speedrun of an exam:

  • the proportion of total marks scored, M \in [0, 1] where higher is better;
  • the proportion of time taken, T \in [0, 1] where lower is better.

Combining those metrics, I think the following metric for an overall performance P is fairly intuitive.

P_0 = M/T

In a sense we’re measuring efficiency with respect to time, against a benchmark student who uses all of the time and scores 100% of the marks. However, I don’t think this is a good metric, because it can readily be abused; a strategy that quickly determines the easiest mark(s) to score on the paper and then attempts only those marks will do very well (note: not necessarily questions; for example, “find the roots of f(x) = 36x^4 - 23x^3 - 31x^2 - 102x” is a rather nasty question, but there’s one obvious answer that drops out in a second or two).

Of course, a way around this could be that scripts must be checked at the end, to ensure that there’s a reasonable bona fide attempt at each and every problem. However, that requires manual verification. An alternative option could be to introduce a minimum mark threshold m; attempts that have M < m are invalidated.

P_1 = \begin{cases} 0 & M < m \\ M/T & M \geq m \end{cases}

This metric is a decent improvement, though it still has some issues:

  • A strategy that seeks to identify the easiest marks (worth a proportion m of the total) and then attempts only those would perform well. This can be mitigated if m is fairly high; for example, for A level mathematics papers I would set something like m = 0.93 or so.
  • Also, if m is set too high (e.g. m = 1 is the aforementioned “invalidate all non-perfect runs” strategy), too many runs, including runs that fail because of unknown facts or definitions, may be invalidated.

We can also consider continuous penalty functions based on M, whose penalties grow more harshly than linearly as M decreases from 1. For example,

P_2 = \max \left\lbrace 0, \dfrac{1 - 2(1 - M)}{T} \right\rbrace

Thus, a run with M = 0.7 for a given time has its score reduced to 40 percent of another run with the same time but M = 1. The max from 0 could be left out, if one wished to give negative scores to runs with M < 0.5 though I think that’s fairly harsh.

There’s also no reason we should restrict ourselves to linear functions. Consider

P_3 = M^\alpha / T, \alpha > 1

Higher values of \alpha will penalise flaws more heavily; consider two runs with the same time, but one having M = 0.9 and the other M = 1; with \alpha = 2 the imperfect run scores 81 percent the score of the perfect run, but with \alpha = 10 the imperfect run scores a mere 34.9 percent the score of the perfect run! Also, observe that as \alpha \rightarrow \infty we approach P_1 with a 100 percent threshold.

Of course, we also have the exponential option:

P_4 = e^{-\lambda (1 - M)} / T, \lambda > 0

In approaches 3 and 4, each additional mistake is penalised more. I think this makes sense for papers where candidates who understand the concepts can be expected to lose most or all of the points they lose to careless errors.

One or two slips should, in my opinion, result in a notable performance hit relative to perfect runs, but the run should still be salvageable; more than a few would seem like grounds for invalidating the run. It could be possible to blend the threshold idea (approach 1) with either approach 3 or 4, though we could argue that the heavy penalties involved would already destroy runs with “more than a few” errors.
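To see how the different penalty schemes compare, here is a quick sketch; the threshold m = 0.93 comes from the text, while α = 10 and λ = 10 are just illustrative choices of my own.

```python
# Compare the speedrun metrics P0..P4 for runs with the same time but different M.
from math import exp

def p0(m, t):              return m / t
def p1(m, t, thresh=0.93): return m / t if m >= thresh else 0.0
def p2(m, t):              return max(0.0, (1 - 2 * (1 - m)) / t)
def p3(m, t, alpha=10):    return m**alpha / t
def p4(m, t, lam=10):      return exp(-lam * (1 - m)) / t

t = 0.5  # same time for each run, to isolate the effect of lost marks
for m in (1.0, 0.95, 0.9, 0.7):
    print(f"M={m:.2f}: P0={p0(m, t):.2f} P1={p1(m, t):.2f} "
          f"P2={p2(m, t):.2f} P3={p3(m, t):.2f} P4={p4(m, t):.2f}")
```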

 

The University Gatekeepers (Hotseat: MAT 2013)

Background

I didn’t have to take the Mathematics Admissions Test when I applied to Imperial (though the Cambridge STEP was a part of the conditions of my offer, probably because I didn’t take FM). The MAT was added to Imperial’s admissions process in 2013, though seemingly only for students aiming to read Mathematics. Oxford has been running this test for quite some time, and uses it both for Mathematics and for Computer Science. Unlike STEP, candidates don’t get a choice of questions, but instead are prescribed questions based on the programme they’re applying to.

The paper has seven questions and candidates need to answer five. All candidates have to answer question 1, which is actually ten multiple-choice questions worth 4 marks each (no penalties for guessing). There are then six 15-mark questions that follow. Candidates in Computer Science generally have two questions that the broader Mathematics population also takes – in 2013 this featured an algebra question on functional equations, and a discrete mathematics/combinatorics question. There are two further questions, one of which tends to incorporate a significant element of mathematical logic (2013’s problem revolved around the notion of common knowledge, much like the infamous Cheryl’s birthday question), and another that deals with formal languages and reasoning about similar structures (the 2013 problem focused on an inductively generated language).

I did the Computer Science stream of the paper, and managed to put together a score of 95 (40 on the first problem; 14 on Q2, 15 on Q5, 15 on Q6 and 11 on Q7) owing to a lucky (or, depending on how you see it, educated) guess on one of the multiple-choice problems. The “Mathematics-only” questions Q3 and Q4 didn’t seem any harder, though; I cleared both of them without much difficulty (whereas Q2 was somewhat nastier, in general). I lost the points on Q7 owing to a careless error; specifically, that if an expression Bw is reversed, the resultant expression is certainly not wB (well, unless w is palindromic!)

The specific paper I did is here.

Selected Problems in Depth

Problem 1: The Gauntlet

I don’t find multiple choice questions to play very well with mathematics, because it’s very often possible to reverse engineer a solution or otherwise arrive at the correct answer via an unexpected solution path. For example, consider problem G:

p_n(x) = (x - 1) + (x - 2) + \ldots + (x - n)

What is the remainder when p_n(x) is divided by p_{n-1}(x)?

Presumably, the intended solution path was to express p_n(x) in a more convenient form (noting the arithmetic progression in the constant terms) and then actually do the division. However, given the options, it sufficed to simply test it with p_3(x) and p_2(x), noting that the remainder was negative, and that only one of the options was negative.
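The shortcut is easy to confirm with sympy; the remainder works out to -n/2 for every n, so any small case exposes the single negative option.

```python
# Check the remainder when p_n(x) is divided by p_{n-1}(x): it comes out as -n/2.
import sympy as sp

x = sp.symbols('x')

def p(n):
    return sum(x - i for i in range(1, n + 1))

for n in (2, 3, 4, 7):
    quotient, remainder = sp.div(p(n), p(n - 1), x)
    print(n, remainder)   # -1, -3/2, -2, -7/2, i.e. -n/2
```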

Nonetheless, there were certainly ample opportunities for trick questions (this paper had E, where the trap was an algebraic expression whose degree was easily bounded above by 7, even though it turned out not actually to be of degree 7).

I got through 9 of the problems fairly smoothly, though struggled with H (failing to realise that one of the equations was actually that of a semicircle, and plowing into aggressive trigonometry-based integration). In the end I ran out of time towards the end of the integration, but I guessed the only option that featured the subtraction of the area under one of the lines, which turned out to be correct.

Problem 2: Functional Equations

This was generally fairly straightforward till the end, where the task was to find some function f that satisfied f(t) - f(1-t) = (2t - 1)^3. In hindsight it seems fairly obvious that a good way of solving this would be to treat f as a cubic polynomial, though I struggled for a while, substituting random values of t into the functional equation to see if anything dropped out. I then went through some messy reasoning along the lines of “the difference between two points increases cubically as the (x-)distance between said points increases linearly, so the derivative is probably quadratic, thus the function is probably cubic”. I then set f(t) = At^3 + Bt^2 + Ct + D and was on my way after a modest algebraic bash.
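The closing coefficient bash can be replicated with sympy as a check (my own verification, not the mark scheme):

```python
# Posit f(t) = A t^3 + B t^2 + C t + D and match coefficients in
# f(t) - f(1 - t) = (2t - 1)^3.
import sympy as sp

t, A, B, C, D = sp.symbols('t A B C D')
f = A*t**3 + B*t**2 + C*t + D
residual = sp.expand(f - f.subs(t, 1 - t) - (2*t - 1)**3)
print(sp.solve([residual.coeff(t, k) for k in range(4)], [A, B, C, D], dict=True))
# -> A = 4 with C = -B - 3 (B and D free); e.g. f(t) = 4t^3 - 3t works:
print(sp.expand((4*t**3 - 3*t) - (4*(1 - t)**3 - 3*(1 - t))))  # 8t^3 - 12t^2 + 6t - 1
```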

Problem 6: Reasoning about Knowledge

This question introduces Alice, Bob and Charlie; Charlie writes a (different) number on each of Alice’s and Bob’s foreheads, the two numbers being consecutive. Thus Alice and Bob can each see the other’s number, and Charlie can see both. The question begins with a little warm-up to prompt candidates into thinking about epistemic reasoning (if Alice knows her number right off the bat, it must be 2 – because she can see Bob’s 1; for any other number she wouldn’t be able to know). Things heat up a little later on:

  • Charlie: Your numbers are between 1 and 10 inclusive.
  • Alice: I don’t know my number. Is my number a square number?
  • Charlie: If I told you, you would know your number.
  • Bob: I don’t know my number.

What is Alice’s number?

We start by noting that the squares in range are 1, 4 and 9. Since Alice can see Bob’s number, she already knows that her number is one of two possibilities (Bob’s number plus or minus one), spaced two apart. For Charlie’s statement to make sense, exactly one of those two possibilities has to be a square. This tells us that Bob’s number is 2, 3, 5 or 8. (It couldn’t be 10, because if it were, Alice would know that her number is 9.)

Next, Bob can see Alice’s number, but doesn’t know his own. Since his number is A + 1 or A - 1, Alice’s number A must be one for which both A + 1 and A - 1 remain candidates (that is, both lie in {2, 3, 5, 8}). This gives us the answer: her number has to be 4.
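A brute-force check of this chain of reasoning (assuming, as the warm-up implies, that the two numbers are consecutive and lie between 1 and 10) confirms the answer:

```python
# Enumerate all consecutive (Alice, Bob) pairs in 1..10 and apply the dialogue.
from itertools import permutations

SQUARES = {1, 4, 9}
pairs = [(a, b) for a, b in permutations(range(1, 11), 2) if abs(a - b) == 1]

def alice_candidates(b):
    """Numbers Alice could hold, given that she sees Bob's number b."""
    return {a for a, bb in pairs if bb == b}

# Alice doesn't know her number, yet the square/non-square answer would settle it.
stage1 = [(a, b) for a, b in pairs
          if len(alice_candidates(b)) > 1
          and len(alice_candidates(b) & SQUARES) == 1]

# Bob still doesn't know his number: both a - 1 and a + 1 must remain possible.
possible_bobs = {b for _, b in stage1}
stage2 = [(a, b) for a, b in stage1 if {a - 1, a + 1} <= possible_bobs]

print(sorted({a for a, _ in stage2}))   # [4]: Alice's number is 4
```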

This one, in my opinion, stopped at a fairly early level; things could develop a fair bit more. However, assuming proportional time weighting (and bearing in mind that I did write my thesis on epistemic logic), getting through it in the 22.5 minutes candidates would have for it is still reasonably challenging.

Synthesis

I found STEP to be somewhat more demanding, perhaps because it had questions that focused more on depth rather than breadth, and also because I took the computer science stream of the paper while STEP just focuses on mathematics.

The average score of all candidates on questions 1 through 5 was 44.8, with 60.6 for successful applicants. Nonetheless, I subsequently worked through problems 3 and 4 and actually found them considerably easier than 2. Interestingly, there was a candidate who scored between 85 and 89 but was nonetheless rejected; also, no one scored 100, which seems surprising (this would certainly have been possible if not for the careless slip I made on problem 2 – or maybe the marking for some aspects such as the rigor demanded in proofs was stricter than my own marking).

This was quite enjoyable, actually. I’m not sure how I would have fared had I taken this exam in the past! I’d think that my knowledge of logic, as well as the more computer-science oriented topics (discrete mathematics and formal languages) would have improved, though knowledge of say calculus or analysis probably would be slightly weaker than when I finished high school.
