On Teaching and Teaching Assistants

I read an article in the Guardian a few days ago on the Finnish education system, which is often cited in the western world as an example of a successful education system (in spite of some rather angry bold negative deltas in PISA 2015 – however reliable said tests are as a form of assessment). I’ve been through the table-topping Singapore education system, and while it certainly is rigorous (especially in mathematics – I recently looked at an A level Thinking Skills problem-solving question that, chillingly, wouldn’t be too out of place on the Mathematics PSLE in Singapore), there are valid concerns regarding excessive stress levels, teaching not being perceived as a high-profile job, and a lack of time for students to explore things on their own. I can certainly understand a desire not to bring some of these elements into an education system.

The headline message – trust your teachers – is something I can appreciate to some extent, even though I was never a teacher by profession. I had the privilege of being an undergraduate teaching assistant during my third and fourth years at Imperial, and I like to think that the professors and lecturers supervising me placed a lot of trust in me; they certainly gave me a fair bit of latitude in the content I could cover (perhaps not the “unfettered flexibility” mentioned in the article, as I was supposed to teach rather specific modules – Logic, Discrete Mathematics, Reasoning about Programs, and Data Structures and Algorithms).

I was given high-ability groups in both years, and this resulted in advanced tutorials that introduced students to slightly more advanced topics they would see soon (concurrency, temporal logics, optimisation algorithms), as well as stretching their abilities in applying the concepts and knowledge they had learnt (okay, you know what a loop invariant is – how can we extend the idea to nested loops, or to functions containing loops that may themselves be recursive?). I believe these sessions were appreciated, and I did collect feedback on them (though, of course, it’s difficult to be sure how much negative feedback was swept under the rug with these kinds of questionnaires).

Unfortunately, I did also indulge in some “teaching to the test”, explaining strategies for tackling the various exams that were certainly not a part of the syllabus. Thankfully, Imperial’s examinations don’t have too much exploitability here, as far as I can recall; I think much of that effort was spent identifying common pitfalls and explaining how to avoid them (e.g. unnecessary quantifier movement in Logic, and telling students to clearly demonstrate their thought processes even if they couldn’t fully answer the question). Some of this was certainly backed by popular demand, and it did pay off in that my students won multiple prizes. I certainly wasn’t in a position to change the assessment system at Imperial!

I did encounter minimal bureaucracy, mainly around marking the attendance of students (some of this is part of the Home Office’s “expected contacts” requirement for non-EU students). I can’t remember if a DBS check was necessary, though I already had one from year 1, in any case. Thankfully, there was nothing along the scale of what was being described in the article:

Contrast this with the UK, where schools have data managers, where some teachers are told which colour pens to use for marking, and where books are periodically checked to ensure that learning intentions are neatly stuck in place.

I’m not sure that the existence of data managers is necessarily a bad thing (after all, I work for a company that helps others make data-driven decisions!) – but that said, drawing conclusions from data that doesn’t truly reflect students’ abilities (if that is what is going on) is very unlikely to be effective (“garbage in, garbage out” springs to mind).

I did do a stint as a volunteer student helper with a few schools near Imperial as part of the Pimlico Connection programme. Although I didn’t witness said book checks, I certainly did notice teachers explicitly referencing curricular objectives and the level system (this was before the September 2014 changes). Obviously, I’m not sure how representative this is of schools in general. I think the only time I recall encountering anything similar in Singapore was when preparing for the Computer Science IB HL exams.

The article concludes by expanding this notion of trusting individual teachers to societal trends towards trust in general, though not much evidence or data is presented on this. I guess some connections can be drawn to a point raised earlier on relative economic homogeneity. Looking at trust in the UK specifically, there is, interestingly, a series of studies that attempts to collect data on this. Slide 19 on the linked page suggests that 55% of British people trust the British people to “do the right thing”, whatever that entails.

In terms of trusting individual teachers, I’d probably be comfortable with that only if there were a good process for selecting teachers. That’s a difficult problem – simply going for the “best and brightest” in terms of students’ academic results certainly isn’t enough, as the Finnish process acknowledges. We did select on that basis at Imperial, though in some sense the stakes are lower there, as there is a supervisor monitoring the process, and it is still one of the “least worst” indicators available. However, I think once one acquires enough confidence and skill not to struggle with the concepts one is teaching, and can answer students’ questions comfortably (within reason), many other traits become more important. My knowledge of tricky automata theory, my familiarity with theoretical complexity classes, and for that matter my ability to knock in a 96 in the Logic and Reasoning about Programs exams (as opposed to, say, an 85 or 90) were generally not directly relevant to doing my job as a teaching assistant!

Running the Gauntlet (Hotseat: OCR Mathematics C1-C4)

Background

The GCE A Level is a school-leaving qualification that students in the UK take at the end of high school. Students usually take exams in 3-4 subjects. The exams are graded on a scale from A* to U (though not every letter in between is used); typically, an A* is awarded to roughly the top 8-9 percent of candidates.

This is a rather different type of challenge – previous installments of this series have featured especially difficult exams (or rather, competitions; only the MAT is, realistically speaking, an exam). I’ve usually struggled to finish within the time limit (I didn’t finish the AIME and barely finished the MAT; I had some spare time on BMO R1, but still not that much). I could of course do this in the same way as the other tests, though the score distribution would likely sit close to the ceiling, with variation largely down to careless mistakes.

Interestingly, the UK has multiple exam boards, so for this discussion we’ll be looking at OCR, which here stands not for Optical Character Recognition, but for Oxford, Cambridge and RSA (the Royal Society of Arts). The A level Maths curriculum is split into five strands: core (C), further pure (FP), mechanics (M), statistics (S) and decision (D); each strand features between two and four modules, which generally are part of a linear dependency chain – apart from FP, where FP3 is not dependent on FP2 (though it still is dependent on FP1). For the Mathematics A level, students need to take four modules from the core strand, and two additional “applied” modules; Further Mathematics involves two of the FP strand modules plus any four additional modules (but these cannot overlap with the mathematics A level ones). Thus, a student pursuing a Further Mathematics A level will take 12 distinct modules, including C1 – C4 and at least two FP modules, for example C1-4, FP{1,3}, S1-4, D1 and M1.

(In high school I took the IB diploma programme, which did have Further Mathematics (FM), though I didn’t take it as I picked Computer Science instead. That was before Computer Science became a group 4 subject; even then, I think I would still have wanted to do Physics, and thus would not have taken FM in any case.)

Setup

I attempted the June 2015 series of exams (C1 – C4). Each paper is set for 90 minutes and features roughly seven to ten multi-part questions. The overall maximum mark is 72 (a bit of a strange number; perhaps to give 1 minute 15 seconds per mark?). To make things a little more interesting, we define a performance metric

P = \dfrac{M^2}{T}

where M is the proportion of marks scored, and T is the proportion of time used. For example, scoring 100 percent in half of the time allowed results in a metric of 2; scoring 50 percent of the marks using up all of the time yields a metric of 0.25. The penalty is deliberately harsher than proportional, to limit the benefit of gaming the system (i.e. finding the easiest marks and only attempting those questions).

Most of the errors were the result of arithmetical or algebraic slips (there weren’t any questions I didn’t know how to answer, though I did make a rather egregious error on C3, and stumbled a little on C4 by trying a complex substitution for an integral rather than preprocessing the integrand). There are a few things I noted:

  • The scores for the AS-level modules (C1, C2) were considerably higher than that for the A-level modules (C3, C4). This is fairly expected, given that students only taking AS Mathematics would still need to do C1 and C2. Furthermore, from reading examiners’ reports the expectation in these exams is that students should have enough time to answer all of the questions.
  • The score for C1 was much higher than that for C2. I think there are two reasons for this – firstly, C1 is meant to be an introductory module; and secondly, no calculators are allowed in C1, meaning that examiners have to allocate time for students to perform calculations (which as far as I’m aware is something I’m relatively quick at).
  • The score for C4 was actually slightly higher than that for C3 (contrary to a possibly expected consistent decrease). While there is meant to be a linear progression, I certainly found the C3 paper notably tougher than that for C4 as well. That said, this may come from a perspective of someone aiming to secure all marks as opposed to the quantity required for a pass or an A.

We also see the penalty effect of the metric kicking in; it might be down to mental anchoring, but observe that perfect performances on C1 and C2 in the same amount of time would have yielded performance numbers just above 5 and 3, respectively.

Selected Problems in Depth

C3, Question 9

Given f(\theta) = \sin(\theta + 30^{\circ}) + \cos(\theta + 60^{\circ}), show that f(\theta) = \cos(\theta) and that f(4\theta) + 4f(2\theta) \equiv 8\cos^4\theta - 3. Then determine the greatest and least values of \frac{1}{f(4\theta) + 4f(2\theta) + 7} as \theta varies, and solve the equation, for 0^{\circ} \leq \alpha \leq 60^{\circ},

\sin(12\alpha + 30^{\circ}) + \cos(12\alpha + 60^{\circ}) + 4\sin(6\alpha + 30^{\circ}) + 4\cos(6\alpha + 60^{\circ}) = 1

This might have appeared a little intimidating, though it isn’t too bad if worked through carefully. The first expression is derived fairly quickly by using the addition formulas for sine and cosine. I then wasted a bit of time on the second part by trying to be cheeky and applying De Moivre’s theorem (so, for instance, \cos(4\theta) is the real part of e^{i(4\theta)} which is the binomial expansion of (\cos \theta + i \sin \theta)^4), subsequently using \sin^2 x = 1 - \cos^2 x where needed. This of course worked, but yielded a rather unpleasant algebra bash that could have been avoided by simply applying the double angle formulas multiple times.
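For reference, the double angle route is quick: writing everything in terms of \cos\theta,

\begin{aligned} \cos 4\theta &= 2\cos^2 2\theta - 1 = 2(2\cos^2\theta - 1)^2 - 1 = 8\cos^4\theta - 8\cos^2\theta + 1 \\ 4\cos 2\theta &= 8\cos^2\theta - 4 \end{aligned}

and summing the two lines gives f(4\theta) + 4f(2\theta) = 8\cos^4\theta - 3 as required.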

The “range” part involved substitution and then reasoning on the range of \cos^4\theta (to be between 0 and 1). The final equation looked like a mouthful; using the result we had at the beginning yields

f (12 \alpha) + 4 f (6 \alpha) = 1

and then using a substitution like \beta = 3 \alpha, we can reduce the equation to 8 \cos^4 \beta - 3 = 1. We then get \cos \beta = \pm \left( \frac{1}{2} \right)^{(1/4)} and we can finish by dividing the values of \beta by 3 to recover \alpha.

C4, Question 6

Using the quotient rule, show that the derivative of \frac{\cos x}{\sin x} is \frac{-1}{\sin^2x}. Then show that

\displaystyle \int_{\frac{1}{6}\pi}^{\frac{1}{4}\pi} \dfrac{\sqrt{1 + \cos 2x}}{\sin x \sin 2x} \,dx = \dfrac{1}{2}\left(\sqrt{6} - \sqrt{2}\right)

The first part is easy (you’re given the answer, and even told how to do it). The second was more interesting; my first instinct was to attempt to substitute t = \sqrt{1 + \cos 2x} which removed the square root, but it was extremely difficult to rewrite the resulting expression in terms of t as opposed to x. I then noticed that there was a nice way to eliminate the square root with \cos 2x = 2 \cos^2 x - 1. The integrand then simplifies down into a constant multiple of \frac{-1}{\sin^2x}; using the first result and simplifying the resultant expression should yield the result. That said, I wasted a fair bit of time here with the initial substitution attempt.
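Explicitly: with \cos 2x = 2\cos^2 x - 1 and \sin 2x = 2 \sin x \cos x (and \cos x > 0 on this interval), the integrand becomes

\dfrac{\sqrt{2\cos^2 x}}{\sin x \sin 2x} = \dfrac{\sqrt{2} \cos x}{2 \sin^2 x \cos x} = -\dfrac{\sqrt{2}}{2} \cdot \dfrac{-1}{\sin^2 x}

so, by the first part, the integral evaluates to -\frac{\sqrt{2}}{2} \left[ \frac{\cos x}{\sin x} \right]_{\pi/6}^{\pi/4} = -\frac{\sqrt{2}}{2}(1 - \sqrt{3}) = \frac{1}{2}\left(\sqrt{6} - \sqrt{2}\right).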

Meta-Analysis

To some extent this is difficult, because students don’t generally do A levels in this way (for very good reasons), and I’m sure there must be students out there who could similarly blast through the modules in less than half the time given, or better (but there is no data on this). Nonetheless, the A level boards usually publish Examiners’ Reports, which can be fairly interesting to read through, though they are generally lacking in data. The C3 report was fairly rich in detail, though; my 68/72 there was actually not too great (notice that “8% of candidates scored 70 or higher”). Indeed, the aforementioned question 9 caused difficulties, though the preceding question 8, on logarithms, was hardest in terms of having the lowest proportion of candidates recording full marks.

Probability and State Machines

Suppose you have a fair six-sided die, and repeatedly toss it. What is the probability that you throw exactly one 4 before your first 6, or you throw exactly one 5 before your first 6 (inclusive or)?
(STEP I Q13, 2015)

There are several ways to approach this problem. There’s a “conventional” method that involves careful use of binomial series (and that was the approach taken in the mark scheme). However, the approach I used for this was somewhat different; some of it may be attributable to my computer science (or perhaps even logic/verification) background.

We can model this event using a finite state machine, where each state has a transition distribution that indicates the probability of transitioning to each other state in the state machine. For example, we can consider a simpler problem: what is the probability that you throw exactly one 4 before your first 6?

Well, we can be in multiple “states”:

  • We could not have rolled anything of significance yet. In this case, we have a 4 in 6 probability of staying in this state (1, 2, 3 or 5); a 1 in 6 probability of immediate failure (rolling a 6), and a 1 in 6 probability of registering a 4.
  • We could have already registered a 4. In this case, we again have a 4 in 6 probability of staying in this state (1, 2, 3 or 5), a 1 in 6 probability of success (rolling a 6), and a 1 in 6 probability of failure (rolling a 4).
  • We could already have succeeded or failed. Additional rolls won’t change anything in these cases.

This can be modelled as a state machine with four states: Start, 4 (one 4 registered), Pass and Fail.

We can then compute a success probability for each state – that is, “given I’m in this state, what’s the probability I succeed”?

  • The success probability PS of the state Pass is 1, and that of Fail is 0; we write this as PS(Pass) = 1 and PS(Fail) = 0.
  • PS(4) is a bit more interesting; it is \frac{4}{6}PS(4) + \frac{1}{6}PS(Pass) + \frac{1}{6}PS(Fail) and solving this yields half. This is intuitively expected; at this point, we’re counting on rolling a 6 before rolling a 4, and on a fair die the probability of that is half.
  • We can now determine PS(Start); this is \frac{4}{6}PS(Start) + \frac{1}{6}PS(4) + \frac{1}{6}PS(Fail). Solving the equation yields the answer of \frac{1}{4}.
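As a quick sanity check on these values (clearly not an option in the exam itself!), here’s a minimal Monte Carlo sketch – the class name and trial count are my own:

```java
import java.util.Random;

// Estimates the probability of rolling exactly one 4 before the first 6.
public class ExactlyOneFour {
    public static void main(String[] args) {
        Random rng = new Random();
        int trials = 10_000_000;
        int successes = 0;
        for (int t = 0; t < trials; t++) {
            int fours = 0;
            while (true) {
                int roll = rng.nextInt(6) + 1; // fair six-sided die
                if (roll == 6) break;          // the first 6 ends the experiment
                if (roll == 4) fours++;        // count 4s seen before it
            }
            if (fours == 1) successes++;
        }
        // Prints roughly 0.25, matching PS(Start) = 1/4.
        System.out.println((double) successes / trials);
    }
}
```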

Note that we could only use this approach because, with the way we’ve defined the states, the process is memoryless; that is, the transition probabilities depend only on the current state. We could try to directly construct a state machine for the original question (either exactly one 4 or exactly one 5 before the first 6, or both), though it seems that the requirement of memorylessness makes things somewhat complex; we would need states tracking whether we experienced zero, one, or two-plus of each number.

We can use a well-known probability identity here, though: P(A \cup B) = P(A) + P(B) - P(A \cap B), defining rolling exactly one 4 before the first 6 as event A, and the corresponding event with a 5 as event B. Furthermore, our initial exploration already yielded P(A) = \frac{1}{4}, and P(B) is also \frac{1}{4} by symmetry. We thus, instead, construct a state machine for the intersection, where both conditions need to be satisfied.

  • Throughout, a 1, 2 or 3 does nothing and simply puts us back in the state we were in.
  • As we start, we can also roll a 4 or 5, which mean we’ve got the one 4 or 5 we want; or we can roll a 6, which is an immediate failure.
  • Once we have a 4 registered, we want to roll a 5 which gets us into the (4, 5) state. 6 remains an immediate failure, but 4 also now becomes one (remember that we want to have exactly one 4 and exactly one 5 before our 6).
  • The above logic also holds for 5s, except with 4 and 5 swapped.
  • Finally, in the (4, 5) state, rolling a 6 yields success, but either a 4 or a 5 would become immediate failures.

The result is a machine with six states: Start, 4, 5, (4, 5), Pass and Fail.

We could use the same method of success probabilities used earlier; I’ll take a little shortcut.

  • PS(4,5) must be \frac{1}{3} as we’re counting on a 6 being rolled before either a 4 or a 5, and with a fair die each of these has equal probability of being the first of the set to be rolled.
  • PS(5) is \frac{1}{6}PS(4,5) + \frac{2}{6}PS(Fail) + \frac{3}{6}PS(5), which gives us PS(5) = \frac{1}{9}. By symmetry, we can be sure that PS(4) = \frac{1}{9} too.
  • PS(Start) is \frac{3}{6}PS(Start) + \frac{1}{6}PS(4) + \frac{1}{6}PS(5) + \frac{1}{6}PS(Fail). Solving that yields PS(Start) = \frac{2}{27}.
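Explicitly, the last equation rearranges to

\dfrac{1}{2} PS(Start) = \dfrac{1}{6} \cdot \dfrac{1}{9} + \dfrac{1}{6} \cdot \dfrac{1}{9} = \dfrac{1}{27} \quad \Rightarrow \quad PS(Start) = \dfrac{2}{27}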

We’re not done yet, though, as this isn’t what we wanted. We wanted P(A \cup B); but now we can apply the identity and get

P(A \cup B) = P(A) + P(B) - P(A \cap B) = \dfrac{1}{4} + \dfrac{1}{4} - \dfrac{2}{27} = \dfrac{23}{54}

which is the same answer obtained by the STEP problem-setters, though they used binomial series instead.

The above approach can of course be generalised to handle expected values (there can be multiple distinct terminal states, with each terminal assigned a value). For example, if a player won a consolation prize of $21 on “busting out” (rolling a 6 before rolling either a 4 or a 5) and a full $60 on winning, and we wanted to determine the value of the game, we could draw the graph with three terminals instead – Win ($60), Bust ($21) and Fail ($0, covering the remaining losing outcomes).

We would then perform our computations on the expected value (EV) of each state, with terminals being exactly their value.

Also, though our examples had no cycles between distinct states (only self-loops), in general there could be such cycles. Of course, these can be handled by solving the multiple PS- or EV-equations simultaneously.
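As a sketch of how this might look in code (my own generic formulation): treat the machine as an absorbing Markov chain and iterate the expectations to a fixed point, which handles self-loops and larger cycles alike without any explicit equation-solving.

```java
public class StateMachineEV {
    // p[i][j] is the transition probability from state i to state j; terminal
    // states are exactly those with p[i][i] == 1.0, and value[] holds their payoffs.
    static double[] solve(double[][] p, double[] value) {
        int n = p.length;
        double[] ev = value.clone();
        for (int iter = 0; iter < 1000; iter++) { // ample for convergence here
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                if (p[i][i] == 1.0) { next[i] = value[i]; continue; } // terminal
                for (int j = 0; j < n; j++) next[i] += p[i][j] * ev[j];
            }
            ev = next;
        }
        return ev;
    }

    public static void main(String[] args) {
        // States: 0 = Start, 1 = one 4 seen, 2 = one 5 seen, 3 = (4, 5),
        // 4 = Win ($60), 5 = Bust ($21), 6 = Fail ($0).
        double[][] p = {
            {3/6.0, 1/6.0, 1/6.0, 0, 0, 1/6.0, 0},
            {0, 3/6.0, 0, 1/6.0, 0, 0, 2/6.0},
            {0, 0, 3/6.0, 1/6.0, 0, 0, 2/6.0},
            {0, 0, 0, 3/6.0, 1/6.0, 0, 2/6.0},
            {0, 0, 0, 0, 1, 0, 0},
            {0, 0, 0, 0, 0, 1, 0},
            {0, 0, 0, 0, 0, 0, 1},
        };
        double[] payoffs = {0, 0, 0, 0, 60, 21, 0};
        System.out.println(solve(p, payoffs)[0]); // value of the game from Start
    }
}
```

Setting the terminal payoffs to 1 for Pass and 0 for everything else instead reproduces the success probabilities computed above (about 0.074, i.e. 2/27, from Start).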

A Metric for Speedrunning Exams

In gaming, a speedrun is a playthrough of a game where one attempts to complete it as quickly as possible, perhaps subject to some constraints (such as 100% completion of all sidequests). I wouldn’t say I’m skilled or fast enough to aggressively focus on speed in many of the tests I’ve covered; I didn’t finish the AIME, and while I did pretty much “complete” the BMO and MAT within their time limits, it wasn’t with much time to spare at all.

Typically, when taking exams where one expects to finish comfortably (and even when one does not), I find it’s rarely a good idea to go as fast as possible; mistakes are likely to be made, and one typically wants to secure the marks for questions one is able to answer. I’d also almost always use up almost all of the time; there’s typically no reason not to go back and check one’s past work, and/or improve the quality of one’s existing answers. Up to and including the IB exams I took at the end of high school, for most subjects there was time left over at the end; at Imperial, most exams tended to be very fast-paced – I would either finish with typically fewer than ten minutes on the clock, or not finish at all. (There were exceptions, such as first-year Logic, second-year Concurrency and fourth-year Modal Logic.)

A speedrun of an exam would involve completing the questions as quickly as possible; perhaps using much less time than is given. In constructing a metric for this, it becomes apparent that incorrect answers need to be penalised in some way (it would otherwise be feasible to leave all questions blank and immediately stop the clock). However, ideally the penalties would not be too harsh; for example, invalidating performances with any answers incorrect would address our first concern, but would not be enjoyable at all. It also seems awkward to invalidate an entire run on the basis of not knowing, say, a single fact or definition.

There are two obvious raw metrics for performance of a speedrun of an exam:

  • the proportion of total marks scored, M \in [0, 1] where higher is better;
  • the proportion of time taken, T \in [0, 1] where lower is better.

Combining those metrics, I think the following metric for an overall performance P is fairly intuitive.

P_0 = M/T

In a sense we’re measuring efficiency with respect to time, against a benchmark student who uses all of the time and scores 100% of the marks. However, I don’t think this is a good metric, because it can readily be abused; a strategy that quickly determines the easiest mark(s) to score on the paper and then attempts only those marks will do very well (note: not necessarily questions; for example, “find the roots of f(x) = 36x^4 - 23x^3 - 31x^2 - 102x” is a rather nasty question, but there’s one obvious answer that drops out in a second or two).

Of course, a way around this could be that scripts must be checked at the end, to ensure that there’s a reasonable bona fide attempt at each and every problem. However, that requires manual verification. An alternative option could be to introduce a minimum mark threshold m; attempts that have M < m are invalidated.

P_1 = \begin{cases} 0 & M < m \\ M/T & M \geq m \end{cases}

This metric is a decent improvement, though it still has some issues:

  • A strategy that seeks to identify the m easiest marks to obtain and then attempts only those would perform well. This can be mitigated if m is fairly high; for example, for A level mathematics papers I would set something like m = 0.93 or so.
  • Also, if m is set too high (e.g. m = 1 is the aforementioned “invalidate all non-perfect runs” strategy), too many runs – including runs that fail only because of an unknown fact or definition – may be invalidated.

We can also consider continuous penalty functions of M that fall away more harshly than M itself as M decreases from 1. For example,

P_2 = \max \left\lbrace 0, \dfrac{1 - 2(1 - M)}{T} \right\rbrace

Thus (since 1 - 2(1 - M) = 2M - 1), a run with M = 0.7 in a given time has its score reduced to 40 percent of that of another run with the same time but M = 1. The max with 0 could be left out if one wished to give negative scores to runs with M < 0.5, though I think that’s fairly harsh.

There’s also no reason we should restrict ourselves to linear functions. Consider

P_3 = M^\alpha / T, \alpha > 1

Higher values of \alpha will penalise flaws more heavily; consider two runs with the same time, but one having M = 0.9 and the other M = 1; with \alpha = 2 the imperfect run scores 81 percent the score of the perfect run, but with \alpha = 10 the imperfect run scores a mere 34.9 percent the score of the perfect run! Also, observe that as \alpha \rightarrow \infty we approach P_1 with a 100 percent threshold.

Of course, we also have the exponential option:

P_4 = e^{-\lambda (1 - M)} / T, \lambda > 0

In approaches 3 and 4, each additional mistake is penalised more. I think this makes sense for papers where candidates who understand the concepts can be expected to lose most or all of the points they lose to careless errors.

One or two slips should, in my opinion, result in a notable performance hit relative to perfect runs, but the run should still be salvageable; more than a few would seem like grounds for invalidating the run. It could be possible to blend the threshold idea (approach 1) with either approach 3 or 4, though we could argue that the heavy penalties involved would already destroy runs with “more than a few” errors.
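To make the comparison concrete, here’s a small sketch tabulating the candidate metrics for a few values of M at a fixed T (the function names follow P_0 to P_4 above; the specific m, \alpha and \lambda are illustrative choices of mine):

```java
public class SpeedrunMetrics {
    static double p0(double M, double T) { return M / T; }
    static double p1(double M, double T, double m) { return M < m ? 0 : M / T; }
    static double p2(double M, double T) { return Math.max(0, (1 - 2 * (1 - M)) / T); }
    static double p3(double M, double T, double alpha) { return Math.pow(M, alpha) / T; }
    static double p4(double M, double T, double lambda) { return Math.exp(-lambda * (1 - M)) / T; }

    public static void main(String[] args) {
        double T = 0.5; // used half of the allotted time
        for (double M : new double[] {1.0, 0.9, 0.7}) {
            System.out.printf("M=%.1f: P0=%.2f P1=%.2f P2=%.2f P3=%.2f P4=%.2f%n",
                    M, p0(M, T), p1(M, T, 0.93), p2(M, T), p3(M, T, 2), p4(M, T, 3));
        }
    }
}
```

With these parameters, a run at M = 0.9 keeps 81 percent of the perfect run’s score under P_3, but is wiped out entirely under P_1 – which is exactly the trade-off discussed above.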


Portmanteau (Spliterator)

For the most part, classes in the standard Java libraries consist of fairly normal words (e.g. a ScheduledExecutorService is an ExecutorService – a service that manages the execution of tasks – which allows for scheduling). There are exceptions (and I don’t mean the Java ones), of course, such as the OMGVMCID.

Java 8 (specifically JSR-335) brought about a bunch of new classes, including some of my Guava favourites (the function package, with Supplier and friends, and Optional) as well as Streams. Most of these, again, have fairly normal-sounding names; a few might sound a bit hacked together, like BiConsumer, but I think the meaning is generally intuitive (a consumer of two arguments). There is one that was initially fairly puzzling, though – “Spliterator” (rendered as one word). The obvious reading is as a portmanteau of “split” and “iterator”, but I wasn’t sure at first how useful it would be. Of course I have written code to partition collections or other iterables, but this was usually done on an ad-hoc basis.

The API seems designed around easily scheduling parallel computation. The key method here is trySplit(). To quote the Java API docs:

If this spliterator can be partitioned, returns a Spliterator covering elements, that will, upon return from this method, not be covered by this Spliterator.

I’ll go over this in a bit more detail.

  • If the spliterator can currently be split, then trySplit() returns a new spliterator, and this “takes over” some of the elements in our existing spliterator. Our existing spliterator, if traversed directly after this method returns (e.g. with tryAdvance()), will not have these elements.
  • Otherwise, trySplit() returns null. Our existing spliterator retains its elements.

For example, the sketch below shows one way of doing a parallel file deletion, starting from one spliterator and splitting it as necessary.
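This is my own reconstruction rather than the original code – the MIN_BATCH threshold in particular is an arbitrary choice:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Spliterator;
import java.util.concurrent.RecursiveAction;

// Deletes the files covered by a spliterator, splitting while it seems worthwhile.
class DeleteTask extends RecursiveAction {
    private static final long MIN_BATCH = 100; // don't bother splitting below this
    private final Spliterator<Path> files;

    DeleteTask(Spliterator<Path> files) {
        this.files = files;
    }

    @Override
    protected void compute() {
        Spliterator<Path> half =
                files.estimateSize() > MIN_BATCH ? files.trySplit() : null;
        if (half != null) {
            // The new spliterator has taken over some of our elements;
            // process the two halves in parallel.
            invokeAll(new DeleteTask(half), new DeleteTask(files));
        } else {
            // Small enough (or unsplittable): delete sequentially.
            files.forEachRemaining(path -> {
                try {
                    Files.deleteIfExists(path);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}
```

Given some List<Path> files, new java.util.concurrent.ForkJoinPool().invoke(new DeleteTask(files.spliterator())) kicks the whole thing off.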

Viewing this in relation to parallel programming, there are obvious similarities to fork-join based workflows, especially in cases where the work involves a map or filter, or doing some reduction that’s commutative and associative. A traditional way of writing the above wouldn’t be too different; you might write some logic that splits the files into batches (which now lives in the spliterator), and then similarly dispatch deletion jobs to an ExecutorService.

From the above, it seems that one notable difference is where the responsibility for deciding how the stream should be split lives: the Spliterator itself has some say in whether it should split. To some extent this makes sense; in cases where we want batches to have rather specific sizes (e.g. if performance is highly sensitive to elements being processed in specific batch sizes), we can guard trySplit() with suitable calls to the estimateSize() or getExactSizeIfKnown() methods. This can also be useful for avoiding fiddly edge cases (where our size bounds pass, but we must work with fewer than a full batch of elements, for instance).

Spliterators are also useful because they carry characteristics that describe, among other things, how the elements will be traversed. For example (though this doesn’t quite apply to the file deletion example), we may be able to use a certain optimisation if we can assume that we will see the data in sorted order; spliterators carry this information (whereas with a standard Iterator we might not be able to do this easily without additional bookkeeping). The same might apply where we can do better knowing elements are unique. The collections API also generates spliterators with the correct characteristics (e.g. a NavigableSet would give you a spliterator that already reports the SORTED and DISTINCT characteristics).

I’m still not sure why this name was decided upon, as opposed to, say, SplittableIterator. Perhaps this was done in the interest of terseness and because the decomposition is fairly obvious, though the name still really doesn’t look like a proper word to me. It does seem that spliterators have use cases beyond standard iterators as well, even when parallel computation is out of the picture – the characteristics and ability to dynamically switch on them seems potentially useful. That said, I still have not used a Spliterator directly in my professional work, and have no intention of doing so unless it fits the task I’m doing!

Volatile Variables (2017 Q2 Review)

It feels like Q2 went by very quickly, perhaps because I’m settling in somewhat into my team at Palantir, and many things have been relatively stable this quarter.

Looking at GitHub statistics, I merged 29 pull requests over the past three months (one up from Q1). I think one thing that has certainly improved is the size of my pull requests; I’ve made a conscious effort to split features into several reasonably-sized chunks, including chains of four or five PRs at times (and some of these can run into the mid-to-high hundreds of lines each – suggesting that we would easily be looking at a 3,000-line monster again if we didn’t do this split). I didn’t check the data, but it’s quite likely I committed less code in Q2 than in Q1; some of this stems from being more involved in reviewing others’ pull requests, as well as dicey debugging (where days can be spent figuring out a very small tweak to make things work). That’s certainly not a problem, though – one of the PRs I’m happiest with involved a net reduction in the size of the codebase.

Other professional goals have been going acceptably well. I’ve continued to keep on keeping on as far as interviewing is concerned. Technical development has generally chugged on, though at a slower pace; my paper reading groups have slowed down a fair bit, for various reasons (friends were travelling or otherwise unavailable), but hopefully that should pick back up in Q3. The academic side of things is a bit of a brighter spot as well; a paper on symbolic model checking for LDLK was accepted at IJCAI 2017, and I also cleaned up and released version 1.3.0, which includes support for CTL*K.

I think in Q3 I would aim to merge at least thirty pull requests (I know, it’s a blunt metric). The paper reading groups should continue. I also want to continue my “Imperial Computing Comprehensives” project; that is, an extra-difficult exam paper that crosses disciplines between all compulsory first- and second-year modules at Imperial. (I think my average module mark at Imperial was probably just over 90, but I’m trying to set the paper so that I would score a low first – around 75.)

Financially, my portfolio increased by 0.3 percent from Q1 to Q2 (it would actually have decreased if not for reinvested dividends). I think I’ve been decently successful in not worrying excessively about the vicissitudes of the market; in early April, as well as at the very end of June, my portfolio took beatings of more than a thousand pounds, but I wasn’t too concerned. I’m somewhat skeptical that the bull market will keep going, but I’m not complaining.

Focusing more on things within my sphere of influence here, notable changes in the Q2 budget involved a drop-off in transport expenditure (as I started really taking walking to work more seriously) and… not much else. There was one big luxury purchase in Q1 (laptop), as well as in Q2 (IJCAI travels). Generally, things were fairly tight though not as tight as in Q1, especially since I didn’t do any specific budget challenges (these usually take place in February). I’m planning on loosening this a little in Q3, for various reasons, including anxiety about valuations in the market, possible travel and a desire to have some better food as well. I can cost the ingredients of my lunch today at 39p!

I’ve started taking up competitive puzzle-solving in my spare time as well. These involve solving logical puzzles against the clock (both well-known ones like Sudoku, Kakuro or Slitherlink, as well as some that are written specifically for a competition – like the Odd Sums Sudoku I discussed a while back). I’m still pretty bad at these, but we’ll see how things go. Generally I find that my logical skills are decent, but my spatial reasoning is not great; I struggle a lot with loop puzzles, or toroidal sudoku (basically like normal sudoku, except regions are irregularly shaped and can wrap around). I’ve continued to practice my singing as well – those A4s are still stable; B4s and C5s are still a bit flaky.

I travelled a couple of times in Q2 – Aarhus, Palo Alto, and then Zagreb (wow, literally from A to Z – I can think of Zurich which is lexicographically later than Zagreb, though can’t think of anywhere before Aarhus). I enjoyed each trip for various reasons, though both Aarhus and Zagreb were whirlwind trips in a sense (flew in in the morning, and flew out the next morning). I’d be happy to revisit all of these places, and I think I will with quite high probability actually. All of these trips were for work though; I didn’t travel anywhere for personal recreation.

In terms of music I’ve been listening to, this quarter seemed to take on a relatively darker mood (though my Q1 review featured James Arthur’s Back from the Edge, which isn’t exactly very bright either). There are parts of Lukas Graham’s 7 Years that I appreciate, including fair sections of the lyrical content. However, I’ve always found the tone of the lead vocalist a little strange, and the production awkward (there’s a part where the band is cheered, which doesn’t really make sense to me; I see the song more as one of reflection and acknowledgement of uncertainty, rather than of triumph). That said, there was an acoustic piano cover of this that I enjoyed; Schuller’s voice is clean and pleasant, and I find the delivery more appropriate.

There is of course this part of the song that I identify with to some extent, though… it’s a little complicated.

I only see my goals, I don’t believe in failure

I like the ambition and drive, but it can be dangerous if said goals are poorly set. Resilience to failure is of course very important and something I’m still trying to get better at, but it’s important to respect failed attempts for what they are – and learn from them, where appropriate. I think the main lesson here is that it’s important to set good goals, especially if one intends to double down on them. (The second part is less contentious, as long as not believing is interpreted in the right way.)

Let’s push on in Q3.

The University Gatekeepers (Hotseat: MAT 2013)

Background

I didn’t have to take the Mathematics Admissions Test (MAT) when I applied to Imperial (though the Cambridge STEP was a part of the conditions of my offer, probably because I didn’t take Further Mathematics). The MAT was added to Imperial’s admissions process in 2013, though seemingly only for students aiming to read Mathematics. Oxford has been running the test for quite a while, and uses it for both Mathematics and Computer Science. Unlike STEP, candidates don’t get a choice of questions; instead, questions are prescribed based on the programme they’re applying to.

The paper has seven questions and candidates need to answer five. All candidates have to answer question 1, which is actually ten multiple-choice questions worth 4 marks each (no penalties for guessing). There are then six 15-mark questions that follow. Candidates in Computer Science generally have two questions that the broader Mathematics population also takes – in 2013 this featured an algebra question on functional equations, and a discrete mathematics/combinatorics question. There are two further questions, one of which tends to incorporate a significant element of mathematical logic (2013’s problem revolved around the notion of common knowledge, much like the infamous Cheryl’s birthday question), and another that deals with formal languages and reasoning about similar structures (the 2013 problem focused on an inductively generated language).

I did the Computer Science stream of the paper, and managed to put together a score of 95 (40 on the first problem; 14 on Q2, 15 on Q5, 15 on Q6 and 11 on Q7), owing in part to a lucky (or, depending on how you see it, educated) guess on one of the multiple-choice problems. The “Mathematics-only” questions Q3 and Q4 didn’t seem any harder, though; I cleared both of them without much difficulty (whereas Q2 was somewhat nastier in general). I lost the points on Q7 owing to a careless error: if an expression Bw is reversed, the resultant expression is not in general wB (well, unless w is palindromic!).

The specific paper I did is here.

Selected Problems in Depth

Problem 1: The Gauntlet

I don’t find that multiple choice questions play very well with mathematics, because it’s very often possible to reverse-engineer a solution, or otherwise arrive at the correct answer via an unexpected solution path. For example, consider problem G:

p_n(x) = (x - 1) + (x - 2) + \ldots + (x - n)

What is the remainder when p_n(x) is divided by p_{n-1}(x)?

Presumably, the intended solution path was to express p_n(x) in a more convenient form (noting the arithmetic progression in the constant terms) and then actually do the division. However, given the options, it sufficed simply to test p_3(x) divided by p_2(x), noting that the remainder was negative, and that only one of the options was negative.
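For completeness, the “proper” route: summing the arithmetic progression gives

p_n(x) = nx - \dfrac{n(n+1)}{2} = \dfrac{n}{n-1} \, p_{n-1}(x) - \dfrac{n}{2}

so the remainder is -\frac{n}{2}, which is indeed negative.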

Nonetheless, there were certainly ample opportunities for trick questions; this paper had E, where the trap was an algebraic expression easily seen to have degree at most 7 – but it wasn’t the case that it actually had degree 7.

I got through nine of the problems fairly smoothly, though I struggled with H (failing to realise that one of the equations was actually that of a semicircle, and plowing into aggressive trigonometry-based integration instead). I ran out of time towards the end of the integration, but guessed the only option that featured the subtraction of the area under one of the lines, which turned out to be correct.

Problem 2: Functional Equations

This was generally fairly straightforward until the end, where the task was to find some function f that satisfied f(t) - f(1-t) = (2t - 1)^3. In hindsight, it seems fairly obvious that a good way of solving this would be to treat f as a cubic polynomial, though I struggled for a while, substituting random values of t into the functional equation to see if anything dropped out. I then went through some messy reasoning along the lines of “the difference between two points increases cubically as the (x-)distance between said points increases linearly, so the derivative is probably quadratic, and thus the function is probably cubic”. I then set f(t) = At^3 + Bt^2 + Ct + D and was on my way after a modest algebraic bash.
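For the record, one function that drops out of such a bash is f(t) = 4t^3 - 6t^2 + 3t, which can be rewritten as \frac{1}{2}(2t-1)^3 + \frac{1}{2}; then

f(t) - f(1-t) = \tfrac{1}{2}(2t-1)^3 - \tfrac{1}{2}(1-2t)^3 = (2t-1)^3

as required (and any function symmetric about t = \frac{1}{2}, including the constant D, can be added to it).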

Problem 6: Reasoning about Knowledge

This question introduces Alice, Bob and Charlie; Charlie writes a number on each of Alice’s and Bob’s foreheads, the two numbers being consecutive (so each differs from the other by one). Alice and Bob can thus see each other’s number, and Charlie can see both. The question begins with a little warm-up to prompt candidates into thinking about epistemic reasoning (if Alice knows her number right off the bat, it must be 2, because she can see Bob’s 1; with any other number she wouldn’t be able to know). Things heat up a little later on:

  • Charlie: Your numbers are between 1 and 10 inclusive.
  • Alice: I don’t know my number. Is my number a square number?
  • Charlie: If I told you, you would know your number.
  • Bob: I don’t know my number.

What is Alice’s number?

We start by noting that the squares are 1, 4 and 9. Since Alice can see Bob’s number, she already knows that her number is one of two possibilities, spaced two apart. For Charlie’s statement to make sense, exactly one of the possibilities has to be a square. This tells us that Bob’s number is 2, 3, 5 or 8. (It couldn’t be 10, because if it was Alice would know that her number is 9.)

Next, Bob can see Alice’s number, but doesn’t know his own. His number is either A - 1 or A + 1, so for him to be unsure, both of these must be candidates – that is, both must lie in {2, 3, 5, 8}. The only such pair is 3 and 5, so Alice’s number has to be 4.

This one, in my opinion, stopped at a fairly early level; things could have developed a fair bit more. That said, assuming proportional time weighting, candidates had about 22.5 minutes for this, which seems a reasonable challenge (bearing in mind that I did write my thesis on epistemic logic, so I’m probably not the best judge).

Synthesis

I found STEP to be somewhat more demanding, perhaps because it had questions that focused more on depth rather than breadth, and also because I took the computer science stream of the paper while STEP just focuses on mathematics.

The average score across all candidates on questions 1 through 5 was 44.8, against 60.6 for successful applicants. Nonetheless, I subsequently worked through problems 3 and 4 and actually found them considerably easier than 2. Interestingly, there was a candidate who scored between 85 and 89 but was nonetheless rejected; also, no one scored 100, which seems surprising (it would certainly have been possible if not for the careless slip I made on problem 2 – or maybe the marking for some aspects, such as the rigour demanded in proofs, was stricter than my own marking).

This was quite enjoyable, actually. I’m not sure how I would have fared had I taken this exam in the past! I’d think that my knowledge of logic, as well as the more computer-science oriented topics (discrete mathematics and formal languages) would have improved, though knowledge of say calculus or analysis probably would be slightly weaker than when I finished high school.

Playing with Blocks (Hotseat: British Informatics Olympiad R1, 2015)

Second in a series. I apologise for the temporary hiatus – I had a couple of weeks travelling and then some intense sessions at work (part of this was my choice, but still).

Background

I was not in the UK for high school, and studying computing/informatics wasn’t common in my high school in Singapore either, so I never actually participated in the British Informatics Olympiad, the UK’s national high-school competition in computing. Unlike normal programming contests, it has an explicit portion involving reasoning and discrete mathematics, which I like. This can get intense; problems, especially in the second round, can feature fairly complex reasoning – such as 2009’s second round, which required understanding models of computation in FRACTRAN (and eventually deciphering a “program” that computed the Fibonacci series). For quite a few contest problems, intuitively determining a solution – or even being uncertain but firing it off anyway in hopes of getting it accepted – is much easier than trying to prove that the solution is correct.

The first round is somewhat less intense, but still has plenty of opportunities for interesting problems. There are typically three tasks; the first is gentle and meant to be a warm-up, the second tests implementation (perhaps more like what I might expect on a programming exam in university), and the third tests algorithms (in a sense, more like a standard contest problem). Each question has most of its points (about 75%) allocated to implementation, and the remainder to several “reasoning” questions. Partial scores are awarded for solutions that aren’t optimal or that fail on some cases (much like in the IOI, though here this information is not known to the students).

This was one of the easier papers I’ve done; I worked my way through the tasks very quickly, and didn’t have too much of a problem (other than a careless error in problem 2). Most of the time I score above 90 points on these, though sometimes if I don’t “get” problem 3 in an optimal way or make subtle errors on problem 2 the score will be a fair bit lower.

The specific paper I did is here.

Problem 1: Block Palindromes

Given a string, find the number of ways of decomposing it into at least two blocks such that when the blocks are reversed (but not the letters within the blocks), the arrangement of blocks looks the same. For example, BARBAR is a block palindrome, since (BAR)(BAR) reads the same forwards and backwards. Of course, every ordinary palindrome is also a block palindrome, since taking one-letter blocks is allowed.

Not too much here; the problem is readily amenable to a simple recursive implementation (try taking each block length from 1 up to half the string length, and see if the string starts and ends with the same block – if so, recurse on the middle and add that result to our total). It would be possible to use memoisation to improve performance, since there can be repeated subproblems – e.g. with AABCBAA, you would be solving for the number of solutions to BCB twice. However, given the small constraints (strings of up to 10 characters), this wasn’t needed.
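A minimal sketch of that recursion (my own code; memoisation included even though it isn’t needed at this scale):

```java
import java.util.HashMap;
import java.util.Map;

public class BlockPalindromes {
    private static final Map<String, Long> cache = new HashMap<>();

    // Counts decompositions of s into blocks that read the same forwards and
    // backwards, including the trivial one-block (or empty) decomposition.
    static long count(String s) {
        Long cached = cache.get(s);
        if (cached != null) return cached;
        long total = 1; // s taken as a single central block (or left empty)
        for (int b = 1; 2 * b <= s.length(); b++) {
            // Peel off a matching outer block of length b and recurse on the middle.
            if (s.substring(0, b).equals(s.substring(s.length() - b))) {
                total += count(s.substring(b, s.length() - b));
            }
        }
        cache.put(s, total);
        return total;
    }

    public static void main(String[] args) {
        // Subtract 1 to exclude the single-block "decomposition":
        System.out.println(count("BARBAR") - 1); // prints 1, i.e. (BAR)(BAR)
    }
}
```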

This problem probably had one of the trickier reasoning questions though – if all groupings of a block palindrome contain an even number of blocks, how many groupings can there be (and why)?

I attacked this one with a proof by contradiction – suppose some grouping with more than two blocks exists; since all groupings have an even number of blocks, it takes the form

S = B_1 B_2 \ldots B_{n-1} B_n B_n B_{n-1} \ldots B_2 B_1

for n \geq 2. Then, by construction,

S' = (B_1) (B_2 \ldots B_{n-1} B_n B_n B_{n-1} \ldots B_2) (B_1)

is also a valid decomposition, and it has three blocks – an odd number, contradicting the premise. Of course, the empty string isn’t a valid block, so this construction only breaks down when n = 1; we thus require S = B_1 B_1, and the number of groupings has to be 1. We can probably formulate an even stronger condition (for instance, that apart from B_1 itself, no prefix of B_1 is also a suffix of B_1). While the reasoning seemed sound to me, I wasn’t too confident I had the right answer here.

(Note: the question text implied that a block palindrome must decompose into blocks in at least one way, so although an answer of 0 groupings would vacuously satisfy “all groupings contain an even number of blocks”, such a string wouldn’t be a block palindrome.)

Problem 2: Battleships

Given a funky (deterministic) algorithm for generating random numbers, implement it, and use it to fill a grid with ships of certain sizes via some kind of rejection sampling. The key constraints are that the grid has a finite size, and more interestingly ships are not allowed to touch (even diagonally).

Implementing the random number generator is pretty much a matter of following instructions from the spec. However, there are several ways to implement the no-touching constraint. Given that optimality is generally not important in problem 2, I went for a rather nasty solution that, say, for a k-ship placed horizontally, would iterate through a 3 \times (k + 2) bounding rectangle around the ship and mark all those squares as unusable. Better asymptotic complexity could probably be obtained using an R-tree (though that is probably not reasonable for contests – especially without a reference!).

There was a rather nasty “reasoning” problem about the number of ways to place ships of size 4, 3, 2 and 1 on a 5 by 5 grid – this was also where I lost my five points. I hacked my implementation to simply count the number of possible placements via backtracking, instead of using the random number generator. However, I forgot that my bounding-rectangle system was based on booleans (in other words, a square that was unusable because of two different ships would suddenly become usable after just one of them was removed), and thus came up with the impressive total of 427,840 possibilities (correct answer: 1,424)!

I subsequently fixed the aforementioned bug (using integers instead, as sketched below) and confirmed that the answer was indeed 1,424. During the test I did try smoke tests verifying that a single 1-ship could be placed in 25 ways and a 5-ship in 10; I attempted two 1-ships as well, but decided that working through the combinatorics for that would have been too painful.
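In sketch form (all names here are mine), the fix replaces the boolean grid with reference counts, so that removing one ship cannot free a square that another ship still blocks:

```java
public class BlockedGrid {
    static final int SIZE = 5;
    // blocked[r][c] counts how many placed ships currently forbid square (r, c);
    // a square is usable iff its count is zero.
    static int[][] blocked = new int[SIZE][SIZE];

    // Marks (delta = +1) or unmarks (delta = -1) the 3 x (k + 2) bounding
    // rectangle around a k-ship whose top-left cell is (row, col).
    static void mark(int row, int col, int k, boolean horizontal, int delta) {
        int rows = horizontal ? 1 : k;
        int cols = horizontal ? k : 1;
        for (int r = row - 1; r <= row + rows; r++) {
            for (int c = col - 1; c <= col + cols; c++) {
                if (r >= 0 && r < SIZE && c >= 0 && c < SIZE) {
                    blocked[r][c] += delta;
                }
            }
        }
    }
}
```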

Problem 3: Modern Art

This problem revolved around (reasonably) efficiently determining the ith permutation of a string in lexicographic order, where the string may contain duplicate characters. For example, in a string consisting of five each of the letters A, B, C and D – what is the 11,223,344,556th permutation?

Of course, for smaller strings and smaller indices it would suffice to implement some kind of next-permutation generator – a pretty standard algorithm – and step through the permutations until we reach the one we want. There are some test cases that would be solvable this way; you would get (I think) 11 of the 25 points available.

The key idea is that we don’t want to explore all of the permutations. From combinatorics, we can quickly calculate the number of permutations of five each of As, Bs, Cs and Ds:

\displaystyle {20 \choose 5} \times {15 \choose 5} \times {10 \choose 5} \times {5 \choose 5} = 11,732,745,024

Intuitively, our target index is close to the “end”; we can thus be pretty sure that the 11,223,344,556th permutation should start with a D. We can then write a recursive (or iterative) algorithm that identifies the first character based on this kind of search, and repeats for subsequent characters.

I began by implementing a function that computes the number of ways of permuting a given number of As, Bs, Cs and Ds. This was done in a bit of a hacky way and has some (probably unnecessary) memoisation.

Interestingly, it turns out it would actually have been okay to build the entire numerator and denominator and then divide the two; perhaps 20 was chosen to avoid complications with overflow (since 21! exceeds the maximum value of a 64-bit integer). Anyway, the highest value the numerator could take on in my approach was 1,860,480 (20 down to 16 multiplied together), so I was sure I was safe this way.

We then go through the characters in lexicographic order, and for each one, consider the number of permutations that would be feasible. The general approach is something like this:
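In sketch form (my own reconstruction; the counting helper is named numPerms to match the tests described below):

```java
public class ModernArt {
    // counts[i] = number of copies of letter 'A' + i still unplaced.
    static long numPerms(int[] counts) {
        long total = 0;
        for (int c : counts) total += c;
        long result = 1;
        // Multinomial coefficient: choose positions for each letter in turn.
        for (int c : counts) {
            result *= binomial(total, c);
            total -= c;
        }
        return result;
    }

    static long binomial(long n, long k) {
        long r = 1;
        for (long i = 1; i <= k; i++) {
            r = r * (n - k + i) / i; // exact: r is an integer at every step
        }
        return r;
    }

    // Returns the k-th (1-indexed) permutation in lexicographic order.
    static String kthPermutation(int[] counts, long k) {
        StringBuilder sb = new StringBuilder();
        long remaining = 0;
        for (int c : counts) remaining += c;
        while (remaining-- > 0) {
            for (int i = 0; i < counts.length; i++) {
                if (counts[i] == 0) continue;
                counts[i]--;                   // tentatively place this letter...
                long block = numPerms(counts); // ...and count the completions
                if (k <= block) {
                    sb.append((char) ('A' + i)); // the k-th lies in this block
                    break;
                }
                k -= block;  // skip past this block of permutations
                counts[i]++; // undo the tentative placement
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The actual question: five each of A, B, C and D.
        System.out.println(kthPermutation(new int[] {5, 5, 5, 5}, 11_223_344_556L));
    }
}
```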

I actually finished the implementation fairly quickly, but decided to write a couple of tests: I stepped through the 12 permutations of “ABBC” given in the spec, and also wrote a test for “AAABBBCCCDDD” checking that all the permutations were generated in the correct order (using numPerms to give me the range). I also, of course, tried a few edge cases, such as each of “A”, “B”, “C” and “D” on its own, and a test verifying we could handle the expected scale (five of each character, retrieving the 1st as well as the 11,732,745,024th permutation).

The “reasoning” problems on this one were very easy compared to the other two questions.

Conclusion

Unfortunately, meta-analysis is somewhat limited here, as unlike the AIME results, those for the BIO aren’t public. I’m pretty sure 95 is a solid score, as the contest isn’t that easy, but I wouldn’t be surprised if there are multiple 100s (with a bit more caution, I could have gotten one)!

Style during programming contests is an interesting issue. I find that many contestants, especially in C++, like to use rather bespoke preprocessor macros (e.g. aliasing cin as f and cout as g). I’m actually used to some of them myself, e.g. ll for long long, or vpll for vector<pair<long, long>>. If one understands these, they are probably better for speed, though when debugging I find having not-unreasonable names preferable (even if my code is written more for speed than style).

Odd Sums Sudoku

The puzzle above is from the WPF Sudoku GP, Round 4 2016, puzzle #13. To solve it, each row, column and marked 3×3 box must contain the digits 1 to 9 once each, and in each marked row or column, the maximal odd substrings of the solution must sum to the given totals, in the correct order.

I’ve always enjoyed a good puzzle, though I’ve never considered myself especially good at them. That said, I’ve done my fair share of sudoku, including several books where puzzles were labelled as very difficult; yet, fairly often, identifying triples, or the odd X-wing, XY-wing or unique rectangle, would be sufficient to blow through the puzzles. At least, I thought that clearing some “very difficult” puzzles in 8 or 10 minutes was blowing through them – until I found the World Puzzle Federation’s Sudoku GP, where some of these had benchmark times along the lines of 3-4 minutes!

Furthermore, in addition to your standard “classic” puzzles featuring the standard rules (i.e. each row, column and marked 3×3 box must contain the digits from 1 to 9 once each), there are a couple of interesting variants that require understanding new rulesets and figuring out how to deal with them. As mentioned, after a while the classic puzzles largely tend to fall to standard techniques, and the challenge there is dispatching them with extreme speed; the variants, on the other hand, tend to maintain some degree of freshness (at least as far as I’m concerned).

The above puzzle is an example of this; several of the rows and columns are marked with additional clues, and for these, the maximal odd substrings must sum to the values indicated by the clues, in order. For example, if a row was 178356924, the clue associated with it would be 8, 8, 9. Note that this clue is also generated by several other valid sequences, such as 532174689 or 174283569.

For the above puzzle, the 9, 5, 3, 8 clue is a good starting point, and indicates that we have 9, 5, 3 and (17 or 71) as substrings of the solution to that row in some order; given the position of the 6, we can conclude that the 9 and the 3 must occur before the 6 (though we don’t yet know exactly where). We thus have 9 in the bottom left hand corner, an even number (either 2 or 4), and then a 3. Now, reading the first column clue, 1, 12, 3, 9: we have 1, (57 or 75), 3 and 9 as substrings. The 9 is at the end, and 3 cannot occur before the 4 as there’s no space to fit it. Thus 3 occurs after the 4, but it has to be separated from the 9 by an even number. More simply, the second column clue, 10, 9, 6, indicates that our 5 has to be preceded by an even number and then a 1. There’s only one spot left for a 7 in this lower box, so we can insert it.

We thus reach this position:

Moving on, we can consider the middle column. 3 can only be satisfied by a singular 3, so we know that the 3 in the bottom row cannot be in the middle column. It thus must be in the sixth column, with 1 and 7 taking up the eighth and ninth columns (in some order).

Now consider the rightmost column. It is 3, 10, 12; 3 can only be satisfied by a singular 3, so the odd substrings must be 3, (19 or 91) and (57 or 75). Given the position of the 2 (fourth row), it must be the case that the column has 1 and 9 in the fifth and sixth rows, an even number in the seventh, and 5 and 7 in the eighth and ninth. We thus clearly have 7 in the corner, and 5 above it.

Finally, we can actually decide on the ordering of 1 and 9 as well; consider the hint for the middle row, which ends with 8; a 9 would automatically exceed this. Thus the middle row must contain the 1 and the sixth row the 9. The hint gives us for free that we have a 7 to the left of the 1.

There are more interesting deductions we can make as well. Consider the seventh column. The clue is 16, 9, and this can be satisfied in two ways: {1357} in some order followed by a singleton 9, or {79} and then {135}, each in some order. However, we can rule out the latter construction: the entry in the middle row of this column is an even number, and we have also already used the 1 and the 5 in the bottom right hand box, so we can’t place those numbers there. Thus, the first four numbers must be {1357} in some order, and there is a 9 somewhere in the bottom right box.

The clue in the first row requires a 5 at the end. Given the 10, 10, 5 clue and the 4 already placed in the sixth column, the only odd substring in the last box must be the 5. We thus must have the 5 in the first row, and now only the 3 can fit in the fourth row. We also gather that the last two columns of the first row must be even, and we have enough clues to place these (using standard Sudoku rules).

Finally, we can place the 4 in the middle right box (now that the 3 is there, there’s only one space left where it can fit). Similarly, we can place the 5 (knowing that it can’t be in the seventh column any longer).

From this point on, solving the rest of the puzzle should be reasonably straightforward. The next step would probably involve placing 1, 9, 6, 7 and 3 in the first row (in that order – using the column hints and the knowledge that we have two tens)!

For these contests, I’d say the challenge lies largely in figuring out how the ruleset can be used to help solve the puzzle (and, well, speed: you’re thrown 13–15 puzzles to solve in the space of 90 minutes, and I can usually only manage about seven or eight – for that matter, the ones I do finish are the ones on the easier end of the scale!).

On the Road Again

I travelled to Denmark for a recruiting event last week. The nature of the travel (2-hour flights plus an hour or so of ground transportation on each end) was not something I was used to; admittedly, most of the flights I can remember taking are pretty long-haul ones (London–Singapore and London–San Francisco are by far the most common; London–New York exists too, though it is less common). I had a great time, in any case – although being awake for about 24 consecutive hours, especially when slightly ill, was not too enjoyable, it was nice to meet up with students and flex some speed coding and debugging muscles. I think I still coded well even when really tired, though it did lead to some confusing moments as I tried to work out what exactly “PAY THE PRICE” meant in terms of a discount (it means no discount, but given that there was already “HALF PRICE” and a sliding reduction scheme, it confused me quite a bit).

This is just the second trip so far this year, though I anticipate there’ll be quite a few more. The two trips I’ve made so far were for business reasons; I’ve been hammering away at a major project at work that should be drawing to a close soon, so there should be scope for personal time off. There are also likely to be more work trips in the pipeline – within the scope of my major project, for company events at large, as well as for recruiting purposes. Of course, there would be even more trips if I were a Forward Deployed Engineer as opposed to a dev.

I remember reading an article in the BA inflight magazine en route to Billund, which noted that in some survey, 30 percent of people would accept a lower-paying job if it meant they could travel more for work. I think I subsequently found the survey being referenced. I would certainly be in the 70 percent of people who would not, though it’s hard to draw an accurate conclusion given the way the question is framed (what’s the baseline here? I’d think people who already travel 50 percent of the time might respond differently from people who don’t travel at all when asked whether they would do this to travel “more”). The survey was also commissioned by Booking.com, which might have a vested interest in portraying people as more willing to travel (so that companies would engage travel provider services more).

Of course, in business terms there are the obvious benefits of having in-person interaction between team members, and/or of getting the people who know what they’re doing on the ground to see what’s going on. Such trips can be really useful for getting things done fast; communicating over instant messaging, or even via a video conference, doesn’t tend to be as effective.

I would say the upside of travel as far as an individual is concerned includes the opportunity to experience new places and see new sights, though I’m not sure how much of that you can achieve on a business trip. Many of my past trips have been high-octane, fire-filled affairs (though there have been exceptions – and not the programming kind). Other benefits involve meeting up with existing friends and acquaintances in the area (again, the tight schedules of many of my past trips have precluded this). The article does make reasonable points concerning extending one’s stay, though – I actually did that two years ago, when I stayed on in California for an additional week after one such trip – and concerning the fluidity of plans leading to last-minute bookings.

One of the things that makes me wary of travelling too much is long, unpredictable delays through security and customs – this might partially be a result of the routes I typically fly (which go through major hubs with lots of traffic). I do have strategies to mitigate this (typically, I book an aisle seat near the front and walk at a very brisk pace after disembarking from the plane; I also tend to choose flights that arrive at relatively unpopular times). This wasn’t a problem at LCY or at Billund Airport in Denmark; queues were very short and handled quickly.

To some extent this can be mitigated by Registered Traveller in the UK, the electronic terminals in the US and the biometric gates in Singapore; these are especially useful for me since I don’t typically travel with checked baggage, making the whole process of leaving the airport and getting where I need to go much faster. I did some computations to check whether the UK Registered Traveller scheme was worthwhile, and in the end decided it was reasonable. I assumed 12 trips per year with an estimated time saving of 25 minutes per trip (a crude estimate, but there is a 20-minute gap between the UK Border Force’s benchmarks for non-EU and EU passport queues, and going by those benchmarks the ePassport gates are probably faster still – hence the extra 5) to figure out that the net cost was 14 pounds per hour saved post-tax (or 24 pre-tax). It could be argued that those specific hours have unusually high value (e.g. stress at the end of a long flight), perhaps boosting the case for it. I’ll probably travel even more with Registered Traveller in place, actually.
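
Spelling the computation out (assuming a £70 annual fee and a marginal rate of about 42%, i.e. 40% higher-rate income tax plus 2% national insurance – assumptions that make the stated figures line up, rather than numbers quoted above):

```python
# Back-of-the-envelope cost of Registered Traveller per hour saved.
ANNUAL_FEE = 70.0      # GBP per year (assumed)
TRIPS_PER_YEAR = 12
MINUTES_SAVED = 25     # 20-minute non-EU/EU benchmark gap, plus 5 for ePassports
MARGINAL_RATE = 0.42   # assumed: 40% income tax + 2% national insurance

hours_saved = TRIPS_PER_YEAR * MINUTES_SAVED / 60    # 5.0 hours per year
post_tax = ANNUAL_FEE / hours_saved                  # 14 GBP per hour saved
pre_tax = post_tax / (1 - MARGINAL_RATE)             # ~24 GBP per hour saved

print(f"{hours_saved:.1f} h/year; £{post_tax:.0f}/h post-tax, £{pre_tax:.0f}/h pre-tax")
```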

I guess the other thing that has recently dampened my appetite for travel (at least on the personal front) is the depreciation in the pound; relative to many people I know, my proportion of overseas expenses is already fairly high (because of investments: 90% of my equities and 50% of my bonds are based overseas – and I hold much larger quantities of equities than bonds). To some extent I overreacted to Brexit (my savings, even in US dollar terms, have increased relative to the “Remain” case model I had), and there’s certainly still scope for travel around the EU, where the pound has fallen by more like 10 percent as opposed to 16 – though the length and luxury of such trips is likely to be reduced.

I’ll be travelling again soon, though it’ll be another of those long flights instead. Nonetheless, at some point in the future I definitely want to revisit Denmark. This trip was focused rather closely on executing the recruiting events in question (well, there isn’t really much you can do in one day anyway; I was in Denmark for less than 24 hours) – while I did get to see a little of the city, there was certainly much more to do.