May 2018

On Challenges that Build

On my return flight from Singapore to London, I listened to quite a few hours of music. Two of the songs I listened to, and enjoyed at least partially for similar reasons, were It’s Gonna Be Me (by NSync) and I Can’t Be Mad (by Nathan Sykes). It’s a bit of a strange pairing, as the former seems to be an upbeat, relaxed pop song while the latter is a fairly moody piano ballad. However, the common element I latched on to here was that both songs feature sections that are repeated multiple times, with the vocals developing additional complexity on each iteration (thinking about it, this is fairly common in songs that are critically well-reviewed, and also in songs I like). For example, in It’s Gonna Be Me there is a line in the chorus which is sung four times over the course of the song, and its complexity develops on each pass.

The challenges in I Can’t Be Mad have a couple of changed notes, but also (if trying to reproduce the original) demand different productions of the notes (falsetto or not, belts, and so on). There’s always a risk of adding too many embellishments, though I find expanding upon base melodies can be quite interesting. Singing these, and considering what would be reasonable for my voice (adding a closing run to the last syllable above, for instance) and what would not be (adding a +1 semitone key change after the second chorus in I Can’t Be Mad – the original is already awfully hard), can be enjoyable too.

Generalising this, I quite like the idea of “increasingly complex variations on the same theme” when learning concepts and when teaching them. This already seems to happen for many concepts in mathematics. Over the course of an A-level student’s mathematics education, he/she might understand how to write a quadratic expression as a product of linear factors (e.g. converting 6x^2 - 19x - 7 into (2x-7)(3x+1)). This could begin with expressions where factorising by inspection is feasible. However, students should also be presented with some examples where inspection is extremely difficult or even impossible (though probably only after gaining some confidence with the cases where inspection is plausible). For general expressions, one could use the quadratic formula together with the factor theorem to factorise something like 6x^2 - 19x - 8 into -\frac{1}{24}(-12x + \sqrt{553} + 19)(12x + \sqrt{553} - 19). However, there will be some expressions like 6x^2 - 19x + 16 where the solutions to the quadratic are not real; later, with some understanding of complex numbers, these would make sense. Students will also learn about problems which may not obviously be quadratics but can be written as such (like x^4 + 2x^2 + 1); the ability to synthesise the various techniques can then be tested with something like 7x^8 - 10x^4.
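To make the link between the quadratic formula and that factorised form explicit (this is just the standard relationship between roots and factors): if ax^2 + bx + c has roots r_1 and r_2, it factorises as a(x - r_1)(x - r_2). Here the formula gives roots \frac{19 \pm \sqrt{19^2 + 4 \cdot 6 \cdot 8}}{12} = \frac{19 \pm \sqrt{553}}{12}, so 6x^2 - 19x - 8 = 6\left(x - \frac{19 + \sqrt{553}}{12}\right)\left(x - \frac{19 - \sqrt{553}}{12}\right), which rearranges to the form above.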

To some extent my Master’s project also had this theme – linear time logic, adding knowledge, adding dynamic modalities, generalising that to full branching time logic, and then switching out the infinite traces for finite traces. I haven’t written a course or a book on a computer science topic yet, but I can imagine that there might at least be sections that follow this kind of sequence.

This pattern also occurs a fair bit in many technical interviews I’ve seen, where problems start easy but additional and progressively more challenging constraints are repeatedly introduced. The purposes here could include testing for a breaking point, seeing how candidates react to problems without an obvious solution, or seeing whether they are able to synthesise additional information to come to a solution.

I find that I often learn best by practising on smaller examples at first, and then attempting to generalise their conclusions to larger models, considering when these conclusions hold and when they fail. Having multiple variations of progressive difficulty can be useful, as they give a sense of achievement as partial progress towards an overall goal is made. Furthermore, I find that understanding how changes in the problem scenario lead to the base solution method becoming applicable or inapplicable is a key part of understanding as well; there is a clear need to reason about this when considering incremental variations. Going back to It’s Gonna Be Me, for example, aiming downwards at the word ‘love’ and not conserving sufficient air or energy for it might work for the first three passes, but it’s unlikely to on the last round.

There is a risk that the method can be frustrating in that it seems like it is consistently ‘moving the goalposts’, especially if one forgets that the partial goals are partial goals (and starts to think of them as complete ends in and of themselves). The standard I’m using for understanding (ability to critically evaluate applicability in novel contexts) may be seen as a little high. I also haven’t covered how to bootstrap the method (that is, how to develop an understanding of how to attack the base problem before any variations are introduced). Nonetheless I think there are some contexts where this works well. I’ve found it to be useful in singing, mathematics and interviewing at least!

Colour Rush (Board Game: FUSE, Kane Klenko)

While I’m not sure I would go as far as to say that efficiency is the highest form of beauty, I’d certainly say that I can find efficiency to indeed be beautiful. In controlled environments, I enjoy testing my own efficiency (typing tests, programming and Sudoku competitions, competitive Sims). Many of these environments feature elements of uncertainty, meaning that managing probabilistic outcomes is key to being efficient (in Sudoku, bifurcation can sometimes be faster than reasoning through complex logical chains; in a programming contest, you might luck out with a buggy or sub-optimal algorithm passing all the test cases anyway). I’ve recently enjoyed a challenging game which has probability management at its core.

FUSE is a real-time dice game from designer Kane Klenko. Players take the role of bomb disposal experts trying to defuse bombs; this is represented by rolling dice and placing them on ‘bomb’ cards to match constraints. The team rolls some number of dice and then allocates them to the cards, incurring penalties if some of the dice cannot be allocated. The game itself has a hard time limit of 10 minutes, so quick thinking is rewarded: faster play means more turns (and thus more rolls of the dice).

The dice are almost standard six-sided dice; interestingly, I noticed that opposite faces of each die do not add up to seven (2 was opposite 4 and 3 opposite 5, for some reason). The dice come in five colours (red, blue, black, yellow and green), and there are five dice of each colour. Card constraints may involve numbers, colours or both, and vary widely in difficulty; easy cards have requirements like “two dice matching in either colour or number” or “A – B = 2”, while more difficult ones can involve building a six-die pyramid where each die is a specific colour. The bombs are rated in difficulty from 1 to 6, skipping 5.

Some bombs also introduce ordering constraints, requiring players to build a tower or pyramid (and players are not allowed to build unsupported structures). There is also a dexterity element here, as the penalty for knocking over a tower or pyramid is losing all progress on that card.

I’ve only played solo or in a team of two. Playing solo, a player draws and rolls three dice at the beginning of his/her turn, and then allocates them to a pool of four active cards, possibly placing multiple dice on a single card. At any time (except towards the end of the game), there are four active cards and five cards in a ‘market’ ready to become active when an active card is completed.

I’ve been able to complete the single-player “Elite” difficulty (getting through 23 cards in 10 minutes) about half of the time if playing with a random mix of 27 cards, and have done it (once in ten attempts or so) if playing with cards drawn from the most difficult sets (i.e. all the level 6s and 4s, with all remaining cards being level 3s).

I made some simple tactical observations regarding card constraints. For example, there is a level 2 card requiring three numbers that sum to 11. If I have a free choice of a number to play there, I would play as close to 4 as possible (considering that two dice most likely sum to 7).
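As a rough sanity check of that intuition, here is a sketch in Python; it treats the dice as ordinary fair d6s, ignores colours, and assumes the card is completed with the very next two dice rolled – all simplifications relative to the actual game.

    from fractions import Fraction
    from itertools import product

    # For a "three dice summing to 11" card: if the first die placed shows d,
    # the remaining two dice must sum to 11 - d. Enumerate how likely that is
    # for each possible first value, assuming two independent fair d6 rolls.
    for d in range(1, 7):
        ways = sum(1 for a, b in product(range(1, 7), repeat=2) if a + b == 11 - d)
        print(d, Fraction(ways, 36))  # peaks at d = 4, where 11 - d = 7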

I think the main strategy in solo play involves ensuring that one’s cards don’t end up having conflicting demands, as far as possible. For example, consider the following set of cards.

This is an awful situation to be in, because apart from one space for a 1, all of the cards here demand high numbers to start. Four decreasing numbers only admits a 4 or higher as a starting die (and even then, I find starting that card with a 4 very restrictive as it forces 3-2-1); the 15-sum card only admits a 3 or higher (and a 3 then forces 6-6; I usually play one six and then on a later turn find a pair adding to 9). Misaligned numerical constraints can be painful; misaligned colour constraints might be even worse, because colours are sampled without replacement (well, until cards are cleared or penalties are incurred). There is a level 4 card that demands four dice of the same colour, which I try not to pick up.

There are many cards that have rather specific requirements (e.g. a specific colour and number); I tend to try to carry one or maybe two of these at a time, while trying to ensure the rest of the cards cycle regularly. The penalty for an unused die involves re-rolling it, and then removing a die that matches it in colour or number (if possible).

Although the game only lasts ten minutes (or less, if the deck is cleared in time), I’ve found it to be highly addictive, especially on difficult runs at the Elite difficulty. I’d expect times to vary quite widely as it’s a dice game, though I’ve noticed that most of the time I run out of time between one and three cards from the end. I’ve never counted, but it wouldn’t be unreasonable to average fifteen seconds per turn – about forty turns of three dice each, so things average out to some extent over the 120 selections and rolls. As mentioned above, I think a key part of solo FUSE strategy involves managing interactions between the requirements of active cards.

Minor criticisms would include some disconnect between the cards’ rated levels and their actual difficulty. I find that most of the level 6s are less luck-dependent than some of the lower-level cards, like the level 3 “any 5, a yellow die, and a green 2”. More bizarrely, the ‘6-5-4 pyramid’ card in the photo above is only level 2, though it’s considerably more restrictive than the ‘sum to 15’ card also in the photo (level 3). There might be a case for a more permissive card being rated higher if the cost of verifying correctness is higher (e.g. ‘three digit prime number’), but I don’t think adding three numbers introduces much overhead.

The ceiling for individual card difficulty also isn’t very high. The tallest stacks are of height 5 (with very loose constraints) and pyramids of height 3 (using six dice). I’d imagine there would be scope to define some level 8 cards that might use seven or more dice and/or require combinations that are stricter or more complex, like a six-die pyramid where each die not placed on the ground level must equal the sum of the two dice directly underneath it.
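As a rough indication of how strict that hypothetical summing-pyramid card would be (my own back-of-the-envelope enumeration, ignoring colours and the question of actually rolling the required values within the time limit): with a bottom row a, b, c, the middle dice must show a + b and b + c, and the top die a + 2b + c, all of which must be legal die faces.

    from itertools import product

    # Hypothetical "summing pyramid" card: bottom row (a, b, c), middle row
    # (a + b, b + c), top die (a + 2b + c). Count the bottom rows for which
    # every required face is a legal die value (1 to 6).
    valid = [
        (a, b, c)
        for a, b, c in product(range(1, 7), repeat=3)
        if a + b <= 6 and b + c <= 6 and a + 2 * b + c <= 6
    ]
    print(len(valid), "of", 6 ** 3, "ordered bottom rows work:", valid)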

I’ve played this with a friend from Imperial as well. With two players, each player has two active cards, four dice are rolled and each player must take two dice. We only played up to the standard difficulty. I’d like to play this with a bigger group, though; communication gets much trickier, and with only two players I am able to maintain sufficient context in my head to compute a good allocation of four dice across two sets of two cards. With five players there are 10 active cards, which would probably be unreasonable to keep track of.

In summary, I’d highly recommend this game based on my solo/pair experiences. I’ve probably had about twenty to twenty-five playthroughs in total, and have enjoyed it thoroughly even though I generally don’t enjoy real-time games. Each run is short, but can generate an adrenaline rush, and the difficulty level is highly configurable so it should be possible to find a level that works well. I might even extend the game with some custom “hard mode” cards (this can be done by getting opaque sleeves so the backs are indistinguishable). I haven’t yet had the opportunity to play with a larger group, which should bring new and interesting challenges too.

Making Heads of Tail Risks

I remember that I was fairly anxious at the beginning of my fourth year at Imperial. I was concerned about securing work after university. Looking back, this seems patently ridiculous; I had topped my class for the third time and already had a return offer in hand from Palantir. However, owing to sweeping government rhetoric at the time about controlling post-study work visas, I saw “not being able to get a work visa” as the primary risk then, even if it was remote. That statement in and of itself was probably correct, though the time I spent monitoring and mitigating that risk (reading up on government committee reports, and considering alternatives like an H-1B1, an EU Blue Card or doing a Tier 2 ICT after a year) was excessive.

Of course, this never materialised; and even if it had, the only likely impact would have been that I’d have to fly home to Singapore in between finishing uni and starting work (I did not, though in hindsight that might have been a good thing to do).

I’m not sure when I first became aware of the concept of probability density functions (or, for that matter, continuous random variables). These functions take on nonnegative values and integrate (across all of their variables) to 1. In the case of single-variable functions, one can plot them on a two-dimensional graph; one may get results looking somewhat like the picture above, in some cases.

Areas of regions underneath the graph correspond to the probability that a value falls in that region. For example, a uniform distribution would have a density that’s just a horizontal line. The graphs for the returns of investments 1 and 2 in the example above follow what’s called a normal distribution; investment 3 follows a Student’s t distribution, which has fatter tails.

Since areas are proportional to probabilities, a simple technique for generating random values from such a distribution is rejection sampling: if one draws a box that encloses the region under the curve and throws darts uniformly at random into the box, one can take the x-coordinate of the first dart that lands underneath the function as a representative random sample.
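A minimal sketch of that darts-in-a-box idea in Python – the function and parameter names here are mine, and it assumes the density is bounded and supported on a finite interval:

    import random

    def rejection_sample(pdf, x_lo, x_hi, pdf_max):
        # `pdf` is the density; [x_lo, x_hi] is its support; `pdf_max` must be
        # at least the maximum of `pdf` on that interval, so that the box
        # encloses the whole curve.
        while True:
            x = random.uniform(x_lo, x_hi)   # dart's x-coordinate
            y = random.uniform(0, pdf_max)   # dart's y-coordinate
            if y <= pdf(x):                  # the dart landed under the curve
                return x

    def triangle(x):
        # A triangular density on [0, 2] that peaks at x = 1 (total area 1).
        return x if x <= 1 else 2 - x

    print([round(rejection_sample(triangle, 0.0, 2.0, 1.0), 2) for _ in range(5)])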

That’s a basic mathematical introduction. If we had to rank the quality of the return profiles above (remember: right means higher returns), a lot would depend on what we were trying to do. I would personally rank investment 2 (the green curve) on top; it has a considerably higher mean return than investment 1 (blue) and adds only a small amount of variability. We can calculate what’s known as the standard deviation of a given distribution; this is a measure of how much variability there is with respect to the mean. In fact, the blue curve has a standard deviation of 0.6; this is 0.7 for the green curve.
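(For completeness, and using standard definitions rather than anything specific to the plot: for a density f, the mean is \mu = \int_{-\infty}^{\infty} x f(x)\, \mathrm{d}x and the standard deviation is \sigma = \sqrt{\int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, \mathrm{d}x}, i.e. the square root of the average squared deviation from the mean.)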

Ranking investments 1 and 3 is more difficult; the mean of 3 is higher, but one takes on a lot more uncertainty. I’d probably rank them 2, 1, 3. However, there is also an argument in favour of investment 3 – if one is only interested in whether the returns exceed some high threshold. It’s a similar line of argument to this: if you asked me to double a large sum of money (nominally) in 20 years, I’d pick a bond; in 10 years, a general stock index fund; and in 10 minutes, probably blackjack or aggressive forex speculation.
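(Rough arithmetic behind those horizons: doubling in n years needs an annualised return of 2^{1/n} - 1, which is about 3.5% for n = 20 and about 7.2% for n = 10 – plausible for bonds and broad stock indices respectively – while doubling in 10 minutes is far beyond anything offering steady returns, so one has to accept a large probability of losing most of the stake in exchange for some chance of doubling it.)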

Whichever investment we pick, it’s possible that we may get unexpectedly awful (or excellent!) results. The standard deviation could give us some measure of what to expect, but there is still a non-zero probability that we get an extreme result. For the normal distributions (the blue and green curves), there is a 99.7% probability that a single observation will be within three standard deviations of the mean; this does also mean that there’s a 0.3% probability it does not, and about a 0.15% probability it’s lower than three standard deviations below the mean.
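Those figures are easy to verify with Python’s standard library (this uses the standard normal distribution and is not specific to the investments above):

    from statistics import NormalDist

    z = NormalDist()                 # standard normal: mean 0, sd 1
    below = z.cdf(-3)                # P(more than 3 sd below the mean)
    print(f"{below:.5f}")            # ~0.00135, i.e. about 0.15%
    print(f"{1 - 2 * below:.5f}")    # ~0.99730, i.e. about 99.7% within 3 sd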

Tail risk refers to the risk of events that may have severe impact but are low-probability; considering them is important. Going back to the work visa situation, I think I correctly identified visa policy changes as a tail risk, though in hindsight I did a poor job of controlling the amount of time I spent mitigating it – akin to spending $10 to insure against a 1% probability of a $100 loss, an expected loss of just $1 (provided the $100 loss wasn’t crippling – which it wouldn’t have been).

I also spent a lot of time focusing on mitigating this specific tail risk, when perhaps a better solution would have been developing resilience to general tail risks that might affect my employment. The obvious routes at the time would have been continuing to do well academically and developing my skills, though others exist too – such as having a greater willingness to relocate, living below one’s means and building up an emergency fund. There are still further tail risks that the above wouldn’t address (e.g. a scenario where computers and automation are universally condemned, all countries practise strict closed-border policies and the global fiat money system collapses), but the costs of mitigating those risks seem untenably high. I haven’t read Antifragile yet (what I describe here is weaker, as it doesn’t demonstrate benefiting from low-probability events), though it’s on my reading list for some point in the future.