Algorithms

Is It Someone’s Birthday Every Day? Solving the Inverse Birthday Problem

The birthday paradox is a fairly well-known but unintuitive observation that although there are normally 365 days in a year (for simplicity, ignoring leap years and assuming a uniform distribution of birthdays), with as few as 23 people it is more likely than not that two of them will share a birthday. One of my teammates proposed an inverse of this problem: what is the minimum number of people needed so that it is more likely than not that every day is someone’s birthday?

Modelling A Specific Instance

There’s a lot to take in here, so let’s try to solve a simpler problem first: given a specific number of people, how would we calculate the probability that every day is someone’s birthday? If we have this method, we can reconstruct the answer to the original problem via a search algorithm.

The more general problem here would be: suppose we have N \geq 1 people, and each person draws one of K \geq 1 events with replacement – these are equally probable and independent. What is the probability that all events occur at least once? Our introductory problem is the case K = 365, but of course that’s tricky to work with right off the bat, so let’s examine a couple of the smaller cases.

Algorithmic Ideation

Base Case Analysis

If K = 1, since we said that we always have at least one person, the one possible event will happen, and the probability is 1. Moving on to K = 2, N = 1 is a special case with probability 0. For larger N, there are two specific situations we want to avoid: those where every person draws the same event (one situation for each of the two events). Each of these happens with probability 2^{-N}, so the probability of every event occurring at least once is 1 - 2 \times 2^{-N} = 1 - 2^{-N + 1}.

We can now analyse K = 3 – if N \leq 2 we definitely have probability 0. For N = 3, the first person will receive a novel outcome with certainty, but the second and third people also need novel outcomes, which happens with probability \frac{2}{3} \times \frac{1}{3} = \frac{2}{9}. It gets a little trickier with N = 4:

  • The first person to draw an event will get a novel event. This leaves 3 people to draw events, and 2 out of 3 events undrawn. We can call this state (3, 2, 3).
  • The second person to draw an event will get a novel event with probability 2/3, putting us in the state (2, 1, 3). Alternatively, he/she will draw the same event as the first person with probability 1/3, putting us in the state (2, 2, 3).
  • If we’re in (2, 1, 3), there’s a 1/3 probability that the third person draws the last event (in which case we are done), and a 2/3 probability he doesn’t, putting us in (1, 1, 3).
  • If we’re in (1, 1, 3) we have one person who needs to draw the last event with probability 1/3.
  • If we’re in (2, 2, 3), the third person will get a novel event with probability 2/3, putting us in (1, 1, 3). With probability 1/3 he will draw the same event as the first two people, in which case we have definitely failed.

More generally, we can model the problem using a state machine (a technique which I discussed two and a half years ago). The states of this machine can be written as 3-tuples (p, q, r), meaning that p people still need to draw events, and of the r events in total, q have yet to be drawn. Notice that q \leq r; for our problem, we always have r = K, and we generally want to find the probability of success for the state (N, K, K).

For K = 3, N = 4 we can draw this as follows.

Recurrence Relations

We can define V(p, q, r) as the probability that, starting from the state (p, q, r), we eventually end up in a SUCCESS state, defined as (p, 0, r) for any values of p and r (as that means all events have been drawn). We make some observations on simple cases:

  • V(p, 0, r) = 1. We are already in a success state by definition.
  • V(0, q \geq 1, r) = 0. If everyone has drawn an event and there are still events that haven’t occurred, there is no chance for us to draw those events.

That’s all we need, and we can look at the recursive case. In the state (p, q, r), there is a \frac{q}{r} probability that we draw a new outcome, putting us in the state (p-1, q-1, r). There is also a 1 - \frac{q}{r} probability that that doesn’t happen – in that case we’re in the state (p-1, q, r). It thus follows that

V(p, q, r) = \dfrac{q}{r} \times V(p-1, q-1, r) + \left( 1 - \dfrac{q}{r} \right) \times V(p-1, q, r)

Notice that these states are well-formed, since we handled the p=0 and q=0 base cases earlier. It is possible to optimise this by adding some more sophisticated base cases, but this still gives us a clear enough method for working out the probability. For example,

  • V(2, 1, 3) = \frac{1}{3} \times V(1, 0, 3) + \frac{2}{3} \times V(1, 1, 3) = \frac{1}{3} \times 1 + \frac{2}{3} \times \frac{1}{3} = \frac{1}{3} + \frac{2}{9} = \frac{5}{9}
  • V(2, 2, 3) = \frac{2}{3} \times V(1, 1, 3) + \frac{1}{3} \times V(1, 2, 3) = \frac{2}{3} \times \frac{1}{3} + \frac{1}{3} \times 0 = \frac{2}{9}
  • V(4, 3, 3) = V(3, 2, 3) = \frac{2}{3} \times V(2, 1, 3) + \frac{1}{3} \times V(2, 2, 3) = \frac{2}{3} \times \frac{5}{9} + \frac{1}{3} \times \frac{2}{9} = \frac{4}{9}

 

Solving The Specific Instance

The answer to our original problem, for a given number of people N, is then V(N, 365, 365). Working this out by hand is likely to be painful, so we should avoid that. It’s easy to code up something simple based on the recurrence relation above:

// Computes V(p, q, r): the probability that all r events eventually occur,
// given p people still to draw and q events still undrawn.
private static double findSuccessProbability(long remainingPeople, long remainingEvents, long totalEvents) {
    if (remainingEvents == 0) {
        return 1.0; // V(p, 0, r) = 1: every event has occurred
    }
    if (remainingPeople == 0) {
        return 0.0; // V(0, q >= 1, r) = 0: no draws remain
    }

    double probabilityOfNewEvent = ((double) remainingEvents) / totalEvents;
    double successProbabilityOnNewEvent = findSuccessProbability(remainingPeople - 1, remainingEvents - 1, totalEvents);
    double successProbabilityOnOldEvent = findSuccessProbability(remainingPeople - 1, remainingEvents, totalEvents);

    return probabilityOfNewEvent * successProbabilityOnNewEvent
            + (1 - probabilityOfNewEvent) * successProbabilityOnOldEvent;
}

We can test this and verify that indeed V(4, 3, 3) = \frac{4}{9}. However, if we try to call this with, say, (500, 365, 365), the program doesn’t seem to give an answer quickly.

One solution here is to use memoisation. The naive version written above ends up computing the answer for the same state potentially exponentially many times, but we can cache the results of previous computations and use them. This could be done with a map with the 3-tuple as a cache key; alternatively, a bottom-up dynamic programming solution can be written, since there’s a clear order in which we process the subproblems. In any case, these methods should get the runtime down from exponential to O(NK).
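For illustration, a minimal bottom-up sketch might look like the following (the method name and table layout are my own; since r is fixed at K throughout, a two-dimensional table over p and q suffices):

private static double findSuccessProbabilityBottomUp(int numPeople, int totalEvents) {
    // probabilities[p][q] holds V(p, q, totalEvents).
    double[][] probabilities = new double[numPeople + 1][totalEvents + 1];
    for (int p = 0; p <= numPeople; p++) {
        probabilities[p][0] = 1.0; // V(p, 0, r) = 1
    }
    // Entries with p = 0 and q >= 1 stay at 0.0, matching V(0, q >= 1, r) = 0.
    for (int p = 1; p <= numPeople; p++) {
        for (int q = 1; q <= totalEvents; q++) {
            double probabilityOfNewEvent = ((double) q) / totalEvents;
            probabilities[p][q] = probabilityOfNewEvent * probabilities[p - 1][q - 1]
                    + (1 - probabilityOfNewEvent) * probabilities[p - 1][q];
        }
    }
    return probabilities[numPeople][totalEvents];
}

Since each row only depends on the previous one, keeping just two rows would cut the space usage to O(K).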

Reconstructing The General Solution

We want to find the smallest N such that V(N, 365, 365) > 0.5. We could start from N = 365 and then keep trying larger values of N: since K is constant this is actually not that bad as we can reuse the results of computations from previous attempts when trying new values of N.

public static void main(String[] args) {
    // Assumes findSuccessProbability is memoised; the naive version is far too slow here.
    long numPeople = 365;
    double successProbability = findSuccessProbability(numPeople, 365, 365);
    while (successProbability <= 0.5) {
        numPeople++;
        successProbability = findSuccessProbability(numPeople, 365, 365);
    }
    System.out.println(numPeople);
}

This gives us our answer: 2287. We have V(2286, 365, 365) = 0.499414 \ldots while V(2287, 365, 365) = 0.500371 \ldots.

This approach looks a bit primitive – and normally we’d use a modified binary search where we first search for an upper bound via something that grows exponentially, and then binary search up to our bound. This works, since V(p, q, r) increases as p increases for constant q, r.
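A minimal sketch of that search, assuming a memoised findSuccessProbability and relying on this monotonicity:

private static long findMinimumPeople() {
    long low = 365;  // known to be insufficient: V(365, 365, 365) is essentially 0
    long high = 2 * low;
    while (findSuccessProbability(high, 365, 365) <= 0.5) {
        low = high;  // still not likely enough; double the bound
        high *= 2;
    }
    // Invariant: V(low) <= 0.5 < V(high). Binary search for the boundary.
    while (low + 1 < high) {
        long mid = (low + high) / 2;
        if (findSuccessProbability(mid, 365, 365) <= 0.5) {
            low = mid;
        } else {
            high = mid;
        }
    }
    return high; // the smallest N with V(N, 365, 365) > 0.5
}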

However, for this specific problem – and more generally for recurrences where parameters decrease by 1 – evaluating V at the bound still requires burning through almost all of the smaller subproblems we hoped to skip, so binary search won’t actually buy us that much here.

Usually when working with recurrences it is nice to find a closed-form solution for our expression (i.e. we have a general expression for V(p, q, r) that doesn’t itself reference V), though I didn’t really see how to make progress with this one.

Lessons from a Purple Dragon: Exploring Game Search Spaces

I think I first played the first Spyro the Dragon game when I was about seven years old. I’m not sure what attracted me to the game. It might have been the idea of collecting bright, shiny gems, the rather gentle learning curve (unlike the Crash Bandicoot, Contra or Metal Slug games that I remember having access to on the PS1 at that time) or perhaps the exploration required to complete a 100% clear of the game.

More recently, a collection of all three games in remastered form was released on PC. I never actually got round to playing Spyro 3, for some reason, and the first two games were pretty enjoyable, so I purchased it. It was available on Steam for £34.99, though I searched elsewhere and managed to procure it for just £13.10.

In Spyro 1, the titular purple dragon has flame breath, is able to charge at enemies with his horns, and can jump and glide around (though he cannot fly). Levels in Spyro 1 generally feature a mostly linear path from a level’s starting point to its exit, with enemies and obstacles along the path; figuring out how to traverse this path can be interesting, in that one usually needs to understand how enemy attacks work, and/or how Spyro can navigate relevant obstacles, typically through judicious platforming.

However, reaching each level’s exit is rarely the end goal. Typically, levels contain a number of gems (and sometimes dragon eggs as well), and collecting these requires exploration beyond the main route. Although I certainly wasn’t familiar with the abstract concept at that time, when I first played these levels as a seven-year-old I would usually perform a depth-first search. I would head down an interesting path until it reached a dead end or looped back to somewhere I’d already explored, and then continue to a new path that hadn’t been searched yet.
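A minimal sketch of that routine in Java (Area, neighbours() and collectGemsAt() are hypothetical stand-ins, not real APIs):

import java.util.HashSet;
import java.util.Set;

// Invoked as: explore(levelStart, new HashSet<>());
static void explore(Area current, Set<Area> visited) {
    if (!visited.add(current)) {
        return; // looped back to somewhere already explored
    }
    collectGemsAt(current); // hypothetical: deal with whatever is interesting here
    for (Area next : current.neighbours()) {
        explore(next, visited); // head down each onward path in turn
    }
}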

Depth-first search is naturally a reasonable way to explore a game world. Since players can’t teleport freely, breadth-first search is generally not an option. Iterative deepening-style approaches could make sense, since it is usually possible to exit and re-enter levels from the beginning; they pay off when paths are uni-directional, or when there are frequently very long routes that don’t need to be explored on the way from one area of interest to another – but neither is generally true of Spyro level designs. Thus, for Spyro I don’t think this core process has changed very much.

What keeps Spyro interesting for me tends not to be overcoming the obstacles on a given path (with some exceptions), but instead identifying legitimate paths that can be plausibly explored. A simple example is in one of the areas in World 1, Town Square: there is an optional area accessed by gliding down a staircase and around a wall. This eluded me for a long time when I was seven, as clearing most of the levels (in the sense of reaching the exit, not 100% completion) can be done by gliding in mostly straight lines.

Typically, the game will give you hints that such hidden areas exist. Most obviously, each level has a number of gems to collect, and this number is known to the player – so shortfalls typically indicate that there are still hidden areas to be found (provided the player has meticulously collected everything in the areas they have already explored). It is also fairly common for some gems to be visible from one of the main paths, even if it is not made obvious how to reach them. With the Town Square example, one of the dragons in the area does actually give you a big hint on what to do, though I wasn’t aware of this as I skipped cutscenes then.

In the past, I usually had to resort to walkthroughs or online guides for these answers; I think I mostly just tried a ton of different approaches and saw what worked. I’m pretty confident that’s quite different from the way I’d approach these problems now: these days I use deductive reasoning more heavily – perhaps experience with logic puzzle contests has helped. Instead of simply exploring paths that come to light and trying things that look encouraging, I apply a perhaps more principled approach to identifying suitable routes once the obvious methods have been tried. I consider the usual platforming techniques that may be appropriate and test them against the level setup (Is there a high point I could glide from? Could there be some non-obvious platforms that are actually safe to land on? If there are supercharge ramps that increase Spyro’s running and jumping speed, can I use them? Can I combine them?). This way, I’ve been able to solve quite a number of levels that I wasn’t able to in the past.

Some of this reminds me of a book I read earlier this year, Problem Solving 101: A Simple Book For Smart People by Ken Watanabe. The techniques introduced there for making decisions on how to approach problems are applicable to clearing a Spyro level – for various root causes/problems what one should do varies, and it is probably more expedient to think through what needs to be considered in a principled way than scurry off aggressively pursuing whatever solution comes to mind. I don’t think I’ve ever been faced with a level sufficiently difficult to need drawing a full-blown logic tree out, but even mapping out such a tree in my mind helps a lot.

Algorithmic Modelling – Touhou Project 11 (Subterranean Animism)

The Touhou Project is a series of “bullet hell” shoot-em-up games. In these games, the player controls a character within a 2D plane and needs to dodge large quantities of bullets. These games tend to be fairly difficult, testing players’ reflexes and in some cases logic as well (for example, many patterns are aimed at or relative to the player’s position; misdirecting such patterns can be useful).

I wouldn’t say my reflexes are very good. Nonetheless, good progress can be made using careful resource management; players are given tools in the form of lives (extra chances after getting hit) and bombs (single-use abilities that clear the screen and deal damage to enemies). The eleventh installment of the series is called Subterranean Animism (SA), and I’m choosing to look at it for this post because it is widely regarded as the hardest game in the series to clear on normal difficulty. For most of these games (on normal), I can just sit down, play the game, dodge bullets and win. There were two exceptions – the fifteenth entry Legacy of Lunatic Kingdom (but even then that only took about five attempts), and SA. SA required a nontrivial amount of planning – I had to learn some of the specific patterns, and also chose a shot type I wouldn’t normally pick.

I generally play Touhou games on normal with an aim to complete the game on a single credit; this is called a “1 Credit Clear” or 1CC. I’m generally somewhere in between difficulties; I’m fairly comfortable with Normal in most cases, but Hard is hard (the game also has an even harder Lunatic mode). I’ve successfully completed 1CCs of most of the games in the series on Normal, and a few of the easier ones (7, 8, 10) on Hard. SA was the toughest game in the series for me to 1CC; it is also the last one I did, at least from installments 6 through 16.

Resource Management

Touhou games usually start the player with two spare lives; this is true in SA as well. However, the bomb mechanic is different from other games, which give the character a fixed number of bombs per life. In SA, players sacrifice some of their shot power when using a bomb. A character’s shot power usually ranges from 0.00 to 4.00; this is increased by collecting powerups when fighting stages or bosses. Firing off a bomb costs 1.00 shot power (and cannot be done if one is below 1.00). This can be frustrating, as some patterns become more than proportionally harder if the player’s shot power is low. When a character is hit, she (the games feature an all-female cast) will drop powerup items such that shot power will be reset to at least 2.25 (higher if shot power was at least 3.00). There is an exception – if it is the character’s last life, a full powerup item will drop that sets shot power to maximum.

The game also has mechanics for earning additional lives. In SA, boss enemies have a staged health bar with multiple patterns; if the player defeats a pattern within the time limit without being hit, a life fragment will drop, and five life fragments result in an extra life. Bombs may be used without forfeiting the fragment.

A Touhou game is divided into six stages; typically stages 1 through 3 are mostly a non-event for me. That said, for SA, the boss of Stage 3 has a few fairly nasty attacks. Most of my aforementioned “blind” or casual 1CCs involve racking up large stocks of lives and bombs on these stages, and then utilising these aggressively in the later stages. We can see this on SA, as well as on what is often regarded as one of the easier entries in the series, Imperishable Night (IN); the first death on SA is at the end of stage 4 while that on IN is at the end of stage 5. That said, I’m actually already failing to dodge patterns as early as stage 3 or 4. It’s important to be willing to use bombs to deal with difficult patterns, as they are much easier to recover (by subsequently picking up powerup items) in SA. This becomes even more important in other games like IN, where bombs that are unused when a player is hit just go away.

Character Selection

Touhou games usually give the player a choice of multiple player characters, and sometimes for each character different weaponry. Typically, different characters have different movement speed and possibly some other advantages or disadvantages, like having a smaller hit-box or extra bombs. In terms of weaponry, players may select from different normal shots and bombs, which usually have balanced trade-offs. For example, one may pick a homing shot which does less damage but can hit enemies anywhere on the screen, or a shot that only shoots straight ahead but does more damage.

Earlier, I mentioned being able to sit down and just play the game; in most cases this involves the main character of the series, called Reimu, who usually (and in SA) has relatively slow movement and a small hit box. I also normally use her homing shots for a first playthrough; I usually prefer straight shots once I’m more comfortable with a game, but those don’t quite exist in SA.

Apart from slightly different shooting and movement mechanics, many Touhou games also feature a medium to late stage boss (often on Stage 4) which adapts her patterns to the player’s character selection. This is on full display in SA as well; the Stage 4 (out of 6) boss has a relatively easy warm-up battle that is static, before reading the player character’s mind and creating patterns from that (which differ depending on the character and shot type).

Most of the time, the different variants of patterns the boss uses are quite balanced. However, this isn’t the case in SA, and that influenced my selection. Although I find ReimuA (with straight shots) best equipped to handle the Stage 5 and 6 bosses, the Stage 4 fight one gets by making this choice is extremely difficult – I’d say possibly even harder than the later bosses. One of those attacks, Double Black Death Butterfly, has some of its butterfly bullets rotating inwards, so one needs to dodge bullets possibly coming from behind as well. I thus picked ReimuB (who has a weakly homing shot, a relatively easy Stage 4 fight and a special ability to gather powerups from anywhere on the screen) for my 1CC.

Learning the Patterns

Of course, even with careful resource management it’s unlikely that one can perform a 1CC if one’s dodging skills are too far below the bar. While some of the patterns are random and/or based mainly on reflexes, others have a certain trick to them that makes the pattern a lot easier once figured out. With experience, one can figure out common elements and ways to deal with them (for example, a stream of bullets fired at the character’s position; move slowly, but to change directions make a sudden quick movement to open up a gap in the stream) – this drives most of the “sight-read” clears.

In a sense, good resource management becomes less critical the better one’s dodging is; one could ignore the resource system entirely if one could reliably dodge every single pattern in the game. Conversely, it’s actually possible to clear one of these games even while being quite poor at dodging, by making good use of the resources one has.

Algorithmic Modelling – Achievement Hunting (Slay the Spire, Part 2)

Many modern video games feature an achievements system. Achievements are typically awarded for completing specific tasks; unlocking all of them usually requires doing substantially more than what is needed just to finish the game. In many cases, it’s not even possible to complete all of the achievements in one playthrough.

In a previous post, I wrote about Slay the Spire, an RPG where combat is driven by a deckbuilding mechanic. The game also features an achievements system. I finished getting 100% completion on these achievements this week – this took quite a long time. If I wanted to start from the beginning and accomplish these as quickly as possible, I would need to plan carefully. The main issue here is that achievements are not independent. For example, there are chains of achievements where the final achievement subsumes all previous ones (e.g. “have 99 block, have 999 block” or “unlock ascension mode, finish ascension 10, finish ascension 20”). Furthermore, some achievements can be achieved using the same strategy (e.g. “play 25 cards in a single turn” and “defeat a boss before it takes a turn”, which can both be achieved by building an infinite combo). Other achievements can often be significant handicaps (e.g. “beat the game with a deck that has no uncommons or rares”, “beat the game with only 1 relic”) and I thus think it’s a lot harder to combine these with other achievements.

The game is played in distinct runs, and a majority of the achievements involve performing specific feats on individual runs. We can model this by identifying strategies that unlock one or more achievements. If we try to use the smallest number of distinct approaches, this becomes what’s called a set cover problem. There are achievements A = \lbrace 1, 2, \ldots, N \rbrace and strategies S = \lbrace S_1, S_2, \ldots, S_M \rbrace, with each S_i being a non-empty subset of A. We also require that \bigcup_{i=1}^M S_i = A, otherwise not all achievements are attainable. The goal is to identify some set of strategies to deploy, S' \subseteq S, such that \bigcup_{s' \in S'} s' = A. Trivially, picking everything is possible, but it’s very unlikely that that’s going to be minimal.

This problem is known to be NP-hard; the decision version (does there exist a solution using k strategies or fewer?) is NP-complete. Clearly, we can obtain a solution in time exponential in |S| (with a polynomial factor for checking each candidate against A) by iterating through every possible subset of S, checking whether it covers A, and remembering the best solution we’ve seen so far. There’s a little dynamic programming trick we can do to memoise partial achievement states (e.g. if I can reach the state where I have the first 8 achievements and no others using two runs with one strategy but three with another, I’ll always pick the first). This yields a solution exponential in |A| but linear in |S|, which is an improvement. We can also try to pre-process the input for A and S a little, by eliminating any achievements that are subsumed by harder achievements, and any strategies that are strictly dominated by others.

If this is unacceptable, we can try approximation algorithms: for example, the standard greedy algorithm, which repeatedly picks the strategy covering the most outstanding achievements, doesn’t always fare too poorly. We can also attempt to leverage constraint solving systems; these don’t deal with the exponentiality of the problem directly, but they often have good heuristics for coping with it.

That said, this isn’t a very good model, as some strategies are riskier to implement than others. It may be theoretically possible to combine “beat the game with one relic” and “beat the game without uncommon or rare cards”, but pulling both off in the same run seems unlikely. If each of these individually has a 1% chance of being completed while attempting them together has just a 0.1% chance, doing the achievements separately will take an expected 200 runs while the combined approach has an expectation of 1000.

A natural extension to this is to consider the weighted version of the problem. We change the formulation slightly, so each strategy has a weight w_s and we seek to minimise the total weight of the strategies we pick. If we assign a strategy a given probability of success p, since runs are (mostly) independent, the expected number of runs it takes to attain a successful outcome would be 1/p following standard geometric distribution results. The dynamic programming based algorithm still works if we keep track of weights instead of number of sets.
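As a sketch of this (the representation is my own: achievements are bits in a mask, and a strategy’s weight is its expected run count 1/p), a single increasing pass over the masks suffices, since applying a strategy only ever adds bits:

import java.util.Arrays;

static double minimumExpectedRuns(int numAchievements, int[] strategyMasks, double[] successProbabilities) {
    int numStates = 1 << numAchievements;
    double[] best = new double[numStates]; // best[mask]: cheapest expected run count achieving exactly this set
    Arrays.fill(best, Double.POSITIVE_INFINITY);
    best[0] = 0.0; // nothing unlocked yet, no runs needed
    for (int state = 0; state < numStates; state++) {
        if (Double.isInfinite(best[state])) {
            continue; // unreachable combination
        }
        for (int i = 0; i < strategyMasks.length; i++) {
            int next = state | strategyMasks[i]; // add this strategy's achievements
            double cost = best[state] + 1.0 / successProbabilities[i];
            best[next] = Math.min(best[next], cost);
        }
    }
    return best[numStates - 1]; // every achievement unlocked
}

Setting every weight to 1 recovers the unweighted count from before; either way, this is only practical for a modest |A|, given the 2^{|A|} states.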

This is better, but there are still a few issues with the model. Firstly, partial successes haven’t been considered in this model; we might try a risky strategy for three achievements and fail, but still complete two of them. We could try to account for these by figuring out some kind of state transition between achievement states – it wouldn’t be a set cover problem any more. We would be building some kind of a discrete-time Markov chain on-the-fly.

Modelling this as a set cover problem also seems to imply that our algorithm is offline; that is, it formulates a strategy and then executes it. However, Slay the Spire has a significant luck element, especially if one is trying for specific achievements, and thus an offline algorithm doesn’t work so well. For example, one achievement requires “play 10 shivs on a single turn” and another “have an enemy reach 99 poison”; the cards and relics that support these are quite different, and depending on what cards are offered as the game progresses, one achievement or the other could become easy or nigh-impossible. If both achievements remain to be unlocked, it seems suboptimal to commit early on.

Algorithmic Modelling – The Game: Extreme

I had a weekend break in Zurich to meet up with a friend who works there. We often play board games when we meet, so I brought Hanabi and FUSE (very compact games, suitable for travelling – especially since I almost always travel HLO) but my friend was thinking of getting something different. We thus headed down to a game shop, and picked up The Game: Extreme (Board Game Geek).

The rules are fairly simple. There is a deck of 98 cards – one copy each of the integers from 2 to 99, inclusive. Players have some number of cards in their hand (eight for a solo game, seven for two players), and play them to the table; the game ends in a loss if a player who must play is unable to, and in a win if all cards are played. It is a cooperative game; players all win or lose together. The rulebook also mentions a half-win if players finish with ten cards or fewer remaining.

There are four communal piles that players can play cards to. On two of these, cards played should be numerically increasing; on the other two, they should be numerically decreasing. However, one may also go against the direction of a pile – this can only be done if the jump is exactly 10. Thus, for example, if an increasing pile has a 57 on top, one may play a 59 (increasing) or a 47 (ten less), but may not play a 49 (not increasing, and not ten less). Of course, if a player has both the 59 and 49, the player may play the 59, after which the 49 becomes legal.
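A quick sketch of this legality check (the representation here is mine):

static boolean isPlayable(int card, int topOfPile, boolean increasingPile) {
    if (increasingPile) {
        return card > topOfPile || card == topOfPile - 10; // normal direction, or a reverse jump of exactly 10
    }
    return card < topOfPile || card == topOfPile + 10;
}

With 57 atop an increasing pile, isPlayable(59, 57, true) and isPlayable(47, 57, true) both hold, while isPlayable(49, 57, true) does not – matching the example above.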

One can probably write some search-based algorithm to figure out the best possible outcome for a given hand and an individual pile, or even a set of piles, given some fitness function. There’s also a straightforward dynamic programming optimisation, and for certain classes of fitness functions there might be better principles one can leverage.

Players must typically play two cards on their turn, though they may play more (for example, if they have a chain of cards that involves several reverse jumps) while there are still cards in the deck; once that is emptied, they only need to play one. Once they have played all of their cards, they draw back to their hand size and play passes to the next player.

The Extreme version adds special effects to some of the number cards. These may either restrict the current player’s turn (requiring that they stop immediately, play exactly three cards, or immediately cover up the card that was played) or affect all players while they are exposed (requiring players keep quiet, play all the cards in their turn to the same pile, not do reverse jumps or only draw 1 card at the end of their turn). Apart from the stop card, all of these make the rules for playing more restrictive (and thus I’d say this is somewhat harder than the base game).

The solo game doesn’t generally seem too difficult as one can plan and execute sequences of reverse jumps without interference from others. The keep-quiet special effect no longer really matters, and the stop cards actually become positive as they allow for drawing cards back up earlier. I’m not sure how one might formulate an algorithm for determining optimal moves, but I can generally see the following principles:

  • Play the minimum number of cards needed per turn, and then draw back up. This lets you see more options, which is useful. Normally in a multiplayer game you may want to play more than the minimum to ensure that no one interferes with your plan, but in single player no one will anyway.
  • Plan and aggressively use reverse jumps. These often get thwarted in multiplayer games (you might draw a card that allows a reverse jump, but someone might need to play on the relevant pile before it gets to your turn).

Playing with more than one player gets trickier, as communication between players is permitted but precise numerical information is not. I would say things that imply precise numerical information (e.g. “I can jump this pile”) are also bad. There are edge cases which are probably acceptable (e.g. “it makes no difference to me; which of these would you prefer I play to?” with an up-65 and down-67 suggesting I hold the 66). Still, talk of “small/large jumps” and actively saying “stop” after interesting cards are played is very useful.

Cooperating well involves looking ahead to other players’ turns as well; it’s not always optimal for each player, on their turn, to individually minimise “damage” to the piles. There have been times where, being forced to jump a pile by 30 or so, I’ve encouraged my teammate to play more cards there since it would be destroyed on my turn anyway. We only played with two players, but I imagine the dynamics get more interesting with larger groups.

The Game: Extreme is a very light game; while there is some minor logic involved it’s not particularly complex. It’s fairly fun and witnessing successful chains can be enjoyable; the special effect cards can also cause annoying obstacles (while blocking reverse jumps is probably the most damaging, the keep-quiet one is interesting as it changes the game for a while). It’s certainly a fun thing to play while passing time and discussing other things, and hard enough to provide a good challenge without being frustrating.

Algorithmic Modelling – Alphabear 2

Alphabear 2 is a word game where players are given a two-dimensional grid of tiles. Some of these tiles show letters, along with timers; others are concealed and must first be exposed by using an adjacent letter in a word. Players use letter tiles (anywhere on the grid) to form a word. When a player makes a word, he/she scores points equivalent to the sum of the timers on the tiles that are used. The timers on all tiles that aren’t used then count down by 1; tiles that reach zero become unusable stone blocks.

Usually, this means that you want to form long words that incorporate tiles that have shorter timers; for example, there is a ten letter word on the above board (PEDESTALED), but it doesn’t use the F. After playing this word, the timers on the remaining letters will count down by 1; the P will now have a timer of 2, the first E 10, and so on.

However, the game does have various mechanics that mean this isn’t always the best strategy. For example, one needs to ensure that there are actually enough letter tiles to work with, and that the distribution of letter tiles allows reasonable words to be formed. The game also has players choose bears for each mission which have special abilities such as awarding additional points for certain words. Actually, in hindsight FEASTED might have been a better choice than DEADLIFTS, as I was using a bear that gives a point bonus for words ending in ED.

In addition to scoring points by making words, players also receive bonus points for the largest fully cleared rectangular area, which the game identifies for you. I’m not sure how this is actually implemented, though boards are large enough (15 by 15?) that it’s probably not the naive algorithm of generating and testing all rectangles; where N is the width of the board, that would be O(N^6). There is a DP-based approach where one keeps track of counts of stone blocks, which slashes this to O(N^4) and would probably be enough. With more clever approaches it’s possible to knock the time complexity down to O(N^2) – and that’s probably optimal, as doing any less would involve not actually reading some parts of the grid.
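One such O(N^2) approach is the standard largest-rectangle-in-a-histogram technique, sketched here under an assumed representation where stones[r][c] marks a stone block: treat each row as the base of a histogram of consecutive stone-free cells, and scan each histogram with a stack.

import java.util.ArrayDeque;
import java.util.Deque;

static int largestClearedArea(boolean[][] stones) {
    int rows = stones.length;
    int cols = stones[0].length;
    int[] heights = new int[cols]; // stone-free cells ending at the current row
    int best = 0;
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            heights[c] = stones[r][c] ? 0 : heights[c] + 1;
        }
        best = Math.max(best, largestRectangleInHistogram(heights));
    }
    return best;
}

static int largestRectangleInHistogram(int[] heights) {
    Deque<Integer> stack = new ArrayDeque<>(); // indices with increasing heights
    int best = 0;
    for (int i = 0; i <= heights.length; i++) {
        int current = (i == heights.length) ? 0 : heights[i]; // a final 0 flushes the stack
        while (!stack.isEmpty() && heights[stack.peek()] >= current) {
            int height = heights[stack.pop()];
            int width = stack.isEmpty() ? i : i - stack.peek() - 1;
            best = Math.max(best, height * width);
        }
        stack.push(i);
    }
    return best;
}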

Most of the time, this isn’t something to worry about too much. I’m able to clear most levels with no stones at all. However, if we need to take a stone, ideally it should be as far away from the center of the board as possible (note that only the largest rectangular area counts).

Defining optimal play in general is difficult, owing to the spatial reasoning required by the largest clear area bonus, uncertainty of tiles that are yet to be revealed and sheer breadth of options when faced with many open tiles.

Before starting a round, the player has to select bears to use. Optimal decisions here largely depend on the specific challenge being played. A bear that scores bonus points for six-letter words isn’t particularly useful on a tight board where one has only a few open tiles at a time; some bears add time to timed boards, which isn’t useful if the board has no time limit. Furthermore, players select up to three bears each round; these bears can have useful synergies. In particular, I like a combination of “summon three Zs” and “score triple points for words including JQXZ (not stacking, unfortunately)”; “summon ING” or “summon ED” goes quite well with these two as well. I’ve used these to complete a mission deemed “impossible” given the strength of my bears at the time.

We can perhaps begin by focusing more narrowly on optimal play of an Alphabear endgame (where all tiles have been revealed), ignoring special abilities bears may have for now and assuming that the board has a solution. As an example, consider a board whose fifteen remaining tiles spell out WORKSHOPPED plus RAII.

From this point on, there is no more uncertainty as to what letters we have to use. Interestingly, this board shows that a greedy algorithm where one tries to play the longest possible words doesn’t work. One could begin WORKSHOPPED, leaving RAII, but this is actually an unwinnable state. There are several two-turn solutions, like ROADWORKS + HIPPIE, or WORSHIPPED + KORAI. (Obviously, this is very high-level play; I think I played SKIPPER with this board. That leaves RODIAHWO, for which I’d probably have played something like RADIO + HOW, though HAIRDO + WO is better.)

Given a multiset of pairs of letters and timers L = (l,t)^+, we want to return a suitably ordered list of words S = \left[ l^+ \right] that satisfies the following properties:

  • legitimacy: every word s \in S is included in the game’s dictionary.
  • resourcing: words do not use tiles that aren’t in L.
  • timing: for each pair (l, t) \in L, l is used in a word before or on position t.

The above set of properties is not correctly specified without a refinement to deal with duplicate letters. There are several ways to deal with this; assuming low cardinality, one approach could be to recognise all duplicate instances of letters as separate letters, and suitably expand the dictionary to allow this. For instance, given the tiles (A,3), (B,1), (B,1) and (E,1), our list becomes L = \left[ (A,3), (B_1,1), (B_2,1), (E, 1) \right], and the dictionary recognises both B_1 E and B_2 E.

I don’t have a good way of finding an optimal solution; I suspect the problem is actually NP-complete; it feels like SAT or 3SAT may be reducible to this problem, though I haven’t worked through the details. Nonetheless, there are several search heuristics we can try. Clearly, on a given turn we must use everything which now has a timer of 1. Also, although this board is an example of why you don’t want to be greedy, in general it probably makes sense to do some kind of ordered backtracking, where one expands the most promising nodes first. I can imagine an A* search could actually work well, though finding an admissible heuristic could be hard.

There’s a fourth property we want as well:

  • optimality: the score produced by the words is maximal.

Going back to our sample board, WORSHIPPED and KORAI will score one more point than ROADWORKS and HIPPIE, because there is one fewer timer tick. In general, we want to minimise the number of timer ticks by playing longer words upfront where possible, as this reduces the number of timers that are still ticking. Once one has devised a feasible solution, it may make sense to perform some iterative optimisation by seeing if it’s possible to bring letters forward. Of course, that approach may get us stuck in a local optimum. I could see some kind of iterative hill-climbing algorithm from the top-n admissible solutions (as found by our ordered backtracking) yielding decent results.

Unfortunately my vocabulary and anagram skills are far from good enough to handle the required complexity, especially under time pressure. Most of the time, successfully playing every letter on the board is enough to complete the mission’s objectives. Interestingly, it looks like the game wasn’t really programmed to deal properly with successfully using every letter but failing the mission. The game claims that there’ll be an additional reward on the reward screen when identifying the board as clear; yet, there is no reward screen as the mission was failed.

Algorithmic Modelling – Codenames

Codenames is a board game designed by Vlaada Chvatil which tests communication. The game is played with two teams; each team splits itself into clue-givers and guessers.

There is a board of 25 cards, each with one (or more) words on them. Of these cards, 9 are owned by the first team, 8 owned by the second team, 7 are neutral and one is the ‘assassin’.

The clue-givers attempt to communicate to the guessers which cards are owned by their team. This is done by giving a one-word clue followed by the number of cards N corresponding to the clue. The rules establish some bounds on allowed associations (for example, ‘sounds like’ clues are not allowed).

I don’t know how much time went into selecting the words to appear on individual cards, but there are certainly many words in the deck that can be interpreted in many ways, which makes the game fun. For example, I can think of quite a number of ways to clue BOND (COVALENT for unambiguity; DURATION, ASSET or DEBT for the financial instrument; JAMES, AGENT or SEVEN or even at a stretch something like CASINO for the character; FRIEND or FRIENDSHIP; STREET or TUBE; CONTRACT, PROMISE or WORD for the idea of a promise; ADHERE or CONNECT, among others). Which one I might pick would depend on the other words on the board.

Guessers then identify up to N+1 cards they think their team owns. This can be based on the entire history of clues given, not just the previous clue. These cards are guessed one at a time; a team is only allowed to make further guesses if the card just guessed was indeed one of their team’s. Teams may also refrain from making all of their guesses.

The game ends either when all cards owned by a team are revealed (that team wins), or if a team reveals the assassin (that team loses).

In practice, clue-givers will need to consider all cards, not just the ones their team owns. The penalty for ‘failure’ varies; the assassin is an instant loss, and revealing a card owned by the other team is also bad. For example, with the cards APPLE, BANANA and GRAPE it would be very tempting to declare (FRUIT, 3) as a clue; yet, if KIWI is owned by the other team (or worse, is the assassin) it might be dangerous.

However, if the other team has already revealed that KIWI is theirs (e.g. with another clue, maybe SOUTH or BIRD) then the FRUIT clue becomes safe. Thus, pre-computing a strategy at the beginning of the game (e.g. by partitioning the eight or nine clues owned into several logical groups while avoiding other words) may not be optimal.

I tend to consider making an effort to win to be a key part of games, and thus what moves may be considered good also often depends on the state of the game. For example, if the opposing team has just one card left, I will give a broader clue that may have more tenuous links in a ‘do or die’ effort to finish on this turn. A less extreme example would be attempting weaker links if behind, or perhaps playing slightly more conservatively if ahead.

Mathematically modelling Codenames can be tough. We can try modelling the game state as a tuple (O, E, N, x, h, f_O, f_E) where O, E, N are the sets of remaining clue words our own team owns, words the enemy team owns, and neutral words, x is the assassin, h = (t,w,n)^+ is the history of the game, and f_O and f_E are preference functions of the form h, w, n, O, E, N, x \rightarrow P, returning an ordered list of O \cup E \cup N \cup \lbrace x \rbrace. Here (t, w, n) is a clue, meaning that team t gave the clue word w and hinted that n cards were relevant. This already abstracts two difficult parts away – ordering the clues, and determining how many of the top preferences to pick.

I think the preference functions need to take into account previous clues from both teams; if a previous clue could clearly have corresponded to two words and I picked the wrong one, it might be fairly high on the list even if unrelated. Similarly if this scenario happens to the opposing team, I would probably avoid the word that would blatantly be theirs.

The notion of degree of confidence also isn’t captured well by our ordering; going back to the fruit example, having a strict ordering would imply that clues for fewer than 4 items could reliably result in correct guesses (if we knew that APPLE was by a small margin the best pick, we could safely and confidently give (FRUIT, 1) to clue it, which seems wrong). One could imagine modelling these with selection probabilities, though things would be even more complex.

The above representation still seems computationally difficult to work with. The evolution of the preference function as one moves from one state to another is unclear (so in that representation it is fully general), making lookahead difficult. It doesn’t seem like a greedy choice is always best; for example, given eight clues that are reasonably divisible into four pairs, a clue for 4 words taking one element from each pair might be a bad idea if the other words can’t easily be linked.

We can proceed by simplifying the preference functions; a simple approach could be that for each w, n each team has a persistent preference function that returns an ordered list of O_0 \cup E_0 \cup N_0 \cup \lbrace x \rbrace. The preference functions during the game then return the subsequence of that list that contains strictly the words still left in the game. This of course doesn’t take into account past information or clues from the other team.

With this, we can attempt to solve the game using standard techniques; assuming that the vocabulary of clues is bounded (let’s say it must be from the Linux dictionary), a game state is winning for the current team if there exists some word for which the preference function returns everything in O as a prefix. A state is losing if all moves in that state produce a state which is winning for the opposing team.
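A small sketch of that winning-condition check under the simplified model (the representation is mine):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

static boolean winsImmediately(List<String> preferenceOrder, Set<String> ourWords, Set<String> wordsInPlay) {
    // Restrict the persistent preference list to words still on the board.
    List<String> restricted = new ArrayList<>();
    for (String word : preferenceOrder) {
        if (wordsInPlay.contains(word)) {
            restricted.add(word);
        }
    }
    // The clue wins outright if all of our remaining words come first.
    return restricted.size() >= ourWords.size()
            && ourWords.equals(new HashSet<>(restricted.subList(0, ourWords.size())));
}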

We can generalise this to a simple probabilistic model as well; the preference ‘functions’ instead return a discrete random variable that indicates either guessing some word or passing. A simplified model could then, at the start of the game, assign weights to each candidate indicating the relative probability that the candidate would be selected; these can be normalised to account for a pass probability. As words are removed from the board, the probability of the remaining words being selected scales up (we can imagine simple rejection sampling where we discard words that are no longer on the board).

The algorithm for the probability that we get a win from a given state is then slightly more complex (though I think still reasonably covered by standard techniques).

Algorithmic Problem-Solving: On Pigs and Poison

A teammate asked me a computer science / algorithm question over lunch. It took me somewhere between 30 to 40 minutes of thought and discussion with teammates to solve the problem, inclusive of writing about 20 lines of Java; I can see it being a kind of ‘brain teaser’ interview question (and thus generally best avoided). That said, genuinely not having heard this problem before, I think it was interesting to see the thought process I went through to get to the final answer.

You have 1,000 buckets of water, 1 of which is poisoned. It is known that the poison if ingested will kill a pig within 15 minutes. You have one hour to determine which bucket contains the poison. What is the minimum number of pigs that you need, and how would you do it? (Eventually, of course, you want to generalise your answer.)

After some initial “lateral thinking”-style questions (does the poison only kill pigs? is the death timing exact?), I determined that it was intended to be a genuine algorithms problem. As a computer scientist, I then thought of binary encodings. This gives a quick solution to the case where you have just 15 minutes to determine which bucket is poisoned – if you convert each number to a 10-bit binary number (so the first bucket is 0000000000, the second is 0000000001, and so on) and give the first bucket to no pigs, the second to pig number 10, the third to pig number 9, the fourth to pig numbers 9 and 10 and so on, the final state of the pigs 15 minutes later allows you to determine the poisoned bucket. If we used this method and only pigs 5 and 7 died, then the poisoned bucket has binary representation 0000101000, which encodes 40 – so it is the forty-first bucket. In any case, we know the answer is bounded above by 10.
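A tiny sketch of this encoding (with pigs and buckets 0-indexed by bit position for simplicity, unlike the 1-indexed prose above):

import java.util.ArrayList;
import java.util.List;

static List<Integer> pigsDrinkingFrom(int bucket, int numPigs) {
    List<Integer> pigs = new ArrayList<>();
    for (int pig = 0; pig < numPigs; pig++) {
        if (((bucket >> pig) & 1) == 1) { // this pig drinks iff its bit is set
            pigs.add(pig);
        }
    }
    return pigs;
}

Fifteen minutes later, reading the set of dead pigs back as bits recovers the poisoned bucket’s index.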

I then considered ways of extending this solution given that we can run four trials instead of one. I did consider partial experiments, but didn’t see a way this would be helpful. One quick possibility we came up with was to linearly split the buckets among a number of pigs to cut down the search space – though this risks losing one pig at each iteration. Still, we determined that 6 pigs would work:

  • On trial 1, you have 1000 buckets and 6 pigs – divide into 7 groups of at most 143.
  • On trial 2, you have 143 buckets and possibly 5 pigs – divide into 6 groups of at most 24.
  • On trial 3, you have 24 buckets and possibly 4 pigs – divide into 5 groups of at most 5.
  • On trial 4, you have 5 buckets and possibly 3 pigs – the binary method can distinguish 8 buckets.

Yet, this same method wouldn’t work with 5 pigs. We were ready to settle on 6 as the answer, but the question asker revealed that it was possible to do better than 6. The idea of imbalanced groups then came up – the ‘no risk’ group of buckets should be larger, because in that case we could guarantee that we would still have all our pigs if we needed to investigate that group.

I guess I haven’t done algorithm interviews for a good while, and it thus took me to this point before I thought of setting up a recurrence relation. Define W(n, p) as the maximum number of buckets distinguishable with n trials and p pigs remaining. I then set up this relation:

W(n, p) = \begin{cases} 2^p & n = 1 \\ 1 & p = 0 \\ W(n-1, p) + pW(n-1,p-1) & \text{otherwise} \end{cases}

The last case comes about because we can safely allocate a group of W(n-1, p) buckets to be given to no pigs, since with one fewer trial we can still distinguish between them. Additionally, each of the p pigs can individually be given its own group of W(n-1, p-1) buckets; if that pig dies, we continue the investigation on its group with one fewer trial and one fewer pig, hence the pW(n-1,p-1) term.

This is more efficient, but doesn’t quite get us there yet. I noted that W(4, 5) under this recurrence still falls short of 1000, which established that strategies based on risking one pig in every round apart from the last would be insufficient. Thinking a little more about it, I realised that in the extreme case, you could always have a special bucket each round that you gave to all pigs – if they all died that round, we would know that bucket was poisoned. However, you couldn’t have more than one of these per round – if all the pigs died, you wouldn’t know which of the special buckets was the cause.

More generally, you could split the groups of buckets into groups that are given to zero, one, two… up to p pigs, where p is the number of pigs still left. The groups given to smaller numbers of pigs could likely be larger, to account for the additional resources available in later rounds. The following recurrence relation then drops out:

W(n, p) = \begin{cases} 2^p & n = 1 \\ 1 & p = 0 \\ \sum_{i=0}^p \binom{p}{i} W(n-1, p-i) & \text{otherwise} \end{cases}

This actually has a closed form solution. I didn’t notice it when looking at the recurrence, though it became apparent when I coded up a program to calculate the required W(4, 5) = 3125. I was surprised to see how much additional discriminatory power five pigs had, though four wouldn’t be sufficient. Interestingly, five pigs and three periods were sufficient. The general closed form solution is simply W(n, p) = (n + 1)^p and our argument provides a method for actually identifying the relevant bucket.
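A reconstruction of such a program might look like the following (my own sketch, not necessarily the original); the binomial coefficient is updated incrementally, and the division is exact because C(p, i+1) = C(p, i) \times (p - i) / (i + 1):

static long distinguishableBuckets(int trials, int pigs) {
    if (pigs == 0) {
        return 1;
    }
    if (trials == 1) {
        return 1L << pigs; // one trial left: plain binary encoding
    }
    long total = 0;
    long binomial = 1; // C(pigs, i), updated as i increases
    for (int i = 0; i <= pigs; i++) {
        total += binomial * distinguishableBuckets(trials - 1, pigs - i);
        binomial = binomial * (pigs - i) / (i + 1);
    }
    return total;
}

Indeed, distinguishableBuckets(4, 5) returns 3125 = (4 + 1)^5.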

Finally, an interesting challenge here was to establish that it wasn’t possible to achieve this with four pigs (so that 5 would be the minimum). One way to do this is to reason in terms of information theory – a pig either survives all the way or dies after one specific round, and then it stays dead. That’s n + 1 possible observations per pig, and there are p pigs, so we can’t hope to distinguish more than (n+1)^p buckets. Clearly 5^4 = 625 < 1000, so four pigs isn’t enough; this expression also matches the value obtained by our recurrence-based method, suggesting that it is indeed optimal.

Static Single Assignments (January Haskell Tests 2018 – Part 2)

Second in a two-part series; see Constant Propagation (January Haskell Tests 2018 Part 1) for the first part, which covered the assessed portion of the test.

In January this year, Imperial College’s first-year computing students were tasked with implementing a tool to handle constant propagation for statements in static single assignment (SSA) form. In particular, this means that each variable is only assigned to once, and there are no compound expressions. This is useful for optimising compilers to simplify the machine code that a program compiles down to.

Of course, being required to write everything in SSA form is painful; as much as I like immutable objects, mandating their use can be costly (consider a large set, for instance, that periodically receives updates). Banning compound statements can also be troublesome; hence, SSA compilation is typically automated.

The test concluded with a challenge tasking students to implement support for converting a more general program into SSA form. The students had been working with a simple imperative language with statements to assign values to integer variables (possibly doing mathematical operations), conditional statements (if-then-else) and do-while loops.

This problem naturally decomposes into three sub-tasks: supporting SSA translation for basic blocks, conditionals and then loops. The first problem can also readily be split into two sub-problems: ensuring variables are immutable and then handling compound expressions.

Basic Blocks

Immutable Variables

I opted to begin with immutability here, as this would allow for easy testing. This is done by creating multiple versions of a variable. For example,

a = 10;
b = a + 2;
b = a + b + 8;
c = a + b;
a = c + 3;

could become

a1 = 10;
b1 = a1 + 2;
b2 = a1 + b1 + 8;
c1 = a1 + b2;
a2 = c1 + 3;

Essentially, one could implement this by keeping track of the latest version of each variable used, and incrementing the version whenever processing an assignment. Whenever handling the right-hand side of an assignment, one needs to be able to know how to reference the latest version of a variable.

-- Usage Map for handling new instances of variables
type UsageMap = [(Id, Int)]

-- Retrieves a fresh identifier for the given raw variable.
getIdentifier :: Id -> UsageMap -> (Id, Int, UsageMap)
getIdentifier "$return" map
  = ("$return", 0, map)
getIdentifier id []
  = (id ++ "0", 0, [(id, 1)])
getIdentifier id (binding:bindings)
  | id == bindingId 
    = (id ++ show(bindingCount), bindingCount, (id, bindingCount + 1):bindings)
  | otherwise       
    = (returnedId, returnedCount, binding:returnedBindings)
  where
    (bindingId, bindingCount) = binding
    (returnedId, returnedCount, returnedBindings) = getIdentifier id bindings

I’m pretty sure I wouldn’t have bothered to write a specific UsageMap type when I was still in university, though working in industry seems to have made me think more carefully about typing.

Compound Statements

The version of SSA in the paper also doesn’t allow compound statements (e.g. a = ((b * c) + 5) / d). However, this could be handled reasonably as arithmetic expressions can be represented in tree form.

We can create temporary variables for each non-leaf node of the expression tree. For example, we define k1 = b * c, k2 = k1 + 5 and k3 = k2 / d. Some care is needed in cases where the tree is a bit more balanced (we need to know that the first argument of our plus above is k1).

Also, of course, k is only used for illustration above, as real functions could have a value actually called k! It makes sense to have a counter separate from the rest of the UsageMap, though I relied on some limits on the variable names that were mentioned in the specification. I’m not sure if I would have worked this out in first year!

Conditional Expressions

One needs to consider which versions of a variable are in scope. Let’s consider the following block:

a = 7
b = a * 3
if (b > 20) {
  a = b + 3
  b = 0
} else {
  a = b - 3
  b = 1
}
return a + b

After reaching the end of the if block, we can’t just use the UsageMap we have obtained from that to pass into the else block (since on an execution where we’re going in to the else block, the if block must never have run). Similarly, after reaching the end of else we can’t just use the UsageMap from that (we don’t know if the else block has run).

It is probably easier to visualise this block of code as a tree. Each node represents a possible linear execution path that we may run.

The versions of variables each of the blocks reads upon branching should be the versions from before we entered that block. Thus, when entering the else block, we should reference the state of the UsageMap from before we entered the if block.

When returning, we somehow need to pick one of the final values for variables that were possibly modified. This is difficult to express directly, but we can approximate it with a phi function, where \phi(a, b) nondeterministically returns either a or b. We can thus simply pick a4 = \phi(a2, a3) and b4 = \phi(b2, b3), add these statements, and finally return a4 + b4. Some care needs to be taken with the implementation (for example, one must correctly handle the case where a variable is only possibly modified). This should, for the above block of code, yield:

a1 = 7 
b1 = a1 * 3 
if (b1 > 20) { 
  a2 = b1 + 3
  b2 = 0
} else { 
  a3 = b1 - 3
  b3 = 1 
} 
a4 = PHI(a2, a3)
b4 = PHI(b2, b3)
return a4 + b4

During the implementation I was running out of time, which unfortunately led to this monster of a helper function. It could probably be cleaned up by making a type for ‘return values of a traversal’. It would also probably be reasonable to use some kind of higher-order function over the mod-set of variables (which I computed elsewhere).

-- Compute values to be used for PHI for one variable at a time. The four
-- UsageMaps are the states from before the conditional, after the if
-- branch, after the else branch, and an accumulator for fresh versions.
plantIfPhi :: UsageMap -> UsageMap -> UsageMap -> UsageMap -> [Id] -> (Block, UsageMap)
plantIfPhi original afterIf afterElse final []
  = ([], final)
plantIfPhi original afterIf afterElse accumulator (id:ids)
  = ([Assign newId (Phi (Var varAfterIf) (Var varAfterElse))] ++ otherPhis,
     final)
  where
    oldVersion = lookupWithFailure id original
    versionAfterIf = lookupWithFailure id afterIf
    versionAfterElse = lookupWithFailure id afterElse
    -- if the version is unchanged between the two maps, the else branch
    -- never wrote the variable, so fall back to the version from before
    -- the conditional
    realVersionAfterElse 
      = if versionAfterIf == versionAfterElse
        then oldVersion
        else versionAfterElse
    varAfterIf = id ++ show versionAfterIf
    varAfterElse = id ++ show realVersionAfterElse
    (newId, _, newMap) = getIdentifier id accumulator
    (otherPhis, final) 
      = plantIfPhi original afterIf afterElse newMap ids

Do-While Loops

Finally, we also need to support loops. We can use a control-flow-graph structure similar to what we had for the if-then-else conditional, except that it now contains a cycle.

a = a + 2
b = 0
do {
  a = a + 1
  b = 5
} while (a < 10)

At first glance, this looks tricky. We don’t know the number of times the loop will execute, though we do know it’s at least once.

Which value of a are we reading in the loop body? Well, this is either the value from the initial set-up, or the value the previous loop left at its end. We could thus try to introduce phi-functions:

a2 = a1 + 2
b1 = 0
do {
  a3 = PHI(a2, a4)
  b2 = PHI(b1, b3)
  a4 = a3 + 1
  b3 = 5
} while (a4 < 10)

This program is impossible to execute, of course; the definitions are cyclic. However, we’re not actually trying to execute it. We’re only using this as an intermediate representation to try to statically optimise our code. Of course, if the two arguments to a phi-function are the same, we can eliminate it, mark that value as a constant if known, and then use the information about it being a constant to drive further optimisations.
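A minimal sketch of that elimination step, with Expr and Stmt shapes that are my guesses at a plausible AST rather than the ones from the exercise:

type Id = String

data Expr = Var Id | Lit Int | Phi Expr Expr
  deriving Eq

data Stmt = Assign Id Expr

-- Rewrite a = PHI(x, x) into the plain copy a = x; a fuller pass would
-- propagate the copy and iterate until nothing more changes.
elimTrivialPhi :: Stmt -> Stmt
elimTrivialPhi (Assign v (Phi e1 e2))
  | e1 == e2 = Assign v e1
elimTrivialPhi s = s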

Synthesis

I think this could be a fairly challenging question in and of itself for first-year computing students, though some degree of hand-holding would likely be required (in particular, the tree traversal for splitting out intermediate assignments can be quite fiddly). It would also be worth introducing the concept of basic blocks early on, as reasoning in terms of them makes conditionals and loops easier to handle. It was a rather solid and satisfying conclusion to the original 2017 test.

Balloon Battle (Hotseat: UKIEPC 2017, Part 1)

Overview

The United Kingdom and Ireland Programming Contest (UKIEPC) 2017 took place a few weeks ago. I used to participate semi-actively and had a rather annoyingly consistent habit of finishing in third place at Imperial. I haven’t done contest programming for quite some time, though some of my experience from interviewing candidates at Palantir might have helped.

Teams are given five hours to solve somewhere from eleven to thirteen problems. These problems involve submitting a program (the source will do) to an online judge, which typically runs said program against multiple test cases and evaluates the output of the user’s program in some way. This can entail simply checking strings, or alternatively checking if the given solution achieves some task (consider problem D in this set).

This time round I managed to solve 9 problems, rather coincidentally with a total “penalty time” of 999 minutes. When teams are ranked, the number of problems solved is compared first; to break ties, teams are given penalty time, where solving a problem T minutes after the start of the contest, with F failed attempts, scores T + 20F penalty time (so a problem solved at 100 minutes after two failed attempts contributes 140). The penalty for failed attempts does not accrue until the relevant problem is actually solved, so it is always in one’s interest to solve another problem: however much penalty time it adds, it will not hurt your ranking. This score would have placed second at Imperial (maybe because some of the members of the usual team number 2 have graduated?) and, at the time of writing, 51st out of 522 overall.

Problems

This half of my write-up (I intend to discuss this in two parts) covers what I think are the easier seven problems in the contest, in the order I tackled them. Looking at the metadata on Codeforces, there is a pretty clear drop-off in solve counts after these. Interestingly, F was actually the hardest problem of the ones listed here by that measure (fewest teams solved it), but in my own ordering it only comes third in difficulty; I struggled more with D than with some of the later ones, mainly for implementation reasons.

C: Cued Up

Wrong answer at 9 minutes; correct answer at 18 minutes (10 minutes spent total).

Given a description of the number of snooker balls still on a table, determine the maximum number of possible points remaining to be scored. Initially any ball can be hit (it isn’t known whether the red or coloured balls are “on”), but thereafter players can’t pot 2 consecutive reds, and if a non-red is potted and reds remain on the table, the next ball potted must be red. Assume that only one ball is potted per shot.

This is mainly a straightforward implementation – the main snag is that if only red balls remain, then regardless of how many there are, the right answer is 1. I think I messed up the first time because, in fixing this edge case (which is the last of the sample tests), I accidentally ended up printing “1” before the correct answer on all of the other tests…

I: I Work All Day

Correct answer at 13 minutes (4 minutes spent).

This was basically decoding the problem statement and figuring out that “what’s left over after repeated subtraction” refers to the modulo operator. I probably could have got this one out faster if I were still comfortable with C++.

J: Just a Minim

Correct answer at 17 minutes (4 minutes spent).

Figure out the length of a tune given the lengths of its note values. Again, raw implementation. I’m surprised this even took 4 minutes, though that might well have been because I went through all the sample tests, and also took the “hacky” approach of hardcoding the strings. There was a slightly faster-to-write solution that involved special-casing 0 and writing 1/x for the others. I also might have wasted some time because I was concerned about floating-point rounding errors and so calculated things in terms of sixteenths. (That’s not needed: these numbers have exact binary representations, and the problem limits make a loss of precision very unlikely.)

F: Flipping Coins

Correct answer at 26 minutes (8 minutes spent).

You have n coins, all initially tails up, and k coin flips which you are obliged to exercise – but you can choose which coin to flip. At the end of the game you keep all coins that are heads up. What’s the expected number of coins you can keep (with optimal play)?

This one is about figuring out the right strategy (pretty simple: always flip a coin that is tails up if you can) and then writing a recurrence relation. There are overlapping subproblems, so a quick bit of memoisation is needed, but it is not hard to implement. I’m surprised how much teams appeared to struggle with this problem. I guess it requires a little more familiarity with thinking in terms of probabilities, which has tended to be one of my stronger points at contests like these.
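A memoised sketch of the recurrence under the strategy above (a reconstruction, not my contest submission; solve n k is the expected number of heads kept with n coins and k compulsory flips):

import Data.Array

solve :: Int -> Int -> Double
solve n k = table ! (0, k)
  where
    -- table ! (h, f): expected final heads with h heads up and f flips
    -- left; the lazy array doubles as the memoisation table
    table = array ((0, 0), (n, k))
              [ ((h, f), go h f) | h <- [0 .. n], f <- [0 .. k] ]
    go h 0 = fromIntegral h
    go h f
      | h < n     = 0.5 * table ! (h + 1, f - 1) + 0.5 * table ! (h, f - 1)
      | otherwise = 0.5 * table ! (n, f - 1) + 0.5 * table ! (n - 1, f - 1)

The h < n case flips a tails-up coin; the other case is forced to flip a heads-up coin, risking losing it.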

D: Deranging Hat

Wrong answers at 50, 53, 61 minutes. Correct answer at 89 minutes (63 minutes total spent).

A sorting network consists of comparators; given two indices i and j (importantly, in that order), if the letter at index i is greater than the letter at index j, then the letters at those indices are swapped. Given a string, build a sorting network that would “sort” the traditionally sorted version of that string back into the original string. For example, given the string “dude”, you would need to transform “ddeu” to “dude”, perhaps by comparing indices 4 and 2 (and swapping), and then 3 and 4 (and swapping). The sorting network can’t contain more than roughly 10 times as many comparators as the string has letters, in the worst case.

It took me quite a while to figure out what the problem statement meant. I came up with a reasonable idea (swap each letter into its right place, iterating through the string once), but was needlessly worried about performance and started doing a lot of fiddly book-keeping with Maps of SortedSets; there was a difficult-to-trace bug in there. I then gave up some time after 61 minutes and reimplemented things with a much simpler temporary char-array solution, which worked.

A: Alien Sunset

Correct answer at 104 minutes (15 minutes spent).

Given day lengths on multiple planets and times of sunrise/sunset, find the first time within a given time window where it is dark everywhere.

I avoided this at first because it looked like an Interval Tree kind of problem. However, seeing that many teams had solved it, I took a closer look, and it seemed that something in O(planets × length of the allowed time window) would work: simply start from hour 1 and test each hour in the window to see whether it is indeed dark everywhere, returning the first hour for which this holds. There was a bit of a snag with some planets having daylight hours crossing zero, but nothing too difficult to manage.
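Something like this captures the scan; the (dayLength, sunrise, sunset) encoding and the convention that daylight covers [sunrise, sunset) are my assumptions, not the exact input format:

type Planet = (Int, Int, Int)  -- day length, sunrise hour, sunset hour

-- Dark when the hour-of-day falls outside [sunrise, sunset); the second
-- guard handles daylight intervals that wrap past hour zero.
isDark :: Int -> Planet -> Bool
isDark t (len, rise, set)
  | rise <= set = h < rise || h >= set
  | otherwise   = h >= set && h < rise
  where h = t `mod` len

-- First hour in [1 .. limit] that is dark on every planet, if any.
firstDarkHour :: Int -> [Planet] -> Maybe Int
firstDarkHour limit planets
  = case [ t | t <- [1 .. limit], all (isDark t) planets ] of
      (t : _) -> Just t
      []      -> Nothing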

E: Education

Correct answer at 140 minutes (36 minutes spent).

A school has N departments, each with some number of students S_i; they need to move to a new set of buildings, each with capacity T_i and cost C_i (departments cannot share buildings). Minimise the total cost of the buildings used, while fitting all of the students in (if possible).

I first thought of sorting the departments in descending order of size and doing some kind of dynamic programming. I then realised I could do better: a greedy solution that picks the cheapest available building actually works, provided departments are considered in descending order of size. This leads rather quickly to one possible solution: sort the departments in descending order of size, and populate a priority queue (highest priority = lowest cost) with the permissible buildings. As progressively smaller departments are housed, smaller buildings join the permissible pool, while large but cheap buildings remain available to be used first. If the queue ever runs dry, the assignment is impossible.
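A sketch of this greedy, assuming departments are given as sizes and buildings as (capacity, cost) pairs (again a reconstruction, not the contest submission):

import Data.List (sortBy)
import Data.Ord (Down (..), comparing)
import qualified Data.Set as Set

minTotalCost :: [Int] -> [(Int, Int)] -> Maybe Int
minTotalCost depts buildings
  = go (sortBy (comparing Down) depts) byCapacity Set.empty 0
  where
    -- buildings in descending capacity order, tagged with an index so
    -- that equal-cost entries stay distinct in the Set
    byCapacity = sortBy (comparing (\(cap, _, _) -> Down cap))
                        [ (cap, cost, i)
                        | ((cap, cost), i) <- zip buildings [0 :: Int ..] ]
    go [] _ _ acc = Just acc
    go (d : ds) pending pool acc
      = case Set.minView pool' of
          Nothing                 -> Nothing  -- nothing fits this department
          Just ((cost, _), pool'') -> go ds rest pool'' (acc + cost)
      where
        -- release every building big enough for the current department
        (usable, rest) = span (\(cap, _, _) -> cap >= d) pending
        pool' = foldr (\(_, cost, tag) -> Set.insert (cost, tag)) pool usable

The exchange argument is that any building that fits a later (smaller) department also fits the current one, so taking the cheapest fitting building can always be patched into an optimal assignment.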

Five problems remain that are somewhat more difficult (at least in my opinion).

Thoughts

There are certain skills that I’m still reasonably comfortable with, like the more mathematical and probabilistic problems. There are others, like computational geometry, that I don’t remember too well (as we’ll see in part 2). The first part of these contests is typically a speedy rush to get credit for as many problems as possible; usually somewhere between the 5 and 7 problem mark things slow down a bit. The second part tends to be more interesting for me, as the focus shifts from speed to actually thinking hard about algorithms – and, very often, fretting over implementation as well.
