Algorithmic Modelling – Alphabear 2

Alphabear 2 is a word game where players are given a two-dimensional grid of tiles. Some of these tiles show letters along with timers; others are concealed and must first be exposed by using an adjacent letter in a word. Players use letter tiles (anywhere on the grid) to form a word. When a player makes a word, they score points equal to the sum of the timers on the tiles used. The timers on all tiles that aren’t used then count down by 1; tiles that reach zero become unusable stone blocks.

Usually, this means that you want to form long words that incorporate tiles with shorter timers; for example, there is a ten-letter word on the above board (PEDESTALED), but it doesn’t use the F. After playing this word, the timers on the remaining letters count down by 1; the P will now have a timer of 2, the first E 10, and so on.

However, the game does have various mechanics that mean this isn’t always the best strategy. For example, one needs to ensure that there are actually enough letter tiles to work with, and that the distribution of letter tiles allows reasonable words to be formed. The game also has players choose bears for each mission which have special abilities such as awarding additional points for certain words. Actually, in hindsight FEASTED might have been a better choice than DEADLIFTS, as I was using a bear that gives a point bonus for words ending in ED.

In addition to scoring points by making words, players also receive bonus points for the largest fully cleared rectangular area. The game identifies this for you. I’m not sure how this is actually implemented, though the boards are large enough (15 by 15?) that it’s probably not the naive algorithm of generating and testing all rectangles – where N is the width of the board, that would be O(N^6). There is a DP-based approach that keeps track of counts of stone blocks (so any candidate rectangle can be checked in constant time), which slashes this to O(N^4) and would probably be enough. With more clever approaches it’s possible to knock the time complexity down to O(N^2) – and that’s probably optimal, as doing any less would involve not actually reading some parts of the grid.
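
I can only speculate about the game’s implementation, but a minimal sketch of the O(N^2) approach – assuming the board is available as a boolean grid of cleared cells – applies the classic “largest rectangle in a histogram” trick row by row:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Speculative sketch: largest fully cleared rectangle in a square N x N board in
// O(N^2), by treating each row as the base of a histogram of consecutive cleared
// cells and solving "largest rectangle in a histogram" with a monotonic stack.
public final class LargestClearRectangle {
    static int largestClearArea(boolean[][] cleared) {
        int n = cleared.length;
        int best = 0;
        int[] heights = new int[n + 1]; // extra zero-height column flushes the stack
        for (boolean[] row : cleared) {
            for (int c = 0; c < n; c++) {
                heights[c] = row[c] ? heights[c] + 1 : 0; // cleared cells stacked above this row
            }
            Deque<Integer> stack = new ArrayDeque<>();
            for (int c = 0; c <= n; c++) {
                while (!stack.isEmpty() && heights[stack.peek()] >= heights[c]) {
                    int height = heights[stack.pop()];
                    int width = stack.isEmpty() ? c : c - stack.peek() - 1;
                    best = Math.max(best, height * width);
                }
                stack.push(c);
            }
        }
        return best;
    }
}
```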

Most of the time, this isn’t something to worry about too much. I’m able to clear most levels with no stones at all. However, if we need to take a stone, ideally it should be as far away from the center of the board as possible (note that only the largest rectangular area counts).

Defining optimal play in general is difficult, owing to the spatial reasoning required by the largest clear area bonus, uncertainty of tiles that are yet to be revealed and sheer breadth of options when faced with many open tiles.

Before starting a round, the player has to select bears to use. Optimal decisions here largely depend on the specific challenge being played. A bear that scores bonus points for six-letter words isn’t particularly useful on a tight board where one has only a few open tiles at a time; some bears add time to timed boards, which isn’t useful if the board has no time limit. Furthermore, players select up to three bears each round; these bears can have useful synergies. In particular, I like a combination of “summon three Zs” and “score triple points for words including JQXZ (not stacking, unfortunately)”; “summon ING” or “summon ED” goes quite well with these two as well. I’ve used these to complete a mission deemed “impossible” given the strength of my bears at the time.

We can perhaps begin by focusing more narrowly on optimal play of an Alphabear endgame (where all tiles have been revealed), ignoring special abilities bears may have for now and assuming that the board has a solution. Here is an example:

From this point on, there is no more uncertainty as to what letters we have to use. Interestingly, this board shows that a greedy algorithm where one tries to play the longest possible words doesn’t work. One could begin WORKSHOPPED, leaving RAII, but this is actually an unwinnable state. There are several two-turn solutions, like ROADWORKS + HIPPIE, or WORSHIPPED + KORAI. (Obviously, this is very high-level play; I think I played SKIPPER with this board. That leaves RODIAHWO, for which I’d probably have played something like RADIO + HOW, though HAIRDO + WO is better.)

Given a multiset of pairs of letters and timers L = (l,t)^+, we want to return a suitably ordered list of words S = \left[ l^+ \right] that satisfies the following properties:

  • legitimacy: every word s \in S is included in the game’s dictionary.
  • resourcing: words do not use tiles that aren’t in L.
  • timing: for each pair (l, t) \in L, l is used in a word before or on position t.

The above set of properties is not correctly specified without a refinement to deal with duplicate letters. There are several ways to deal with this; assuming low cardinality, one approach could be to recognise all duplicate instances of letters as separate letters, and suitably expand the dictionary to allow this. For example, a multiset containing (A, 3), two copies of (B, 1) and (E, 1) becomes L = \left[ (A,3), (B_1,1), (B_2,1), (E, 1) \right], and the dictionary recognises both B_1 E and B_2 E.

I don’t have a good way of finding an optimal solution; I suspect the problem is actually NP-complete; it feels like SAT or 3SAT may be reducible to this problem, though I haven’t worked through the details. Nonetheless, there are several search heuristics we can try. Clearly, on a given turn we must use everything which now has a timer of 1. Also, although this board is an example of why you don’t want to be greedy, in general it probably makes sense to do some kind of ordered backtracking, where one expands the most promising nodes first. I can imagine an A* search could actually work well, though finding an admissible heuristic could be hard.
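
For concreteness, here is a minimal sketch of the search without any of the cleverer ordering (hypothetical names; a plain word list; scores and bear abilities ignored):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Minimal sketch of an endgame solver. Tiles are (letter, timer) pairs; a word is
// playable if its letters form a sub-multiset of the live tiles. Plain depth-first
// backtracking, with the forced rule that nothing may be left on a timer of 1.
public final class AlphabearEndgame {
    record Tile(char letter, int timer) {}

    // Returns an ordered list of words that clears every tile, or null if none exists.
    static List<String> solve(List<Tile> tiles, Set<String> dictionary) {
        if (tiles.isEmpty()) return new ArrayList<>();
        for (String word : dictionary) {
            List<Tile> remaining = tryPlay(tiles, word);
            if (remaining == null) continue;            // unplayable, or a tile would turn to stone
            List<String> rest = solve(remaining, dictionary);
            if (rest != null) {
                rest.add(0, word);
                return rest;
            }
        }
        return null;
    }

    // Tiles left after playing `word` (with timers decremented), or null if the
    // word isn't playable or an unused tile would hit zero.
    private static List<Tile> tryPlay(List<Tile> tiles, String word) {
        List<Tile> pool = new ArrayList<>(tiles);
        for (char c : word.toCharArray()) {
            int chosen = -1;
            for (int i = 0; i < pool.size(); i++) {     // use the most urgent copy of this letter
                if (pool.get(i).letter() == c
                        && (chosen == -1 || pool.get(i).timer() < pool.get(chosen).timer())) {
                    chosen = i;
                }
            }
            if (chosen == -1) return null;              // letter not available
            pool.remove(chosen);
        }
        List<Tile> remaining = new ArrayList<>();
        for (Tile t : pool) {
            if (t.timer() <= 1) return null;            // this tile would become a stone block
            remaining.add(new Tile(t.letter(), t.timer() - 1));
        }
        return remaining;
    }
}
```

An ordered variant would prioritise candidate words – say, longest first, or by how many soon-to-expire tiles they consume – rather than iterating over the dictionary blindly.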

There’s a fourth property we want as well:

  • optimality: the score produced by the words is maximal.

Going back to our sample board, WORSHIPPED and KORAI will score one more point than ROADWORKS and HIPPIE, because there is one fewer timer tick. In general, we want to minimise the number of timer ticks by playing longer words upfront where possible, as this reduces the number of timers that are still ticking. Once one has devised a feasible solution, it may make sense to perform some iterative optimisation by seeing if it’s possible to bring letters forward. Of course, that approach may get us stuck in a local optimum. I could see some kind of iterative hill-climbing algorithm from the top-n admissible solutions (as found by our ordered backtracking) yielding decent results.

Unfortunately my vocabulary and anagram skills are far from good enough to handle the required complexity, especially under time pressure. Most of the time, successfully playing every letter on the board is enough to complete the mission’s objectives. Interestingly, it looks like the game wasn’t really programmed to deal properly with successfully using every letter but failing the mission. The game claims that there’ll be an additional reward on the reward screen when identifying the board as clear; yet, there is no reward screen as the mission was failed.

Algorithmic Modelling – Codenames

Codenames is a board game designed by Vlaada Chvatil which tests communication. The game is played with two teams; each team splits itself into clue-givers and guessers.

There is a board of 25 cards, each with one (or more) words on them. Of these cards, 9 are owned by the first team, 8 owned by the second team, 7 are neutral and one is the ‘assassin’.

The clue-givers attempt to communicate to the guessers which cards are owned by their team. This is done by giving a one-word clue followed by the number of cards N corresponding to the clue. The rules establish some bounds on allowed associations (for example, ‘sounds like’ clues are not allowed).

I don’t know how much time went into selecting the words to appear on individual cards, but there are certainly many words in the deck that can be interpreted in many ways, which makes the game fun. For example, I can think of quite a number of ways to clue BOND (COVALENT for unambiguity; DURATION, ASSET or DEBT for the debt sense; JAMES, AGENT or SEVEN or even at a stretch something like CASINO for the character; FRIEND or FRIENDSHIP; STREET or TUBE; CONTRACT, PROMISE or WORD for the idea of a promise; ADHERE or CONNECT, among others). Which one I might pick would depend on the other words on the board.

Guessers then identify up to N+1 cards they think their team owns. This can be based on the entire history of clues given, not just the previous clue. These cards are guessed one at a time; a team is only allowed to keep guessing as long as it keeps correctly identifying its own team’s cards. Teams may refrain from making all guesses.

The game ends either when all cards owned by a team are revealed (that team wins), or if a team reveals the assassin (that team loses).

In practice, clue-givers will need to consider all cards, not just the ones their team owns. The penalty for ‘failure’ varies; the assassin is an instant loss, and revealing a card owned by the other team is also bad. For example, with the cards APPLE, BANANA and GRAPE it would be very tempting to declare (FRUIT, 3) as a clue; yet, if KIWI is owned by the other team (or worse, is the assassin) it might be dangerous.

However, if the other team has already revealed that KIWI is theirs (e.g. with another clue, maybe SOUTH or BIRD) then the FRUIT clue becomes safe. Thus, pre-computing a strategy at the beginning of the game (e.g. by partitioning the eight or nine cards one’s team owns into several logical groups while avoiding other words) may not be optimal.

I tend to consider making an effort to win to be a key part of games, and thus what moves may be considered good also often depends on the state of the game. For example, if the opposing team has just one card left, I will give a broader clue that may have more tenuous links in a ‘do or die’ effort to finish on this turn. A less extreme example would be attempting weaker links if behind, or perhaps playing slightly more conservatively if ahead.

Mathematically modelling Codenames can be tough. We can try modelling the game state as a tuple (O, E, N, x, h, f_O, f_E) where O, E, N are the sets of remaining words our own team owns, words the enemy team owns, and neutral words, x is the assassin, h = (t,w,n)^+ is the history of the game, and f_O and f_E are preference functions of the form h, w, n, O, E, N, x \rightarrow P, returning an ordered list over O \cup E \cup N \cup \lbrace x \rbrace. A history entry (t, w, n) records that team t gave the clue word w and hinted that n cards were relevant. This already abstracts two difficult parts away – ordering the clues, and determining how many of the top preferences to pick.

I think the preference functions need to take into account previous clues from both teams; if a previous clue could clearly have corresponded to two words and I picked the wrong one, the other word might be fairly high on the list even if unrelated to the current clue. Similarly, if this scenario happens to the opposing team, I would probably avoid the word that would blatantly be theirs.

The notion of degree of confidence also isn’t captured well in our ordering; going back to the fruit example, having a fixed ordering would imply that clues for fewer than 4 items could reliably result in correct guesses (if we knew that APPLE was by a small margin the best pick, we could safely and confidently give (FRUIT, 1) to clue it, which seems wrong). One could imagine modelling these with selection probabilities, though things would be even more complex.

The above representation still seems computationally difficult to work with. The evolution of the preference function as one moves from one state to another is unclear (so in that representation it is fully general), making lookahead difficult. It doesn’t seem like a greedy choice is always best; for example, given eight clues that are reasonably divisible into four pairs, a clue for 4 words taking one element from each pair might be a bad idea if the other words can’t easily be linked.

We can proceed by simplifying the preference functions; a simple approach could be that for each w, n each team has a persistent preference function that returns an ordered list over O_0 \cup E_0 \cup N_0 \cup \lbrace x \rbrace. The preference functions during the game then return the subsequence of that list containing only the words still left in the game. This of course doesn’t take into account past information or clues from the other team.

With this, we can attempt to solve the game using standard techniques; assuming that the vocabulary of clues is bounded (let’s say it must be from the Linux dictionary), a game state is winning for the current team if there exists some word for which the preference function returns everything in O as a prefix. A state is losing if all moves in that state produce a state which is winning for the opposing team.
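
As a sketch of that check under the simplified model (hypothetical names; each candidate clue carries a fixed preference ordering over the original board):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the simplified model: during the game, guessers follow the subsequence
// of a clue's fixed ordering restricted to words still in play. A state is
// immediately winning if some clue's restricted ordering starts with all of our
// team's remaining words.
public final class CodenamesPrefixCheck {
    static boolean hasWinningClue(Map<String, List<String>> cluePreferences,
                                  Set<String> ownRemaining,
                                  Set<String> stillOnBoard) {
        for (List<String> ordering : cluePreferences.values()) {
            List<String> restricted = ordering.stream().filter(stillOnBoard::contains).toList();
            if (restricted.size() >= ownRemaining.size()
                    && ownRemaining.equals(new HashSet<>(restricted.subList(0, ownRemaining.size())))) {
                return true; // give this clue with n = |ownRemaining| and win this turn
            }
        }
        return false;
    }
}
```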

We can generalise this to a simple probabilistic model as well; the preference ‘functions’ instead return a discrete random variable that indicates either guessing some word or passing. A simplified model could then, at the start of the game, assign weights to each candidate indicating the relative probability that candidate would be selected. These can be normalized to account for a pass probability. As words are removed from the board, the probability of the remaining words being selected scales (we can imagine a simple rejection-sampling where we discard words that aren’t actually on the board any more).

The algorithm for the probability that we get a win from a given state is then slightly more complex (though I think still reasonably covered by standard techniques).

Algorithmic Problem-Solving: On Pigs and Poison

A teammate asked me a computer science / algorithm question over lunch. It took me somewhere between 30 and 40 minutes of thought and discussion with teammates to solve the problem, inclusive of writing about 20 lines of Java; I can see it being a kind of ‘brain teaser’ interview question (and thus generally best avoided). That said, genuinely not having heard this problem before, I think it was interesting to see the thought process I went through to get to the final answer.

You have 1,000 buckets of water, 1 of which is poisoned. It is known that the poison, if ingested, will kill a pig within 15 minutes. You have one hour to determine which bucket contains the poison. What is the minimum number of pigs that you need, and how would you do it? (Eventually, of course, you want to generalise your answer.)

After some initial “lateral thinking”-style questions (does the poison only kill pigs? is the death timing exact?), I determined that it was intended to be a genuine algorithms problem. As a computer scientist, I then thought of binary encodings. This gives a quick solution to the case where you have just 15 minutes to determine which bucket is poisoned – if you convert each number to a 10-bit binary number (so the first bucket is 0000000000, the second is 0000000001, and so on) and give the first bucket to no pigs, the second to pig number 10, the third to pig number 9, the fourth to pig numbers 9 and 10 and so on, the final state of the pigs 15 minutes later allows you to determine the poisoned bucket. If we used this method and only pigs 5 and 7 died, then the poisoned bucket has binary representation 0000101000 – the bucket numbered 40 if we count the first bucket as 0 (i.e. the 41st bucket). In any case, we know the answer is bounded above by 10.
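
A tiny sketch of that scheme, using the numbering above (pig 10 as the least significant bit, buckets indexed from 0):

```java
// Hypothetical helper for the one-round strategy: pig j drinks from a bucket
// exactly when the corresponding bit of the bucket's index is set, so the set of
// dead pigs reads off the poisoned bucket's index in binary.
public final class PoisonedBucketEncoding {
    static boolean pigDrinksBucket(int pig, int bucketIndex) {
        int bit = 10 - pig;                    // pig 10 -> bit 0, pig 1 -> bit 9
        return ((bucketIndex >> bit) & 1) == 1;
    }

    static int decode(boolean[] pigDied) {     // pigDied[0] is pig 1 ... pigDied[9] is pig 10
        int index = 0;
        for (int pig = 1; pig <= 10; pig++) {
            if (pigDied[pig - 1]) index |= 1 << (10 - pig);
        }
        return index;                          // index of the poisoned bucket, counting from 0
    }
}
```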

I then considered ways of extending this solution given that we can run four trials instead of one. I did consider partial experiments, but didn’t see a way this would be helpful. One quick possibility we came up with was to linearly split the buckets among a number of pigs to cut down the search space – though this risks losing one pig at each iteration. Still, we determined that 6 pigs would work:

  • On trial 1, you have 1000 buckets and 6 pigs – divide into 7 groups of at most 143.
  • On trial 2, you have 143 buckets and possibly 5 pigs – divide into 6 groups of at most 24.
  • On trial 3, you have 24 buckets and possibly 4 pigs – divide into 5 groups of at most 5.
  • On trial 4, you have 5 buckets and possibly 3 pigs – the binary method can distinguish 8 buckets.

Yet, this same method wouldn’t work with 5 pigs. We were ready to settle on 6 as the answer, but the question asker revealed that it was possible to do better than 6. The idea of imbalanced groups then came up – the ‘no risk’ group of buckets should be larger, because in that case we could guarantee that we would still have all our pigs if we needed to investigate that group.

I guess I haven’t done algorithm interviews for a good while, and it thus took me until this point before I thought of setting up a recurrence relation. Define W(n, p) as the maximum number of buckets distinguishable with n trials and p pigs remaining. I then set up this relation:

W(n, p) = \begin{cases} 2^p & n = 1 \\ 1 & p = 0 \\ W(n-1, p) + pW(n-1,p-1) & \text{otherwise} \end{cases}

The last case comes about because we can safely allocate a group of W(n-1, p) buckets to be given to no pigs, since with one fewer trial we can distinguish between them. For each of the pigs present, we need to prepare to continue the investigation with one fewer trial and one fewer pig, hence pW(n-1,p-1).

This is more efficient, but doesn’t quite get us there yet. I noted that W(4, 5) under this recurrence comes out to 992 – just short of 1,000 – which established that strategies based on risking at most one pig in every round apart from the last would be insufficient. Thinking a little more about it, I realised that in the extreme case, you could always have a special bucket each round that you gave to all pigs – if they all died that round, we would know that bucket was poisoned. However, you couldn’t have more than one of these per round – if they all died, you wouldn’t know which bucket was the cause.

More generally, you could split the groups of buckets into groups that are given to zero, one, two… up to p pigs, where p is the number of pigs still left. The groups given to smaller numbers of pigs could likely be larger, to account for the additional resources available in later rounds. The following recurrence relation then drops out:

W(n, p) = \begin{cases} 2^p & n = 1 \\ 1 & p = 0 \\ \sum_{i=0}^p \binom{p}{i} W(n-1, p-i) & \text{otherwise} \end{cases}

This actually has a closed form solution. I didn’t notice it when looking at the recurrence, though it became apparent when I coded up a program to calculate the required W(4, 5) = 3125. I was surprised to see how much additional discriminatory power five pigs had, though four wouldn’t be sufficient. Interestingly, five pigs and three periods were sufficient. The general closed form solution is simply W(n, p) = (n + 1)^p and our argument provides a method for actually identifying the relevant bucket.
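
A quick sketch of such a program (not the original code) evaluates the recurrence with memoisation and checks it against the closed form:

```java
// Evaluate W(n, p) directly from the recurrence above and compare with (n + 1)^p.
public final class PigsAndPoison {
    static long w(int trials, int pigs, long[][] memo) {
        if (trials == 1) return 1L << pigs;            // binary encoding over a single round
        if (pigs == 0) return 1;                       // no pigs left: at most one candidate bucket
        if (memo[trials][pigs] != 0) return memo[trials][pigs];
        long total = 0;
        long binom = 1;                                // C(pigs, i), updated incrementally
        for (int i = 0; i <= pigs; i++) {
            total += binom * w(trials - 1, pigs - i, memo);
            binom = binom * (pigs - i) / (i + 1);
        }
        return memo[trials][pigs] = total;
    }

    public static void main(String[] args) {
        long[][] memo = new long[5][6];
        System.out.println(w(4, 5, memo));             // 3125
        System.out.println((long) Math.pow(4 + 1, 5)); // matches the closed form (n + 1)^p
    }
}
```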

Finally, an interesting challenge here was to establish that it wasn’t possible to achieve this with four pigs (so that 5 would be the minimum). One way to do this was to reason in terms of information theory – a pig either survives all the way or dies after one of the n rounds, and then it stays dead. That is n + 1 possible observable outcomes per pig, and with p pigs we cannot distinguish more than (n+1)^p buckets. Clearly 5^4 = 625 < 1000, so four pigs isn’t enough; also, this expression matches the value obtained by our recurrence-based method, suggesting that that is indeed optimal.

Static Single Assignments (January Haskell Tests 2018 – Part 2)

Second in a two-part series; see Constant Propagation (January Haskell Tests 2018 Part 1) for the first part, which covered the assessed portion of the test.

In January this year, Imperial College’s first-year computing students were tasked with implementing a tool to handle constant propagation for statements in static single assignment (SSA) form. In particular, this means that each variable is only assigned to once, and there are no compound expressions. This is useful for optimising compilers to simplify the machine code that a program compiles down to.

Of course, being required to write everything in SSA form is painful; as much as I like immutable objects, mandating their use can be costly (consider a large set, for instance, that periodically receives updates). Banning compound statements can also be troublesome; hence, SSA compilation is typically automated.

The test concluded with a challenge tasking students to implement support for converting a more general program into SSA form. The students had been working with a simple imperative language with statements to assign values to integer variables (possibly doing mathematical operations), conditional statements (if-then-else) and do-while loops.

This problem naturally decomposes into three sub-tasks: supporting SSA translation for basic blocks, conditionals and then loops. The first problem can also readily be split into two sub-problems: ensuring variables are immutable and then handling compound expressions.

Basic Blocks

Immutable Variables

I opted to begin with immutability here, as this would allow for easy testing. This is done by creating multiple versions of a variable: each assignment writes to a fresh, numbered version, and subsequent reads refer to the most recent version (a small sketch follows below).

Essentially, one could implement this by keeping track of the latest version of each variable used, and incrementing the version whenever processing an assignment. Whenever handling the right-hand side of an assignment, one needs to be able to know how to reference the latest version of a variable.
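
A minimal sketch of that renaming pass for straight-line code, using a hypothetical string-based representation rather than the test’s actual Haskell types:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the renaming pass: statements are arrays whose first entry is the
// assignment target and whose remaining entries are right-hand-side tokens. A
// usage map tracks the latest version of each variable.
public final class SsaRenamer {
    static List<String> rename(List<String[]> statements) {
        Map<String, Integer> latestVersion = new HashMap<>();
        List<String> output = new ArrayList<>();
        for (String[] statement : statements) {
            StringBuilder rhs = new StringBuilder();
            for (int i = 1; i < statement.length; i++) {
                String token = statement[i];
                // Reads refer to the latest version; literals and operators pass through untouched.
                String rewritten = latestVersion.containsKey(token)
                        ? token + latestVersion.get(token) : token;
                rhs.append(i > 1 ? " " : "").append(rewritten);
            }
            // The write bumps the version of the target variable.
            int version = latestVersion.merge(statement[0], 1, Integer::sum);
            output.add(statement[0] + version + " = " + rhs);
        }
        return output;
    }
}
```

For instance, the statements a = x and a = a + 1 (encoded as {"a", "x"} and {"a", "a", "+", "1"}) come out as a1 = x and a2 = a1 + 1.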

I’m pretty sure I wouldn’t have bothered to write a specific UsageMap type when I was still in university, though working in industry seems to have made me think more carefully about typing.

Compound Statements

The version of SSA in the paper also doesn’t allow compound statements (e.g. a = ((b * c) + 5) / d). However, this could be handled reasonably as arithmetic expressions can be represented in tree form.

We can create temporary variables for each non-leaf node of the expression tree. For example, we define k1 = b * c, k2 = k1 + 5 and k3 = k2 / d. Some care is needed in cases where the tree is a bit more balanced (we need to know that the first argument of our plus above is k1).

Also, of course, k is only used for illustration above, as real functions could have a value actually called k! It makes sense to have a counter separate from the rest of the UsageMap, though I relied on some limits on the variable names that were mentioned in the specification. I’m not sure if I would have worked this out in first year!
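
A rough sketch of that splitting pass over an expression tree (hypothetical types; the real exercise was in Haskell, but the idea carries over):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each internal node of the expression tree gets a fresh
// temporary, emitted as a single-operation assignment.
public final class ExpressionFlattener {
    sealed interface Expr permits Var, Num, BinOp {}
    record Var(String name) implements Expr {}
    record Num(int value) implements Expr {}
    record BinOp(String op, Expr left, Expr right) implements Expr {}

    private int counter = 0;
    final List<String> statements = new ArrayList<>();

    // Returns the name (or literal) holding the value of e, emitting temporaries as needed.
    String flatten(Expr e) {
        if (e instanceof Var v) return v.name();
        if (e instanceof Num n) return Integer.toString(n.value());
        BinOp b = (BinOp) e;
        String left = flatten(b.left());    // children first, so their names are known
        String right = flatten(b.right());
        String temp = "k" + (++counter);    // fresh temporary; clash avoidance elided here
        statements.add(temp + " = " + left + " " + b.op() + " " + right);
        return temp;
    }
}
```

Flattening ((b * c) + 5) / d with this emits k1 = b * c, k2 = k1 + 5 and k3 = k2 / d, matching the example above.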

Conditional Expressions

One needs to consider which versions of a variable are in scope. Let’s consider the following block:
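
(The exact block isn’t reproduced here; the snippet below is a hypothetical stand-in consistent with the discussion that follows – both branches update a and b, and the block ends by returning a + b. The comments sketch the versions an SSA translation might assign.)

```java
public final class ConditionalExample {
    // Hypothetical stand-in for the missing example; comments sketch the SSA names.
    static int example(int x, int y) {
        int a = x;            // a1 = x0
        int b = y;            // b1 = y0
        if (x > y) {          // branch on x0 > y0
            a = a + 1;        // a2 = a1 + 1
            b = b * 2;        // b2 = b1 * 2
        } else {
            a = a - 1;        // a3 = a1 - 1
            b = b + a;        // b3 = b1 + a3
        }
        return a + b;         // eventually: a4 = phi(a2, a3); b4 = phi(b2, b3); return a4 + b4
    }
}
```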

After reaching the end of the if block, we can’t just pass the UsageMap we have obtained from it into the else block (since on an execution where we’re going into the else block, the if block must never have run). Similarly, after reaching the end of the else block we can’t just use the UsageMap from that at the join point (on an actual execution, the else block might not have been the branch that ran).

It is probably easier to visualise this block of code as a tree. Each node represents a possible linear execution path that we may run.

The versions of variables each of the blocks read upon branching should be the versions from before we entered that block. Thus, when entering the else block we should reference the state of the UsageMap from before we entered the if block.

When returning, we somehow need to pick one of the final values for variables that were possibly modified. This is difficult to express directly, but we can approximate it with a phi function, where \phi(a, b) nondeterministically returns either a or b. We can thus simply pick a4 = \phi(a2, a3) and b4 = \phi(b2, b3), add these statements, and finally return a4 + b4. Some care needs to be taken with the implementation (for example, one must correctly handle the case where a variable is only possibly modified). For the block sketched above, this yields exactly the two phi-assignments followed by the final return.

During the implementation I was running out of time, which unfortunately led to a bit of a monster of a helper function. It could probably be cleaned up, perhaps by making a type for ‘return values of a traversal’. It’s also probably reasonable to do some kind of higher-order function on the mod-set of variables (which I computed elsewhere).

Do-While Loops

Finally we also need to support loops. We could use a call graph structure, similar to what we had for the if-then-else conditional, except that there is a cycle.

At first glance, this looks tricky. We don’t know the number of times the loop will execute, though we do know it’s at least once.

Which value of a are we reading in the loop body? Well, this is either the value from the initial set-up, or the value the previous loop left at its end. We could thus try to introduce phi-functions:
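
(Again, the original snippet isn’t reproduced; here is a hypothetical stand-in, with the phi-form sketched in the comments.)

```java
public final class DoWhileExample {
    // Hypothetical stand-in for the missing loop example; the comments sketch the
    // SSA form with a phi-function at the top of the loop body. Note the cycle:
    // a2 is defined in terms of a3, and a3 in terms of a2.
    static int loop(int x) {
        int a = x;                // a1 = x0
        do {
            a = a + 1;            // a2 = phi(a1, a3); a3 = a2 + 1
        } while (a < 10);         // while (a3 < 10)
        return a;                 // return a3
    }
}
```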

The SSA form is impossible to execute as a program, of course; the definitions are cyclic. However, we’re not actually trying to execute it. We’re only using this as an intermediate representation to try to statically optimise our code. Of course, if the two arguments to a phi-function are the same, we can eliminate it, mark that value as a constant if known, and then use the information about it being a constant to drive further optimisations.

Synthesis

I think this could be a fairly challenging question in and of itself for first-year computing students, though some degree of hand-holding would likely be required (in particular, the tree traversal for splitting out intermediate assignments can be quite fiddly to work with). It would also be worth introducing the concept of basic blocks early on, as reasoning about them is more useful in handling conditions and loops. It was a rather solid and satisfying conclusion to the original 2017 test.

Balloon Battle (Hotseat: UKIEPC 2017, Part 1)

Overview

The United Kingdom and Ireland Programming Contest (UKIEPC) 2017 took place a few weeks ago. I used to participate semi-actively and had a rather annoyingly consistent habit of finishing in third place at Imperial. I haven’t done contest programming for quite some time, though some of my experience from interviewing candidates at Palantir might have helped.

Teams are given five hours to solve somewhere from eleven to thirteen problems. These problems involve submitting a program (the source will do) to an online judge, which typically runs said program against multiple test cases and evaluates the output of the user’s program in some way. This can entail simply checking strings, or alternatively checking if the given solution achieves some task (consider problem D in this set).

This time round I managed to figure out 9 problems, rather coincidentally with a total “penalty time” of 999 minutes. When teams are ranked, the number of problems solved is compared first; to break ties, teams are given penalty time, where solving a problem T minutes since the start of the contest, with F failed attempts, scores T + 20F penalty time. The penalty for failed attempts does not accrue until the relevant problem is actually solved. It’s still in one’s interest to finish solving any questions – even though it may give you arbitrary amounts of penalty time, it will not hurt your ranking. This score would have placed second at Imperial (maybe because some of the members of the usual team number 2 have graduated?) and, at time of writing, 51st out of 522 overall.

Problems

This half of my write-up (I intend to discuss this in two parts) covers what I think are the easier seven problems in the contest, in the order I tackled them. Looking at the metadata on Codeforces, there is a pretty clear drop-off after this. Interestingly, F was actually considered the hardest problem of the ones I’ve listed here (in that fewest teams solved it), but I was able to place it third in difficulty; I struggled more with D than some of the later ones, even, though mainly for implementation reasons.

C: Cued Up

Wrong answer at 9 minutes; correct answer at 18 minutes (10 minutes spent total).

Given a description of the number of snooker balls still on a table, determine the maximum number of possible points remaining to be scored. Initially any ball can be hit (it isn’t known whether the red or coloured balls are “on”), but thereafter players can’t pot 2 consecutive reds, and if a non-red is potted and reds remain on the table, the next ball potted must be red. Assume that only one ball is potted per shot.

This is mainly a straightforward implementation – the main snag is that if you have only red balls, then regardless of how many there are the right answer is 1. I think I messed up the first time because I fixed this edge case (which is the last of the sample tests), but accidentally ended up printing “1” before the correct answer on all of the other tests…

I: I Work All Day

Correct answer at 13 minutes (4 minutes spent).

This was basically decoding the problem statement and figuring out that “what’s left over after repeated subtraction” refers to the modulo operator. I probably could have got this one out faster if I was still comfortable with C++.

J: Just a Minim

Correct answer at 17 minutes (4 minutes spent).

Figure out the length of a tune given the note values it contains. Again, raw implementation. I’m surprised this even took 4 minutes, though that might well have been because I went through all the sample tests, and also did the “hacky” approach of hardcoding the strings. There was a slightly faster-to-write solution that involved special-casing 0 and writing 1/x for the others. I also might have wasted some time because I was concerned about floating point rounding errors and so calculated things in terms of sixteenths. (That’s not needed because these numbers have exact binary representations, and the problem limits would be very unlikely to result in a loss of precision.)

F: Flipping Coins

Correct answer at 26 minutes (8 minutes spent).

You have n coins, all initially tails up, and k coin flips which you are obliged to exercise – but you can choose which coin to flip. At the end of the game you keep all coins that are heads up. What’s the expected number of coins you can keep (with optimal play)?

This one is figuring out the right strategy (pretty simple – always flip something that’s tail side up if you can) and then writing a recurrence relation. There are overlapping subproblems, so a quick bit of memoisation is needed, but it is not hard to implement. I’m surprised how much contestants appeared to struggle with this problem. I guess it requires a little bit more familiarity with thinking in terms of probabilities, which has tended to be one of my stronger points at contests like these.
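
A sketch of that recurrence (my reconstruction, not the contest submission): f(flips left, coins currently tails) is the expected final number of heads, always flipping a tails coin when one exists.

```java
// Memoised expected value for the coin-flipping game: n coins start tails, k
// obligatory flips, keep all coins showing heads at the end.
public final class FlippingCoins {
    static double expectedHeads(int n, int k) {
        Double[][] memo = new Double[k + 1][n + 1];
        return f(k, n, n, memo); // all n coins start tails
    }

    private static double f(int flipsLeft, int tails, int n, Double[][] memo) {
        if (flipsLeft == 0) return n - tails;
        if (memo[flipsLeft][tails] != null) return memo[flipsLeft][tails];
        double result;
        if (tails > 0) {
            // Flip a tails coin: it lands heads or stays tails with equal probability.
            result = 0.5 * f(flipsLeft - 1, tails - 1, n, memo) + 0.5 * f(flipsLeft - 1, tails, n, memo);
        } else {
            // Forced to flip a heads coin; it may come back up tails.
            result = 0.5 * f(flipsLeft - 1, 0, n, memo) + 0.5 * f(flipsLeft - 1, 1, n, memo);
        }
        return memo[flipsLeft][tails] = result;
    }
}
```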

D: Deranging Hat

Wrong answers at 50, 53, 61 minutes. Correct answer at 89 minutes (63 minutes total spent).

A sorting network consists of comparators; given two indices i and j (importantly, in that order), if the letter at index i is alphabetically greater than the letter at index j, then the letters at those indices are swapped. Given a string, build a sorting network that would “sort” the traditionally sorted version of that string back into the original string. For example, given the string “dude”, you would need to transform “ddeu” to “dude”, perhaps by applying the comparator (4, 2) (which swaps those letters) and then (3, 4) (which swaps them too). The sorting network can’t be more than ~10X the length of the string, in the worst case.

It took me quite a while to figure out what the problem statement meant. I came up with a reasonable idea (swap each letter into its right place, iterating through the string once), but was needlessly worried about performance and started doing a lot of fiddly book-keeping with Maps of SortedSets; there was a difficult-to-trace bug in there. I then gave up some time after 61 minutes and reimplemented things with a much simpler temporary char-array solution, which worked.
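
Reconstructing the simpler approach as a sketch (assuming the comparator semantics above, and ordering each output pair so that the swap actually fires):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the char-array approach: start from the sorted string and swap each
// target letter into place, one position at a time. Each emitted pair (1-based)
// is ordered so that the comparator's "swap if the first letter is greater"
// condition holds, so the swap really happens.
public final class DerangingHat {
    static List<int[]> operations(String target) {
        char[] work = target.toCharArray();
        Arrays.sort(work);                              // the "traditionally sorted" version
        List<int[]> ops = new ArrayList<>();
        for (int i = 0; i < work.length; i++) {
            if (work[i] == target.charAt(i)) continue;  // already the right letter
            int j = i + 1;
            while (work[j] != target.charAt(i)) j++;    // a later position holds the letter we need
            boolean needLarger = target.charAt(i) > work[i];
            ops.add(needLarger ? new int[] {j + 1, i + 1} : new int[] {i + 1, j + 1});
            char tmp = work[i]; work[i] = work[j]; work[j] = tmp;
        }
        return ops;                                     // for "dude" this yields (4, 2) then (3, 4)
    }
}
```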

A: Alien Sunset

Correct answer at 104 minutes (15 minutes spent).

Given day lengths on multiple planets and times of sunrise/sunset, find the first time within a given time window where it is dark everywhere.

I avoided this at first because it looked like an Interval Tree kind of problem. However, seeing that many people solved it, I had a closer look, and it seemed that something in O(planets * allowed time window) would work: simply start from the first hour and test each candidate time to see if it is indeed dark everywhere, returning the first one that is. There was a bit of a snag with some planets having daylight hours crossing zero, but nothing too difficult to manage.

E: Education

Correct answer at 140 minutes (36 minutes spent).

A school has N departments, each with some number of students Si; they need to move to a new set of buildings, which each have capacity Ti and cost Ci (but they cannot share buildings). Minimise the total cost of all buildings, while fitting all of the students in (if possible).

I figured I would sort the departments in descending order of size and then thought of some kind of dynamic programming based solution. I then realised I could do better; a greedy solution which picked the cheapest available building could actually work, if one was considering departments in descending order of size. This led rather quickly to one possible solution: sort the departments in descending order of size, and then populate a priority queue (highest priority = lowest cost) with permissible buildings. As larger departments are housed, smaller buildings can be added to the permissible pool, though we still want to ensure that large but cheap buildings would be used first. If the queue ever ran dry, the assignment would be impossible.
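
A sketch of that greedy with hypothetical types: departments are processed from largest to smallest, buildings become eligible once they can hold the current department, and the cheapest eligible building is always taken.

```java
import java.util.*;

// Greedy assignment sketch: because departments shrink as we go, the eligible
// pool only ever grows, so taking the cheapest eligible building is safe.
public final class Education {
    record Building(long capacity, long cost) {}

    static OptionalLong minimumTotalCost(long[] departmentSizes, List<Building> buildings) {
        long[] sizes = departmentSizes.clone();
        Arrays.sort(sizes);                                  // ascending; walked backwards below
        List<Building> byCapacity = new ArrayList<>(buildings);
        byCapacity.sort(Comparator.comparingLong(Building::capacity).reversed());
        PriorityQueue<Building> eligible = new PriorityQueue<>(Comparator.comparingLong(Building::cost));
        long total = 0;
        int next = 0;
        for (int i = sizes.length - 1; i >= 0; i--) {        // departments, largest first
            while (next < byCapacity.size() && byCapacity.get(next).capacity() >= sizes[i]) {
                eligible.add(byCapacity.get(next++));        // big enough for this and all later departments
            }
            Building chosen = eligible.poll();
            if (chosen == null) return OptionalLong.empty(); // the queue ran dry: impossible
            total += chosen.cost();
        }
        return OptionalLong.of(total);
    }
}
```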

Five problems remain that are somewhat more difficult (at least in my opinion).

Thoughts

There are certain skills that I’m still reasonably comfortable with, like the more mathematical, computation-heavy problems. There are others, like computational geometry, that I don’t remember too well (as we’ll see in part 2). The first part of these contests is typically a speedy rush to get credit for as many problems as possible; usually somewhere between the 5 and 7 problem mark things slow down a bit. The second part for me tends to be more interesting, though I find that the focus changes from speed to actually thinking hard about algorithms – and very often fretting over implementation as well.

Playing with Blocks (Hotseat: British Informatics Olympiad R1, 2015)

Second in a series. I apologise for the temporary hiatus; I had a couple of weeks travelling and then intense sessions at work (part of this was my choice, but still).

Background

I was not in the UK for high school, and studying computing/informatics wasn’t common in my high school in Singapore either. I never actually participated in the British Informatics Olympiad. This is the UK’s national high-school competition in computing, and unlike normal programming contests there’s an explicit portion involving reasoning and discrete mathematics which I like. This can get intense; problems especially in the second round can feature fairly complex reasoning – such as 2009’s second round which required understanding models of computation in FRACTRAN (and eventually deciphering a “program” that computed the Fibonacci series). For quite a few contest problems, intuitively determining a solution, or even being uncertain but firing it off anyway in hopes of getting it accepted is much easier than trying to prove that the solution is correct.

The first round is somewhat less intense, but still has plenty of opportunities for interesting problems. There are typically three tasks; the first is gentle and meant to be a warm-up, the second tests implementation (perhaps more like what I might expect on a programming exam in university) and the third tests algorithms (in a sense, more like a standard contest problem). Each question has most of the points (about 75%) allocated to implementation and the remainder to several “reasoning” questions. Partial scores are awarded for solutions that aren’t optimal or fail on some cases (much like in the IOI, though the information is not known to the students).

This was one of the easier papers I’ve done; I worked my way through the tasks very quickly, and didn’t have too much of a problem (other than a careless error in problem 2). Most of the time I score above 90 points on these, though sometimes if I don’t “get” problem 3 in an optimal way or make subtle errors on problem 2 the score will be a fair bit lower.

The specific paper I did is here.

Problem 1: Block Palindromes

Given a string, find the number of ways of decomposing it into at least two blocks such that when the blocks are reversed (but not the letters within the blocks), the arrangement of blocks looks the same. For example, BARBAR is a block palindrome, since (BAR)(BAR) reads the same forwards and backwards. Of course, every ordinary palindrome is also a block palindrome, since taking one-letter blocks is allowed.

Not too much here; the problem is readily amenable to a simple recursive implementation (try taking each block length from 1 to half the string length and see if the string ends with that prefix – if so, recurse and add that result to our total). It would be possible to use memoisation to improve performance here, since there can be repeated subproblems e.g. with AABCBAA, you would be solving for the number of solutions to BCB twice. However, given the small constraints (strings of up to 10 characters) this wasn’t needed.
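
A sketch of that recursion (no memoisation, which the small limits make unnecessary):

```java
// Recursive count of block-palindrome groupings: peel off matching outer blocks
// of each possible length and recurse on what's left in the middle. Note that the
// same middle (e.g. BCB inside AABCBAA) can come up more than once.
public final class BlockPalindromes {
    // Number of ways to split s into at least two blocks reading the same backwards block-wise.
    static int countGroupings(String s) {
        return groupings(s) - 1; // discount the trivial single-block "grouping"
    }

    // Groupings of s as a block palindrome, counting s taken as one whole block.
    private static int groupings(String s) {
        int ways = 1; // s as a single block
        for (int len = 1; 2 * len <= s.length(); len++) {
            // The outer blocks must match: the prefix of this length equals the suffix.
            if (s.startsWith(s.substring(s.length() - len))) {
                ways += groupings(s.substring(len, s.length() - len));
            }
        }
        return ways;
    }
}
```

For example, countGroupings("BARBAR") returns 1, for the grouping (BAR)(BAR).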

This problem probably had one of the trickier reasoning questions though – if all groupings of a block palindrome contain an even number of blocks, how many groupings can there be (and why)?

I attacked this one with a proof by contradiction. Suppose some grouping had more than two blocks; since we are told every grouping has an even number of blocks, such a grouping must look like

S = B_1 B_2 \ldots B_{n-1} B_n B_n B_{n-1} \ldots B_2 B_1

for some n \geq 2. But then

S' = (B_1) (B_2 \ldots B_{n-1} B_n B_n B_{n-1} \ldots B_2) (B_1)

is also a valid decomposition – its middle block is non-empty precisely because n \geq 2 – and it has three blocks, which is odd: a contradiction. Every grouping must therefore be of the form S = B_1 B_1, and since that split has to land exactly in the middle of the string, the number of groupings has to be 1. We can probably formulate an even stronger condition (for instance, that apart from B_1 itself, no prefix of B_1 is also a suffix of B_1). While the reasoning seemed sound to me, I wasn’t too confident I had the right answer here.

(Note: The question text implied that a block palindrome must have at least one valid grouping, so while a string with zero groupings would vacuously satisfy “all groupings contain an even number of blocks”, it wouldn’t count as a block palindrome.)

Problem 2: Battleships

Given a funky (deterministic) algorithm for generating random numbers, implement it, and use it to fill a grid with ships of certain sizes via some kind of rejection sampling. The key constraints are that the grid has a finite size, and more interestingly ships are not allowed to touch (even diagonally).

Implementing the random-generator is pretty much following instructions from the spec. However, there are several ways to implement the latter constraint. Given that optimality is generally not important in problem 2, I went for a rather nasty solution that, say, for a k-ship placed horizontally would iterate through a 3 \times (k + 2) bounding rectangle around the ship and mark all those squares as unusable. Better asymptotic complexity can probably be obtained using an R-tree (though that is probably not reasonable for contests – especially without a reference!).

There was a rather nasty “reasoning” problem about the number of ways to place ships of size 4, 3, 2 and 1 on a 5 by 5 grid – this was also where I lost the five points. I hacked my implementation to, instead of using the random number generator, simply use backtracking to count the number of possible placements. However, I forgot that my bounding rectangle system was based on booleans (in other words, a square that was unusable because of two different ships would suddenly become usable after just one was removed), and thus came up with the impressive total of 427,840 possibilities (correct answer: 1,424)!

I subsequently fixed the aforementioned bug (using integers instead) and confirmed that the answer was indeed 1,424. During the test I did try smoke-tests that verified that a 1-ship could be placed in 25 ways and a 5-ship in 10; I attempted to do two 1-ships, but decided that I would have had too much pain working through the combinatorics for that.

Problem 3: Modern Art

This problem revolved around (reasonably) efficiently determining the ith permutation of a string in terms of lexicographic ordering, possibly involving duplicate characters. For example, in a string which consists of five each of A, B, C and D – what is the 11,223,344,556th permutation?

Of course, for smaller strings and smaller numbers it would suffice to implement some kind of next-permutation generator. This is a pretty standard algorithm. We can then step through the permutations until we get what we want – and there are some test cases that would be solvable with this; you would get (I think) 11 of the 25 points available.

The key idea is that we don’t want to explore all of the permutations. From combinatorics, we can quickly calculate the number of permutations of five As, Bs, Cs and Ds:

\displaystyle {20 \choose 5} \times {15 \choose 5} \times {10 \choose 5} \times {5 \choose 5} = 11,732,745,024

Intuitively, we are close to the “end”; we can thus be pretty sure that our 11-billionth permutation should start with a D. We can thus write a recursive (or iterative) algorithm that identifies the first character based on this search.

I began by implementing a function that computes the number of ways of permuting a given number of As, Bs, Cs and Ds. This was done in a bit of a hacky way and has some (probably unnecessary) memoisation.

Interestingly, it turns out it would actually have been okay to build the entire numerator and denominator and then divide the two; perhaps 20 was chosen to avoid complications with overflow (since 21! will exceed the max value of a 64 bit integer). Anyway, the highest value the numerator could have taken on here was 1,860,480 (20 to 16 multiplied together) so I was sure I was safe this way.

We then go through the characters in lexicographic order, and for each one, consider the number of permutations that would be feasible. The general approach is something like this:
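
(The original implementation isn’t included here; the following is a reconstruction of the idea, with numPerms standing in for the counting function described above and BigInteger used for simplicity even though the values fit in 64 bits.)

```java
import java.math.BigInteger;

// Reconstruction sketch: counts[c] is how many of each character remain, and rank
// is 1-indexed (assumed to be between 1 and numPerms(counts)). At each position we
// try characters in lexicographic order, skipping whole blocks of permutations
// until the block containing the target rank is found. Note that counts is consumed.
public final class ModernArt {
    static String kthPermutation(int[] counts, char[] alphabet, BigInteger rank) {
        StringBuilder result = new StringBuilder();
        int remaining = 0;
        for (int c : counts) remaining += c;
        for (int position = 0; position < remaining; ) {
            for (int c = 0; c < alphabet.length; c++) {
                if (counts[c] == 0) continue;
                counts[c]--;
                BigInteger block = numPerms(counts);   // permutations starting with alphabet[c]
                if (rank.compareTo(block) <= 0) {
                    result.append(alphabet[c]);        // the target lies in this block
                    position++;
                    break;
                }
                rank = rank.subtract(block);           // skip the whole block
                counts[c]++;
            }
        }
        return result.toString();
    }

    // Multinomial coefficient: (sum of counts)! / (counts[0]! * counts[1]! * ...)
    static BigInteger numPerms(int[] counts) {
        BigInteger result = BigInteger.ONE;
        int placed = 0;
        for (int count : counts) {
            for (int i = 1; i <= count; i++) {
                placed++;
                result = result.multiply(BigInteger.valueOf(placed)).divide(BigInteger.valueOf(i));
            }
        }
        return result;
    }
}
```

For instance, with counts {1, 2, 1} over the alphabet {A, B, C} and rank 5, this returns BACB, the fifth permutation of ABBC.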

I actually finished the implementation fairly quickly, but decided to write a couple of tests; I stepped through the 12 permutations of “ABBC” as given in the spec, and also wrote a test for “AAABBBCCCDDD” that all the permutations were generated in the correct order (using numPerms to give me the range). I also of course tried a couple of edge-ish cases such as each of “A”, “B”, “C” and “D”, and a test to verify we could handle the expected scale (five of each character, retrieving the 1st as well as 11732745024th permutation).

The “reasoning” problems on this one were very easy compared to the other two questions.

Conclusion

Unfortunately meta-analysis is somewhat limited here as unlike the AIME results, the ones for the BIO aren’t public. I’m pretty sure 95 is a solid score as the contest isn’t that easy, but I won’t be surprised if there are multiple 100s (with a bit of caution I could have gotten one)!

Style during programming contests is an interesting issue. I find that many contestants especially with C++ like to use rather bespoke preprocessor routines (e.g. cin as f, cout as g). I’m actually used to some of them as well e.g. ll for long long, vpll for vector<pair<long, long>>. If one understands these it’s probably better for speed, though when debugging I find having not-unreasonable names is preferable (even if my code is written more for speed than style).

Odd Sums Sudoku

The puzzle above is from the WPF Sudoku GP, Round 4 2016 #13. To solve it, each row, column and marked 3×3 box must contain the digits 1 to 9 once each, and the solution must have, in the marked rows, maximal odd substrings summing to the given totals in the correct order.

I’ve always enjoyed a good puzzle, though I’ve never considered myself especially good at them. That said, I’ve done my fair share of sudoku, including several books where puzzles were labelled as very difficult; yet, fairly often identifying triples, or the odd X-wing, XY-wing or unique rectangle would be sufficient to blow through the puzzles. At least, I thought that clearing some “very difficult” puzzles in 8 or 10 minutes was blowing through them; I found the World Puzzle Federation’s Sudoku GP, where some of these had benchmark times along the lines of 3-4 minutes!

Furthermore, in addition to your standard “classic” puzzles which feature the standard rules (i.e. each row, column and marked 3×3 box must contain the digits from 1 to 9 once each), there are a couple of interesting variants that require understanding new rulesets and figuring out how to deal with them. After a while, as mentioned the classic puzzles largely tend to be solved with standard techniques, and the challenge there is dispatching them with extreme speed; the variants, on the other hand, tend to maintain some degree of freshness (at least as far as I’m concerned).

The above puzzle was an example of this; several of the rows are marked with additional clues, and for these rows, the largest odd substrings in the row must sum to the values indicated by the clues. For example, if a row was 178356924, the clue associated with it would be 8, 8, 9. Note that this is also generated by several other valid sequences, such as 532174689 or 174283569.

For the above puzzle, the 9, 5, 3, 8 clue is a good starting point, and indicates that we have 9, 5, 3 and (17 or 71) as substrings of the solution to that row in some order; given the position of 6, we can conclude that the 9 and the 3 must occur before the 6 (though we don’t know exactly where yet). We thus have 9 in the bottom left hand corner, an even number (either 2 or 4), and then a 3. Now, reading the first column clue 1, 12, 3, 9, we have 1, (57 or 75), 3 and 9 as substrings. The 9 is at the end, and 3 cannot occur before the 4 as there’s no space to fit it. Thus 3 occurs after the 4, but it has to be separated from the 9 by an even number. More simply, the second column clue 10, 9, 6 indicates that our 5 has to be preceded by an even number and then a 1. There’s only one spot left for a 7 in this lower box, so we can insert it.

We thus reach this position:

Moving on, we can consider the middle column. 3 can only be satisfied by a singular 3, so we know that the 3 in the bottom row cannot be in the middle column. It thus must be in the sixth column, with 1 and 7 taking up the eighth and ninth columns (in some order).

Now consider the rightmost column. It is 3, 10, 12; 3 can only be satisfied by a singular 3, so the odd substrings must be 3, (19 or 91) and (57 or 75). Given the position of the 2 (fourth row), it must be the case that the column has 1 and 9 in the fifth and sixth rows, an even number in the seventh, and 5 and 7 in the eighth and ninth. We thus clearly have 7 in the corner, and 5 above it.

Finally, we can actually decide on the ordering of 1 and 9 as well; consider the hint for the middle row, which ends with 8; a 9 would automatically exceed this. Thus the middle row must contain the 1 and the sixth row the 9. The hint gives us for free that we have a 7 to the left of the 1.

There are more interesting deductions we can make as well. Consider the seventh column. The clue is 16, 9 and this can be satisfied in two ways: {1357} in some order and then a singleton 9, or {79} and {135} in some order. However, we can prove that the latter construction is impossible; the middle row is an even number, and also we have already used the 1 and the 5 in the bottom right hand box so we can’t place those numbers there. Thus, the first four numbers must be {1357} in some order and there is a 9 somewhere in the bottom right box.

The clue in the first row requires a 5 at the end. Given the 10, 10, 5 clue and the 4 already placed in the sixth column, the only odd substring in the last box must be the 5. We thus must have the 5 in the first row, and now only the 3 can fit in the fourth row. We also gather that the last two columns of the first row must be even, and we have enough clues to place these (using standard Sudoku rules).

Finally, we can place the 4 in the middle right box (now that the 3 is there, there’s only one space left where it can fit). Similarly, we can place the 5 (knowing that it can’t be in the seventh column any longer).

From this point on, solving the rest of the puzzle should be reasonably straightforward. The next step would probably involve placing 1, 9, 6, 7 and 3 in the first row (in that order – using the column hints and the knowledge that we have two tens)!

For these contests, I’d say the challenge lies largely in figuring out how the ruleset can be used to help solve the puzzle (and, well, speed; you’re thrown 13-15 puzzles to solve in the space of 90 minutes and I can only usually manage about seven or eight, and for that matter the ones I do finish are the ones on the easier end of the scale!).

On Hints in Puzzles

I played a game of Paperback with a couple of friends from Imperial yesterday. The game involves players making words from letter cards in their hands, and using them to buy additional, more powerful letter cards and/or cards worth victory points. It’s pretty much a deckbuilder crossed with a word game; while anagramming skills are indeed important, they aren’t always necessary (for instance, I managed to form the word WORK to buy the second highest tier victory point card) and certainly aren’t sufficient (keeping one’s deck efficient is certainly important).

In any case, the game reached a point where one of my classmates seemed to be agonising over what was presumably a tricky hand. The game does have a number of cards that use digraphs such as ES, TH or ED, which can make searching for words a little trickier. The common vowel (a vowel that players are all allowed to use in their words in addition to the cards in their hand) was an O, and the hand in question was [IT], [R], [B], [U] and two wilds. I immediately saw a solution when he sought help (BURrITO or BURrITOs; none of the cards had abilities dependent on word length) and let him know that a ‘perfect’ solution for his hand existed, though we decided to let him figure it out for himself. This started to drag on, though, and giving him the answer directly seemed excessive; I thus decided to drop hints that would hopefully lead him to the solution.

Technically, the search space that he would have had at this point would have been 5! + 6! \times 26 + {7 \choose 2} \times 5! \times 26^2 which is about 1.7 million possibilities, though of course one does not simply iterate through them; combinations like OqRzBITU would probably not even be considered directly, especially since the distribution of letters in English certainly isn’t memoryless (the Q being a good example; it is rarely followed by any character other than U). I gave a few hints, in this order:

  1. The word is a common word that a person in middle school would likely know.
  2. The word can be made with just one of the blanks.
  3. The blank you use matches one of the letters already on the table.
  4. You can eat it.

We could argue that the point of a hint could be to reduce the search space being considered by disqualifying some solutions, and/or serve as a way to encourage the algorithm to explore cases more likely to be correct first. Interestingly, it seems that clue 1 was neither of the above, though. It didn’t really cut down the search space much, and as far as I know it’s not easy to use this information to organise one’s search. Its point was mainly to reassure my classmate that the solution wasn’t too esoteric and he would know it was a valid solution upon considering it. I didn’t think I had played any particularly uncommon words earlier in the game (the rarest might have just been TENSOR or something), though I think I might have played Anagrams with him before, where I know I’ve called out some more… interesting words.

The next two clues do actually substantially reduce the search space in a sense though; the second clue slashes it to just 6! \times 26 = 18720 and the third further cuts it down to 6! \times 6 = 4320. I would think that after the third hint, the search should be pretty fast; I would probably try doubling up on the R and T first as they’re common.

Nonetheless, hint number 4 seemed to be the hint he responded best to, and it falls into the second category (encouraging the search to explore cases more likely to be correct). I’m not sure how humans solve anagrams, especially when blanks are involved; I personally seem to look at letters, intuitively construct a word that looks like it incorporates the key letters present, and then verify that the word is constructible (in a sense, it is a generate-and-test approach). Given the point values of the tiles, I saw including the B, U and IT cards as being the most important; thinking of words that use these patterns resulted in BITUMEN and BURRITO fairly quickly, the latter of which was feasible.

I noticed that I tend to bias towards using the constraint letters near the beginning of the word, though; this might be a cognitive issue with trying to satisfy the constraints quickly – an unusual form of the human brain’s general weakness at long-term planning. This wasn’t really the case with the burrito example, but another friend had a hand of ten cards (I think) at some point with an [M], [IS], [T], [R], [D] and a couple of blanks. With the common O and knowing that the M, IS and D were higher-value, I saw MISDOeR and struggled to incorporate the T, finding it difficult to back out of starting the word with a MIS- prefix. It turns out it might have been okay, in that I could have found MISsORTeD; in any case there were easier solutions such as aMORTISeD or MODeRnIST.

On hindsight, though, my hinting might not have been ideal. This is because BURrITO and its plural form weren’t the only solutions; there was OBITUaRy, which some people might find easier to see (though I didn’t actually see this during the game). I think clue 1 would probably already be invalid; I knew the word ‘obituary’ in primary school, since it was a section in the newspapers that I would browse through in the mornings, but I’m not sure it would be fair to expect middle school students in general to be familiar with the term. Clue 2 would certainly be incorrect.

On the Theoretical Complexity of Efficient Testing

Complexity was one of the courses I took at Imperial that I found to be especially intellectually stimulating. I would say it was probably the most challenging course of the degree for me at least (excluding the individual project; the second half of Intelligent Data and Probabilistic Inference probably comes close, also bearing in mind the bar being set at 90). I would say the course has certainly helped me reason about algorithms and problems in more interesting and divergent ways, and has also been useful in figuring out when to stop trying to devise excessively clever algorithms (by understanding when that is impossible).

The course starts by looking at decision problems (that is, problems with a yes/no answer), and whether they fit into the class called P (decidable in polynomial time). Examples of problems in P include finding the maximum element in an array or list, and finding the shortest path between a pair of nodes in a graph (breadth-first search for unweighted graphs and Dijkstra’s algorithm for weighted ones, both of which run in polynomial time).

We then move on to NP (nondeterministically decidable in polynomial time) – effectively, this means that it is possible to verify a “proof” of the problem being decided as a yes in polynomial time. Well-known problems in NP include the Hamiltonian path problem (given a graph, is there a path passing through each node exactly once?) – this one is in NP because a “proof” that the answer is yes can be given in the form of a path. We can check that the edges match simply by walking down that path, and that the path is a permutation of the nodes. Note that we don’t need to be able to prove negative answers in polynomial time – there is a separate class called co-NP for cases where we can prove these. (NP \cap co-NP contains P, as well as a few problems such as integer factorisation that aren’t known to be in P; whether the intersection is strictly smaller than NP or co-NP is an open question.)

(The course also covered the underlying theory behind P and NP, concerning Turing machines – I won’t go into detail here.)

One of the biggest parts of the course involved the concept of many-one reduction; a problem V reduces to a problem W if we can translate any instance of V to an instance of W with the same answer, within certain constraints. For example, the problem of determining whether all integers in a set are positive reduces to that of determining whether all integers are negative under a polynomial time constraint; the first problem (positive) holds for some set S if and only if the second holds for T, which is S with all elements multiplied by -1. In a sense, we’re showing that if we can solve W, then we can solve V as well.
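
As a toy sketch of that example reduction (names are mine):

```java
// Toy many-one reduction: an instance of "are all elements positive?" is
// translated, in linear (hence polynomial) time, into an instance of "are all
// elements negative?" that has the same answer.
public final class ReductionExample {
    static int[] reduceAllPositiveToAllNegative(int[] instance) {
        int[] translated = new int[instance.length];
        for (int i = 0; i < instance.length; i++) translated[i] = -instance[i];
        return translated;
    }
}
```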

We then defined the hardest problems in a given class to be the problems that are complete for that class; that is, every problem in that class reduces to a complete problem (and yes, the complete problems reduce to themselves). For NP, the constraint on the reduction is that it can be carried out in polynomial time. This makes sense as NP would not be closed under reductions allowing more generous bounds (for example, if V reduced to W under some exponential time procedure and W was in NP, that wouldn’t mean V is in NP; however, if V reduces to W under a polynomial time procedure and W is in NP, then so is V). The latter holds because given an instance of V, we can “translate” it using our reduction into an instance of W in polynomial time, and a guess for that instance of W can then be verified in polynomial time.

The Hamiltonian path problem (hereafter HP) I introduced earlier is NP-complete, and can be used to show that other problems are NP-complete (by reducing HP to the other problem). This works because, from our definition of NP-completeness, any problem in NP can be reduced to HP, and then reduced to our problem. This involves potentially composing two polynomial time mappings – but the composition of polynomials is itself polynomial. Note that it’s not just a sum, because the “output” of the first mapping, which is the input to the second mapping, could be bigger than the initial input (though only polynomially so!).

(For showing HP itself is NP-complete, there is a reduction of SAT – the problem of determining whether a logical formula in conjunctive normal form is satisfiable – to HP, and SAT itself can be proven NP-complete from the definition of NP over Turing machines; this is the Cook-Levin theorem.)

The inspiration for this post came from some preparatory work I was doing for a reading group with James, on algorithms for test-suite minimisation. The authors comment that the problem of finding a minimal representative subset of a test suite is NP-complete. As a starting point, removing some of the details, the specific problem they give (involving a mapping from tests to the requirements demonstrated by those tests) is easily reformulated as the Minimum Set Cover problem: given a collection C of subsets of a bigger set S, find the smallest sub-collection of C whose union is S.

Strictly speaking, this isn’t a decision problem, but we can put it into a decision form: given a collection C of subsets of a bigger set S and a positive integer k, is there a sub-collection of C of size at most k whose union is S? Solving the original problem can then be performed by a binary search on k, at each step testing this decision problem for a different value of k. Let’s call this problem Set Cover (Decision) or SC(D).
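As a sketch of how the decision form recovers the optimisation problem – set_cover_decision here is a hypothetical stand-in for whatever SC(D) decision procedure we have:

```python
def smallest_cover_size(subsets, universe, set_cover_decision):
    """Find the minimum k for which SC(D) answers yes, by binary search.

    set_cover_decision(subsets, universe, k) is assumed to decide SC(D);
    we call it only O(log |C|) times.
    """
    lo, hi = 0, len(subsets)
    if not set_cover_decision(subsets, universe, hi):
        return None                   # even using every subset doesn't cover S
    while lo < hi:
        mid = (lo + hi) // 2
        if set_cover_decision(subsets, universe, mid):
            hi = mid                  # a cover of size mid exists; try smaller
        else:
            lo = mid + 1              # mid sets aren't enough
    return lo
```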

We could proceed with trying to show HP reduces to SC(D), though I find it easier to do things a bit differently – we can pick any problem that’s NP-complete (so I’ll pick SAT), and we can also exploit transitivity of reductions as mentioned above. I’ll thus go via two intermediate problems:

  1. Reduce SAT to 3SAT (the problem of determining whether a formula in conjunctive normal form where each clause has three literals is satisfiable)
  2. Reduce 3SAT to Vertex Cover (VC – given a graph and some positive integer k, is there a set of nodes, size up to k, such that every edge has at least one endpoint in the set?)
  3. Reduce VC to SC(D).

Step 3 is probably the easiest now – given an instance of VC, we map each edge to be an element in S, and each node to be a candidate subset containing all of the edges that are adjacent to it. That mapping requires us to iterate through the graph perhaps a few times, but that’s definitely polynomial in the input. Comparing the definitions it’s pretty clear that this should work (that said, if you were actually doing a complexity course you might need a more rigorous proof).
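A quick sketch of that mapping (my own illustration; edges are given as pairs of nodes, and the bound k is carried across unchanged):

```python
def vc_to_scd(nodes, edges, k):
    """Map a Vertex Cover instance to a Set Cover (Decision) instance.

    Each edge becomes an element of the universe S; each node becomes a
    candidate subset containing the edges incident to it.  This is a
    single pass over the edges, so comfortably polynomial time.
    """
    universe = {frozenset(edge) for edge in edges}
    candidate_subsets = {node: set() for node in nodes}
    for edge in edges:
        for endpoint in edge:
            candidate_subsets[endpoint].add(frozenset(edge))
    return universe, candidate_subsets, k
```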

Step 1 relies on rewriting each clause into an equisatisfiable set of three-literal clauses – the idea being that we can always rewrite a clause that doesn’t have exactly three literals into one or more clauses that do. The one- and two-literal cases simply involve repeating one of the literals that’s already there; if we have more than three, we can introduce auxiliary variables to chain them together – for example, (a ∨ b ∨ c ∨ d ∨ e) becomes (a ∨ b ∨ X) ∧ (¬X ∨ c ∨ Y) ∧ (¬Y ∨ d ∨ e).

The rewritten 3SAT formula is satisfiable if and only if the original SAT formula is. If any one of the original literals in a clause is true, then we can choose suitable values of X and Y (plus further auxiliary variables in larger cases) to satisfy all of the new clauses, “rippling out” from the literal that was satisfied. However, if none of them are true, we’ll run into a conflict somewhere – the auxiliary variables alone can’t satisfy every clause in the chain.
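A rough sketch of the clause rewriting, with literals in the DIMACS style (a positive integer for a variable, a negative integer for its negation) – my own encoding, purely for illustration:

```python
def clause_to_3sat(clause, next_aux):
    """Rewrite one CNF clause (a list of non-zero ints, where -x denotes
    the negation of variable x) into clauses of exactly three literals.

    next_aux is the first unused variable index, available for
    auxiliaries; returns (new_clauses, updated next_aux).
    """
    if len(clause) == 1:
        return [clause * 3], next_aux              # (l v l v l)
    if len(clause) == 2:
        return [clause + clause[:1]], next_aux     # (l1 v l2 v l1)
    if len(clause) == 3:
        return [clause], next_aux
    # More than three literals: chain with auxiliaries, e.g.
    # (a v b v c v d v e) -> (a v b v X) ^ (~X v c v Y) ^ (~Y v d v e)
    new_clauses = [clause[:2] + [next_aux]]
    for literal in clause[2:-2]:
        new_clauses.append([-next_aux, literal, next_aux + 1])
        next_aux += 1
    new_clauses.append([-next_aux, clause[-2], clause[-1]])
    return new_clauses, next_aux + 1

def formula_to_3sat(clauses, num_vars):
    """Apply the rewriting to every clause of a CNF formula."""
    next_aux, result = num_vars + 1, []
    for clause in clauses:
        new_clauses, next_aux = clause_to_3sat(clause, next_aux)
        result.extend(new_clauses)
    return result
```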

Finally, step 2 is somewhat trickier. At a high level, the idea is that we want to enforce some way of selecting a truth value for each variable that satisfies at least one of the literals in each clause. We can construct a triangle for each clause, wanting our cover to include only two of the nodes in each triangle (corresponding to ‘false’ literals). The unselected literal must then be “true” in some sense, but we cannot have conflicting literals in different clauses both being true. To handle this, we add a gadget for each variable – an edge between a node for x and a node for ¬x, at least one of which must appear in any cover – and connect each triangle corner to the variable node carrying the same literal.

The formula is satisfiable if and only if we can build a vertex cover that picks two of the nodes in each clause triangle and one truth value for each variable – so k = 2C + V, where C is the number of clauses and V the number of variables.
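To make the construction concrete, here’s a rough sketch of building that graph (my own encoding choices – DIMACS-style literals and tuple-labelled nodes – rather than anything prescribed by the course or the paper):

```python
from itertools import combinations

def three_sat_to_vertex_cover(clauses):
    """Build the Vertex Cover instance for a 3-CNF formula.

    clauses is a list of 3-literal clauses in DIMACS style (-x means the
    negation of variable x).  Returns (nodes, edges, k) with k = V + 2C.
    """
    variables = {abs(lit) for clause in clauses for lit in clause}
    nodes, edges = set(), set()

    # Variable gadgets: an edge between x and ~x forces at least one of
    # them (the chosen truth value) into any cover.
    for v in variables:
        pos, neg = ("var", v, True), ("var", v, False)
        nodes |= {pos, neg}
        edges.add(frozenset((pos, neg)))

    # Clause gadgets: a triangle per clause forces two of its three nodes
    # into the cover; the leftover corner's literal must then be true.
    for i, clause in enumerate(clauses):
        corners = [("clause", i, j) for j in range(3)]
        nodes |= set(corners)
        for a, b in combinations(corners, 2):
            edges.add(frozenset((a, b)))
        # Link each corner to the variable-gadget node of the same literal.
        for corner, lit in zip(corners, clause):
            edges.add(frozenset((corner, ("var", abs(lit), lit > 0))))

    k = len(variables) + 2 * len(clauses)
    return nodes, edges, k
```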

I think the course and especially exams on it were difficult because these constructions require nontrivial insight, which can be really difficult under pressure. Having a large set of known starting points would certainly have helped; this kind of three-stage derivation would probably not be needed in an exam, though constructions of a similar level of difficulty as step 2 were certainly present.

Another Look at Dynamic Programming

Whilst on the tube today, I overheard a mother teaching her child how to count, using a method likely to be extremely familiar to many – fingers. The child counted correctly from one to ten, and then the mother added her hands too and asked the child to count how many fingers there were now.

“One, two, three -”

And so on, till twenty. The mother then attempted to explain that it would have been faster if the child had continued from ten, rather than starting again. Although it wasn’t really an example of the concept, the words dynamic programming immediately shot to the front of my mind. I initially found this to be a rather confusing concept to grasp (let’s say that up to my high school programming contest days, if a problem wasn’t solvable by exhaustive search or a greedy algorithm, I’d likely have struggled), so I figured a post on it might be worthwhile.

(This isn’t really an example of DP; I’d say it’s closer to divide and conquer plus the use of a cache. We’ve cached the answer that the child has ten fingers, and identified the problem as being taking the sum of the child’s and parent’s fingers. Note that because of the possibility of amputation or polydactyly, the subproblems are not the same – and, specifically, saying 2 * 10 = 20 isn’t generally correct.)

Essentially, the key idea behind dynamic programming (DP) is that we save time by not re-doing work that we’ve already done, by remembering the results of intermediate steps. Of course, this tends to mean that there’s a space overhead. This is generally useful in cases where a problem is too large to solve directly, yet it can be decomposed into smaller pieces, and, importantly, we can combine optimal solutions to these smaller pieces to get a solution that is optimal for the original problem. (More formally, this is known as optimal substructure.)

Furthermore, we want to get some significant benefit out of actually remembering the answers (in practice, we want to use our previous work multiple times; this manifests in the form of overlapping subproblems). This is what would distinguish an approach as being a DP-based one, as opposed to divide and conquer.

Of course, the fingers example is trivial. There are many other natural examples (the ones that come to mind first for me include knapsack problems and route-planning), though I’m not sure I directly apply DP that much in a natural context (although quite a few of my days have task lists that could be optimised by solving an ordered, constrained TSP, the last time I actually used the Held-Karp algorithm was probably for my third-year project). It certainly does see many applications that are relevant to daily life (error correction in search queries and autocorrect via Levenshtein distance, for instance; I’m not sure how they’re actually implemented, but routing applications like Citymapper and Google Maps are likely to involve such algorithms as well).

In terms of implementation, the cache-based “top-down” solution was what I learned first, and to me at least was intuitively easier to understand. When you encounter a subproblem, you check a lookup table to see if you’ve done the problem before; if you have, you just take the answer from that. If you haven’t, solve the problem the hard way (this may involve more subproblems – when solving these, it’s important to look at the table again), and then (important!) store the answer you obtained back in the table.
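A minimal sketch of that recipe, using Fibonacci only because it keeps the example tiny:

```python
def fib_top_down(n, cache=None):
    """Memoised ("top-down") Fibonacci: check the lookup table first,
    otherwise solve the subproblem the hard way, and store the answer
    back in the table before returning it."""
    if cache is None:
        cache = {}
    if n in cache:                       # have we done this subproblem before?
        return cache[n]
    if n < 2:
        result = n                       # base cases F(0) = 0, F(1) = 1
    else:
        result = fib_top_down(n - 1, cache) + fib_top_down(n - 2, cache)
    cache[n] = result                    # the important step: remember it
    return result
```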

The alternative “bottom-up” method involves generating solutions to smaller subproblems, and using these to build up the solution to a bigger problem. I’d probably first actually used a method along these lines when introduced to the Fibonacci sequence (probably in year 4 or so) – I remember being asked to compute F_{13} and did something like “1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, uh… 144, 233”. (This is linear time. It’s possible to do better via matrix exponentiation, or Binet’s formula – bonus points if you pair the exponentiation with a fancy multiplication algorithm like Karatsuba or even Schonhage-Strassen.)
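The same computation done bottom-up, again just as a sketch – note that only the last two values ever need to be kept around:

```python
def fib_bottom_up(n):
    """Iterative ("bottom-up") Fibonacci: build up from the base cases,
    keeping only the last two values - linear time, constant space."""
    a, b = 0, 1                  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b          # slide the window one step forward
    return a
```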

From a computing point of view there can be both benefits and disadvantages to this versus the “top-down” method.

  • Ease of understanding and/or code readability are likely to depend on the problem; for Fibonacci I would prefer bottom-up, but I usually find the top-down case to be more approachable (it’s more intuitive to me at least to think “here’s how I decompose this problem” as opposed to “here’s how I build a solution from smaller solutions”).
  • The top-down approach might be able to solve some problems without necessarily computing all smaller subproblems that a bottom-up solution counting up from 0 or 1 might deal with. You can, of course, implement this in a bottom-up solution… provided you know how to compute the required subproblems in a way that isn’t itself too costly. With a top-down approach you get this “avoidance” for free.
  • As an extension of the previous point: for bottom-up you’ll need to figure out a safe ordering to go through the subproblems (you can’t have a solution depending on something that hasn’t been computed yet). This is easy in most cases (*cough* Fibonacci), but can be extremely difficult in others (chess transposition tables come to mind; problems with online input, many base cases and a massive domain).
  • Recursive implementations (which tend to be top-down, though could plausibly be in either direction; it’s possible to maintain your own call stack on the heap, or pass some kind of lookup table around) incur the overhead of function calls, and can cause stack overflows for large problems.
  • Handling limited memory (there are many 2D array problems for which only the last row of results needs to be kept; alternatively, with Fibonacci we only need the last two results) tends to be more naturally expressed with the bottom-up method (though of course, you can clean the top-down cache). This is probably because you’ll have defined an order for solving the subproblems, which may not be as immediately clear with the top-down method.

Note that although this is a powerful tool, there are quite a number of cases where you don’t actually need to consider all of the ways of decomposing a problem into subproblems. A well-known example would be the activity selection problem: given a set of activities with start and end times, where I can’t take part in two activities that overlap, find the largest set of activities I can participate in. I can solve this optimally by sorting events by their ending time and greedily picking events to fill my schedule where feasible. The key differentiator here is what’s known as the greedy choice property: making a locally optimal choice at each step still gives us the overall optimal solution.
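A sketch of the greedy method, with activities given as (start, end) pairs:

```python
def max_activities(activities):
    """Greedy activity selection: sort by finish time, then take every
    activity that starts no earlier than the last chosen one ends."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(activities, key=lambda a: a[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen
```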

In practice, anyway, it’s highly unlikely that I’d weight my activities equally, so we then get to the weighted activity selection problem, and the greedy method no longer works (but we can still use dynamic programming – as before, sort the activities by their ending time, and for each activity, pick the better of not attending it, or attending it and behaving optimally before its start time).
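A sketch of that DP, with activities given as (start, end, weight) triples; the binary search finds how many earlier activities finish no later than the current one starts:

```python
from bisect import bisect_right

def max_weighted_activities(activities):
    """Weighted activity selection: for each activity (in order of end
    time), take the better of skipping it or attending it plus the best
    achievable before its start time."""
    acts = sorted(activities, key=lambda a: a[1])
    ends = [end for _, end, _ in acts]
    best = [0] * (len(acts) + 1)    # best[i] = optimum over the first i activities
    for i, (start, end, weight) in enumerate(acts):
        j = bisect_right(ends, start, 0, i)   # activities finishing by this start
        best[i + 1] = max(best[i], best[j] + weight)
    return best[-1]
```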
