Static Single Assignments (January Haskell Tests 2018 – Part 2)

Second in a two-part series; see Constant Propagation (January Haskell Tests 2018 – Part 1) for the first part, which covered the assessed portion of the test.

In January this year, Imperial College's first-year computing students were tasked with implementing a tool to handle constant propagation for statements in static single assignment (SSA) form. In SSA form, each variable is assigned to exactly once, and (in the version used for the test) there are no compound expressions. This is useful for optimising compilers, which use it to simplify the machine code that a program compiles down to.

Of course, being required to write everything in SSA form is painful; as much as I like immutable objects, mandating their use can be costly (consider a large set, for instance, that periodically receives updates). Banning compound statements can also be troublesome; hence, SSA compilation is typically automated.

The test concluded with a challenge tasking students to implement support for converting a more general program into SSA form. The students had been working with a simple imperative language with statements to assign values to integer variables (possibly doing mathematical operations), conditional statements (if-then-else) and do-while loops.

This problem naturally decomposes into three sub-tasks: supporting SSA translation for basic blocks, conditionals and then loops. The first problem can also readily be split into two sub-problems: ensuring variables are immutable and then handling compound expressions.

Basic Blocks

Immutable Variables

I opted to begin with immutability here, as this would allow for easy testing. This is done by creating multiple versions of a variable. For example, a = 1; a = a + 2 could become something like a1 = 1; a2 = a1 + 2.

Essentially, one could implement this by keeping track of the latest version of each variable used, and incrementing the version whenever processing an assignment. When handling the right-hand side of an assignment, one needs to know how to reference the latest version of each variable involved.

I’m pretty sure I wouldn’t have bothered to write a specific UsageMap type when I was still in university, though working in industry seems to have made me think more carefully about typing.
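
As a rough sketch of the mechanism in Java (the test itself is in Haskell, and the UsageMap below, along with its method names, is my own invention rather than the type from my submission):

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch: tracks the latest version of each source variable, so that
    // reads can be renamed to the current version and writes can mint a fresh one.
    final class UsageMap {
        private final Map<String, Integer> latestVersion = new HashMap<>();

        // Name to use when the right-hand side of an assignment reads a variable.
        // Version 0 stands for a variable that has not yet been assigned in this block.
        String currentName(String variable) {
            return variable + latestVersion.getOrDefault(variable, 0);
        }

        // Name to use for the left-hand side of an assignment; bumps the version.
        String freshName(String variable) {
            int next = latestVersion.getOrDefault(variable, 0) + 1;
            latestVersion.put(variable, next);
            return variable + next;
        }
    }

Converting a = 1; a = a + 2 with this would read the current name for a on each right-hand side and mint a fresh name for each left-hand side, giving a1 = 1; a2 = a1 + 2.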

Compound Statements

The version of SSA in the paper also doesn't allow compound statements (e.g. a = ((b * c) + 5) / d). However, this can be handled reasonably easily, as arithmetic expressions can be represented in tree form.

We can create temporary variables for each non-leaf node of the expression tree. For example, we define k1 = b * c, k2 = k1 + 5 and k3 = k2 / d. Some care is needed in cases where the tree is a bit more balanced (we need to know that the first argument of our plus above is k1).

Also, of course, k is only used for illustration above, as a real program could have a variable actually called k! It makes sense to have a counter separate from the rest of the UsageMap, though I relied on some limits on variable names that were mentioned in the specification. I'm not sure I would have worked this out in first year!
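
To make the traversal concrete, here is a small Java sketch of the idea (again not the Haskell from the test; the Expr representation and the temporary-name counter are assumptions of mine):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch of splitting a compound expression into single-operation
    // assignments. A Leaf is a constant or variable name; BinOp is a binary operation.
    interface Expr {}
    record Leaf(String name) implements Expr {}
    record BinOp(String op, Expr left, Expr right) implements Expr {}

    final class Flattener {
        private final List<String> statements = new ArrayList<>();
        private int counter = 0;

        // Flattens target = expr, introducing temporaries (k1, k2, ...) for non-leaf children.
        List<String> flatten(String target, Expr expr) {
            statements.add(target + " = " + topLevel(expr));
            return statements;
        }

        private String topLevel(Expr expr) {
            if (expr instanceof BinOp binOp) {
                return name(binOp.left()) + " " + binOp.op() + " " + name(binOp.right());
            }
            return ((Leaf) expr).name();
        }

        // Returns a name for expr, first emitting assignments for any non-leaf subtrees;
        // temporaries therefore appear in evaluation order.
        private String name(Expr expr) {
            if (expr instanceof Leaf leaf) {
                return leaf.name();
            }
            BinOp binOp = (BinOp) expr;
            String left = name(binOp.left());
            String right = name(binOp.right());
            String temporary = "k" + (++counter);
            statements.add(temporary + " = " + left + " " + binOp.op() + " " + right);
            return temporary;
        }
    }

Running flatten("a", …) on the tree for ((b * c) + 5) / d would then produce k1 = b * c, k2 = k1 + 5 and a = k2 / d.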

Conditional Expressions

One needs to consider which versions of a variable are in scope. Consider, for instance, a block where an if-then-else assigns to variables a and b in both branches, and a + b is returned afterwards.

After reaching the end of the if block, we can't just pass the UsageMap we have obtained from it into the else block (on an execution where we go into the else block, the if block can never have run). Similarly, for the code after the conditional we can't just use the UsageMap from the end of the else block (we don't know whether the else block actually ran).

It is probably easier to visualise this block of code as a tree. Each node represents a possible linear execution path that we may run.

The version of a variable that each branch reads on entry should be the version that was current before we branched. Thus, when entering the else block, we should reference the state of the UsageMap from before we entered the if block.

When returning, we somehow need to pick one of the final values for each variable that was possibly modified. This is difficult to express directly, but we can approximate it with a phi function, where \phi(a, b) nondeterministically returns either a or b. We can thus simply add the statements a4 = \phi(a2, a3) and b4 = \phi(b2, b3), where (say) a2 and b2 are the final versions from the if branch and a3 and b3 those from the else branch, and finally return a4 + b4. Some care needs to be taken with the implementation (for example, one must correctly handle the case where a variable is only possibly modified).

During the implementation I was running out of time, which unfortunately led to a monster of a helper function. It could probably be cleaned up, perhaps by introducing a type for 'return values of a traversal'. It would also probably be reasonable to use some kind of higher-order function over the mod-set of variables (which I computed elsewhere).

Do-While Loops

Finally, we also need to support loops. We could use a graph structure similar to the one we had for the if-then-else conditional, except that there is now a cycle.

At first glance, this looks tricky. We don’t know the number of times the loop will execute, though we do know it’s at least once.

Which value of a are we reading in the loop body? Well, it is either the value from the initial set-up, or the value the previous iteration left at its end. We could thus try to introduce a phi function at the top of the loop body, something like a2 = \phi(a1, a3), where a1 is the version from the set-up before the loop and a3 is the version assigned at the end of the body.

This program is impossible to execute, of course; the definitions are cyclic. However, we’re not actually trying to execute it. We’re only using this as an intermediate representation to try to statically optimise our code. Of course, if the two arguments to a phi-function are the same, we can eliminate it, mark that value as a constant if known, and then use the information about it being a constant to drive further optimisations.

Synthesis

I think this could be a fairly challenging question in and of itself for first-year computing students, though some degree of hand-holding would likely be required (in particular, the tree traversal for splitting out intermediate assignments can be quite fiddly). It would also be worth introducing the concept of basic blocks early on, as reasoning about them is useful when handling conditionals and loops. It was a rather solid and satisfying conclusion to the test.

Constant Propagation (January Haskell Tests 2018 – Part 1)

The latest iteration of the January Haskell Tests is out (see Dr Tony Field’s page, and my post on the 2017 test). I think the last time I wrote any Haskell was eleven months ago when taking the 2017 test for a spin; nonetheless I was able to pick things up quickly. I guess the syntax (outside of monads) is pretty intuitive.

This test in some ways felt a fair bit easier than the last, at least in terms of the assessed portion (the last time round, it took me up to just past the two-hour mark to clear that; this time round, I was done with the questions that were actually worth marks by 1:15). However, the ‘bonus’ unassessed Part 4 seemed hard; finishing it in time looked to be a pretty tall order. Part 4 itself could be decomposed into three fairly chunky parts, and could even be a reasonable basis for a separate test!

The test dealt with optimising code that was already in static single assignment (SSA) form. I’d covered this topic before in a course on software reliability (within the context of static program verification), though I don’t remember covering it within the context of (possibly optimising) compilers. SSA form is fairly restrictive:

  • Each variable can only be assigned to once.
  • Operator arguments can only comprise constants and direct variable references.

For example, the assignment x = 4 * y + 7 * z is clearly not in SSA form, but it can be rewritten as a = 4*y; b = 7*z; x = a + b, where each statement is. In most programming languages variables can be assigned to multiple times; as part of the conversion, we add suffixes to variables for versioning, so that, for instance, x = 1; x = x + y could become something like x1 = 1; x2 = x1 + y.

SSA also allows for the concept of a phi function, which takes two expressions and non-deterministically returns one or the other. This is useful for performing optimisations around control flow. For example, consider a function in which an if-then-else assigns different values to the same variable in its two branches.

We can rewrite this using a phi function; after an if-block, the final value of a variable could be the value at the end of either branch, so we might write something like a3 = \phi(a1, a2).

Although one had to work with phi functions as early as Part 2 (the function written at the end of Part 2 should be able to simplify a function like the one described above significantly), thinking about the purpose of phi functions certainly wasn't important until Part 4.

Part 1 involved writing an interpreter to execute statements in a (very) simple subset of C, where code has already been parsed into tokens. I think this was intended to set a baseline for students who were struggling with the material; I knocked through the 14 marks worth (which is enough for a pass; that’s 47%) in just 17 minutes.

Things got somewhat tougher in Part 2. The opening was easy, with two 3-mark questions inviting candidates to implement simple constant folding optimisations (in particular, that 0 + x = x and \phi(x, x) = x, along with evaluating operations whose arguments are constants) and then determining the value of an expression given knowledge of some variable values.

The tricky part then began, with a task to apply constant propagation and folding to actual code blocks in SSA form. Conceptually, this is not difficult – once the value of a variable is known, it can be substituted into all subsequent occurrences. A fix-point approach fits naturally, since knowing something about some variables can lead to knowledge about more variables. Nonetheless, I struggled for a while with implementing the suggested worklist algorithm; the aforementioned fix-point approach was suggested too, and would probably have been more elegant, if harder to implement. I finished this part at 1:01.
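
As a sketch of the fix-point idea, using a toy Java model of an SSA block (my own representation, not the one from the test):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy model: each SSA statement is "target = left op right", where an operand is
    // either an integer literal or a variable name, and op may be "phi".
    record Assignment(String target, String op, String left, String right) {}

    final class ConstantPropagation {
        // Repeatedly sweeps the block until no new constants are discovered (a fix point).
        static Map<String, Integer> propagate(List<Assignment> block) {
            Map<String, Integer> known = new HashMap<>();
            boolean changed = true;
            while (changed) {
                changed = false;
                for (Assignment statement : block) {
                    if (known.containsKey(statement.target())) {
                        continue;
                    }
                    Integer left = valueOf(statement.left(), known);
                    Integer right = valueOf(statement.right(), known);
                    if (left == null || right == null) {
                        continue;
                    }
                    if (statement.op().equals("phi") && !left.equals(right)) {
                        continue; // a phi of two different constants stays unknown
                    }
                    known.put(statement.target(),
                            statement.op().equals("phi") ? left : apply(statement.op(), left, right));
                    changed = true;
                }
            }
            return known;
        }

        private static Integer valueOf(String operand, Map<String, Integer> known) {
            try {
                return Integer.parseInt(operand);
            } catch (NumberFormatException e) {
                return known.get(operand); // a variable: its known constant, or null
            }
        }

        private static int apply(String op, int left, int right) {
            return switch (op) {
                case "+" -> left + right;
                case "-" -> left - right;
                case "*" -> left * right;
                default -> throw new IllegalArgumentException("Unknown operator " + op);
            };
        }
    }

Each sweep may discover new constants – including collapsing \phi(x, x) to x – and we stop once a sweep learns nothing new.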

Part 3 involved removing phi functions that could not be cleaned up as part of the SSA optimisation, by pushing the variable assignments back into the code. It seems a decision was made to state the rules for this explicitly in the specification, for some reason (to be fair, as Tony notes, it is straightforward), which seemed to make for a pretty tame last assessed part. There was some tricky thinking required here, nonetheless; I think one has to be very careful with handling multiple phis!

Finally, Part 4 tasked students with translating a function to SSA form. This itself is weighty enough that I think another test could easily be written on it. This part can be decomposed into three (maybe four) separate challenges:

  1. Translate basic blocks to SSA form correctly.
    1. Maintain correct information about versioning of variables.
    2. Decompose compound statements (like x = 3 * y + 5 * z).
  2. Add support for conditionals.
  3. Add support for loops (do-while was a part of the simple C subset used).

Squeezing all of that into the remainder of the time after finishing the first three parts would have been tough; I finished the first two full tasks (basic blocks at 2:07 and conditionals at 2:46) and squeezed out a comment on what to do for the last sub-task. The last part of task 2 was also written rather hackily as I was racing the clock.

To be fair, I could probably have churned all three main and three bonus parts out in Java within the time-limit; it’s nonetheless useful to engage a different set of programming skills.

Last year, I wrote that I enjoyed revisiting Haskell; I enjoyed this year's task too. I think I actually enjoyed this instalment more, probably because of the outsize difficulty of Part 4! Although I haven't written professional Haskell, I often find myself writing functional-style code in Java, and I'd still give it an enthusiastic thumbs-up as a choice of first language to teach at university. There are some influences from professional development coming in here, too; a year ago I probably wouldn't have batted an eye at using types like a Haskell [(String, [Int])] or a Java Map<String, List<Integer>>, whereas now I tend to give structures their own types more quickly. My tolerance for long functions (without good reason, in non-performance-critical scenarios) has also certainly decreased.

This post is Part 1… I'll take a more in-depth look at Part 4 of the exam in Part 2.

Balloon Battle (Hotseat: UKIEPC 2017, Part 1)

Overview

The United Kingdom and Ireland Programming Contest (UKIEPC) 2017 took place a few weeks ago. I used to participate semi-actively and had a rather annoyingly consistent habit of finishing in third place at Imperial. I haven’t done contest programming for quite a good amount of time, though some of my experience from interviewing candidates at Palantir might have helped.

Teams are given five hours to solve somewhere from eleven to thirteen problems. These problems involve submitting a program (the source will do) to an online judge, which typically runs said program against multiple test cases and evaluates the output of the user’s program in some way. This can entail simply checking strings, or alternatively checking if the given solution achieves some task (consider problem D in this set).

This time round I managed to figure out 9 problems, rather coincidentally with a total “penalty time” of 999 minutes. When teams are ranked, the number of problems solved is compared first; to break ties, teams are given penalty time, where solving a problem T minutes since the start of the contest, with F failed attempts, scores T + 20F penalty time. The penalty for failed attempts does not accrue until the relevant problem is actually solved. It’s still in one’s interest to finish solving any questions – even though it may give you arbitrary amounts of penalty time, it will not hurt your ranking. This score would have placed second at Imperial (maybe because some of the members of the usual team number 2 have graduated?) and, at time of writing, 51st out of 522 overall.

Problems

This half of my write-up (I intend to discuss this in two parts) covers what I think are the easier seven problems in the contest, in the order I tackled them. Looking at the metadata on Codeforces, there is a pretty clear drop-off after this. Interestingly, F was actually considered the hardest problem of the ones I’ve listed here (in that fewest teams solved it), but I was able to place it third in difficulty; I struggled more with D than some of the later ones, even, though mainly for implementation reasons.

C: Cued Up

Wrong answer at 9 minutes; correct answer at 18 minutes (10 minutes spent total).

Given a description of the number of snooker balls still on a table, determine the maximum number of possible points remaining to be scored. Initially any ball can be hit (it isn’t known whether the red or coloured balls are “on”), but thereafter players can’t pot 2 consecutive reds, and if a non-red is potted and reds remain on the table, the next ball potted must be red. Assume that only one ball is potted per shot.

This is mainly a straightforward implementation – the main snag is that if you have only red balls, then regardless of how many there are the right answer is 1. I think I messed up the first time because I fixed this edge case (which is the last of the sample tests), but accidentally ended up printing “1” before the correct answer on all of the other tests…

I: I Work All Day

Correct answer at 13 minutes (4 minutes spent).

This was basically decoding the problem statement and figuring out that “what’s left over after repeated subtraction” refers to the modulo operator. I probably could have got this one out faster if I was still comfortable with C++.

J: Just a Minim

Correct answer at 17 minutes (4 minutes spent).

Figure out the length of a tune given the lengths of its notes. Again, raw implementation. I'm surprised this even took 4 minutes, though that might well have been because I went through all the sample tests, and also took the "hacky" approach of hardcoding the strings. There was a slightly faster-to-write solution that involved special-casing 0 and writing 1/x for the others. I also might have wasted some time because I was concerned about floating-point rounding errors and so calculated things in terms of sixteenths. (That's not needed because these numbers have exact binary representations, and the problem limits would be very unlikely to result in a loss of precision.)

F: Flipping Coins

Correct answer at 26 minutes (8 minutes spent).

You have n coins, all initially tails up, and k coin flips which you are obliged to exercise – but you can choose which coin to flip. At the end of the game you keep all coins that are heads up. What’s the expected number of coins you can keep (with optimal play)?

This one involved figuring out the right strategy (pretty simple – always flip something that's tails up if you can) and then writing a recurrence relation. There are overlapping subproblems, so a quick bit of memoisation is needed, but it is not hard to implement. I'm surprised how much candidates appeared to struggle with this problem. I guess it requires a little bit more familiarity with thinking in terms of probabilities, which has tended to be one of my stronger points at contests like these.
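
A sketch of that recurrence under my reading of the problem above (the state is the current number of heads and the flips remaining; a flipped coin lands heads with probability 1/2):

    // Illustrative sketch: expected number of heads kept, given `heads` coins currently
    // heads up out of n, with flipsLeft compulsory flips remaining, playing optimally.
    // Memoised on (heads, flipsLeft); assumes n >= 1.
    final class FlippingCoins {
        private final int n;
        private final Double[][] memo;

        FlippingCoins(int n, int k) {
            this.n = n;
            this.memo = new Double[n + 1][k + 1];
        }

        double expectedHeads(int heads, int flipsLeft) {
            if (flipsLeft == 0) {
                return heads;
            }
            if (memo[heads][flipsLeft] != null) {
                return memo[heads][flipsLeft];
            }
            double result;
            if (heads < n) {
                // Flip a tails coin: it becomes heads with probability 1/2, stays tails otherwise.
                result = 0.5 * expectedHeads(heads + 1, flipsLeft - 1)
                        + 0.5 * expectedHeads(heads, flipsLeft - 1);
            } else {
                // Everything is already heads; we are forced to risk one of them.
                result = 0.5 * expectedHeads(n, flipsLeft - 1)
                        + 0.5 * expectedHeads(n - 1, flipsLeft - 1);
            }
            memo[heads][flipsLeft] = result;
            return result;
        }
    }

The answer for n coins and k compulsory flips is then new FlippingCoins(n, k).expectedHeads(0, k).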

D: Deranging Hat

Wrong answers at 50, 53, 61 minutes. Correct answer at 89 minutes (63 minutes total spent).

A sorting network consists of comparators; a comparator is given two indices i and j (importantly, in that order), and if the letter at index i is greater than the letter at index j, the letters at those indices are swapped. Given a string, build a sorting network that would "sort" the traditionally sorted version of that string back into the original string. For example, given the string "dude", you would need to transform "ddeu" into "dude", perhaps by comparing positions 4 and 2 (and swapping them), and then positions 3 and 4 (and swapping those). The sorting network can't use more than roughly 10 times as many comparators as the string has characters, in the worst case.

It took me quite a while to figure out what the problem statement meant. I came up with a reasonable idea (swap each letter into its right place, iterating through the string once), but was needlessly worried about performance and started doing a lot of fiddly book-keeping with Maps of SortedSets; there was a difficult-to-trace bug in there. I then gave up some time after 61 minutes and reimplemented things with a much simpler temporary char-array solution, which worked.

A: Alien Sunset

Correct answer at 104 minutes (15 minutes spent).

Given day lengths on multiple planets and times of sunrise/sunset, find the first time within a given time window where it is dark everywhere.

I avoided this at first because it looked like an interval tree kind of problem. However, seeing that many people had solved it, I had a closer look, and it seemed that something in O(planets * allowed time window) would work: simply start from 1 and test each possible time to see if it is indeed dark everywhere, returning the first such time. There was a bit of a snag with some planets having daylight hours crossing zero, but nothing too difficult to manage.

E: Education

Correct answer at 140 minutes (36 minutes spent).

A school has N departments, each with some number of students Si; they need to move to a new set of buildings, each of which has capacity Ti and cost Ci (and departments cannot share buildings). Minimise the total cost of the buildings used, while fitting all of the students in (if possible).

My first thought was to sort the departments in descending order of size and then find some kind of dynamic-programming-based solution. I then realised I could do better: a greedy solution that picks the cheapest available building actually works, provided departments are considered in descending order of size. This led rather quickly to one possible solution: sort the departments in descending order of size, and populate a priority queue (highest priority = lowest cost) with the buildings that are currently permissible. As progressively smaller departments are housed, smaller buildings join the permissible pool, while large but cheap buildings that were admitted earlier remain in contention. If the queue ever runs dry, the assignment is impossible.
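
A sketch of that approach (method and variable names are mine; departments and buildings arrive as parallel arrays of sizes, capacities and costs):

    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.Optional;
    import java.util.PriorityQueue;

    final class Education {
        // Illustrative sketch: minimum total cost of housing every department, or empty if
        // impossible. Departments are considered from largest to smallest; a building becomes
        // "permissible" once its capacity covers the current department, and we always take
        // the cheapest permissible building.
        static Optional<Long> minimumCost(int[] departmentSizes, int[] capacities, int[] costs) {
            Integer[] departments = boxedIndices(departmentSizes.length);
            Arrays.sort(departments, Comparator.comparingInt((Integer i) -> departmentSizes[i]).reversed());
            Integer[] buildings = boxedIndices(capacities.length);
            Arrays.sort(buildings, Comparator.comparingInt((Integer i) -> capacities[i]).reversed());

            PriorityQueue<Integer> permissibleByCost =
                    new PriorityQueue<>(Comparator.comparingInt(i -> costs[i]));
            long total = 0;
            int nextBuilding = 0;
            for (int department : departments) {
                // Admit every building big enough for this (and hence any later, smaller) department.
                while (nextBuilding < buildings.length
                        && capacities[buildings[nextBuilding]] >= departmentSizes[department]) {
                    permissibleByCost.add(buildings[nextBuilding]);
                    nextBuilding++;
                }
                Integer cheapest = permissibleByCost.poll();
                if (cheapest == null) {
                    return Optional.empty(); // the queue ran dry: no feasible assignment
                }
                total += costs[cheapest];
            }
            return Optional.of(total);
        }

        private static Integer[] boxedIndices(int length) {
            Integer[] indices = new Integer[length];
            for (int i = 0; i < length; i++) {
                indices[i] = i;
            }
            return indices;
        }
    }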

Five problems remain that are somewhat more difficult (at least in my opinion).

Thoughts

There are certain skills that I’m still reasonably comfortable with, like mathematics in terms of computation. There are others like computational geometry that I don’t remember too well (as we’ll see in part 2). The first part of these contests is typically a speedy rush to get credit for as many problems as possible; usually somewhere between the 5 and 7 problem mark things slow down a bit. The second part for me tends to be more interesting, though I find that the focus changes from speed to actually thinking hard about algorithms – and very often fretting over implementation as well.

Readability and Performance

I find myself faced with tradeoffs between readability and performance fairly often. This happens both in programming (where, apart from some specific situations, favouring readability usually doesn't cost too much) and in English.

In programming, code that is readable is usually desirable. This often makes it easier to maintain, especially for other developers who subsequently need to understand it. For example, let's suppose we have a Java hash map, and we need to return an Optional containing the value whose key matches a given Predicate, if such a key exists. However, if there are multiple keys matching the Predicate, we want to throw an exception (even if the values are the same).

I would probably write something like this:
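
A sketch of what that might look like (the method name findUniqueMatch and the choice of exception are illustrative):

    import java.util.Map;
    import java.util.Optional;
    import java.util.function.Predicate;

    final class MapSearch {
        // Illustrative sketch: limit(2) lets the pipeline stop as soon as a second match is
        // seen, and reduce throws if that second match ever materialises.
        static <K, V> Optional<V> findUniqueMatch(Map<K, V> map, Predicate<K> keyPredicate) {
            return map.entrySet().stream()
                    .filter(entry -> keyPredicate.test(entry.getKey()))
                    .limit(2)
                    .map(Map.Entry::getValue)
                    .reduce((first, second) -> {
                        throw new IllegalStateException("Multiple keys match the predicate");
                    });
        }
    }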

In terms of performance, though, we could clearly improve. Using limit does allow us to short-circuit (so we don't look at more elements than we need to), but creating stream objects could still add significant overhead, especially if the predicate is simple.

The code below should accomplish the same function, but more quickly.
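
Again as a sketch, with the same assumed signature:

    import java.util.Map;
    import java.util.Optional;
    import java.util.function.Predicate;

    final class MapSearchLoop {
        // Illustrative sketch: a plain loop avoids creating stream objects; we still stop
        // with an exception as soon as a second matching key is found.
        static <K, V> Optional<V> findUniqueMatch(Map<K, V> map, Predicate<K> keyPredicate) {
            V match = null;
            boolean found = false;
            for (Map.Entry<K, V> entry : map.entrySet()) {
                if (keyPredicate.test(entry.getKey())) {
                    if (found) {
                        throw new IllegalStateException("Multiple keys match the predicate");
                    }
                    match = entry.getValue();
                    found = true;
                }
            }
            return Optional.ofNullable(match);
        }
    }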

This is still fairly readable, in my opinion, though there is a bit more state to keep track of. However, if the predicate is costly, we could do even better by taking our first approach and parallelising it, essentially framing the problem as a map-reduction. (The end result is likely to be slightly less readable. It might be substantially less so for people unfamiliar with concurrency).

Of course, it is not always the case that readability and performance are negatively correlated. Optimising compilers implement many techniques that transform a program with relatively readable source code into one that is efficient for a machine. The trade-off does tend to appear, though, when an entirely different (and usually more complex) algorithm and/or implementation is selected for performance reasons.

It goes without saying that readability is also subjective (tabs versus spaces is a starting point). Readability does not always imply maintainability, either. For example, deeply nested structures where each layer focuses on one specific piece of functionality may be readable, but may prove less maintainable given that they may produce longer, more convoluted stack traces when exceptions occur.

Defining “performance” in natural languages is tricky. We could consider performance to be efficiency of communication, assuming a perfect ability to comprehend. Well-written compilers implement the source language specification faithfully. They should thus be able to understand valid source, even if it’s not in a human readable form. This happens in practice; processes like JavaScript minification are commonplace.

Of course, English does not quite have a specification. Nonetheless, I think efficiency under perfect comprehension is probably a reasonably close analogue to software performance. For example, if I was asked to explain the logical argument rule of modus tollens, I would probably say something like:

Suppose we have if A, then B. Suppose not B. Then, we have not A (since if we had A, then we would have B).

That explanation would probably make sense to readers vaguely familiar with formal logic and/or philosophy. Again, I could conceivably further increase “efficiency”, by removing the bracketed part. I could even boil things down further:

If A, then B. Not B. Thus not A.

Of course, if the person I was talking to was unfamiliar with logic and, let’s say, a friend, I would proceed very differently. This would, in a sense, be prioritising readability:

It’s a way of making an argument in logic. The idea is that if we know that if A is true, then B is true, and we know B is false, then we can say that A must also be false. Let’s use a concrete example. Let’s say that if it rains, I’ll wear a hoodie, and I’m not wearing a hoodie. Deducing that it must not be raining would be an example of modus tollens.

That said, there is also plenty of subjectivity in terms of readability for natural languages. There have been many attempts to compute estimates or measures of readability. These often use formulae that accept word length and syllable distributions and/or lookup tables. However, many of these fail to account for, among other things, the level of abstraction in text. Going back to our three explanations of modus tollens, they have approximate Flesch-Kincaid grade levels of 0.4, -2.6 and 5.5 respectively. However, I’m fairly confident that most three or four year olds (grade -2.6) would not quite understand the concept of modus tollens after reading “if A then B, not B, thus not A”. Even if 11 to 12 year olds (grade 5.5) were split into groups and given the three explanations, I suspect that the third would still be the most likely to help the students understand the concept.

I actually find the second explanation the most readable, owing to domain knowledge. In just nine words, it gives me a precise and accurate description of what the logical rule is. It might be appropriate for a general audience, though it probably would not be for an audience familiar with neither logic nor philosophy.

The first is good in that it explains why not A is reasonable. I could see myself appreciating it if the subject matter was somewhat more complex (e.g. results in modal and temporal logic).

The third features a concrete example, which is useful when learning. It also explicitly sets out some assumptions common in logic, such as “suppose X” meaning “suppose X is true“. However, I find it to be less readable, as I need to read a longer text to extract the main conclusion of the paragraph.

Earlier, I highlighted that readability and maintainability of software and source code are not one and the same. Let's consider what maintainability might mean for natural language. For speech (especially extemporaneous speech), I'm not sure the notion applies at all. For writing, it could perhaps refer to the difficulty of proofreading the work, or of returning to it later to revise it.

In the software case, there are times when my hand has been forced and I’ve thus opted for a complicated, less readable implementation. Outside of word games like Taboo and exams or application processes with strict word limits, I don’t think I’ve had to for English. Thus, in general I would consider my target audience and aim to present it in what I expect would be the most efficient way possible, while still retaining the required readability.

Portmanteau (Spliterator)

For the most part, classes in the standard Java libraries consist of fairly normal words (e.g. a ScheduledExecutorService is an ExecutorService – a service that manages the execution of tasks – which allows for scheduling). There are exceptions (and I don’t mean the Java ones), of course, such as the OMGVMCID.

Java 8, specifically JSR-335, brought about a bunch of new classes, including analogues of some of my Guava favourites (the function package, with Supplier and friends, as well as Optional) and Streams. Most of these, again, have fairly normal-sounding names; there are a few that might sound a bit hacked together, like BiConsumer, but I think the meaning is generally intuitive (a consumer of two arguments). There is one that was initially fairly puzzling, though – "Spliterator" (rendered as one word). The obvious reading of that is as a portmanteau of "split" and "iterator", but I wasn't sure at first how useful it would be. Of course I have written code to partition collections or other iterables, but this was usually done on an ad-hoc basis.

The API seems designed around easily scheduling parallel computation. The key method here is trySplit(). To quote the Java API docs:

If this spliterator can be partitioned, returns a Spliterator covering elements, that will, upon return from this method, not be covered by this Spliterator.

I’ll go over this in a bit more detail.

  • If the spliterator can currently be split, then trySplit() returns a new spliterator, and this “takes over” some of the elements in our existing spliterator. Our existing spliterator, if traversed directly after this method returns (e.g. with tryAdvance()), will not have these elements.
  • Otherwise, trySplit() returns null. Our existing spliterator retains its elements.

For example, this is one way of doing a parallel file deletion, which starts from one spliterator and splits it as necessary.
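
A sketch of that shape (the class name, batch threshold and pool size below are arbitrary choices of mine): split the work up front with trySplit(), then hand each batch to an ExecutorService.

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;
    import java.util.Spliterator;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Illustrative sketch, not the snippet from the original post.
    final class ParallelDeletion {
        private static final long MIN_BATCH = 128;

        static void deleteAll(List<Path> files) throws InterruptedException {
            // Split the work into batches by repeatedly calling trySplit() until the pieces
            // are small enough (or refuse to split further).
            Deque<Spliterator<Path>> pending = new ArrayDeque<>();
            pending.push(files.spliterator());
            List<Spliterator<Path>> batches = new ArrayList<>();
            while (!pending.isEmpty()) {
                Spliterator<Path> current = pending.pop();
                Spliterator<Path> half = current.estimateSize() > MIN_BATCH ? current.trySplit() : null;
                if (half == null) {
                    batches.add(current);
                } else {
                    pending.push(current);
                    pending.push(half);
                }
            }

            // One deletion job per batch.
            ExecutorService executor =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
            for (Spliterator<Path> batch : batches) {
                executor.submit(() -> batch.forEachRemaining(ParallelDeletion::delete));
            }
            executor.shutdown();
            executor.awaitTermination(1, TimeUnit.HOURS);
        }

        private static void delete(Path path) {
            try {
                Files.deleteIfExists(path);
            } catch (IOException e) {
                throw new UncheckedIOException(e); // surfaces as a failed task
            }
        }
    }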

Viewing this in relation to parallel programming, there are obvious similarities to fork-join based workflows, especially in cases where the work involves a map or filter, or doing some reduction that’s commutative and associative. A traditional way of writing the above wouldn’t be too different; you might write some logic that splits the files into batches (which now lives in the spliterator), and then similarly dispatch deletion jobs to an ExecutorService.

From the above, it seems that one notable difference is where the responsibility for deciding how the stream should be split lives. The Spliterator has some responsibility for deciding whether it should split itself. To some extent this makes sense; in cases where we want batches to have rather specific sizes (e.g. if performance is highly sensitive to elements being processed in specific batch sizes), we can guard trySplit() with suitable calls to the estimateSize() or getExactSizeIfKnown() methods. This can also be useful for avoiding fiddly edge cases (where our size bounds pass, but we must work with less than a full batch of elements, for instance).

Spliterators are also useful because they carry characteristics that describe, among other things, how the elements are to be traversed. For example (though this doesn't quite apply to the file deletion example), we may be able to use a certain optimisation if we can assume that we will see the data in sorted order; spliterators do carry this information (whereas if we used a standard Iterator we might not be able to do this easily without some additional bookkeeping). The same might apply for cases where we can do better if we know elements are unique. The collections API also generates spliterators with the correct characteristics (e.g. a NavigableSet such as a TreeSet gives you a spliterator that already reports the SORTED and DISTINCT characteristics).

I’m still not sure why this name was decided upon, as opposed to, say, SplittableIterator. Perhaps this was done in the interest of terseness and because the decomposition is fairly obvious, though the name still really doesn’t look like a proper word to me. It does seem that spliterators have use cases beyond standard iterators as well, even when parallel computation is out of the picture – the characteristics and ability to dynamically switch on them seems potentially useful. That said, I still have not used a Spliterator directly in my professional work, and have no intention of doing so unless it fits the task I’m doing!

Iterative Dichotomiser (January Haskell Tests 2017)

I probably can’t really explain this much better than the source:

The January tests are on-line Haskell programming tests sat under examination conditions by first-year undergraduate students at  Imperial College at the end of a six-week introductory programming course.

Dr. Tony Field has been publishing them for quite a few years now (the linked page goes back to 2009). I still remember my first-year Haskell courses to some extent; I was somewhat impressed by the rationale for choosing Haskell, even though my understanding at the time was rather clouded. I do remember a specific instance where Tony polled the class on past programming experience, noting hands for C++ and Java (I raised my hand for both), and then tossing in Haskell (a few people; probably because Imperial did say to prepare to study Haskell beforehand!). Besides this, I think having to worry less about persistent state (or race conditions, though I don't think we covered concurrency at Imperial until second year) and being closer to the notion of mathematical functions, which students should already have been exposed to, would also have helped.

This year’s test (2017) covered decision trees, culminating in a question inviting candidates to implement the information gain version of ID3 when building a decision tree from a dataset. It wasn’t too difficult as a whole, as Tony acknowledged on his page; despite probably last having written Haskell about a year ago when I attempted the 2016 test, I finished comfortably in 2 hours and 12 minutes (the time limit is 3 hours). I think this test as a whole required some fairly careful work, but didn’t demand anything in terms of special insight even at the very end (as some past tests have done). The elegance of my answers would probably leave a fair bit to be desired, though; I found I was building recursive traversals of the given datatypes very frequently.

That said, I found the first part somewhat more difficult than in the past. Typically Part I was extremely straightforward (rather awkwardly, there used to be a question asking students to implement lookUp almost every year); not so much this time. In particular, there was a rather fiddly function to write that involved navigating some data structures and building a frequency table; the spec featured a lot of type definitions that reminded me a bit of some experiments with Object Calisthenics (in particular, the “wrap primitives and strings in classes” rule). That said, completing Part I alone would already have let you pass (with a 47; the pass mark is 40). I think the frequency table was harder than anything in Part II, actually, which had a few, fairly straightforward tree functions to write.

Moving on, part III effectively introduced students to the Strategy pattern (in terms of an attribute selector for deciding which attribute to split the dataset on). Apparently, it was possible to solve partitionData with a suitable one-liner, though I unfortunately didn’t come up with anything along those lines, and went with a rather “direct” approach of dividing the rows by the element and removing the relevant attributes. Part IV introduced the concepts of entropy and information gain, and thus invited students to implement ID3; given the title of the paper I wasn’t too surprised to see this at the end.

I found it fairly interesting to revisit Haskell, and it was nice to see that I was still able to work with the language after not touching it for a year or so. While it might be fairly unlikely that I would work with functional languages in professional terms, concepts and reasoning that are more apparent in functional languages do certainly apply even when I work in Java or C++, whether in the obvious sense (streams/map/filter etc.) or less so (inductive/algebraic datatypes).

On the Practical Complexity of Efficient Testing

This is Part 2, following the previous post on the theoretical complexity of efficient testing. Recall that we modelled tests as being used to verify that code satisfied some requirements, and then modelled the problem of efficient verification as finding the smallest set of tests that covers all of the requirements.

Although the decision problem is NP-complete, we can still put forth a decent attempt at solving it. We can rewrite the set-covering problem as an integer linear programming problem (define an indicator variable for each test, indicating whether it is included in the test set or not, and define a constraint for each requirement, stating that at least one of the tests that satisfies it must be selected; we then minimise the sum of all of the indicator variables). There are many solvers, such as GLPK or CBC, that can handle even fairly large instances of these problems. Similarly, we can also reformulate set cover as boolean satisfiability; there are many solvers that can handle large formulae with many variables as well.

That said, although we can minimise the number of tests being used, it's not entirely certain that we should, for several reasons. For example, suppose we wanted to test a function that returns all characters occurring exactly two times in a string. I'd be fairly confident in saying that you can't really do better than linear time (you can't avoid looking at parts of the string in general, though there are some cases where you can short-circuit, e.g. if you have examined a portion of the string in which every allowable character has already appeared at least three times). One possible implementation is shown below.
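
A sketch of such an implementation (method body only, with the enclosing class and java.util imports omitted; the line numbers in the comments are referenced in the coverage discussion below):

    public static Set<Character> findPairs(String input) {                       // 1
        if (input == null) throw new IllegalArgumentException("input is null");  // 2 - precondition
        Map<Character, Integer> counts = new HashMap<>();                        // 3
        for (int index = 0; index < input.length(); index++) {                   // 4
            char current = input.charAt(index);                                  // 5
            counts.merge(current, 1, Integer::sum);                              // 6
        }                                                                        // 7
        return counts.entrySet()                                                 // 8
                .stream()                                                        // 9
                .filter(entry -> entry.getValue() == 2)                          // 10 - keep exact pairs
                .map(Map.Entry::getKey)                                          // 11
                .collect(Collectors.toSet());                                    // 12
    }                                                                            // 13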

The first problem would obviously be whether the number of tests is even a good metric. I’ve written a few tests for the method above:
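
Something like the following, say (JUnit 4 style; the inputs are my guesses, chosen to line up with the coverage discussion below, and findPairs is assumed to be statically imported):

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;

    import java.util.Collections;
    import org.junit.Test;

    // Illustrative sketch of the tests being discussed.
    public class FindPairsTest {
        @Test(expected = IllegalArgumentException.class)
        public void rejectsNullInput() {
            findPairs(null);
        }

        @Test
        public void findsPairWhenCharacterAppearsTwice() {
            assertEquals(Collections.singleton('a'), findPairs("aa"));
        }

        @Test
        public void ignoresCharacterAppearingThreeTimes() {
            assertEquals(Collections.emptySet(), findPairs("aaa"));
        }

        // The single catch-all alternative discussed below.
        @Test
        public void canFindPairs() {
            try {
                findPairs(null);
                fail("expected an IllegalArgumentException");
            } catch (IllegalArgumentException expected) {
                // fine
            }
            assertEquals(Collections.singleton('a'), findPairs("aa"));
            assertEquals(Collections.emptySet(), findPairs("aaa"));
        }
    }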

I’d certainly prefer having the three tests which each test something specific, as opposed to the single canFindPairs() test (in fact, if I came across the latter in a code review I would push back on it). The main problem here is that one way of reducing the number of tests is simply to merge existing tests or run large integration tests only, which is generally not great. Incidentally, this could lead to an extended heuristic, where we weight test methods by number of assertions.

But let’s suppose tests have disjoint assertions, and we don’t attempt to game the system in the way described above. The next issue is then how we define requirements. One possibility is to give methods well-defined postconditions and check that tests verify these, but this is unlikely to scale to large systems.

A common method, then, is to use code coverage as a proxy (this can be measured automatically via tracing of test calls). Line coverage, including adjusting for conditionals, could be a good starting point. However, this isn't really a good metric either – both the three tests introduced above and the single canFindPairs() test achieve 100 percent coverage, by most definitions:

  • We have an input that violates the precondition, and two that pass it (line 2).
  • We do exercise the body of the for loop with the “aa” and “aaa” tests (lines 5-6).
  • We have both true and false outputs in the filter construction (line 10). This might not even be considered to be a requirement for the line to be covered.

Yet, if someone submitted the above tests only for findPairs() and I did a code review, I would ask them to add more testing. I’d probably expect at least the following:
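
For instance (again my guesses rather than an exhaustive list; these would slot into the hypothetical test class above, with java.util.Arrays and java.util.HashSet imported):

    @Test
    public void findsMultipleDistinctPairs() {
        assertEquals(new HashSet<>(Arrays.asList('a', 'b')), findPairs("abab"));
    }

    @Test
    public void ignoresCharactersAppearingMoreThanTwice() {
        assertEquals(Collections.singleton('b'), findPairs("aaaabb"));
    }

    @Test
    public void returnsNothingForTheEmptyString() {
        assertEquals(Collections.emptySet(), findPairs(""));
    }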

Furthermore, the above method is not actually correct for characters outside the Basic Multilingual Plane (which are represented as surrogate pairs in UTF-16), so if (but only if) such input would be likely given the context of the application involved, I would ask for a test featuring that as well.

Thus, by optimising for code coverage and eliminating tests based on that, we run the risk of weakening our tests to the point where they can't catch legitimate faults. For example, a test using characters outside the Basic Multilingual Plane, as described above, would be unlikely to improve coverage at all, and thus might be pruned (allowing our implementation to pass even though it wouldn't work). Approaches for evaluating whether pruning is worthwhile include having developers plant faults in the code and seeing if the pruned test suite can still catch them, or automatically mutating implementations (e.g. interchanging operations, changing the order of lines of code, etc.) and seeing if the test suite behaves differently before and after pruning.

Coverage is still probably one of the least bad metrics in my opinion – I can't really think of a good way of improving on it cheaply and scalably. Furthermore, studies have shown that in spite of line coverage being a fairly blunt instrument, it is nonetheless able, in several practical cases, to achieve decent reductions in test suite size without harming fault detection too much; that said, the most aggressive solutions (such as using integer linear programming) seem to overfit to some extent, performing disproportionately worse at detecting the faults that were introduced.

Another Look at Dynamic Programming

Whilst on the tube today, I overheard a mother teaching her child how to count, using a method likely to be extremely familiar to many – fingers. The child counted correctly from one to ten, and then the mother added her hands too and asked the child to count how many fingers there were now.

“One, two, three -“

And so on, till twenty. The mother then attempted to explain that it would have been faster if the child continued from ten, rather than starting again. Although it wasn’t really an example of the concept, the words dynamic programming immediately shot to the front of my mind. I initially found this to be a rather confusing concept to grasp (let’s say that up to high school programming contests, if a problem wasn’t solvable by exhaustive search or greedy algorithms I’d likely have struggled), so I figured a post on it might be worthwhile.

(This isn’t really an example of DP; I’d say it’s closer to divide and conquer plus the use of a cache. We’ve cached the answer that the child has ten fingers, and identified the problem as being taking the sum of the child’s and parent’s fingers. Note that because of the possibility of amputation or polydactyly, the subproblems are not the same – and, specifically, saying 2 * 10 = 20 isn’t generally correct.)

Essentially, the key idea behind dynamic programming (DP) is that we save time by not re-doing work that we’ve already done, by remembering the results to intermediate steps. Of course, this tends to mean that there’s a space overhead. This is generally useful in cases where a problem is too large to solve, yet it can be decomposed into smaller pieces, and importantly we can combine optimal solutions to these smaller pieces, to get a solution that is optimal for the original problem. (More formally, this is known as optimal substructure.)

Furthermore, we want to get some significant benefit out of actually remembering the answers (in practice, we want to use our previous work multiple times; this manifests in the form of overlapping subproblems). This is what would distinguish an approach as being a DP-based one, as opposed to divide and conquer.

Of course, the fingers example is trivial. There are many other natural examples (the ones that come to mind first for me include knapsack problems and route-planning), though I’m not sure I directly apply DP that much in a natural context (although quite a few days have tasklists that could be done solving an ordered constrained TSP, the last time I used the Held-Karp algorithm was probably for my third year project). It certainly does see many applications that are relevant to daily life (error correction in search queries / autocorrect via Levenshtein distance; not sure how they are actually implemented but routing applications like Citymapper and Google Maps are likely to involve such algorithms as well).

In terms of implementation, the cache-based “top-down” solution was what I learned first, and to me at least was intuitively easier to understand. When you encounter a subproblem, you check a lookup table to see if you’ve done the problem before; if you have, you just take the answer from that. If you haven’t, solve the problem the hard way (this may involve more subproblems – when solving these, it’s important to look at the table again), and then (important!) store the answer you obtained back in the table.

The alternative “bottom-up” method involves generating solutions to smaller subproblems, and using these to build up the solution to a bigger problem. I’d probably first actually used a method along these lines when introduced to the Fibonacci sequence (probably in year 4 or so) – I remember being asked to compute F_{13} and did something like “1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, uh… 144, 233”. (This is linear time. It’s possible to do better via matrix exponentiation, or Binet’s formula – bonus points if you pair the exponentiation with a fancy multiplication algorithm like Karatsuba or even Schonhage-Strassen.)
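
For concreteness, here are both styles on the Fibonacci example, as a Java sketch (long overflow for large n is ignored):

    import java.util.HashMap;
    import java.util.Map;

    final class Fibonacci {
        private final Map<Integer, Long> cache = new HashMap<>();

        // Top-down: check the cache first, otherwise solve recursively and remember the answer.
        long topDown(int n) {
            if (n <= 2) {
                return 1;
            }
            Long cached = cache.get(n);
            if (cached != null) {
                return cached;
            }
            long result = topDown(n - 1) + topDown(n - 2);
            cache.put(n, result);
            return result;
        }

        // Bottom-up: build from the base cases, keeping only the last two results.
        static long bottomUp(int n) {
            long previous = 1;
            long current = 1;
            for (int i = 3; i <= n; i++) {
                long next = previous + current;
                previous = current;
                current = next;
            }
            return current;
        }
    }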

From a computing point of view there can be both benefits and disadvantages to this versus the “top-down” method.

  • Ease of understanding and/or code readability are likely to depend on the problem; for Fibonacci I would prefer bottom-up, but I usually find the top-down case to be more approachable (it’s more intuitive to me at least to think “here’s how I decompose this problem” as opposed to “here’s how I build a solution from smaller solutions”).
  • The top-down approach might be able to solve some problems without necessarily computing all smaller subproblems that a bottom-up solution counting up from 0 or 1 might deal with. You can, of course, implement this in a bottom-up solution… provided you know how to compute the required subproblems in a way that isn’t itself too costly. With a top-down approach you get this “avoidance” for free.
  • As an extension of the previous point: for bottom-up you’ll need to figure out a safe ordering to go through the subproblems (you can’t have a solution depending on something that hasn’t been computed yet). This is easy in most cases (*cough* Fibonacci), but can be extremely difficult in others (chess transposition tables come to mind; problems with online input, many base cases and a massive domain).
  • Recursive implementations (which tend to be top-down, though could plausibly be in either direction; it’s possible to maintain your own call stack on the heap, or pass some kind of lookup table around) incur the overhead of function calls, and can cause stack overflows for large problems.
  • Handling limited memory (there are many 2D array problems for which only the last row of results needs to be kept; alternatively with Fibonacci we only need the last two results) tends to be more naturally expressed with the bottom up method (though of course, you can clean the top-down cache). This is probably because you’ll have defined an order for solving the subproblems, which may not be as immediately clear with the top-down method.

Note that although this is a powerful tool, there are quite a number of cases where you don't actually need to consider all of the ways of decomposing a problem into subproblems. A well-known example would be the activity selection problem: given a set of activities with start and end times (where overlapping activities are mutually exclusive), find the largest set of activities I can participate in. I can solve this optimally by sorting events by their ending time, and greedily picking events to fill my schedule where feasible. The key differentiator here is what's known as the greedy choice property: making an optimal choice at each step gives us the overall optimal solution.

In practice anyway it’s highly unlikely that I’d weight my activities equally, so we then get to the weighted activity selection problem, and the greedy method no longer works (but we can still use dynamic programming – as before, sort the activities by their ending time E, and for each activity, pick the better of not attending it, or attending it and behaving optimally before the start time of said activity).
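
A sketch of that weighted recurrence (my own naming; activities are given as parallel arrays, and the latest compatible earlier activity is found by binary search over the sorted end times):

    import java.util.Arrays;
    import java.util.Comparator;

    final class WeightedActivitySelection {
        // Illustrative sketch. best[i] = maximum total weight using only the first i activities
        // (sorted by end time): either skip activity i, or attend it and behave optimally
        // before its start time.
        static long maximumWeight(int[] starts, int[] ends, long[] weights) {
            int n = starts.length;
            Integer[] order = new Integer[n];
            for (int i = 0; i < n; i++) {
                order[i] = i;
            }
            Arrays.sort(order, Comparator.comparingInt(i -> ends[i]));

            int[] sortedStarts = new int[n];
            int[] sortedEnds = new int[n];
            long[] sortedWeights = new long[n];
            for (int i = 0; i < n; i++) {
                sortedStarts[i] = starts[order[i]];
                sortedEnds[i] = ends[order[i]];
                sortedWeights[i] = weights[order[i]];
            }

            long[] best = new long[n + 1];
            for (int i = 1; i <= n; i++) {
                // Last activity (in sorted order) that ends no later than this one starts.
                int previous = latestCompatible(sortedEnds, sortedStarts[i - 1]);
                long attend = sortedWeights[i - 1] + best[previous + 1];
                best[i] = Math.max(best[i - 1], attend);
            }
            return best[n];
        }

        // Largest index j with sortedEnds[j] <= startTime, or -1 if there is none.
        private static int latestCompatible(int[] sortedEnds, int startTime) {
            int low = 0;
            int high = sortedEnds.length - 1;
            int result = -1;
            while (low <= high) {
                int mid = (low + high) / 2;
                if (sortedEnds[mid] <= startTime) {
                    result = mid;
                    low = mid + 1;
                } else {
                    high = mid - 1;
                }
            }
            return result;
        }
    }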

Sets of Sets

If I'm asked to think of a real-life example of a set of sets, the first example I'd come up with is job lots of items being sold. Interestingly, it actually took a while to get there – the ideas I came up with before that, such as companies I have investments in (note: mutual funds and ETFs own sets of companies) or messages to my friends, were really unions of multiple sets.

How would we model a set of sets in Java? Well, that’s not too hard, since a Set is a type after all:
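
Something along these lines, say (Item is an assumed mutable class with value-based equals() and hashCode(); the numbering in the comments matches the line references in the discussion below):

    Set<Set<Item>> catalogue = new HashSet<>();                                      // 1
    Set<Item> thingsBeingSold = new HashSet<>();                                     // 2
    thingsBeingSold.add(new Item("lamp"));                                           // 3
    thingsBeingSold.add(new Item("desk"));                                           // 4
    catalogue.add(thingsBeingSold);                                                  // 5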

That said, we’ve got to be careful. If we continue the above with this:
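
Perhaps with something like this (again, hypothetical code chosen to be consistent with the line numbers discussed below):

    thingsBeingSold.add(new Item("chair"));                                          // 6
    System.out.println(catalogue.contains(thingsBeingSold));                         // 7
    System.out.println(catalogue.stream().anyMatch(s -> s.equals(thingsBeingSold))); // 8
    System.out.println(catalogue.stream().anyMatch(s -> s == thingsBeingSold));      // 9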

What should the output of lines 7, 8 and 9 be? Are they equivalent, even? Well, there are some possible arguments in each direction:

  • They should return true, because the reference to the set thingsBeingSold is still there.
  • They should return false, because the value of thingsBeingSold isn’t the same as the value inserted.

I would argue that the first point is stronger, and would have been my answer when I first learnt about Java. However, the actual answer is that lines 8 and 9 should return true while line 7 generally returns false, though there are a few special cases:

  • It will return true if thingsBeingSold.hashCode() before line 6 is the same as thingsBeingSold.hashCode() after line 6 (i.e. there is a hash collision), or
  • if the Item created in line 6 equals either of the Items created in lines 3 and 4 (unlikely).

The operation of line 8 is relatively simple: we check each element in the set to see if it is equal, and assuming our set’s equals() method does what it’s supposed to do we will always find the match. We can potentially improve on this with parallelStream(). Line 9 does a reference-equality check on the elements instead and would probably be faster as we don’t have to actually inspect the set contents. In this case, it’s also going to be true. However, these are linear time operations, at least. The point of using hash-based structures here, among other things, would be to knock the time complexity of lookups down to amortised constant.

In a sense, we find the object only if we fortuitously (!) happen to be looking for the same hash code. Java HashSets are effectively a key-set-based view of a HashMap, and when we look up objects we actually locate them by their hashes first, only bothering to check the values if the hashes match. This allows for some fairly interesting optimisations, actually; I previously thought that HashMaps use buckets, storing an array of linked lists of mappings plus some bookkeeping information and resizing appropriately when the ratio of elements to buckets exceeds a load factor. This was correct up to Java 7, but in Java 8 there's a fun new optimisation that builds red-black trees sorted by hash code (while the objects themselves might not be Comparable, their hashes are ints, which certainly are) if individual buckets get too large, in an attempt to cut the worst-case complexity from linear to logarithmic. Still, when we inspect nodes, we look at their hashes first.

Note that this isn't just a problem with HashSets; you can get into similar debacles with TreeSets, because the underlying red-black trees aren't going to reorder themselves either. In general, the Javadoc for Set does warn you that mutable objects shouldn't be changed in ways that would affect their equals() method after they've been put into a set. HashSet and TreeSet are allowed to behave the way they do because of the invariants expected between equals() and hashCode(), and comparison-based sets are supposed to use comparators that are consistent with equals(), i.e. a.equals(b) iff a.compareTo(b) == 0.

One way to get round this is of course to ensure that the types used in these sets are immutable, or at least the parts that are used in equals() and related methods are immutable. If that’s infeasible, a good solution could be to delete the set that is to be changed from the set, and add it back after changes; add is idempotent and so the set will (correctly) stay as a set. Wiring this up seems rather messy, though; it looks like we need to listen for state changes (so some kind of publish/subscribe mechanism), where the objects being stored publish both their old and new state so that we can modify the set accordingly.