Browse Month

February 2017

Diagnosing a Faulty Disk

Recently, I had a lot of difficulty booting into Windows on my home laptop. More specifically, I could boot, but the OS would very quickly become unresponsive. However, I had no such issues booting into Ubuntu (indeed, I’m writing this post from it). I tried booting into Safe Mode and disabling auxiliary services and devices, but to no avail.

Nonetheless, I could use Ubuntu to perform some analysis. I noticed rather unpleasant-looking logs in the Linux message buffers (a.k.a. dmesg).

[   20.588310] ata6.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[   20.588311] ata6.00: irq_stat 0x40000001
[   20.588312] ata6.00: failed command: READ DMA
[   20.588315] ata6.00: cmd c8/00:08:40:28:1c/00:00:00:00:00/e0 tag 22 dma 4096 in
[   20.588315]          res 51/40:08:40:28:1c/00:00:00:00:00/e0 Emask 0x9 (media error)
[   20.588316] ata6.00: status: { DRDY ERR }
[   20.588316] ata6.00: error: { UNC }
[   20.619619] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'
[   20.647929] ata6.00: configured for UDMA/133
[   20.647935] sd 5:0:0:0: [sdb]  
[   20.647936] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[   20.647937] sd 5:0:0:0: [sdb]  
[   20.647938] Sense Key : Medium Error [current] [descriptor]
[   20.647939] Descriptor sense data with sense descriptors (in hex):
[   20.647942]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 
[   20.647943]         00 1c 28 40 
[   20.647944] sd 5:0:0:0: [sdb]  
[   20.647945] Add. Sense: Unrecovered read error - auto reallocate failed
[   20.647946] sd 5:0:0:0: [sdb] CDB: 
[   20.647948] Read(10): 28 00 00 1c 28 40 00 00 08 00
[   20.647949] end_request: I/O error, dev sdb, sector 1845312
[   20.647955] ata6: EH complete

Hmm, UNC stands for uncorrectable – not great. These errors were easily reproduced. Upon finding that I could nonetheless view most of the contents of the filesystem when mounting the SSD in Linux, I immediately backed up the important data to an external hard drive, though strictly speaking this was probably unnecessary (the really important data is already backed up in the cloud).

My first instinct was then to check the drive using its SMART (Self-Monitoring, Analysis and Reporting Technology) tooling. An extended offline self-test seemed to yield suspiciously positive results.

Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      4874         -

I decided to probe further. The tool’s output includes a table of various attributes which can be metrics of a drive’s health:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   000    Pre-fail  Always       -       0
  5 Reallocated_Sector_Ct   0x0003   100   100   000    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0002   100   100   000    Old_age   Always       -       2664
 12 Power_Cycle_Count       0x0003   100   100   000    Pre-fail  Always       -       2577
170 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       2
171 Unknown_Attribute       0x0003   100   100   000    Pre-fail  Always       -       0
172 Unknown_Attribute       0x0003   100   100   000    Pre-fail  Always       -       0
173 Unknown_Attribute       0x0003   100   100   000    Pre-fail  Always       -       102815
174 Unknown_Attribute       0x0003   100   100   000    Pre-fail  Always       -       59
175 Program_Fail_Count_Chip 0x0003   000   000   000    Pre-fail  Always       -       0
176 Erase_Fail_Count_Chip   0x0003   100   100   000    Pre-fail  Always       -       0
177 Wear_Leveling_Count     0x0003   100   100   000    Pre-fail  Always       -       102815
178 Used_Rsvd_Blk_Cnt_Chip  0x0003   100   100   000    Pre-fail  Always       -       1
179 Used_Rsvd_Blk_Cnt_Tot   0x0003   000   000   000    Pre-fail  Always       -       2
180 Unused_Rsvd_Blk_Cnt_Tot 0x0033   100   100   000    Pre-fail  Always       -       784
181 Program_Fail_Cnt_Total  0x0003   100   100   000    Pre-fail  Always       -       0
182 Erase_Fail_Count_Total  0x0003   100   100   000    Pre-fail  Always       -       0
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0033   001   001   000    Pre-fail  Always       -       26018192
187 Reported_Uncorrect      0x0003   100   100   000    Pre-fail  Always       -       2
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       19
192 Power-Off_Retract_Count 0x0003   100   100   000    Pre-fail  Always       -       59
195 Hardware_ECC_Recovered  0x0003   100   100   000    Pre-fail  Always       -       0
196 Reallocated_Event_Count 0x0003   100   100   000    Pre-fail  Always       -       2
198 Offline_Uncorrectable   0x0003   100   100   000    Pre-fail  Always       -       1
199 UDMA_CRC_Error_Count    0x0003   100   100   000    Pre-fail  Always       -       0
232 Available_Reservd_Space 0x0003   100   100   010    Pre-fail  Always       -       0
233 Media_Wearout_Indicator 0x0003   100   100   000    Pre-fail  Always       -       6425
241 Total_LBAs_Written      0x0003   100   100   000    Pre-fail  Always       -       214612
242 Total_LBAs_Read         0x0003   100   100   000    Pre-fail  Always       -       214201

This is a rather large and nasty table, and it didn’t seem that Plextor implemented any meaningful values for the thresholds. Nonetheless, raw data counts were available. The first alarm bell was metric number 184, End-to-End Error (which sounds terrible); that had apparently happened over 26 million times. Some sources suggest this is critical while others do not; I don’t have historical data regarding the progression of this figure, so it would be difficult to draw conclusions as to how this happened – or, if the sources saying this is a critical metric are correct, how the disk limped to this point in the first place.

Nonetheless, there were other negative indicators as well: attributes 187, 188 and 198, which have been associated with disk failures, were all notably greater than zero. Several other ATA errors appeared in the smartctl output, too.

For comparison, I ran the diagnostic tools on my HDD:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   062    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   040    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0007   118   118   033    Pre-fail  Always       -       2
  4 Start_Stop_Count        0x0012   098   098   000    Old_age   Always       -       4642
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   040    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0012   084   084   000    Old_age   Always       -       7425
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       2574
191 G-Sense_Error_Rate      0x000a   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       32
193 Load_Cycle_Count        0x0012   072   072   000    Old_age   Always       -       283355
194 Temperature_Celsius     0x0002   146   146   000    Old_age   Always       -       41 (Min/Max 10/58)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       1
223 Load_Retry_Count        0x000a   100   100   000    Old_age   Always       -       0

Much better. I looked up 199 just in case, but it seemed fine (just one occurrence; and in any case the super important data has been backed up). Notice that 197 (pending sectors), 198 (offline uncorrectable sectors) and 5 (reallocated sectors) are all zero.

The current situation is fine (the slower startup times are a little unpleasant, though they would probably have been worse with Windows). I might investigate replacing the disk and/or getting a new machine (this laptop is approaching three years old, so an upgrade would be nice) when my budget allows for it. That said, my usage patterns don’t seem to suggest a higher-end machine is necessary at all, so I might stick with it (Sims 4 isn’t that demanding; the most demanding thing I’ve played would probably be Fallout 4, and while I could use a GPU upgrade, that’s clearly a want, not a need). I haven’t quite decided yet.

On the Practical Complexity of Efficient Testing

This is a part 2 to the previous post on the theoretical complexity of efficient testing. Recall that we modelled tests as being used to verify that code satisfied some requirements, and then modelled the problem of efficient verification as finding the smallest set of tests that covered all of the requirements.

Although the decision problem is NP-complete, we can still put forth a decent attempt at solving it. We can rewrite the set-covering problem as an integer linear programming problem: define an indicator variable for each test recording whether it is included in the test set, add a constraint for each requirement saying that at least one of the tests covering it must be selected, and minimise the sum of all of the indicator variables. There are many solvers, such as GLPK or CBC, that can handle even fairly large instances of these problems. Similarly, we can reformulate set cover as Boolean satisfiability; there are solvers that can handle large formulae with many variables there, too.
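
Concretely (the notation here is mine): writing x_t for the indicator variable of test t, T for the pool of candidate tests, R for the requirements and T_r for the subset of tests covering requirement r, the programme is

\min \sum_{t \in T} x_t \quad \text{subject to} \quad \sum_{t \in T_r} x_t \ge 1 \;\; \text{for every } r \in R, \qquad x_t \in \{0, 1\}.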

That said, although we can minimise the number of tests being used, it’s not entirely certain that we should, for several reasons. For example, suppose we wanted to test a function that returns all characters occurring exactly two times in a string. Below is one possible implementation – and I’d be fairly confident in saying that you can’t really do better than linear time (you can’t avoid examining parts of the string in general, though there are cases where you can short-circuit, e.g. if every allowable character has already appeared at least three times in the portion examined so far).

public static Set<Character> findPairs(String input) {
  Preconditions.checkNotNull(input, "findPairs called on null input");
  Map<Character, Integer> charCount = Maps.newHashMap();
  for (char ch : input.toCharArray()) {
    charCount.putIfAbsent(ch, 0);
    charCount.put(ch, charCount.get(ch) + 1);
  }
  return charCount.entrySet()
                  .stream()
                  .filter(entry -> entry.getValue() == 2)
                  .map(Map.Entry::getKey)
                  .collect(Collectors.toSet());
}

The first problem would obviously be whether the number of tests is even a good metric. I’ve written a few tests for the method above:

@Test
public void returnsCharacterAppearingTwice() {
  assertThat(findPairs("aa")).containsExactly('a');
}

@Test
public void doesNotReturnCharacterAppearingThrice() {
  assertThat(findPairs("aaa").isEmpty()).isTrue();
}

@Test
public void throwsOnNullString() {
  assertThatThrownBy(() -> findPairs(null))
          .isInstanceOf(NullPointerException.class);
}

@Test
public void canFindPairs() {
  assertThat(findPairs("aa")).containsExactly('a');
  assertThat(findPairs("aaa").isEmpty()).isTrue();
  assertThatThrownBy(() -> findPairs(null))
          .isInstanceOf(NullPointerException.class);
}

I’d certainly prefer having the three tests which each test something specific, as opposed to the single canFindPairs() test (in fact, if I came across the latter in a code review I would push back on it). The main problem here is that one way of reducing the number of tests is simply to merge existing tests or run large integration tests only, which is generally not great. Incidentally, this could lead to an extended heuristic, where we weight test methods by number of assertions.

But let’s suppose tests have disjoint assertions, and we don’t attempt to game the system in the way described above. The next issue is then how we define requirements. One possibility is to give methods well-defined postconditions and check that tests verify these, but this is unlikely to scale to large systems.

A common method, then, is to use code coverage as a proxy (this can be measured automatically by tracing test execution). Line coverage, perhaps adjusted for conditionals, could be a good starting point. However, this isn’t really a good metric either – either the three focused tests above or the single canFindPairs() test on its own achieves 100 percent coverage, by most definitions:

  • We have an input that violates the precondition, and two that pass it (line 2).
  • We do exercise the body of the for loop with the “aa” and “aaa” tests (lines 5-6).
  • We have both true and false outputs in the filter construction (line 10). This might not even be considered to be a requirement for the line to be covered.

Yet, if someone submitted only the above tests for findPairs() and I did a code review, I would ask them to add more tests. I’d probably expect at least the following (sketched in code after the list):

  • the empty string ("")
  • a single character ("a")
  • inputs returning multiple characters ("aabbbccdde")
  • non-consecutive pairs ("abab")
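
Those additional cases might look something like the following, in the same assertion style as the tests above (the test names and expected results here are my own reading of what findPairs should do):

@Test
public void returnsNothingForEmptyString() {
  assertThat(findPairs("")).isEmpty();
}

@Test
public void returnsNothingForSingleCharacter() {
  assertThat(findPairs("a")).isEmpty();
}

@Test
public void returnsAllCharactersAppearingExactlyTwice() {
  // 'a', 'c' and 'd' appear twice each; 'b' appears three times and 'e' once
  assertThat(findPairs("aabbbccdde")).containsExactlyInAnyOrder('a', 'c', 'd');
}

@Test
public void countsNonConsecutiveOccurrences() {
  // 'a' and 'b' each appear twice, though never adjacently
  assertThat(findPairs("abab")).containsExactlyInAnyOrder('a', 'b');
}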

Furthermore, the above method is not actually correct for characters outside the Basic Multilingual Plane (which Java strings represent as surrogate pairs in UTF-16), so if (but only if) such input would be likely given the context of the application involved, I would ask for a test covering that as well.

Thus, by optimising for code coverage and eliminating tests based on that, we run the risk of weakening our tests to the point where they can no longer catch legitimate faults. For example, a test using characters beyond the Basic Multilingual Plane as described above would be unlikely to improve coverage at all, and thus might be pruned (allowing our implementation to pass even though it doesn’t handle such input). Approaches for evaluating whether pruning is worthwhile include having developers plant faults in the code and checking whether the pruned test suite still catches them, or automatically mutating implementations (e.g. interchanging operations, changing the order of lines of code and so on) and comparing how the suites behave before and after pruning.

Coverage is still probably one of the least bad metrics, in my opinion – I can’t really think of a way of improving on it cheaply and scalably. Furthermore, studies have shown that despite line coverage being a rather blunt instrument, it can in several practical cases achieve decent reductions in test suite size without harming fault detection too much; that said, the most aggressive approaches (such as using integer linear programming) seem to overfit to some extent, losing disproportionately more of their ability to detect the faults that were introduced.

On the Theoretical Complexity of Efficient Testing

Complexity was one of the courses I took at Imperial that I found to be especially intellectually stimulating. I would say it was probably the most challenging course of the degree for me at least (excluding the individual project; the second half of Intelligent Data and Probabilistic Inference probably comes close, also bearing in mind the bar being set at 90). I would say the course has certainly helped me reason about algorithms and problems in more interesting and divergent ways, and has also been useful in figuring out when to stop trying to devise excessively clever algorithms (by understanding when that is impossible).

The course starts by looking at decision problems (that is, problems with a yes/no answer), and whether they fit into the class called P (decidable in polynomial time). Examples of problems in P include finding the maximum element in an array or list, and finding the shortest path between a pair of nodes in a graph (breadth-first search for unweighted graphs and Dijkstra’s algorithm for weighted ones, both of which run in polynomial time).

We then move on to NP (nondeterministically decidable in polynomial time) – effectively, this means that it is possible to verify a “proof” that the answer is yes in polynomial time. Well-known problems in NP include the Hamiltonian path problem (given a graph, is there a path passing through each node exactly once?) – this one is in NP because a “proof” that the answer is yes can be given in the form of a path. We can check that consecutive nodes on the path are indeed joined by edges, and that the path is a permutation of the nodes. Note that we don’t need to be able to verify negative answers in polynomial time – there is a separate class called co-NP for problems where we can. (NP ∩ co-NP contains P, as well as a few problems not known to be in P, such as the decision version of integer factorisation; whether it is strictly smaller than NP or co-NP is an open question.)

(The course also covered the underlying theory behind P and NP, concerning Turing machines – I won’t go into detail here.)

One of the biggest parts of the course involved the concept of many-one reduction; a problem V reduces to a problem W if we can translate an instance of V to an instance of W, within certain constraints. For example, the problem of determining whether all integers in a set are positive reduces to that of determining whether all integers are negative under a polynomial time constraint; the first problem (positive) holds for some set S if and only if the second holds for T, which is S with all elements multiplied by -1. In a sense, we’re showing that if we can solve W, then we can solve V as well.
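
As a toy version of that reduction in code (the method names are mine, purely for exposition):

import java.util.Set;
import java.util.stream.Collectors;

public class ToyReduction {
  static boolean allNegative(Set<Integer> t) {
    return t.stream().allMatch(x -> x < 0);
  }

  // "Are all elements positive?" reduces to "are all elements negative?":
  // translate the instance by negating every element (clearly polynomial time),
  // then ask the second question.
  static boolean allPositiveViaReduction(Set<Integer> s) {
    Set<Integer> t = s.stream().map(x -> -x).collect(Collectors.toSet());
    return allNegative(t);
  }
}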

We then defined the hardest problems in a given class to be the problems that are complete for that class; that is, every problem in the class reduces to a complete problem (and yes, the complete problems reduce to themselves). For NP, the constraint on the reduction is that it can be carried out in polynomial time. This makes sense because NP would not be closed under reductions allowing more generous bounds (for example, if V reduced to W under some exponential time procedure and W was in NP, that wouldn’t mean V is in NP; however, if V reduces to W under a polynomial time procedure and W is in NP, then so is V). The latter holds because, given an instance of V, we can translate it to an instance of W in polynomial time, and a certificate for that W-instance can be verified in polynomial time – together, that gives a polynomial-time verification procedure for V.

The Hamiltonian path problem (hereafter HP) I introduced earlier is NP-complete, and can be used to show that other problems are NP-complete (by reducing HP to the other problem). This works because from our definition of NP-completeness, any problem in NP can be reduced to HP, and then reduced to our problem. This involves potentially composing two polynomial time mappings – but, the composition of polynomials is polynomial too. Note that it’s not just a sum, because the “output” of the first mapping, which is the input to the second mapping, could be bigger than the initial input (though only polynomially so!).

(For showing HP itself is NP-complete, there is a reduction from SAT – the problem of determining whether a logical formula in conjunctive normal form is satisfiable – to HP, and SAT itself can be proven NP-complete directly from the definition of NP in terms of Turing machines.)

The inspiration for this post came from some preparatory work I was doing for a reading group with James, on algorithms for test-suite minimisation. The authors comment that the problem of finding a minimal representative subset of a test suite is NP-complete. As a starting point, stripping away some of the details, the specific problem they give (involving a mapping from tests to the requirements those tests demonstrate) is easily reformulated as the Minimum Set Cover problem; that is, given a collection C of subsets of a bigger set S, find the smallest sub-collection of C whose union is S.

Strictly speaking, this isn’t a decision problem, but we can put it into decision form: given a collection C of subsets of a bigger set S and a positive integer k, is there a sub-collection of C of size not greater than k whose union is S? Solving the original problem can then be done via a binary search on k, at each step asking this decision question for a different value of k. Let’s call this problem Set Cover (Decision), or SC(D).
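
As a sketch of that binary search (the scDecision oracle here is hypothetical – in practice it would wrap something like an ILP or SAT solver answering SC(D) for the fixed instance):

import java.util.function.IntPredicate;

final class SetCoverSearch {
  // Finds the smallest k for which SC(D) answers "yes", assuming the full
  // collection of subsets does cover S (so the answer is "yes" at k = numSubsets).
  static int minimumCoverSize(int numSubsets, IntPredicate scDecision) {
    int lo = 0;              // zero sets can only cover an empty S
    int hi = numSubsets;     // taking every subset is the most we can do
    while (lo < hi) {
      int mid = (lo + hi) / 2;
      if (scDecision.test(mid)) {
        hi = mid;            // a cover of size at most mid exists; try smaller
      } else {
        lo = mid + 1;        // no cover of size mid; we need more sets
      }
    }
    return lo;
  }
}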

We could proceed by trying to show that HP reduces to SC(D), though I find it easier to do things a bit differently – we can pick any problem that’s NP-complete (so I’ll pick SAT), and we can also exploit the transitivity of reductions mentioned above. I’ll thus go via two intermediate problems, in three steps:

  1. Reduce SAT to 3SAT (the problem of determining whether a formula in conjunctive normal form where each clause has three literals is satisfiable)
  2. Reduce 3SAT to Vertex Cover (VC – given a graph and some positive integer k, is there a set of nodes, size up to k, such that every edge has at least one endpoint in the set?)
  3. Reduce VC to SC(D).

Step 3 is probably the easiest now – given an instance of VC, we map each edge to an element of S, each node to a candidate subset containing all of the edges adjacent to it, and keep the same bound k. That mapping requires us to iterate through the graph perhaps a few times, but that’s certainly polynomial in the input. Comparing the definitions, it’s pretty clear that this should work (that said, if you were actually taking a complexity course you might need a more rigorous proof).
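
A sketch of that mapping (the graph representation and names here are mine, purely for illustration):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Edges are identified by their index in the list; each edge is a {u, v} pair
// over nodes 0..numNodes-1. The bound k of the VC instance carries over unchanged.
final class VertexCoverToSetCover {
  static List<Set<Integer>> candidateSubsets(int numNodes, List<int[]> edges) {
    List<Set<Integer>> subsets = new ArrayList<>();
    for (int node = 0; node < numNodes; node++) {
      subsets.add(new HashSet<>());
    }
    for (int edgeId = 0; edgeId < edges.size(); edgeId++) {
      int[] edge = edges.get(edgeId);
      subsets.get(edge[0]).add(edgeId);  // picking node u covers this edge
      subsets.get(edge[1]).add(edgeId);  // picking node v covers this edge
    }
    return subsets;                      // S is {0, ..., edges.size() - 1}
  }
}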

Step 1 relies on the use of logical equivalences – the idea being that we can always rewrite a clause that doesn’t have exactly three literals into one or more clauses that do. The one- and two-literal cases simply involve repeating a literal that’s already there; if we have more than three, we can introduce auxiliary variables to chain the clauses together:

// SAT
A or B or C or D or E

// 3-SAT equivalent version
(A or B or X) and
(!X or C or Y) and
(!Y or D or E)

The 3-SAT version is satisfiable if and only if the SAT version is. If at least one of the original literals can be made true, then we can choose suitable values of X and Y (plus further auxiliary variables in larger cases) to satisfy all of the other clauses, “rippling out” from the clause containing that true literal. However, if none of them can be made true, we’ll run into a conflict somewhere.
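
A sketch of that chaining for a general clause, using a DIMACS-style integer encoding of literals (+v for a variable, -v for its negation); the encoding, class name and nextAux parameter are my own choices for illustration:

import java.util.ArrayList;
import java.util.List;

final class ClauseSplitter {
  // Rewrites one clause into equisatisfiable three-literal clauses, consuming
  // (clause.size() - 3) fresh auxiliary variables starting at nextAux.
  static List<List<Integer>> toThreeLiteralClauses(List<Integer> clause, int nextAux) {
    List<List<Integer>> result = new ArrayList<>();
    int n = clause.size();
    if (n <= 3) {
      List<Integer> padded = new ArrayList<>(clause);
      while (padded.size() < 3) {
        padded.add(clause.get(0));       // pad short clauses by repeating a literal
      }
      result.add(padded);
      return result;
    }
    result.add(List.of(clause.get(0), clause.get(1), nextAux));          // (A or B or X)
    for (int i = 2; i < n - 2; i++) {
      result.add(List.of(-nextAux, clause.get(i), nextAux + 1));         // (!X or C or Y), ...
      nextAux++;
    }
    result.add(List.of(-nextAux, clause.get(n - 2), clause.get(n - 1))); // (!Y or D or E)
    return result;
  }
}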

Finally, step 2 is somewhat trickier. At a high level, the idea is that we want to force the cover to select a truth value for each variable in a way that satisfies at least one of the literals in each clause. We construct a triangle for each clause (one node per literal), set up so that the cover only needs to include two of the nodes in each triangle; the unselected literal must then be “true” in some sense, but we cannot have conflicting literals in different clauses both being true. To handle this, we add a gadget per variable – an edge between a node for the variable and a node for its negation, of which the cover must include at least one – and connect each triangle node to the variable-gadget node bearing the same literal.

The formula is satisfiable if and only if we can build a vertex cover picking two of the nodes in each clause triangle and one node in each variable gadget, so k = 2C + V, where C is the number of clauses and V the number of variables.
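
A sketch of that construction, reusing the integer literal encoding from the 3-SAT sketch above (the vertex numbering is my own convention):

import java.util.ArrayList;
import java.util.List;

final class ThreeSatToVertexCover {
  // Vertices 0..2*numVars-1 form the variable gadgets (2v-2 for +v, 2v-1 for -v);
  // the remaining vertices form one triangle per clause. The cover bound is
  // k = numVars + 2 * clauses.size().
  static List<int[]> buildEdges(int numVars, List<int[]> clauses) {
    List<int[]> edges = new ArrayList<>();
    for (int v = 1; v <= numVars; v++) {
      edges.add(new int[] {literalVertex(v), literalVertex(-v)});  // variable gadget
    }
    int next = 2 * numVars;
    for (int[] clause : clauses) {
      int a = next++, b = next++, c = next++;
      edges.add(new int[] {a, b});                                 // clause triangle
      edges.add(new int[] {b, c});
      edges.add(new int[] {a, c});
      edges.add(new int[] {a, literalVertex(clause[0])});          // link each triangle node
      edges.add(new int[] {b, literalVertex(clause[1])});          // to the variable-gadget node
      edges.add(new int[] {c, literalVertex(clause[2])});          // carrying the same literal
    }
    return edges;
  }

  private static int literalVertex(int literal) {
    int v = Math.abs(literal);
    return literal > 0 ? 2 * v - 2 : 2 * v - 1;
  }
}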

I think the course and especially exams on it were difficult because these constructions require nontrivial insight, which can be really difficult under pressure. Having a large set of known starting points would certainly have helped; this kind of three-stage derivation would probably not be needed in an exam, though constructions of a similar level of difficulty as step 2 were certainly present.

Another Look at Dynamic Programming

Whilst on the tube today, I overheard a mother teaching her child how to count, using a method likely to be extremely familiar to many – fingers. The child counted correctly from one to ten, and then the mother added her hands too and asked the child to count how many fingers there were now.

“One, two, three -“

And so on, till twenty. The mother then attempted to explain that it would have been faster if the child continued from ten, rather than starting again. Although it wasn’t really an example of the concept, the words dynamic programming immediately shot to the front of my mind. I initially found this to be a rather confusing concept to grasp (let’s say that up to high school programming contests, if a problem wasn’t solvable by exhaustive search or greedy algorithms I’d likely have struggled), so I figured a post on it might be worthwhile.

(This isn’t really an example of DP; I’d say it’s closer to divide and conquer plus the use of a cache. We’ve cached the answer that the child has ten fingers, and identified the problem as being taking the sum of the child’s and parent’s fingers. Note that because of the possibility of amputation or polydactyly, the subproblems are not the same – and, specifically, saying 2 * 10 = 20 isn’t generally correct.)

Essentially, the key idea behind dynamic programming (DP) is that we save time by not re-doing work we’ve already done, by remembering the results of intermediate steps. Of course, this tends to come with a space overhead. It is generally useful in cases where a problem is too large to solve directly, yet can be decomposed into smaller pieces – and, importantly, where we can combine optimal solutions to those smaller pieces into a solution that is optimal for the original problem. (More formally, this is known as optimal substructure.)

Furthermore, we want to get some significant benefit out of actually remembering the answers (in practice, we want to use our previous work multiple times; this manifests in the form of overlapping subproblems). This is what would distinguish an approach as being a DP-based one, as opposed to divide and conquer.

Of course, the fingers example is trivial. There are many other natural examples (the ones that come to mind first for me include knapsack problems and route-planning), though I’m not sure I directly apply DP that much in a day-to-day context (although quite a few of my days have task lists that could be scheduled by solving a constrained TSP, the last time I used the Held-Karp algorithm was probably for my third year project). It certainly does see many applications relevant to daily life (error correction in search queries and autocorrect via Levenshtein distance; I’m not sure how they are actually implemented, but routing applications like Citymapper and Google Maps are likely to involve such algorithms as well).

In terms of implementation, the cache-based “top-down” solution was what I learned first, and to me at least was intuitively easier to understand. When you encounter a subproblem, you check a lookup table to see if you’ve done the problem before; if you have, you just take the answer from that. If you haven’t, solve the problem the hard way (this may involve more subproblems – when solving these, it’s important to look at the table again), and then (important!) store the answer you obtained back in the table.
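
As a minimal sketch of the top-down approach, using Fibonacci (purely because it’s small) with a HashMap as the lookup table:

import java.util.HashMap;
import java.util.Map;

final class TopDownFib {
  private static final Map<Integer, Long> cache = new HashMap<>();

  static long fib(int n) {
    if (n <= 2) {
      return 1;                  // base cases: F(1) = F(2) = 1
    }
    Long known = cache.get(n);
    if (known != null) {
      return known;              // we've solved this subproblem before
    }
    long result = fib(n - 1) + fib(n - 2);
    cache.put(n, result);        // remember the answer for next time
    return result;
  }
}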

The alternative “bottom-up” method involves generating solutions to smaller subproblems, and using these to build up the solution to a bigger problem. I probably first used a method along these lines when I was introduced to the Fibonacci sequence (in year 4 or so) – I remember being asked to compute F_{13} and doing something like “1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, uh… 144, 233”. (This is linear time. It’s possible to do better via matrix exponentiation or Binet’s formula – bonus points if you pair the exponentiation with a fancy multiplication algorithm like Karatsuba or even Schönhage–Strassen.)
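
The corresponding bottom-up sketch, which also keeps only the last two values (anticipating the point about memory in the list below):

final class BottomUpFib {
  static long fib(int n) {
    if (n <= 2) {
      return 1;                  // F(1) = F(2) = 1
    }
    long previous = 1;           // F(k - 1)
    long current = 1;            // F(k)
    for (int k = 2; k < n; k++) {
      long next = previous + current;
      previous = current;        // slide the two-value window forward
      current = next;
    }
    return current;
  }
}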

From a computing point of view there can be both benefits and disadvantages to this versus the “top-down” method.

  • Ease of understanding and/or code readability are likely to depend on the problem; for Fibonacci I would prefer bottom-up, but I usually find the top-down case to be more approachable (it’s more intuitive to me at least to think “here’s how I decompose this problem” as opposed to “here’s how I build a solution from smaller solutions”).
  • The top-down approach might be able to solve some problems without necessarily computing all smaller subproblems that a bottom-up solution counting up from 0 or 1 might deal with. You can, of course, implement this in a bottom-up solution… provided you know how to compute the required subproblems in a way that isn’t itself too costly. With a top-down approach you get this “avoidance” for free.
  • As an extension of the previous point: for bottom-up you’ll need to figure out a safe ordering to go through the subproblems (you can’t have a solution depending on something that hasn’t been computed yet). This is easy in most cases (*cough* Fibonacci), but can be extremely difficult in others (chess transposition tables come to mind; problems with online input, many base cases and a massive domain).
  • Recursive implementations (which tend to be top-down, though could plausibly be in either direction; it’s possible to maintain your own call stack on the heap, or pass some kind of lookup table around) incur the overhead of function calls, and can cause stack overflows for large problems.
  • Handling limited memory (there are many 2D array problems for which only the last row of results needs to be kept; alternatively with Fibonacci we only need the last two results) tends to be more naturally expressed with the bottom up method (though of course, you can clean the top-down cache). This is probably because you’ll have defined an order for solving the subproblems, which may not be as immediately clear with the top-down method.

Note that although this is a powerful tool, there are quite a number of cases where you don’t actually need to consider all of the ways of decomposing a problem into subproblems. A well-known example would be the activity selection problem: given a set of activities with start and end times, where I can take part in at most one activity at a time, find the largest set of activities I can participate in. This can be solved optimally by sorting the activities by their ending time and greedily picking activities to fill my schedule where feasible. The key differentiator here is what’s known as the greedy choice property: making a locally optimal choice at each step yields the overall optimal solution.

In practice it’s highly unlikely that I’d weight all my activities equally, which brings us to the weighted activity selection problem; the greedy method no longer works, but we can still use dynamic programming – as before, sort the activities by their ending time, and for each activity pick the better of skipping it, or attending it and behaving optimally up to its start time.
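
A sketch of that weighted variant (the Activity record is mine; the linear scan for the last compatible activity keeps the sketch short and could be replaced by a binary search, since the activities are sorted by end time):

import java.util.Arrays;
import java.util.Comparator;

final class WeightedActivitySelection {
  record Activity(int start, int end, long weight) {}

  static long maxTotalWeight(Activity[] activities) {
    Activity[] byEnd = activities.clone();
    Arrays.sort(byEnd, Comparator.comparingInt(Activity::end));
    long[] best = new long[byEnd.length + 1];  // best[i]: optimum over the first i activities
    for (int i = 1; i <= byEnd.length; i++) {
      Activity current = byEnd[i - 1];
      int compatible = 0;                      // last activity finishing before this one starts
      for (int j = i - 1; j >= 1; j--) {
        if (byEnd[j - 1].end() <= current.start()) {
          compatible = j;
          break;
        }
      }
      long skip = best[i - 1];                          // don't attend this activity
      long take = current.weight() + best[compatible];  // attend it, then be optimal before its start
      best[i] = Math.max(skip, take);
    }
    return best[byEnd.length];
  }
}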