Odd Sums Sudoku

The puzzle above is from the WPF Sudoku GP, Round 4 2016 #13. To solve it, each row, column and marked 3×3 box must contain the digits 1 to 9 once each, and in each marked row the maximal odd substrings must sum to the given totals, in order.

I’ve always enjoyed a good puzzle, though I’ve never considered myself especially good at them. That said, I’ve done my fair share of sudoku, including several books where puzzles were labelled as very difficult; yet, fairly often identifying triples, or the odd X-wing, XY-wing or unique rectangle, would be sufficient to blow through the puzzles. At least, I thought that clearing some “very difficult” puzzles in 8 or 10 minutes was blowing through them; then I found the World Puzzle Federation’s Sudoku GP, where some of these had benchmark times along the lines of 3-4 minutes!

In addition to your standard “classic” puzzles, which feature the standard rules (i.e. each row, column and marked 3×3 box must contain the digits from 1 to 9 once each), there are a couple of interesting variants that require understanding new rulesets and figuring out how to deal with them. As mentioned, after a while the classic puzzles largely tend to be solved with standard techniques, and the challenge there is dispatching them with extreme speed; the variants, on the other hand, tend to maintain some degree of freshness (at least as far as I’m concerned).

The above puzzle is an example of this; several of the rows are marked with additional clues, and for these rows, the maximal odd substrings in the row must sum to the values indicated by the clues, in order. For example, if a row was 178356924, the clue associated with it would be 8, 8, 9. Note that this clue is also generated by several other valid sequences, such as 532174689 or 174283569.
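Computing the clue for a given row is mechanical; here’s a minimal sketch (function name my own):

```python
def odd_sums_clue(row):
    """Return the sums of the maximal odd substrings of a row, in order.

    A maximal odd substring is a run of consecutive odd digits that cannot
    be extended (it is bounded by even digits or the ends of the row).
    """
    clue, run = [], 0
    for d in row:
        if d % 2 == 1:
            run += d          # extend the current run of odd digits
        elif run:
            clue.append(run)  # an even digit terminates the run
            run = 0
    if run:
        clue.append(run)      # a run may end at the edge of the row
    return clue

# The example row from the text:
print(odd_sums_clue([1, 7, 8, 3, 5, 6, 9, 2, 4]))  # [8, 8, 9]
```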

For the above puzzle, the 9, 5, 3, 8 clue is a good starting point, and indicates that we have 9, 5, 3 and (17 or 71) as substrings of the solution to that row in some order; given the position of 6, we can conclude that the 9 and the 3 must occur before the 6 (though we don’t yet know in which order). We thus have 9 in the bottom left hand corner, an even number (either 2 or 4), and then a 3. Now, reading the first column clue 1, 12, 3, 9: we have 1, (57 or 75), 3 and 9 as substrings. The 9 is at the end, and 3 cannot occur before the 4 as there’s no space to fit it. Thus 3 occurs after the 4, but it has to be separated from the 9 by an even number. More simply, the second column clue 10, 9, 6 indicates that our 5 has to be preceded by an even number and then a 1. There’s only one spot left for a 7 in this lower box, so we can insert it.

We thus reach this position:

Moving on, we can consider the middle column. 3 can only be satisfied by a singular 3, so we know that the 3 in the bottom row cannot be in the middle column. It thus must be in the sixth column, with 1 and 7 taking up the eighth and ninth columns (in some order).

Now consider the rightmost column. It is 3, 10, 12; 3 can only be satisfied by a singular 3, so the odd substrings must be 3, (19 or 91) and (57 or 75). Given the position of the 2 (fourth row), it must be the case that the column has 1 and 9 in the fifth and sixth rows, an even number in the seventh, and 5 and 7 in the eighth and ninth. We thus clearly have 7 in the corner, and 5 above it.

Finally, we can actually decide on the ordering of 1 and 9 as well; consider the hint for the middle row, which ends with 8; a 9 would automatically exceed this. Thus the middle row must contain the 1 and the sixth row the 9. The hint gives us for free that we have a 7 to the left of the 1.

There are more interesting deductions we can make as well. Consider the seventh column. The clue is 16, 9 and this can be satisfied in two ways: {1357} in some order and then a singleton 9, or {79} and {135} in some order. However, we can prove that the latter construction is impossible: the entry in the middle row is even, and we have already used the 1 and the 5 in the bottom right hand box, so we can’t place those numbers there. Thus, the first four numbers must be {1357} in some order and there is a 9 somewhere in the bottom right box.

The clue in the first row requires a 5 at the end. Given the 10, 10, 5 clue and the 4 already placed in the sixth column, the only odd substring in the last box must be the 5. We thus must have the 5 in the first row, and now only the 3 can fit in the fourth row. We also gather that the last two columns of the first row must be even, and we have enough clues to place these (using standard Sudoku rules).

Finally, we can place the 4 in the middle right box (now that the 3 is there, there’s only one space left where it can fit). Similarly, we can place the 5 (knowing that it can’t be in the seventh column any longer).

From this point on, solving the rest of the puzzle should be reasonably straightforward. The next step would probably involve placing 1, 9, 6, 7 and 3 in the first row (in that order – using the column hints and the knowledge that we have two tens)!

For these contests, I’d say the challenge lies largely in figuring out how the ruleset can be used to help solve the puzzle (and, well, speed; you’re thrown 13-15 puzzles to solve in the space of 90 minutes and I can only usually manage about seven or eight, and for that matter the ones I do finish are the ones on the easier end of the scale!).

On the Road Again

I travelled to Denmark for a recruiting event last week. The nature of the travel (2-hour flights plus an hour or so of ground transportation on each end) was not something I was used to; admittedly most of the flights I can remember taking are pretty long-haul ones (London-Singapore, London-San Francisco are by far the most common; London-New York exists too, though is less common). I had a great time, in any case – although being awake for about 24 consecutive hours especially when slightly ill was not too enjoyable, it was nice to meet up with students and flex some speed coding and debugging muscles. I think I still coded well even when really tired, though it did lead to some confusing moments as I tried to understand what exactly “PAY THE PRICE” meant in terms of a discount (it means no discount, but given that there was already “HALF PRICE” and a sliding reduction scheme, this did confuse me quite a bit).

This is just the second trip so far this year, though I anticipate there’ll be quite a few more. The two trips I’ve made so far were for business reasons; I’ve been hammering away at a major project at work that should be drawing to a close soon so there should be scope for personal time off. There are also likely to be more work trips in the pipeline – within the scope of my major project, for company events at large as well as for recruiting purposes. Of course, there would be even more trips if I was a Forward Deployed Engineer as opposed to a dev.

I remember reading an article in the BA inflight magazine en route to Billund, and noted that in some survey, 30 percent of people would accept a lower paying job if it meant that they could travel more for work. I think I subsequently found the survey they were referencing. I certainly would be in the 70 percent of people that would not, though it’s hard to draw an accurate conclusion given the way the question is framed (what’s the baseline here? I’d think people who already travel 50 percent compared to people who don’t travel at all might respond differently to being asked if they would do this to travel “more”). The survey was also commissioned by Booking.com, which might have a vested interest in portraying people as more willing to travel (so that companies would engage travel provider services more).

Of course, in business terms there are the obvious benefits of having in-person interaction between team members and/or getting the people who know what they’re doing on the ground, to see what’s going on. They can be really useful to get things done fast; communicating over instant messaging or even via a video conference doesn’t tend to be as effective.

I would say the upside of travel as far as an individual is concerned includes the opportunity to experience new places and see new sights, though I’m not sure how much of that you could achieve in a business trip. I find that many of my past trips have been high-octane, fire-filled affairs (though there have been exceptions – and not the programming kind). Other benefits involve meeting up with existing friends and acquaintances in the area (again, the tight schedules of many of my past trips preclude this). The article does make reasonable points concerning extending one’s stay, though – I actually did that two years ago when I stayed on in California for an additional week after one such trip – and fluidity of plans leading to last-minute bookings.

One of the things that makes me wary of travelling too much is actually long, unpredictable delays through security and customs – this might partially be a result of the routes I typically fly on (which go through major hubs with lots of traffic). I do have strategies to mitigate this (typically, I book an aisle seat near the front, and walk at a very brisk pace once disembarking from the plane; I also tend to choose flights that arrive at relatively unpopular times). This wasn’t a problem at LCY nor in Billund Airport in Denmark; queues were very short and handled very quickly.

To some extent this can be mitigated by Registered Traveller in the UK, the ability to use the electronic terminals in the US and the biometric gates in Singapore; it’s especially useful for me since I don’t typically travel with checked baggage, making the whole process of leaving the airport and getting where I need to go much faster. I did some computations to check whether the UK Registered Traveller scheme was worthwhile, and in the end decided it was reasonable. I assumed 12 trips per year with an estimated saving of 25 minutes per trip (a crude estimate: there are 20 minutes between the UK Border Force’s benchmarks for non-EU and EU passport queues, and going by those benchmarks the ePassport gates are probably even faster, hence the extra 5 minutes) to figure out that the net cost was 14 pounds per hour saved post-tax (or 24 pre-tax). It could be argued that those specific hours might have unusually high value (e.g. stress at the end of a long flight), perhaps boosting the case for it. I’ll probably travel even more with Registered Traveller in place, actually.
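The arithmetic here can be laid out explicitly; note that the annual fee below is the figure implied by the quoted 14 pounds per hour over the hours saved, not one I’ve verified against the scheme’s current pricing:

```python
# Registered Traveller cost-per-hour-saved arithmetic.
# The fee is an assumption, back-implied from the quoted 14 GBP/hour.
trips_per_year = 12
minutes_saved_per_trip = 25   # ~20 min benchmark gap, plus 5 for ePassport gates
annual_fee_gbp = 70           # implied: 14 GBP/hour * 5 hours saved

hours_saved = trips_per_year * minutes_saved_per_trip / 60
cost_per_hour_post_tax = annual_fee_gbp / hours_saved
print(hours_saved, cost_per_hour_post_tax)  # 5.0 14.0
```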

I guess the other thing that has recently dampened my appetite for travel (at least on the personal front) would be the depreciation in the pound; relative to many people I know my proportion of overseas expenses is actually already fairly high (because of investments; 90% of my equities and 50% of my bonds are based overseas – and I hold much larger quantities of equities than bonds). To some extent I overreacted to Brexit (my savings, even in US dollar terms, have increased from the “Remain” case model I had) and there’s certainly still scope for travel around the EU where the pound has fallen more like 10 as opposed to 16 percent, though the length and luxury of such trips is likely to be reduced.

I’ll be travelling again soon, though it’ll be another of those long flights instead. Nonetheless, at some point in the future I definitely want to revisit Denmark. This trip was focused rather closely on executing the recruiting events in question (well, there isn’t really much you can do with one day anyway; I was in Denmark for less than 24 hours) – while I did get to see a little of the city, there was certainly much more to do.

Revisiting Mathematics Competitions (Hotseat: AIME II, 2015)

I remember I had a pretty good time doing the Haskell January Test earlier this year, and thought it might be both enjoyable and useful to revisit some of these past mental exercises. Hotseat is intended to be a series that discusses my thoughts, ideas and reflections as I take or re-take examinations or competitions; many of these will end up being related to mathematics or computer science in some way, though I’ll try to also include a few that go against that grain.

Background

I didn’t actually participate in the American Invitational Mathematics Examination (AIME) as I don’t think my high school offered it (the only maths competitions I remember taking myself are the Singapore Math Olympiad and the Australian Mathematics Competition). This is typically the “second round” of the main mathematical competition in the US (the first being the AMC, and the third being the USAMO), and features 15 problems, all of which have integer answers from 0 to 999. The point of this is to facilitate machine scoring. While the questions might indeed have integer answers, they can be very difficult especially towards the end.

2015 II was supposed to be an easy paper, though I didn’t find it much easier than I recall these papers being. You can find the paper here.

I managed to fight my way through 12 out of 15 questions, leaving out two hard problems towards the end and a not-so-difficult one in the middle (mainly because I struggle somewhat with geometry in three dimensions). I’m also no longer used to doing these as quickly as I would have in the past; problem 14, for instance, would probably have taken under 15 minutes back then, while it now took 32. Eight minutes on problem 1 is rather bad as well.

The values below indicate the time spent on each problem; here green means the problem was answered correctly while grey means no answer was given. (In practice, in a contest you would take the 1-in-1000 shot and guess, since there are no penalties for guessing!). I’ve also outlined the subject areas – you can see that I tend to be stronger with combinatorics and probability, and struggle somewhat with geometry!

I think I remember mentioning in a post a very long time ago that when I look at data, I tend to initially look for things that seem unusual. Bearing in mind that the difficulty of these papers tends to rise, the 32 minutes spent on question 14, or the 15 minutes on question 7, is perhaps not too far from expectations (the target “pace” here is 12 minutes per question, if you’re trying to finish). With that in mind, we can look at a couple of outliers:

  • Problems 1, 6 and 11 took unduly large amounts of time.
  • Problems 2, 3, 8 and 12 were done well above pace.

Some of these were actually pretty interesting, which is one reason I tend to enjoy these competitions as well.

Selected Problems in Depth

Problem 1: Let N be the least positive integer that is both 22 percent less than one integer and 16 percent greater than another integer. Find the remainder when N is divided by 1000.

For this question you were actually supposed to find N; the “divided by 1000” shenanigans were so that the answer would be an integer from 0 to 999. The challenge here was really formulating the requisite equation from the problem statement, which would be

\dfrac{39}{50} k_1 = N = \dfrac{29}{25} k_2

and then reasoning that N must be divisible by 39 (so that k_1 is an integer) and by 29 (so that k_2 is). Since these are relatively prime, the least such N = 29 \times 39 = 1131 and the answer was 131. I guess I hadn’t really warmed up yet, and wanted to be very careful to set up the expression correctly, so this took rather long.
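The divisibility argument can be double-checked with a quick brute force over candidate values of N (function name my own):

```python
def least_n():
    """Find the least positive integer N that is 22% less than one integer
    and 16% greater than another, i.e. N = (39/50) k1 = (29/25) k2 with
    k1, k2 integers; equivalently, 39 | 50N and 29 | 25N."""
    n = 1
    while True:
        if (50 * n) % 39 == 0 and (25 * n) % 29 == 0:
            return n
        n += 1

n = least_n()
print(n, n % 1000)  # 1131 131
```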

6 was a bash with Vieta’s formulas, though my brain temporarily switched off when I somehow thought a cubic equation should have four roots. 11, on the other hand, was a rather messy geometry question which I initially found tough to visualise. The solution on Art of Problem Solving involves constructing perpendiculars from the centre of the circle to the two lines, but I didn’t think of that (and for that matter wouldn’t have known that was a property of a circumcentre), and instead had a rather painful trigonometry bash to find the answer.

On the brighter side, 2 and 3 were rather simple questions (an elementary probability question, and a rather quick modulo-arithmetic one) and I cleared them off very quickly. I dealt with 8 using some rather quick-and-dirty casework that yielded the desired result, though it wasn’t too interesting. I’d say 12 was a bit more of a fun one.

Problem 12: There are 2^{10} = 1024 possible 10-letter strings in which each letter is either an A or a B. Find the number of such strings that do not have more than 3 adjacent letters that are identical.

I thought in terms of recurrences and dynamic programming, and realised that valid strings (in the long case) must end in BA, BAA, BAAA or their flipped versions (alternatively, with a blank space in front). To end a string with BAA, it must previously have ended with BA, and so on for BAAA. Thus, the following equations drop out:

 BA(n) = AB(n - 1) + ABB(n - 1) + ABBB(n - 1)

 BAA(n) = BA(n - 1); BAAA(n) = BAA(n - 1)

Since the AB and BA cases are symmetric, we can simplify the first equation above to

 BA(n) = BA(n - 1) + BAA(n - 1) + BAAA(n - 1)

And we have of course the base cases:  BA(1) = 1, BAA(1) = 0, BAAA(1) = 0. Computing  2 \left( BA(10) + BAA(10) + BAAA(10) \right) is then not difficult, and yields 548.
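The recurrence above is easy to evaluate mechanically; a minimal sketch, tracking the run lengths for one letter and doubling by symmetry at the end (function name my own):

```python
def count_strings(n, max_run=3):
    """Count length-n A/B strings with no run longer than max_run.

    end[k] counts valid strings ending in a run of exactly k+1 identical
    letters, for one of the two letters; A/B symmetry gives the factor 2.
    """
    end = [1] + [0] * (max_run - 1)   # length 1: a single run of length 1
    for _ in range(n - 1):
        total = sum(end)              # BA(n) = BA + BAA + BAAA at n-1
        end = [total] + end[:-1]      # a run of length k extends one of k-1
    return 2 * sum(end)

print(count_strings(10))  # 548
```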

In mathematical terms I would say problem 14 required some insight, though for seasoned competitors it would probably still not have been too difficult.

Problem 14: Let x and y be real numbers satisfying x^4y^5 + y^4x^5 = 810 and x^3y^6 + y^3x^6 = 945. Evaluate 2x^3 + (xy)^3 + 2y^3.

I initially tried squaring the resultant expression, and combining the two given results in various ways. This went on for about 20 minutes with little meaningful progress. I then took a step back, attempted and finished up with problem 11, and came back to this. I suddenly had the idea of combining the two givens to force a cubic factorisation:

 x^3y^6 + 3x^4y^5 + 3x^5y^4 + x^6y^3 = 810 \times 3 + 945 

And that simplifies down to

 x^3y^3 (x+y)^3 = 3375 \leadsto xy(x+y) = 15 

We then proceed by factoring the first result:

 x^3y^3 (xy(x+y)) = 810 \leadsto x^3y^3 = 54 

We can get 17.5 for x^3 + y^3 by factoring the second given to x^3y^3(x^3 + y^3). The final result is then just 35 + 54 = 89.
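As a sanity check, we can recover x and y numerically from the derived quantities xy(x+y) = 15 and x^3y^3 = 54, then confirm both givens and the final answer (a sketch; I take the root with x > y, though the expression is symmetric):

```python
# Numerical verification of Problem 14 from the derived intermediate values.
p = 54 ** (1 / 3)                 # xy, from (xy)^3 = 54
s = 15 / p                        # x + y, from xy(x + y) = 15
disc = (s * s - 4 * p) ** 0.5     # x and y are roots of t^2 - s*t + p
x, y = (s + disc) / 2, (s - disc) / 2

# Both original givens hold (up to floating-point error):
assert abs(x**4 * y**5 + y**4 * x**5 - 810) < 1e-6
assert abs(x**3 * y**6 + y**3 * x**6 - 945) < 1e-6

print(round(2 * x**3 + (x * y)**3 + 2 * y**3))  # 89
```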

I subsequently figured out solutions for 13 (my brief intuition about complex numbers was kind of on the right track, but I didn’t have time) and then 9 (not good with 3D geometry), and had a look at the rather painful solution for 15.

Meta-Analysis

It’s worth taking a look at how this performance was relative to the actual score distribution, which is available on the AMC statistics page. A score of 12 would be in the 96.09th percentile, which is interesting; I did the 2015 AIME I a few weeks ago and scored 11, but on that contest that was the 97.49th percentile!

There’s also a report on item difficulty (i.e. the proportion of correct responses to each problem). I’ve graphed this below (items I answered correctly are in green, while those I did not are in red):

It’s interesting that problem 12, a problem I found to be one of the easier ones (assuming you sort by success and then time, I found it to be the 8th hardest) turned out to be the second “hardest” problem on the paper! I’d attribute some of this to a programming background; if I took this contest when I was in high school I think I would probably struggle with 12.

13 and 15 were hard as expected; the next hardest problem in general after that was 10 (just a notch above 12 in difficulty for me) though again it relies on tricky combinatorics. We then have 6, 11 and 8 before 14; I found 6 and 11 relatively harder as well, as mentioned above, though 8 was fairly trivial for me. It turns out that my geometry weakness is fairly real, as 9 was completed by more than half of the participants! We can also observe problem 1 being challenging relative to its location in the paper.

Conclusion

This was a good mental workout, and it felt nice to be able to hack away at the various problems as well. I think a 12 is pretty solid (the 96th percentile is that of AIME qualifiers, which itself is something like the top 5 percent of high school students taking the AMC, so that would be the 99.8th percentile – and you could argue that considering only students taking the AMC already introduces a selection bias towards those with greater aptitude). I’d probably have been able to solve 9 and maybe 15 if I was doing this back in high school, though I’m not sure I would have managed 12.

I can certainly feel a loss of speed, though I think my problem-solving ability is for the most part intact, which is good.

On Hints in Puzzles

I played a game of Paperback with a couple of friends from Imperial yesterday. The game involves players making words from letter cards in their hands, and using them to buy additional, more powerful letter cards and/or cards worth victory points. It’s pretty much a deckbuilder crossed with a word game; while anagramming skills are indeed important, they aren’t always necessary (for instance, I managed to form the word WORK to buy the second highest tier victory point card) and certainly aren’t sufficient (keeping one’s deck efficient is certainly important).

In any case, the game reached a point where one of my classmates seemed to be agonising over what was presumably a tricky hand. The game does have a number of cards that use digraphs such as ES, TH or ED, which can make searching for words a little trickier. The common vowel (a vowel that players are all allowed to use in their words in addition to the cards in their hand) was an O, and the hand in question was [IT], [R], [B], [U] and two wilds. I immediately saw a solution when he sought help (BURrITO or BURrITOs; none of the cards had abilities dependent on word length) and let him know that a ‘perfect’ solution for his hand existed, though we decided to let him figure it out for himself. This started to drag on, though, and giving him the answer directly seemed excessive; I thus decided to drop hints that would hopefully lead him to the solution.

Technically, the search space that he would have had at this point would have been 5! + 6! \times 26 + {7 \choose 2} \times 5! \times 26^2 which is about 1.7 million possibilities, though of course one does not simply iterate through them; combinations like OqRzBITU would probably not even be considered directly, especially since the distribution of letters in English certainly isn’t memoryless (the Q being a good example; it is rarely followed by any character other than U). I gave a few hints, in this order:

  1. The word is a common word that a person in middle school would likely know.
  2. The word can be made with just one of the blanks.
  3. The blank you use matches one of the letters already on the table.
  4. You can eat it.

We could argue that the point of a hint could be to reduce the search space being considered by disqualifying some solutions, and/or serve as a way to encourage the algorithm to explore cases more likely to be correct first. Interestingly, it seems that clue 1 was neither of the above, though. It didn’t really cut down the search space much, and as far as I know it’s not easy to use this information to organise one’s search. Its point was mainly to reassure my classmate that the solution wasn’t too esoteric and he would know it was a valid solution upon considering it. I didn’t think I had played any particularly uncommon words earlier in the game (the rarest might have just been TENSOR or something), though I think I might have played Anagrams with him before, where I know I’ve called out some more… interesting words.

The next two clues do substantially reduce the search space, though: the second clue slashes it to just 6! \times 26 = 18720 possibilities, and the third further cuts it down to 6! \times 6 = 4320. I would think that after the third hint, the search should be pretty fast; I would probably try doubling up on the R and T first as they’re common.
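The search-space figures quoted above are quick to verify (variable names my own):

```python
from math import comb, factorial

# Five fixed tiles ([IT], R, B, U and the common O), plus up to two wilds,
# each of which can stand for any of 26 letters.
no_wilds = factorial(5)
one_wild = factorial(6) * 26
two_wilds = comb(7, 2) * factorial(5) * 26 ** 2
print(no_wilds + one_wild + two_wilds)  # 1722360, i.e. about 1.7 million

# Hint 2 (exactly one wild) and hint 3 (the wild matches one of the six
# letters already on the table) shrink the space dramatically:
print(factorial(6) * 26)  # 18720
print(factorial(6) * 6)   # 4320
```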

Nonetheless, hint number 4 seemed to be the hint he responded best to, and it falls into the second category (encouraging the search to explore cases more likely to be correct). I’m not sure how humans solve anagrams, especially when blanks are involved; I personally seem to look at letters, intuitively construct a word that looks like it incorporates the key letters present, and then verify that the word is constructible (in a sense, it is a generate-and-test approach). Given the point values of the tiles, I saw including the B, U and IT cards as being the most important; thinking of words that use these patterns resulted in BITUMEN and BURRITO fairly quickly, the latter of which was feasible.

I noticed that I tend to bias towards using the constraint letters near the beginning of the word though; this might be a cognitive issue with trying to satisfy the constraints quickly, in an unusual form of the human brain often being weak at long-term planning. This wasn’t really the case with the burrito example, but another friend had a hand of ten cards (I think) at some point with an [M], [IS], [T], [R], [D] and a couple of blanks. With the common O and knowing that the M, IS and D were higher-value, I saw MISDOeR and struggled to incorporate the T, finding it difficult to back out of starting the word with a MIS- prefix. It turns out it might have been okay, in that I could have found MISsORTeD; in any case, there were easier solutions such as aMORTISeD or MODeRnIST.

In hindsight, though, my hinting might not have been ideal. This is because BURrITO and its plural form weren’t the only solutions; there was OBITUaRy, which some people might find easier to see (though I didn’t actually see this during the game). I think clue 1 would probably already be invalid; I knew the word ‘obituary’ in primary school, since it was a section in the newspapers that I would browse through in the mornings, but I’m not sure it would be fair to expect middle school students in general to be familiar with the term. Clue 2 would certainly have been incorrect.

The Boy Who Would’ve Reached for the Stars (Q1 Review)

The past few weeks have been incredibly intense; I had been root-causing and then fixing a fairly complicated issue on AtlasDB that involved unearthing rather messy relationships between various components. I thus took three days off this week to cool off a little, and also to collect some of my thoughts on the quarter gone by.

Looking at my public GitHub statistics, I merged twenty-eight pull requests over the last three months, ranging in size from one-liners to an almost 3000-line monster that should probably never have been. That’s about two per week, which is a reasonably healthy rate, especially bearing in mind that much of the last three weeks was spent debugging the big issue. (I acknowledge this metric is pretty coarse, but it does at least reflect positively in terms of the pace at which development work is getting reviewed and merged to develop.)

It seems I’ve done quite a fair bit of pull-request reviewing as well; recently this has to some extent shifted towards benchmarking and performance, which is something I find to be personally interesting. Concepts that a few months ago were merely abstract ideas I was vaguely aware of, such as volatile variables, blackholes and false sharing now actually do mean a fair bit more to me. I’d say there’s progress here.

Other goals in terms of computing such as learning more about distributed systems, contributing to interviewing and sharpening my knowledge of Java have certainly been trundling along acceptably well. My work with MCMAS development also continues, and on a bright note my submission (together with Prof. Alessio Lomuscio) of a paper on symbolic model checking CTL*K got accepted at AAMAS 2017. I’m also in the process of setting up a more modern version control infrastructure for MCMAS, and merging my own CTL*K-related changes from the project down into MCMAS’s master branch. This adds an end-to-end test framework and also fixes some old bugs in MCMAS, including deadlocks in counterexample generation.

Financially, the market continued its march upwards and my portfolio gained 4.8 percent (inclusive of reinvested dividends). Interestingly, I thought it had largely been flat across this January-March period. I was of course aware of the Trump reflation trade, but thought it was already largely priced in by the end of last year. Looking at things within my control, I continued my monthly regular investment scheme, and successfully completed an admittedly ill-conceived challenge in February to limit discretionary expenditures for the month to a hundred and fifty pounds (clocking in at 148.16). I decided to replace my laptop in March, which was a significant budget bump, but not unreasonable given my previous machine had an SSD failure and had served me well for about two years.

Amidst all this, it can be easy at times for me to simply lose myself in the rhythm of getting things done, to paraphrase a certain rapper, without considering why said things are being done. I had to make a trip to New York to do some important work, and on the flight back to London I noticed X Factor winner James Arthur’s new album, “Back from the Edge”, on the inflight entertainment system. I liked “Impossible” and “Say You Won’t Let Go”, and I was aware he had a great belting voice, so I decided to give it a spin.

The title track, among other things, flags a desire for the speaker to return to his beginnings. It’s delivered with a lot of vocal power (as I would expect), which works for me at least (viewed in the sense of announcing or declaring one’s return):

Back from the edge, back from the dead
Back from the tears that were too easily shed
Back to the start, back to my heart
Back to the boy — who would’ve reached for the stars

Thinking back, things have changed quite a fair bit for me over the last few years as I moved from Singapore to London. I’d say the most obvious changes were in terms of academic/professional expectations (they’ve always been fairly high, but skyrocketed after placing 1st in first year), work ethic (I do remember being pretty lazy) and personal finance (I’m now much more frugal and also invest in stocks and funds). In many cases, I’m relatively satisfied that these changes have taken place; I wouldn’t say that after these four to five years and reflecting upon them I’m now seeking to come “back from the edge” or “back from the dead”. However, I’m sure there are some less obvious changes that are certainly less positive, such as relatively less interest in and less time spent on learning outside of engineering and finance; these are often exacerbated by high expectations leading to more time/effort spent on ensuring said high expectations are met. There definitely exist things which I used to like about my past self that aren’t as present in my present self.

I’ll be finalising Q2 targets soon; apart from the usual engineering/finance/friendships (EFF) targets, which are certainly very important, I will include one or two other things as well. I’ve generally found it difficult to identify what I want to accomplish beyond EFF (often my primary foci), and other things tend to be very quickly deprioritised if specific objectives for them aren’t established.

On Paying Oneself First

The expression “pay yourself first” is one very frequently put forth in personal finance circles. In practice, this often involves automatically rerouting some amount of money into a separate savings or investment account whenever a paycheck comes in. Besides the mathematical benefits (compounding; to some extent, risk mitigation via cost averaging if one’s investing), I can see how the approach could be useful psychologically (in that it reflects a shift in one’s mindset concerning money, as well as a conscious prioritization of savings and capital growth). I’ve personally been following this, though I’ve been investing the money into a portfolio of equity trusts and index funds (I’d recommend having an emergency pot to sustain a few months’ expenses first, though).

I see no reason why this can’t be generalized beyond money, though, especially since we do have to manage far more important scarce resources. In particular, I’m looking at time. I’ve received feedback that I tend to slant towards being outcome-oriented, and this does mean that if an important project is lagging but I see a way to recover, I’ll tend to vigorously pursue it – it’s thus not too difficult for me to end up spending 70 or even more hours a week on said project (or even 90-plus, as I did in second year at Imperial). I’ve learned that this is unsustainable, but for me it’s still not the easiest decision to drop something (to the point where a friend commended me for actually using some of my leave during the industrial placement!).

If we look at time in the same way, we get 24 hours a day, or 168 a week; things like sleep are of course important, but I tend to see them more as bills or taxes that need to be paid (not paying them tends to lead to interest!). So paying myself first would involve reserving some time for something else; I’d propose personal learning and development as a good candidate for this.

This is perhaps unsurprising; I suspect that if I polled people as to what “investing in oneself” entails, many answers concerning education would be forthcoming. Like bonds and equities (hopefully), developing one’s skills can lead to future payoffs. I do tend to partition this time into roughly three different domains:

  1. Technical development – e.g. paper reading, programming contests, code katas. These are likely to feed back into my software engineering work. I’d probably consider these similar to equity income mutual funds; they (ideally) grow in value reasonably steadily and generate nice payoffs along the way too.
  2. General professional development – e.g. writing, finance, tax. Useful both for software engineering work (from what I can recall, I’ve written a lot of docs) and for managing my own professional matters. Again, these are generally useful; perhaps they have smaller immediate payoffs than technical development, though I tend to think of them as also very important in the long run. Perhaps these would be more similar to a growth-focused fund, then? Or even BRK-B (or BRK-A; we’ll get to that, but I don’t have a quarter of a million quite yet!).
  3. Random development – singing, photography, etc. These are generally quite fun (I do also enjoy software engineering, writing and finance, but they, like all other domains, tend to have diminishing marginal returns), and might suddenly explode into usefulness or value given the right conditions. Perhaps these are like emerging market equity funds that are focused on a specific country. There’s certainly a fair bit of variance, but the returns can sometimes be impressive. (If one wishes to take the metaphor even further, deep out-of-the-money options could be more accurate; they certainly add a bit of fun to a portfolio, too! That said, I have no direct experience with options trading.)

Of course, money and finances are an important thing to manage, and I believe that paying oneself first is a good strategy there. However, it does feel to me that time is even more important. My admittedly equity-heavy portfolio lost around 6 percent during the US election jitters, and this is a portfolio which I’ve built up over the course of about a year and a half now – yet I didn’t feel much (I was expecting a further fall post-Trump win, though in the end the markets rallied). I’m the kind of person who can get annoyed if I find that a day was wasted – let’s not even get started on my reaction to 6% of 18 months (just over a month; 32.87 days). Of course, we have to be careful about jumping to conclusions about what constitutes waste or loss, but I think the point that I find loss of time more painful than loss of money (at least at a small-percentage scale) still holds.

Taking Risks on Rotations

I haven’t owned a gaming console since the Playstation 2, mainly because I’ve found PCs to be sufficient for my own purposes and I don’t do very much gaming in the first place. I already have a pretty good PC for some of my own work (such as programming and some image/video editing I’ve done in the past). Nonetheless, I do enjoy watching a few YouTube channels which feature playthroughs of games with commentary, and I’ve certainly seen a lot of news regarding Nintendo’s latest console, the Switch.

One of the launch titles for the console, 1-2-Switch, is a minigame collection which largely serves to demonstrate the capabilities of the Switch’s controllers (or “Joy-Cons”). The controllers have buttons, motion sensing capabilities, a camera (“Eating Contest”) and some kind of precise vibration (“Ball Count”). Most of the minigames seem simple and somewhat fun, as well as relatively balanced (in fact, most games don’t really feature a way for players to interfere with one another, and several others have the players’ roles over the course of the entire minigame being largely symmetric). There are a few that seem unbalanced (“Samurai Training” – assuming equal probability of success, the first player has a greater probability of winning), though one was clearly unbalanced in favour of the second player: “Joy-Con Rotation”.

The game invites players to rotate their controller about a vertical axis as much as possible, without otherwise disturbing it (e.g. by tilting the controller) too much. Scores are measured in degrees of rotation, with “unsuccessful” turns (e.g. too much extraneous movement) scoring 0. Players take three turns each, alternating, and the scores from their three turns are added up. In a sense this gives the second player an advantage; on his final turn he knows how much he needs to rotate the controller to win, and can aim for precisely that. The problem we’re analysing today is then: how do we decide how much of a rotation to attempt if we’re the first player, and how much of an advantage does the game give the second player (at least as far as the final turn is concerned)?

Let’s make a couple of simplifying assumptions to this:

  • For each player i, let X_i be a continuous random variable over [0, \infty). The X_i are assumed independent, with continuous cumulative distribution functions F_{X_i}(x); for a given value of x representing the number of degrees player i attempts to rotate the controller, the attempt is successful iff X_i \leq x, i.e. with probability F_{X_i}(x).
  • We assume scores are continuous (in the actual game, they’re only precise down to the degree).

Now, assuming we aim for a given rotation x, the probability of a win is then approximately F_{X_1}(x) \times (1 - F_{X_2}(x)), which we seek to maximise. Thus, for a simple example, if the X_i are exponentially distributed with parameters \lambda_1 and \lambda_2, we then seek to maximise the function

g(x) = (1 - e^{-\lambda_1x}) \times e^{-\lambda_2x}
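
Filling in the differentiation step: setting g'(x) = 0 and dividing through by e^{-\lambda_2 x} requires

\lambda_1 e^{-\lambda_1 x} = \lambda_2 \left(1 - e^{-\lambda_1 x}\right), \quad \text{i.e.} \quad e^{-\lambda_1 x} = \frac{\lambda_2}{\lambda_1 + \lambda_2}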

This can be maximised, and yields a solution of x^* = \frac{1}{\lambda_1} \log \frac{\lambda_1 + \lambda_2}{\lambda_2}; this gives the first player a winning probability of

\left( \dfrac{\lambda_1 + \lambda_2}{\lambda_2} \right)^{-\frac{\lambda_2}{\lambda_1}} \left( \dfrac{\lambda_1}{\lambda_1 + \lambda_2} \right)

So if both players are equally skilled in terms of rotation ability (that is, \lambda_1 = \lambda_2), then the first player wins with probability just 0.25. Furthermore, we can find out how much better player 1 needs to be for an even game (that is, the value of k for which \lambda_1 = k\lambda_2); we formulate

(k+1)^{-\frac{1}{k}} \left(\dfrac{k}{k+1}\right) = 0.5

which works out to about k = 3.4035. If this player went second, however, he would have an over 0.905 probability of winning!
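These numbers are straightforward to sanity-check numerically. The following is my own small Java sketch (the class and method names are mine, not part of the original analysis): it evaluates the closed-form winning probability above and recovers k by bisection.

```java
// Numeric check of the closed-form results for the exponential case.
public class RotationOdds {
  // Optimal attempt x* = (1/l1) * ln((l1 + l2) / l2), from the closed form above.
  static double optimalAttempt(double l1, double l2) {
    return Math.log((l1 + l2) / l2) / l1;
  }

  // First player's winning probability at the optimal attempt:
  // ((l1 + l2) / l2)^(-l2/l1) * l1 / (l1 + l2).
  static double firstPlayerWinProb(double l1, double l2) {
    return Math.pow((l1 + l2) / l2, -l2 / l1) * (l1 / (l1 + l2));
  }

  public static void main(String[] args) {
    System.out.println(optimalAttempt(1, 1));     // ln 2, about 0.693
    System.out.println(firstPlayerWinProb(1, 1)); // 0.25 for equally skilled players

    // Find k = l1 / l2 giving an even game, by bisection (win prob rises with k).
    double lo = 1, hi = 10;
    for (int i = 0; i < 60; i++) {
      double mid = (lo + hi) / 2;
      if (firstPlayerWinProb(mid, 1) < 0.5) lo = mid; else hi = mid;
    }
    System.out.println(lo);                            // about 3.4035
    System.out.println(1 - firstPlayerWinProb(1, lo)); // same player going second: about 0.905
  }
}
```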

The multiple trial case is interesting, especially because it seems the problem does not exhibit optimal substructure; that is, maximising the probability of winning a given round isn’t necessarily the correct way to play over multiple rounds. For a simple (if highly contrived) example, let’s suppose we play two rounds, with the higher total score winning. Player 1 can (roughly) make rotations up to 50 degrees with probability 1, up to 200 degrees with probability 0.3, and more than that with probability 0. Player 2 can make rotations up to 49 degrees with probability 1, up to 52 degrees with probability 0.6, and more than that with probability zero. To win one round, player 1 should spin 50 degrees (win probability: 0.4); but over two rounds, spinning 50 degrees twice lets the opponent spin 52 once and 49 once (leaving a win probability of 0.4), whereas going for 200s and hoping for at least one success gives a win probability of 0.51.
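The two win probabilities in the contrived example can be checked with direct arithmetic; here is a minimal sketch (the probabilities are hard-coded from the example above, and the class name is mine):

```java
// Checking the contrived two-round example. Player 1 commits to a plan;
// player 2 responds with full knowledge, so player 1 wins only if player 2
// cannot reach a strictly higher total.
public class TwoRounds {
  public static void main(String[] args) {
    // Plan A: spin 50 twice, totalling 100 with certainty. Player 2's best
    // reply is 52 + 49 = 101, which comes off w.p. 0.6, so player 1 wins w.p.:
    double planA = 1 - 0.6;

    // Plan B: go for 200 twice. A single success already totals at least 200,
    // beating player 2's maximum of 52 + 52 = 104; player 1 loses only if
    // both attempts fail, w.p. 0.7 * 0.7.
    double planB = 1 - 0.7 * 0.7;

    System.out.printf("%.2f %.2f%n", planA, planB); // 0.40 0.51
  }
}
```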

Iterative Dichotomiser (January Haskell Tests 2017)

I probably can’t really explain this much better than the source:

The January tests are on-line Haskell programming tests sat under examination conditions by first-year undergraduate students at Imperial College at the end of a six-week introductory programming course.

Dr. Tony Field has been publishing them for quite a few years now (the linked page goes back to 2009). I still remember, to some extent, my first-year Haskell courses; I was somewhat impressed by the rationale for choosing Haskell, even though my understanding at the time was rather clouded. I do remember a specific instance where Tony polled the class on past programming experience, noting hands for C++ and Java (I raised my hand for both), and then tossing in Haskell (a few people; probably because Imperial did say to prepare to study Haskell beforehand!). Besides this, I think having to worry less about persistent state (or race conditions, though I don’t think we covered concurrency at Imperial until second year) and being closer to the notion of mathematical functions, which students should already have been exposed to, would also have helped.

This year’s test (2017) covered decision trees, culminating in a question inviting candidates to implement the information gain version of ID3 when building a decision tree from a dataset. It wasn’t too difficult as a whole, as Tony acknowledged on his page; despite probably last having written Haskell about a year ago when I attempted the 2016 test, I finished comfortably in 2 hours and 12 minutes (the time limit is 3 hours). I think this test as a whole required some fairly careful work, but didn’t demand anything in terms of special insight even at the very end (as some past tests have done). The elegance of my answers would probably leave a fair bit to be desired, though; I found I was building recursive traversals of the given datatypes very frequently.

That said, I found the first part somewhat more difficult than in the past. Typically Part I was extremely straightforward (rather awkwardly, there used to be a question asking students to implement lookUp almost every year); not so much this time. In particular, there was a rather fiddly function to write that involved navigating some data structures and building a frequency table; the spec featured a lot of type definitions that reminded me a bit of some experiments with Object Calisthenics (in particular, the “wrap primitives and strings in classes” rule). That said, completing Part I alone would already have let you pass (with a 47; the pass mark is 40). I think the frequency table was harder than anything in Part II, actually, which had a few fairly straightforward tree functions to write.

Moving on, part III effectively introduced students to the Strategy pattern (in terms of an attribute selector for deciding which attribute to split the dataset on). Apparently, it was possible to solve partitionData with a suitable one-liner, though I unfortunately didn’t come up with anything along those lines, and went with a rather “direct” approach of dividing the rows by the element and removing the relevant attributes. Part IV introduced the concepts of entropy and information gain, and thus invited students to implement ID3; given the title of the paper I wasn’t too surprised to see this at the end.
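The test itself was in Haskell, which I don’t have to hand; purely as a hedged illustration, here is how the two quantities Part IV revolves around – entropy and information gain – might look in Java (all names here are mine, not the test’s):

```java
import java.util.*;

// Sketch of ID3's attribute selection criteria: entropy of a label
// distribution, and information gain from splitting a dataset on an attribute.
public class Id3Sketch {
  // H(S) = -sum over labels of p * log2(p).
  static double entropy(List<String> labels) {
    Map<String, Long> counts = new HashMap<>();
    for (String l : labels) counts.merge(l, 1L, Long::sum);
    double h = 0.0, n = labels.size();
    for (long c : counts.values()) {
      double p = c / n;
      h -= p * (Math.log(p) / Math.log(2));
    }
    return h;
  }

  // Gain(S, A) = H(S) - sum over values v of (|S_v|/|S|) * H(S_v), where S_v
  // is the subset of rows taking value v. Rows here are (attributeValue, label).
  static double informationGain(List<String[]> rows) {
    List<String> all = new ArrayList<>();
    Map<String, List<String>> byValue = new HashMap<>();
    for (String[] row : rows) {
      all.add(row[1]);
      byValue.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row[1]);
    }
    double gain = entropy(all);
    for (List<String> subset : byValue.values())
      gain -= (subset.size() / (double) rows.size()) * entropy(subset);
    return gain;
  }

  public static void main(String[] args) {
    // A perfectly informative attribute: its value determines the label.
    List<String[]> rows = Arrays.asList(
        new String[]{"sunny", "yes"}, new String[]{"sunny", "yes"},
        new String[]{"rainy", "no"}, new String[]{"rainy", "no"});
    System.out.println(entropy(Arrays.asList("yes", "yes", "no", "no"))); // 1.0
    System.out.println(informationGain(rows)); // 1.0: the split is perfect
  }
}
```

ID3 then simply picks, at each node, the attribute with the largest gain and recurses on the induced subsets.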

I found it fairly interesting to revisit Haskell, and it was nice to see that I was still able to work with the language after not touching it for a year or so. While it might be fairly unlikely that I would work with functional languages in professional terms, concepts and reasoning that are more apparent in functional languages do certainly apply even when I work in Java or C++, whether in the obvious sense (streams/map/filter etc.) or less so (inductive/algebraic datatypes).

Diagnosing a Faulty Disk

Recently, I had a lot of difficulty booting into Windows on my home laptop. More specifically, I could boot, but the OS would very quickly become unresponsive. However, I had no such issues booting into Ubuntu (and I’m typing this from that). I tried booting into Safe Mode and disabling auxiliary services and devices, but that was to no avail.

Nonetheless, I could use Ubuntu to perform some analysis. I noticed rather unpleasant-looking entries in the kernel message buffer (as reported by dmesg).

[   20.588310] ata6.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[   20.588311] ata6.00: irq_stat 0x40000001
[   20.588312] ata6.00: failed command: READ DMA
[   20.588315] ata6.00: cmd c8/00:08:40:28:1c/00:00:00:00:00/e0 tag 22 dma 4096 in
[   20.588315]          res 51/40:08:40:28:1c/00:00:00:00:00/e0 Emask 0x9 (media error)
[   20.588316] ata6.00: status: { DRDY ERR }
[   20.588316] ata6.00: error: { UNC }
[   20.619619] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'
[   20.647929] ata6.00: configured for UDMA/133
[   20.647935] sd 5:0:0:0: [sdb]  
[   20.647936] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[   20.647937] sd 5:0:0:0: [sdb]  
[   20.647938] Sense Key : Medium Error [current] [descriptor]
[   20.647939] Descriptor sense data with sense descriptors (in hex):
[   20.647942]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 
[   20.647943]         00 1c 28 40 
[   20.647944] sd 5:0:0:0: [sdb]  
[   20.647945] Add. Sense: Unrecovered read error - auto reallocate failed
[   20.647946] sd 5:0:0:0: [sdb] CDB: 
[   20.647948] Read(10): 28 00 00 1c 28 40 00 00 08 00
[   20.647949] end_request: I/O error, dev sdb, sector 1845312
[   20.647955] ata6: EH complete

Hmm, UNC stands for uncorrectable; not great. These errors were easily reproduced. Upon finding that I could nonetheless seem to view most of the contents of the file-system when mounting the SSD in Linux, I immediately took a backup of the important data to an external hard drive, though this was probably objectively unnecessary (really important data is backed up in the cloud).

My first instinct was then to examine the drive’s SMART (self-monitoring, analysis and reporting technology) data. I first performed an extended self-test of the drive, which seemed to yield suspiciously positive results.

Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      4874         -

I decided to probe further. The tool’s output includes a table of various attributes which can be metrics of a drive’s health:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   000    Pre-fail  Always       -       0
  5 Reallocated_Sector_Ct   0x0003   100   100   000    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0002   100   100   000    Old_age   Always       -       2664
 12 Power_Cycle_Count       0x0003   100   100   000    Pre-fail  Always       -       2577
170 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       2
171 Unknown_Attribute       0x0003   100   100   000    Pre-fail  Always       -       0
172 Unknown_Attribute       0x0003   100   100   000    Pre-fail  Always       -       0
173 Unknown_Attribute       0x0003   100   100   000    Pre-fail  Always       -       102815
174 Unknown_Attribute       0x0003   100   100   000    Pre-fail  Always       -       59
175 Program_Fail_Count_Chip 0x0003   000   000   000    Pre-fail  Always       -       0
176 Erase_Fail_Count_Chip   0x0003   100   100   000    Pre-fail  Always       -       0
177 Wear_Leveling_Count     0x0003   100   100   000    Pre-fail  Always       -       102815
178 Used_Rsvd_Blk_Cnt_Chip  0x0003   100   100   000    Pre-fail  Always       -       1
179 Used_Rsvd_Blk_Cnt_Tot   0x0003   000   000   000    Pre-fail  Always       -       2
180 Unused_Rsvd_Blk_Cnt_Tot 0x0033   100   100   000    Pre-fail  Always       -       784
181 Program_Fail_Cnt_Total  0x0003   100   100   000    Pre-fail  Always       -       0
182 Erase_Fail_Count_Total  0x0003   100   100   000    Pre-fail  Always       -       0
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0033   001   001   000    Pre-fail  Always       -       26018192
187 Reported_Uncorrect      0x0003   100   100   000    Pre-fail  Always       -       2
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       19
192 Power-Off_Retract_Count 0x0003   100   100   000    Pre-fail  Always       -       59
195 Hardware_ECC_Recovered  0x0003   100   100   000    Pre-fail  Always       -       0
196 Reallocated_Event_Count 0x0003   100   100   000    Pre-fail  Always       -       2
198 Offline_Uncorrectable   0x0003   100   100   000    Pre-fail  Always       -       1
199 UDMA_CRC_Error_Count    0x0003   100   100   000    Pre-fail  Always       -       0
232 Available_Reservd_Space 0x0003   100   100   010    Pre-fail  Always       -       0
233 Media_Wearout_Indicator 0x0003   100   100   000    Pre-fail  Always       -       6425
241 Total_LBAs_Written      0x0003   100   100   000    Pre-fail  Always       -       214612
242 Total_LBAs_Read         0x0003   100   100   000    Pre-fail  Always       -       214201

This is a rather large and nasty table, and it didn’t seem that Plextor implemented any meaningful values for the thresholds. Nonetheless, raw data counts were available. The first alarm bell was metric number 184, End-to-End Error (which sounds terrible); that had apparently happened over 26 million times. Some sources suggest this is critical while others do not; I don’t have historical data regarding the progression of this figure, so it would be difficult to draw conclusions as to how this happened – or, if the sources saying this is a critical metric are correct, how the disk limped to this point in the first place.

Nonetheless, there were other negative indicators as well; 187, 188 and 198 which have been associated with disk failures were all notably more than zero. There were several other ATA errors appearing in the smartctl output as well.

For comparison, I ran the diagnostic tools on my HDD:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   062    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   040    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0007   118   118   033    Pre-fail  Always       -       2
  4 Start_Stop_Count        0x0012   098   098   000    Old_age   Always       -       4642
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   040    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0012   084   084   000    Old_age   Always       -       7425
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       2574
191 G-Sense_Error_Rate      0x000a   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       32
193 Load_Cycle_Count        0x0012   072   072   000    Old_age   Always       -       283355
194 Temperature_Celsius     0x0002   146   146   000    Old_age   Always       -       41 (Min/Max 10/58)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       1
223 Load_Retry_Count        0x000a   100   100   000    Old_age   Always       -       0

Much better. I looked up 199 just in case, but it seemed fine (just one occurrence; and in any case the super important data has been backed up). Notice 197, 198 and 5 (bad sector reallocations) are all zero.

The current situation is fine (the slower startup times are a little unpleasant, though would probably have been worse with Windows). I might investigate replacing the disk and/or getting a new machine (this laptop is reaching 3 years, so an upgrade would be nice) when my budget allows for it. That said, my usage patterns don’t seem to suggest a higher-end machine is necessary at all, so I might stick with it (Sims 4 isn’t that demanding; the most demanding thing I played would probably be Fallout 4 and while I could use a GPU upgrade, that’s clearly a want, not a need). I haven’t quite decided yet.

On the Practical Complexity of Efficient Testing

This is a part 2 to the previous post on the theoretical complexity of efficient testing. Recall that we modelled tests as being used to verify that code satisfied some requirements, and then modelled the problem of efficient verification as finding the smallest set of tests that covered all of the requirements.

Although the decision problem is NP-complete, we can still put forth a decent attempt at solving it. We can rewrite the set-covering problem as an integer linear programming problem (define an indicator variable for each test, indicating whether it is included in the test set or not; define a constraint for each requirement, indicating that at least one of the tests satisfying it is selected; and then minimise the sum of all of the indicator variables). There are many solvers, such as GLPK or CBC, that can solve even fairly large instances of these problems. Similarly, we can also reformulate set cover as boolean satisfiability; there are many solvers that can handle large formulae with many variables as well.
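To make the set-cover framing concrete, here is a hedged Java sketch (all names mine) that finds a minimum covering test set by brute force over subsets; this is only viable for small instances, which is precisely why the ILP and SAT encodings matter at scale:

```java
import java.util.*;

// Exact minimum test-set selection by brute force: test i covers the
// requirements in covers.get(i); we want the fewest tests covering them all.
public class MinTestSet {
  static Set<Integer> smallestCover(List<Set<Integer>> covers, Set<Integer> requirements) {
    int n = covers.size();
    Set<Integer> best = null;
    for (int mask = 0; mask < (1 << n); mask++) {
      Set<Integer> covered = new HashSet<>();
      Set<Integer> chosen = new HashSet<>();
      for (int i = 0; i < n; i++)
        if ((mask & (1 << i)) != 0) { chosen.add(i); covered.addAll(covers.get(i)); }
      if (covered.containsAll(requirements) && (best == null || chosen.size() < best.size()))
        best = chosen;
    }
    return best; // null if no subset covers everything
  }

  public static void main(String[] args) {
    // Requirements 1..4; test 0 covers {1,2}, test 1 {2,3}, test 2 {3,4}, test 3 {1,4}.
    List<Set<Integer>> covers = Arrays.asList(
        new HashSet<>(Arrays.asList(1, 2)), new HashSet<>(Arrays.asList(2, 3)),
        new HashSet<>(Arrays.asList(3, 4)), new HashSet<>(Arrays.asList(1, 4)));
    // Prints a minimum-size test set; two tests suffice here.
    System.out.println(smallestCover(covers, new HashSet<>(Arrays.asList(1, 2, 3, 4))));
  }
}
```

The ILP formulation mirrors this directly: one 0/1 variable per test, one covering constraint per requirement, minimising the sum of the variables.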

That said, although we can minimise the number of tests being used, it’s not entirely certain that we should, for several reasons. For example, suppose we wanted to test a function that returns the set of characters occurring exactly two times in a string. Here is one possible implementation – and I’d be fairly confident in saying that you can’t really do better than linear time (you can’t avoid examining parts of the string in general, though there are some cases where you can short-circuit, e.g. if you have already examined a portion of the string where every allowable character has appeared at least three times).

public static Set<Character> findPairs(String input) {
  Preconditions.checkNotNull(input, "findPairs called on null input");
  Map<Character, Integer> charCount = Maps.newHashMap();
  for (char ch : input.toCharArray()) {
    charCount.putIfAbsent(ch, 0);
    charCount.put(ch, charCount.get(ch) + 1);
  }
  return charCount.entrySet()
                  .stream()
                  .filter(entry -> entry.getValue() == 2)
                  .map(Map.Entry::getKey)
                  .collect(Collectors.toSet());
}

The first problem would obviously be whether the number of tests is even a good metric. I’ve written a few tests for the method above:

@Test
public void returnsCharacterAppearingTwice() {
  assertThat(findPairs("aa")).containsExactly('a');
}

@Test
public void doesNotReturnCharacterAppearingThrice() {
  assertThat(findPairs("aaa")).isEmpty();
}

@Test
public void throwsOnNullString() {
  assertThatThrownBy(() -> findPairs(null))
          .isInstanceOf(NullPointerException.class);
}

@Test
public void canFindPairs() {
  assertThat(findPairs("aa")).containsExactly('a');
  assertThat(findPairs("aaa")).isEmpty();
  assertThatThrownBy(() -> findPairs(null))
          .isInstanceOf(NullPointerException.class);
}

I’d certainly prefer having the three tests which each test something specific, as opposed to the single canFindPairs() test (in fact, if I came across the latter in a code review I would push back on it). The main problem here is that one way of reducing the number of tests is simply to merge existing tests or run large integration tests only, which is generally not great. Incidentally, this could lead to an extended heuristic, where we weight test methods by number of assertions.

But let’s suppose tests have disjoint assertions, and we don’t attempt to game the system in the way described above. The next issue is then how we define requirements. One possibility is to give methods well-defined postconditions and check that tests verify these, but this is unlikely to scale to large systems.

A common method, then, is to use code coverage as a proxy (this can be measured automatically via tracing of test calls). Line coverage, with some adjustment for conditionals, could be a good starting point. However, this isn’t really a good metric either – the three tests introduced above, or the single canFindPairs() test, actually achieve 100 percent coverage by most definitions:

  • We have an input that violates the precondition, and two that pass it (the Preconditions.checkNotNull call).
  • We do exercise the body of the for loop with the “aa” and “aaa” tests.
  • We obtain both true and false outcomes from the predicate in the filter step (though depending on the definition used, this might not even be required for the line to count as covered).

Yet, if someone submitted only the above tests for findPairs() and I did a code review, I would ask them to add more testing. I’d probably expect at least the following:

  • empty string ("")
  • one char ("a")
  • inputs returning multiple chars ("aabbbccdde")
  • nonconsecutive pairs ("abab")

Furthermore, the above method is not actually correct for characters outside the Basic Multilingual Plane (Java strings are UTF-16, so such characters are split into surrogate pairs by toCharArray()); if (but only if) such inputs would be likely given the context of the application involved, I would ask for a test featuring them as well.

Thus, by optimising for code coverage and eliminating tests based on that, we run the risk of weakening our tests to the point where they couldn’t catch legitimate faults. For example, a test using characters outside the Basic Multilingual Plane as described above would be unlikely to improve coverage at all, and thus might be pruned (allowing our implementation to pass even though it wouldn’t work). Approaches for evaluating whether pruning is worthwhile include having developers plant faults in code and seeing if the pruned test suites can still catch them, or automatically mutating implementations (e.g. interchanging operations, changing the order of lines of code, etc.) and seeing if test suites behave differently before and after pruning.

Coverage is still probably one of the least bad metrics in my opinion – I can’t really think of a way of improving on it cheaply and scalably. Furthermore, studies have shown that although line coverage is a fairly blunt instrument, it can nonetheless, in several practical cases, achieve decent reductions in test suite size without harming fault detection too much; that said, the most aggressive solutions (such as using integer linear programming) seem to overfit to some extent, detecting disproportionately fewer of the faults that were introduced.