Beyond the Storm

The 5th of June 2020 bore similarities, for me, to the 29th of June 2016, the 6th of October 2015 and the 4th of January 2012. What is common between these dates is that each falls just after a large – or, as is the case with the most recent instance, intense – project or series of projects was completed (the first and third were work related; the second was MCMAS-Dynamic, and the fourth was my full-time national service in Singapore). The final stages of these projects usually involve blatantly unsustainable working practices in the pursuit of excellence, quality, and/or tying up all loose ends. Interestingly, all but the most recent instance were tied to hard timelines of some kind.

It might seem like finishing a large or intense project would be a cause for celebration. That’s certainly true, both because of the extrinsic value of a project (performance improvements, new features, research contributions, or even some peace of mind) and because of the intrinsic sense of accomplishment from having completed (or at least survived) it. However, I find that there’s also a degree of emptiness that tends to grow in the few days after. The projects were inherently challenging and took up a large amount of my time, especially in terms of the fraction of my hours they consumed in their final weeks. They also often demanded a significant amount of mental bandwidth and fortitude to complete successfully. Thus, when they’re finished, there’s invariably a large disruption to the routine I’d have had in the days leading up to completion. While my attitude towards downtime and rest periods has grown into a more defensive and, I think, healthier position over time (so that I’m not immediately thinking “what’s the next productive thing to focus on?”), they can still only last for so long before a fear of laziness and indolence creeps back in, and I need to determine what I should look at next.

This leads us to one possible strategy for dealing with these down periods: always having something lined up. This was serendipitously true in the 2015 instance; that day, a Friday, was my final day as a Palantir intern, and I began my final year of university on the Monday, with a fresh set of difficult modules and the beginning of MCMAS-Dynamic. We were required to take seven modules across the two terms, and I decided to front-load mine. There was definitely a drop in intensity from the first week of October to the second, but it was relatively small; I’d say of the four instances here, this was the one that affected me the least. On a much smaller scale, I didn’t feel any downtime after taking (and passing) the A1 Goethe-Institut exam, because I knew I wanted to learn German to at least a B2 standard. I can often get overly focused on the task at hand to the exclusion of such planning. Pre-planning may not always be possible or optimal, either: at work especially, project selection is more often driven by business requirements and telemetry, with a dose of intuition, all of which can change rapidly. Nonetheless, I think this is a good strategy, especially for personal projects.

There is a natural extension of the above: if one maintains a portfolio of parallel projects, then finishing a large project would have less of an impact. This sounds good in theory, but I am slightly skeptical, mainly because in my experience these projects demand so much attention and time, especially in their tail, that there is often little productive capacity left for others. Furthermore, these “surges” in demand at the end often leave one with an elevated level of work that needs to be wound down to maintain one’s sanity. In a sense, although I don’t particularly like the down periods, they are important and necessary (incidentally, I’m strongly reminded of a phrase I’ve learnt in German class: wichtig und notwendig). Also, if one prioritises some (possibly one) of these projects substantially more highly than the others, it can be difficult to find the motivation to work on the lower-priority projects while there is still productive work to be done on the important ones. I do do this for personal projects (maintaining friendships, learning German, competitive puzzles, writing and personal finance all run in parallel), but these are rarely the ones which call for massive, aggressive surges. (I won’t say that there weren’t any, though: the end of uni was where I focused on the first, and around the change of the tax year I do sometimes need to look at the fifth.)

An alternative approach is to avoid these last-minute pushes altogether, reducing the delta between an intense final surge and not having the project at all. This is plausible, though I’m not certain the last-minute pushes are always avoidable, or for that matter a bad thing. A large amount of the stress in the 2016 instance arose because, with two weeks left, my supervisor Prof. Alessio Lomuscio and I decided to begin formally attacking LDLK on finite traces; the end would have been fairly relaxed if we hadn’t done that. I’m happy we did it, though: this was an interesting (and publishable!) part of the investigation. In the 2020 instance, the deadlines were synthetic, but getting those projects done quickly was important to reduce the amount of concurrent context I needed to track.

In summary, I’ve become more used to down periods after intense stretches of work and/or large projects are delivered; I think these are to some extent necessary, even though I don’t find them particularly enjoyable. Making an effort to have something else planned, and/or working on multiple projects in parallel, is usually helpful where possible. These strategies aren’t always available, though, and if rigidly adhered to might limit the complexity of projects that one can successfully undertake. The down periods can be unpleasant, but I don’t think reshaping one’s projects to avoid them is necessarily worth that tradeoff.

Remote Routines

It’s been about two and a half months since I started working from home, and I think I’d describe it as better than I expected, but still less than ideal. A tip I’ve suggested to my teammates to help maintain productivity while working from home is to establish a daily routine.

I’m aware of this in theory, but haven’t been as good at adhering to it in practice. The availability of meals in the office resulted in a natural routine: I would eat breakfast in the office before the workday started, and would normally finish most development work before dinner. I’d sometimes write documentation or polish up a test after dinner, but usually the post-dinner work would be less intense. This wasn’t true for everyone, of course (one could elect to skip breakfast and/or have dinner at home), but it was for me. I also set up my laptop to remind me at 8 pm to ask myself whether I should stop for the day, and in any case an empty office was a useful reminder that it was late, and that I perhaps should no longer be around.

Initially, I did follow this: I marked the start of the day with a short walk to the local M&S before breakfast. However, things started degrading over time. I think much of this related to wanting to get as much sleep as possible before meetings or work started. I’m not sure what precipitated that; I imagine it was probably a late night, perhaps caused by some combination of working late and getting distracted with reading, computer games or something else. I’ve been reading Puzzlecraft: How to Make Every Kind of Puzzle, and I do remember a recent evening where, at 9.30 pm or so, I started trying to craft a triple of linked Sudoku variants that weren’t independently solvable but had a unique solution taken together, and finished at 1.30 am.

It’s probably related, but I’ve noticed that I’ve recently been working longer hours – with reduced time outside of work it’s invariably tempting to push one’s sleep later, even if doing so may have undesirable consequences. I normally have a 2.5-3 km walking commute to the office (interestingly, walking is currently what the government recommends), which takes me about 30 minutes at a reasonably quick pace. That’s between 45 minutes and 1 hour each day (I almost always walk in the morning, but sometimes take the Tube in the evening) which now seems to be mostly converted into development time. I guess this is a benefit of working from home if one holds one’s hours constant (that said, my commute also has exercise benefits for me).

Some of this might also be because I’ve started doing more independent development work again (there was a time where I spent less than 20 percent of my time on this, while it’s probably between 30 and 40 percent now). It’s now actually possible and directly relevant to my goals to make larger independent pushes. This is intellectually welcome, though I do need to be careful about how far I take it.

I’ve also been getting more sleep for some reason. This may be a product of increased mental fatigue. In addition to development work, the hobbies I’ve been spending time on recently – reading, logic puzzles, learning German and some computer games – are generally quite taxing.

I normally get about 7.5 hours of sleep during the week, and maybe 8.5-9 or so on weekends. I think the amount of sleep I’ve been getting from Sunday to Thursday hasn’t really changed, but I don’t seem to feel as well-rested as normal. In terms of weekend schedules, I do remember an abnormally high frequency of 10- or 11-hour sleeps of late.

There are definitely advantages to working from home – it can be better for focusing on specific difficult problems, and not having to do a 30-minute commute is a big win (possibly a bigger one for others). Given the choice, though, I’m not sure I would want to continue it, perhaps not through any fault of the concept in general, but because I haven’t had the willpower, knowledge, or some other factor needed to thrive on it. There are also other frustrations: video conferencing, while passable, still seems frictionful, and although I’m an introvert, even I find the lack of human contact unsustainable.

Advents

December is a rather unusual month for me. My birthday is in December, as is Christmas and New Year’s Eve. By many metrics it’s also the end of often discretely-viewed periods of time (Q4, H2, a year – this year, a decade as well). It thus tends to lend itself particularly well to both introspection as well as frenzied rushes to complete things before the end of the relevant period.

The Biblical season of Advent begins four Sundays before Christmas and focuses on preparation for and awaiting the celebration of Jesus’s birth. This means that the day on which Advent begins varies depending on which day of the week Christmas falls on (if Christmas is itself a Monday, then four Sundays before that would be the 3rd of December; conversely if Christmas is itself a Sunday, four Sundays before that would be the 27th of November). I was aware of the season in terms of its observance in church, though I don’t think it manifested much outside that.
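As an aside, the rule is mechanical enough to compute. A minimal Python sketch (the function name is mine, purely illustrative):

    from datetime import date, timedelta

    def advent_start(year):
        # Advent begins on the fourth Sunday before Christmas Day.
        christmas = date(year, 12, 25)
        # weekday(): Monday is 0 and Sunday is 6, so this many days back
        # lands on the Sunday strictly before Christmas...
        days_back = christmas.weekday() + 1
        # ...and three further weeks back gives the fourth Sunday before.
        return christmas - timedelta(days=days_back + 21)

    advent_start(2017)  # date(2017, 12, 3): Christmas 2017 was a Monday
    advent_start(2016)  # date(2016, 11, 27): Christmas 2016 was a Sunday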

The name originates from the Latin adventus; more generally there is the word advent, which can be used to describe the arrival of something significant (e.g. “with the advent of refrigeration, food could be kept fresh for longer” is fine, while I’d find “the 20th of September marks my advent in the UK” strange unless you’re someone who made significant changes to the UK). The naming of the season is apt from a Christian perspective (for obvious reasons), and probably even without one (though that’s a separate discussion).

I first came across the concept of an advent calendar, which as the name suggests counts down the days to Christmas during Advent, on a virtual pets site called Neopets. On Neopets, this offered a gift and a small amount of the site’s currency every day during December (though, unlike most advent calendars, it also included the days from December 26 to 31). However, apart from the Neopets one I wasn’t aware of this being a tradition in Singapore (or elsewhere, for that matter).

The idea of gifts isn’t inherent to advent calendars (initial versions served simply as mechanisms to track the days to Christmas), though it is common, especially in commercial contexts. I saw many more of these when I came to the UK – perhaps this makes sense, as Neopets was started by British developers. Many commercial advent calendars feature small items in 24 or 25 sealed and opaque, but individually openable, compartments. The idea here is that one tracks the days to Christmas by opening each compartment only when the relevant day arrives: on December 1, the door marked 1 is opened, and so on until the last day. There’s technically nothing stopping one from opening the later compartments early, but I guess one would be cheating oneself of the theorised anticipation and excitement in the build-up to Christmas.

I tend to associate these primarily with chocolate (I probably first encountered these in Sainsbury’s in my first year), though many other variations (beauty products, alcohol, toys etc.) exist. There are also purpose-built empty containers (presumably intended for people to buy for their kids or spouses) with 24 or 25 small drawers or pockets.

There is even a fairly popular advent calendar for programming problems (Advent of Code), which I’ve found useful to get a bit of algorithm/data structure practice. I’ll be doing that this year in Haskell (maintaining the option to switch to Java or Python if things get too difficult or I get too busy, especially later on in the series).

There’s probably something to be said around what constitutes a good countdown to Christmas. I’ll admit that, on first reaction, I find a number of the commercial ones out there a little awkward. However, the definition of ‘good’ is likely to be highly dependent on what one views the Christmas period to be about. For example, among other things I want to be able to evaluate and introspect on the year gone by, and also to be present when spending time with family and friends – and I find that, say, an alcohol-based calendar is helpful for neither end. However, it could be appropriate for someone who simply enjoys it, and/or who finds it gives them the confidence to interact (or interact better) with family and friends.

Detecting Progress in Sequences

I often try to evaluate whether something that is difficult to measure directly and clearly has improved. For example, I might like to evaluate how my German or logic puzzle skills have changed over time. I could try German past exams or look at logic contest results – however, one problem with these is that there is a lot of noise. For example, for the logic contest results, a score could be artificially low because I had a bad day or the contest was unexpectedly hard; it could also be high because I made multiple lucky guesses on difficult puzzles. Thus, a single measurement is unlikely to be sufficiently reliable.

One solution is then to use one’s average score, or take other statistical summaries of multiple data points. However, we probably don’t want to consider all of the data points we have as equally important. For example, I ranked 155th in a Sudoku GP contest in late 2017 – if I’m trying to evaluate my skill level at the end of 2019, that’s probably not relevant.

We could pick a cut-off point (for example, the beginning of 2019, or the last ten contests), discard all of the data from before that, and then treat the remaining data as equally important. This is the basis of sliding window algorithms; if we say that we’re interested in one’s average score from the last ten contests, we can find this metric over time by considering, at each point, only the window of data ending there. There are methods for calculating these metrics efficiently (taking time linear in the length of the data stream).
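For instance, a windowed mean can be maintained by adding each new observation to a running sum and evicting the observation that falls out of the window; a minimal Python sketch (names and example scores are mine):

    from collections import deque

    def windowed_means(stream, window_size):
        # Each value is added once and removed once, so a full pass is
        # linear in the length of the stream.
        window = deque()
        running_sum = 0.0
        for value in stream:
            window.append(value)
            running_sum += value
            if len(window) > window_size:
                running_sum -= window.popleft()
            yield running_sum / len(window)

    # e.g. list(windowed_means([70, 80, 90, 60], 2)) == [70.0, 75.0, 85.0, 75.0]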

Unfortunately, choosing a suitable window can be difficult – small windows can be vulnerable to noise, while large ones may fail to account for trends present within an individual window. As far as I know, this selection is more of an art than a science.

We can use more complicated approaches as well. Instead of picking a hard cut-off, where data from before the cut-off is irrelevant, we can treat data points as becoming gradually less relevant over time. A method that’s often used is exponential weighting: giving the most recent observation a weight of \alpha (for some 0 < \alpha < 1), the second most recent a weight of \alpha (1 - \alpha), the third \alpha (1 - \alpha)^2 and so on. As \alpha approaches 0, we approach a simple historical average; as \alpha approaches 1, we approach remembering just the most recent element. I’m not sure the underlying assumption, that events become exponentially less relevant over time, is always appropriate.

In spite of possibly sounding complex, this method has computationally favourable properties. If we’re keeping track of a stream of data, we don’t actually need more than constant additional memory. It’s enough to keep just the previously reported average, because incorporating a fresh data point D into our metric can be done by S_{new} = \alpha D + (1 - \alpha) S_{old}.

There are some dangers here as well. The first challenge is bootstrapping; how should one pick the initial value of S? One could use the first observation, or perhaps an average of the first few observations if short-term divergence from reality is unacceptable.
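A minimal sketch of such an average, bootstrapped from the first observation (one of the choices above; the class name is mine):

    class Ewma:
        # Constant additional memory: only the previous average is kept.
        def __init__(self, alpha):
            self.alpha = alpha
            self.value = None

        def update(self, observation):
            if self.value is None:
                # Bootstrap from the first observation; an average of the
                # first few observations is a reasonable alternative.
                self.value = observation
            else:
                # S_new = alpha * D + (1 - alpha) * S_old
                self.value = (self.alpha * observation
                              + (1 - self.alpha) * self.value)
            return self.value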

I think there’s also a risk of massive outliers skewing the average (e.g. an API call which usually takes nanoseconds exceptionally taking an hour because of a system outage). This exists with any statistical technique, but if \alpha is small, our estimate will be “corrupted” by the exceptional data even after quite a few additional measurements. With the sliding window method, once the window has expired, the exceptional data point drops out entirely.

In general, the methods we’ve covered assign weighting functions to the data points – the simple average just assigns the same weight to everything, the sliding window assigns the same weight to everything in the window and 0 to things outside the window, while the exponentially weighted moving average (EWMA) weights each point differently based on how recent it is.

As an extension, there are techniques for maintaining constant-size reservoirs of values that can be used to approximate more general summaries like order statistics, standard deviations or skewness. These often rely on holding a subset of the values being observed in memory. The selection mechanism for which values should be kept can be written to bias towards more recent measurements. In some ways, the calculation of our standard sliding-window based moving average can be implemented as a special case of this, where new entries are always included, and the oldest entry at each insertion is evicted. That said, we would probably not do this for just an average, as we can do that with constant memory (just remember the current average).
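One simple recency-biased selection scheme (a sketch, and certainly not the only option): always admit the new value, and once the reservoir is full, evict a uniformly random resident, so that a value’s survival probability decays geometrically with its age.

    import random

    def biased_reservoir(stream, k):
        reservoir = []
        for value in stream:
            if len(reservoir) < k:
                reservoir.append(value)
            else:
                # Always insert the new value, evicting a uniformly random
                # old one; after m subsequent insertions a value survives
                # with probability ((k - 1) / k) ** m, biasing the
                # reservoir towards recent measurements.
                reservoir[random.randrange(k)] = value
        return reservoir

Evicting the oldest resident instead (a FIFO queue) recovers the sliding window as the special case mentioned above.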

It’s not a particularly scientific or deterministic method, but in practice I find it useful to consider graphs with various transforms applied to them and draw conclusions from those. I don’t have sufficient statistical background or intuition to decide beforehand what would work well, unfortunately.

Boundary of Lines and Gradients

In The 7 Habits of Highly Effective People, Stephen Covey introduces the notion that things we encounter in life may be classified as irrelevant or relevant to us (whether they are in our circle of concern), and then of the things that are relevant, whether they are things we can influence (hence circle of influence).

This idea is introduced in the context of arguing that highly effective people are proactive. The book claims that there’s a risk that one spends too much time focusing on things within one’s circle of concern, but external to one’s circle of influence. By definition, these can’t be meaningfully changed by one’s actions, so one logically should not focus on these; yet, there is a tendency for one to index too heavily on these, perhaps because they are worrying to look at.

I remember some concern over my work visa application process when I was still studying at Imperial; at the time, the UK Migration Advisory Committee was actively reviewing the Tier 2 scheme with a specific focus on post-study work. Obviously, that was within my circle of concern, but out of my circle of influence. I was quite concerned about it then, and spent a good amount of time researching what Tier 2 visas looked like and what changes might be on the way.

I also took positive action to hedge these risks (which was within my circle of influence). This included investigating opportunities elsewhere, restricting my internships to places that were willing and able to sponsor Tier 2 visas, and trying to be “so good they can’t ignore you”. Looking back, I think the hedging was valuable; poring over visa rules was less so.

So far, so good. However, there is some difficulty in applying this idea. Two challenges I’ve thought about recently are accurately identifying the boundaries of one’s circle of influence, as well as figuring out when one should exercise one’s ability to influence things in one’s circle of influence.

In terms of identifying boundaries, one may incorrectly identify things as within one’s circle of influence, owing to overconfidence or excessive optimism – a classical example being what other people, especially people we don’t know, choose to do. The inverse would be identifying things as outside it out of fear; I’ve done this before with some aspects of how I’d been investing my free time (in particular, a commitment to over-studying).

Errors in both directions may also stem from ignorance; for example, one may (correctly) think that the spot GBPUSD exchange rate is mostly not in one’s circle of influence, and then extend that to say that the number of pounds one must pay for a known upcoming dollar expense is out of one’s circle of influence (wrong; forward contracts and other hedging methods exist).

Some authors make further distinction between things one can influence and things one can control, which usually implies personal ability to decide the outcome (for example, one can control one’s own decisions, but can probably only influence another person’s behaviour). I find this a fairly useful distinction to make.

We then move to figuring out if things within our circle of influence are actionable. They can be, but aren’t always. For example, I’ve thought about how to insulate my portfolio from the storms of Brexit and the recent market turbulence. On one hand, my asset allocation is clearly within my circle of influence. I can sell everything and go to cash (or, if one’s worried about the pound, to the US dollar or yen). I’d say it’s even stronger than that – it’s something I can directly control in that I know I am able to execute the relevant transactions if I want to.

Yet, I don’t necessarily know how changes in my asset allocation will affect the returns I will get and the risk I’ll be taking on. I can make some estimates based on past distributions, but those will suppose that at least to some extent past performance is indicative of future returns; there might be ‘black swans’ which can mess with this and by definition are not foreseeable (and thus outside of my circle of influence). In a sense, I know I can change things, but I don’t know as well what impacts my changes will actually have; there is also a cost to implementing changes (both in terms of tax and dealing fees). Making repeated portfolio changes that individually make sense, in the belief that one can significantly influence returns or risk, could turn out to be very costly.

A variant of this is that we may lose sight of actions we should probably take that contribute towards progress on a goal in our circle of concern, even if we (correctly) identify it as mostly outside our circle of influence. This might include the broader state of the economy, environment or public health – it’s probably reasonable to think of these as things we can’t directly influence, but that shouldn’t be a reason to ignore them altogether. These concerns should be accounted for by related things within our circle of influence, possibly at a finer resolution (e.g. “I will continue to work, earn, invest and spend” because I am concerned about my own personal economy and living standards), but they might not necessarily be.

I agree that we should focus our energies on things that we can influence; that said, we need to be careful to identify these correctly (or for things that are large, identify what we can influence), and also to be aware that being able to influence something doesn’t mean we should.

Rebooting Ourselves

It was a fairly rough week. Thus, over this weekend I thought about re-examining and resetting various procedures and things I do, as opposed to actively filling my time with activities. This reminded me of a Humans of New York post I came across several years ago:

“I’m rebooting my life entirely, again. It’s time for Andrew 5.0.”

In computer science, semantic versioning is a system for identifying different versions of software products in a standardised way. Under this system, a product’s version is an ordered triple of nonnegative integers written major.minor.patch. This is typically used for software, though the definition does not seem to require it. The system discusses changes in terms of a public application programming interface (API), which specifies what functionality the product offers.

In terms of software, a SQL database’s API could include the types of queries that may be processed. For MCMAS-Dynamic, the public API would include the details of the modelling language it is able to verify properties for. A non-software example could include a simple kettle; the public API could include how one adds or removes liquid, how one turns it on or off, and possibly alarms or other feedback mechanisms for when the liquid has boiled.

When a new version of a product is released, the version number is increased (in lexicographic terms). How this increase is done depends on the types of changes since the previous version:

  • If the public API is ‘broken’, meaning that previously valid ways of using the API are no longer valid or accomplish different things, then the change requires a major version bump. To do this, the major version is incremented, and the minor and patch versions are reset to 0 (e.g. 7.5.1 \leadsto 8.0.0). For example, if the kettle used to play an alarm when the liquid was boiled and this was a part of the public API, then the major version should be bumped if this functionality is removed. (In particular, if the API did not specify that the kettle had to play an alarm, this change might not warrant a major version bump.)
  • If new features are added without ‘breaking’ the API or there are non-trivial internal improvements, the change leads to a minor version bump. The minor version is incremented and the patch version is reset to 0 (e.g. 7.5.1 \leadsto 7.6.0). For example, if the new version of the kettle is substantially more energy-efficient, then that could be a minor version bump.
  • If something was broken and has been fixed (without changing the public API), then the patch version should be incremented (e.g. 7.5.1 \leadsto 7.5.2). For example, if the kettle previously rang an alarm twice when the liquid was boiled even though the kettle’s API specifies it should only ring once, then a change that makes the alarm only ring once could be part of a patch version bump.
  • Multiple changes should be evaluated in aggregate. In most cases the largest magnitude among the constituent changes applies, though this is not a strict rule (consider one bugfix plus two changes, one of which breaks the API and another which reverts that break – together these warrant a patch bump, not a major bump).

Generally, making a more aggressive version bump than would be required for one’s change is acceptable, though it can confuse users. In particular, I tend to expect backward-incompatible changes when facing a major version bump; not finding any can be surprising and confusing.
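The bump rules themselves are simple to state in code; a sketch (the function name is mine):

    def bump(version, change):
        # version is a (major, minor, patch) triple; change is 'major',
        # 'minor' or 'patch', describing the aggregate change.
        major, minor, patch = version
        if change == 'major':
            return (major + 1, 0, 0)
        if change == 'minor':
            return (major, minor + 1, 0)
        if change == 'patch':
            return (major, minor, patch + 1)
        raise ValueError('unknown change type: ' + change)

    bump((7, 5, 1), 'major')  # (8, 0, 0)
    bump((7, 5, 1), 'minor')  # (7, 6, 0)
    bump((7, 5, 1), 'patch')  # (7, 5, 2)

Conveniently, Python tuples compare lexicographically, so each bumped version compares greater than the version it succeeds.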

The sentiment of the quote sounded like it was a major version bump. Defining an API for one’s life is obviously very difficult; even if one tries to use a lot of abstraction, I find that there are just too many facets. Rather loosely, our API might be split into a bunch of micro-services. We can treat physical needs and bodily functions like breathing and digestion as infrastructural. These services might then focus on the range of activities we involve ourselves in, or range of activities we could involve ourselves in. For me personally, this could include software engineering, getting along with other people, finance and budgeting, computer science, writing, puzzle solving and so on.

I would imagine that we chew through a lot of patch versions as we continue to improve our skills. Today’s release notes could include “Jeremy knows a little bit more about thread pools” (I read a chapter of Java Performance Tuning today over a post-lunch coffee). Minor versions would also be relatively common; this wouldn’t be from today specifically, but “Jeremy can vaguely attempt a Balance Loop puzzle” is probably pretty recent, extending the sudoku and other puzzle-solving features.

Depending on how we define the API, major version bumps could be very common. In a software engineering context it is typically important to be relatively disciplined with breaking changes to an API, as clients may depend on one’s product in non-obvious ways. While others’ dependencies on us can indeed be non-obvious, I think one factor that changes things is that our capabilities seem to be ephemeral whilst program code is not. A codebase left as it is over years or centuries retains its capabilities (admittedly, finding suitable infrastructure to run the product might be an issue).

On the other hand, there is some evidence that we lose skills that are underutilised with time. I used to play Dance Dance Revolution quite a lot and could probably pass an arbitrary level 15 song as well as some 17s; I doubt I can do that today as I haven’t played in a few years. The ways we interact with others or manage our finances may change as our personal environments change as well; for example, if I moved away from the UK, I would not be able to allocate my investments the way I do now, because I would lose the ability to use ISAs (and probably most other forms of UK-specific tax-free savings). This may even happen without action (for example, if the UK government changes how ISAs or tax-free savings work) – though you could argue that declaring the use of specific vehicles in one’s API might be too specific and implementation-dependent (“I will use tax-advantaged accounts that are valid in my location appropriately” is maybe better).

In light of the above, I would be a bit laxer with what constitutes a ‘breaking change’, which pulls things back toward the subjectivity that I think semantic versioning was trying to avoid. I might regard myself as having major version 2 right now; I could consider everything up to and including my second year at Imperial as version 0, which is typically used in development to refer to a pre-release period of rapid iteration. Although National Service and/or moving to the UK for studies did bring about nontrivial changes, I really didn’t know what I wanted to do at that time (not that I know now, but there is at least a vague direction).

The Google internship was probably the turning point for version 1; that also coincided with several major changes with regard to finance, investment, philosophy and priorities. I’d say the second major change came when I graduated from Imperial and started at Palantir; even then, I’d regard the first set of changes as more fundamental. The re-examination I did over the weekend is actually probably a patch release (or maybe a minor one that improves several non-functional characteristics); it certainly doesn’t warrant a major version bump.

Scoring Points

I think it would be reasonable to describe myself as competitive. Through my years at university I had a goal of scoring a GPA of 90 percent (where a first-class degree is awarded at 70 percent); later on I added the condition of also working a part-time job. I’ve enjoyed participating in a variety of contests from a young age, both in terms of work (Singapore Math Olympiad, ICPC) and play (Sims 4, music exams, puzzles).

Speaking of puzzle contests, I participated in the UK Sudoku Championship 2018 at the beginning of this week. Participants were given 2 hours to solve 16 puzzles of varying difficulty. To keep things interesting, the puzzles aren’t just standard or “classic” sudoku, but also feature additional constraints – for example, the highest-scoring puzzle had an additional constraint where both major diagonals had to have the digits 1 to 9 once each as well. I solved 13 puzzles, which led to a placement of 42 out of 136 (or 157, depending on whether one considers zero scores). I finished the remaining puzzles after the time ran out and would have needed about 2:20 overall; the fastest solver cleared everything in 1:09, just under half that!

I think one of my weaknesses can be a tendency to over-index on pursuing things that I think are worth doing. This may not work out so well if my thoughts turn out to be incorrect or misguided, or if the pursuit causes unpleasant side-effects. Writing this reminds me of the 90 to 100-hour weeks I used to pull in second year when studying for the exams then.

Anyway, one of the things I’ve been looking at recently has been building up a healthy portfolio. I had a conversation with my parents today, and one of the things that came up as part of the discussion was something I used to do in high school.

In Singapore, most food courts and hawker centres have an economy rice stall. The stalls offer steamed white rice accompanied by various cooked dishes. These include meat-based dishes like fried chicken or Chinese char siu, vegetable dishes and tofu, among others. Ordering from an economy rice stall is typically cheap (as the name suggests); thinking about it, it also bears similarity to the low-cost airline model, in that one chooses and pays specifically for what one wants, and has flexibility in choosing what is included.

My high-school canteen had an economy rice stall, and I remember budgeting S$1 (now 56p; then probably more like 45p or so) for lunch every day. This would typically result in fairly unhealthy meals, admittedly, with rice plus one hot-dog and a small portion of nuts and anchovies. For comparison, many meals in the canteen ranged from S$2 to S$3. Note that cooked food in Singapore is generally much cheaper than in London, and food in school canteens is typically subsidised.

Some of this frugality continued when I entered university. I had similar ‘adventures’ in my first year at Imperial, where I somehow maintained a grocery budget of £7 per week (£1 per day) for a full academic term. Things relaxed over time, especially as I convinced myself that I did have reasonable earning power. To be fair, there was increasing evidence of that as I did several internships and part-time jobs while I was at Imperial.

Clearly, if one is trying to build a portfolio quickly, one factor that can be optimised on would be maximising fresh inflows to the portfolio. Minimising one’s expenses and thus increasing one’s savings rate helps with that, of course.

However, aggressive expense minimisation often leads to other appreciable costs in terms of health, happiness and stress. Apart from direct opportunity costs (for example, from foregoing higher quality or safer food), there are secondary effects as well (affecting socialisation, for instance). The mental overhead of needing to evaluate many small financial decisions every day can be significant as well; I’ve found it to be the case in my personal experience.

Having a sizable portfolio can be useful; I don’t think I have to be convinced of the value of financial independence even though I don’t necessarily have early retirement plans at this stage. However, as an end in and of itself it is not particularly satisfying. I’m fortunate in that I haven’t taken on large debt obligations, and right now (though I may regret saying this later) bumping the numbers in some brokerage account or on my spreadsheets is nice, but brings very little marginal happiness.

It may well be obvious that, to quote a blog post I read this week, life is more than compounding money. In general, reducing it to an arbitrary single end is difficult. Nonetheless, when pursuing a goal I sometimes lose some sight of broader things, and it’s thus important to remind myself of this.

It has been said that “what gets measured gets managed”, and for me at least most goals tend to be fairly quantitative and measurable in nature. I often place these in some kind of OKR framework, which isn’t as friendly to softer, more qualitative tasks. That probably explains to some extent the “losing sight of broader things”. Some may have a very clear and specific view of what they want to do, but I’m not there yet.

On Challenges that Build

On my return flight from Singapore to London, I listened to quite a few hours of music. Two of the songs I listened to, and enjoyed at least partially for similar reasons, were It’s Gonna Be Me (by NSync) and I Can’t Be Mad (by Nathan Sykes). It’s a bit of a strange pairing, as the former seems to be an upbeat, relaxed pop song while the latter is a fairly moody piano ballad. However, the common element I latched on to is that both songs feature sections that are repeated multiple times, with the vocals developing additional complexity on each iteration (thinking about it, this is fairly common both in songs that review well critically and in songs I like). For example, in It’s Gonna Be Me there is a line in the chorus which is sung four times over the course of the song, and its complexity develops with each pass.

The repetitions in I Can’t Be Mad involve a couple of changed notes, but also (if trying to reproduce the original) demand different productions of the notes (falsetto or not, belts, etc.). There’s always a risk of adding too many embellishments, though I find expanding upon base melodies can be quite interesting. Singing these, and considering what would be reasonable for my voice (adding a closing run to the final syllable, for instance) and what would not be (adding a +1 semitone key change after the second chorus in I Can’t Be Mad – the original is already awfully hard), can be enjoyable too.

Generalising this, I quite like the idea of “increasingly complex variations on the same theme” when learning concepts and when teaching them. This already happens for many concepts in mathematics. Over the course of an A-level student’s mathematics education, he/she might learn how to write a quadratic expression as a product of linear factors (e.g. converting 6x^2 - 19x - 7 into (2x-7)(3x+1)). This could begin with expressions where factorising by inspection is feasible. However, students should also be presented with some examples where inspection is extremely difficult or even impossible (though probably only after gaining some confidence with the cases where inspection is plausible). For general expressions, one could use the quadratic formula and the factor theorem to factorise something like 6x^2 - 19x - 8 into -\frac{1}{24}(-12x + \sqrt{553} + 19)(12x + \sqrt{553} - 19). There will be some expressions, like 6x^2 - 19x + 16, where the solutions to the quadratic are not real; later, with some understanding of complex numbers, these would make sense. Students will also learn about problems which may not obviously be quadratics but can be written as such (like x^4 + 2x^2 + 1); the ability to synthesise the various techniques can then be tested with something like 7x^8 - 10x^4.
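These stages can all be checked mechanically with a computer algebra system; for instance, with sympy (assuming it is available):

    import sympy as sp

    x = sp.symbols('x')
    sp.factor(6*x**2 - 19*x - 7)     # (2*x - 7)*(3*x + 1): amenable to inspection
    sp.solve(6*x**2 - 19*x - 8, x)   # irrational roots (19 +/- sqrt(553))/12
    sp.solve(6*x**2 - 19*x + 16, x)  # complex roots (19 +/- sqrt(23)*I)/12
    sp.factor(7*x**8 - 10*x**4)      # x**4*(7*x**4 - 10): a quadratic in x**4 in disguise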

To some extent my Masters project also had this theme – linear time logic, adding knowledge, adding dynamic modalities, generalising that to full branching time logic, and then switching out the infinite traces for finite traces. I haven’t written a course or a book on a computer science topic yet, but I can imagine that there might at least be sections that follow this kind of sequence.

This pattern also occurs a fair bit in many technical interviews I’ve seen, where problems start easy, but additional and progressively more challenging constraints are repeatedly introduced. The purposes here could include testing for a breaking point, seeing how candidates react to problems without an obvious solution, or checking whether they are able to synthesise additional information to come to a solution.

I find that I often learn best by practicing on smaller examples at first, and then (attempting to) generalise their conclusions to larger models, considering when these conclusions hold and when they fail. Having multiple variations of progressive difficulty can be useful, as they give a sense of achievement as partial progress towards an overall goal is made. Furthermore, I find that reasoning about how changes in the problem scenario make the base solution method applicable or inapplicable is a key part of understanding; there is a clear need to do this when considering incremental variations. Going back to It’s Gonna Be Me, for example, aiming downwards at the word ‘love’ and not conserving sufficient air or energy for it might work for the first three passes, but it’s unlikely to on the last round.

There is a risk that the method can be frustrating in that it seems like it is consistently ‘moving the goalposts’, especially if one forgets that the partial goals are partial goals (and starts to think of them as complete ends in and of themselves). The standard I’m using for understanding (ability to critically evaluate applicability in novel contexts) may be seen as a little high. I also haven’t covered how to bootstrap the method (that is, how to develop an understanding of how to attack the base problem before any variations are introduced). Nonetheless I think there are some contexts where this works well. I’ve found it to be useful in singing, mathematics and interviewing at least!

Making Heads of Tail Risks

I remember that I was fairly anxious at the beginning of my fourth year at Imperial. I was concerned about securing work after university. Looking back, this seemed patently ridiculous; I had topped my class for the third time and already had a return offer in hand from Palantir. However, owing to sweeping government rhetoric about controlling post-study work visas at the time, I saw “not being able to get a work visa” as the primary risk then, even if it was remote. That assessment in and of itself was probably correct, though the time I spent monitoring and mitigating that risk (reading up on government committee reports, and considering alternatives like an H1B1, an EU Blue Card or doing a Tier 2 ICT after a year) was excessive.

Of course, this never materialised; and even if it had, the only likely impact would have been that I’d have had to fly home to Singapore between finishing uni and starting work (I did not, though in hindsight that might have been a good thing to do).

I’m not sure when I first became aware of the concept of probability density functions (or, for that matter, continuous random variables). These functions take on nonnegative values and integrate (across all variables) to 1. In the case of single-variable functions, one can plot them on a two-dimensional graph; one may get results looking somewhat like the picture above, in some cases.

Areas of regions underneath the graph are proportional to the probability that a value falls in that region. For example, a uniform distribution has a density that’s just a horizontal line over its support. The graphs for the returns of investments 1 and 2 in the example above follow what’s called a normal distribution; investment 3 follows a Student’s t distribution, which has fatter tails.

Since areas are proportional, a simple technique for generating random values from an arbitrary distribution is called rejection sampling; if one draws a box around the distribution and throws darts randomly at it, one can take the x-coordinate of the first dart that lands underneath the function as a representative random sample.
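A sketch of the dart-throwing in Python (the density and bounds here are illustrative, not tied to the investments above):

    import math
    import random

    def rejection_sample(pdf, lo, hi, pdf_max):
        # Throw darts uniformly at the bounding box [lo, hi] x [0, pdf_max]
        # and return the x-coordinate of the first dart under the curve.
        while True:
            x = random.uniform(lo, hi)
            y = random.uniform(0, pdf_max)
            if y <= pdf(x):
                return x

    # e.g. a (truncated) standard normal; its density peaks at 1/sqrt(2*pi) < 0.4
    normal_pdf = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    samples = [rejection_sample(normal_pdf, -4, 4, 0.4) for _ in range(10000)]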

That’s a basic mathematical introduction. If we had to rank the quality of the return profiles above (remember: right means higher returns), a lot would depend on what we were trying to do. I would personally rank investment 2 (the green curve) on top; it has a considerably higher mean return than investment 1 (blue) and adds only a small amount of variability. We can calculate what’s known as the standard deviation of a given distribution; this is a measure of how much variability there is with respect to the mean. In fact, the blue curve has a standard deviation of 0.6; this is 0.7 for the green curve.

Ranking investments 1 and 3 is more difficult; the mean of 3 is higher, but it adds a lot of uncertainty. I’d probably rank them 2, 1, 3. However, there is also an argument in favour of investment 3 – if one is only interested in whether the returns exceed a certain level. It’s a similar line of argument to this: if you asked me to double a large sum of money (nominally) in 20 years, I’d pick a bond; in 10 years, a general stock index fund; and in 10 minutes, probably blackjack or aggressive forex speculation.

Whichever investment we pick, it’s possible that we may get unexpectedly awful (or excellent!) results. The standard deviation could give us some measure of what to expect, but there is still a non-zero probability that we get an extreme result. For the normal distributions (the blue and green curves), there is a 99.7% probability that a single observation will be within three standard deviations of the mean; this does also mean that there’s a 0.3% probability it does not, and about a 0.15% probability it’s lower than three standard deviations below the mean.
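These figures can be recovered from the standard normal’s cumulative distribution function, which in Python is a one-liner via the error function:

    import math

    def normal_cdf(z):
        # P(Z <= z) for a standard normal Z
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    normal_cdf(3) - normal_cdf(-3)  # ~0.9973: within three standard deviations
    normal_cdf(-3)                  # ~0.00135: below three standard deviations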

Tail risk refers to the risk of events that may have severe impact but are low-probability; considering them is important. Going back to the work visa situation, I think I correctly identified visa policy changes as a tail risk, though in hindsight controlling the amount of time spent mitigating them was done poorly – akin to spending $10 to insure against a 1% probability of $100 loss (provided the $100 loss wasn’t crippling – which it wouldn’t have been).

I also spent a lot of time focusing on mitigating this specific tail risk, when perhaps a better solution would have been developing resilience to general tail risks that might affect my employment. The obvious routes at the time would have been continuing to do well academically and developing my skills, though others exist too – such as having a greater willingness to relocate, living below one’s means and building up an emergency fund. There are still further tail risks that the above wouldn’t address (e.g. a scenario where computers and automation are universally condemned, all countries practice strict closed-border policies and the global fiat money system collapses), but the costs of mitigating those risks seem untenably high. I haven’t read Antifragile yet (what I describe here is weaker, as it doesn’t involve benefiting from low-probability events), though it’s on my reading list for some point in the future.

This Side of Town (Goals for 2018)

I’m back in London, though I still feel like I’m on holiday. While that’s not a bad thing in and of itself, it doesn’t quite feel like 2018 has fully started yet. I have an annual goal-setting exercise after the year-end reviews, to help me figure out what I should be focusing on in the year ahead.

Software Development

A1. Grow rapidly as a software engineer.

I’d say progress was certainly made here in 2017. I see this as measurable by considering the change in the range, scope and depth of issues and questions I receive and am able to answer, fix or address. Nonetheless, setting a benchmark is pretty difficult. In previous years I’ve written this as “be a strong engineer”, but for me at least I know that’s going to end in failure. I won’t be surprised if I end up complaining at the end of the year that I didn’t grow rapidly enough; while that would be understandable, it’s something I’m less likely to berate myself for.

When I was in Singapore, I met up with a close friend, and for both of us it turned out that 2017 was a year largely centered around the pursuit of technical and career development – to the point that we struggled to think of other things to remember the year by. While growing technically is definitely something I want to do (it is target A1, after all), it shouldn’t be at the expense of all else.

A2. Present a paper on computational logic.

We had two papers in 2017 based on the work done as part of my Masters’ thesis. Things are getting a little trickier now, as writing more will require some original extension of the work that was already done (as opposed to merely tightening up existing work). Nonetheless, I think it’s a target worth keeping.

A3. Get at least two patents in the pipeline.

I enjoy the creative parts of my job – while some of it is indeed cobbling together glue code to ensure that other pieces of code interface correctly, there are many more interesting bits involving designing and building new systems, whether as part of normal work or otherwise. These more… creative projects can be filed as patents, and setting this target serves as encouragement to look beyond the day-to-day on my team and think more carefully about what can be improved.

Skill Development and Experiences

B1. Write 52 full-length blog-posts on this blog.

I failed this last year; in the end I wrote just 37. I have a few ideas for how I can do things differently this year to give myself a greater probability of success at this one, such as having a series of book or paper reviews, which should also get me to read more widely.

Of course a once-per-week cadence is the target here, though I’ll consider this successful regardless of the actual temporal distribution of the posts. I define full-length as requiring at least an hour of thinking and writing. I won’t set a word-count requirement (in case I write a poem, or some kind of “Explain Like I’m 5”), though for standard non-fiction prose I’d say it tends to be somewhere between 700 and 1500 words.

B2. Visit 12 distinct countries this year.

The point of this is that I’d like to spend a bit more time travelling while I still can*. In terms of edge cases, I’ll allow Scotland, Wales and NI to each count as one (I probably wouldn’t allow this if I’d been there before); trips as part of work count, but airside transits do not. If I ever do a mileage run (i.e. fly purely to preserve airline elite status), that doesn’t count either. There is no requirement for novelty (so Singapore does count, as would a likely trip to the US at some point).

*for health / work / other commitment reasons, not because of Brexit! Since the UK isn’t a part of Schengen I don’t think I really benefit that much from residing in the UK for this, other than proximity and less fettered imports.

B3. Walk 3,650,000 steps this year.

This is 10,000 per day and I wouldn’t have managed it last year (I hardly ever broke 10,000 – let alone an average of 10,000). Walking to work helps, but by itself that’s not enough.

This target can be accomplished by walking in circles around my room, though I hope it also encourages me to get out more and try out more different routes.

B4. Be able to sing a B4 consistently, and complete three studio recordings.

This looks like two targets, but I’m pairing them up because they are both related to singing and the alternative of giving music its own section seems a bit excessive.

B4 is a pretty high note (think the high notes in verse 2 of Journey’s Don’t Stop Believin’ – “find” in “living just to find emotion” and “night” in “hiding, somewhere in the night” are both B4s). I’m somewhat more confident of my A4s and Bb4s. I’m able to hit B4s sometimes, but I really wouldn’t go above Bb4 if I had to perform (in fact, I’d prefer to stick with A4s, even).

Obviously, there is no requirement for the B4s to be as bright or sustained as the ones in the Journey song. I’m looking more for reliability here.

The studio recording target is because I have been practicing to maintain my vocal range and to some extent accuracy, but I don’t remember when the last time I actually tried to learn a song was.

Financial Responsibility

C1. Maintain a savings rate of 50 percent or higher. This is computed by

SR = \dfrac{\text{savings} + \text{investments} + \text{pre-tax pension contribs.}}{\text{net salary} + \text{pre-tax pension contribs.} + \text{dividends} + \text{other income}}
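For concreteness, with made-up round numbers (purely illustrative, not my own figures):

    # All figures hypothetical, in pounds per year.
    savings, investments, pretax_pension = 6000, 12000, 4000
    net_salary, dividends, other_income = 36000, 500, 500

    sr = (savings + investments + pretax_pension) / \
         (net_salary + pretax_pension + dividends + other_income)
    print('{:.1%}'.format(sr))  # 53.7%, which would meet the target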

I’ve been thinking about maxing out my pension contributions, but I’m not so keen to do that because of the Lifetime Allowance and also the lack of flexibility in that the money is tied up till I’m 55. This could make sense if I wanted to pursue an early-retirement path, but I’m not currently thinking about that.

This is a pretty ‘vanilla’ target which also existed last year. It’s certainly not easy, though I wouldn’t say it’s that unreasonable either. I was quite a fair bit above this mark in 2017. However, one thing I’m tracking for 2018 is that super-hard saving and investing can also be irresponsible; see this post on Monevator.

C2. Live at at least the UK Minimum Income Standard in 2018.

Without considering rent, for a single person that clocks in at £207.13 per week (so about £10,800 per year). That’s still a substantial bump from my expenditure last year.

It’s very easy for me to be very strict with spending, but sometimes this becomes counterproductive. I’ve spent hours agonising over a £20 decision; even if I made the right choice, the value at stake was at most £20 (and likely less, assuming that paying more yields a better product or service), which prices those hours way below minimum (or my own) wage. I’m rather well taken care of, and sometimes my monthly budgets can be eye-wateringly tight as a result.

Relationships

D1. Maintain clear and regular communications.

In 2017 I did reasonably well on this front, so there’s not much to say other than to keep on keeping on. In practice this goal could probably be split into several sub-goals corresponding to people or groups of people, though that information is a little less public.
