Lessons from a Purple Dragon: Exploring Game Search Spaces

I think I first played the first Spyro the Dragon game when I was about seven years old. I’m not sure what attracted me to the game. It might have been the idea of collecting bright, shiny gems, the rather gentle learning curve (unlike the Crash Bandicoot, Contra or Metal Slug games that I remember having access to on the PS1 at that time) or perhaps the exploration required to complete a 100% clear of the game.

More recently, a collection of all three games in remastered form was released on PC. I never actually got round to playing Spyro 3, for some reason, and the first two games were pretty enjoyable, so I purchased it. It was available on Steam for £34.99, though I searched elsewhere and managed to procure it for just £13.10.

In Spyro 1, the titular purple dragon has flame breath, is able to charge at enemies with his horns, and can jump and glide around (though he cannot fly). Levels in Spyro 1 generally feature a mostly linear path from a level’s starting point to its exit, with enemies and obstacles along the path; figuring out how to traverse this path can be interesting, in that one usually needs to understand how enemy attacks work, and/or how Spyro can navigate relevant obstacles, typically through judicious platforming.

However, reaching each level’s exit is rarely the end goal. Typically, levels contain a number of gems (and sometimes dragon eggs as well), and collecting these requires exploration beyond the main route. Although I certainly wasn’t familiar with the abstract concept at that time, when I first played these levels as a seven-year-old I would usually perform a depth-first search. I would head down an interesting path until it reached a dead end or looped back to somewhere I’d already explored, and then continue to a new path that hadn’t been searched yet.

Depth-first search is naturally a reasonable way to explore a game world. Since players can't teleport freely, breadth-first search is generally not an option. Iterative deepening-style approaches could make sense if paths are one-directional, or if there are frequently very long routes that don't need to be explored on the way from one area of interest to another, since it is usually possible to exit a level and re-enter it from the beginning; however, neither condition generally holds in Spyro's level designs. Thus, for Spyro I don't think this core process has changed very much. What keeps Spyro interesting for me tends not to be overcoming the obstacles on a given path (with some exceptions), but instead identifying legitimate paths that can plausibly be explored. A simple example is in one of the levels in World 1, Town Square: there is an optional area accessed by gliding down a staircase and around a wall. This eluded me for a long time when I was seven, as clearing most of the levels (in the sense of reaching the exit, not 100% completion) can be done by gliding in mostly straight lines.
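
As a throwaway illustration, here is a minimal depth-first search sketch over a hypothetical level graph (the area names and connections are invented); actual exploration obviously happens on foot, but the control flow is essentially this:

```python
# A minimal sketch: treat a level as a graph of connected areas and explore it
# depth-first, backtracking whenever a path dead-ends or loops back.
def depth_first_explore(level, start):
    visited = set()

    def explore(area):
        visited.add(area)
        for neighbour in level[area]:       # paths leading out of this area
            if neighbour not in visited:    # skip areas already explored
                explore(neighbour)

    explore(start)
    return visited

# A toy map where the hidden nook is only reachable via the staircase.
town_square = {
    "entrance": ["courtyard"],
    "courtyard": ["staircase", "exit"],
    "staircase": ["hidden nook", "courtyard"],
    "hidden nook": [],
    "exit": [],
}
print(depth_first_explore(town_square, "entrance"))
```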

Typically, the game will give you hints that such hidden areas exist. Most obviously, each level has a number of gems to collect, and this number is known to the player – so shortfalls typically indicate that there are still hidden areas to be found (provided the player has meticulously collected everything in the areas they have already explored). It is also fairly common for some gems to be visible from one of the main paths, even if it is not made obvious how to reach them. With the Town Square example, one of the dragons in the area does actually give you a big hint on what to do, though I wasn’t aware of this as I skipped cutscenes then.

In the past, I usually had to resort to walkthroughs or online guides to find these answers; I'm pretty confident my approach then was quite different from the way I'd tackle these problems now. Back then, I mostly just tried a ton of different approaches and saw what worked. These days I use deductive reasoning more heavily – perhaps experience with logic puzzle contests has helped. Instead of simply exploring paths that come to light and trying things that look encouraging, I tend to apply a perhaps more principled approach to identifying suitable routes once the obvious methods have been tried. I consider the usual platforming techniques that might be appropriate and test them against the level setup (e.g. is there a high point I could glide from? Could there be some non-obvious platforms that are actually safe to land on? If there are supercharge ramps that increase Spyro's running and jumping speed, can I use them? Can I combine them?). With this, I've been able to solve quite a number of levels that I wasn't able to in the past.

Some of this reminds me of a book I read earlier this year, Problem Solving 101: A Simple Book For Smart People by Ken Watanabe. The techniques introduced there for deciding how to approach problems apply to clearing a Spyro level – what one should do varies with the root cause of the problem, and it is probably more expedient to think through what needs to be considered in a principled way than to scurry off aggressively pursuing whatever solution first comes to mind. I don't think I've ever faced a level difficult enough to need a full-blown logic tree drawn out, but even mapping out such a tree in my mind helps a lot.

Algorithmic Modelling – Touhou Project 11 (Subterranean Animism)

The Touhou Project is a series of “bullet hell” shoot-em-up games. In these games, the player controls a character within a 2D plane and needs to dodge large quantities of bullets. These games tend to be fairly difficult, testing players’ reflexes and in some cases logic as well (for example, many patterns are aimed at or relative to the player’s position; misdirecting such patterns can be useful).

I wouldn’t say my reflexes are very good. Nonetheless, good progress can be made using careful resource management; players are given tools in the form of lives (extra chances after getting hit) and bombs (single-use abilities that clear the screen and deal damage to enemies). The eleventh installment of the series is called Subterranean Animism (SA), and I’m choosing to look at it for this post because it is widely regarded as the hardest game in the series to clear on normal difficulty. For most of these games (on normal), I can just sit down, play the game, dodge bullets and win. There were two exceptions – the fifteenth entry Legacy of Lunatic Kingdom (but even then that only took about five attempts), and SA. SA required a nontrivial amount of planning – I had to learn some of the specific patterns, and also chose a shot type I wouldn’t normally pick.

I generally play Touhou games on normal with an aim to complete the game on a single credit; this is called a “1 Credit Clear” or 1CC. I’m generally somewhere in between difficulties; I’m fairly comfortable with Normal in most cases, but Hard is hard (the game also has an even harder Lunatic mode). I’ve successfully completed 1CCs of most of the games in the series on Normal, and a few of the easier ones (7, 8, 10) on Hard. SA was the toughest game in the series for me to 1CC; it is also the last one I did, at least from installments 6 through 16.

Resource Management

Touhou games usually start the player with two spare lives; this is true in SA as well. However, the bomb mechanic is different from other games, which give the character a fixed number of bombs per life. In SA, players sacrifice some of their shot power when using a bomb. A character’s shot power usually ranges from 0.00 to 4.00; this is increased by collecting powerups when fighting stages or bosses. Firing off a bomb costs 1.00 shot power (and cannot be done if one is below 1.00). This can be frustrating, as some patterns become more than proportionally harder if the player’s shot power is low. When a character is hit, she (the games feature an all-female cast) will drop powerup items such that shot power will be reset to at least 2.25 (higher if shot power was at least 3.00). There is an exception – if it is the character’s last life, a full powerup item will drop that sets shot power to maximum.

The game also has mechanics for earning additional lives. In SA, boss enemies have a staged health bar with multiple patterns; if the player defeats a pattern within its time limit without being hit, a life fragment drops, and five life fragments grant an extra life. Bombs are allowed – using one does not forfeit the fragment.

A Touhou game is divided into six stages; typically stages 1 through 3 are mostly a non-event for me. That said, for SA, the boss of Stage 3 has a few fairly nasty attacks. Most of my aforementioned “blind” or casual 1CCs involve racking up large stocks of lives and bombs on these stages, and then utilising these aggressively in the later stages. We can see this on SA, as well as on what is often regarded as one of the easier entries in the series, Imperishable Night (IN); the first death on SA is at the end of stage 4 while that on IN is at the end of stage 5. That said, I’m actually already failing to dodge patterns as early as stage 3 or 4. It’s important to be willing to use bombs to deal with difficult patterns, as they are much easier to recover (by subsequently picking up powerup items) in SA. This becomes even more important in other games like IN, where bombs that are unused when a player is hit just go away.

Character Selection

Touhou games usually give the player a choice of multiple player characters, and sometimes for each character different weaponry. Typically, different characters have different movement speed and possibly some other advantages or disadvantages, like having a smaller hit-box or extra bombs. In terms of weaponry, players may select from different normal shots and bombs, which usually have balanced trade-offs. For example, one may pick a homing shot which does less damage but can hit enemies anywhere on the screen, or a shot that only shoots straight ahead but does more damage.

Earlier, I mentioned being able to sit down and just play the game; in most cases this involves the main character of the series, called Reimu, who usually (and in SA) has relatively slower movement and a small hit box. I also normally use her homing shots for a first playthrough, even though I usually prefer straight shots after I get more comfortable with the game. These don’t quite exist in SA.

Apart from slightly different shooting and movement mechanics, many Touhou games also feature a medium to late stage boss (often on Stage 4) which adapts her patterns to the player’s character selection. This is on full display in SA as well; the Stage 4 (out of 6) boss has a relatively easy warm-up battle that is static, before reading the player character’s mind and creating patterns from that (which differ depending on the character and shot type).

Most of the time, the different variants of patterns the boss uses are quite balanced. This isn't the case in SA, however, and that influenced my selection. Although I find ReimuA (with straight shots) best equipped to handle the Stage 5 and 6 bosses, the Stage 4 fight one gets with that choice is extremely difficult – I'd say possibly even harder than the later bosses. Pictured above is Double Black Death Butterfly; it isn't apparent from the picture, but some of the butterfly bullets rotate inwards (so one needs to dodge bullets possibly coming from behind as well). I thus picked ReimuB (which has a weakly homing shot, a relatively easy Stage 4 fight, and a special ability to gather powerups from anywhere on the screen) for my 1CC.

Learning the Patterns

Of course, even with careful resource management it’s unlikely that one can perform a 1CC if one’s dodging skills are too far below the bar. While some of the patterns are random and/or based mainly on reflexes, others have a certain trick to them that makes the pattern a lot easier once figured out. With experience, one can figure out common elements and ways to deal with them (for example, a stream of bullets fired at the character’s position; move slowly, but to change directions make a sudden quick movement to open up a gap in the stream) – this drives most of the “sight-read” clears.

In a sense, good resource management becomes less critical the better one is at dodging; one could ignore the resource system entirely if one could reliably dodge every single pattern in the game. Conversely, it's possible to clear one of these games even if one is quite poor at dodging, provided one makes good use of the resources one has.

Death by Cubes (Algorithmic Modelling: Pandemic)

“So the probability we lose is 2/3, multiplied by 5/23, or 10/69. It’s pretty much a 6 in 7 shot.”

I met a friend for dinner and board games a while back, and one of the games we played was Pandemic. Designed by Matt Leacock, the game is themed around managing and discovering cures for diseases before they spread across the world.

Diseases are modelled using cubes, and they spread across a network of cities (which are nodes in a graph). On players' turns, they perform actions to contain the spread of diseases, research cures for these diseases or share knowledge with other players. After they are done with their actions, players draw cards from an infection deck, where each city (node) is represented once. For each card, players need to add a disease cube of the city's colour to the city. For example, consider a simplified network of six cities labelled A to F:

  • If D is drawn, one yellow cube will be added to D. Similarly, if E is drawn, one red cube will be added to E; if F is drawn, one blue cube will be added to F.
  • A city is allowed to have a maximum of three disease cubes of each type; if a fourth cube of the same type needs to be placed, an outbreak happens. Instead of placing the fourth cube, all connected cities receive one cube of the relevant colour. For example, if C is drawn, a blue outbreak occurs in C, and one blue cube is added to both B and F.
  • The rules on having at most three cubes of each type still apply when placing additional cubes, meaning that outbreaks can cause chain reactions. For example, if A is drawn, a red outbreak occurs in A. This causes one red cube to be added to B, D and E. However, B already has three red cubes, meaning that a chain outbreak occurs in B; this causes one red cube to be added to A, C, E and F.
  • This looks like it would loop infinitely, but there is some mercy; in a given chain reaction, each city may only outbreak once. Thus, instead of adding a red cube and triggering another outbreak in A, no red cube is added. (A short sketch of this resolution process follows the list.)
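
The chain-reaction resolution maps naturally onto a queue-based traversal of the city graph. Below is a rough sketch; the adjacency list, colours and starting cube counts are hypothetical, chosen to mirror the A-to-F example above.

```python
from collections import deque

MAX_CUBES = 3   # at most three cubes of each colour per city

def infect(city, colour, cubes, neighbours):
    """Add one cube of `colour` to `city`, resolving any chain of outbreaks.
    Each city outbreaks at most once per chain; returns the number of outbreaks."""
    outbroken = set()
    queue = deque([city])
    while queue:
        current = queue.popleft()
        if cubes[current][colour] < MAX_CUBES:
            cubes[current][colour] += 1
        elif current not in outbroken:
            outbroken.add(current)
            queue.extend(neighbours[current])   # spread to all connected cities
    return len(outbroken)

# Hypothetical network consistent with the example above.
neighbours = {"A": ["B", "D", "E"], "B": ["A", "C", "E", "F"], "C": ["B", "F"],
              "D": ["A"], "E": ["A", "B"], "F": ["B", "C"]}
cubes = {c: {"red": 0, "blue": 0, "yellow": 0, "black": 0} for c in neighbours}
cubes["A"]["red"] = 3   # A is one red cube away from an outbreak
cubes["B"]["red"] = 3   # so is B, setting up the chain reaction
print(infect("A", "red", cubes, neighbours), cubes["E"]["red"])   # 2 outbreaks; E has 2 red cubes
```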

The game ramps up steadily in pace, by increasing the infection rate (number of cards drawn) over time. Players lose the game when the eighth outbreak occurs, when they need to place disease cubes but can’t (these are component-limited), or when the deck of player cards runs out.

Separately, the goal of the game is to research a cure for each of the four diseases in the game, which requires collecting data on the spread of the diseases. There is a deck of player cards which contains twelve city cards associated with each disease; players need to collect five of them (in one player’s hand) and bring them to a research station. This can be tricky, as players draw cards independently, and giving cards is restricted (I can only give you a card if we’re both in the one city that is associated with that card, and if it’s my turn). At the end of each player’s turn, players draw two cards from this deck (if they can’t, the game is lost).

The deck of player cards also contains a few special event cards which help players, and more interestingly several Epidemic cards which are disruptive. The number of Epidemic cards to include is variable (4 are used in an easy game, 6 for a difficult one). When an Epidemic card is drawn, players resolve a three-step process:

  1. “Increase”: The infection rate marker moves along its track, possibly increasing the infection rate (number of infection cards to draw at the end of each player’s turn). This begins at 2, but eventually becomes 4.
  2. “Infect”: There is a sudden wave of disease in a new city. Players draw the bottom card of the infection deck, and the city in question receives 3 disease cubes of its colour. The usual Outbreak rules apply, though thankfully only one Outbreak occurs even if the city already has two or three cubes.
  3. “Intensify”: The infection discard pile is shuffled (including the just-revealed card from the bottom of the infection deck) and placed on top of the infection deck. These cities are therefore likely to suffer more disease soon, especially since this happens as part of drawing player cards (i.e. before the Infect phase of the current player's turn).

Going back to the initial calculation, we were on the second-last turn of a game with six Epidemic cards, five of which had already been drawn. There were three cards left in the player deck. Seven outbreaks had already occurred, meaning that one more would lose us the game. However, we had also cured three diseases, and the Medic (which I was playing) was sitting in a research station holding all of the cards needed to cure the last disease on the following turn. Furthermore, my friend, who was playing the Operations Expert, held the One Quiet Night event card, which allows the normal Infection phase to be skipped.

Thus, the only risk of losing was drawing the last Epidemic card and then, in its Infect step, revealing a city that already had disease cubes of its colour (from outbreaks in adjacent cities). These two events draw cards from separate decks, so they are independent.

The first term is easy – there were three player cards remaining, one of which was the last Epidemic card, and we would draw two of them, so there was a 2/3 chance of drawing it. For the second term, we looked through the infection discard pile, which at that time held 24 cards (plus Sydney, which we had removed from the deck using a special event card). That left 23 unseen infection cards, six of which corresponded to locations on the board that already had disease cubes. On my friend's last turn, we were able to defuse one of these locations, leaving a final loss probability of (2/3) * (5/23).
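
For completeness, the arithmetic is easy to check mechanically (a trivial sketch):

```python
from fractions import Fraction

# Reproducing the back-of-the-envelope calculation above.
p_draw_epidemic = Fraction(2, 3)     # 1 Epidemic among 3 remaining cards, 2 drawn
p_bad_bottom_card = Fraction(5, 23)  # 5 of the 23 unseen infection cards were risky
p_lose = p_draw_epidemic * p_bad_bottom_card
print(p_lose, float(1 - p_lose))     # 10/69; roughly a 0.855 chance of surviving
```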

It turned out that we got lucky: we didn't draw the Epidemic card, and even if we had, the card at the bottom of the infection deck would have been safe. I wasn't initially keen on doing the full calculation, figuring that clearing one risky location was easy but clearing two would be hard, but my friend (who is a statistician) decided to go ahead with it.

Algorithmic Modelling – Achievement Hunting (Slay the Spire, Part 2)

Many modern video games feature an achievements system. Achievements are typically awarded for completing specific tasks; earning all of them usually requires doing substantially more than what is needed simply to finish the game. In many cases, it's not even possible to complete all of the achievements in one playthrough.

In a previous post, I wrote about Slay the Spire, an RPG where combat is driven by a deckbuilding mechanic. The game also features an achievements system. I finished getting 100% completion on these achievements this week – this took quite a long time. If I wanted to start from the beginning and accomplish these as quickly as possible, I would need to plan carefully. The main issue here is that achievements are not independent. For example, there are chains of achievements where the final achievement subsumes all previous ones (e.g. “have 99 block, have 999 block” or “unlock ascension mode, finish ascension 10, finish ascension 20”). Furthermore, some achievements can be achieved using the same strategy (e.g. “play 25 cards in a single turn” and “defeat a boss before it takes a turn”, which can both be achieved by building an infinite combo). Other achievements can often be significant handicaps (e.g. “beat the game with a deck that has no uncommons or rares”, “beat the game with only 1 relic”) and I thus think it’s a lot harder to combine these with other achievements.

The game is played in distinct runs, and a majority of the achievements involve performing specific feats on individual runs. We can model this by identifying strategies that unlock one or more achievements. If we try to use the fewest number of distinct approaches, this becomes what’s called a set cover problem. There are achievements A = \lbrace 1, 2, \ldots N \rbrace and strategies S = \lbrace S_1, S_2, \ldots S_M \rbrace with each S_i being non-empty and a subset of A. We also require that \bigcup_{i=1}^M S_i = A, otherwise not all achievements are achievable. The goal is to identify some set of strategies we need to deploy S' \subseteq S such that \bigcup_{s' \in S'} s' = A. Trivially, picking everything is possible, but it’s very unlikely that that’s going to be minimal.

This problem is known to be NP-hard; the decision version (does there exist a solution using k strategies or fewer?) is NP-complete. Clearly, we can obtain a solution in time exponential in |S| (with a polynomial factor for checking each candidate) by iterating through each possible subset of S, checking whether it covers all achievements and remembering the best solution we've seen so far. There's a little dynamic programming trick we can use to memoise partial achievement states (e.g. if I can reach the state where I have the first 8 achievements and no others using two strategies one way but three strategies another, I'll always keep the two-strategy option). This yields a solution exponential in |A| but linear in |S|, which is an improvement. We can also pre-process the input for A and S a little, by eliminating any achievements that are subsumed by harder achievements, and any strategies that are strictly dominated by others.
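
A minimal sketch of that bitmask dynamic programme, on a made-up instance with four achievements and four strategies:

```python
# States are bitmasks over achievements; best[state] is the fewest strategies
# whose unlocked achievements union to exactly `state`. O(2^|A| * |S|) overall.
def min_strategies(num_achievements, strategies):
    """strategies: list of sets of achievement indices covered by each strategy."""
    full = (1 << num_achievements) - 1
    masks = [sum(1 << a for a in s) for s in strategies]
    best = [float("inf")] * (1 << num_achievements)
    best[0] = 0
    for state in range(1 << num_achievements):
        if best[state] == float("inf"):
            continue
        for mask in masks:                    # try deploying each strategy next
            nxt = state | mask
            if best[state] + 1 < best[nxt]:
                best[nxt] = best[state] + 1
    return best[full]                         # inf if some achievement is uncoverable

print(min_strategies(4, [{0, 1}, {1, 2}, {2, 3}, {0, 3}]))   # 2
```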

If this is unacceptable, we can try approximation algorithms (for example, the standard greedy algorithm, which repeatedly picks the strategy covering the most outstanding achievements, doesn't fare too poorly). We can also attempt to leverage constraint solving systems; these don't deal with the exponential nature of the problem directly, but they often have good heuristics for coping with it.
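
For reference, the greedy heuristic is only a few lines (again on a made-up instance):

```python
# Repeatedly pick the strategy that covers the most achievements still outstanding.
# Not always optimal, but it carries the usual logarithmic approximation guarantee.
def greedy_cover(achievements, strategies):
    remaining = set(achievements)
    chosen = []
    while remaining:
        best = max(strategies, key=lambda s: len(s & remaining))
        if not best & remaining:
            raise ValueError("some achievements cannot be covered by any strategy")
        chosen.append(best)
        remaining -= best
    return chosen

print(greedy_cover({0, 1, 2, 3}, [{0, 1}, {1, 2}, {2, 3}, {0, 3}]))
```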

That said, this isn't a very good model, as some strategies are riskier to implement than others. It may be theoretically possible to combine “beat the game with one relic” and “beat the game without uncommon or rare cards”, but that seems unlikely to come off. To illustrate, if each of these individually has a 1% chance of being completed but attempting them together has just a 0.1% chance, doing the achievements separately will take an expected 200 runs while the combined approach has an expectation of 1000.

A natural extension to this is to consider the weighted version of the problem. We change the formulation slightly: each strategy has a weight w_s, and we seek to minimise the total weight of the strategies we pick. If we assign a strategy a given probability of success p, then since runs are (mostly) independent, the expected number of runs needed to attain a successful outcome is 1/p, by standard geometric distribution results. The dynamic programming based algorithm still works if we keep track of total weight instead of the number of sets used.

This is better, but there are still a few issues with the model. Firstly, partial successes haven’t been considered in this model; we might try a risky strategy for three achievements and fail, but still complete two of them. We could try to account for these by figuring out some kind of state transition between achievement states – it wouldn’t be a set cover problem any more. We would be building some kind of a discrete-time Markov chain on-the-fly.

Modelling this as a set cover problem also seems to imply that our algorithm is offline; that is, it formulates a strategy and then executes it. However, Slay the Spire has a significant luck element, especially if one is trying for specific achievements, and thus an offline algorithm doesn’t work so well. For example, one achievement requires “play 10 shivs on a single turn” and another “have an enemy reach 99 poison”; the cards and relics that support these are quite different, and depending on what cards are offered as the game progresses, one achievement or the other could become easy or nigh-impossible. If both achievements remain to be unlocked, it seems suboptimal to commit early on.

Algorithmic Modelling – The Game: Extreme

I had a weekend break in Zurich to meet up with a friend who works there. We often play board games when we meet, so I brought Hanabi and FUSE (very compact games, suitable for travelling – especially since I almost always travel HLO, i.e. with hand luggage only), but my friend was thinking of getting something different. We thus headed down to a game shop and picked up The Game: Extreme (Board Game Geek).

The rules are fairly simple. There is a deck of 98 cards – one copy each of the integers from 2 to 99, inclusive. Players will have some number of cards in their hand (eight for a solo game, seven for 2 players), and play them to the table; the game ends in a loss if a player must play, but is unable to (or if all cards are played, which is a win). It is a cooperative game; players all win or lose together. The rulebook also mentions a half-win if players finish with ten cards or fewer remaining.

There are four communal piles that players can play cards to. On two of these, cards played should be numerically increasing; on the other two, they should be numerically decreasing. However, one may also go against the direction of a pile – this can only be done if the jump is exactly 10. Thus, for example, if an increasing pile has a 57 on top, one may play a 59 (increasing) or a 47 (ten less), but may not play a 49 (not increasing, and not ten less). Of course, if a player has both the 59 and 49, the player may play the 59, after which the 49 becomes legal.

One can probably write some search-based algorithm to figure out the best possible outcome for a given hand and an individual pile, or even a set of piles, given some fitness function. There's also a straightforward dynamic programming optimisation, and for certain classes of fitness functions there might be better principles one can leverage.
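
A minimal memoised search for a single increasing pile might look like the following, where the fitness function is simply the number of cards played; the hand and pile state are made up:

```python
from functools import lru_cache

# A card is legal on an ascending pile if it is higher than the top card, or
# exactly ten lower (a reverse jump). Hands contain distinct cards, as in the
# real deck. Descending piles are symmetric.
def max_playable(top, hand):
    @lru_cache(maxsize=None)
    def search(top, remaining):
        best = 0
        for card in remaining:
            if card > top or card == top - 10:
                rest = tuple(c for c in remaining if c != card)
                best = max(best, 1 + search(card, rest))
        return best

    return search(top, tuple(sorted(hand)))

print(max_playable(57, [59, 49, 47, 62]))   # 4: play 47 (reverse jump), 49, 59, 62
```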

While there are still cards in the deck, players must play at least two cards on their turn, though they may play more (for example, if they have a chain of cards involving several reverse jumps); once the deck is empty, they only need to play one. Once they have finished playing, they draw back up to their hand size and play passes to the next player.

The Extreme version adds special effects to some of the number cards. These may either restrict the current player’s turn (requiring that they stop immediately, play exactly three cards, or immediately cover up the card that was played) or affect all players while they are exposed (requiring players keep quiet, play all the cards in their turn to the same pile, not do reverse jumps or only draw 1 card at the end of their turn). Apart from the stop card, all of these make the rules for playing more restrictive (and thus I’d say this is somewhat harder than the base game).

The solo game doesn’t generally seem too difficult as one can plan and execute sequences of reverse jumps without interference from others. The keep-quiet special effect no longer really matters, and the stop cards actually become positive as they allow for drawing cards back up earlier. I’m not sure how one might formulate an algorithm for determining optimal moves, but I can generally see the following principles:

  • Play the minimum number of cards needed per turn, and then draw back up. This lets you see more options, which is useful. Normally in a multiplayer game you may want to play more than the minimum to ensure that no one interferes with your plan, but in single player no one will anyway.
  • Plan and aggressively use reverse jumps. These often get thwarted in multiplayer games (you might draw a card that allows a reverse jump, but someone might need to play on the relevant pile before it gets to your turn).

Playing with more than one player gets trickier, as communication between players is permitted but precise numerical information is not. I would say things that imply precise numerical information (e.g. “I can jump this pile”) are also bad. There are edge cases which are probably acceptable (e.g. “it makes no difference to me; which of these would you prefer I play to?” with an up-65 and down-67 suggesting I hold the 66). Still, talk of “small/large jumps” and actively saying “stop” after interesting cards are played is very useful.

Cooperating well also involves looking ahead to other players' turns; it's not always optimal for each player to individually minimise the “damage” to the piles on their own turn. There have been times where, knowing I would be forced to jump a pile by 30 or so, I've encouraged my teammate to play more cards there, since it would be destroyed on my turn anyway. We only played with two players, but I imagine the dynamics get more interesting with larger groups.

The Game: Extreme is a very light game; while there is some minor logic involved it’s not particularly complex. It’s fairly fun and witnessing successful chains can be enjoyable; the special effect cards can also cause annoying obstacles (while blocking reverse jumps is probably the most damaging, the keep-quiet one is interesting as it changes the game for a while). It’s certainly a fun thing to play while passing time and discussing other things, and hard enough to provide a good challenge without being frustrating.

Algorithmic Modelling – Alphabear 2

Alphabear 2 is a word game where players are given a two-dimensional grid of tiles. Some of these tiles show letters, along with timers; others are concealed and must first be exposed by using an adjacent letter in a word. Players use letter tiles (anywhere on the grid) to form a word. When a player makes a word, he/she scores points equivalent to the sum of the timers on the tiles that are used. The timers on all tiles that aren’t used then count down by 1; tiles that reach zero become unusable stone blocks.

Usually, this means that you want to form long words that incorporate tiles that have shorter timers; for example, there is a ten letter word on the above board (PEDESTALED), but it doesn’t use the F. After playing this word, the timers on the remaining letters will count down by 1; the P will now have a timer of 2, the first E 10, and so on.

However, the game does have various mechanics that mean this isn’t always the best strategy. For example, one needs to ensure that there are actually enough letter tiles to work with, and that the distribution of letter tiles allows reasonable words to be formed. The game also has players choose bears for each mission which have special abilities such as awarding additional points for certain words. Actually, in hindsight FEASTED might have been a better choice than DEADLIFTS, as I was using a bear that gives a point bonus for words ending in ED.

In addition to scoring points by making words, players also receive bonus points for the largest fully cleared rectangular area; the game identifies this for you. I'm not sure how it is actually implemented, though boards are large enough (15 by 15?) that it's probably not the naive algorithm of generating and testing all rectangles – with N the width of the board, that would be O(N^6). There is a DP-based approach that keeps track of the number of stone blocks in each candidate rectangle (say via two-dimensional prefix sums), which slashes this to O(N^4) and would probably be enough. With more clever approaches it's possible to knock the time complexity down to O(N^2) – and that's probably optimal, as doing any less would involve not actually reading some parts of the grid.
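
For what it's worth, the O(N^2) bound can be reached with the classic largest-rectangle-in-a-histogram technique; a sketch, where a cell is True if it is clear and False if it has turned to stone:

```python
def largest_clear_rectangle(grid):
    """Area of the largest all-clear rectangle; O(rows * cols) overall."""
    if not grid:
        return 0
    heights = [0] * len(grid[0])
    best = 0
    for row in grid:
        # Histogram of consecutive clear cells ending at this row, per column.
        heights = [h + 1 if clear else 0 for h, clear in zip(heights, row)]
        best = max(best, largest_rectangle(heights))
    return best

def largest_rectangle(heights):
    """Largest rectangle under a histogram, via a monotonic stack of (start, height)."""
    stack, best = [], 0
    for i, h in enumerate(list(heights) + [0]):   # sentinel flushes the stack
        start = i
        while stack and stack[-1][1] > h:
            idx, height = stack.pop()
            best = max(best, height * (i - idx))
            start = idx
        stack.append((start, h))
    return best

board = [[True, True, False],
         [True, True, True],
         [False, True, True]]
print(largest_clear_rectangle(board))   # 4 (a 2x2 clear block)
```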

Most of the time, this isn’t something to worry about too much. I’m able to clear most levels with no stones at all. However, if we need to take a stone, ideally it should be as far away from the center of the board as possible (note that only the largest rectangular area counts).

Defining optimal play in general is difficult, owing to the spatial reasoning required by the largest clear area bonus, uncertainty of tiles that are yet to be revealed and sheer breadth of options when faced with many open tiles.

Before starting a round, the player has to select bears to use. Optimal decisions here largely depend on the specific challenge being played. A bear that scores bonus points for six-letter words isn’t particularly useful on a tight board where one has only a few open tiles at a time; some bears add time to timed boards, which isn’t useful if the board has no time limit. Furthermore, players select up to three bears each round; these bears can have useful synergies. In particular, I like a combination of “summon three Zs” and “score triple points for words including JQXZ (not stacking, unfortunately)”; “summon ING” or “summon ED” goes quite well with these two as well. I’ve used these to complete a mission deemed “impossible” given the strength of my bears at the time.

We can perhaps begin by focusing more narrowly on optimal play of an Alphabear endgame (where all tiles have been revealed), ignoring special abilities bears may have for now and assuming that the board has a solution. Here is an example:

From this point on, there is no more uncertainty as to what letters we have to use. Interestingly, this board shows that a greedy algorithm where one tries to play the longest possible words doesn’t work. One could begin WORKSHOPPED, leaving RAII, but this is actually an unwinnable state. There are several two-turn solutions, like ROADWORKS + HIPPIE, or WORSHIPPED + KORAI. (Obviously, this is very high-level play; I think I played SKIPPER with this board. That leaves RODIAHWO, for which I’d probably have played something like RADIO + HOW, though HAIRDO + WO is better.)

Given a multiset of pairs of letters and timers L = (l,t)^+, we want to return a suitably ordered list of words S = \left[ l^+ \right] that satisfies the following properties:

  • legitimacy: every word s \in S is included in the game’s dictionary.
  • resourcing: words do not use tiles that aren’t in L.
  • timing: for each pair (l, t) \in L, l is used in a word before or on position t.

The above set of properties is not correctly specified without a refinement to deal with duplicate letters. There are several ways to handle this; assuming low cardinality, one approach could be to treat duplicate instances of a letter as separate letters, and suitably expand the dictionary to allow this. For a small example with tiles A, B, B and E, our list becomes L = \left[ (A,3), (B_1,1), (B_2,1), (E, 1) \right], and the dictionary recognises both B_1 E and B_2 E as the word BE.

I don’t have a good way of finding an optimal solution; I suspect the problem is actually NP-complete; it feels like SAT or 3SAT may be reducible to this problem, though I haven’t worked through the details. Nonetheless, there are several search heuristics we can try. Clearly, on a given turn we must use everything which now has a timer of 1. Also, although this board is an example of why you don’t want to be greedy, in general it probably makes sense to do some kind of ordered backtracking, where one expands the most promising nodes first. I can imagine an A* search could actually work well, though finding an admissible heuristic could be hard.
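
To make the backtracking idea concrete, here is a minimal feasibility-only sketch (scoring is ignored). It treats a timer of t as "this tile must be used on turn t at the latest", plays one word per turn, and uses a tiny made-up dictionary; a real implementation would also need the duplicate-letter refinement above and a far better way of generating candidate words.

```python
def consume(tiles, word):
    """Remove tiles spelling `word`, preferring the most urgent copy of each
    letter; return the leftover tiles, or None if the word cannot be formed."""
    remaining = sorted(tiles, key=lambda tile: tile[1])   # most urgent first
    for letter in word:
        match = next((tile for tile in remaining if tile[0] == letter), None)
        if match is None:
            return None
        remaining.remove(match)
    return remaining

def clear_board(tiles, dictionary, turn=1):
    """Return an ordered list of words that uses every tile in time, or None."""
    if not tiles:
        return []
    for word in dictionary:
        remaining = consume(tiles, word)
        if remaining is None:
            continue
        if any(t <= turn for _, t in remaining):   # an unused tile just expired
            continue
        rest = clear_board(remaining, dictionary, turn + 1)
        if rest is not None:
            return [word] + rest
    return None

tiles = [("A", 1), ("B", 1), ("B", 2), ("E", 2), ("E", 2)]
print(clear_board(tiles, ["BE", "AB", "BEE"]))   # ['AB', 'BEE']
```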

There’s a fourth property we want as well:

  • optimality: the score produced by the words is maximal.

Going back to our sample board, WORSHIPPED and KORAI will score one more point than ROADWORKS and HIPPIE, because there is one fewer timer tick. In general, we want to minimise the number of timer ticks by playing longer words upfront where possible, as this reduces the number of timers that are still ticking. Once one has devised a feasible solution, it may make sense to perform some iterative optimisation by seeing if it’s possible to bring letters forward. Of course, that approach may get us stuck in a local optimum. I could see some kind of iterative hill-climbing algorithm from the top-n admissible solutions (as found by our ordered backtracking) yielding decent results.

Unfortunately, my vocabulary and anagram skills are far from good enough to handle the required complexity, especially under time pressure. Most of the time, successfully playing every letter on the board is enough to complete the mission's objectives. Interestingly, it looks like the game wasn't really programmed to deal properly with using every letter successfully while still failing the mission: on identifying the board as clear, it claims there will be an additional reward on the reward screen, yet no reward screen appears because the mission was failed.

Algorithmic Modelling – Codenames

Codenames is a board game designed by Vlaada Chvatil which tests communication. The game is played with two teams; each team splits itself into clue-givers and guessers.

There is a board of 25 cards, each with one (or more) words on them. Of these cards, 9 are owned by the first team, 8 owned by the second team, 7 are neutral and one is the ‘assassin’.

The clue-givers attempt to communicate to the guessers which cards are owned by their team. This is done by giving a one-word clue followed by the number of cards N corresponding to the clue. The rules establish some bounds on allowed associations (for example, ‘sounds like’ clues are not allowed).

I don't know how much time went into selecting the words to appear on individual cards, but there are certainly many words in the deck that can be interpreted in many ways, which makes the game fun. For example, I can think of quite a number of ways to clue BOND (COVALENT for unambiguity; DURATION, ASSET or DEBT for the financial instrument; JAMES, AGENT or SEVEN or even at a stretch something like CASINO for the character; FRIEND or FRIENDSHIP; STREET or TUBE; CONTRACT, PROMISE or WORD for the idea of a promise; ADHERE or CONNECT, among others). Which one I might pick would depend on the other words on the board.

Guessers then identify up to N+1 cards they think their team owns. This can be based on the entire history of clues given, not just the most recent one. Cards are guessed one at a time, and a team is only allowed to keep guessing if the card just guessed is indeed one of their own. Teams may also stop guessing before using all of their guesses.

The game ends either when all cards owned by a team are revealed (that team wins), or if a team reveals the assassin (that team loses).

In practice, clue-givers will need to consider all cards, not just the ones their team owns. The penalty for ‘failure’ varies; the assassin is an instant loss, and revealing a card owned by the other team is also bad. For example, with the cards APPLE, BANANA and GRAPE it would be very tempting to declare (FRUIT, 3) as a clue; yet, if KIWI is owned by the other team (or worse, is the assassin) it might be dangerous.

However, if the other team has already revealed that KIWI is theirs (e.g. with another clue, maybe SOUTH or BIRD) then the FRUIT clue becomes safe. Thus, pre-computing a strategy at the beginning of the game (e.g. by partitioning the eight or nine clues owned into several logical groups while avoiding other words) may not be optimal.

I tend to consider making an effort to win to be a key part of games, and thus what moves may be considered good also often depends on the state of the game. For example, if the opposing team has just one card left, I will give a broader clue that may have more tenuous links in a ‘do or die’ effort to finish on this turn. A less extreme example would be attempting weaker links if behind, or perhaps playing slightly more conservatively if ahead.

Mathematically modelling Codenames can be tough. We can try modelling the game state as a tuple (O, E, N, x, h, f_O, f_E), where O, E and N are the sets of remaining words owned by our own team, owned by the enemy team, and neutral respectively, x is the assassin, h = (t,w,n)^+ is the history of the game, and f_O and f_E are preference functions of the form h, w, n, O, E, N, x \rightarrow P, returning an ordered list over O \cup E \cup N \cup \lbrace x \rbrace. Here (t, w, n) records that team t gave the clue word w and hinted that n cards were relevant. This already abstracts two difficult parts away – ordering the clues, and determining how many of the top preferences to pick.

I think the preference functions need to take into account previous clues from both teams; if a previous clue could clearly have corresponded to two words and I picked the wrong one, it might be fairly high on the list even if unrelated. Similarly if this scenario happens to the opposing team, I would probably avoid the word that would blatantly be theirs.

The notion of degree of confidence also isn't captured well by our ordering; going back to the fruit example, having a clear ordering would imply that clues for fewer than 4 items could reliably result in correct guesses (if we knew that APPLE was, by a small margin, the best pick, we could safely and confidently give (FRUIT, 1) to clue it, which seems wrong). One could imagine modelling this with selection probabilities, though things would become even more complex.

The above representation still seems computationally difficult to work with. How the preference functions evolve as one moves from one state to another is unclear (in this representation they are fully general), making lookahead difficult. It also doesn't seem like a greedy choice is always best; for example, given eight of our words that divide reasonably into four pairs, a clue for 4 words taking one element from each pair might be a bad idea if the remaining words can't easily be linked afterwards.

We can proceed by simplifying the preference functions; a simple approach could be that, for each (w, n), each team has a persistent preference function that returns an ordered list over O_0 \cup E_0 \cup N_0 \cup \lbrace x \rbrace. The preference functions during the game then return the subsequence of that list containing only the words still left in the game. This of course doesn't take into account past information or clues from the other team.

With this, we can attempt to solve the game using standard techniques; assuming that the vocabulary of clues is bounded (let’s say it must be from the Linux dictionary), a game state is winning for the current team if there exists some word for which the preference function returns everything in O as a prefix. A state is losing if all moves in that state produce a state which is winning for the opposing team.
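
A sketch of that immediate-win test under the simplified model; the preference lists and board words below are entirely made up:

```python
# A clue wins outright if, once its persistent preference list is restricted to
# the words still on the board, all of our remaining words appear as a prefix.
def winning_clue(preferences, own_remaining, board_remaining):
    """preferences: clue word -> ordered list over the initial board words."""
    for clue, order in preferences.items():
        restricted = [w for w in order if w in board_remaining]
        if set(restricted[:len(own_remaining)]) == set(own_remaining):
            return clue
    return None

preferences = {
    "FRUIT": ["APPLE", "BANANA", "KIWI", "GRAPE", "BOND"],
    "AGENT": ["BOND", "KIWI", "APPLE", "GRAPE", "BANANA"],
}
own = {"APPLE", "BANANA", "GRAPE"}
board = {"APPLE", "BANANA", "GRAPE", "BOND"}   # KIWI has already been revealed
print(winning_clue(preferences, own, board))   # FRUIT
```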

We can generalise this to a simple probabilistic model as well; the preference ‘functions’ instead return a discrete random variable that indicates either guessing some word or passing. A simplified model could then, at the start of the game, assign weights to each candidate indicating the relative probability that candidate would be selected. These can be normalized to account for a pass probability. As words are removed from the board, the probability of the remaining words being selected scales (we can imagine a simple rejection-sampling where we discard words that aren’t actually on the board any more).

The algorithm for the probability that we get a win from a given state is then slightly more complex (though I think still reasonably covered by standard techniques).

On Deckbuilding Games and Slay the Spire

Deckbuilding is a genre of games where players start with a (usually) standard deck of cards and then improve it over the course of the game. This can involve acquiring new cards that are usually more powerful than the initial set of cards and/or synergise in better ways, upgrading existing cards, or even removing cards that are weaker or less relevant so that one is more likely to draw the (better) remaining cards.

These games typically offer some replayability because the sequence of decisions one may have on how one can improve one’s deck differs from game to game; what it means for a deck to be successful can also vary depending on what the other players are doing.

Slay the Spire is a player-versus-environment RPG where combat is driven by a deckbuilding mechanic; players start with simple offensive (Strike) and defensive (Defend) moves plus possibly a few class-specific cards. The combat is turn-based. On a player’s turn, the player draws 5 cards. Cards have an energy cost, and players initially have three energy per turn to play as many cards from their hand as they want. Of course, there are cards and other mechanics that can be leveraged to obtain additional card draws or energy. Players are also shown their opponents’ next moves, which allows for some degree of planning; a simple example of this could be deciding to eliminate a specific monster in a group, because it is about to attack.

In terms of gameplay, the start of the game does feel relatively rudimentary as the player always has the same or a similar deck. However, each time a player defeats a group of monsters, he is given the option (but not the obligation) to add one of three cards to his deck.

The options that come up in the first few rounds will typically shape how the deck develops. For example, while most cards can be upgraded only once, there is a specific card called Searing Blow that may be upgraded multiple times; eventually, it can develop into a card that costs 2 energy yet deals far more damage than Bludgeon, a 3-cost card that deals pure single-target damage. It is a viable strategy to build a deck around this, eliminating other cards from the deck so that one draws it on every turn; there are cards like Double Tap (which, for 1 energy, doubles another attack) that can further enhance this type of deck.

Conversely, I frequently play in a style which relies on combining cards that give a constant strength bonus to all attacks along with cards that attack multiple times for small amounts of energy. That said, some degree of flexibility is important, as the choices given to the player will be different on different plays; being wedded to a single strategy can be dangerous, as specific key cards may not be available.

Players can also acquire relics which further influence game mechanics; these can simply provide boosts (such as “every 10th attack does double damage”), make certain styles of gameplay more attractive (“the third Attack card you play each turn boosts your Dexterity by 1 for the rest of the combat”) or even significantly change game mechanics (“retain your hand rather than discarding it” or “if you empty your hand, draw 1 card”). These are random (and apart from at the shop / rewards from winning boss battles, players aren’t really given options here) so, as before, planning on specific relics becoming available is also typically not a good solution.

Winning the game unlocks a new-game-plus mode called Ascension; players start at Ascension 1, which increases the rate at which elite enemies spawn. Winning at Ascension 1 unlocks Ascension 2, which keeps the restrictions of Ascension 1 and additionally makes normal enemies do more damage. This continues incrementally up to Ascension 20. I've got up to 16 on one of the characters. At high levels there's not very much room for error; I have had a 10-win streak without Ascensions, but struggle quite a bit with high Ascensions. I think I tend to force my deck to fit a certain mould perhaps too early on; this can get scuppered if I'm subsequently unlucky.

Another challenge a player can take on, as with many other games, is speed-running, or attempting to win quickly. Playing fast (as shown in the screenshot above) limits one’s time for decision making. For example, I normally plan my entire route through the map for each level at the beginning of the level, but that of course takes a bit of time. A normal, more relaxed game can easily take 30-40 minutes, especially if I’m playing with a relatively less aggressive build.

I’ve played the game for about thirty hours or so. It has been enjoyable, and the Defect character (an automaton that manipulates a queue of orbs) is particularly fun, if difficult to use. In terms of educational value I think the game demands a decent intuition of probability, as well as understanding how cards may work together. There are some synergies that are explicit or clear, but others are often less obvious and require some thought/experimentation.

Colour Rush (Board Game: FUSE, Kane Klenko)

While I’m not sure I would go the full length to say that efficiency is the highest form of beauty, I’d certainly say that I can find efficiency to indeed be beautiful. In controlled environments, I enjoy testing my own efficiency (typing tests, programming and Sudoku competitions, competitive Sims). Many of these environments feature elements of uncertainty, meaning that managing probabilistic outcomes is key to being efficient (in Sudoku bifurcation can sometimes be faster than reasoning through complex logical chains; in a programming contest, you might luck out on some buggy or sub-optimal algorithm actually passing all test cases anyway). I’ve recently enjoyed a challenging game which has probability management at its core.

FUSE is a real-time dice game from designer Kane Klenko. Players play bomb defusal experts who are trying to defuse bombs; this is represented by rolling dice and placing them on ‘bomb’ cards to match constraints. The team rolls some number of dice, and then allocates them to the cards, incurring penalties if some of the dice cannot be allocated. The game itself has a hard time limit of 10 minutes, so quick thinking is encouraged as players get more turns (and thus more rolls of the dice).

The dice are almost standard six-sided dice; interestingly, I noticed that opposite faces do not add up to seven (2 was opposite 4, and 3 opposite 5, for some reason). The dice come in five colours (red, blue, black, yellow and green), with five dice of each colour. Card constraints may involve numbers, colours or both, and vary widely in difficulty; easy cards have requirements like “two dice matching in either colour or number” or “A – B = 2”, while more difficult ones can involve building a six-die pyramid where each die is a specific colour. The bombs are rated in difficulty from 1 to 6, skipping 5.

Some bombs also introduce ordering constraints, requiring players to build a tower or pyramid (and players are not allowed to build unsupported structures). There is also a dexterity element here, as the penalty for knocking over a tower or pyramid is losing all progress on that card.

I’ve only played solo or in a team of two. Playing solo, a player draws and rolls three dice at the beginning of his/her turn, and then allocates them to a pool of four active cards, possibly placing multiple dice on a single card. At any time (except towards the end of the game), there are four active cards and five cards in a ‘market’ ready to become active when an active card is completed.

I’ve been able to complete the single-player “Elite” difficulty (getting through 23 cards in 10 minutes) about half of the time if playing with a random mix of 27 cards, and have done it (once in ten attempts or so) if playing with cards drawn from the most difficult sets (i.e. all the level 6s and 4s, with all remaining cards being level 3s).

I made some simple tactical observations regarding card constraints. For example, there is a level 2 card requiring three numbers that sum to 11. If I have a free choice of a number to play there, I would play as close to 4 as possible (considering that two dice most likely sum to 7).

I think the main strategy in solo play involves ensuring that one’s cards don’t end up having conflicting demands, as far as possible. For example, consider the following set of cards.

This is an awful situation to be in, because apart from one space for a 1, all of the cards here demand high numbers to start. Four decreasing numbers only admits a 4 or higher (and even then, I find starting that card with a 4 very restrictive as it forces 3-2-1); the 15 sum only admits a 3 or higher (and then forces 6-6; I usually play one six and then on a later turn find a pair adding to 9). Misaligned numerical constraints can be painful; misaligned colour constraints might be even worse because colours are sampled without replacement (well, until card(s) are cleared or penalties are incurred). There is a level 4 card that demands four dice of the same colour, which I try not to pick up.

There are many cards that have rather specific requirements (e.g. a specific colour and number); I tend to carry only one or maybe two of these at a time, while trying to ensure the rest of the cards cycle regularly. The penalty for an unused die involves re-rolling it, and then removing an already-placed die that matches it in colour or number (if possible).

Although the game only lasts ten minutes (or less, if the deck is cleared in time), I've found it to be highly addictive, especially on difficult runs at the elite difficulty. I'd expect times to vary quite widely as it's a dice game, though I've noticed that most of the time I run out of time with between one and three cards left. I've never counted, but it wouldn't be unreasonable to average fifteen seconds per turn, and to some extent things average out over 120 selections and rolls. As mentioned above, I think a key part of solo FUSE strategy involves managing the interactions between the requirements of active cards.

Minor criticisms would include some disconnect in the difficulty of the cards. I find that most of the level 6s are less luck-dependent than some of the lower level cards, like the level 3 “any 5, a yellow die, and a green 2”. More bizarrely, the ‘6-5-4 pyramid’ card in the photo above is only level 2, though it’s considerably more restrictive than the ‘sum to 15’ card also in the photo (level 3). There might be a case for a more permissive card being rated higher if the cost of verifying correctness is higher (e.g. ‘three digit prime number’), but I don’t think adding three numbers introduces much overhead.

The ceiling for individual card difficulty also isn’t very high. The highest stacks are of height 5 (with very loose constraints) and pyramids of height 3 (using six dice). I’d imagine there would be scope to define some level 8 cards that might use seven or more dice and/or require combinations that are stricter or more complex, like a six-die pyramid where each die not placed on the ground level must equal the sum of the two dice directly underneath it, or even something like this:

I’ve played this with a friend from Imperial as well. With two players, each player has two active cards, four dice are rolled and each player must take two dice. We only played up to the standard difficulty. I’d like to play this with a bigger group, though; communication gets much trickier, and also with two I am able to maintain sufficient context in my head to compute a good allocation of four dice across two sets of two cards. With five, there are 10 active cards, which would probably be unreasonable to keep track of.

In summary, I’d highly recommend this game based on my solo/pair experiences. I’ve probably had about twenty to twenty-five playthroughs in total, and have enjoyed it thoroughly even though I generally don’t enjoy real-time games. Each run is short, but can generate an adrenaline rush, and the difficulty level is highly configurable so it should be possible to find a level that works well. I might even extend the game with some custom “hard mode” cards (this can be done by getting opaque sleeves so the backs are indistinguishable). I haven’t yet had the opportunity to play with a larger group, which should bring new and interesting challenges too.

Taking Risks on Rotations

I haven’t owned a gaming console since the Playstation 2, mainly because I’ve found PCs to be sufficient for my own purposes and I don’t do very much gaming in the first place. I already have a pretty good PC for some of my own work (such as programming and some image/video editing I’ve done in the past). Nonetheless, I do enjoy watching a few YouTube channels which feature playthroughs of games with commentary, and I’ve certainly seen a lot of news regarding Nintendo’s latest console, the Switch.

One of the launch titles for the console, 1-2-Switch, is a minigame collection which largely serves to demonstrate the capabilities of the Switch's controllers (or “Joy-Cons”). The controllers have buttons, motion-sensing capabilities, a camera (“Eating Contest”) and some kind of precise vibration (“Ball Count”). Most of the minigames seem simple and somewhat fun, as well as relatively balanced; in fact most games don't really give players a way to interfere with one another, and in several others the players' roles over the course of the entire minigame are largely symmetric. A few seem unbalanced (“Samurai Training” – assuming equal probability of success, the first player has a greater probability of winning), but one was clearly unbalanced in favour of the second player: “Joy-Con Rotation”.

The game invites players to rotate their controller about a vertical axis as much as possible, without otherwise disturbing it (e.g. by tilting it) too much. Scores are measured in degrees of rotation, with “unsuccessful” turns (e.g. too much extraneous movement) scoring 0. Players get three rotations each, taken in turn, and the scores from the three turns are added up. In a sense this gives the second player an advantage: on his final turn, he knows exactly how much he needs to rotate the controller to win, and can aim for precisely that. The problem we're analysing today is then: how do we decide how much of a rotation to attempt if we're the first player, and how much of an advantage does the game give the second player (at least as far as the final turn is concerned)?

Let’s make a couple of simplifying assumptions to this:

  • Let X_i be a continuous random variable over [0, \infty) for each player i. The X_i are assumed independent, with continuous cumulative distribution functions f_{X_i}(x) that tend to one as x tends to infinity. Furthermore, for a given value of x representing the number of degrees player i attempts to rotate the controller, the attempt is successful iff X_i \leq x (so f_{X_i}(x) is player i's probability of success).
  • We assume scores are continuous (in the actual game, they’re only precise down to the degree).

Now, assuming we aim for a given rotation x, the probability of a win is then approximately f_{X_1}(x) \times (1 - f_{X_2}(x)), which we seek to maximise. Thus, for a simple example, if the X_i are exponentially distributed with parameters \lambda_1 and \lambda_2, we then seek to maximise the function

g(x) = (1 - e^{-\lambda_1x}) \times e^{-\lambda_2x}

This can be maximised, and yields a solution of x^* = \frac{1}{\lambda_1} \log \frac{\lambda_1 + \lambda_2}{\lambda_2}; this gives the first player a winning probability of

\left( \dfrac{\lambda_1 + \lambda_2}{\lambda_2} \right)^{-\frac{\lambda_2}{\lambda_1}} \left( \dfrac{\lambda_1}{\lambda_1 + \lambda_2} \right)

So if both players are equally skilled in terms of rotation ability (that is, \lambda_1 = \lambda_2), then the first player wins with a probability of just 0.25. Furthermore, we can find out how much better player 1 needs to be for an even game (that is, the value of k for which \lambda_1 = k\lambda_2); we formulate

(k+1)^{-\frac{1}{k}} \left(\dfrac{k}{k+1}\right) = 0.5

which works out to about k = 3.4035. If this player went second, however, he would have an over 0.905 probability of winning!
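
These figures are easy to verify numerically; a quick sketch, where the bisection simply inverts the first player's optimal win probability as a function of the skill ratio:

```python
import math

# Player 1 aims for x and wins with probability g(x) = (1 - exp(-l1*x)) * exp(-l2*x).
def first_player_win_probability(l1, l2):
    x_star = math.log((l1 + l2) / l2) / l1   # the optimal aim point derived above
    return (1 - math.exp(-l1 * x_star)) * math.exp(-l2 * x_star)

print(first_player_win_probability(1.0, 1.0))   # 0.25 for equally skilled players

# Bisect for the skill ratio k (with lambda_1 = k * lambda_2) that makes the game even.
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if first_player_win_probability(mid, 1.0) < 0.5:
        lo = mid
    else:
        hi = mid
print(lo)                                       # about 3.4035
```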

The multiple-trial case is interesting, especially because the problem does not seem to exhibit optimal substructure; that is, maximising the probability of winning a given round isn't necessarily the correct way to play over several rounds. For a simple (if highly contrived) example, suppose we play two rounds and compare total scores. Player 1 can (roughly) make rotations of up to 50 degrees with probability 1, up to 200 degrees with probability 0.3, and more than that with probability 0. Player 2 can make rotations of up to 49 degrees with probability 1, up to 52 degrees with probability 0.6, and more than that with probability zero. To win a single round, player 1 should attempt 50 degrees (win probability: 0.4). But over two rounds, attempting 50 degrees twice lets the opponent attempt 52 once and 49 once, again leaving a win probability of 0.4; going for 200 on both rounds and hoping for at least one success gives a win probability of 1 - 0.7^2 = 0.51.
