Balancing protocols in symmetric 2-player games
In symmetric 2-player abstract strategy games, the alternation of turns between equally matched players presents an inherent asymmetry. This presumes some leeway, because no two players are ever exactly evenly matched, but in Chess and Go ratings give a fair enough indication. Statistics support the consensus that in both games the first player has a distinct 'advantage'.
What is an 'advantage'?
In a game-theoretical context, the games I consider here tend to have enormous yet finite game trees. That tree is what computer programs search, regardless of the method used. 'Finite' may be arguable in games that contain cycles, but every game has a finite number of positions, and game trees usually either are finite or can be made finite by rule adaptations such as '3-fold repetition' in Chess or 'ko' and its refinements in Go.
It is possible to actually make complete game trees of some very small games. MiniMancala is a nice example and it shows a fundamental property of a game tree: there's no 'advantage' to be found. Each position is either a determined win, or a determined loss, or a determined draw, but in any case 'determined'.
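The point can be made concrete with a toy game (purely illustrative, not MiniMancala): players alternately remove one or two counters from a pile, and whoever takes the last counter wins. A few lines of minimax label every position a determined win or loss, with no 'advantage' anywhere in sight:

```python
from functools import lru_cache

# Toy take-away game (purely illustrative, not MiniMancala): players
# alternately remove 1 or 2 counters from a pile; whoever takes the
# last counter wins.
@lru_cache(maxsize=None)
def is_win(pile):
    """True if the player to move wins with perfect play."""
    if pile == 0:
        return False  # the previous player took the last counter
    # A position is a win if some move leads to a loss for the opponent.
    return any(not is_win(pile - take) for take in (1, 2) if take <= pile)

# Every position is determined: piles that are multiples of 3 are
# determined losses for the player to move, all others determined wins.
print([n for n in range(1, 13) if not is_win(n)])  # [3, 6, 9, 12]
```

The same exhaustive labelling is what a complete MiniMancala tree provides, just over a larger but still manageable set of positions.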
In Chess the total number of legal positions is of course of a totally different order but otherwise things are quite the same: theoretically the status of each position, including the initial one, is determined and nowhere does 'advantage' enter the equation.
'Advantage' is a human notion about a game position, rooted in the position's inherent opaqueness. This holds in particular for any turn order advantage on the first move. The extent of the initial advantage is more or less known in games like Chess and Go, so it can be compared to the extent to which mistakes influence the outcome of a game. If many mistakes can be demonstrated, the impact of a small initial advantage may be considered less significant. The great traditional games cope with it in different yet simple ways. In Chess, Shogi and Draughts, playing an equal number of games with black and white is usually considered to give a fair outcome. Go uses 'komi', a fixed or negotiable number of points awarded to the second player.
The pie rule
Hex is a relative newcomer in the realm of serious games. It is simple, drawless and deep (to use another human notion). There is an existence proof of a winning strategy for the first player, and for smaller boards brute-force divisions have been made between winning and losing opening moves. The article states that 11x11 Hex has approximately 2.4x10^56 legal positions, against 4.6x10^46 for Chess. I can't really verify these numbers, but Hex is 'infinitely complex' in human terms, and knowing a given first move is winning is not quite the same as showing how it wins.
Not surprisingly the divisions between winning and losing opening moves on small boards showed the former to be in the center and the latter along the edges. Not all winning moves were equally easy to convert to a win: the more in the center, the 'easier' the win became. This led to the idea of using a 'divide and choose' principle known as the pie rule: one player makes a first move, then the other player decides to play with that first move, or against it. The rule is also known as the 'swap rule', and though not serving all games equally well, it is yet widely applicable. In Hex it does a sufficient job. Players tend to avoid the center as well as the edges. On smaller boards for which the division between winning and losing positions is known, a player may typically choose a 'win' close to a 'lose' or vice versa. Winning a won game can be very hard in Hex. On bigger boards the exact division of winning and losing moves is not known (yet), but the general areas are and players tend to open accordingly.
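The chooser's side of the pie rule can be sketched in a few lines. The evaluation function below is entirely hypothetical and only mimics the centre-versus-edge pattern described above; in practice the 'evaluation' is the player's own judgement:

```python
def pie_choice(opening_move, estimate_advantage):
    """The second player's decision under the pie rule: swap (take over
    the opening move) if it looks strong, otherwise play against it.
    estimate_advantage is a hypothetical evaluation returning a positive
    number if the opening move favours the player who owns it."""
    if estimate_advantage(opening_move) > 0:
        return "swap"
    return "play on"

# Toy Hex-like evaluation (assumption): central openings are strong,
# edge openings weak. Coordinates on an 11x11 board, centre at (5, 5).
def toy_eval(move):
    row, col = move
    return 5 - max(abs(row - 5), abs(col - 5))  # positive near the centre

print(pie_choice((5, 5), toy_eval))  # central stone: "swap"
print(pie_choice((0, 3), toy_eval))  # edge stone: "play on"
```

The first player's incentive follows directly: make an opening move for which `estimate_advantage` is as close to zero as possible, hence the avoidance of both centre and edges.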
So far we have seen three balancing protocols; I'll summarise them briefly:
Playing an equal number of games with each color is universally applicable, but less effective in coping with a large turn order advantage. However, of the traditionals Chess has the largest first-player advantage, and it is still comfortably within this method's range. New games that have a large turn order advantage should turn to one of the balancing protocols available, or invent a new one.
Komi inherently requires a win condition based on scoring points. As a 'balancing fix' it is quick and dirty, and in the case of Go, where an enormous amount of statistical data allows a reliable estimate of the game's turn order advantage, quite effective. A typical value for komi is in the region of 5-8 points. To prevent drawn games, komi is usually set to a fractional value such as 6.5.
Given the general knowledge about its value, komi may also be subject to a swap or even an 'auction' where players alternately bid for the amount of komi they are willing to play against.
The pie rule shows more affinity with placement games than with movement games, though it by no means works for all of the former: it doesn't work for Reversi, for instance. Nor does it exclude all of the latter, although I can only think of Slither, which is a move/place hybrid.
In the world's greatest placement game, Go, a single-stone swap would work, but it might not give the required 'resolution'. If the 'resolution' of a single placement is felt to be insufficient, a 3-stone placement made by one player can be used instead. But neither version would have the convenient fractional values that komi provides, and neither would raise much support in the Go world.
A slight but inherent unfairness of the pie rule is that it's easier to choose a piece than to cut the pie. This implies a near-inconsequential advantage for the chooser.
An extension of sorts
At the end of the 18th century, 10x10 Draughts was still in its infancy and the first wonders of the new game had just started to emerge. They included a very intricate endgame that was used by a not particularly good Draughts player to make some money on the market. He was known as 'the Marquise' and he knew every nook and cranny of the endgame. In the position, White was to move, and his bet was that he would win with white and draw with black. I'm not sure whether the balancing protocol I used in Swish & Squeeze and Pylyx, and that I named after him, is exclusively my own invention, but I don't know of any other games that use it.
This protocol can be used for games that allow a variable initial set-up in a placement stage, followed by a movement stage in which the actual goal is pursued. One player makes the complete initial set-up, then the other decides on a color or on moving first. If he chooses a color, his opponent may move first. If he chooses to move first, his opponent decides on color.
The method is very suitable, if not exclusively so, for games with a simple goal and tricky tactics. It gives one player the option of presenting an initial position of which he has studied 'every nook and cranny'. But he can't have it both ways, and may end up not playing his preferred color or not moving first. So he needs a trick or two for either case. His opponent is implicitly aware of this and must try to unravel any dark tricks that may be woven into the position before deciding.
A foreseen disaster should not kill the handicapped
In most games with a significant turn order advantage, it's the first player who benefits. Introducing restrictions on movement or placement options is a simple albeit somewhat crude way to balance the situation.
In this protocol the first player must accept one or more disadvantages in placing or moving.
In Go, significant differences in rating may be overcome by allowing the weaker player to place a number of stones on certain points in advance. It is called a handicap system, but balancing differences in strength is not the subject of this article. Between evenly matched players, the chosen method can be to impose one or more restrictions on the movement or placement options of the first player. I'll give three examples.
Pente is a fairly well known 5-in-a-row game featuring 'custodian' capture and removal of pairs of adjacent like-colored stones. Its tournament rules require that the first player's first move is in the center and that his second move be at least three intersections away from it. Hexade is a far less known hexagonal configuration game, also featuring 'custodian' capture of pairs of adjacent like-colored stones. Its rules require that the first player's second move be at least three cells away from his first.
Pretty much the same, and pretty effective, because the second player can place at least one stone in between the two initial ones of his opponent. And it's quite enough too, for games that are far from trivial but not quite in the same league as the big ones. Yet the ambition may be there sometimes, and that's where Renju comes in. Renju is a more than a century old 5-in-a-row game without capture. Without a balancing rule it would be a first-player win. So first-player restrictions were introduced. But restrictions can be insufficient or overshoot their target, so refinements followed. And yet more refinements, sometimes including a 3-stone pie or the removal of stones. And they're still devising new balancing protocols, all with slightly different and 'more refined' rules.
But you can't balance a game perfectly. It's like riding a bike: you can ride on a straight track, but not in a straight line. The great games are easy to balance and Renju can't reach such greatness by attempts to balance it ad infinitum.
Trouble shared is trouble halved
In Checkers, a.k.a. English Draughts, an unbalancing protocol called 'balloted games' is sometimes used to handicap both players: the first three moves are drawn at random from a set of accepted openings. Two games are played with the chosen opening, each player having a turn at either side. Its goal is to reduce the number of draws, but in fact it only shows that some games just 'can't make it' to world-level play. Checkers is another one.
One step ahead
Here's one that shows a great affinity with placement games and hardly any with movement games, though examples to the contrary do exist. It's called the '1-2-2' protocol, usually written as 12*. One of the best known games to use it is Connect6.
There are many placement games that remain basically invariant, or even improve, if players are allowed two placements per turn instead of one. If the first player is allowed only one placement on his first turn, this protocol results in positions where each player, on his turn, gets one placement ahead of his opponent. Not every game adapts equally well to the method, but those that do may feel more balanced for it, and certainly faster.
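The 'one placement ahead' property is easy to verify with a sketch of the turn order: after every completed turn, the player who just moved is exactly one stone ahead.

```python
def twelve_star(total_turns):
    """Turn order under the 12* ('1-2-2-2...') protocol: the first
    player places one stone on his first turn, thereafter both players
    place two stones per turn."""
    for turn in range(total_turns):
        player = "first" if turn % 2 == 0 else "second"
        yield player, 1 if turn == 0 else 2

# After every completed turn the mover is exactly one stone ahead.
counts = {"first": 0, "second": 0}
for player, n in twelve_star(6):
    counts[player] += n
    print(player, counts["first"], counts["second"])
```

Running this prints the stone counts 1-0, 1-2, 3-2, 3-4, 5-4, 5-6: the lead simply alternates, one stone at a time.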
Go's character would totally change by the protocol. Ko and seki situations wouldn't emerge, or at least hardly, and groups would require at least three eyes to live. That doesn't sound like a game the world is waiting for, but the elimination of cycles may yet make it interesting enough to dig a bit deeper into the consequences.
The 12* protocol has a weird relative made notorious by Progressive Chess. Players make successively longer series of moves per turn, with giving check subject to restrictions that depend on the chosen variant. It's relatively popular in correspondence play, but it's not exactly a 'balancing protocol', more a series of fun puzzles that players exchange. A poster at BGG summarised it as follows: "I strongly suspect that progressive Chess is not a balanced game but I'm not sure which side is supposed to win".
Balancing ad infinitum
The point of this article is that 'advantage', unless explicitly demonstrable, is only rooted in human perception. For the initial position of any new or less known game the measure of it is usually no more than a foggy notion.
The Thue-Morse sequence, if used as a turn order protocol, aims to eradicate any turn order advantage ad infinitum. However, games are finite and players make mistakes, and as a rule, after a couple of moves, these mistakes turn out to be more consequential than the remnants of a slight and foggy initial advantage. So the protocol's usefulness as an actual balancing means seems to overshoot its target. However, its implementation implies keeping track of whose turn it is, and this inherently gives players the means to incorporate the occurrence of single and double moves into tactical considerations for the immediate future. That in itself is an interesting aspect.
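The sequence itself is easy to generate: move i belongs to the player given by the parity of the number of 1-bits in the binary representation of i.

```python
def thue_morse_turns(n):
    """First n terms of the Thue-Morse sequence as a turn order:
    move i goes to player 0 or 1 according to the parity of the
    number of 1-bits in i."""
    return [bin(i).count("1") % 2 for i in range(n)]

# Player 0 moves once, player 1 twice, player 0 once, and so on;
# no contiguous run is ever longer than two moves.
print("".join(map(str, thue_morse_turns(16))))  # 0110100110010110
```

The runs of at most two moves are exactly the single and double moves that players would have to fold into their tactics.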
Two steps forward one step back
In some games actions may be made that are (or at least would appear) advantageous in terms of the goal, but only at the cost of a simultaneous disadvantage, thus causing a dilemma.
Negative feedback is subject to some widely different implementations, so I'll give a couple of examples:
Catchup's goal is to have the largest group of stones on the board when the board is full. It basically uses the 12* protocol (though you may place only one stone even if entitled to two), but with a twist: if (and only if) the opponent has just 'improved', you may place three stones. 'Improving' means taking the lead, extending the lead, or catching up with the opponent's previously leading score. It's as if every intermediate sprint you make gives your opponent some extra breath. The persistence of this is at the heart of the game, which is exactly as annoying as it is supposed to be, or even more so.
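The entitlement logic can be sketched in a few lines, following the description just given. The detection of 'improving' itself, which requires comparing largest-group scores before and after the opponent's turn, is left abstract here as a boolean flag:

```python
def catchup_entitlement(turn_index, opponent_improved):
    """Maximum placements on a turn in Catchup, per the description
    above: a single stone on the very first turn, normally two, and
    three if (and only if) the opponent has just 'improved'. A player
    may always place fewer stones than entitled to."""
    if turn_index == 0:
        return 1
    return 3 if opponent_improved else 2

print(catchup_entitlement(0, False))  # 1: the single opening placement
print(catchup_entitlement(5, False))  # 2: the normal 12* entitlement
print(catchup_entitlement(5, True))   # 3: the 'extra breath' after a sprint
```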
In Yinsh each player has five 'rings' that he uses to create lines of five 'markers', read Othello pieces, with his own color face-up. Every time that sub goal is reached he removes one of his own rings, the object being to remove three of them. Getting closer to winning thus necessitates weakening oneself, which considerably complicates strategy.
This one, Hare and Tortoise, is actually a multi-player race game based on skill rather than luck, and it has an interesting way of hampering progress. The track consists of 65 squares and you can always move forward as far as you like, but only if you can afford to pay for it. You pay by consuming units of energy called carrots. The 65 carrots you start with are just enough to get you home one square at a time, spending one carrot per move, with one carrot left. But they're also enough to take you up to ten squares forward in a single leap.
There are some smart ways to gain carrots along the way, including going backward, but the further ahead you are, the fewer carrots you earn when you land on a pay-out square. That's one way to have negative feedback. The main way, however, is the cost of moving forward. The cost follows the triangular series: 1 square costs 1 carrot, 2 squares 1+2 carrots, 3 squares 1+2+3 carrots, and so on. In one formula: n squares cost n(n+1)/2 carrots.
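The formula is simple enough to check against the claims above: single steps cost one carrot each, while a ten-square leap already eats 55 of the starting 65.

```python
def carrot_cost(squares):
    """Cost of moving forward in a single turn: the triangular number
    1 + 2 + ... + n = n(n + 1) / 2 carrots for n squares."""
    return squares * (squares + 1) // 2

# Sanity checks from the text: 64 single steps cost 64 carrots
# (one left from the initial 65), and a ten-square leap costs 55.
print(carrot_cost(1), 64 * carrot_cost(1), carrot_cost(10))  # 1 64 55
```

The quadratic growth is the negative feedback: racing ahead drains carrots far faster than plodding, so the leader is always the one closest to running dry.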
Yellow is played on a board using the Cairo pentagonal tiling. Black has 21 double-pentagon pieces that are black with two opposite yellow corners; White has the reverse. The board has alternating black and yellow edges. Players alternately must place a piece, the object being to connect the opponent's sides with the opponent's colors.
The first player gets the 'cone' to indicate that he may place two pieces at any time. If he does, the cone switches sides.
Playing for the advantage
Here is a placement protocol that results in an initial position for a subsequent main game that in itself may be placement or movement based. It not only allows for but also easily adapts to a wide range of goals for that main game, like elimination, territory or connection. It has a particular affinity with games that profit from a variable initial position and are 'pit based', that is without any particular direction of play. It ends naturally with, broadly speaking, up to half the board filled, but the number of placements (and thus the 'density' of the position) may vary. However, both sides will have made an equal number of them and either player may end up being the one to move first in the main game. That's of course one of the goals to achieve in this phase. Other goals depend on the nature of the main game.
This is a special case of the 12* protocol. The first player places one stone. Next players alternately place two stones.
Obviously the first stone can always be placed, but at a certain point either one of the players will find himself unable to place the second stone. That signifies the end of the protocol and gives the first move of the main game to the opponent.
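The bookkeeping of this placement phase can be sketched independently of any particular game. The `can_place` and `make_placement` hooks below are hypothetical stand-ins for the actual game's placement rules; the sketch only shows how the turn quotas run, how the phase ends, and who gets the first move of the main game. The toy run assumes a 'board' with room for exactly 8 stones.

```python
def placement_phase(can_place, make_placement, first_player=0):
    """One stone for the first player, then two per turn alternately.
    The phase ends as soon as a player cannot complete a placement;
    the OPPONENT then gets the first move of the main game.
    can_place and make_placement are hypothetical hooks into the
    actual game's placement rules."""
    player, quota = first_player, 1
    while True:
        for _ in range(quota):
            if not can_place(player):
                return 1 - player  # opponent moves first in the main game
            make_placement(player)
        player, quota = 1 - player, 2

# Toy run (assumption): a 'board' with room for exactly 8 stones.
space = [8]
def can_place(p):
    return space[0] > 0
def make_placement(p):
    space[0] -= 1

print(placement_phase(can_place, make_placement))  # 1: the second player
```

In the toy run the first player places the eighth stone but cannot place a second one, so the second player moves first in the main game; which player gets stuck, and when, is of course exactly what both sides are manoeuvring over.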
My gratification at finding the protocol resulted in a bunch of games, three of which I'd like to mention explicitly. Inertia is a connection game of the 'unification' type: you must unify all your stones, however many there are, into one connected group. A well-known game with the same object, and similarly tricky tactics for that matter, is Claude Soucie's Lines of Action. LOA has no balancing rule, and its fixed initial position suggests that opening study and preparation would make a formidable weapon. That may be a bit undue for a game that arguably does not quite qualify as a "sports weapon" in the way Chess or Go do. Neither does Inertia, so its opening protocol makes it better suited for the recreational realm these games basically aim at. At the same time, it will 'keep its balance' if more profound scrutiny were ever applied.
The second game, Pit of Pillars, is a stacking game that shows a very game-specific form of negative feedback in its own right. Mixed stacks can be captured, whereby the opponent's men are out of the game and one's own men become reserves that may be re-entered. You win by leaving the opponent without any stacks on the board. This rather casually phrased object actually provides a unique strategic dilemma. Players will encounter positions in which they have reserves and may be able to capture even more. But a capture never increases one's presence on the board, and often reduces it. In some of these positions you can afford to make further captures without losing the game in the short term; in other, similar positions you actually run that risk. The problem is those positions where you cannot quite figure out how much breath you would have left if you made another capture. It's a delicate line to cross, and it emerges in every evenly matched game.
The third game, Io, is actually the game I was looking for when I found the one-bound-one-free protocol: a better version of Othello. Finding the protocol led me to provisionally 'store' it in an intermediate game testifying to that goal: Triccs. It took a couple of games before I returned to my original goal and found the rules to be simple, elegant and easy to implement.
Io has compulsory placement, a player will always have a legal move at his disposal, but no compulsory capture. It bridges the implicit division of the one-bound-one-free protocol by featuring capture in both phases. Its tactics are more interesting than Othello's and its strategy is less opaque. On odd-sided boards it is drawless. If an occasional draw is not considered a problem, then an Othello board is well suited to get to know the game. It doesn't even have much of a threshold.
Sowing or growing - the Symple move protocol
I found the Symple move protocol because Benedikt Rosenau felt Symple must exist, even though it wasn't invented yet. So did I, but at the time I was reluctant to wrap my mind around it to say the least. Read all about it here.
The protocol implies a dilemma between 'sowing' and 'growing' and gives a clear advantage to the first player. That's why I was delighted to find a balancing mechanism embedded in it and specific to it. As a whole the protocol is 'fairly generic', with an obvious affinity to goals that are territorial or connective or, as in Symple's case, both. It gives players at any turn the option to either 'sow', that is place a singleton, or 'grow', that is add a stone to every friendly group, compulsory or voluntary as the case may be. This already indicates that the protocol allows slightly different implementations to serve different games. I'll give three examples of that.
But let me address the protocol's inherent turn order imbalance first. Obviously placing singletons early on is a way to grow faster in a later stage, so play usually begins with single placements. If at some point White, who is the first player, starts to grow first, then Black can follow suit with the same number of singletons to grow at. But if Black wants to grow first, then White can follow suit with one more singleton to grow at. If growing continues in subsequent turns, Black will lag one more stone behind at every turn. Hardly a good reward for taking the initiative. So that was the problem, and a pie wouldn't solve it. Fortunately the solution came 'drifting up', much like the protocol itself. Here it is:
So long as neither player has grown, the second player may grow and place a singleton in the same turn.
The rule requires some reflection, so I'll leave that to the reader. Thorough reflection may lead to the discovery of a slight asymmetry, as a poster at BGG pointed out in this comment:
A neat protocol but I do have a niggle with it. If White grows first he will be left with 'n' groups of two stones versus 'n' singletons, Black to move. If Black grows first he will be left with 'n-1' groups of two stones + 1 singleton versus 'n' singletons. White to move. Why does Black's prerogative allow him to grow and then place rather than to place and then grow?
Reversing the order as suggested indeed removes this slight asymmetry and should therefore be considered by inventors who feel like using the protocol. But I don't follow the suggestion, for reasons I explained in my reply:
The lure of it is symmetry: why not make the outset exactly the same for both players? I like symmetry but I have a reservation: sometimes there's something boring about it. More importantly, it serves no purpose here in terms of improving the protocol. The source of the balance is a dilemma in which the relative value of a couple of extra stones and their future 'influence' must be measured against the relative value of having 'the move', that is, having the turn when the number of groups is equal. These 'relative values' are implicitly opaque, and a lot of experience is needed to get to a more or less reliable estimate, regardless of the game (because they behave slightly differently in, say, Symple, Sygo and Scware).
Whether this dilemma is centered around a strictly symmetric starting point or one that is slightly off center doesn't really affect its nature.
What I don't like about the idea is the fact that it implies a group that emerges and grows in the same turn. I feel an ever so slight imbalance there that is in fact exactly as inconsequential as the one you noted. But it is one that I like less than the current one.
I'll give a short summary of the workings of the protocol in three significant and significantly different games: Symple, Sygo and Scware.
Symple is the cradle of the protocol, but the game took some time to adapt to it, so Sygo actually got finished first. In Symple every stone adds a point to the final score, but every separate group is penalised with an agreed number of points 'p'. The game has compulsory placement, and in the case of growth it demands that every possible group be grown. If a player finds himself unable to grow and thus is obliged to 'invade', every placed singleton actually loses 'p-1' points. That means balanced games tend to end fairly dramatically.
Sygo is a Go variant based on othelloanian capture and the only such variant that doesn't require any additional rules to ensure 'life'. There are similarities with Symple, like the working of the balancing mechanism and the connection of groups, but being a Go variant, Sygo has no compulsory placement and players may grow selectively or pass altogether if they see reason to. Territory is counted in the same way as in Go.
Scware is a square connection game in which there is no compulsory placement, but barring a win there's no reason not to use all available placement options. The balancing mechanism will usually kick in earlier than in Symple and Sygo because a winning structure takes fewer stones.
A Symple derivative of sorts
Luis Bolaños Mures came up with this one.
Instead of making a regular move, a player may choose to make a certain special move. Once a player makes the special move, neither he nor his opponent can make it for the rest of the game. If the special move is made by the second player, he still has the right to make a regular move on the same turn.
The Symple protocol is multi-move, and that enables the balancing mechanism to be organically embedded in the game. In another move protocol that might not be so simple. The cookie's value should start considerably lower than the first player's advantage and slowly increase until either player decides to take it. The cookie itself should by its nature 'fit' the game, and if not naturally embedded, then at least as embedded as possible.
I know of no implementation of this particular idea, but I'm sure there are games in which an implementation would fit nicely. The main problem is finding the right cookie.
Take the money and sit
This one popped up in a forum discussion at BGG. It specifically addressed an experiment conducted by professor Elwyn Berlekamp. He called it "Environmental Go" but it became known as "Coupon Go". A summary can be found at Sensei's Library. Basically it is Go, but players have access to a stack of coupons with different values, usually with each value between 0.5 and 20, fractions included. On his turn a player may take a coupon instead of playing on the board. Taking a coupon, like a board play, removes any ban on taking a ko.
So you can sell initiative and the price is variable because initiative in early placements carries more weight than initiative in endgame disputes.
This method solely applies to games in which the object is to accumulate points and in which the value of a move can be more or less reliably estimated. Instead of making a placement or a move, the player is entitled to take a number of points from a limited stack of numbered cards with different values. The endscore is the sum of points a player has secured on the board plus the points he has 'in hand'.
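The core decision of a turn can be sketched as follows. `estimated_best_move_value` stands for the player's own fallible judgement, which is exactly where the balance comes from: the winner is whoever estimates better.

```python
def coupon_turn(coupons, estimated_best_move_value):
    """The Coupon Go style decision, sketched: take the top remaining
    coupon if its value exceeds your estimate of the best board move,
    otherwise play on the board. coupons is a stack of values in
    descending order; taking a coupon consumes it."""
    if coupons and coupons[0] > estimated_best_move_value:
        return "take coupon", coupons.pop(0)
    return "play on board", None

stack = [20, 19.5, 19, 18.5]     # top of the descending coupon stack
print(coupon_turn(stack, 25))    # early game: board moves are worth more
print(coupon_turn(stack, 12.5))  # later: the coupon is the better 'move'
```

Early on the board move wins, later the coupon does, which is precisely the 'selling initiative at a variable price' described above.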
I understand why Berlekamp chose Go, because it fits the criteria. At the same time I can see a simpler game specifically fitting this protocol. Go may be considered simple in terms of structure, but it's hellishly complex to play. Come to think of it, Othello might be worth a try. But it's a very interesting protocol, and designing a game around it should be a nice challenge. Obviously the design itself would require balancing the number, range and distribution of value cards against the number of points available on the board. Given that, such a game would seem to be implicitly balanced, because even if the value of a move is not always totally clear, the winner will be the one whose estimates were better.
As a simple example I suggest Swish & Squeeze.
Both games require a player to capture at least 10 of the 19 available beads. Now add eight 'value cards', two each of the values 1 to 4, totalling 20 points. All rules remain the same, including the 'Marquisian method' of setting up the game. The only difference is that on his turn a player may take a card instead of making a move. The object is now to get at least 20 points.
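A quick sanity check of the arithmetic: board and cards together are worth 39 points, so the 20-point target is a strict majority, and (assuming every bead and card ends up with one of the players) draws are impossible.

```python
# Points available in the proposed variant: 19 beads on the board plus
# eight value cards (two each of the values 1 to 4) worth 20 in hand.
beads = 19
cards = [1, 1, 2, 2, 3, 3, 4, 4]
total = beads + sum(cards)
print(total)           # 39 points in all
print(total // 2 + 1)  # 20: the target is a strict majority of an odd total
```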
Enschede, February 2017,