If everyone around you were overfishing, what would you do: join them, or take only your fair share? In many situations you would seem quite justified in joining them. Yes, this makes it more likely that the fish stock will run out, but that is almost guaranteed anyway by the actions of others, so you might as well get as much as you can while there are any fish left at all.
Many will recognise this as a classic tragedy-of-the-commons problem; put more broadly, it is one of a larger set of coordination problems. I use the term 'coordination problem' slightly more broadly than is standard: for example, I lump collective action problems and coordination problems together. To me, coordination problems are simply situations where, if the agents in a group took different actions, there could be a positive-sum gain for all of them.
As we will see, there are many examples of situations that fit this definition, as well as many solutions society has come up with to align people in more beneficial ways. Below I outline a number of techniques that have been used to overcome one of the most difficult issues society faces: incentivising people to act in the best interests of the group.
This section looks at a few simple illustrations of coordination failures where agents have two alternative choices they could make.
Prisoner's dilemma: both agents would be better off if they held out, but their individual incentives mean it is always better to confess regardless of what the other player does, so (Confess, Confess) is the natural result of the game as it stands. We end up with a single, non-payoff-dominant Nash equilibrium.
Coordination game: there are multiple Nash equilibria, but agents could be stuck at a non-payoff-dominant one, and no unilateral action can change this.
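As a concrete sketch, these two games can be written as payoff tables and their pure-strategy Nash equilibria found by brute force; the payoff numbers are illustrative assumptions, not canonical values:

```python
def nash_equilibria(payoffs, rows, cols):
    """Pure-strategy Nash equilibria: outcomes where neither player can
    gain by deviating unilaterally."""
    eq = []
    for r in rows:
        for c in cols:
            u_r, u_c = payoffs[(r, c)]
            if all(payoffs[(r2, c)][0] <= u_r for r2 in rows) and \
               all(payoffs[(r, c2)][1] <= u_c for c2 in cols):
                eq.append((r, c))
    return eq

# Prisoner's dilemma (illustrative numbers): confessing strictly dominates.
pd = {("hold out", "hold out"): (2, 2), ("hold out", "confess"): (0, 3),
      ("confess", "hold out"): (3, 0), ("confess", "confess"): (1, 1)}
assert nash_equilibria(pd, ["hold out", "confess"],
                       ["hold out", "confess"]) == [("confess", "confess")]

# Coordination game: two equilibria, of which (A, A) is payoff dominant.
coord = {("A", "A"): (2, 2), ("A", "B"): (0, 0),
         ("B", "A"): (0, 0), ("B", "B"): (1, 1)}
assert nash_equilibria(coord, ["A", "B"], ["A", "B"]) == [("A", "A"), ("B", "B")]
```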
We can extend these games to many agents. Here we assume all other agents act in the same way and look at a marginal agent's options. Because the other agents all act identically, and the one agent is such a small part of the whole system, their payoffs won't be changed by this one agent's actions. The marginal agent's payoffs are the numbers to the left of the commas.
Tragedy of the commons: there is a single Nash equilibrium in which all agents overfish this period.
Bank run model: there are two stable equilibria, (Run, Run) and (Hold, Hold), depending on what the rest of the agents are doing.
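A minimal sketch of the marginal agent's problem in these two many-agent games; the payoff numbers are assumptions chosen only to reproduce the structures described above:

```python
# Payoffs are payoff[(my action, what everyone else is doing)]; the numbers
# are assumed, picked only to match the structure of each game.
commons = {("overfish", "overfish"): 1, ("overfish", "restrain"): 3,
           ("restrain", "overfish"): 0, ("restrain", "restrain"): 2}
bank = {("run", "run"): 1, ("run", "hold"): 2,
        ("hold", "run"): 0, ("hold", "hold"): 3}

def best_response(payoff, others_action):
    """The marginal agent's best action, given what everyone else does."""
    my_actions = {mine for mine, _ in payoff}
    return max(my_actions, key=lambda a: payoff[(a, others_action)])

# Overfishing is best whatever the rest do: a single equilibrium...
assert best_response(commons, "overfish") == "overfish"
assert best_response(commons, "restrain") == "overfish"
# ...while in the bank run the best move mirrors the crowd: two equilibria.
assert best_response(bank, "run") == "run"
assert best_response(bank, "hold") == "hold"
```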
If we look at the characteristics of where a system might end up, it helps to consider a few different types of dominance. The first is strategic dominance. This occurs when one option is better for you regardless of the actions of other agents. This is the kind of dominance that, when it goes wrong, can lead to the prisoner's dilemma. It is not present in the coordination game, where your best response is conditional on the actions of the other players.
Another type of dominance is payoff dominance. An equilibrium is payoff dominant if it is Pareto superior to all other equilibria, i.e. moving to it from any other equilibrium leaves no agent worse off (and at least one better off). This is, almost by definition, not going to hold at the equilibria our solutions aim to change; otherwise we wouldn't need to change them.
The last kind of dominance is risk dominance. This kind is particularly interesting from a game theory perspective. In the analysis of the games above one typically looks at the payoffs and then decides on a strategy; there is little talk of risk. Despite this, what often happens is that agents end up at a risk-dominant equilibrium. This is certainly the case in the prisoner's dilemma: while the Nash equilibrium there is not payoff dominant, it is typically risk dominant.
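For symmetric 2x2 games, a convenient way to find the risk-dominant action (equivalent in this setting to the Harsanyi-Selten criterion) is that it is the best response to an opponent who mixes 50/50. A sketch with assumed payoffs where the risk-dominant equilibrium is not the payoff-dominant one:

```python
def risk_dominant_action(payoff, actions):
    """In a symmetric 2x2 game, the risk-dominant action is the best
    response to an opponent playing each action with probability 1/2."""
    expected = lambda a: sum(payoff[(a, b)][0] for b in actions) / len(actions)
    return max(actions, key=expected)

# Illustrative coordination game: (A, A) pays more, but B is safer since it
# guarantees 4 whatever the other player does.
game = {("A", "A"): (5, 5), ("A", "B"): (0, 4),
        ("B", "A"): (4, 0), ("B", "B"): (4, 4)}
assert risk_dominant_action(game, ["A", "B"]) == "B"
```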
These present different problems to overcome. In the prisoner's dilemma setting, what one needs to overcome is the incentive to deviate from the payoff-dominant outcome. In the coordination game, what one needs is a stimulus to move agents to a different equilibrium, the hope being that the new equilibrium will be self-sustaining.
There are clearly considerations missing from these models, and it is in these details that many of the solutions emerge, so let's consider a few things these models assume.
- Either two agents or enough agents so that your individual actions can’t change the outcome at all. There are clearly intermediate options.
- Perfect information. You actually know what payoffs you and the other agents will receive.
- Games are played once and in isolation from any other systems
- Only two choices
- Actions have one definite payoff, e.g. the payoff isn't just the expected value of a choice, which in practice could be very different from what an agent actually receives
- You can’t credibly commit to an option ahead of time
- Homogeneous agents
Spectrums of solutions
There are a few different axes I want to consider along which coordination solutions lie. The first is the type of coordinating agent. This entity can take many forms, for example:
- The Government (directly) e.g. assigning production quotas to prevent overfishing
- The Government (indirectly) e.g. creating legal systems that uphold contracts
- Exogenous non-governmental agents e.g. an insurance company
- Agents within the system e.g. many irrigation systems
- The market e.g. tradable carbon permits
- Technology e.g. Kickstarter
One thing to note is that many solutions involve multiple levels of coordinating agents: there are the direct schemes used to coordinate, but then there is often the larger system in which these schemes operate that gives them credibility.
The second axis is the coordinating scheme; it is worth noting that in many cases the options overlap:
- Increasing trust within the system
- Removing the need for trust
- Increasing the risk of not participating in the socially optimal way
- Decreasing the risk of acting in the socially optimal way
I'm not going to focus particularly on direct government solutions. The main reason is that most people aren't 'the government' and lack the foundational powers that allow it to behave in ways drastically different from other agents.
The above has all been fairly abstract; from here on we will mostly deal with concrete systems when looking at how coordination problems have been solved. One of the main reasons for this is that, as Elinor Ostrom notes in her book 'Governing the Commons', most of the solutions and problems lie in the institutional details, not the broad abstract structures. So, without further ado, what are some strategies that have been used to solve coordination problems?
One of the most common ways to solve coordination problems throughout history has been to commodify risk. The idea is that, for any individual agent, the risk they bear from a course of action is too large even if the action has a positive expected value, so they don't undertake it (or too few people do). If there is a larger system in which some entity can buy this risk off many people, and hence diversify it, that entity can capture the positive expected value of the arrangement without risking ruin on any single bet.
Take the following example. Merchants know that on average 30% of ships are lost at sea with all their cargo. If this happens, the merchant loses all their possessions and wealth and may even be sent to prison over unpaid bills. If the ship is successful, the merchant doubles their money. For any individual merchant the risk of ruin may be too large to take on the positive-expected-value bet, but it is clear that if all the merchants completed their journeys and simply shared the gains, all could be better off. Marine insurance can partially achieve this: the successful merchants, who don't need to claim on the insurance, end up offsetting the losses of the unsuccessful ones. The contract allows risk to be diversified and merchants to engage in a positive-expected-value wager, to the benefit of all.
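The arithmetic of the example, with the merchant's stake normalised to 1; the premium is an assumed figure sitting between the insurer's expected payout and the merchant's surplus:

```python
# Numbers from the example: 30% of ships are lost; a successful voyage
# doubles the merchant's stake. The 0.4 premium is an assumption.
P_LOSS, SUCCESS_MULTIPLE, STAKE, PREMIUM = 0.3, 2.0, 1.0, 0.4

# Uninsured: positive expected value, but a 30% chance of total ruin.
ev_uninsured = (1 - P_LOSS) * SUCCESS_MULTIPLE * STAKE
assert abs(ev_uninsured - 1.4) < 1e-9

# Insured: pay the premium, and be made whole if the ship sinks.
ev_insured = (1 - P_LOSS) * SUCCESS_MULTIPLE * STAKE + P_LOSS * STAKE - PREMIUM
assert abs(ev_insured - 1.3) < 1e-9
# The merchant gives up a little expected value (1.3 < 1.4) in exchange for
# never being ruined; pooled over many ships, the insurer's book is safe.
```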
Here we see exogenous non-governmental agents coming in and changing the risk profile of maritime trading. What they do, in effect, is take a fee to turn individually high-risk endeavours into collectively lower-risk ones.
If the insurance contracts are the first-level solution to the coordination problem, what are the second-level ones? In this case I would say there are two. The first is a legal system willing to enforce the contracts themselves; this builds into the insurance a level of trust that is absent in the initial system. The second is repeated contracts: if you don't pay out on an insurance contract when you should, then while you save money now, you are likely to lose business in the future. Since offering insurance has positive expected value, reneging is against your own incentives.
Assurance contracts and contingent claims
These two categories are often very similar in mechanism and outcome, so I've lumped them together. The basic mechanism in this section is that some action is triggered either when a certain threshold is met or, in other cases, when it isn't. This action should change your incentives, or distort the risk of taking certain actions, better aligning group and individual preferences.
There are two main types of assurance contracts I look at. The first is strictly costless contracts: ones where no contingent capital is promised as an incentive corrector, and system design alone creates coordination. The second type is theoretically costless assurance contracts: situations where capital is used to change people's incentives, but where, in theory, this changes people's actions in such a way that the capital never actually needs to be paid, hence 'theoretically' costless.
Let's start with the first type, strictly costless contracts. The best example is companies like Kickstarter. You pledge money for a product; if enough others contribute, the product has enough money to get made, and if not, you get your money back. This lets people coordinate to produce goods without having to worry about what will happen to their money should too few others contribute. Given the power of this kind of contract, it is used surprisingly little.
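The settlement rule is simple enough to sketch directly; this is a simplified model of the Kickstarter mechanism, not its actual implementation:

```python
def settle_assurance_contract(pledges, goal):
    """All-or-nothing funding: collect the pledges only if the goal is met,
    otherwise refund everyone in full."""
    total = sum(pledges.values())
    if total >= goal:
        return {"funded": True, "collected": total, "refunds": {}}
    return {"funded": False, "collected": 0, "refunds": dict(pledges)}

# The goal is missed, so nobody loses their money.
result = settle_assurance_contract({"ann": 50, "bob": 30}, goal=100)
assert result == {"funded": False, "collected": 0,
                  "refunds": {"ann": 50, "bob": 30}}
```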
What about theoretically costless contracts?
An example of these is Alex Tabarrok's idea of dominant assurance contracts. This is similar to the above but aims to mitigate the indifference you might feel towards funding a good if you don't expect enough others to contribute. You wouldn't lose money if too few people fund it, but you may see the endeavour as not worth your time and/or attention if you don't think it will be funded.
Suppose you need N contributors to get a project funded. If N or more people contribute, the project is funded, as with a standard assurance contract. If fewer than N contribute, the entity seeking funding pays a prize to everyone who did contribute. Contributing is now a strictly dominant strategy, so this should incentivise enough people to contribute that the prize never has to be paid.
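A sketch with assumed numbers, in the simplest version where all N agents are needed: whatever the others do, contributing pays more, so in equilibrium everyone contributes and the prize is never paid.

```python
# Assumed numbers: the good is worth VALUE to each agent, contributing costs
# COST, and each contributor gets PRIZE (plus a refund) if the threshold is
# missed. Simplest case: every one of the N agents is needed for funding.
VALUE, COST, PRIZE = 10, 6, 1

def payoff(i_contribute, all_others_contribute):
    if i_contribute and all_others_contribute:
        return VALUE - COST   # good funded, you paid your share
    if i_contribute:
        return PRIZE          # threshold missed: refund plus the prize
    return 0                  # you stayed out and the good wasn't funded

# Contributing pays more whatever the others do: a strictly dominant strategy.
assert payoff(True, True) > payoff(False, True)    # 4 > 0
assert payoff(True, False) > payoff(False, False)  # 1 > 0
```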
Another group of things I am placing under theoretically costless assurance contracts is many forms of contingent claims. I'm doing this because they share much of the same structure: using thresholds and capital to shift incentives towards the social optimum.
The first example of this is deposit insurance. If you believe everyone else will run to the bank to withdraw their money, it is in your interest to do the same. What this can mean, though, is that what started as a manageable level of withdrawals quickly cascades into a situation where the bank really doesn't have enough money to meet all the claims. The core issue is that if you think enough others will run on the bank, it really is in your interest to run too.
Deposit insurance aims to correct this. The promise is that whatever the rest of the depositors do, your money is guaranteed by the government. The aim is that this removes the incentive to run on the bank, so depositors don't run, and the government never actually has to pay out the deposit insurance.
This is a good example of why these contracts are only theoretically costless. While deposit insurance seems to have worked very well at preventing consumer bank runs, it has nonetheless been a rather expensive programme. This is because if deposits are lost through the banks' own recklessness, rather than through consumers rushing to withdraw, the government still has to pay the depositors, and it turns out banks are reckless enough to make this very expensive.
Another example is a contingent wage subsidy. The idea here is to move the economy from a low-employment equilibrium to a high-employment one. It aims to achieve this by promising employers a hiring subsidy, contingent on the aggregate level of hiring being below some threshold. This should make the marginal cost of hiring low enough that employers take on workers, which should then push employment high enough that the subsidy never has to be paid.
As one can see, these contracts come in a particularly broad array, from the technological to those created directly by government. Most of the time they require some external agent with a high level of credibility, but they are an incredibly powerful mechanism for aligning people's behaviour.
One aspect the basic games at the beginning lacked was any sense of where they sat in relation to the rest of the system around them. For example, much of the literature shows that if you repeat the prisoner's dilemma over and over, it can be in your best interest to stick with the socially beneficial outcome. If you don't, the other agents can punish you in subsequent periods and you end up worse off.
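A minimal sketch of this logic, the so-called grim trigger strategy, with illustrative payoffs (T for the temptation to defect, R for mutual cooperation, P for mutual punishment) and a per-period discount factor:

```python
# Grim trigger in the repeated prisoner's dilemma, with illustrative payoffs:
# T (temptation to defect) > R (mutual cooperation) > P (mutual punishment).
T, R, P = 3, 2, 1

def cooperation_sustainable(delta):
    """Cooperate forever (R each period) vs defect once (T now) and be
    punished with P forever after; delta is the per-period discount factor."""
    coop_value = R / (1 - delta)
    defect_value = T + delta * P / (1 - delta)
    return coop_value >= defect_value

# With these numbers cooperation holds once delta >= (T - R) / (T - P) = 0.5,
# i.e. once the players care enough about future periods.
assert not cooperation_sustainable(0.4)
assert cooperation_sustainable(0.6)
```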
This is in fact the way many systems solve coordination problems. The agents interact over and over again, and so even though they could benefit this period by cheating, they don't. There are two subsets I want to consider: repeated partially anonymous interactions and repeated non-anonymous interactions.
By partial anonymity I mean that the agents in the system either deal with different agents over time, with all agents knowing each other's history of actions, or that the agents involved have no personal connection with each other. Think of one company dealing with another. In this case the game-theoretic answer, don't deviate this period because you will be punished later, carries a fair bit of weight. Do you want to be known as the company that repeatedly used shady accounting to hit a quarterly target? Doing it once might well benefit you, but do it over and over and there will be reputational costs to contend with.
In the non-anonymous case I think other factors are much more in play. As you interact with the same people over and over, you move closer to what some call 'team reasoning': your individual payoffs come to resemble the collective payoffs of the group. Yes, you could screw someone over this period, but that harms them, and now that you care about their welfare as well as your own, this may more than offset any gain you received.
The other consideration around interactions I want to look at is whether the game you play is nested in some larger system. In many cases this overlaps with repeated interactions, but not always. Say you are in a fishing village. You not only fish with the same people, but buy groceries from them, send your kids to school with theirs, and drink at the same pubs. If you now cheat whatever system the village has in place to increase your fish yield in one period, this can sour your other interactions, which may again make the net payoff not worthwhile.
The above is in some ways an overly formal way of saying that people care about each other and not just about how much fish they catch, but the point is a strong one. These systems often require no external agents to create or enforce them, and so hold a special place among solutions to coordination problems.
There are two types of knowledge to look at here: asymmetric and common. For this section, asymmetric knowledge is when only some of the interested agents know something; common knowledge is when all agents have the same knowledge and are aware that all other agents have it too.
As an example of asymmetric knowledge, take anaesthesia deaths in the USA. For a period of time, deaths from anaesthesia stabilised at rates over 100x what they are now. This clearly seems like a situation where all parties would be better off with lower death rates: hospitals would have fewer deaths and fewer patients would die. One reason for the problem was that no patients or families knew how many other patients were dying from anaesthesia, and so they couldn't tell whether a family member's death was part of some larger issue or a genuine case of medical risk.
No hospital had an incentive to publish this data, though. First, they wouldn't get any benefit from it: most people went to their local hospital rather than seeking out death-rate metrics and choosing the best hospital. Second, if you were the only hospital to publish this data you might well look very bad indeed and invite a whole host of lawsuits.
It turns out that once this death rate stopped being asymmetric knowledge, it fell dramatically, as people could see that what happened to their loved ones wasn't just a freak incident.
Common knowledge, and the coordinating force it has, is the focus of Michael Suk-Young Chwe's book Rational Ritual. Public events can create action not just, or even mainly, because of some heightened emotional appeal, but through the act of giving rational agents common knowledge.
One of the main drivers of common knowledge leading to actions that benefit the group is in its ability to dispel pluralistic ignorance. In these cases every member of some group may believe something but be unable to know for certain that the other members of the group feel the same way. As soon as there is some event that lets them all know that the others think the same they can ‘safely’ act as a coordinated unit.
For example, there is a classic game theory problem, identical in setup to the coordination game at the beginning, called the stag hunt. The idea is that if every member of the hunt bands together they can kill a stag; however, each member could instead go off and kill a rabbit on their own. Killing a rabbit is guaranteed but yields less meat than a share of the stag. The issue is that unless you know everyone else will join in hunting the stag, it is risk dominant to go off and hunt rabbit.
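With assumed numbers (a share of the stag worth 4 if the others join and 0 if they don't, a sure rabbit worth 2), the risk calculation reduces to a threshold belief about the other hunters:

```python
# Stag hunt with assumed payoffs: the stag pays off only if the others join;
# the rabbit is a guaranteed but smaller meal.
STAG_WIN, STAG_LOSE, RABBIT = 4, 0, 2

def stag_is_worth_it(p_others_join):
    """Hunt the stag only if its expected payoff beats the sure rabbit."""
    expected_stag = p_others_join * STAG_WIN + (1 - p_others_join) * STAG_LOSE
    return expected_stag >= RABBIT

# The indifference point is p* = (RABBIT - STAG_LOSE) / (STAG_WIN - STAG_LOSE),
# i.e. 0.5 here: below that belief you hunt rabbit, above it you join the hunt.
assert not stag_is_worth_it(0.4)
assert stag_is_worth_it(0.6)
```

Anything that credibly pushes each hunter's belief about the others above that threshold is enough to flip the equilibrium.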
One way to solve this could be to hold a meeting the night before the hunt and have all participants declare their willingness and commitment to hunt the stag the next day. While this does bring the game somewhat into a larger system, the actions are still endogenous, as no outside agents are needed. With everyone knowing that everyone has said they will hunt the stag, each agent can commit to the action the next morning with a much lower chance of being left hunting alone.
Another example could be protesting. It might be that everyone in a country wants to protest against the government, but they don't know that the rest of their countrymen share this view. If some event credibly suggests to people that others feel the same way they do, it may well be enough of a push to move citizens into a protest equilibrium.
These kinds of solutions often require some kind of exogenous start, e.g. mandating that hospitals share death statistics, but once this initial push happens, little sustained effort is usually needed to maintain the new equilibrium, because individual incentives don't need to change.
Suppose you are a university. Are you incentivised to provide the best possible education you can? It seems the answer is yes; after all, one would be forgiven for thinking that the aim of a university is to provide the best education for the students who attend it.
The structure of most universities, however, means that this goal isn't necessarily aligned with what is best for them. Universities obviously care about their reputation, but many also care a great deal about the amount of money they can generate. Money is paid by students upfront, so the university has an incentive to push as many students through as possible while spending as little as possible on them.
I am not saying that universities don't care at all about their students' education, or that money is their number one priority. I use this example mainly because it is one where there can be an easy-to-see misalignment of incentives that many people will be familiar with.
ISAs (income share agreements) are a way to try to change this. They tie how much funding an organisation gets directly to how well the students who went there do. One example is Lambda School: you only start paying them back once you earn over a certain amount, so they are directly motivated to get you up to scratch, since they only profit on a student who ends up earning enough.
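The repayment logic of a simple ISA can be sketched as follows; the threshold, share, and cap are assumed figures for illustration, not Lambda School's actual terms:

```python
# A simplified income share agreement; all three parameters are assumptions.
THRESHOLD = 50_000   # annual income below this: pay nothing
SHARE = 0.17         # fraction of income owed while above the threshold
CAP = 30_000         # lifetime ceiling on total repayments

def isa_payment(income, paid_so_far):
    """This year's payment: the school only earns when the student does."""
    if income < THRESHOLD or paid_so_far >= CAP:
        return 0.0
    return min(SHARE * income, CAP - paid_so_far)

assert isa_payment(40_000, 0) == 0.0                 # low income: school gets nothing
assert abs(isa_payment(60_000, 0) - 10_200) < 1e-6   # 17% of 60k
assert isa_payment(60_000, 25_000) == 5_000          # capped at the remainder
```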
This whole process increases the risk to the organisation of not providing the best education possible. The coordinating agents here can take a few different guises. There is obviously a two-party system of agents, the students and the institutions, who are endogenous. What is exogenous is the enforcement mechanism; in the case above this is typically the rule of law established by governments, though as we will see later this need not always be the case.
Some may have thought to themselves, 'aren't ISAs just another form of contingent claim, and haven't you already talked about contingent claims?' The answer to both is yes. ISAs are a form of contingent claim, but they are different enough from the earlier ones that I wanted to separate them. In the earlier examples, deposit insurance and the like, the 'contingent' part came from some aggregate behaviour; here, the contingent contract can be set up between just two individuals, so it does not have the same mechanics.
Common pools of resources
If one talks about how coordination problems have been overcome without mentioning Governing the Commons, one will almost certainly have missed many areas. Elinor Ostrom's work is far-ranging and incredibly detailed in its efforts to examine how groups have solved their own tragedy-of-the-commons problems.
She outlines eight principles that most long-standing common-pool resources (CPRs) tend to exhibit, either implicitly or explicitly. Before looking at the principles, I want to look at one of Ostrom's case studies and see what makes it work well.
In the village of Alanya, Turkey, there was a problem: there were only so many fishing spots, and everyone kept going to the best ones and overfishing them. So a system was devised. Every September a list of eligible fishers was prepared and all the usable fishing locations were named and listed. The eligible fishers drew lots for locations; from then until January each fisherman moved one spot east each day, and from January until March they moved west. For the rest of the year there were no designated spots.
What are some features that make this scheme work?
- There are clearly defined boundaries to avoid dual claims to a particular area; at no point between September and March will a fisherman be unaware of which lot has been assigned to them.
- The rules were drawn up with local conditions in mind, i.e. how to divide up the surrounding waters. It might be that the differences between sites are only subtle, or that the fish have a particular migratory pattern, which an outside observer would find very hard to discern and so might design an inappropriate system without local knowledge.
- The issue of monitoring is neatly solved. Under this system, if you have been allotted one of the better areas you are very likely to want to use it. This makes it very difficult to cheat and use a better spot than you have been allocated, because the rightful user will almost certainly be there.
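The rotation scheme above can be sketched as a daily assignment function; this is simplified (the January reversal and the open season are left out) and the fisher names are made up:

```python
import random

def alanya_rotation(fishers, spots, seed=0):
    """Sketch of the Alanya scheme: draw lots once, then shift every fisher
    one spot along each day (modelled as +1 modulo the spot list)."""
    order = fishers[:]
    random.Random(seed).shuffle(order)  # the September lottery
    def assignment(day):                # day = days since the draw
        return {f: spots[(i + day) % len(spots)] for i, f in enumerate(order)}
    return assignment

assign = alanya_rotation(["Ali", "Berk", "Can"], ["spot1", "spot2", "spot3"])
# Every fisher holds exactly one spot, and no spot has a dual claim...
assert sorted(assign(0).values()) == ["spot1", "spot2", "spot3"]
# ...and each fisher moves to a different spot the next day.
assert all(assign(0)[f] != assign(1)[f] for f in assign(0))
```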
Another point Ostrom mentions a number of times is sanctions. She notes that in CPR systems the sanctions in place are often graduated and start surprisingly low. This seems to allow mistakes to be punished without crippling the rule-breaker, and even allows the rules to be broken in times of dire need, all without undermining the sense that the system is working.
One thing that is also stressed is that in many of these situations there are repeated interactions with agents whom you can model fairly well. The fact that many participants share a set of norms allows agents to gauge how others will react to the system and whether, and to what extent, they are likely to behave opportunistically.
Smart contracts utilise blockchain technology to create a trustless environment for exchange (well, both parties need to trust the underlying blockchain, but for the purposes of this post what matters is that you don't need to trust each other). The idea is that signatories draw up an agreement that can be described mathematically, and the contract then works like an if-then statement: if this person transfers x units of Ethereum into my wallet, then the deed to my house is transferred to them. They might think I am a lying, good-for-nothing cheat, but as long as the deed can be specified in a way that lets someone verify ownership and transfer it using blockchain technology, I can't cheat them even if I wanted to, and even if they don't trust me.
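The if-then structure can be sketched in ordinary Python; a real smart contract would be written in a language like Solidity and run on-chain, and the wallet balances and deed registry below are hypothetical stand-ins for blockchain state:

```python
class EscrowContract:
    """IF the buyer's payment arrives THEN the deed changes hands; once
    deployed, neither party can interfere with this rule."""
    def __init__(self, price_eth, deed, seller, buyer):
        self.price_eth, self.deed = price_eth, deed
        self.seller, self.buyer = seller, buyer
        self.settled = False

    def on_payment(self, balances, deeds):
        if not self.settled and balances.get(self.buyer, 0) >= self.price_eth:
            balances[self.buyer] -= self.price_eth
            balances[self.seller] = balances.get(self.seller, 0) + self.price_eth
            deeds[self.deed] = self.buyer
            self.settled = True
        return self.settled

balances = {"buyer_wallet": 50}                 # hypothetical chain state
deeds = {"house_deed": "seller_wallet"}
contract = EscrowContract(40, "house_deed", "seller_wallet", "buyer_wallet")
assert contract.on_payment(balances, deeds)
assert deeds["house_deed"] == "buyer_wallet" and balances["seller_wallet"] == 40
```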
This enforcement mechanism is rather different from the others considered. It fundamentally tries to remove the need for mutual trust, whereas many of the mechanisms we have looked at so far aim to build trust up within a system. It is also completely reliant on technology: no single group or person controls whether these exchanges occur, and no one could really stop them even if they wanted to, given the distributed nature of blockchain technology.
One particularly interesting thing about smart contracts is that they are one of the first coordination mechanisms that are possible only because of the computer revolution. This means that they are one of the first technologies to leverage one of the most powerful advancements in history towards the goal of creating better coordination among people all over the world.
Why does this matter so much? The world is, on the whole, trending towards openness: you can buy something from someone anywhere from Boston to Bangladesh and never have any contact with them before or after. Most of the systems above rely either on building trust or, at some level, on a country's government acting as an enforcer. With cross-border transactions, and when engaging with agents one knows little about, these two systems become much harder to rely on and so can break down.
Outside of model considerations
There are clearly reasons people are able to cooperate for which game theory isn't the best lens: things like wanting to play a part in a larger narrative, or plain altruism. These are crucial in the real world, and I may look at them further at some point, but this piece is mainly focused on mechanisms that would lead a (semi-)rational agent, or collection of agents, to act in socially optimal ways.
This is more of a reference piece than a finished work, so I will probably add to it over time. If there is anything you think I've analysed incorrectly, or any other mechanisms you think deserve a mention, please feel free to reach out at @neil_b_hacker; I'd love to hear any suggestions.