How to outsmart the Prisoner’s Dilemma - Lucas Husted
There are several things related to the math and the infinite nature of the problem that were slightly glossed over. The first is why we need a discount rate at all. The discount rate is a nice addition to the game because it captures the fact that players might care more about immediate success than about future (uncertain) success, but it is also mathematically necessary when considering infinite games like this one. Without it, the payoffs for every strategy would be infinite! Imagine the discount rate were set to 1 (so no discounting). The stream of payoffs that any player considers for cooperation would be 3 today, 3 tomorrow, 3 the next day, and on and on, for all time. They would compare this to 4 today and then 1 tomorrow, 1 the next day, and so on. Without discounting, both of these infinite streams add up to infinity. We cannot compare them!
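The comparison above can be checked numerically. The sketch below uses the payoff streams from the lesson (3 forever versus 4 today and 1 thereafter); the discount rate of 0.9 and the function names are illustrative choices, not part of the lesson.

```python
def discounted_sum(payoffs, delta):
    """Present value of a payoff stream: sum of payoff_t * delta**t."""
    return sum(p * delta**t for t, p in enumerate(payoffs))

T = 1000  # truncate the infinite streams for a numeric check
cooperate = [3] * T            # 3 every period
defect = [4] + [1] * (T - 1)   # 4 today, then 1 forever

# Without discounting (delta = 1), both truncated sums simply grow with T,
# so both head to infinity and cannot be compared:
print(discounted_sum(cooperate, 1.0))  # 3000.0
print(discounted_sum(defect, 1.0))     # 1003.0

# With discounting, both present values are finite and comparable:
delta = 0.9
print(round(discounted_sum(cooperate, delta), 2))  # 30.0, i.e. 3/(1 - 0.9)
print(round(discounted_sum(defect, delta), 2))     # 13.0, i.e. 4 + 0.9/(1 - 0.9)
```

With a discount rate of 0.9, cooperation is clearly the better stream; the truncation at 1,000 periods changes the totals by a negligible amount because the discounted tail vanishes.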
So the discount rate is a necessary part of the infinite game. Additionally, it allows us to use the closed form for the sum of a geometric series,

a + ar + ar² + ar³ + ⋯ = a / (1 − r)

when r is less than 1, to get nice solutions to the problem. You can find more on the geometric series here.
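The closed form a/(1 − r), and the cooperation condition it yields, can be verified in a few lines. This sketch assumes, as in the lesson's payoff streams, that a defector earns 4 once and then 1 forever while cooperators earn 3 forever; the function names are illustrative.

```python
def geometric_partial_sum(a, r, n):
    """First n terms of a + a*r + a*r**2 + ... (converges to a/(1-r) for |r| < 1)."""
    return sum(a * r**t for t in range(n))

# Numeric check of the closed form:
a, r = 3.0, 0.9
print(abs(geometric_partial_sum(a, r, 10_000) - a / (1 - r)) < 1e-6)  # True

# Cooperation pays when 3/(1-d) >= 4 + d/(1-d). Multiplying through by (1-d)
# gives 3 >= 4 - 3d, i.e. the player must be patient enough that d >= 1/3.
def cooperation_pays(d):
    return 3 / (1 - d) >= 4 + d / (1 - d)

print(cooperation_pays(0.2))  # False: too impatient, cheating wins
print(cooperation_pays(0.5))  # True: patient enough to sustain cooperation
```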
Finally, you may be wondering what would happen if we repeated this game a finite number of times instead of an infinite number of times. Surely if the players repeat it 20 or 30 times, they can cooperate just as they do in the infinite game, right? The answer is no. The reason can be found by reasoning backwards from the final period, a technique called backward induction. Take the last period: there is no "next day," so the players face the same incentive as in the one-shot prisoner's dilemma, and they both cheat on each other. But if that happens, then on the day before the last day, both players know they will cheat in the last period no matter what happens today, so they have no incentive to cooperate now to secure a better payoff later. Again cooperation breaks down. This reasoning can be continued backwards all the way to the first period.
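The unraveling argument can be sketched in code. The payoff numbers below follow the lesson (temptation 4, mutual cooperation 3, mutual defection 1, and 0 for a betrayed cooperator — the last value is an assumption, since the lesson never states it); the key fact is that the stage game has a unique Nash equilibrium, so the continuation payoff after any history is the same constant, cancels out of every period's comparison, and the stage equilibrium repeats.

```python
actions = ["C", "D"]  # cooperate, defect
payoff = {  # (row player's payoff, column player's payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 4),
    ("D", "C"): (4, 0), ("D", "D"): (1, 1),
}

def stage_equilibria():
    """Pure-strategy Nash equilibria of the one-shot game, by enumeration."""
    eqs = []
    for a in actions:
        for b in actions:
            r, c = payoff[(a, b)]
            if (all(payoff[(x, b)][0] <= r for x in actions)
                    and all(payoff[(a, y)][1] <= c for y in actions)):
                eqs.append((a, b))
    return eqs

# Unique stage equilibrium -> backward induction forces it in the last period,
# hence (with the continuation fixed) in the period before, and so on:
T = 20
print(stage_equilibria())                     # [('D', 'D')]
equilibrium_path = stage_equilibria() * T
print(equilibrium_path[:3])                   # [('D', 'D'), ('D', 'D'), ('D', 'D')]
```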
So it’s actually the fact that the players don’t think the game will ever end that leads them to cooperate. Pretty sad right? Not so fast.
As it turns out, that is only because the single-period game has a unique Nash equilibrium. What if instead of 2 choices, the players had 3? Imagine that in the real prisoner's dilemma the prisoners could "snitch," "lie," or "tell a half-truth," where the last option is some intermediate choice that helps them a little without harming the other person much. Also imagine that this new game has two Nash equilibria (both players "cheat," or both players "half-truth").
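A game with this structure can be written down concretely. The payoff numbers below are invented for illustration — they are simply chosen so that (snitch, snitch) and (half-truth, half-truth) are both Nash equilibria while (lie, lie) is not — and the equilibria can then be found by brute-force enumeration.

```python
actions = ["lie", "half-truth", "snitch"]
# Row player's payoff u[row_action][col_action]; the game is symmetric,
# so the column player's payoff in cell (a, b) is u[b][a].
u = {
    "lie":        {"lie": 6, "half-truth": 0, "snitch": 0},
    "half-truth": {"lie": 7, "half-truth": 4, "snitch": 1},
    "snitch":     {"lie": 8, "half-truth": 3, "snitch": 2},
}

def pure_nash_equilibria():
    """All action pairs where neither player gains by deviating unilaterally."""
    eqs = []
    for a in actions:
        for b in actions:
            row_ok = all(u[x][b] <= u[a][b] for x in actions)  # row can't improve
            col_ok = all(u[y][a] <= u[b][a] for y in actions)  # col can't improve
            if row_ok and col_ok:
                eqs.append((a, b))
    return eqs

print(pure_nash_equilibria())
# [('half-truth', 'half-truth'), ('snitch', 'snitch')]
```

Note that mutual lying pays each player more (6) than either equilibrium, yet it is not itself an equilibrium — each player would deviate to snitching. Having two equilibria of different quality is what makes the threat described next possible.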
One way cooperation could emerge is that the players threaten to fall back to the "bad" Nash equilibrium in the final period, rather than the "good" one, if anyone deviates earlier in the game. This is called a "Nash threat." This kind of strategy is just one route to cooperation; many more possibilities are elucidated by the famous "folk theorems" of game theory.
For a fun episode about optimal strategy in repeated games, see this Planet Money episode.
Finally, this lesson assumed some familiarity with game theory concepts, namely the Nash equilibrium. For more discussion of these topics, the following TED-Ed videos may be up your alley.
This lesson explains the psychology of rationality and discusses the limits of this kind of thinking more generally.
This lesson discusses Nash equilibrium from another angle, explaining why businesses cluster together even though you might think they shouldn’t.
This final video tests your knowledge of Nash equilibrium with a fun ultimatum game.