…An example of a prisoner’s dilemma game (in this case, a multi-person prisoner’s dilemma) is a game that I have, for the last ten years or so, been playing with the audience whenever I present the results of my research at university colloquia or conferences. I begin by saying that I want to give the audience a phenomenal experience of ambivalence.
Index cards are then handed to ten randomly selected people and the others are asked to imagine that they had gotten one of the cards. They choose among hypothetical monetary prizes by writing either Y or X on the card.
The rules of the game (projected on a screen behind me while I talk) are as follows:
1. If you choose Y you get $100 times N.
2. If you choose X you get $100 times N plus a bonus of $300.
3. N equals the number of people (of the 10) who choose Y.
Then I point out the consequences of each choice as follows: “You will always get $200 more by choosing X than by choosing Y. Choosing X rather than Y decreases N by 1 (Rule #3), costing you $100; but by choosing X you also gain the $300 bonus (Rule #2). The result is a net gain of $200 for choosing X. Logic therefore says that you should choose X, and any lawyer would advise you to do so. The problem is that if you all followed the advice of your lawyers and chose X, N would equal 0 and each of you would get $300; while if you all ignored the advice of your lawyers and chose Y, N would equal 10 and each of you would get $1,000.” Sometimes, depending on the audience, I illustrate these observations with a diagram like Figure 1 (bold labels).
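The arithmetic behind this advice can be sketched in a few lines of Python. The $100 multiplier and $300 bonus come directly from the rules above; the function name and structure are my own illustration, not part of the lecture:

```python
# A minimal sketch of the lecture game's payoffs (Rules 1-3 above).

def payoff(choice, n_cooperators):
    """One player's prize, given their choice ('X' or 'Y') and N, the
    total number of players (out of 10) who chose Y."""
    base = 100 * n_cooperators           # Rules 1 and 2: $100 times N
    bonus = 300 if choice == "X" else 0  # Rule 2: defection bonus
    return base + bonus

# If all ten choose Y: N = 10, each earns $1,000.
assert payoff("Y", 10) == 1000
# If all ten choose X: N = 0, each earns only the $300 bonus.
assert payoff("X", 0) == 300
# Switching your own choice from Y to X lowers N by 1 (costing $100)
# but adds the $300 bonus, so it always nets $200 more:
assert payoff("X", 4) - payoff("Y", 5) == 200
```

The last assertion is the heart of the dilemma: whatever the other nine do, defecting pays each individual $200 more, even though universal defection leaves everyone with $700 less than universal cooperation.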
Then I ask the ten people holding cards to make their choices, imagining as best they can what they would choose if the money were real, and letting no one else see what they have chosen. …I have done this demonstration or its equivalent dozens of times with audiences ranging from Japanese psychologists to Italian economists. The result is an approximately even split between cooperation (choosing Y) and defection (choosing X), indicating that the game does create ambivalence. Although the money won by members of my audiences is entirely hypothetical, significant numbers of subjects in similar experiments in my laboratory, with real albeit lesser amounts of money, have also chosen Y.
Figure 1 (labels in bold typeface) represents the contingencies of the prisoner’s dilemma game that I ask my audience to play. Point A represents the condition where everyone cooperates. Point C represents the condition where everyone defects. The line from A to C represents the average (hypothetical) earnings per person at each value of N (which runs inversely to the x-axis). Clearly, the more people who cooperate, the greater the average earnings. But, as is shown by the two lines, ABC (representing the return to each player who defects) and ADC (representing the return to each player who cooperates), an individual always earns more by defecting than by cooperating.
Suppose, instead of hypothetically giving money to each player, I instead pooled the money each player earned (still hypothetical) and donated it to the entertainment fund of whatever institution I lectured at. Given this common interest it would now pay for every individual to choose Y; a choice of Y by any individual would increase N by 1 for all ten players, gaining $1,000 at a cost of the individual player’s $300 bonus, for a net gain to the pool of $700. A common interest thus tends to reinforce cooperation in prisoner’s dilemma games.
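The pooled variant can be sketched the same way. The function below is my own illustration; the $700 net gain per additional cooperator is the figure from the text:

```python
# Sketch of the pooled variant: all ten prizes go into one common fund.

def pool_total(n_cooperators):
    """Total pooled earnings when n_cooperators choose Y and the
    remaining (10 - n_cooperators) choose X."""
    n_defectors = 10 - n_cooperators
    # Every player's base prize is $100 times N; defectors add $300 each.
    return (100 * n_cooperators * 10) + (300 * n_defectors)

# One extra cooperator raises every player's base by $100 (+$1,000 to
# the pool) at the cost of that player's $300 bonus: net gain of $700.
assert pool_total(6) - pool_total(5) == 700
# With universal cooperation the fund receives the full $10,000.
assert pool_total(10) == 10000
```

Once the payoffs are pooled, the individual and collective incentives point the same way: every switch from X to Y enlarges the fund, so cooperation dominates.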
… Consider a population of organisms divided into several relatively isolated groups (tribes, for example [or business enterprises…?]). Within each tribe there are some altruists and some selfish individuals (“egoists”) interacting with each other repeatedly in multi-person prisoner’s dilemma-like games such as the one with which I introduce my lectures, except, instead of monetary reward, the players receive greater or lesser fitness – that is, ability to reproduce. In these games, the altruists tend to cooperate while the egoists tend to defect. Within each group (as in the prisoner’s dilemma) altruists always lose out to egoists.
However, those groups originally containing many altruists grow much faster than those originally containing many egoists – because cooperation benefits the group more than defection does.
Consider the case of teams, such as basketball teams, playing in a league. It is commonly accepted that, all else being equal, teams with individual players who play unselfishly will beat teams with individual players who play selfishly; however, within each team, the most selfish players will score the most points. Imagine now that, instead of scoring points and winning or losing games, the teams competed for reproductive fitness. Then the number of players on teams with a predominance of unselfish players would grow rapidly, while that of teams with a predominance of selfish players would grow slowly or (in competition for scarce resources) shrink – the group effect. Although, within each team, selfish players would still increase faster than unselfish ones (the individual effect), this growth could well be overwhelmed by the group effect. As time goes on, the absolute number of unselfish individuals (altruists) could increase faster across the whole population than the absolute number of egoists, even though within each group the relative number of altruists decreases. … the essential point is that while individual altruists may always be at a disadvantage relative to egoists, groups of altruists may be at an advantage relative to groups of egoists…
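This group-versus-individual dynamic (a form of Simpson’s paradox) can be sketched numerically. The fitness model and the starting numbers below are my own illustrative assumptions, not the author’s: everyone benefits in proportion to the share of altruists in their group, but only altruists pay the cost of cooperating.

```python
# Toy multilevel-selection sketch: within every group egoists
# out-reproduce altruists, yet the population-wide share of
# altruists still rises, because altruist-heavy groups grow faster.

def step(groups, benefit=5.0, cost=1.0):
    """Advance one generation. Each group is (altruists, egoists).
    Everyone's fitness rises with the group's altruist fraction;
    altruists alone pay the cost of cooperating."""
    new = []
    for a, e in groups:
        p = a / (a + e)                # altruist fraction in this group
        w_a = 1 + benefit * p - cost   # altruist fitness
        w_e = 1 + benefit * p          # egoist fitness (always higher)
        new.append((a * w_a, e * w_e))
    return new

groups = [(9, 1), (1, 9)]              # one altruist-heavy team, one egoist-heavy
after = step(groups)

# Within EACH group the altruist share falls (the individual effect)...
for (a0, e0), (a1, e1) in zip(groups, after):
    assert a1 / (a1 + e1) < a0 / (a0 + e0)

# ...yet across the whole population it rises (the group effect).
share_before = sum(a for a, _ in groups) / sum(a + e for a, e in groups)
share_after = sum(a for a, _ in after) / sum(a + e for a, e in after)
assert share_after > share_before
```

With these numbers the altruist-heavy group grows roughly fourfold while the egoist-heavy group barely grows, so the population-wide altruist share climbs even as it declines inside every single group.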