Draft replacement for The Evolution of Cooperation
Author | Robert Axelrod |
---|---|
Language | English |
Genre | Philosophy, sociology |
Publisher | Basic Books |
Publication date | December 5, 2006 |
Publication place | United States |
Media type | Hardback, paperback, audiobook |
Pages | 241 |
ISBN | 0-465-00564-0 |
OCLC | 76963800 |
Dewey Decimal | 302/.14 |
LC Class | HM131.A89 1984 |
This article is about the book The Evolution of Cooperation, which expands on the ideas in the article of the same name.
The Evolution of Cooperation is an academic work intended for non-specialist readers written by Robert Axelrod, a professor of Political Science and Public Policy at the University of Michigan and an expert on game theory, artificial intelligence, mathematical modeling, and complexity theory. [1] The book attempts to explain how cooperation can be advantageous for essentially selfish actors, drawing on a tournament in which professional and amateur game theorists submitted strategies to an iterated prisoner's dilemma game. [2] Axelrod then explores the significance of his discoveries, applying them to a wide variety of situations including biology, WWI trench warfare, business relationships, and nuclear proliferation.
Widely regarded as an important work for explaining how cooperative behavior can occur between selfish individual actors, The Evolution of Cooperation is an influential and frequently cited text in academics. It was popularized in the bestseller The Selfish Gene.
Game theory is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers." [3] It attempts to find optimal decisions or strategies in scenarios where the outcome depends on the choices of others. It first addressed zero-sum games but is now applied to a much wider variety of scenarios.
The Prisoner’s Dilemma is the main game-theoretic scenario used in The Evolution of Cooperation. [4] Two members of a criminal gang are arrested and kept in isolation, with no means of communication. Each is offered a bargain: they can either betray the other prisoner by testifying, or cooperate with the other prisoner by remaining silent. If they betray each other, they both get 2 years in prison. If one betrays and the other does not, the prisoner who betrayed will be set free and the prisoner who did not betray will get 3 years in prison. If both choose not to betray, they will each get 1 year in prison. The best scenario for the prisoners as a group is for them to cooperate. But the best scenario for each individual prisoner is to betray: either your partner will cooperate and you will go free, or your partner will betray and you will get 2 years instead of 3. However, since both prisoners know this, the result is a double betrayal, a worse result for either prisoner than mutual cooperation. An iterated version of this scenario, where two “prisoners” meet many times and are allowed to react to their opponent’s previous decisions, forms the basis of Axelrod’s tournament. [5]
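The dilemma described above can be laid out as a small payoff table. This is a minimal illustrative sketch (the choice names are placeholders, not from the book) showing why betrayal dominates for each individual prisoner:

```python
# Payoff table for the one-shot Prisoner's Dilemma described above,
# expressed as years in prison (lower is better).
# Keys are (my_choice, partner_choice); values are my sentence in years.
SENTENCE = {
    ("betray", "betray"): 2,
    ("betray", "silent"): 0,  # I testify, partner stays silent: I go free
    ("silent", "betray"): 3,  # I stay silent, partner testifies: worst case
    ("silent", "silent"): 1,
}

def best_response(partner_choice):
    """Return the choice that minimizes my sentence, given the partner's choice."""
    return min(("betray", "silent"), key=lambda c: SENTENCE[(c, partner_choice)])

# Betraying is the best response to either choice by the partner, yet mutual
# betrayal (2 years each) is worse for both than mutual silence (1 year each).
assert best_response("silent") == "betray"
assert best_response("betray") == "betray"
```

Because betrayal is the dominant move for each player in a single encounter, mutual defection is the expected one-shot outcome; the iterated version changes this calculus.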
Before publishing The Evolution of Cooperation, Axelrod co-wrote a 1981 article of the same name, published in Science with W. D. Hamilton. In this article he presents the results of a Prisoner’s Dilemma tournament where a variety of strategies are submitted and those with the highest score “reproduce” over many generations until they reach an equilibrium. [6] He applies this model to biology to explain how cooperation could be beneficial to “selfish” genes. He emphasizes strategies that are stable over time and resistant to invasion by other strategies, because only they will persist without going extinct. These ideas are re-published and further expanded in chapter 5 of the book.
The Evolution of Cooperation centers on the results of two computer-based tournaments. For each tournament Axelrod solicited strategies from a variety of participants. Each strategy was a program that must make a simple binary choice each round: COOPERATE or DEFECT. Both strategies make their choice simultaneously. If both COOPERATE, each gets 3 points. If one chooses COOPERATE and the other chooses DEFECT, the cooperator gets 0 points and the defector gets 5. If both DEFECT, each gets 1 point. The strategies are allowed to remember their opponent's previous actions and change their choices accordingly. Some strategies were highly complex pieces of programming that analyzed the entire history of decisions; others were very simple.
For the first tournament, game theory experts and other academics from psychology, sociology, and political science provided the strategies. [7] Each pair of strategies played 200 rounds. The winner was a very simple strategy submitted by Anatol Rapoport called "TIT FOR TAT" that cooperates on the first move, and subsequently echoes (reciprocates) what the other player did on the previous move. [8] TIT FOR TAT never did better than its partner in any pairing, but it accumulated a higher total score than any other strategy. [9]
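TIT FOR TAT and a fixed-length match can be sketched in a few lines. This is an illustrative reconstruction using the tournament payoffs (3 each for mutual cooperation, 5 and 0 when one defects against a cooperator, 1 each for mutual defection), not Axelrod's original tournament code:

```python
# Sketch of TIT FOR TAT and ALWAYS DEFECT playing a 200-round match.
C, D = "COOPERATE", "DEFECT"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then echo the opponent's previous move.
    return C if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return D

def match(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# TIT FOR TAT never beats its partner within a pairing (it loses the first
# round, then mutual defection follows)...
assert match(tit_for_tat, always_defect) == (199, 204)
# ...but two TIT FOR TATs together earn far more than two mutual defectors.
assert match(tit_for_tat, tit_for_tat) == (600, 600)
```

The asserts illustrate the paradox the text describes: TIT FOR TAT loses or ties every individual pairing, yet its total across cooperative pairings outstrips strategies that win individual encounters.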
Axelrod analyzed his results and presented them to a much larger group of participants for a second tournament. [10] All the participants were aware of the strategies and the rankings of the previous tournament. Despite the presence of a wider variety of strategies, TIT FOR TAT also won the second tournament.
In both tournaments all of the high scoring strategies were nice, meaning they never chose DEFECT first. Many of the competitors went to great lengths to gain an advantage over the "nice", and usually simpler, strategies, but to no avail: tricky strategies fighting for a few points generally could not do as well as nice strategies working together. TIT FOR TAT, along with other high scoring nice strategies, "won, not by doing better than the other player, but by eliciting cooperation [and] by promoting the mutual interest rather than by exploiting the others' weakness." [11].
The high scoring strategies were also “retaliatory”, meaning they would respond to their opponent’s DEFECT with at least one DEFECT. [12] Without being retaliatory, strategies run the risk of being suckered by a “mean” strategy like ALWAYS DEFECT. Finally, they were all “forgiving”: they would respond to an opponent’s COOPERATE with a COOPERATE, even if there was a history of defection. This allowed them to benefit from encounters with strategies that mostly COOPERATE but occasionally DEFECT. [13]
Axelrod also ran a variety of scenarios where he changed the parameters of the competition but not the strategies. The most important of these scenarios was a generational tournament, where each strategy reproduced, changing the composition of the strategies. [14] He also ran a tournament where each strategy was assigned a geographical position and competed against neighbors, expanding or contracting over generations based on its score. [15] These scenarios were used to create models for biology and political science.
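The generational tournament can be sketched as replicator-style dynamics: each strategy's share of the next generation is proportional to its average score against the current population mix. The per-round scores below are computed from illustrative 200-round matches between TIT FOR TAT and ALWAYS DEFECT, not Axelrod's actual tournament data:

```python
# Sketch of a generational ("ecological") tournament step.
def next_generation(shares, avg_score):
    """shares: {name: population fraction}; avg_score: {s: {t: s's per-round
    score against t}}. Returns the next generation's fractions."""
    fitness = {s: sum(shares[t] * avg_score[s][t] for t in shares) for s in shares}
    total = sum(shares[s] * fitness[s] for s in shares)
    return {s: shares[s] * fitness[s] / total for s in shares}

# Per-round averages from a 200-round match: TFT scores 199 total against
# ALWAYS DEFECT (which scores 204); two TFTs score 600 each; two defectors 200.
avg = {
    "TFT":  {"TFT": 600 / 200, "ALLD": 199 / 200},
    "ALLD": {"TFT": 204 / 200, "ALLD": 200 / 200},
}

pop = {"TFT": 0.5, "ALLD": 0.5}
for _ in range(50):
    pop = next_generation(pop, avg)

# Starting from an even split, TIT FOR TAT comes to dominate the population.
assert pop["TFT"] > 0.99
```

Because ALWAYS DEFECT earns most of its points by exploiting cooperators, its fitness collapses as its victims disappear; this is the mechanism behind the "reproducing" strategies described above.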
Axelrod developed four basic principles correlated with success in the tournament which may also apply to a wide variety of real life scenarios.
Axelrod uses the Prisoner’s Dilemma game to make three proposals for promoting cooperation in a variety of real world contexts:
Axelrod spends a chapter reiterating his Science article with the evolutionary biologist W. D. Hamilton. By putting the strategies of his tournament in a generational scenario, where higher scoring strategies produce more offspring and lower scoring strategies go extinct, he discovered that TIT FOR TAT is an evolutionarily stable strategy. [24] From a small starting population it can overcome other strategies and do well with a large number of copies of itself. Once the population is exclusively TIT FOR TAT, no other strategy can “invade” and do better than the TIT FOR TAT strategy.
There are other evolutionarily stable strategies. If present in large enough numbers ALWAYS DEFECT will kill off any isolated cooperative strategy. [25]. With no one to cooperate with, the strategy is exploited on the first move and that starts a chain of DEFECT moves, with the first defector always staying slightly ahead. All scores remain low in this scenario. However, Axelrod found that if TIT FOR TAT invades in a cluster so that TIT FOR TAT has a fair chance to meet a copy of itself and begin a chain of cooperation, then TIT FOR TAT will quickly supplant ALWAYS DEFECT. [26]. This effect is even more pronounced if the cluster is geographically close and TIT FOR TAT can rely on cooperative neighbors. [27].
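Axelrod's cluster argument can be illustrated with a simple mixing model in which invaders of frequency p meet each other with probability p. The per-round payoffs come from an illustrative 200-round match (TIT FOR TAT scores 199 total against ALWAYS DEFECT, which scores 204; two TIT FOR TATs score 600 each; two defectors 200 each); this is a sketch of the idea, not Axelrod's exact calculation:

```python
# Per-round average payoffs from 200-round matches.
TFT_VS_TFT = 600 / 200    # sustained mutual cooperation
TFT_VS_ALLD = 199 / 200   # suckered once, then mutual defection
ALLD_VS_TFT = 204 / 200   # one exploitation, then mutual defection
ALLD_VS_ALLD = 200 / 200  # mutual defection throughout

def invaders_beat_natives(p):
    """With random mixing, does a TFT cluster of frequency p outscore the
    ALWAYS DEFECT natives on average?"""
    tft_score = p * TFT_VS_TFT + (1 - p) * TFT_VS_ALLD
    alld_score = p * ALLD_VS_TFT + (1 - p) * ALLD_VS_ALLD
    return tft_score > alld_score

# A lone invader (p near 0) is exploited and loses...
assert not invaders_beat_natives(0.001)
# ...but even a small cluster meeting itself often enough succeeds.
assert invaders_beat_natives(0.05)
```

The gains from each TFT-meets-TFT encounter are large (3 per round versus roughly 1), so only a tiny cluster is needed before cooperation pays, which is why clustering, and especially geographic clustering, tips the balance.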
Axelrod concludes that there are three traits a strategy requires to become prevalent in biological systems.
These ideas help resolve a conundrum in biology. Cooperation is clearly better for the group, but genes are selected on an individual level.
[28] The Evolution of Cooperation argues that cooperation is an advantageous and evolutionarily stable strategy so long as there are other individuals to cooperate with, there is a way to recognize and punish cheaters, and there is a sufficiently high chance that the cooperators will meet again in the future. If cooperation is paired with retaliation, even one cluster, or family, of cooperators can invade a population of non-cooperators and become the dominant strategy. [29]
Axelrod uses some events in World War I trench warfare as an example of cooperation between enemy combatants. He frames trench warfare as an iterated prisoner’s dilemma. Each day combatants could choose to shoot to kill, or choose to ignore each other. Since at some point high command would demand a bloody push, the benefits of thinning the numbers of your enemy were great. But if you attack the enemy unit in earnest, they will retaliate just as harshly. Under these conditions, with units facing each other for weeks on end, each capable of dealing great harm, temporary truces and tacit agreements developed between the combatants. Just like the classic prisoner’s dilemma scenario, this example of cooperation didn't require friendship or lines of communication between the participants. [30].
The Evolution of Cooperation also frames cooperation between businesses as an iterated prisoner's dilemma. Businesses work together smoothly when they anticipate a long and profitable relationship and both businesses have some power to harm the profit of the other. Axelrod uses a failing business as an example of cooperation breaking down. The business will soon disappear and no longer has the ability to punish defectors with the cessation of the profitable relationship. This leads to the betrayal of former partners, who suddenly complain about defective products and refuse to honor contracts. [31].
According to Axelrod, cooperation is not always desirable in society; price fixing is a form of mutually beneficial cooperation between businesses that hurts consumers. He argues that his discoveries can also be used to discourage cooperation: by disrupting stable relationships, changing payoffs to weight short-term gains more heavily, and removing the ability of partners to punish each other for defection, we can discourage unwanted collusion. [32]
Axelrod briefly applies his ideas to the Cold War and nuclear disarmament. He notes that cooperation is possible when the relationship is durable and both sides have the ability to retaliate. The relationship does not have to be friendly; both sides only have to be confident that they will be dealing with each other in the future. Applying the principles of cooperation derived from the tournaments, he argues that it’s ideal to be initially friendly and forgiving, but also quick to retaliate to any provocation. This behavior should encourage cooperation but not exploitation, and allow for peaceful relationships between powers in the absence of a central authority. [33]
In 1984 Axelrod estimated that there were "hundreds of articles on the Prisoner's Dilemma cited in Psychological Abstracts" [34] and that citations to The Evolution of Cooperation alone were "growing at the rate of over 300 per year". [35] As of 2015, The Evolution of Cooperation has been cited more than 37,000 times in a variety of fields, making it extremely influential in academics. [36] Axelrod’s work was summarized in the bestseller The Selfish Gene. [37]
Boyd and Lorberbaum criticized Axelrod's conclusion that TIT FOR TAT is an evolutionarily stable strategy, arguing that no single strategy is truly evolutionarily stable in a game dependent on the strategies of others. [38] TIT FOR TAT can be invaded by any "nice" strategy, such as TIT FOR TWO TATS, which forgives twice. Both strategies will be equally successful, the only difference being how they react to other invaders in the future. TIT FOR TAT would do better against meaner strategies like ALWAYS DEFECT, but worse against strategies that mostly COOPERATE but always retaliate to DEFECT with DEFECT and have a small chance of choosing an unprovoked DEFECT. They conclude that no pure strategy can be stable in all circumstances. Bendor and Swistak resolve this debate by defining two different kinds of stability: strong stability is when all invaders decline in frequency, and weak stability is when invaders in numbers smaller than the native population cannot increase in frequency at the expense of the native population. [39] They agree that no pure strategy is strongly evolutionarily stable, and they point out that a wide variety of strategies can be weakly stable. These strategies vary in how many copies must be present before they can be weakly stable, a trait the authors term robustness.
Axelrod has a subsequent book, The Complexity of Cooperation, [40] which he considers a sequel to The Evolution of Cooperation. Other work on the evolution of cooperation has expanded to cover prosocial behavior generally, [41] other mechanisms for generating cooperation, [42] the Iterated Prisoner's Dilemma under different conditions and assumptions, [43] and the use of other games such as the Public Goods and Ultimatum games to explore deep-seated notions of fairness and fair play. [44] It has also been used to challenge the rational and self-regarding "economic man" model of economics, [45] and as a basis for replacing Darwinian sexual selection theory with a theory of social selection. [46]