Complexity of Cooperation: Agent-Based Models of Competition and Collaboration (Princeton Studies in Complexity) by Robert A. Axelrod (18-Aug-1997) Paperback
The seven articles that make up this book were written over the course of a decade and constitute an excellent contribution to the subject of collaboration and competition, drawing on both game theory and complexity theory. Axelrod's original approach is of interest to multiple disciplines, notably the social sciences and political science.
Robert Axelrod (born 1943) is a Professor of Political Science and Public Policy at the University of Michigan. He has appointments in the Department of Political Science and the Gerald R. Ford School of Public Policy. Prior to moving to Michigan, he taught at the University of California, Berkeley (1968-1974). He holds a BA in mathematics from the University of Chicago (1964) and a PhD in political science from Yale University (1969).
He is best known for his interdisciplinary work on the evolution of cooperation, which has been cited in numerous articles. His current research interests include complexity theory (especially agent-based modeling) and international security. Among his honors and awards are membership in the National Academy of Sciences, a five-year MacArthur Prize Fellowship, the Newcomb Cleveland Prize of the American Association for the Advancement of Science for an outstanding contribution to science, and the National Academy of Sciences Award for "Behavioral Research Relevant to the Prevention of Nuclear War".
Recently Axelrod has consulted and lectured on promoting cooperation and harnessing complexity for the United Nations, the World Bank, the U.S. Department of Defense, and various organizations serving health care professionals, business leaders, and K-12 educators.
Axelrod was the President of the American Political Science Association (APSA) for the 2006-2007 term. He focused his term on the theme of interdisciplinarity.
In May 2006, Axelrod was awarded an honorary degree by Georgetown University.
This is a collection of articles spanning Axelrod's career, in which computer simulations are used to examine interactions that can involve cooperation or the transfer of traits. The articles are academic papers from scholarly journals, but each gets a preface in the book written in a more readable style. Even so, I found my mind wandering, or myself nodding off, more often than I normally would while reading nonfiction.
Some readers will find the first article, on the iterated Prisoner's Dilemma, helpful in understanding that kind of game-theory question and the history of computer simulations used to study it. However, many who would choose to read a book like this will already have considerable familiarity with the material.
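For readers new to the material, the basic setup can be sketched in a few lines. This is a minimal illustration with the standard payoff values (T=5, R=3, P=1, S=0), not Axelrod's tournament code; the strategy names are the conventional ones.

```python
# Iterated Prisoner's Dilemma: each round, both players choose to
# cooperate ('C') or defect ('D') and receive the standard payoffs.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return 'D'

def play_match(strat_a, strat_b, rounds=10):
    """Play an iterated match and return the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Against an unconditional defector, tit-for-tat loses only the first round and then matches defection for the rest; against itself, it cooperates throughout.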
The fifth article was significant for me. It deals with how tech companies line up in alliances when seeking wider sales through the creation of an industry standard for a product. This chapter goes into more detail comparing real-world experience with simulation models. We find that the fundamental motivation of businesses (individual profit) places them in a paradoxical position. The more companies that participate in developing an industry standard, the more likely it is that a successful standard will emerge and encourage consumers to buy. However, companies don't want their primary rivals in the group, since the rivals would then get the same benefits they do. There are implications for both companies and consumers. It's normal for humans to act on a mixture of selfish and social motivations - there is some degree of what one can call rivalry. However, the rivalry between businesses operates at a higher pitch and is not restrained by a conscience. So, different kinds of players have different natures. Unfortunately, this isn't an area Axelrod delves into.
In general, I had concerns about the simplifications used to create these simulations. Certainly, simplifications are necessary. However, some real-world factors seem overlooked. He does discuss the fact that in the real world a person might treat another unfairly by mistake or through misunderstanding, but the simulated players always know whether they've been treated fairly or unfairly - there is no simulation of players who fool others into thinking they're being fair. Similarly, when simulating how people influence each other's cultural choices, he has people who share some cultural traits influence each other on the traits where they differ. There's no simulation of people who pretend to share one trait in order to influence another area. A "chameleon" player who pretended to have something in common with people - when he actually did not - would influence more people than non-chameleons would. He also treats it as equally likely that Person A will cause Person B to shift as the other way around, ignoring that some people are more manipulative. There isn't much representation of unequal power and influence, or of the tendency to exercise them. Yes, there are social interactions in which these factors play little or no role, but I see a potential for them to play a major role in important areas of society.
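For reference, the similarity-driven influence rule being critiqued works roughly as follows. This is a much-simplified sketch of the culture model (a one-dimensional ring of agents rather than the book's grid; the feature and trait counts are illustrative choices, not the book's parameters).

```python
import random

FEATURES = 5   # cultural dimensions per agent
TRAITS = 3     # possible values on each dimension

def similarity(a, b):
    """Fraction of features on which two agents agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def step(agents):
    """One interaction: a random agent may copy one differing trait from
    a random neighbor, with probability equal to their similarity."""
    i = random.randrange(len(agents))
    j = (i + random.choice([-1, 1])) % len(agents)  # ring neighbor
    sim = similarity(agents[i], agents[j])
    if 0 < sim < 1 and random.random() < sim:
        differing = [k for k in range(FEATURES)
                     if agents[i][k] != agents[j][k]]
        k = random.choice(differing)
        agents[i][k] = agents[j][k]

random.seed(0)
agents = [[random.randrange(TRAITS) for _ in range(FEATURES)]
          for _ in range(20)]
for _ in range(20000):
    step(agents)
```

Note that influence here requires genuinely shared traits and is symmetric in direction, which is exactly the assumption the "chameleon" objection above targets.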
I am a fan of Axelrod's book "The Evolution of Cooperation." (Looking back, I see that I only gave it three stars. You can look at my reviews to see my reasons, which I think are valid, but I think it probably deserves 4.)
To recap briefly, TEC was about a computer simulation tournament that he ran of the iterated Prisoner's Dilemma, and the results of that tournament. The book is an exploration of the simple but subtle idea of how cooperation can evolve and stabilize among self-interested autonomous individuals. TEC is an extremely accessible book, and the ideas Axelrod wrote about there are related to some other interesting work like Elinor Ostrom's.
TCC is a "sequel" of sorts. It is interesting, but not as compelling as its predecessor. It is really a collection of academic papers, each with a brief introduction by Axelrod. The papers are all extensions, of one sort or another, of the ideas from TEC. For example, how does the situation change when agents can "misunderstand" each other? For the most part, I think they are very readable, with credit to Axelrod's style. They were mostly published in relatively small/obscure journals, and I wonder whether there is a connection there.
As someone with an economics background, I found this work interesting. Broadly speaking, the work in this book focuses on agents that are "myopic" in one way or another. They do not try to work out a globally optimal solution, but rather, follow incrementalist strategies that seem like they will make some improvement on the status quo. Such strategies are likely to lead agents to local maxima. This is probably a fairly good description of how most real-world agents behave. It is quite appealing from a modeling perspective, in certain ways. For example, the agents need not have in mind fully-specified probability distributions for various stochastic events, which is typically one part of optimizing-agent models that can seem unrealistic.
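The local-maximum point can be seen in a toy hill-climb. This is my own illustration, not a model from the book: an agent that only ever moves to a strictly better adjacent position on a made-up payoff landscape stops at the nearest peak.

```python
# A toy payoff landscape over eleven positions, with a local peak at
# position 2 (payoff 5) and the global maximum at position 8 (payoff 9).
landscape = [0, 2, 5, 3, 1, 0, 2, 6, 9, 4, 1]

def hill_climb(start):
    """A myopic agent: repeatedly move to a strictly better adjacent
    position; stop when no neighbor improves on the status quo."""
    x = start
    while True:
        neighbors = [n for n in (x - 1, x + 1)
                     if 0 <= n < len(landscape)]
        best = max(neighbors, key=lambda n: landscape[n])
        if landscape[best] <= landscape[x]:
            return x  # local maximum reached
        x = best
```

Starting at position 1, the agent settles at the local peak (position 2) and never discovers the higher peak at position 8; starting at position 7, it finds the global maximum. Where the agent ends up depends entirely on where it starts.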
On the other hand, I can completely understand why it has been hard for such models to gain a foothold in academic research. Moving away from the optimizing agent introduces a ton of degrees of freedom all at once. To paraphrase Tolstoy, all optimizing agents are alike, but every myopic agent is myopic in its own way. There are just too many ways of not optimizing. If you think it is easy to build a model with optimizing agents that "proves" whatever you want, it is even easier when you allow your agents not to optimize. (Though it may be difficult to tell what the outcome will be ex ante.) To be clear, I do not at all think that Axelrod is building models "just to get the results he wants." But without doing a lot of work to calibrate the non-rationality to the behavior of actually-observed agents, which Axelrod does not do, it is hard to say what we learn from these simulations.
They can provide existence proofs, which I think is what we see in TEC--yes, cooperation can arise and stabilize among self-interested agents. A simulation is stronger, or at least complementary, evidence relative to any case study, because we can "open up" the agents to see exactly how they are operating. Is this how cooperation "really" arose in any given real-world situation? Difficult or impossible to say. But we know now that we do not have to resort to notions of altruism--self-interested agents can be a legitimate competing hypothesis. But in TCC, I didn't see any compelling existence proofs of this sort. Rather, the book contains a lot of interesting explorations of various model specifications. I was entertained by it, but I'm not sure what I learned from it.
I was obsessed with his first book, and I'm also taking my own stab at agent-based modeling - hopefully, it will be more successful than past attempts (especially in econ).
A few things I learned. 1. I finally get the genetic algorithm, and I swear to god I'm gonna try using it in some of my simulations! One simply needs a population of individuals doing something, along with a way of evaluating how "good" they are at this something. Each individual must have some traits, where each trait can take on multiple values. Then, after some time period, one measures who the "best" individuals are and makes them exchange traits with the other best individuals to create new hybrids, while discarding the individuals who perform badly. Then, repeat! This is quite obviously how evolution and genes work, but doing it computationally through simulation was quite novel for me.
2. Despite Axelrod's impressive quantitative abilities, which were ahead of their time, I was very disappointed in his application to determining how the WW2 parties chose their sides. Even though his landscape theory is interesting, the sample size is just too small to be useful, which seems very obvious.
3. Is the fact that one can completely control the initial conditions of ABMs the reason they didn't catch on? I tend to think not, since their behavior is SO unexpected (see the culture example in the book). Maybe it's because they aren't externally valid, but hopefully with the rise of GPT as an agent, the times are a-changin'!!!
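The genetic-algorithm loop from point 1 can be sketched as follows. This is a minimal illustration of my own, not code from the book: the fitness function (count the 1 bits), population size, and mutation rate are arbitrary choices.

```python
import random

GENES = 10   # traits per individual
POP = 20     # population size

def fitness(individual):
    """How 'good' an individual is: here, simply the number of 1 bits."""
    return sum(individual)

def crossover(a, b):
    """Hybridize two parents: take each trait from one parent or the other."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.05):
    """Occasionally flip a trait, to keep exploring new values."""
    return [1 - g if random.random() < rate else g for g in ind]

def evolve(generations=50):
    """Select the best half, breed hybrids, discard the rest, repeat."""
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in range(GENES)]
           for _ in range(POP)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]  # keep the best, discard the bad
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)
```

Because the survivors are carried forward unchanged each generation, the best fitness in the population can never decrease: evolving for more generations can only match or improve on the starting population's best.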
Just like "Micromotives and Macrobehavior" by T. Schelling, this book explores how local decisions give rise to unexpected global behaviors: the emergence of states, of cooperation, of cultural regions, etc. As other reviews noted, this is a collection of seven scientific publications by R. Axelrod, whose text has been adapted, I think. The sole common theme is that the data comes from simulated agent-based models (as opposed to the more traditional equation-based models). I found the text surprisingly easy to follow, with few or no equations. Two additional publications come as appendices: the first presents how to align two agent-based models; the second is a list of resources for those who would like to learn more about agent-based models.
Although the findings were predicated on agent-based computer modeling, the book provided a wealth of insights on a variety of topics dealing with cooperation, competition, and corruption across the gamut of interactions, including politics, warfare, technology, business, and society. My mind is racing with ways to find the practical implications of this and implement them.
An enlightening book on computational ethics for iterative situations involving more than two participants. The book applies the principles to such real-world cases as the computer industry's efforts to coalesce on a UNIX standard and coalition formation among European countries in World War 2.