The Cost of Cooperating


David Rand: “…If you think about the puzzle of cooperation as ‘why should I incur a personal cost of time or money or effort in order to do something that’s going to benefit other people and not me?’, the general answer is that if you can create future consequences for present behavior, that can create an incentive to cooperate. Cooperation requires me to incur some cost now, but if I’m cooperating with someone I’ll interact with again, it’s worth paying the cost of cooperating now in order to get the benefit of their cooperating with me in the future, as long as there’s a large enough likelihood that we’ll interact again.
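[The “large enough likelihood” condition is often formalized in models of direct reciprocity: cooperation can pay when the continuation probability w times the benefit b exceeds the cost c. A minimal sketch of that rule of thumb, with illustrative values rather than anything from Rand’s own work:

```python
def cooperation_pays(b: float, c: float, w: float) -> bool:
    """Direct-reciprocity rule of thumb: paying cost c now is worth it
    when the chance w of interacting again, times the benefit b of the
    partner's future cooperation, exceeds c."""
    return w * b > c

# With benefit 3 and cost 1, cooperation pays once the odds of
# meeting again exceed 1/3.
print(cooperation_pays(b=3, c=1, w=0.5))   # True
print(cooperation_pays(b=3, c=1, w=0.2))   # False
```
]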

Even if it’s with someone that I’m not going to interact with again, if other people are observing that interaction, then it affects my reputation. It can be worth paying the cost of cooperating in order to earn a good reputation, and to attract new interaction partners.

There’s a lot of evidence to show that this works. There are game theory models and computer simulations showing that if you build in these kinds of future consequences, you can get evolution to lead to cooperative agents dominating populations, as well as learning and strategic reasoning leading people to cooperate. There are also lots of behavioral experiments supporting this. These are lab experiments where you bring people into the lab, give them money, and have them engage in economic cooperation games in which they choose whether to keep the money for themselves or contribute it to a group project that benefits other people. If you introduce future consequences in any of these various ways, people become more inclined to cooperate. Typically, cooperation ends up paying off and being the best-performing strategy.
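[A toy version of the kind of simulation described, not Rand’s actual model: tit-for-tat (reciprocators) versus always-defect agents play a repeated prisoner’s dilemma whose expected length depends on a continuation probability w, and the population evolves by replicator dynamics. With a long enough shadow of the future, cooperators take over; without it, defectors do. Payoff values are the standard textbook ones, chosen for illustration:

```python
# Illustrative evolutionary simulation: tit-for-tat (TFT) vs.
# always-defect (ALLD) in a repeated prisoner's dilemma with
# continuation probability w (expected game length 1/(1-w)).
T, R, P, S = 5.0, 3.0, 1.0, 0.0   # standard PD payoffs: T > R > P > S

def expected_payoffs(w):
    rounds = 1.0 / (1.0 - w)
    return {
        ("TFT", "TFT"): R * rounds,
        ("TFT", "ALLD"): S + P * (rounds - 1),  # exploited once, then mutual defection
        ("ALLD", "TFT"): T + P * (rounds - 1),  # exploits once, then mutual defection
        ("ALLD", "ALLD"): P * rounds,
    }

def evolve(w, x=0.5, steps=2000, dt=0.01):
    """Replicator dynamics on the population share x of TFT agents."""
    pay = expected_payoffs(w)
    for _ in range(steps):
        f_tft = x * pay[("TFT", "TFT")] + (1 - x) * pay[("TFT", "ALLD")]
        f_alld = x * pay[("ALLD", "TFT")] + (1 - x) * pay[("ALLD", "ALLD")]
        avg = x * f_tft + (1 - x) * f_alld
        x += dt * x * (f_tft - avg)     # strategies above average grow
        x = min(max(x, 0.0), 1.0)
    return x

print(round(evolve(w=0.9), 2))   # future matters: cooperators dominate
print(round(evolve(w=0.1), 2))   # nearly one-shot: defectors dominate
```
]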

In these situations, it’s not altruistic to be cooperative, because the interactions are designed in a way that makes cooperating pay off. For example, we have a paper showing that in the context of repeated interactions, there’s no relationship between how altruistic people are and how much they cooperate. Basically, everybody cooperates, even the selfish people. In certain situations, selfish people can even wind up cooperating more because they’re better at recognizing that that’s what is going to pay off.

This general class of solutions to the cooperation problem boils down to creating future consequences, and therefore creating a self-interested motivation in the long run to be cooperative. Strategic cooperation is extremely important; it explains a lot of real-world cooperation. From an institution design perspective, it’s important for people to be thinking about how you set up the rules of interaction—interaction structures and incentive structures—in a way that makes working for the greater good a good strategy.

While this kind of strategic cooperation is important, it’s also clearly the case that people often cooperate even when there’s no self-interested motive to do so. That willingness to help strangers (or not to exploit them) is a core piece of well-functioning societies. Society is much more efficient when you don’t constantly have to be watching your back, afraid that people are going to take advantage of you. If you can generally trust that other people are going to do the right thing, and they can trust the same of you, life becomes much more socially efficient.

Strategic incentives can motivate people to cooperate, but people also keep cooperating, at least to some extent, even when there are no incentives to do so. What motivates them? Behavioral economists and psychologists talk about this at a proximate psychological level—saying things like, “Well, it feels good to cooperate with other people. You care about others, and that’s why you’re willing to pay costs to help them. You have social preferences.” …

Most people, both in the scientific world and among laypeople, hold the former opinion: that we are by default selfish and have to use rational deliberation to make ourselves do the right thing. I try to think about this question from a theoretical, first-principles position and ask what it should be: from the perspective of either evolution or strategic reasoning, which of these two stories makes more sense, and which should we expect to observe?

If you think about it that way, the key question is “where do our intuitive defaults come from?” There’s a lot of work in behavioral economics and psychology on heuristics and biases suggesting that these intuitions are rules of thumb for behavior that typically works well. It makes sense: if you’re going to have something as your default, what should you choose? You should choose the thing that works well in general. In any particular situation, you might stop and ask, “Does my rule of thumb fit this specific situation?” If not, you can override it….(More)”
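[The default-plus-override logic above can be sketched in code. This is an editorial illustration, not Rand’s model: the situation names, payoffs, and frequency weights are all hypothetical. The default action is whatever earns the most averaged over everyday situations; deliberation can override it when a specific situation differs.

```python
# Toy sketch of "intuitive default + deliberative override".
# Hypothetical payoffs per situation: (payoff if cooperate, payoff if defect),
# weighted by how often each situation comes up in daily life.
SITUATIONS = {
    "repeated": (3.0, 1.0),
    "reputation": (2.5, 1.5),
    "anonymous one-shot": (0.0, 1.0),   # rare in everyday life
}
WEIGHTS = {"repeated": 0.6, "reputation": 0.3, "anonymous one-shot": 0.1}

def best_default():
    """The rule of thumb: whichever action pays best on average."""
    avg_coop = sum(WEIGHTS[s] * SITUATIONS[s][0] for s in SITUATIONS)
    avg_defect = sum(WEIGHTS[s] * SITUATIONS[s][1] for s in SITUATIONS)
    return "cooperate" if avg_coop >= avg_defect else "defect"

def act(situation, deliberate=False):
    """Follow the default, unless deliberation checks the specific case."""
    if deliberate:
        coop, defect = SITUATIONS[situation]
        return "cooperate" if coop >= defect else "defect"
    return best_default()

print(best_default())                              # "cooperate"
print(act("anonymous one-shot", deliberate=True))  # "defect"
```
]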