The human ability to cooperate can be startling. From a seemingly natural instinct to help friends and family, to random acts of kindness that we encounter even in the harshest circumstances, humans have a strange impulse to care about others.
However, we are also accustomed to thinking of each other as ruthless and selfish. In fact, we are so used to this idea that an entire discipline of behavioral modeling, in economics as well as in other social sciences, rests on the assumption that each of us is a rational egoist. The term does not necessarily mean that humans have no regard for others at all, but rather that whenever a strategic choice is to be made, actors will choose the option that pays off best individually.
In this way, cooperation is treated as a strategy chosen only when helping others happens to be the most selfish thing to do. Human cooperation is thereby stripped of any altruism and taken as an individualistic strategy. There are numerous game-theoretic studies that treat human cooperation in this manner. One notable example is Michael Taylor’s classic The Possibility of Cooperation, which argues, by means of mathematical modelling, that in most situations involving a public good, even a rational egoist would choose to cooperate.
But a recent study by Valerio Capraro of the Center for Mathematics and Computer Science in Amsterdam gives even more weight to the suggestion that in most situations humans are inclined to help one another.
Capraro devised several experiments in which participants paid a participation fee of 30 cents. In the first experiment, one player could choose between taking away the other player’s fee and donating their own; the other player then had to guess the first player’s choice, with a reward of 10 cents for guessing correctly. The other two experiments added a way for the first participant to opt out: freely in one case, and at a cost of 5 cents in the other.
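The payoff structure of the first experiment can be sketched as follows. This is an illustration assuming the amounts stated above; the exact transfer rules (e.g. that "take" moves the whole 30-cent fee to the first player) are an assumption, not a detail taken from the paper:

```python
# Hypothetical payoff sketch of the first experiment (amounts in cents).
# Assumption: "take" transfers the other player's 30-cent fee to player 1,
# while "give" transfers player 1's own fee to the other player.
FEE = 30
GUESS_REWARD = 10

def payoffs(choice, guess):
    """Return (player1, player2) payoffs relative to the fees already paid."""
    if choice == "take":
        p1, p2 = FEE, -FEE      # player 1 gains the other's fee
    else:  # "give"
        p1, p2 = -FEE, FEE      # player 1 donates their own fee
    if guess == choice:         # player 2 earns 10 cents for a correct guess
        p2 += GUESS_REWARD
    return p1, p2

print(payoffs("give", "give"))  # → (-30, 40)
```

Under these assumed rules, "take" strictly dominates "give" for the first player, which is exactly why a rational-egoist model predicts no donations at all.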
The second and third experiments confirmed what most behavioral models predict. Most players chose the free way out in the second experiment, and most acted selfishly in the third, where even the relatively small exit fee of 5 cents made opting out less attractive than the payoff for a selfish choice.
However, as many as 28% of players chose to donate in the first experiment, which offered no way out. What is more, most participants correctly guessed the first player’s choice. Certainly, 28% is only just over a quarter of all participants (601 in total), so it does not constitute a majority. But what this number suggests is that in a conflict situation where one has to choose between hurting another and hurting oneself, the chance that a person acts altruistically is statistically significant.
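A quick back-of-the-envelope check shows why a 28% rate in a sample of 601 is hard to dismiss as noise. This is a rough normal-approximation confidence interval for the observed proportion, not the paper's own statistical analysis:

```python
import math

# Rough 95% confidence interval (normal approximation) for the observed
# donation rate: 28% of 601 participants.
p, n = 0.28, 601
se = math.sqrt(p * (1 - p) / n)           # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"95% CI: [{lo:.3f}, {hi:.3f}]")    # → 95% CI: [0.244, 0.316]
```

Even the lower end of the interval stays well above zero, so the donation rate is not a fluke of a small sample.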
In another study, with a three-player setup, participants had to decide between taking money from one of the other players and sharing it with the remaining one, or giving their own money to be split between the two others. This game poses an additional moral dilemma: one has to choose not only whether to hurt oneself or another, but also which of the two other players will be hurt.
The results were surprisingly similar to the two-player game: 28% of the 600 participants chose to donate their share.
What these results suggest is that some of today’s economic models might be made more adequate by including a probability of cooperation derived from lab experiments, instead of treating everyone as rationally selfish. The rational-egoist assumption is less a claim about human nature as such than an unavoidable methodological convenience. But lab data such as Capraro’s can indicate the probabilities of the available choices, and those probabilities can be built into models to give a more adequate description for predicting human behavior.
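One way to read this suggestion in modeling terms: instead of assuming every agent defects, a model can draw each agent's choice with an empirically estimated cooperation probability. A toy sketch, assuming the 28% rate and the 30-cent stake from the experiments above:

```python
import random

# Toy model: average amount transferred per interaction when each agent
# cooperates with probability P_COOP (estimated from lab data) rather than
# always defecting, as a strict rational-egoist model would assume.
P_COOP = 0.28   # donation rate observed in Capraro's experiments
FEE = 30        # cents at stake per interaction

def average_donation(n_interactions, seed=0):
    rng = random.Random(seed)
    transferred = 0
    for _ in range(n_interactions):
        if rng.random() < P_COOP:   # agent donates their own fee
            transferred += FEE
    return transferred / n_interactions  # average cents given away

print(round(average_donation(100_000), 1))  # ≈ 0.28 * 30 = 8.4
```

A rational-egoist model would predict an average of exactly zero here; replacing that hard assumption with a measured probability changes the model's predictions in a way that can be tested against further data.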
Source: arXiv:1410.1314v1 [q-bio.PE], 6 Oct 2014.