Capitalist Economies, Commodification and Cooperation

Branko Milanovic has an interesting post on the topic of commodification and the nature of economic relations in capitalist economies. Milanovic argues that commodification (by which he roughly means the extension of market relations, i.e. price-governed relations, to social activities that were historically outside the realm of markets) works against the development of cooperative behavior based on “repeated games”. Milanovic’s main point is that while non-altruistic cooperative behavior may indeed be rational and optimal when interactions are repeated with a sufficiently high probability, the commodification process makes economic relations more anonymous and ephemeral:

Commodification of what was hitherto a non-commercial resource makes each of us do many jobs and even, as in the renting of apartments, become capitalists. But saying that I work many jobs is the same thing as saying that workers do not durably hold individual jobs and that the labor market is fully “flexible” with people getting in and out of jobs at a very high rate. Thus workers indeed become, from the point of view of the employer, fully interchangeable “agents”. Each of them stays in a job a few weeks or months: everyone is as good or bad as everyone else. We are indeed coming close to the dream world of neoclassical economics where individuals, with their true characteristics, no longer exist because they have been replaced by “agents”.

The problem with this kind of commodification and flexibilization is that it undermines human relations and trust that are needed for the smooth functioning of an economy. When there are repeated games we try to establish relationships of trust with people with whom we interact. But if we move from one place to another with high frequency, change jobs every couple of weeks, and everybody else does the same, then there are no repeated games because we do not interact with the same people. If there are no repeated games, our behavior adjusts to expecting to play just a single game, a single interaction. And this new behavior is very different.
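Milanovic’s repeated-games point can be made concrete with a small back-of-the-envelope sketch. In a prisoner’s dilemma that continues with probability p after each round (p standing in for the stability of the relationship), defecting against a conditional cooperator pays when turnover is high but not when relationships are durable. The payoff values below are the usual textbook illustration, not anything from Milanovic’s post:

```python
# Expected total payoffs in a randomly-terminated prisoner's dilemma:
# after each round the match continues with probability p.
# Illustrative payoffs: T=5 temptation, R=3 reward for mutual
# cooperation, P=1 punishment for mutual defection, S=0 sucker.
T, R, P, S = 5, 3, 1, 0

def payoff_vs_tit_for_tat(defect, p):
    """Expected total payoff of playing against a tit-for-tat partner."""
    if defect:
        # one round of temptation, then mutual defection forever after
        return T + p * P / (1 - p)
    # mutual cooperation in every round
    return R / (1 - p)

for p in (0.2, 0.9):
    coop = payoff_vs_tit_for_tat(False, p)
    dfct = payoff_vs_tit_for_tat(True, p)
    best = "cooperate" if coop > dfct else "defect"
    print(f"p={p}: cooperate={coop:.2f}, defect={dfct:.2f} -> {best}")
```

With these values, cooperation becomes the better reply exactly when p exceeds 1/2: this is the formal counterpart of the claim that high turnover destroys the basis for cooperation.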

This claim can be seen as a variant of Karl Polanyi’s old “disembeddedness thesis”, according to which commodification, through the institutionalization of “fictitious commodities” (land, money, labor), has led to a separation between economic relations and the sociocultural institutions in which they were historically embedded. As is well known, Polanyi considered this the major cause of the rise of totalitarianism in the 20th century. Though less dramatic, Milanovic’s claim similarly points out that by changing the structure of social relations, commodification leads to less cooperative behavior, in particular because it creates opportunity costs that previously did not exist and because it favors anonymity. Is that completely true? In my view, there are two separate issues here: the “monetization” of social relations and the “anonymization” of social relations. Regarding the former, it now seems well established that the introduction of (monetary) opportunity costs may change people’s behavior and their underlying preferences. This is the so-called “crowding-out effect” well documented by behavioral economists and others. Basically, the fact that opportunity costs can be measured in monetary units favors economic behavior based on “extrinsic preferences” (i.e. the maximization of monetary gains) and weakens “intrinsic preferences” related, for instance, to a sense of civic duty. It is unclear to what extent this crowding-out effect has had a cultural impact on Western societies from a macrosocial perspective, but at a more micro level the effect seems hard to dismiss.

I am less convinced by the “anonymization thesis”. It is indeed quite usual in sociology and in economics to characterize market relations as anonymous and ephemeral. This is contrasted with family and other kinds of “communitarian” relations that are assumed to be more personal and durable. To some extent this is probably the case, and it would be absurd to claim that there is no difference between giving the kids some money to buy a meal from an anonymous employee and cooking the meal myself. Still, the picture of the anonymous and ephemeral market relationship mostly corresponds to the idealized Walrasian model of the perfectly competitive market. Such a market, as famously argued by the philosopher David Gauthier, is a “morally free zone”. But actually, every economist will recognize that markets are imperfect and that their functioning leads to many kinds of failures: asymmetric information and externalities in particular are the cause of many suboptimal market outcomes. It is at this point that the “anonymization thesis” becomes unsustainable. Precisely because of market failures and imperfections, market relations cannot remain fully anonymous and ephemeral if they are to survive. Quite the contrary: mechanisms favoring the stability of these relations and making them more personal are required. The examples of Uber and Airbnb provide a case in point: the economic model of these companies is precisely based on the possibility (and indeed the necessity) for their users to provide information to the whole community regarding the quality of the service provided by the other party. Reputation (i.e. information regarding one’s and others’ “good standing”), segmentation (i.e. one’s ability to choose one’s partner) and retaliation (i.e. one’s ability to sanction, directly or indirectly, uncooperative behavior) are all mechanisms that favor cooperation in market relations, and they are indeed central to the kind of social relations promoted by companies like Uber. Moreover, new technologies considerably reduce the cost of these mechanisms for economic agents, as giving one’s opinion about the quality of the service carries almost no opportunity cost (though that may raise a different problem regarding the quality of the information).
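The logic of these reputation mechanisms can be sketched in a few lines of code. The following toy simulation (all numbers are illustrative assumptions, not data about any actual platform) has buyers screen out sellers whose public ratings fall below a threshold, which is enough to drive opportunistic sellers out of the market:

```python
import random

random.seed(0)

# Hypothetical platform: sellers are either honest or opportunistic.
# After each transaction the buyer leaves a public rating; buyers
# refuse sellers whose average rating is low (segmentation), so bad
# conduct is punished by exclusion (retaliation).
sellers = [{"honest": h, "ratings": []} for h in [True] * 5 + [False] * 5]

def excluded(seller):
    """Reputation screen: shun sellers rated below 0.5 on 3+ reviews."""
    r = seller["ratings"]
    return len(r) >= 3 and sum(r) / len(r) < 0.5

trades = {True: 0, False: 0}  # completed trades, keyed by seller honesty
for _ in range(2000):
    s = random.choice(sellers)
    if excluded(s):
        continue  # the buyer walks away
    trades[s["honest"]] += 1
    # honest sellers almost always satisfy; opportunists rarely do
    good = random.random() < (0.95 if s["honest"] else 0.2)
    s["ratings"].append(1.0 if good else 0.0)

print("trades with honest sellers:", trades[True])
print("trades with opportunistic sellers:", trades[False])
```

Opportunistic sellers complete only a handful of transactions before their ratings lock them out, while honest sellers keep trading: reputation makes the relation non-anonymous enough for cooperation to pay.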

Now, once again, the point is not to say that there is no difference between providing a service through the market and within the family. But it is important to recognize that market relations have to be cooperative to be efficient. In this perspective, trust and other kinds of social bonds are very much needed in capitalist economies. Complete anonymity is the enemy, not the constitutive characteristic, of market institutions.


Greed, Cooperation and the “Fundamental Theorem of Social Sciences”

An interesting debate has taken place on the website Evonomics over the issue of whether or not economists think greed is socially good. The debate features the well-known economists Branko Milanovic, Herb Gintis and Robert Frank as well as the biologist and anthropologist Peter Turchin. Milanovic claims that there is no personal ethics and that morality is embodied in impersonal rules and laws, built in such a way that it is socially optimal for each person to follow his personal interest as long as he plays by the rules. Actually, Milanovic goes further than that: it is perfectly acceptable to try to break the rules, since if I succeed the responsibility falls on those who have failed to catch me. Such a point of view fits perfectly with the “get the rules right” ideology that dominates microeconomic engineering (market design, mechanism design), where people’s preferences are taken as given. The point is to set the right rules and incentive mechanisms so as to reach the (second-) best equilibrium.

Not all economists agree with this and Gintis’ and Frank’s answers both qualify some of Milanovic’s claims. Turchin’s answer is also very interesting. At one point, he refers to what he calls the “fundamental theorem of social sciences” (FTSS for short):

In economics and evolution we have a well-defined concept of public goods. Production of public goods is individually costly, while benefits are shared among all. I think you see where I am going. As we all know, selfish agents will never cooperate to produce costly public goods. I think this mathematical result should have the status of “the fundamental theorem of social sciences.”

The FTSS is indeed quite important, but formulated this way it is not quite right. Economists (and biologists) have long known that the so-called “folk theorems” of game theory establish that cooperation is possible in virtually any kind of strategic interaction. To be precise, the folk theorems state that as long as an interaction is repeated indefinitely with a sufficiently high probability and/or the players do not have too strong a preference for the present, any outcome guaranteeing the players at least their minimax payoff is an equilibrium of the corresponding repeated game. This works with all kinds of games, including the prisoner’s dilemma and the related public good game: actually, selfish people will cooperate and produce the public good if they realize that it is in their long-term interest to do so (see also Mancur Olson’s “stationary bandits” story for a similar point). So the true FTSS is rather that “anything goes”: since there are infinitely many equilibria in infinitely repeated games, which one is selected depends on a long list of more or less contingent features (chance, learning/evolutionary dynamics, focal points…). So, contrary to what Turchin claims, the right institutions can in principle incentivize selfish people to cooperate, and this prospect may even incentivize selfish people to set up these institutions as a first step!
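The folk-theorem logic can be checked numerically for Turchin’s own example, the public goods game. In the sketch below (parameter values are illustrative), players use a grim-trigger strategy: everyone contributes until someone free-rides, after which everyone stops contributing forever. Cooperation is then an equilibrium whenever the discount factor is high enough:

```python
# Grim-trigger cooperation in a repeated linear public goods game:
# n players each hold 1 unit; contributions are multiplied and shared,
# with marginal per-capita return r (1/n < r < 1, so free-riding is
# dominant in the one-shot game but full contribution is efficient).
# The values of n and r are illustrative assumptions.
n, r = 4, 0.5

coop_round = r * n            # per-round payoff when everyone contributes
defect_now = 1 + r * (n - 1)  # keep the endowment, still share others' pot
punish = 1                    # mutual free-riding forever afterwards

def cooperation_sustainable(delta):
    """Does grim trigger deter a one-shot deviation at discount factor delta?"""
    stay = coop_round / (1 - delta)
    leave = defect_now + delta * punish / (1 - delta)
    return stay >= leave

for delta in (0.2, 0.4, 0.6, 0.8):
    print(delta, cooperation_sustainable(delta))
```

With n = 4 and r = 0.5, a little algebra shows the threshold is δ = 1/3: sufficiently patient selfish agents will produce the public good, against the letter of the FTSS.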

Does this mean that morality is unnecessary for economic efficiency or that there is no “personal ethics”? Not quite. First, Turchin’s version of the FTSS becomes more plausible once we recognize that information is imperfect and incomplete. The folk theorems depend on the ability of players to monitor others’ actions and to punish them in case they deviate from the equilibrium. Actually, at the equilibrium we should not observe deviations (except for “trembling hand” mistakes), but this is only because each player expects to be punished if he defects. It is relatively easy to see that imperfect monitoring makes the conditions under which universal cooperation is an equilibrium far more stringent. Of course, how to deal with imperfect and incomplete information is precisely the point of microeconomic engineering (see the “revelation principle”): the right institutions are those that incentivize people to reveal their true preferences. But such mechanisms can be difficult to implement in practice, or even to design. The point is that while revelation mechanisms are plausible at limited scales (say, a corporation), they are far more costly to build and implement at the level of the whole society (if that means anything).
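How imperfect monitoring corrodes this logic can be illustrated with a deliberately crude simulation (a sketch, not a model from the literature): two grim-trigger players whose observations of each other are wrong with some probability eps. Even when both always cooperate, a misperceived defection eventually triggers permanent punishment:

```python
import random

random.seed(1)

# Two grim-trigger players in a repeated prisoner's dilemma where each
# cooperative action is misperceived as a defection with probability
# eps (imperfect monitoring). A player who believes she saw a defection
# defects forever, so even fully cooperative play eventually collapses.
def coop_rounds(eps, horizon=200):
    """Rounds of mutual cooperation before a grim trigger fires."""
    triggered = [False, False]
    for t in range(horizon):
        if any(triggered):
            return t
        for i in (0, 1):  # each player's move can be misread
            if random.random() < eps:
                triggered[1 - i] = True
    return horizon

results = {}
for eps in (0.0, 0.02, 0.1):
    results[eps] = sum(coop_rounds(eps) for _ in range(500)) / 500
    print(f"eps={eps}: cooperation lasts {results[eps]:.1f} rounds on average")
```

With perfect monitoring cooperation lasts the full horizon; even small observation noise shortens it dramatically, which is why sustaining cooperation under imperfect monitoring demands much more of the equilibrium.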

There are reasons here to think that social preferences and morality may play a role in fostering cooperation. But there is some confusion regarding the terminology. Social preferences do not imply that one is morally or ethically motivated, and the reverse is probably not entirely true either. Altruism is a good illustration: animals and insects behave altruistically for reasons that have nothing to do with morals. Basically, they are genetically programmed to cooperate at a cost to themselves because (this is the ultimate cause) doing so maximizes their inclusive fitness. As a result, these organisms possess phenotypic characteristics (these are the proximate causes) that make them behave altruistically. Of course, animals and insects are not ethical beings in the standard sense. Systems of morals are quite different. It may be true that morality translates at the level of choices and preferences: I may give to a charity not because of an instinctive impulse but because I have a firm moral belief that this is “good” or “right”. For the behaviorism-minded economist, this does not make any difference: whatever the proximate cause that leads you to give some money, the result regarding the allocation of resources is the same. But it can make a difference in terms of institutional design, because “moral preferences” (if we can call them that) may be incommensurable with standard preferences (leading to cases of incompleteness that are difficult to deal with) or may generate so-called crowding-out effects when they interact with pecuniary incentives. In any case, moral preferences may make cooperative outcomes easier to achieve, as they lower monitoring costs.
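The inclusive-fitness story alluded to here is usually formalized by Hamilton’s rule: a costly helping trait is favored by selection when relatedness times benefit exceeds cost. A trivial check (the numbers are purely illustrative):

```python
# Hamilton's rule: a costly helping behavior is favored by selection
# when r * b > c, where r is the genetic relatedness between actor and
# recipient, b the recipient's fitness benefit, and c the actor's cost.
def altruism_favored(r, b, c):
    return r * b > c

print(altruism_favored(0.5, 3.0, 1.0))    # full siblings, large benefit
print(altruism_favored(0.125, 3.0, 1.0))  # cousins, same benefit
```

The rule is an ultimate-cause criterion: it says nothing about the proximate machinery (instinct, emotion, or moral belief) through which the helping behavior is actually produced.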

However, morality is not only embedded at the level of preferences but also at the level of the rules themselves, as pointed out by Milanovic: the choice of rules may itself be morally motivated, as witnessed by the debates over “repugnant markets” (think of markets for organs). In the vocabulary of social choice theory, morality not only enters into people’s preferences but may also affect the choice of the “collective choice rule” (or social welfare function) that is used to aggregate people’s preferences. Thus, morality intervenes at these two levels. This point has some affinity with John Rawls’ distinction between two conceptions of rules: the summary conception and the practice conception. On the former, a rule corresponds to a behavioral pattern, and what justifies the rule under some moral system (say, utilitarianism) is the fact that the corresponding behavior is permissible or mandatory (in the case of utilitarianism, that it maximizes the sum of utilities in the population). On the latter, the behavior is justified by the very practice of which it is constitutive. Take the institution of promise-keeping: on the practice conception, what justifies the fact that I keep my promises is not that it is “good” or “right” but rather that keeping one’s promises is constitutive of the institution of promise-keeping. What has to be morally evaluated is not the specific behavior but the whole practice.

So is greed really good? The question is of course already morally loaded. The answer depends on what we call “good” and on our conception of rules. If by “good” we mean some consequentialist criterion and we hold the summary conception of rules, the answer will depend on the specifics, as indicated in my discussion of the FTSS. But on the practice conception, the answer is clearly “yes, insofar as greed is constitutive of the practice”, provided the practice itself is considered good. On this view, while we may agree with Milanovic that being greedy is good (or at least permissible) as long as it stays within the rules (what Gintis calls “Greed 1” in his answer), it is hard to see how being greedy by transgressing the rules (Gintis’ “Greed 2”) can be good at all… unless we stipulate that the rules themselves are actually bad! The latter is of course a possibility. In any case, an economic system cannot totally “outsource” morality, since what we deem to be good and thus permissible through the choice of rules is already a moral issue.