Bayesian Rationality and Utilitarianism

In a recent blog post, Bryan Caplan gives his critical views of the “rationality community”, i.e. a group of people and organizations actively developing ideas related to cognitive bias, signaling and rationality. Basically, members of the rationality community apply the rationality norms of Bayesianism to a large range of issues related to individual and social choices. Among Caplan’s complaints is the alleged propensity of the community’s members to endorse consequentialist ethics, and more specifically utilitarianism, essentially for “aesthetic” reasons. In a related Twitter exchange, Caplan states that by utilitarianism he refers to the doctrine that one’s duty is to act so as to maximize the sum of happiness in society. This corresponds to what is generally called hedonic utilitarianism.

Hedonic utilitarianism faces many problems well known to moral philosophers. I do not know whether the members of the rationality community are hedonic utilitarians, but there is another route by which Bayesians may become utilitarians. This route is logical rather than aesthetic and is grounded in a theorem proved by the economist John Harsanyi in the 1950s, and widely discussed since by philosophically-minded economists and mathematically-minded philosophers. Harsanyi’s initial demonstration was grounded in von Neumann and Morgenstern’s axioms (actually Marschak’s version of them) for decision under risk, but it has since been extended to other versions of decision theory, especially Savage’s axioms for decision under uncertainty. The theorem can be briefly stated in the following way. Denote by S the set of states of nature, i.e. morally relevant features that are outside the control of the decision-makers, and by O the set of outcomes. Intuitively, an outcome is a possible world specifying everything that is morally relevant for the individuals: their wealth, their health, their history, and so on. Finally, denote by X the set of “prospects”, i.e. social alternatives or public policies mapping any state s onto an outcome o. We assume that the n members of the population have preferences over the set of prospects and that these preferences satisfy Savage’s axioms. Therefore, the preferences of any individual i can be represented by an expectational utility function: each prospect x is assigned a utility number ui(x) that cardinally represents i’s preferences. ui(x) corresponds to the probability-weighted sum of the utilities of all possible outcomes (which correspond to “sure” prospects). Hence, each individual also has beliefs regarding the likelihood of the states of nature, captured by a probability function pi(.).
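The expected-utility setup just described can be sketched numerically. This is only an illustration: the states, outcomes, beliefs and utility numbers below are hypothetical, not taken from Harsanyi.

```python
# Minimal numerical sketch of the setup above (all numbers are
# illustrative assumptions).
states = ["s1", "s2"]
p_i = {"s1": 0.7, "s2": 0.3}  # individual i's probability function pi(.)

# A prospect maps each state of nature onto an outcome.
prospect_x = {"s1": "o1", "s2": "o2"}

# Individual i's utility over outcomes ("sure" prospects).
u_i_outcome = {"o1": 10.0, "o2": 2.0}

def expected_utility(prospect, beliefs, outcome_utility):
    """ui(x): probability-weighted sum of the outcome utilities."""
    return sum(beliefs[s] * outcome_utility[prospect[s]] for s in beliefs)

print(expected_utility(prospect_x, p_i, u_i_outcome))  # 0.7*10 + 0.3*2 = 7.6
```

Savage’s theorem guarantees that preferences satisfying his axioms can be represented this way; the code simply evaluates the resulting functional form.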

Given the individuals’ preferences, each prospect x is assigned a vector of utility numbers (u1(x), …, un(x)). Now, assume that there is a “benevolent dictator” k (possibly one of the members of the population) whose preferences over X also satisfy Savage’s axioms. It follows that the dictator’s preferences can also be represented by an expectational utility function, with each prospect x mapped onto a number uk(x). A last assumption: the individuals’ and the dictator’s preferences over X are related by a Pareto principle: if every individual prefers prospect x to prospect y (resp. is indifferent between them), then the dictator prefers x to y (resp. is indifferent). Harsanyi’s theorem states that the dictator’s preferences can then be represented by a utility function corresponding to a weighted sum of the individuals’ utilities for any prospect x. Suppose moreover that utilities are interpersonally comparable and that the dictator’s preferences are impartial (they do not arbitrarily give more weight to one person’s utility than to another’s); then for any x

uk(x) = u1(x) + … + un(x).

Of course, this is the utilitarian formula, but stated in utility rather than hedonic terms. Note that here utility does not correspond to happiness or pleasure but rather to preference satisfaction: Harsanyi’s utilitarianism is preference-based. The point of the theorem is to show that consistent Bayesians should be utilitarians in this sense.
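The aggregation can be checked on a toy case. Assuming (hypothetically) two individuals who share a common probability over two states, the dictator who sums individual utilities ranks prospects consistently with the Pareto principle:

```python
# Illustrative sketch with hypothetical numbers: common beliefs over
# two states, two individuals, two prospects x and y.
p = {"s1": 0.5, "s2": 0.5}  # common probability function

# u[i][prospect][state]: individual i's utility of the outcome that the
# prospect yields in that state.
u = {
    1: {"x": {"s1": 4.0, "s2": 2.0}, "y": {"s1": 3.0, "s2": 1.0}},
    2: {"x": {"s1": 1.0, "s2": 5.0}, "y": {"s1": 0.0, "s2": 4.0}},
}

def eu(i, prospect):
    """Individual i's expected utility of the prospect."""
    return sum(p[s] * u[i][prospect][s] for s in p)

def u_dictator(prospect):
    """uk(x) = u1(x) + ... + un(x), the unweighted (impartial) sum."""
    return sum(eu(i, prospect) for i in u)

# Both individuals prefer x to y, and so does the dictator.
print(eu(1, "x") > eu(1, "y"), eu(2, "x") > eu(2, "y"))  # True True
print(u_dictator("x") > u_dictator("y"))                  # True
```

Because beliefs are common, summing expected utilities and taking the expectation of summed utilities coincide, which is what makes the additive representation work.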

It should be acknowledged that what the theorem demonstrates is actually far weaker. A first reason (discussed by Sen among others) is that the cardinal representation of the individuals’ preferences is not imposed by Savage’s theorem. Obviously, using other representations of the individuals’ preferences will make the additive structure unable to represent the dictator’s preferences. Some authors, like John Broome, have argued however that the expectational representation is the most natural one and fits well with some notion of goodness. There is another, different kind of difficulty, related to the Pareto principle. It can be shown that the assumption that the dictator’s preferences are transitive (which is imposed by Savage’s axioms), combined with the Pareto principle, implies “probabilistic agreement”, i.e. that all individuals agree in their probabilistic assessment of the likelihood of the states of nature. Without it, probabilistic disagreement combined with the Pareto principle leads to cases where the dictator’s preferences are inconsistent and thus not amenable to a utility representation. Probabilistic agreement is of course a very strong assumption, one that Harsanyi would no doubt have been ready to defend (see the “Harsanyi doctrine” in game theory). Objective Bayesians may indeed argue that rationality entails a unique correct probabilistic assessment. But subjective Bayesians will of course disagree.
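The difficulty with probabilistic disagreement can be seen in a classic “spurious unanimity” case, sketched here with hypothetical numbers: two agents with opposite beliefs take opposite sides of a bet, so each strictly prefers betting, yet the sum of utilities is zero in every state, so no dictator beliefs can make the additive representation deliver the strict preference the Pareto principle demands.

```python
# Two agents disagree about the states: agent 1 thinks s1 very likely,
# agent 2 thinks s2 very likely.
beliefs = {1: {"s1": 0.9, "s2": 0.1}, 2: {"s1": 0.1, "s2": 0.9}}

# Prospect x is a zero-sum bet (agent 1 wins in s1, agent 2 wins in s2);
# prospect y is the status quo of not betting (utility 0 for everyone).
u_x = {1: {"s1": 1.0, "s2": -1.0}, 2: {"s1": -1.0, "s2": 1.0}}

def eu(i, utilities):
    """Agent i's subjective expected utility."""
    return sum(beliefs[i][s] * utilities[i][s] for s in beliefs[i])

# Each agent expects to win, so each strictly prefers x to y (EU of y is 0).
print(eu(1, u_x), eu(2, u_x))  # 0.8 0.8

# But the summed utilities are 0 in every state, so for ANY dictator
# probability over states the expected sum is 0: indifference between
# x and y, not the strict preference required by Pareto.
for s in ["s1", "s2"]:
    print(sum(u_x[i][s] for i in u_x))  # 0.0 in each state
```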

What happens if we give up the Pareto principle for prospects (though not for outcomes)? Then the dictator’s preferences can be represented by an ex post prioritarian social welfare function such that

uk(x) = ∑s pk(s) ∑i v(ui(x(s)))

where v(.) is a strictly increasing and concave function. This corresponds to what Derek Parfit called the “priority view” and leads to giving priority to the satisfaction of the preferences of the worse-off members of the population.
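The effect of the concave transform can be illustrated with a toy example, taking v(u) = √u (one possible choice among many; all numbers are hypothetical): total utility is identical across the two prospects, but the prioritarian formula favors the one with the more equal distribution.

```python
import math

p_k = {"s1": 0.5, "s2": 0.5}  # dictator's probability function pk(.)

def v(u):
    """A strictly increasing, concave transform on u >= 0."""
    return math.sqrt(u)

# Individual utilities [u1, u2] in each state under two prospects; the
# per-state total is 10 in both cases.
u = {
    "equal":   {"s1": [5.0, 5.0], "s2": [5.0, 5.0]},
    "unequal": {"s1": [9.0, 1.0], "s2": [9.0, 1.0]},
}

def prioritarian(prospect):
    """uk(x) = sum_s pk(s) * sum_i v(ui(x(s)))."""
    return sum(p_k[s] * sum(v(ui) for ui in u[prospect][s]) for s in p_k)

print(prioritarian("equal"))    # 2*sqrt(5), about 4.47
print(prioritarian("unequal"))  # sqrt(9) + sqrt(1) = 4.0
```

Because v is concave, a unit of utility counts for more the worse off its recipient is, which is exactly the sense in which the priority view gives priority to the worse-off.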