Rational and social choice theory (RCT and SCT respectively) in economics are broadly *consequentialist*. Consequentialism can be characterized as the view that all choice alternatives should be evaluated in terms of their consequences and that the best alternatives are those which have the best consequences. This is a very general view which allows for many different approaches and frameworks. In SCT, for example, welfarism is a particular form of consequentialism largely dominant in economics, and utilitarianism is a specific instance of welfarism. In RCT, expected utility theory and revealed preference theory are two accounts of rational decision-making that assume that choices are made on the basis of their consequences.

Consequentialism is also characterized by a variety of principles or axioms that take different and more or less strong forms depending on the specific domain of application. The most important are the following:

**Complete ordering (CO):** The elements of any set *A* of alternatives can be completely ordered on the basis of a reflexive and transitive binary relation ≥.

**Independence (I):** The ranking of any pair of alternatives is unaffected by a change in the likelihood of consequences which are identical across the two alternatives.

**Normal/sequential form equivalence (NSE):** The ordering of alternatives is the same whether the decision problem is represented in normal form (the alternative is directly associated to a consequence or a probability distribution of consequences) or in sequential form (the alternative is a sequence of actions leading to a terminal node associated to a consequence or a probability distribution of consequences).

**Sequential separability (SS):** For any decision tree *T* and any subtree *T_n* starting at node *n* of *T*, the ordering of the subset of consequences accessible in *T_n* is the same in *T* as in *T_n*.

**Pareto (P):** If two alternatives have the same or equivalent consequences across some set of locations (events, persons), then there must be indifference between the two alternatives.

**Independence of irrelevant alternatives (IIA):** The ordering of any pair of alternatives is independent of the set of available alternatives.
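Since these definitions are set-theoretic, the first of them can be checked mechanically. Below is a minimal sketch (the encoding and function names are mine, not from the text) that represents a binary relation ≥ over a finite set of alternatives as a set of ordered pairs and tests it for the two properties *CO* turns on:

```python
# A binary relation ">=" over a finite set, encoded as ordered pairs (a, b)
# meaning "a >= b". CO requires such a relation to order the whole set.

def is_complete(alts, geq):
    """Every pair of alternatives is ranked one way or the other."""
    return all((a, b) in geq or (b, a) in geq for a in alts for b in alts)

def is_transitive(alts, geq):
    """a >= b and b >= c together imply a >= c."""
    return all((a, c) in geq
               for a in alts for b in alts for c in alts
               if (a, b) in geq and (b, c) in geq)

alts = {"x", "y", "z"}
geq = {(a, a) for a in alts} | {("x", "y"), ("y", "z"), ("x", "z")}
assert is_complete(alts, geq) and is_transitive(alts, geq)
# Dropping ("x", "z") breaks transitivity, and with it CO:
assert not is_transitive(alts, geq - {("x", "z")})
```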

All these axioms are used either in RCT or in SCT, sometimes in both. *CO*, *I*, *NSE* and *SS* are almost always imposed on individual choice as criteria of rationality. *IIA* and *CO*, together with *P*, are generally regarded as conditions that Arrowian social welfare functions must satisfy. *I* is also sometimes considered as a requirement for social welfare functionals, especially in the context of discussions over utilitarianism and prioritarianism. It should be noted that they are not completely independent: for instance, *CO* will generally require the satisfaction of *IIA* or of *NSE*. Regarding the former, define a choice function *C(.)* such that, for any set *S* of alternatives, *C(S)* = {*x* | *x* ≥ *y* for all *y* ∈ *S*}, i.e. the alternatives that can be chosen are those and only those which are not ranked below any other alternative in terms of their consequences. Consider a set of three alternatives *x*, *y*, *z* and suppose that *C*({*x*, *y*}) = {*x*} but *C*({*x*, *y*, *z*}) = {*y*, *z*}. This is a violation of *IIA* since while *x* ≥ *y* and not (*y* ≥ *x*) when *S* = {*x*, *y*}, we have *y* ≥ *x* and not (*x* ≥ *y*) when *S* = {*x*, *y*, *z*}. Now suppose that *C*({*x*, *z*}) = {*z*}. We then have a violation of the transitivity of the negation of the binary relation ≥: while we have not (*z* ≥ *y*) and not (*y* ≥ *x*), we nevertheless have *z* ≥ *x*. However, this is not possible if *CO* is satisfied.
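The choice data just described can be run through a short script. The sketch below (encoding mine) stores each menu with its chosen set and searches for a pair of alternatives whose revealed ranking reverses across menus, which is the *IIA* violation at issue:

```python
from itertools import product

# The example's choice data: C({x,y}) = {x}, C({x,y,z}) = {y,z}, C({x,z}) = {z}.
choices = {
    frozenset({"x", "y"}): {"x"},
    frozenset({"x", "y", "z"}): {"y", "z"},
    frozenset({"x", "z"}): {"z"},
}

def violates_iia(choices):
    """Find a reversal: a chosen while b is rejected in one menu, but
    b chosen while a is rejected in another menu containing both."""
    for (m1, c1), (m2, c2) in product(choices.items(), repeat=2):
        for a, b in product(m1 & m2, repeat=2):
            if a in c1 and b not in c1 and b in c2 and a not in c2:
                return True
    return False

assert violates_iia(choices)  # x over y in {x,y}, but y over x in {x,y,z}
```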

All these axioms have traditionally been given a normative interpretation. By this, I mean that they are seen as normative criteria of individual and collective rationality: a rational agent *should* or *must* have completely ordered preferences over the set of all available alternatives, he cannot on pain of inconsistency violate *I* or *NSE*, and so on. Similarly, collective rationality entails that any aggregation of the individuals’ evaluations of the available alternatives generates a complete ordering satisfying *P* and *IIA* and possibly *I*. Understood this way, these axioms characterize consequentialism as a normative doctrine setting constraints on rational and social choices. For instance, in the moral realm, consequentialism rules out various forms of egalitarian accounts which violate *I* and sometimes *P*. In the domain of individual choice, it will regard criteria such as minimization of maximum regret or maximin as irrational. Consequentialists however face several problems. The first and most evident one is that reasonable individuals regularly fail to meet the criteria of rationality imposed by consequentialism. This has been well documented in economics, starting with axiom *I* in Allais’ paradox and Ellsberg’s paradox. A second problem is that the axioms of consequentialism sometimes lead to counterintuitive and disturbing moral implications. It has accordingly been suggested that criteria of individual rationality should not apply to collective rationality, especially *CO* and *I* (but also *P* and *IIA*).

These difficulties have led consequentialists to develop defensive strategies to preserve most of the axioms. Most of these strategies appeal to what I will call *formalism*: in a nutshell, they consist in regarding the axioms as structural or formal constraints for *representing*, rather than assessing, individual and collective choices. In other words, rather than a normative doctrine, consequentialism is best viewed as a methodological and theoretical framework to account for the underlying values that ground individual and collective choices. As this may sound quite abstract, I will discuss two examples, one related to individual rational choice, the other to social choice, both concerned with axiom *I*. The first example is simply the well-known Ellsberg’s paradox. Assume you are presented with two consecutive decision problems, each time a choice between a pair of alternatives. In the first one, we suppose that an urn contains 30 red balls and 60 other balls which can be either black or yellow. You are presented with two alternatives: alternative A gives you 100$ in case a red ball is drawn and alternative B gives you 100$ in case a black ball is drawn. In the second decision problem, the content of the urn is assumed to be the same, but this time alternative C gives you 100$ in case you draw either a red or a yellow ball and alternative D gives you 100$ in case you draw either a black or a yellow ball.

| Alternative/event | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn |
| --- | --- | --- | --- |
| A | 100$ | 0$ | 0$ |
| B | 0$ | 100$ | 0$ |

| Alternative/event | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn |
| --- | --- | --- | --- |
| C | 100$ | 0$ | 100$ |
| D | 0$ | 100$ | 100$ |
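The argument for axiom *I* here can be checked numerically. In the sketch below (the parameterization by the unknown proportion of black balls is mine), the expected monetary value of each alternative is computed as a function of that unknown probability; the A-versus-B and C-versus-D differences coincide for every value, so no single probability assignment can rationalize preferring A to B while preferring D to C:

```python
def expected_values(p_black):
    """Expected gains given the known 1/3 chance of red and an assumed
    probability p_black of black (yellow gets the remainder of 2/3)."""
    p_red, p_yellow = 1 / 3, 2 / 3 - p_black
    return {
        "A": 100 * p_red,
        "B": 100 * p_black,
        "C": 100 * (p_red + p_yellow),
        "D": 100 * (p_black + p_yellow),
    }

# EV(A) - EV(B) equals EV(C) - EV(D) whatever p_black is assumed:
for p_black in (0.0, 0.2, 1 / 3, 0.5, 2 / 3):
    ev = expected_values(p_black)
    assert abs((ev["A"] - ev["B"]) - (ev["C"] - ev["D"])) < 1e-9
```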

Axiom *I* entails that if the decision-maker prefers A to B, then he should prefer C to D. The intuition is that if one prefers A to B, that must mean that the decision-maker ascribes a higher probability to event E1 than to event E2. Since the content of the urn is assumed to be the same in both decision problems, this should imply that the expected gain of C (measured either in money or in utility) is higher than D’s. The decision-maker’s ranking of the alternatives should be independent of what happens in case event E3 holds, since in each decision problem the two alternatives then have the same outcome. However, as Ellsberg’s experiment shows, while most persons prefer A to B, they prefer D to C, which is sometimes interpreted as the result of ambiguity aversion.

The second example has been suggested by Peter Diamond in a discussion of John Harsanyi’s utilitarian aggregation theorem. Suppose a doctor has two patients waiting for kidney transplantation. Unfortunately, only one kidney is available and another is not expected for quite some time. We assume that the doctor, who endorses society’s social preferences, is indifferent between giving the kidney to one patient or the other. The doctor is choosing between three allocation mechanisms: mechanism S1 gives the kidney to patient 1 for sure, mechanism S2 gives the kidney to patient 2 for sure, while under mechanism R he tosses a fair coin and gives the kidney to patient 1 if tails and to patient 2 if heads.

| Alternative/event | E1: Coin toss falls Tails | E2: Coin toss falls Heads |
| --- | --- | --- |
| S1 | Kidney is given to patient 1 | Kidney is given to patient 1 |
| S2 | Kidney is given to patient 2 | Kidney is given to patient 2 |
| R | Kidney is given to patient 1 | Kidney is given to patient 2 |

Given that the society (and the doctor) is assumed to be indifferent between giving the kidney to patient 1 or 2, axiom *I* implies that the three alternatives should be ranked as indifferent. Most people nevertheless have the strong intuition that allocation mechanism R is better because it is *fairer*.
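The force of axiom *I* in this case can be made explicit with a one-line computation (the common utility value *u* below is hypothetical): if both sure outcomes carry the same social value, the expected value of the randomized mechanism is mechanically the same as that of either sure mechanism:

```python
# u is the (hypothetical) common social value of giving the kidney to
# either patient; society is assumed indifferent between the two.
u = 1.0

expected_value = {
    "S1": u,                 # kidney to patient 1 under both events
    "S2": u,                 # kidney to patient 2 under both events
    "R": 0.5 * u + 0.5 * u,  # fair coin over the two sure outcomes
}

# Axiom I (through expected utility) thus forces indifference among the
# three mechanisms, against the intuition that R is fairer.
assert expected_value["S1"] == expected_value["S2"] == expected_value["R"]
```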

Instead of giving up axiom *I*, several consequentialists have suggested reconciling our intuitions with consequentialism through a refinement of the description of outcomes. The basic idea is that, following consequentialism, everything that matters in the individual or collective choice should be featured in the description of outcomes. Consider Ellsberg’s paradox first. If we assume that the violation of *I* is due to the decision-makers’ aversion to probabilistic ambiguity, then we modify the tables in the following way:

| Alternative/event | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn |
| --- | --- | --- | --- |
| A | 100$ + sure to have a 1/3 probability of winning | 0$ + sure to have a 1/3 probability of winning | 0$ + sure to have a 1/3 probability of winning |
| B | 0$ + unsure of the probability of winning | 100$ + unsure of the probability of winning | 0$ + unsure of the probability of winning |

| Alternative/event | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn |
| --- | --- | --- | --- |
| C | 100$ + unsure of the probability of winning | 0$ + unsure of the probability of winning | 100$ + unsure of the probability of winning |
| D | 0$ + sure to have a 2/3 probability of winning | 100$ + sure to have a 2/3 probability of winning | 100$ + sure to have a 2/3 probability of winning |

The point is simple. If we consider that being unsure of one’s probability of winning the 100$ is something that makes an alternative less desirable, all else equal, then this has to be reflected in the description and valuation of outcomes. It is then easy to see that ranking A over B but D over C no longer entails a violation of *I* because the outcomes associated to event E3 are no longer the same within each pair of alternatives. A similar logic can be applied to the second example. If it is collectively considered that the fairness of the allocation mechanism is something valuable, then this must be reflected in the description of outcomes. Then, we have

| Alternative/event | E1: Coin toss falls Tails | E2: Coin toss falls Heads |
| --- | --- | --- |
| S1 | Kidney is given to patient 1 | Kidney is given to patient 1 |
| S2 | Kidney is given to patient 2 | Kidney is given to patient 2 |
| R | Kidney is given to patient 1 + both patients are fairly treated | Kidney is given to patient 2 + both patients are fairly treated |

Once again, this new description allows us to rank R strictly above S1 and S2 without violating *I*. Hence, the consequentialist’s motto in all the cases where one axiom seems problematic is simply “get the outcome descriptions right!”.
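The redescription move can itself be made precise. In the sketch below (the encoding and labels such as "ambiguous" are mine), axiom *I* constrains a pair of comparisons only when, within each pair, the two alternatives yield identical outcomes under the event in question; refining the outcome descriptions removes exactly that identity:

```python
# Coarse outcomes: money only. Refined outcomes: money plus an ambiguity tag.
coarse = {
    "A": {"E1": 100, "E2": 0, "E3": 0},
    "B": {"E1": 0, "E2": 100, "E3": 0},
    "C": {"E1": 100, "E2": 0, "E3": 100},
    "D": {"E1": 0, "E2": 100, "E3": 100},
}
refined = {
    "A": {e: (m, "sure 1/3") for e, m in coarse["A"].items()},
    "B": {e: (m, "ambiguous") for e, m in coarse["B"].items()},
    "C": {e: (m, "ambiguous") for e, m in coarse["C"].items()},
    "D": {e: (m, "sure 2/3") for e, m in coarse["D"].items()},
}

def i_constrains(outcomes, pair1, pair2, event):
    """Axiom I bites only if each pair agrees on its outcome under `event`."""
    (a, b), (c, d) = pair1, pair2
    return (outcomes[a][event] == outcomes[b][event]
            and outcomes[c][event] == outcomes[d][event])

assert i_constrains(coarse, ("A", "B"), ("C", "D"), "E3")       # I applies
assert not i_constrains(refined, ("A", "B"), ("C", "D"), "E3")  # I is silent
```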

A natural objection to this strategy is of course that it seems to make things too easy for the consequentialist. On the one hand, it makes the axioms virtually unfalsifiable, as any choice behavior can be trivially accounted for by a sufficiently fine-grained partition of the outcome space. On the other hand, all moral intuitions and principles can be made compatible with a consequentialist perspective, once again provided that we have the right partition of the outcome space. However, one can argue that this is precisely the point of the formalist strategy. The consequentialist will argue that this is unproblematic as long as consequentialism is seen not as a normative doctrine about rationality and morality, but rather as a methodological and theoretical framework to account for the implications of various values and principles for rational and social choices. More precisely, what can be called *formal consequentialism* can be seen as a framework to uncover the principles and values underlying our moral and rational behavior and judgments.

Of course, this defense is not completely satisfactory. Indeed, most consequentialists will not be comfortable with the removal of all normative content from their approach. As a consequentialist, one wants to be able to argue about what it is rational to do and to say what morality commends in specific circumstances. If one wants to preserve some normative content, then the only solution is to impose normative constraints on the permissible partitions of the outcome space. This is indeed what John Broome has suggested in several of his writings with the notion of “individuation of outcomes by justifiers”: the partition of the outcome space should distinguish outcomes if and only if they differ in a way that makes it rational not to be indifferent between them. It follows that theories of rational choice and social choice are in need of a *substantive* account of rational preferences and goodness. Such an account is notoriously difficult to conceive. A second difficulty is that the formalist strategy will sometimes be implausible or may even lead to some form of inconsistency. For instance, in the context of expected utility theory, Broome’s individuation of outcomes depends on the crucial and implausible assumption that all “constant acts” are available. This leads to a “richness” axiom (made by Savage for instance) according to which all probability distributions over outcomes should figure in the set of available alternatives, including logically or materially impossible ones (e.g. being dead and in good health). In sequential decision problems, the formalist strategy is bound to fail as soon as the path taken to reach a given outcome is relevant for the decision-maker. In this case, including the path taken in the description of outcomes will not always be possible without leading to inconsistent descriptions of what is supposed to be the same outcome.

These difficulties indicate that formalism cannot fully vindicate consequentialism. Still, it remains an interesting perspective both in rational and social choice theory.