Parfit on How to Avoid the Repugnant Conclusion (And Some Additional Personal Considerations)

Derek Parfit, one of the most influential contemporary philosophers, died last January. The day before his death, he submitted what seems to be his last paper to the journal Philosophy & Public Affairs. In this paper, Parfit tackles the famous “non-identity problem” that he himself set out in Reasons and Persons almost 35 years ago. Though unfinished, the paper is quite interesting because it appears to offer a way to avoid the no less famous “repugnant conclusion”. I describe Parfit’s tentative solution below and add some comments on the role played by moral intuitions in Parfit’s (and other moral philosophers’) argumentation.

Parfit is concerned with cases where we have to compare the goodness of two or more outcomes in which different people exist. Start with Same Number cases, i.e. cases where at least one person exists in one outcome but not in the other, while the total number of people is the same in both. Example 1 is an instance of such a case (numbers denote quality of life according to some cardinal and interpersonally comparable measure):

Example 1

|           | Ann | Bob | Chris |
|-----------|-----|-----|-------|
| Outcome A | 80  | 60  | –     |
| Outcome B | –   | 70  | 20    |
| Outcome C | 20  | –   | 30    |

How should we compare these three outcomes? Many moral philosophers entertain one kind or another of “person-affecting principle”, according to which betterness or worseness necessarily depends on some persons being better (or worse) off in one outcome than in another. Consider in particular the Weak Narrow Principle:

Weak Narrow Principle: One of two outcomes would be in one way worse if this outcome would be worse for people.


Since it is generally accepted that we cannot make someone worse off by not bringing her into existence, outcome A should be regarded as worse (in one way) than outcome B by the Weak Narrow Principle. Indeed, Bob is worse off in A than in B, while the fact that Ann does not exist in B cannot make her worse off than in A (even though Ann would have a pretty good life if A were to happen). By the same reasoning, C should be regarded as worse than A, and B as worse than C. Thus the ‘worse than’ relation is not transitive. Lack of transitivity may seem dubious, but it is not in itself sufficient to reject the Weak Narrow Principle. Note, though, that if we have to compare the goodness of the three outcomes together, we are left without any determinate answer. Consider however:

Example 2

|           | Dani | Matt | Luke | Jessica |
|-----------|------|------|------|---------|
| Outcome D | 70   | 50   | –    | –       |
| Outcome E | –    | 60   | 30   | –       |
| Outcome F | –    | –    | 35   | 10      |

According to the Weak Narrow Principle, D is worse than E and E is worse than F. If we impose transitivity on the ‘worse than’ relation, then D is worse than F. Parfit regards this kind of conclusion as implausible. Even if we deny transitivity, the conclusion that E is worse than F is hard to accept.
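To make the comparisons mechanical, here is a minimal sketch in Python (my own illustration, not Parfit's) that encodes the Weak Narrow Principle as "worse for at least one person who exists in both outcomes" and applies it to Examples 1 and 2:

```python
# Minimal sketch (my own encoding): an outcome is a dict mapping persons to
# quality-of-life levels; a person absent from the dict does not exist there.

def worse_in_one_way(x, y):
    """Weak Narrow Principle, narrowly encoded: x is in one way worse than y
    if some person existing in both outcomes is worse off in x;
    non-existence cannot make anyone worse off."""
    return any(x[p] < y[p] for p in x if p in y)

# Example 1
A = {"Ann": 80, "Bob": 60}
B = {"Bob": 70, "Chris": 20}
C = {"Ann": 20, "Chris": 30}
print(worse_in_one_way(A, B))  # True: Bob is worse off in A than in B
print(worse_in_one_way(C, A))  # True: Ann is worse off in C than in A
print(worse_in_one_way(B, C))  # True: Chris is worse off in B than in C
# A cycle: the 'worse than' relation is not transitive.

# Example 2
D = {"Dani": 70, "Matt": 50}
E = {"Matt": 60, "Luke": 30}
F = {"Luke": 35, "Jessica": 10}
print(worse_in_one_way(D, E))  # True: Matt is worse off in D than in E
print(worse_in_one_way(E, F))  # True: Luke is worse off in E than in F
# Imposing transitivity would then make D worse than F.
```

Nothing in the argument depends on this particular encoding; it only makes explicit which pairwise comparisons the principle licenses.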

Given that the Weak Narrow Principle leads to implausible conclusions in Same Number cases, it is desirable to find alternative principles. In Reasons and Persons, Parfit suggested adopting impersonal principles that do not appeal to facts about what would affect particular people. For instance:

Impersonal Principle: In Same Number cases, it would be worse if the people who existed would be people whose quality of life would be lower.


According to this principle, we can claim that F is worse than E, which is worse than D. Obviously, ‘worse than’ is here transitive. What about Different Number cases (i.e. cases where the number of people who exist in one outcome is higher or lower than in another)? In Reasons and Persons, Parfit originally explored an extension of the Impersonal Principle:

The Impersonal Total Principle: It would always be better if there was a greater sum of well-being.


Parfit ultimately rejected this last principle because it leads to the Repugnant Conclusion:

The Repugnant Conclusion: Compared with the existence of many people whose quality of life would be very high, there is some much larger number of people whose existence would be better, even though these people’s lives would be barely worth living.

In his book Rethinking the Good, the philosopher Larry Temkin suggests avoiding the repugnant conclusion by arguing that the ‘all things considered better than’ relation is essentially comparative. In other words, the goodness of a given outcome depends on the set of outcomes with which it is compared. But this has the obvious consequence that the ‘better than’ relation is not necessarily transitive (Temkin claims that transitivity applies only to a limited part of our normative realm). Parfit instead sticks to the view that goodness is intrinsic and suggests an alternative approach through another principle:

Wide Dual Person-Affecting Principle: One of two outcomes would be in one way better if this outcome would together benefit people more, and in another way better if this outcome would benefit each person more.


Compare outcomes G and H on the basis of this principle:

Outcome G: N persons will exist and each will live a life whose quality is at 80.

Outcome H: 2N persons will exist and each will live a life whose quality is at 50.


According to the Wide Dual Person-Affecting Principle, G is better than H in at least one way because it benefits each person more, assuming that you cannot be made worse off by not existing. H may be argued to be better than G in another way, by together benefiting people more, at least on the basis of some additive rule. Which outcome is better all things considered remains debatable. But consider:

Outcome I: N persons will exist and each will live a life whose quality is at 100.

Outcome J: 1000N persons will exist and each will live a life whose quality is at 1.


Here, although each outcome is better than the other in one respect, it may plausibly be claimed that I is better all things considered because the lives in J are barely worth living. This may be regarded as sufficient to more than compensate for the fact that the sum of well-being is far greater in J than in I. This leads to the following conclusion:

Analogous Conclusion: Compared with the existence of many people whose lives would be barely worth living, there is some much higher quality of life whose being had by everyone would be better, even though the number of people who exist would be much smaller.

This conclusion is consistent with the view that goodness is intrinsic and obviously avoids the repugnant conclusion.
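For concreteness, here is a small back-of-the-envelope computation (mine, with N arbitrarily set to 10) of the two “ways” of the Wide Dual Person-Affecting Principle for G, H, I and J, reading “benefiting people together” as a simple sum of well-being:

```python
# Rough sketch: per-person quality vs. aggregate well-being, with "benefiting
# people together" read as a plain sum (a simplifying assumption of mine).
N = 10  # arbitrary illustrative value

outcomes = {
    "G": (N, 80),         # (number of people, quality of life per person)
    "H": (2 * N, 50),
    "I": (N, 100),
    "J": (1000 * N, 1),
}

for name, (people, quality) in outcomes.items():
    print(f"{name}: per-person quality = {quality}, total well-being = {people * quality}")
# G: 80 per person, total 800     H: 50 per person, total 1000
# I: 100 per person, total 1000   J: 1 per person, total 10000
```

On the per-person way G beats H and I beats J; on the aggregate way the larger populations come out ahead. The Analogous Conclusion amounts to saying that, in the I/J comparison, the first way plausibly dominates.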


I would like to end this post with some remarks on the role played by moral intuitions in Parfit’s reasoning. This issue had already come to my mind when reading Parfit’s Reasons and Persons as well as Temkin’s Rethinking the Good. Basically, both Parfit and Temkin (and many other moral philosophers) ground their moral reasoning on intuitions about what is good/bad or right/wrong. For instance, Parfit’s initial rejection of impersonal principles in Reasons and Persons was entirely grounded on the fact that they seem to lead to the repugnant conclusion, which Parfit regarded as morally unacceptable. The same is true of Temkin’s arguments against the transitivity of the ‘all things considered better than’ relation. Moral philosophers seem mostly to use a form of backward reasoning about moral matters: take some conclusions as intuitively acceptable/unacceptable or plausible/implausible, and then try to find principles that may rationalize our intuitions about these conclusions.

As a scholar in economics & philosophy with the background of an economist, I find this way of reasoning somewhat surprising. Economists who think about moral matters generally do so from a social choice perspective, which almost completely turns the philosopher’s reasoning on its head. Basically, a social choice theorist will start from a small set of axioms that encapsulate basic principles which may plausibly be regarded as constraints that should bind any acceptable moral view. For instance, Pareto principles are generally imposed because we take as a basic moral constraint that if everyone is better off (in some sense) in one outcome than in another, then the former is better than the latter. The social choice approach then consists in determining which social choice functions (i.e. moral views) are compatible with these constraints. In most cases, this approach will not be able to tell which moral view is obligatory; but it will tell which moral views are and are not permissible given our accepted set of constraints. The repugnant conclusion provides a good illustration: in one of the best social choice treatments of issues related to population ethics, John Broome (a philosopher but a former economist) rightly notes that if the “repugnant” conclusion follows from acceptable premises, then we should not reject it on the ground that we regard it as counterintuitive. The same is true of transitivity: the fact that it entails counterintuitive conclusions is not sufficient to reject it (at least, independent arguments for rejection are needed).

There are two ways to justify the social choice approach to moral matters. The first is that we generally have a better understanding of “basic principles” than of more complex conclusions that depend on a (not always well-identified) set of premises. It is far easier to discuss the plausibility of transitivity or of Pareto principles in general than to assess moral views and their more or less counterintuitive implications. Of course, we may also have a poor understanding of basic principles, but the attractiveness of the social choice approach is precisely that it helps to focus the discussion on axioms (think of the literature on Arrow’s impossibility theorem). The second reason to endorse the social choice approach on moral issues is that we are now starting to understand where our moral intuitions and judgments come from. Moral psychology and experimental philosophy tend to indicate that our moral views are deeply rooted in our evolutionary history. Far from vindicating them, this should on the contrary encourage us to be skeptical about their truth-value. Modern forms of moral skepticism point out that, whatever the ontological status of morality, the naturalistic origins of moral judgments do not guarantee, and actually make highly doubtful, that what we believe about morality is epistemically well-grounded.



Consequentialism and Formalism in Rational and Social Choice Theory

Rational choice theory and social choice theory (RCT and SCT respectively) in economics are broadly consequentialist. Consequentialism can be characterized as the view that all choice alternatives should be evaluated in terms of their consequences and that the best alternatives are those which have the best consequences. This is a very general view which allows for many different approaches and frameworks. In SCT, welfarism, for example, is a particular form of consequentialism largely dominant in economics, and utilitarianism is a specific instance of welfarism. In RCT, expected utility theory and revealed preference theory are two accounts of rational decision-making that assume that choices are made on the basis of their consequences.

Consequentialism is also characterized by a variety of principles or axioms that take different and more or less strong forms depending on the specific domain of application. The most important are the following:

Complete ordering (CO): The elements of any set A of alternatives can be completely ordered on the basis of a reflexive and transitive binary relation ≥.

Independence (I): The ranking of any pair of alternatives is unaffected by a change in the likelihood of consequences which are identical across the two alternatives.

Normal/sequential form equivalence (NSE): The ordering of alternatives is the same whether the decision problem is represented in normal form (the alternative is directly associated with a consequence or a probability distribution over consequences) or in sequential form (the alternative is a sequence of actions leading to a terminal node associated with a consequence or a probability distribution over consequences).

Sequential separability (SS): For any decision tree T and any subtree Tn starting at node n of T, the ordering of the subset of consequences accessible in Tn is the same in T as in Tn.

Pareto (P): If two alternatives have the same or equivalent consequences across some set of locations (events, persons), then there must be indifference between the two alternatives.

Independence of irrelevant alternatives (IIA): The ordering of any pair of alternatives is independent of the set of available alternatives.

All these axioms are used either in RCT or in SCT, sometimes in both. CO, I, NSE, SS and IIA are almost always imposed on individual choice as criteria of rationality. CO and IIA, together with P, are generally regarded as conditions that Arrowian social welfare functions must satisfy. I is also sometimes considered as a requirement for social welfare functionals, especially in the context of discussions over utilitarianism and prioritarianism.

It should be noted that these axioms are not completely independent: for instance, CO will generally require the satisfaction of IIA or of NSE. Regarding the former, define a choice function C(.) such that, for any set S of alternatives, C(S) = {x | x ≥ y for all y ∈ S}, i.e. the alternatives that can be chosen are those and only those which are not ranked below any other available alternative in terms of their consequences. Consider a set of three alternatives x, y, z and suppose that C({x, y}) = {x} but C({x, y, z}) = {y, z}. This is a violation of IIA since, while x ≥ y and not (y ≥ x) when S = {x, y}, we have y ≥ x and not (x ≥ y) when S = {x, y, z}. Now suppose that C({x, z}) = {z}. We have a violation of the transitivity of the negation of the binary relation ≥ since, while we have not (z ≥ y) and not (y ≥ x), we nevertheless have z ≥ x. However, this is not possible if CO is satisfied.
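The point can also be checked by brute force. The following sketch (my own; the menu and choice data are those of the example above) enumerates every reflexive binary relation on {x, y, z} and verifies that no complete and transitive one generates these choices via C(S) = {x | x ≥ y for all y ∈ S}:

```python
from itertools import product

alts = ("x", "y", "z")
observed = {
    frozenset({"x", "y"}): {"x"},
    frozenset({"x", "y", "z"}): {"y", "z"},
    frozenset({"x", "z"}): {"z"},
}
pairs = [(a, b) for a in alts for b in alts if a != b]

def ge(rel, a, b):
    # ">=" with reflexivity built in; rel lists the ordered pairs of
    # distinct alternatives that stand in the relation
    return a == b or (a, b) in rel

def complete(rel):
    return all(ge(rel, a, b) or ge(rel, b, a) for a, b in pairs)

def transitive(rel):
    return all(ge(rel, a, c)
               for a in alts for b in alts for c in alts
               if ge(rel, a, b) and ge(rel, b, c))

def choice(rel, menu):
    # C(S) = {x in S : x >= y for all y in S}
    return {a for a in menu if all(ge(rel, a, b) for b in menu)}

rationalizable = False
for bits in product([False, True], repeat=len(pairs)):
    rel = {p for p, keep in zip(pairs, bits) if keep}
    if complete(rel) and transitive(rel):
        if all(choice(rel, m) == c for m, c in observed.items()):
            rationalizable = True
print(rationalizable)  # False: no complete transitive >= generates these choices
```

The check is exhaustive because with three alternatives there are only 64 candidate relations to test.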

All these axioms have traditionally been given a normative interpretation. By this, I mean that they are seen as normative criteria of individual and collective rationality: a rational agent should or must have completely ordered preferences over the set of all available alternatives, he cannot, on pain of inconsistency, violate I or NSE, and so on. Similarly, collective rationality entails that any aggregation of the individuals’ evaluations of the available alternatives generates a complete ordering satisfying P and IIA, and possibly I. Understood this way, these axioms characterize consequentialism as a normative doctrine setting constraints on rational and social choices. For instance, in the moral realm, consequentialism rules out various forms of egalitarian accounts which violate I and sometimes P. In the domain of individual choice, it regards criteria such as minimization of maximum regret or maximin as irrational. Consequentialists face several problems, however. The first and most evident one is that reasonable individuals regularly fail to meet the criteria of rationality imposed by consequentialism. This has been well documented in economics, starting with violations of axiom I in Allais’ paradox and Ellsberg’s paradox. A second problem is that the axioms of consequentialism sometimes lead to counterintuitive and disturbing moral implications. It has accordingly been suggested that criteria of individual rationality should not apply to collective rationality, especially CO and I (but also P and IIA).

These difficulties have led consequentialists to develop defensive strategies to preserve most of the axioms. Most of these strategies rely on what I will call formalism: in a nutshell, they consist in regarding the axioms as structural or formal constraints for representing, rather than assessing, individual and collective choices. In other words, rather than a normative doctrine, consequentialism is best viewed as a methodological and theoretical framework to account for the underlying values that ground individual and collective choices. As this may sound quite abstract, I will discuss two examples, one related to individual rational choice, the other to social choice, both concerned with axiom I. The first example is simply the well-known Ellsberg paradox. Assume you are presented with two consecutive decision problems, each offering a choice between a pair of alternatives. In the first one, an urn contains 30 red balls and 60 other balls which can be either black or yellow. You are presented with two alternatives: alternative A gives you 100$ if a red ball is drawn and alternative B gives you 100$ if a black ball is drawn. In the second decision problem, the content of the urn is assumed to be the same, but this time alternative C gives you 100$ if you draw either a red or a yellow ball and alternative D gives you 100$ if you draw either a black or a yellow ball.

| Alternative | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn |
|-------------|-----------------------|-------------------------|--------------------------|
| A           | 100$                  | 0$                      | 0$                       |
| B           | 0$                    | 100$                    | 0$                       |

| Alternative | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn |
|-------------|-----------------------|-------------------------|--------------------------|
| C           | 100$                  | 0$                      | 100$                     |
| D           | 0$                    | 100$                    | 100$                     |

Axiom I entails that if the decision-maker prefers A to B, then he should prefer C to D. The intuition is that preferring A to B must mean that the decision-maker ascribes a higher probability to event E1 than to event E2. Since the content of the urn is assumed to be the same in both decision problems, this should imply that the expected gain of C (measured either in money or in utility) is higher than D’s. The decision-maker’s ranking should be independent of what happens if event E3 holds, since in each decision problem the two alternatives have the same outcome under E3. However, as Ellsberg’s experiment shows, most persons prefer A to B but D to C, which is sometimes interpreted as the result of ambiguity aversion.
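A quick way to see the inconsistency: for any single probability assignment over E1, E2, E3 (with the urn fixing p1 = 1/3 and taking u(0) = 0), the expected-utility differences EU(A) - EU(B) and EU(C) - EU(D) both equal u(100) * (p1 - p2), so they must have the same sign. The sketch below (my own illustration) checks this for a few random splits of the 60 black/yellow balls:

```python
# Sketch (mine): under expected utility with a single probability assignment,
# preferring A to B forces preferring C to D, whatever the unknown split
# of the 60 black/yellow balls.
import random

u = {0: 0.0, 100: 1.0}                       # any scale with u(100) > u(0) works
acts = {
    "A": (100, 0, 0), "B": (0, 100, 0),      # first decision problem
    "C": (100, 0, 100), "D": (0, 100, 100),  # second decision problem
}

def expected_utility(act, probs):
    return sum(p * u[prize] for p, prize in zip(probs, acts[act]))

for _ in range(5):
    p2 = random.uniform(0, 2 / 3)            # probability of drawing a black ball
    probs = (1 / 3, p2, 2 / 3 - p2)
    prefers_A = expected_utility("A", probs) > expected_utility("B", probs)
    prefers_C = expected_utility("C", probs) > expected_utility("D", probs)
    print(round(p2, 3), prefers_A, prefers_C)  # the two booleans always agree
```

The modal pattern, A over B together with D over C, therefore cannot be rationalized by any such single probability assignment, which is the violation of I.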

The second example was suggested by Peter Diamond in a discussion of John Harsanyi’s utilitarian aggregation theorem. Suppose a doctor has two patients waiting for a kidney transplant. Unfortunately, only one kidney is available and no other is expected for quite some time. We assume that the doctor, endorsing society’s preferences, is indifferent between giving the kidney to one patient or the other. The doctor is choosing between three allocation mechanisms: mechanism S1 gives the kidney to patient 1 for sure, mechanism S2 gives the kidney to patient 2 for sure, while under mechanism R he tosses a fair coin and gives the kidney to patient 1 if it lands tails and to patient 2 if it lands heads.

| Alternative | E1: Coin lands tails         | E2: Coin lands heads         |
|-------------|------------------------------|------------------------------|
| S1          | Kidney is given to patient 1 | Kidney is given to patient 1 |
| S2          | Kidney is given to patient 2 | Kidney is given to patient 2 |
| R           | Kidney is given to patient 1 | Kidney is given to patient 2 |

Given that society (and the doctor) is assumed to be indifferent between giving the kidney to patient 1 or to patient 2, axiom I implies that the three alternatives should be ranked as indifferent. Most people nevertheless have the strong intuition that allocation mechanism R is better because it is fairer.

Instead of giving up axiom I, several consequentialists have suggested reconciling our intuitions with consequentialism through a refinement of the description of outcomes. The basic idea is that, following consequentialism, everything that matters in the individual or collective choice should be featured in the description of outcomes. Consider the Ellsberg paradox first. If we assume that the violation of I is due to the decision-makers’ aversion to probabilistic ambiguity, then we can modify the tables in the following way:

| Alternative | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn |
|-------------|-----------------------|-------------------------|--------------------------|
| A | 100$ + sure to have a 1/3 probability of winning | 0$ + sure to have a 1/3 probability of winning | 0$ + sure to have a 1/3 probability of winning |
| B | 0$ + unsure of the probability of winning | 100$ + unsure of the probability of winning | 0$ + unsure of the probability of winning |

| Alternative | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn |
|-------------|-----------------------|-------------------------|--------------------------|
| C | 100$ + unsure of the probability of winning | 0$ + unsure of the probability of winning | 100$ + unsure of the probability of winning |
| D | 0$ + sure to have a 2/3 probability of winning | 100$ + sure to have a 2/3 probability of winning | 100$ + sure to have a 2/3 probability of winning |

The point is simple. If being unsure of one’s probability of winning the 100$ makes an alternative less desirable, everything else being equal, then this has to be reflected in the description and valuation of outcomes. It is then easy to see that ranking A over B but D over C no longer entails a violation of I, because the outcomes associated with event E3 are no longer the same within each pair of alternatives. A similar logic can be applied to the second example. If the fairness of the allocation mechanism is collectively considered to be something valuable, then this must be reflected in the description of outcomes. Then we have:

| Alternative | E1: Coin lands tails | E2: Coin lands heads |
|-------------|----------------------|----------------------|
| S1 | Kidney is given to patient 1 | Kidney is given to patient 1 |
| S2 | Kidney is given to patient 2 | Kidney is given to patient 2 |
| R | Kidney is given to patient 1 + both patients are fairly treated | Kidney is given to patient 2 + both patients are fairly treated |

Once again, this new description allows us to rank R strictly above S1 and S2 without violating I. Hence, the consequentialist’s motto, in all cases where an axiom seems problematic, is simply: “get the outcome descriptions right!”.
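As a toy check of the Ellsberg refinement (my own construction, with an arbitrary penalty value), one can attach a fixed “ambiguity” disutility to every outcome of an act whose winning probability is not known for sure; with these refined outcomes, A over B and D over C are jointly consistent with expected utility, so I is no longer violated:

```python
# Toy version of the "refine the outcomes" move (my own construction).
AMBIGUITY_PENALTY = 0.2        # assumed disutility of not knowing one's chances
u = {0: 0.0, 100: 1.0}
probs = (1 / 3, 1 / 3, 1 / 3)  # any assumed split of the 60 black/yellow balls

# (prize under E1, E2, E3) and whether the winning probability is known for sure
acts = {
    "A": ((100, 0, 0), True),
    "B": ((0, 100, 0), False),
    "C": ((100, 0, 100), False),
    "D": ((0, 100, 100), True),
}

def refined_eu(name):
    prizes, known = acts[name]
    penalty = 0.0 if known else AMBIGUITY_PENALTY
    # the penalty is part of each refined outcome's description and valuation
    return sum(p * (u[prize] - penalty) for p, prize in zip(probs, prizes))

print(refined_eu("A") > refined_eu("B"))  # True
print(refined_eu("D") > refined_eu("C"))  # True
```

Nothing hinges on the particular numbers; the point is only that the refined outcome descriptions make room for the observed pattern.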

A natural objection to this strategy is of course that it makes things too easy for the consequentialist. On the one hand, it makes the axioms virtually unfalsifiable, since any choice behavior can be trivially accounted for by a sufficiently fine-grained partition of the outcome space. On the other hand, all moral intuitions and principles can be made compatible with a consequentialist perspective, once again provided that we have the right partition of the outcome space. However, one can argue that this is precisely the point of the formalist strategy. The consequentialist will argue that this is unproblematic as long as consequentialism is not seen as a normative doctrine about rationality and morality, but rather as a methodological and theoretical framework to account for the implications of various values and principles for rational and social choices. More precisely, what can be called formal consequentialism can be seen as a framework to uncover the principles and values underlying our moral and rational behavior and judgments.

Of course, this defense is not completely satisfactory. Most consequentialists will not be comfortable with removing all the normative content from their approach. As a consequentialist, one wants to be able to say what it is rational to do and what morality commends in specific circumstances. If one wants to preserve some normative content, then the only solution is to impose normative constraints on the permissible partitions of the outcome space. This is indeed what John Broome has suggested in several of his writings with the notion of “individuation of outcomes by justifiers”: the partition of the outcome space should distinguish outcomes if and only if they differ in a way that makes it rational not to be indifferent between them. It follows that theories of rational choice and social choice need a substantive account of rational preferences and goodness, and such an account is notoriously difficult to conceive. A second difficulty is that the formalist strategy will sometimes be implausible or may even lead to some form of inconsistency. For instance, in the context of expected utility theory, Broome’s individuation of outcomes depends on the crucial and implausible assumption that all “constant acts” are available. This amounts to a “richness” axiom (made by Savage, for instance) according to which all probability distributions over outcomes should figure in the set of available alternatives, including logically or materially impossible ones (e.g. being dead and in good health). In sequential decision problems, the formalist strategy is bound to fail as soon as the path taken to reach a given outcome matters to the decision-maker: including the path taken in the description of outcomes will not always be possible without producing inconsistent descriptions of what is supposed to be the same outcome.

These difficulties indicate that formalism cannot fully vindicate consequentialism. Still, it remains an interesting perspective both in rational and social choice theory.