Parfit on How to Avoid the Repugnant Conclusion (And Some Additional Personal Considerations)

Derek Parfit, one of the most influential contemporary philosophers, died last January. The day before his death, he submitted what seems to be his last paper to the philosophy journal Philosophy and Public Affairs. In this paper, Parfit tackles the famous "non-identity problem" that he himself set out in Reasons and Persons almost 35 years ago. Though unfinished, the paper is quite interesting because it appears to offer a way to avoid the no less famous "repugnant conclusion". I describe below Parfit's tentative solution and also add some comments on the role played by moral intuitions in Parfit's (and other moral philosophers') argumentation.

Parfit is concerned with cases where we have to compare the goodness of two or more outcomes in which different people exist. Start with Same Number cases, i.e. cases where at least one person exists in one outcome but not in the other, but where the total number of people is the same. Example 1 is an instance of such a case (numbers denote quality of life according to some cardinal and interpersonally comparable measure):

Example 1

Outcome A:  Ann 80   Bob 60   –
Outcome B:  –        Bob 70   Chris 20
Outcome C:  Ann 20   –        Chris 30

How should we compare these three outcomes? Many moral philosophers entertain one kind or another of "person-affecting principle", according to which betterness or worseness necessarily depends on some persons being better (or worse) off in one outcome than in another. Consider in particular the Weak Narrow Principle:

Weak Narrow Principle: One of two outcomes would be in one way worse if this outcome would be worse for people.


Since it is generally accepted that we cannot make someone worse off by not making her exist, outcome A should be regarded as worse (in one way) than outcome B by the Weak Narrow Principle. Indeed, Bob is worse off in A than in B, while the fact that Ann does not exist in B cannot make her worse off than in A (even though Ann would have a pretty good life if A were to happen). By the same reasoning, C should be considered worse than A and B worse than C. Thus the 'worse than' relation is not transitive. Lack of transitivity may be seen as dubious but is not in itself sufficient to reject the Weak Narrow Principle. Note though that if we have to compare the goodness of the three outcomes together, we are left without any determinate answer. Consider however:

Example 2

Outcome D:  Dani 70   Matt 50   –         –
Outcome E:  –         Matt 60   Luke 30   –
Outcome F:  –         –         Luke 35   Jessica 10

According to the Weak Narrow Principle, D is worse than E and E is worse than F. If we impose transitivity on the 'worse than' relation, then D is worse than F. Parfit regards this kind of conclusion as implausible. Even if we deny transitivity, the conclusion that E is worse than F is also hard to accept.
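
To make these comparisons explicit, here is a minimal Python sketch (the encoding of outcomes and the helper function are mine, not Parfit's) that applies the Weak Narrow Principle pairwise to both examples:

```python
# A minimal sketch (names and encoding are mine): each outcome maps
# persons to quality of life; absent persons simply do not appear.
example_1 = {
    "A": {"Ann": 80, "Bob": 60},
    "B": {"Bob": 70, "Chris": 20},
    "C": {"Ann": 20, "Chris": 30},
}
example_2 = {
    "D": {"Dani": 70, "Matt": 50},
    "E": {"Matt": 60, "Luke": 30},
    "F": {"Luke": 35, "Jessica": 10},
}

def worse_in_one_way(x, y):
    """Weak Narrow Principle: x is in one way worse than y if some
    person who exists in both outcomes is worse off in x than in y.
    Non-existence cannot make anyone worse off, so only persons
    existing in both outcomes count."""
    shared = x.keys() & y.keys()
    return any(x[p] < y[p] for p in shared)

for outcomes in (example_1, example_2):
    for name_x, x in outcomes.items():
        for name_y, y in outcomes.items():
            if name_x != name_y and worse_in_one_way(x, y):
                print(f"{name_x} is in one way worse than {name_y}")
```

Running it prints exactly the comparisons discussed above: A worse than B, B worse than C, C worse than A, and then D worse than E, E worse than F. Since D and F share no person, the principle is silent about that pair, which is why transitivity has to be imposed separately.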

Given that the Weak Narrow Principle leads to implausible conclusions in Same Number cases, it is desirable to find alternative principles. In Reasons and Persons, Parfit suggested adopting impersonal principles that do not appeal to facts about what would affect particular people. For instance,

Impersonal Principle: In Same Number cases, it would be worse if the people who existed would be people whose quality of life would be lower.


According to this principle, we can claim that F is worse than E, which is worse than D. Obviously, 'worse than' is then transitive. What about Different Number cases (i.e. when the number of people who exist in one outcome is higher or lower than in another)? In Reasons and Persons, Parfit originally explored an extension of the Impersonal Principle:

The Impersonal Total Principle: It would always be better if there was a greater sum of well-being.


Parfit ultimately rejected this last principle because it leads to the Repugnant Conclusion:

The Repugnant Conclusion: Compared with the existence of many people whose quality of life would be very high, there is some much larger number of people whose existence would be better, even though these people's lives would be barely worth living.


In his book Rethinking the Good, the philosopher Larry Temkin suggests avoiding the repugnant conclusion by arguing that the 'all things considered better than' relation is essentially comparative. In other words, the goodness of a given outcome depends on the set of outcomes with which it is compared. But this has the obvious consequence that the 'better than' relation is not necessarily transitive (Temkin claims that transitivity applies only to a limited part of our normative realm). Parfit instead sticks to the view that goodness is intrinsic and suggests an alternative approach through another principle:

Wide Dual Person-Affecting Principle: One of two outcomes would be in one way better if this outcome would together benefit people more, and in another way better if this outcome would benefit each person more.


Compare outcomes G and H on the basis of this principle:

Outcome G: N persons will exist and each will live a life whose quality is at 80.

Outcome H: 2N persons will exist and each will live a life whose quality is at 50.


According to the Wide Dual Person-Affecting Principle, G is better than H in at least one way because it benefits each person more, assuming that you cannot be made worse off by not existing. H may be argued to be better than G in another way, by together benefiting people more, at least on the basis of some additive rule. Which outcome is all things considered better remains debatable. But consider

Outcome I: N persons will exist and each will live a life whose quality is at 100.

Outcome J: 1000N persons will exist and each will live a life whose quality is at 1.


Here, although each outcome is better than the other in one respect, it may plausibly be claimed that I is better all things considered because the lives in J are barely worth living. This may be regarded as sufficient to more than compensate for the fact that the sum of well-being is far greater in J than in I. This leads to the following conclusion:

Analogous Conclusion: Compared with the existence of many people whose lives would be barely worth living, there is some much higher quality of life whose being had by everyone would be better, even though the number of people who exist would be much smaller.

This conclusion is consistent with the view that goodness is intrinsic and obviously avoids the repugnant conclusion.
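
To make the two dimensions of the principle concrete, here is a small Python sketch (the additive rule for "benefiting people together" and the population size N are my own illustrative assumptions, not Parfit's formalism):

```python
# Illustrative sketch: compare equal-quality populations along the two
# dimensions of the Wide Dual Person-Affecting Principle. Treating
# "benefiting people together" as a simple sum is an assumption.
N = 1000  # any positive population size will do

outcomes = {
    "G": (N, 80), "H": (2 * N, 50),    # first comparison
    "I": (N, 100), "J": (1000 * N, 1)  # second comparison
}

def dual_comparison(a, b):
    (n_a, q_a), (n_b, q_b) = outcomes[a], outcomes[b]
    together = "better" if n_a * q_a > n_b * q_b else "worse"
    each = "better" if q_a > q_b else "worse"
    print(f"{a} vs {b}: {together} at benefiting people together "
          f"(totals {n_a * q_a} vs {n_b * q_b}), "
          f"{each} at benefiting each person ({q_a} vs {q_b})")

dual_comparison("G", "H")  # G: lower total, higher per-person quality
dual_comparison("I", "J")  # I: far lower total, but J's lives are barely worth living
```

The two comparisons come apart in exactly the way described above: the totals always favor the larger population, while the per-person dimension favors the smaller one, and the Analogous Conclusion says the latter can win out when lives in the larger population are barely worth living.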


I would like to end this post with some remarks on the role played by moral intuitions in Parfit's reasoning. This issue had already come to my mind when reading Parfit's Reasons and Persons as well as Temkin's Rethinking the Good. Basically, both Parfit and Temkin (and many other moral philosophers) ground their moral reasoning on intuitions about what is good/bad or right/wrong. For instance, Parfit's initial rejection of impersonal principles in Reasons and Persons was entirely grounded on the fact that they seem to lead to the repugnant conclusion, which Parfit regarded as morally unacceptable. The same is true of Temkin's arguments against the transitivity of the 'all things considered better than' relation. Moral philosophers seem mostly to use a form of backward reasoning about moral matters: take some conclusions as intuitively acceptable/unacceptable or plausible/implausible, and then try to find principles that may rationalize our intuitions about these conclusions.

As a scholar in economics & philosophy with the background of an economist, I find this way of reasoning somewhat surprising. Economists who think about moral matters generally do so from a social choice perspective, which almost completely turns the philosopher's reasoning on its head. Basically, a social choice theorist will start from a small set of axioms that encapsulate basic principles that may plausibly be regarded as constraints that should bind any acceptable moral view. For instance, Pareto principles are generally imposed because we take as a basic moral constraint that everyone being better off (in some sense) in a given outcome than in another makes the former better than the latter. The social choice approach then consists in determining which social choice functions (i.e. moral views) are compatible with these constraints. In most cases, this approach will not be able to tell which moral view is obligatory; but it will tell which moral views are and are not permissible given our accepted set of constraints. The repugnant conclusion provides a good illustration: in one of the best social choice treatments of issues related to population ethics, John Broome (a philosopher but a former economist) rightly notes that if the "repugnant" conclusion follows from acceptable premises, then we should not reject it on the ground that we regard it as counterintuitive. The same is true for transitivity: the fact that it entails counterintuitive conclusions is not sufficient to reject it (at least, independent arguments for rejection are needed).
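
A toy version of this procedure, in Python (the outcomes and welfare numbers are invented for the illustration), starts from a single constraint, Pareto dominance, and checks which rankings of the outcomes remain permissible:

```python
# Toy illustration of the social choice approach: impose a constraint
# (here, Pareto dominance) and check which rankings of the outcomes
# remain permissible. Outcomes and welfare numbers are invented.
from itertools import permutations

outcomes = {"x": (3, 3), "y": (2, 2), "z": (4, 1)}  # welfare of two persons

def pareto_dominates(a, b):
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

def respects_pareto(ranking):
    # A ranking (best to worst) is permissible only if it never places
    # a Pareto-dominated outcome above the outcome that dominates it.
    for i, hi in enumerate(ranking):
        for lo in ranking[i + 1:]:
            if pareto_dominates(outcomes[lo], outcomes[hi]):
                return False
    return True

permissible = [r for r in permutations(outcomes) if respects_pareto(r)]
print(permissible)
```

Here x Pareto-dominates y, so three of the six possible rankings survive: the axiom constrains the set of permissible moral views without singling out a unique one, which is exactly the point of the approach.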

There are two ways to justify the social choice approach to moral matters. The first is that we generally have a better understanding of "basic principles" than of more complex conclusions that depend on a (not always well-identified) set of premises. It is far easier to discuss the plausibility of transitivity or of Pareto principles in general than to assess moral views and their more or less counterintuitive implications. Of course, we may also have a poor understanding of basic principles, but the attractiveness of the social choice approach is precisely that it helps to focus the discussion on axioms (think of the literature on Arrow's impossibility theorem). The second reason to endorse the social choice approach on moral issues is that we are now starting to understand where our moral intuitions and judgments come from. Moral psychology and experimental philosophy tend to indicate that our moral views are deeply rooted in our evolutionary history. Far from vindicating them, this should on the contrary encourage us to be skeptical about their truth-value. Modern forms of moral skepticism point out that, whatever the ontological status of morality, the naturalistic origins of moral judgments do not guarantee, and actually make it highly doubtful, that whatever we believe about morality is epistemically well-grounded.



Effective Altruism and the Unavoidability of Ethical Trade-Offs

The so-called "effective altruism" (EA) movement has recently received significant attention in the press. Many articles have been critical of EA for various reasons that largely converge on the general theme that too much quantification and calculation carries the risk of losing the "big picture" of the issues related to charity and poverty. The latest example is an article published on the website "The Conversation". The author's main argument is that, as a reason-based approach to charity and poverty, EA ignores the fact that ethics and morality cannot be reduced to some "cold" utilitarian calculus:

“[EA proponents] have PhDs in the disciplines requiring the highest level of analytical intelligence, but are they clever enough to understand the limits of reason? Do they have an inner alarm bell that goes off when the chain of logical deductions produces a result that in most people causes revulsion?”

According to the author, a society full of "effectively altruist" people would be a society where every ethical issue would be dealt with through cold-minded computation, eliminating any role for emotions and gut instincts.

“To be an effective altruist one must override the urge to give when one’s heart is opened up and instead engage in a process of data gathering and computation to decide whether the planned donation could be better spent elsewhere.

If effective altruists adopt this kind of utilitarian calculus as the basis for daily life (for it would be irrational to confine it to acts of charity) then good luck to them. The problem is that they believe everyone should behave in the same hyper-rational way; in other words, they believe society should be remade in their own image.”

The author then makes a link with free-market economists like Gary Becker, suspecting “that, for most people, following the rules of effective altruism would be like being married to Gary Becker, a highly efficient arrangement between contracting parties, but one deprived of all human warmth and compassion.”

There are surely many aspects of EA that can be argued against, but I think this kind of critique is pretty weak. Moreover, it is grounded in a deep misunderstanding of the contribution that the social sciences (and especially economics) can make to dealing with ethical issues. As a starting point, I think that any discussion of the virtues and dangers of EA should start from a basic premise that I propose to call the "Hard Fact of Ethical Reasoning":

Hard Fact of Ethical Reasoning (HFER) – Any ethical issue involves a decision problem with trade-offs to be made.

Giving to a charity to alleviate the suffering due to poverty is a decision problem with a strong ethical component. What the HFER claims is that when considering how to alleviate this suffering, you have to make a choice regarding how to use scarce resources in such a way that your objective is reached. This is a classical means-end relationship, the study of which has been at the core of modern economics for the last hundred years. If one accepts the HFER (and it is hard to see how one could deny it), then I would argue that EA has the general merit of leading us to reflect on, and to make explicit, the values and the axiological/deontic criteria that underlie our ethical judgments regarding what is considered good or right. As I interpret it, a key message of EA is that these ethical judgments cannot and should not exclusively depend on our gut feelings and emotions, but should also be subject to rational scrutiny. Now, some of us may indeed be uncomfortable with the substantive claims made by EA proponents, such as Peter Singer's remark that "if you do the sums" then "you can provide one guide dog for one blind American or you could cure between 400 and 2,000 people of blindness [in developing countries]". Here, I think the point is to distinguish between two kinds of EA that I would call formal EA and substantive EA respectively.

Formal EA provides a general framework to think of ethical issues related to charity and poverty. It can be characterized by the following two principles:

Formal EA P1: Giving to different charities leads to different states of affairs that can be compared and ranked according to their goodness following some axiological principles, possibly given deontic constraints.

Formal EA P2: The overall goodness of states of affairs is an (increasing) function of their goodness for the individuals concerned.

Principles P1 and P2 are very general. P2 corresponds to what is sometimes called the Pareto principle and seems, in this context, hardly disputable. It basically states that if you have the choice between giving to two charities, and everyone is equally well-off in the two resulting states of affairs except for at least one person who is better off in one of them, then the latter state of affairs is better. P1 states that it is possible to compare and rank states of affairs, which of course still allows for indifference. Note that we allow for the possibility that the ranking is constrained by any deontological principle that is considered relevant. Under these two principles, formal EA essentially consists in a methodological roadmap: compute individual goodness in the different possible states of affairs that may result from charity donations, aggregate individual goodness according to some principles (captured by an Arrowian social welfare function in social choice theory), and finally rank the states of affairs according to their resulting overall goodness. This version of EA is thus essentially formal because it is silent regarding i) the content of individual goodness and ii) which social welfare function should be used. However, we may plausibly think of two additional principles that make substantive claims regarding these two features:

Formal EA P3: Individual goodness is cardinally measurable and comparable.

Formal EA P4: Numbers count: for any state of affairs with n persons whose individual goodness is increased by u by charity giving, there is in principle a better state of affairs with m > n persons whose individual goodness is increased by v < u by charity giving.

I will not comment on P3 as it is basically required to conduct any sensible ethical discussion. P4 is essential and I will return to it below. First, though, compare formal EA with substantive EA. By substantive EA, I mean any combination of P1-P4 that adds at least one substantive assumption regarding a) the nature of individual goodness and/or b) the constraints the social welfare function must satisfy. Clearly, substantive EA is underdetermined by formal EA: there are many ways to pass from the latter to the former. For instance, one possibility is to use standard cost-benefit analysis to define and measure individual goodness. A utilitarian version of substantive EA, which more or less captures Singer's claims, is obtained by assuming that the social welfare function must satisfy a strong independence principle such that overall goodness is additively separable. The possibilities are indeed almost infinite. This is the main virtue of formal EA as a theoretical and practical tool: it forces us to reflect on and to make explicit the principles that sustain our ethical judgments, acknowledging the fact that such judgments are required by the HFER. Note moreover that, in spite of its name, on this reading EA need not be exclusively concerned with efficiency: fairness may also be taken into account by adding the appropriate principles when passing from formal to substantive EA. What remains true is that a proponent of EA will always claim that one should give to the charity that leads to the best state of affairs in terms of the relevant ordering. There is thus still a notion of "efficiency", but one more loosely defined.
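
As an illustration of the roadmap, here is a short Python sketch (the charities, the goodness numbers, and the two social welfare functions are all invented for the example) in which the ranking procedure of formal EA is parameterized by the substantive choice of a social welfare function:

```python
# Illustrative sketch of the formal EA roadmap: states of affairs are
# vectors of individual goodness (P3: cardinal and comparable), and the
# ranking is parameterized by a social welfare function. The data and
# both SWFs are invented for the example.
import math

# Hypothetical states of affairs resulting from giving to two charities.
states = {
    "charity_1": [12.0, 1.0, 1.0],
    "charity_2": [4.0, 4.0, 4.0],
}

def utilitarian(goodness):
    # Additively separable SWF: one substantive (Singer-style) choice.
    return sum(goodness)

def prioritarian(goodness):
    # A concave transform gives priority to the worse off: another
    # substantive choice, equally compatible with formal EA.
    return sum(math.sqrt(g) for g in goodness)

for swf in (utilitarian, prioritarian):
    ranking = sorted(states, key=lambda s: swf(states[s]), reverse=True)
    print(swf.__name__, "ranking:", ranking)
```

With these (invented) numbers, the utilitarian and prioritarian functions rank the two charities in opposite ways, which illustrates concretely how substantive EA is underdetermined by formal EA.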

My discussion parallels an important debate in moral philosophy between formal aggregation and substantive aggregation, which has been thoroughly discussed in a recent book by Iwao Hirose. Hirose provides a convincing defense of formal aggregation as a general framework in moral philosophy. It is also similar to the distinction made by Marc Fleurbaey between formal welfarism and substantive welfarism. A key feature of formal aggregation is the substantive assumption that numbers count (principle P4 above). Consider the following example due to Thomas Scanlon and extensively discussed by Hirose:

“Suppose that Jones has suffered an accident in the transmitter room of a television station. Electrical equipment has fallen on his arm, and we cannot rescue him without turning off the transmitter for fifteen minutes. A World Cup match is in progress, watched by many people, and it will not be over for an hour. Jones’s injury will not get any worse if we wait, but his hand has been mashed and he is receiving extremely painful electrical shocks. Should we rescue him now or wait until the match is over?”

According to formal aggregation, there exists some number n* of persons watching the match such that for any n > n* it is better to wait until the end of the match to rescue Jones. Scanlon and many others have argued against this conclusion and claimed that we cannot aggregate individual goodness this way. Hirose thoroughly discusses the various objections against formal aggregation but concludes in the end that none of them is fully convincing. The point here is that if someone wants to argue against EA as I have characterized it, then one must make a more general point against formal aggregation. This is a possibility of course, but one that has nothing to do with rejecting the role of reason and of "cold calculus" in the realm of ethics.
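
Under a simple additive reading, and with invented magnitudes (only the structure matters here, not the numbers), n* is just the point at which the aggregated losses of the viewers outweigh Jones's suffering:

```python
# Toy illustration of formal aggregation in the transmitter-room case.
# The magnitudes are invented; only the structure matters.
import math

harm_to_jones = 1_000.0   # disutility of an hour of painful shocks
loss_per_viewer = 0.01    # disutility of missing 15 minutes of the match

# On the additive view, waiting is better as soon as n * loss > harm.
n_star = math.ceil(harm_to_jones / loss_per_viewer)
print(f"waiting is better for any audience larger than n* = {n_star:,}")
```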

Greed, Cooperation and the “Fundamental Theorem of Social Sciences”

An interesting debate has taken place on the website Evonomics over the issue of whether or not economists think greed is socially good. The debate features the well-known economists Branko Milanovic, Herb Gintis and Robert Frank as well as the biologist and anthropologist Peter Turchin. Milanovic claims that there is no personal ethics and that morality is embodied in impersonal rules and laws, built in such a way that it is socially optimal for each person to follow his personal interest as long as he plays by the rules. Actually, Milanovic goes further than that: it is perfectly right to try to break the rules, since if I succeed the responsibility falls on those who have failed to catch me. Such a point of view fits perfectly with the "get the rules right" ideology that dominates microeconomic engineering (market design, mechanism design), where people's preferences are taken as given. The point is to set the right rules and incentive mechanisms so as to reach the (second-)best equilibrium.

Not all economists agree with this and Gintis’ and Frank’s answers both qualify some of Milanovic’s claims. Turchin’s answer is also very interesting. At one point, he refers to what he calls the “fundamental theorem of social sciences” (FTSS for short):

In economics and evolution we have a well-defined concept of public goods. Production of public goods is individually costly, while benefits are shared among all. I think you see where I am going. As we all know, selfish agents will never cooperate to produce costly public goods. I think this mathematical result should have the status of “the fundamental theorem of social sciences.”

The FTSS is indeed quite important, but formulated this way it is not quite right. Economists (and biologists) have long known that the so-called "folk theorems" of game theory establish that cooperation is possible in virtually any kind of strategic interaction. To be precise, the folk theorems state that as long as an interaction is repeated indefinitely with a sufficiently high probability, and/or the players have a not-too-strong preference for the present, any outcome guaranteeing the players at least their minimax payoff can be sustained as an equilibrium of the corresponding repeated game. This works with all kinds of games, including the prisoner's dilemma and the related public good game: actually, selfish people will cooperate and produce the public good if they realize that it is in their long-term interest to do so (see also Mancur Olson's "stationary bandits" story for a similar point). So the true FTSS is rather that "anything goes": since there are infinitely many equilibria in infinitely repeated games, which one is selected depends on a long list of more or less contingent features (chance, learning/evolutionary dynamics, focal points…). So, contrary to what Turchin claims, the right institutions can in principle incentivize selfish people to cooperate, and this prospect may even incentivize selfish people to set up these institutions as a first step!
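
To see the folk-theorem logic in the simplest case, here is a short Python sketch of grim-trigger cooperation in an infinitely repeated prisoner's dilemma (the payoff values are the textbook ones; nothing here is specific to Turchin's discussion):

```python
# Grim trigger in an infinitely repeated prisoner's dilemma: cooperate
# until the other player defects, then defect forever. Payoffs are the
# standard textbook values with T > R > P > S.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

def cooperation_sustainable(delta):
    """Mutual cooperation is an equilibrium iff the discounted value of
    cooperating forever, R / (1 - delta), is at least the value of a
    one-shot deviation followed by punishment, T + delta * P / (1 - delta)."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

critical_delta = (T - R) / (T - P)  # solves the inequality with equality
print(f"selfish players can sustain cooperation whenever delta >= {critical_delta}")
for delta in (0.3, 0.5, 0.8):
    print(delta, cooperation_sustainable(delta))
```

With these payoffs, mutual cooperation among purely selfish players is an equilibrium whenever the discount factor is at least 0.5, i.e. whenever the players are sufficiently patient, which is exactly the condition the folk theorems trade on.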

Does this mean that morality is unnecessary for economic efficiency or that there is no "personal ethics"? Not quite. First, Turchin's version of the FTSS becomes more plausible once we recognize that information is imperfect and incomplete. The folk theorems depend on the ability of players to monitor others' actions and to punish them in case they deviate from the equilibrium. Actually, in equilibrium we should not observe deviations (except for "trembling-hand" mistakes), but this is only because each player expects to be punished if he defects. It is relatively easy to figure out that imperfect monitoring makes the conditions for universal cooperation to be an equilibrium far more stringent. Of course, how to deal with imperfect and incomplete information is precisely the point of microeconomic engineering (see the "revelation principle"): the right institutions are those that incentivize people to reveal their true preferences. But such mechanisms can be difficult to implement in practice, or even to design. The point is that while revelation mechanisms are plausible at some limited scales (say, a corporation), they are far more costly to build and implement at the level of the whole society (if that means anything).

There are reasons here to think that social preferences and morality may play a role in fostering cooperation. But there are some confusions regarding the terminology. Social preferences do not imply that one is morally or ethically motivated, and the converse probably does not hold either. Altruism is a good illustration: animals and insects behave altruistically for reasons that have nothing to do with morals. Basically, they are genetically programmed to cooperate at a cost to themselves because (this is an ultimate cause) it maximizes their inclusive fitness. As a result, these organisms possess phenotypic characteristics (these are proximate causes) that make them behave altruistically. Of course, animals and insects are not ethical beings in the standard sense. Systems of morals are quite different. It may be true that morality translates to the level of choices and preferences: I may give to a charity not because of an instinctive impulse but because I have a firm moral belief that this is "good" or "right". For the behaviorism-minded economist, this does not make any difference: whatever the proximate cause that leads you to give some money, the result regarding the allocation of resources is the same. But it can make a difference in terms of institutional design, because "moral preferences" (if we can call them that) may be incommensurable with standard preferences (leading to cases of incompleteness that are difficult to deal with) or lead to so-called crowding-out effects when they interact with pecuniary incentives. In any case, moral preferences may make cooperative outcomes easier to achieve, as they lower monitoring costs.

However, morality is not only embedded at the level of preferences but also at the level of the rules themselves, as pointed out by Milanovic: the choice of rules may itself be morally motivated, as witnessed by the debates over "repugnant markets" (think of markets for organs). In the vocabulary of social choice theory, morality not only enters into people's preferences but may also affect the choice of the "collective choice rule" (or social welfare function) that is used to aggregate people's preferences. Thus, morality intervenes at these two levels. This point has some affinity with John Rawls' distinction between two concepts of rules: the summary conception and the practice conception. On the former, a rule corresponds to a behavioral pattern, and what justifies the rule under some moral system (say, utilitarianism) is the fact that the corresponding behavior is permissible or mandatory (in the case of utilitarianism, that it maximizes the sum of utilities in the population). On the latter, the behavior is justified by the very practice it is constitutive of. Take the institution of promise-keeping: on the practice conception, what justifies the fact that I keep my promises is not that it is "good" or "right" but rather that keeping one's promises is constitutive of the institution of promise-keeping. What has to be morally evaluated is not the specific behavior but the whole practice.

So is greed really good? The question is of course already morally loaded. The answer depends on what we call "good" and on our conception of rules. If by "good" we mean some consequentialist criterion and we hold the summary conception of rules, the answer will depend on the specifics, as indicated in my discussion of the FTSS. But on the practice conception, the answer is clearly "yes, as far as it is constitutive of the practice", provided the practice itself is considered good. On this view, while we may agree with Milanovic that being greedy is good (or at least permissible) as long as it stays within the rules (what Gintis calls "Greed 1" in his answer), it is hard to see how being greedy by transgressing the rules (Gintis' "Greed 2") can be good at all… unless we stipulate that the very rules are actually bad! The latter is a possibility, of course. In any case, an economic system cannot totally "outsource" morality, as what you deem to be good and thus permissible through the choice of rules is already a moral issue.