Effective Altruism and the Unavoidability of Ethical Trade-Offs

The so-called “effective altruism” (EA) movement has recently received significant attention in the press. Many articles have been critical of EA for various reasons, most of which overlap around the general theme that too much quantification and calculation risks losing the “big picture” of the issues related to charity and poverty. The latest example is an article published on the website “The Conversation”. The author’s main argument is that EA proponents, in taking a reason-based approach to charity and poverty, ignore the fact that ethics and morality cannot be reduced to some “cold” utilitarian calculus:

“[EA proponents] have PhDs in the disciplines requiring the highest level of analytical intelligence, but are they clever enough to understand the limits of reason? Do they have an inner alarm bell that goes off when the chain of logical deductions produces a result that in most people causes revulsion?”

According to the author, a society full of “effectively altruist” people would be one where every ethical issue is dealt with through cold-minded computation, effectively eliminating any role for emotions and gut instincts.

“To be an effective altruist one must override the urge to give when one’s heart is opened up and instead engage in a process of data gathering and computation to decide whether the planned donation could be better spent elsewhere.

If effective altruists adopt this kind of utilitarian calculus as the basis for daily life (for it would be irrational to confine it to acts of charity) then good luck to them. The problem is that they believe everyone should behave in the same hyper-rational way; in other words, they believe society should be remade in their own image.”

The author then draws a link to free-market economists like Gary Becker, suspecting “that, for most people, following the rules of effective altruism would be like being married to Gary Becker, a highly efficient arrangement between contracting parties, but one deprived of all human warmth and compassion.”

There are surely many aspects of EA that can be argued against, but I think this kind of critique is pretty weak. Moreover, it is grounded in a deep misunderstanding of the contribution that the social sciences (and especially economics) can make to dealing with ethical issues. As a starting point, I think any discussion of the virtues and dangers of EA should start from a basic premise that I propose to call the “Hard Fact of Ethical Reasoning”:

Hard Fact of Ethical Reasoning (HFER) – Any ethical issue involves a decision problem with trade-offs to be made.

Giving to a charity to alleviate the suffering caused by poverty is a decision problem with a strong ethical component. What the HFER claims is that when considering how to alleviate that suffering, you have to make a choice regarding how to use scarce resources in such a way that your objective is reached. This is a classic means-ends relationship, the study of which has been at the core of modern economics for the last hundred years. If one accepts the HFER (and it is hard to see how one could deny it), then I would argue that EA has the general merit of leading us to reflect on, and to make explicit, the values and the axiological/deontic criteria that underlie our ethical judgments regarding what is considered good or right. As I interpret it, a key message of EA is that these ethical judgments cannot and should not depend exclusively on our gut feelings and emotions but should also be subject to rational scrutiny. Now, some of us may indeed be uncomfortable with the substantive claims made by EA proponents, such as Peter Singer’s remark that “if you do the sums” then “you can provide one guide dog for one blind American or you could cure between 400 and 2,000 people of blindness [in developing countries]”. Here, I think the point is to distinguish between two kinds of EA that I will call formal EA and substantive EA respectively.

Formal EA provides a general framework to think of ethical issues related to charity and poverty. It can be characterized by the following two principles:

Formal EA P1: Giving to different charities leads to different states of affairs that can be compared and ranked according to their goodness following some axiological principles, possibly given deontic constraints.

Formal EA P2: The overall goodness of states of affairs is an (increasing) function of their goodness for the individuals concerned.

Principles P1 and P2 are very general. P2 corresponds to what is sometimes called the Pareto principle and seems, in this context, hardly disputable. It basically states that if you have the choice between giving to two charities, and everyone is equally well-off in the two resulting states of affairs except for at least one person who is better off in one of them, then that state of affairs is the better one. P1 states that it is possible to compare and rank states of affairs, which of course still allows for indifference. Note that we allow the possibility for the ranking to be constrained by any deontological principle that is considered relevant. Under these two principles, formal EA essentially consists in a methodological roadmap: compute individual goodness in the different possible states of affairs that may result from charity donations, aggregate individual goodness according to some principles (captured by an Arrovian social welfare function in social choice theory), and finally rank the states of affairs according to their resulting overall goodness. This version of EA is thus essentially formal because it is silent regarding (i) the content of individual goodness and (ii) which social welfare function should be used. However, we may plausibly think of two additional principles that make substantive claims regarding these two features:
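The three-step roadmap just described can be sketched in a few lines of code. The states of affairs, the goodness numbers, and the simple additive social welfare function below are all hypothetical placeholders chosen for illustration, not a claim about how any actual EA evaluation is carried out:

```python
# Sketch of the formal-EA roadmap: compute individual goodness per state of
# affairs, aggregate it with a social welfare function, then rank the states.
# All numbers and the additive SWF are illustrative assumptions.

# Individual goodness for persons 1..3 in each state resulting from a donation
states = {
    "charity_A": [3.0, 1.0, 1.0],
    "charity_B": [2.0, 2.0, 2.0],
    "no_gift":   [1.0, 1.0, 1.0],
}

def social_welfare(goodness):
    """An additively separable (utilitarian) SWF -- one choice among many."""
    return sum(goodness)

# Rank states of affairs by overall goodness, best first
ranking = sorted(states, key=lambda s: social_welfare(states[s]), reverse=True)
print(ranking)  # ['charity_B', 'charity_A', 'no_gift']
```

The point of the sketch is that the roadmap itself is neutral: swapping in a different `social_welfare` function can change the ranking without touching the rest of the procedure.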

Formal EA P3: Individual goodness is cardinally measurable and comparable.

Formal EA P4: Numbers count: for any state of affairs with n persons whose individual goodness is increased by u through charity giving, there is in principle a better state of affairs with m > n persons whose individual goodness is increased by v < u through charity giving.

I will not comment on P3 as it is basically required to conduct any sensible ethical discussion. P4 is essential and I will return to it below. Before that, compare formal EA with substantive EA. By substantive EA, I mean any combination of P1-P4 that adds at least one substantive assumption regarding (a) the nature of individual goodness and/or (b) the constraints the social welfare function must satisfy. Clearly, substantive EA is underdetermined by formal EA: there are many ways to pass from the latter to the former. For instance, one possibility is to use standard cost-benefit analysis to define and measure individual goodness. A utilitarian version of substantive EA, which more or less captures Singer’s claims, is obtained by assuming that the social welfare function must satisfy a strong independence principle such that overall goodness is additively separable. The possibilities are indeed almost infinite. This is the main virtue of formal EA as a theoretical and practical tool: it forces us to reflect on and make explicit the principles that sustain our ethical judgments, acknowledging the fact that such judgments are required because of the HFER. Note, moreover, that in spite of its name, on this reading EA need not be exclusively concerned with efficiency: fairness may also be taken into account by adding the appropriate principles when passing from formal to substantive EA. What remains true is that a proponent of EA will always claim that one should give to the charity that leads to the best state of affairs in terms of the relevant ordering. There is thus still a notion of “efficiency”, but one more loosely defined.
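The underdetermination point can be made concrete with toy numbers: one and the same profile of individual goodness is ranked differently by a utilitarian (additively separable) social welfare function and by a maximin (fairness-oriented) one. The states and magnitudes below are invented for illustration:

```python
# Two substantive ways of filling in formal EA: the same states of affairs
# get different rankings depending on the social welfare function chosen.
# Goodness numbers are purely illustrative.

states = {
    "charity_A": [10.0, 1.0],  # large total goodness, very unequal
    "charity_B": [4.0, 4.0],   # smaller total, perfectly equal
}

utilitarian = lambda g: sum(g)   # additively separable SWF
maximin     = lambda g: min(g)   # fairness-sensitive, Rawlsian-flavored SWF

best_util = max(states, key=lambda s: utilitarian(states[s]))
best_maxi = max(states, key=lambda s: maximin(states[s]))
print(best_util, best_maxi)  # charity_A under utilitarianism, charity_B under maximin
```

Both rankings are instances of formal EA; only the added substantive assumption about the social welfare function differs.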

My discussion parallels an important debate in moral philosophy between formal aggregation and substantive aggregation, which has been thoroughly discussed in a recent book by Iwao Hirose. Hirose provides a convincing defense of formal aggregation as a general framework for moral philosophy. It is also similar to the distinction made by Marc Fleurbaey between formal welfarism and substantive welfarism. A key feature of formal aggregation is the substantive assumption that numbers count (principle P4 above). Consider the following example due to Thomas Scanlon and extensively discussed by Hirose:

“Suppose that Jones has suffered an accident in the transmitter room of a television station. Electrical equipment has fallen on his arm, and we cannot rescue him without turning off the transmitter for fifteen minutes. A World Cup match is in progress, watched by many people, and it will not be over for an hour. Jones’s injury will not get any worse if we wait, but his hand has been mashed and he is receiving extremely painful electrical shocks. Should we rescue him now or wait until the match is over?”

According to formal aggregation, there exists some number n* of persons watching the match such that for any n > n* it is better to wait until the end of the match to rescue Jones. Scanlon and many others have argued against this conclusion and claimed that we cannot aggregate individual goodness this way. Hirose thoroughly discusses the various objections against formal aggregation but in the end concludes that none of them is fully convincing. The point here is that if someone wants to argue against EA as I have characterized it, then one must make a more general point against formal aggregation. That is a possibility of course, but it has nothing to do with rejecting the role of reason and of “cold calculus” in the realm of ethics.
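The n* claim in the transmitter-room case can be illustrated with invented magnitudes: if waiting imposes a large harm u on Jones while rescuing him now imposes a tiny disutility v on each of n viewers, additive aggregation flips in favor of waiting once n·v exceeds u. Only the aggregation logic matters here; the numbers are made up:

```python
# Toy illustration of the n* threshold in Scanlon's transmitter-room case.
# Harm magnitudes are invented integers; only the additive logic matters.

u = 1000  # units of disutility Jones suffers if we wait (assumed)
v = 1     # units of disutility each viewer suffers from interruption (assumed)

def better_to_wait(n):
    """Additive aggregation: waiting is better iff total viewer harm exceeds Jones's."""
    return n * v > u

n_star = u // v + 1  # smallest audience size at which aggregation says "wait"
print(n_star, better_to_wait(n_star), better_to_wait(n_star - 1))
# 1001 True False
```

The objection to formal aggregation is precisely that no such n* should exist, however large; the code only makes explicit what accepting P4 commits one to.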

Greed, Cooperation and the “Fundamental Theorem of Social Sciences”

An interesting debate has taken place on the website Evonomics over the issue of whether or not economists think greed is socially good. The debate features the well-known economists Branko Milanovic, Herb Gintis and Robert Frank as well as the biologist and anthropologist Peter Turchin. Milanovic claims that there is no personal ethics: morality is embodied in impersonal rules and laws that are built such that it is socially optimal for each person to follow his personal interest as long as he plays by the rules. Actually, Milanovic goes further than that: it is perfectly acceptable to try to break the rules, since if I succeed the responsibility falls on those who have failed to catch me. Such a point of view fits perfectly with the “get the rules right” ideology that dominates microeconomic engineering (market design, mechanism design), where people’s preferences are taken as given. The point is to set the right rules and incentive mechanisms so as to reach the (second-) best equilibrium.

Not all economists agree with this, and Gintis’ and Frank’s answers both qualify some of Milanovic’s claims. Turchin’s answer is also very interesting. At one point, he refers to what he calls the “fundamental theorem of social sciences” (FTSS for short):

In economics and evolution we have a well-defined concept of public goods. Production of public goods is individually costly, while benefits are shared among all. I think you see where I am going. As we all know, selfish agents will never cooperate to produce costly public goods. I think this mathematical result should have the status of “the fundamental theorem of social sciences.”

The FTSS is indeed quite important, but formulated this way it is not quite right. Economists (and biologists) have long known that the so-called “folk theorems” of game theory establish that cooperation is possible in virtually any kind of strategic interaction. To be precise, the folk theorems state that as long as an interaction is repeated indefinitely with a sufficiently high probability, and/or the players do not have too strong a preference for the present, any outcome guaranteeing the players at least their minimax payoff is an equilibrium of the corresponding repeated game. This works for all kinds of games, including the prisoner’s dilemma and the related public good game: actually, selfish people will cooperate and produce the public good if they realize that it is in their long-term interest to do so (see also Mancur Olson’s “stationary bandits” story for a similar point). So, the true FTSS is rather that “anything goes”: since there is an infinity of equilibria in infinitely repeated games, which one is selected depends on a long list of more or less contingent features (chance, learning/evolutionary dynamics, focal points…). So, contrary to what Turchin claims, the right institutions can in principle incentivize selfish people to cooperate, and this prospect may even incentivize selfish people to set up these institutions as a first step!

Does this mean that morality is unnecessary for economic efficiency, or that there is no “personal ethics”? Not quite. First, Turchin’s version of the FTSS becomes more plausible once we recognize that information is imperfect and incomplete. The folk theorems depend on the ability of players to monitor others’ actions and to punish those who deviate from the equilibrium. Actually, at the equilibrium we should not observe deviations (except for “trembling hand” mistakes), but this is only because each player expects to be punished if he defects. It is relatively easy to see that imperfect monitoring makes the conditions under which universal cooperation is an equilibrium far more stringent. Of course, how to deal with imperfect and incomplete information is precisely the point of microeconomic engineering (see the “revelation principle”): the right institutions are those that incentivize people to reveal their true preferences. But such mechanisms can be difficult to implement in practice, or even to design. The point is that while revelation mechanisms are plausible at some limited scale (say, a corporation), they are far more costly to build and implement at the level of the whole society (if that means anything).

There are reasons here to think that social preferences and morality may play a role in fostering cooperation. But there are some confusions regarding the terminology. Social preferences do not imply that one is morally or ethically motivated, and the reverse implication probably does not hold either. Altruism is a good illustration: animals and insects behave altruistically for reasons that have nothing to do with morality. Basically, they are genetically programmed to cooperate at a cost to themselves because (this is the ultimate cause) it maximizes their inclusive fitness. As a result, these organisms possess phenotypic characteristics (these are the proximate causes) that make them behave altruistically. Of course, animals and insects are not ethical beings in the standard sense. Systems of morals are quite different. It may be true that morality translates to the choice and preference levels: I may give to a charity not because of an instinctive impulse but because I have a firm moral belief that this is “good” or “right”. For the behaviorism-minded economist, this does not make any difference: whatever the proximate cause that leads you to give some money, the result regarding the allocation of resources is the same. But it can make a difference in terms of institutional design, because “moral preferences” (if we can call them that) may be incommensurable with standard preferences (leading to cases of incompleteness that are difficult to deal with) or give rise to so-called crowding-out effects when they interact with pecuniary incentives. In any case, moral preferences may make cooperative outcomes easier to achieve, as they lower monitoring costs.

However, morality is not only embedded at the level of preferences but also at the level of the rules themselves, as pointed out by Milanovic: the choice of rules may itself be morally motivated, as witnessed by the debates over “repugnant markets” (think of markets for organs). In the vocabulary of social choice theory, morality not only enters into people’s preferences but may also affect the choice of the “collective choice rule” (or social welfare function) that is used to aggregate people’s preferences. Thus, morality intervenes at these two levels. This point has some affinity with John Rawls’ distinction between two concepts of rules: the summary conception and the practice conception. On the former, a rule corresponds to a behavioral pattern, and what justifies the rule under some moral system (say, utilitarianism) is the fact that the corresponding behavior is permissible or mandatory (in the case of utilitarianism, that it maximizes the sum of utilities in the population). On the latter, the behavior is justified by the very practice it is constitutive of. Take the institution of promise-keeping: on the practice conception, what justifies the fact that I keep my promises is not that it is “good” or “right” but rather that keeping one’s promises is constitutive of the institution of promise-keeping. What has to be morally evaluated is not the specific behavior but the whole practice.

So is greed really good? The question is of course already morally loaded. The answer depends on what we call “good” and on our conception of rules. If by “good” we mean some consequentialist criterion and if we hold the summary conception of rules, the answer will depend on the specifics, as indicated in my discussion of the FTSS. But on the practice conception, the answer is clearly “yes, insofar as it is constitutive of the practice” and the practice itself is considered good. On this view, while we may agree with Milanovic that being greedy is good (or at least permissible) as long as it stays within the rules (what Gintis calls “Greed 1” in his answer), it is hard to see how being greedy by transgressing the rules (Gintis’ “Greed 2”) can be good at all… unless we stipulate that the rules themselves are actually bad! The latter is a possibility of course. In any case, an economic system cannot totally “outsource” morality, as what you deem to be good and thus permissible through the choice of rules is already a moral issue.