How to Produce and Justify Knowledge in Ethics?

As I have been working on the relationships between economics and ethics for a couple of years now, I have had several opportunities to reflect on the way scholars produce knowledge on ethical issues. In a previous blog post, I contemplated the role played by moral intuitions in Derek Parfit’s reasoning on population ethics. As I am now reading Parfit’s huge masterpiece On What Matters and Lazari-Radek and Singer’s book The Point of View of the Universe, this issue has once again come to my attention, as both Parfit and Lazari-Radek and Singer tackle it explicitly.

My current readings have led me to somewhat revise my view on this issue. Comparing the way economists (especially social choice theorists) and philosophers deal with ethical problems, I have been used to drawing a distinction between what can be called an ‘axiomatic approach’ and a ‘constructivist approach’ to ethical problems. The former tackles ethical issues first by identifying basic principles (‘axioms’), which are thought of as requirements that any moral doctrine or proposition must satisfy, and then by determining (most often through logical reasoning) implications regarding what is morally necessary, possible, permissible, forbidden and so on. The latter deals with ethical issues through thought experiments, which most often consist in more or less artificial decision problems. There is an abundance of examples in philosophy: from Rawls’s ‘veil of ignorance’ to Parfit’s various spectrum and teleportation thought experiments and variants of the so-called ‘trolley problem’, philosophers routinely construct decision problems to determine what is intuitively regarded as morally permitted, mandatory or forbidden. A good example of these two approaches is provided by John Harsanyi’s two utilitarian theorems: his ‘impartial observer’ theorem and his ‘aggregation theorem’. The former corresponds to a constructivist approach and builds on a thought experiment using the veil of ignorance device. Harsanyi asked which society a rational agent placed behind a thin veil of ignorance would choose to live in. Behind such a veil, the agent would be ignorant of both his social position and his personal identity, including his personal preferences. Harsanyi famously argued that behind this veil, a rational agent should ascribe the same probability to being any member of the population and should therefore choose the society that maximizes the expected utility of the ‘impartial observer’, i.e. the average utility. Harsanyi’s aggregation theorem also provides a defense of utilitarianism, but in quite a different way.
It shows that if the members of the population have preferences over prospects (i.e. societies) that satisfy the axioms of expected utility theory, if a ‘benevolent dictator’ also has preferences satisfying these axioms, and if the relationship between the two sets of preferences satisfies a Pareto condition, then the benevolent dictator’s preferences can be represented by an unweighted additive social welfare function.
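To make the impartial observer’s reasoning concrete, here is a toy numerical sketch. The societies and utility numbers are entirely hypothetical, chosen only for illustration: behind the veil, assigning equal probability to occupying each position reduces the choice between societies to maximizing average utility.

```python
# Toy illustration of Harsanyi's impartial observer argument.
# The societies and utility numbers below are hypothetical.

societies = {
    "A": [10, 10, 10, 10],   # egalitarian society, average utility 10.0
    "B": [30, 12, 5, 1],     # unequal society, average utility 12.0
}

def impartial_observer_value(utilities):
    # Equal probability 1/n of occupying each position,
    # so the expected utility is simply the mean.
    return sum(utilities) / len(utilities)

choice = max(societies, key=lambda s: impartial_observer_value(societies[s]))
print(choice)  # "B": the impartial observer picks the higher average, despite the inequality
```

Note that this is exactly where the utilitarian (rather than, say, maximin) flavor of Harsanyi’s conclusion comes from: the expected-utility axioms force the observer to weigh positions by their probabilities, and equal probabilities yield the average.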

Moral philosophers generally refer to another distinction, one that I originally thought was essentially equivalent to the axiomatic/constructivist one. Considering how moral claims can be justified, moral philosophers divide into ‘foundationalists’ and ‘intuitionists’. Intuitionism is grounded in the method that Rawls labeled ‘reflective equilibrium’. Basically, it consists in taking our moral intuitions both as the starting point of moral reasoning and as the ultimate datum against which moral claims should be evaluated. Starting from such intuitions, moral reasoning will lead to claims that may contradict our initial intuitions. Intuitions and moral reasoning are then both iteratively revised until they ultimately match. Foundationalism proceeds in a quite different way. Here, moral claims are defended and accepted on the basis of basic, self-evident principles from which moral implications are deduced. Parfit’s discussion of issues related to personal identity and Larry Temkin’s critique of the transitivity principle in moral reasoning are instances of accounts that proceed along intuitionist lines. By contrast, Sidgwick’s defense of utilitarianism was rather foundationalist in essence, as it depended on a set of ‘axioms’ (justice, rational benevolence, prudence) from which utilitarian conclusions were derived.

There is an apparent affinity between the axiomatic approach and foundationalism on the one hand, and between the constructivist approach and intuitionism on the other hand. Until recently, I considered that the former pair was essentially characteristic of the way normative economists and social choice theorists tackle ethical issues, while the latter was rather consistent with the way philosophers proceed. However, I now realize that if this affinity is indeed real, it cannot be due to the mere fact that the axiomatic/constructivist and foundationalist/intuitionist distinctions are isomorphic. Indeed, it now seems to me that they do not concern the same aspect of moral reasoning: the former distinction concerns how ethical knowledge is produced, while the latter concerns how moral claims are justified. While production and justification are somehow related, they are still quite different things. Therefore, there is no a priori reason to reject the possibility of combining foundationalism with constructivism or (perhaps less obviously) intuitionism with the axiomatic approach. We would then have the following four possibilities:

                 | Foundationalism                 | Intuitionism
Axiomatic        | Axiomatic Foundationalism       | Axiomatic Intuitionism
Constructivist   | Constructivist Foundationalism  | Constructivist Intuitionism

I think that ‘Axiomatic Foundationalism’ and ‘Constructivist Intuitionism’ are unproblematic categories. Examples of the former are Harsanyi’s aggregation theorem, John Broome’s utilitarian account based on separability assumptions and, at least as initially understood, Arrow’s impossibility theorem. All build on an axiomatic approach to derive moral/ethical/social-choice results taking the form either of necessity claims (Harsanyi, Broome) or impossibility claims (Arrow). Moreover, these examples are interesting precisely because they lead to essentially counterintuitive results and have been argued by their proponents to require us to give up our original intuitions. Examples of ‘Constructivist Intuitionism’ are abundant in moral philosophy. As mentioned above, Temkin’s claims against transitivity and aggregation and Parfit’s reductionist account of personhood are great examples of a constructivist approach. They build on thought experiments about decision problems and essentially ask us to consider which solution is consistent with our intuitions. These are also instances of intuitionism because, though intuitions fuel moral reasoning from the start, the possibility of reconsidering them is left open (at least in principle).

Harsanyi’s impartial observer theorem is an instance of ‘Constructivist Foundationalism’. Harsanyi’s use of the veil of ignorance device makes it an instance of the constructivist approach. At the same time, Harsanyi also assumes that choosing in accordance with the criteria of expected utility theory should be taken as a foundational assumption of moral reasoning. It is the combination of this foundational assumption with the construction of a highly artificial decision problem that leads to the utilitarian conclusion. Finally, we may wonder whether there really are cases of ‘Axiomatic Intuitionism’. I would suggest that Sen’s Paretian liberal paradox may be interpreted this way. Admittedly, the Paretian liberal paradox could also be seen as a case of Axiomatic Foundationalism, as Sen’s initial intention was to lead economists to reconsider their intuitions regarding the consistency of freedom and efficiency. However, the discussion that has followed Sen’s result, rather than endorsing the claim that freedom and efficiency are inconsistent, has focused on redefining the way freedom was axiomatically defined by Sen so that the initial intuition was preserved. It remains true that the contrast between Axiomatic Foundationalism and Axiomatic Intuitionism is not that sharp. This probably reflects the fact that, as more and more moral philosophers recognize, the distinction between intuitionism and foundationalism has been historically exaggerated. However, I would suggest that the constructivist/axiomatic distinction is a more solid and transparent one.


Greed, Cooperation and the “Fundamental Theorem of Social Sciences”

An interesting debate has taken place on the website Evonomics over the issue of whether or not economists think greed is socially good. The debate features the well-known economists Branko Milanovic, Herb Gintis and Robert Frank as well as the biologist and anthropologist Peter Turchin. Milanovic claims that there is no personal ethics and that morality is embodied in impersonal rules and laws, which are built such that it is socially optimal for each person to follow his personal interest as long as he plays by the rules. Actually, Milanovic goes further than that: it is perfectly right to try to break the rules, since if I succeed the responsibility falls on those who have failed to catch me. Such a point of view fits perfectly with the “get the rules right” ideology that dominates microeconomic engineering (market design, mechanism design), where people’s preferences are taken as given. The point is to set the right rules and incentive mechanisms so as to reach the (second-)best equilibrium.

Not all economists agree with this and Gintis’ and Frank’s answers both qualify some of Milanovic’s claims. Turchin’s answer is also very interesting. At one point, he refers to what he calls the “fundamental theorem of social sciences” (FTSS for short):

In economics and evolution we have a well-defined concept of public goods. Production of public goods is individually costly, while benefits are shared among all. I think you see where I am going. As we all know, selfish agents will never cooperate to produce costly public goods. I think this mathematical result should have the status of “the fundamental theorem of social sciences.”

The FTSS is indeed quite important, but formulated this way it is not quite right. Economists (and biologists) have long known that the so-called “folk theorems” of game theory establish that cooperation is possible in virtually any kind of strategic interaction. To be precise, the folk theorems state that as long as an interaction is repeated indefinitely with a sufficiently high probability and/or the players do not have too strong a preference for the present, then any outcome guaranteeing the players at least their minimax gain is an equilibrium of the corresponding repeated game. This works with all kinds of games, including the prisoner’s dilemma and the related public good game: actually, selfish people will cooperate and produce the public good if they realize that it is in their long-term interest to do so (see also Mancur Olson’s “stationary bandits” story for a similar point). So the true FTSS is rather that “anything goes”: since there is an infinity of equilibria in infinitely repeated games, which one is selected depends on a long list of more or less contingent features (chance, learning/evolutionary dynamics, focal points…). So, contrary to what Turchin claims, the right institutions can in principle incentivize selfish people to cooperate, and this prospect may even incentivize selfish people to set up these institutions as a first step!
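The folk-theorem logic can be sketched with the textbook grim-trigger argument in the repeated prisoner’s dilemma (the payoff numbers T > R > P > S below are my own illustrative choices): cooperation is sustainable whenever the discount factor δ satisfies δ ≥ (T−R)/(T−P).

```python
# Grim-trigger cooperation in the infinitely repeated prisoner's dilemma.
# Illustrative payoffs: T (temptation) > R (reward) > P (punishment) > S (sucker).
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def cooperation_sustainable(delta):
    """Compare cooperating forever with a one-time defection followed by
    mutual punishment forever (grim trigger), at discount factor delta."""
    v_cooperate = R / (1 - delta)                 # R every period
    v_deviate = T + delta * P / (1 - delta)       # T once, then P forever
    return v_cooperate >= v_deviate

threshold = (T - R) / (T - P)  # = 0.5 with these payoffs
print(threshold)
print(cooperation_sustainable(0.6))  # True: patient players sustain cooperation
print(cooperation_sustainable(0.4))  # False: impatient players defect
```

The discount factor can equivalently be read as the probability that the interaction continues for another round, which is why “infinitely repeated with sufficiently high probability” and “not too strong a preference for the present” play the same role in the statement above.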

Does this mean that morality is unnecessary for economic efficiency, or that there is no “personal ethics”? Not quite. First, Turchin’s version of the FTSS becomes more plausible once we recognize that information is imperfect and incomplete. The folk theorems depend on the ability of players to monitor others’ actions and to punish them in case they deviate from the equilibrium. Actually, at the equilibrium we should not observe deviations (except for “trembling hand” mistakes), but this is only because each player expects to be punished if he defects. It is relatively easy to see that imperfect monitoring makes the conditions for universal cooperation to be an equilibrium far more stringent. Of course, how to deal with imperfect and incomplete information is precisely the point of microeconomic engineering (see the “revelation principle”): the right institutions are those that incentivize people to reveal their true preferences. But such mechanisms can be difficult to implement in practice, or even to design. The point is that while revelation mechanisms are plausible at some limited scales (say, a corporation), they are far more costly to build and implement at the level of the whole society (if that means anything).

There are reasons here to think that social preferences and morality may play a role in fostering cooperation. But there are some confusions regarding the terminology. Social preferences do not imply that one is morally or ethically motivated, and the reverse implication does not quite hold either. Altruism is a good illustration: animals and insects behave altruistically for reasons that have nothing to do with morals. Basically, they are genetically programmed to cooperate at a cost to themselves because (this is an ultimate cause) it maximizes their inclusive fitness. As a result, these organisms possess phenotypic characteristics (these are proximate causes) that make them behave altruistically. Of course, animals and insects are not ethical beings in the standard sense. Systems of morals are quite different. It may be true that morality translates at the choice and preference levels: I may give to a charity not because of an instinctive impulse but because I have a firm moral belief that this is “good” or “right”. For the behaviorism-minded economist, this does not make any difference: whatever the proximate cause that leads you to give some money, the result regarding the allocation of resources is the same. But it can make a difference in terms of institutional design, because “moral preferences” (if we can call them that) may be incommensurable with standard preferences (leading to cases of incompleteness that are difficult to deal with) or may generate so-called crowding-out effects when they interact with pecuniary incentives. In any case, moral preferences may make cooperative outcomes easier to achieve, as they lower monitoring costs.

However, morality is not only embedded at the level of preferences but also at the level of the rules themselves, as pointed out by Milanovic: the choice of rules itself may be morally motivated, as witnessed by the debates over “repugnant markets” (think of markets for organs). In the vocabulary of social choice theory, morality not only enters into people’s preferences but may also affect the choice of the “collective choice rule” (or social welfare function) that is used to aggregate people’s preferences. Thus, morality intervenes at these two levels. This point has some affinity with John Rawls’ distinction between two concepts of rules: the summary conception and the practice conception. On the former, a rule corresponds to a behavioral pattern, and what justifies the rule under some moral system (say, utilitarianism) is the fact that the corresponding behavior is permissible or mandatory (in the case of utilitarianism, that it maximizes the sum of utilities in the population). On the latter, the behavior is justified by the very practice it is constitutive of. Take the institution of promise-keeping: on the practice conception, what justifies the fact that I keep my promises is not that doing so is “good” or “right” but rather that keeping one’s promises is constitutive of the institution of promise-keeping. What has to be morally evaluated is not the specific behavior but the whole practice.

So is greed really good? The question is of course already morally loaded. The answer depends on what we call “good” and on our conception of rules. If by “good” we mean some consequentialist criterion and if we hold the summary conception of rules, the answer will depend on the specifics, as indicated in my discussion of the FTSS. But on the practice conception, the answer is clearly “yes, insofar as it is constitutive of the practice” and the practice itself is considered good. On this view, while we may agree with Milanovic that being greedy is good (or at least permissible) as long as it stays within the rules (what Gintis calls “Greed 1” in his answer), it is hard to see how being greedy by transgressing the rules (Gintis’ “Greed 2”) can be good at all… unless we stipulate that the very rules are actually bad! The latter is a possibility, of course. In any case, an economic system cannot totally “outsource” morality, as what you deem to be good and thus permissible through the choice of rules is already a moral issue.