Should we Get Rid of the Preference Concept in Normative Economics?

Douglas Bernheim (with his co-author Antonio Rangel) is a major contributor to recent developments within the so-called ‘behavioral welfare economics’ (BWE) research program. BWE can be viewed as an attempt to lay the theoretical foundations for the normative turn that behavioral economics has taken since the late 1990s. Contrary to (too) many contributions within behavioral economics that take only a casual approach to normative issues, the BWE pioneered by Bernheim and others builds on and extends the formal apparatus of welfare economics to cases of various behavioral inconsistencies. In this light, Bernheim’s recent paper “The Good, the Bad, and the Ugly: A Unified Approach to Behavioral Welfare Economics”, published in the Journal of Benefit-Cost Analysis, in which he attempts to develop a unified approach to BWE, is worth reading for anyone interested in behavioral and normative economics.

Bernheim’s paper covers a lot of ground and makes many important points that I cannot discuss in detail here. I would however retain a very general idea that Bernheim never explicitly develops but which is nonetheless at the core of a key difficulty raised by the normative turn of behavioral economics, i.e. the problem related to the use of the preference concept in normative analysis. Welfare economics has indeed historically made use of an account of welfare in terms of preference satisfaction. The pervasiveness of this account is due to several factors, at least two of them being especially prominent. First, the preference-satisfaction view of welfare is tightly related to a form of consumer sovereignty principle according to which economists should defer to individuals’ judgments regarding what is good or best for them. This principle may itself be justified on different grounds, for instance epistemic (individuals have privileged access to the knowledge of what is good for them) or ethical (we must defer to individuals’ judgments because individuals are autonomous agents). Whatever the specific justification, the consumer sovereignty principle indicates that there are presumably strong reasons to view the satisfaction of preferences as constitutive of welfare (or at least as a good proxy for it).

Second, these reasons are strengthened by the formal isomorphism between the binary preference relation that is at the core of the notion of rationality in economics and the ‘better than’ relation that underlies any welfare judgment. As all microeconomic textbooks explain, rationality in economics corresponds to the existence of well-ordered preferences over some domain of objects. In particular, this presupposes that the binary preference relation is transitive. On the other hand, there are good reasons to take the ‘better than’ relation as being intrinsically transitive – in the same way as the ‘taller than’ relation, for instance. The fact that preferences and welfare judgments are both well-ordered obviously makes it easier (though does not require) to assume that one can be directly mapped onto the other. Problems however arise once it appears that individuals do not have well-ordered preferences. This is precisely the motivation behind the whole BWE approach.
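For reference, the textbook notion of ‘well-ordered’ preferences mentioned above can be stated compactly, with $\succsim$ the weak preference relation over a domain $X$:

$$\text{Completeness: } \forall x,y \in X,\ x \succsim y \ \text{or}\ y \succsim x; \qquad \text{Transitivity: } \forall x,y,z \in X,\ (x \succsim y \ \text{and}\ y \succsim z) \Rightarrow x \succsim z.$$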

As is well known, standard welfare economics essentially adopts a revealed-preference stance and therefore associates welfare with choice. According to Bernheim, this association rests on three premises:

Premise 1: Each of us is the best judge of our own well-being.

Premise 2: Our judgments are governed by coherent, stable preferences.

Premise 3: Our preferences guide our choices when we seek to benefit ourselves.

Bernheim’s unified approach accepts (with several qualifications) premises 1 and 3 but rejects premise 2. This is important because, as I indicated just above, premise 2 has been essential in economists’ endorsement of the preference-satisfaction view of welfare. In the normative writings of behavioral economists, premise 2 tends to be retained even in the face of experimental results, by postulating that individuals have ‘true’ (or ‘latent’ or ‘inner’) preferences that fail to be revealed by choices. Crucially, it is these preferences, rather than the ones actually revealed, that are assumed to be well-ordered. Alternatively, it is sometimes assumed that individuals are endowed with several mutually inconsistent preference orderings which, depending on the choice context, may determine one’s choice. Bernheim rejects, rightfully I believe, these views. He instead endorses the ‘constructed preference’ view according to which “I aggregate the many diverse aspects of my experience only when called upon to do so for a given purpose, such as making a choice or answering a question about my well-being”.

Bernheim’s endorsement of the ‘constructed preference’ view is important to understand why he continues to accept premises 1 and 3, even with qualifications. Regarding premise 1, he basically grounds it on the consumer sovereignty principle I discussed above. The qualification of premise 1 rests however on the fact that not all of individuals’ judgments provide a correct assessment of their welfare. Bernheim argues for a distinction between ‘direct’ judgments, referring to ‘ultimate objectives’, and ‘indirect’ judgments, referring to the determination of the means to achieve those ultimate objectives. Only the latter, not the former, are susceptible to mistakes that should be accounted for in welfare analysis. Another way to state Bernheim’s claim is that behavioral economics cannot provide evidence that individuals are eating too much or saving too little. But it can demonstrate that, given their aims, individuals fail to make the best choices. Premise 3 then indicates that individuals’ aims, under appropriate conditions, indeed guide their choices. This makes choices often – but not always – a good indication of individuals’ welfare judgments. On this basis, Bernheim claims that choice-based welfare analysis should be regarded as preferable to approaches relying on self-reported well-being, including the hedonic measures favored by some behavioral economists. The unified approach to BWE that Bernheim suggests proceeds in two steps: first, the behavioral welfare economist should determine the ‘welfare-relevant domain’ by identifying which choices merit deference (i.e. which choices are actually guided by welfare judgments); second, he should construct a welfare criterion based on the properties of choices within that domain. Since there is no presumption that individuals’ preferences and judgments are well-ordered (rejection of premise 2), the resulting welfare ordering will most of the time turn out to be incomplete.

As I said above, the main lesson I retain from Bernheim’s paper is that we should probably completely abandon the preference concept in normative economics. To understand how I arrive at this conclusion, two points should be noted. First, Bernheim’s unified BWE approach does not rely on any assumption about the existence of true or inner preferences. There are choices on which the welfare analysis can build (the welfare-relevant domain), but this is completely unrelated to the existence of true preferences. The determination of the welfare-relevant domain rather depends on the identification of mistakes or cases of lack of information, and even once this is done, there is no guarantee that the choices will be coherent. Second, rather than ‘preferences’, Bernheim uses the term ‘judgments’ to refer to what is reflected by choices. Bernheim shows that ultimately the individual’s welfare judgments are reflected by a welfare relation that is isomorphic to a choice-based binary relation with minimal formal properties (especially acyclicity). Interestingly, even if one rejects Bernheim’s choice-based approach, the alternative SRWB (self-reported well-being) approach can also completely dispense with the preference concept, since it does not need to stipulate that individuals have well-ordered inner preferences.
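To make the idea concrete, here is a minimal sketch – my own illustration, not Bernheim’s (or Bernheim and Rangel’s) formal construction; the choice data and the definition of the relation are hypothetical – of how a choice-based binary relation can be built from choices within a welfare-relevant domain and tested for acyclicity:

```python
from itertools import permutations

# Hypothetical choice data restricted to a 'welfare-relevant domain':
# each entry is (menu, set of alternatives chosen from that menu).
choice_data = [
    ({"a", "b"}, {"a"}),
    ({"b", "c"}, {"b"}),
    ({"a", "c"}, {"a"}),
    ({"a", "b", "c"}, {"a", "b"}),  # possibly incoherent choices are allowed
]

def unambiguously_preferred(x, y, data):
    """x P y: y is never chosen from any observed menu containing both x and y."""
    relevant = [(menu, chosen) for menu, chosen in data if x in menu and y in menu]
    return bool(relevant) and all(y not in chosen for _, chosen in relevant)

def is_acyclic(alternatives, data):
    """Check that the relation P has no directed cycle (depth-first search)."""
    edges = {x: [y for y in alternatives
                 if y != x and unambiguously_preferred(x, y, data)]
             for x in alternatives}
    visited, on_stack = set(), set()

    def has_cycle_from(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in edges[node]:
            if nxt in on_stack or (nxt not in visited and has_cycle_from(nxt)):
                return True
        on_stack.discard(node)
        return False

    return not any(has_cycle_from(x) for x in alternatives if x not in visited)

alternatives = ["a", "b", "c"]
for x, y in permutations(alternatives, 2):
    if unambiguously_preferred(x, y, choice_data):
        print(f"{x} P {y}")          # prints: a P c, b P c
print("acyclic:", is_acyclic(alternatives, choice_data))  # True
```

With these toy data, the relation ranks both a and b above c but leaves a and b unranked: the resulting welfare relation is acyclic yet incomplete, which is exactly the kind of outcome Bernheim’s rejection of premise 2 leads us to expect.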

As a conclusion, it may be worth remarking that Bernheim’s unified approach is somewhat ambiguous regarding the relationship between welfare judgments and choices. In a revealed-preference perspective, we may suppose that (possibly incoherent) choices reveal (possibly incoherent) welfare judgments. But at the same time, there is a sense in which Bernheim’s whole argument builds on the idea that choice-based welfare analysis is preferable because our choices are guided by our welfare judgments. In particular, the identification of mistakes is possible only because we can – at least in principle – identify either a gap between ultimate judgments of welfare and actual choices (what Bernheim calls ‘characterization mistakes’) or inconsistencies in judgments. In either case, the relevant criterion for the welfare analysis is not that individuals are making inconsistent choices per se but rather how judgments and choices are articulated. Added to the fact that the unified approach naturally calls for the use of non-choice data, this indicates a real departure from the revealed-preference view that is still pervasive in welfare economics. All of this has interesting implications for another topic related to the normative turn of behavioral economics, i.e. libertarian paternalism, but I leave that for another post.

Metaethics is a Mess. A Modest Attempt to Sort It Out

I have a forthcoming paper in the Erasmus Journal for Philosophy and Economics on Ken Binmore’s theory of the social contract. This article focuses especially on the naturalistic features of Binmore’s account and discusses its implications regarding the status of morality and moral propositions. Writing the paper, and especially revising it, has forced me to enter the complex literature on metaethics. Metaethics is dedicated to a vast array of issues related to the meaning and nature of moral propositions. Of course, given the topic of my paper, I have been especially interested in the debate over the so-called “evolutionary debunking of morality”, i.e. the claim that the naturalistic origins of morality (biological evolution through processes like natural selection) undermine the justification of our moral beliefs and judgments. On this issue, Richard Joyce’s book The Evolution of Morality figures as one of the most important recent contributions. Sharon Street’s widely cited paper “A Darwinian Dilemma for Realist Theories of Value” (Philosophical Studies, 2006) is also an important reference on this topic. The evolutionary debunking argument is also discussed at length by Derek Parfit in On What Matters (especially Part 6). In the course of my readings in metaethics, I have found that it is often difficult to compare and articulate the various arguments and claims coming from different authors because concepts are not always used consistently. This post is a modest attempt at conceptual clarification in the very specific case of the evolutionary debunking argument, and even more specifically on the basis of the three references just cited.

Part 6 of On What Matters (OWM) offers a long and sophisticated discussion of the opposition between naturalism and what Parfit calls “non-naturalist cognitivism” (Parfit does not have much to say about non-cognitivism). Interestingly, Parfit eschews labels like “realism” or “moral realism” which are sometimes used to refer to the position that endorses cognitivism (moral propositions express judgments that may be either true or false). In relationship with the opposition between “objectivism” and “subjectivism” in the theory of normative reasons that he tackles in Part 1 of OWM, Parfit defends what can be called an “objectivist non-naturalist cognitivist” account of morality and normativity. In a nutshell, reasons for action are provided by values which are independent of persons’ attitudes (beliefs, desires) – the objectivist part – and there exist authentic normative properties in the world that are independent of naturalistic properties – the non-naturalist part. Though Parfit treats these two features separately, he recognizes that they are highly related, and indeed the nature of this relationship is precisely what is at stake below. Parfit’s position is thus that there are things in the world that make some actions or events “bad”, “good”, “right”, “desirable” and so on. These things are properties that do not depend on our attitudes or even on our existence. Moreover, these properties cannot be reduced in any meaningful way to naturalistic properties. For instance, the property of “X being good for Y in circumstances Z” is neither identical with nor supervenient on a naturalistic property (e.g. X is pleasurable for Y in Z). In Parfit’s famous example of Future Tuesday Indifference, it is irrational plain and simple to be indifferent to suffering endured on any future Tuesday while caring about suffering endured on any other day of the week, because there is no reason – in an objectivist sense – for such an indifference. Moreover, this lack of reason is due to the normative properties in the world that make events or actions good or rational.

Of course, as a corollary, Parfit rejects both subjectivism about reasons and all forms of (cognitivist) naturalism. The former holds that reasons for action are provided by one’s desires. The latter can be divided into an analytical and a non-analytical version. Analytical naturalism holds not only that normative properties are identical with naturalistic properties, but also that all normative concepts can ultimately be defined in terms of naturalistic concepts. Non-analytical naturalism claims that normative concepts are different from naturalistic concepts, but accepts the naturalistic contention that normative properties can be reduced to (or supervene on) naturalistic properties. Joyce and Street are both explicitly naturalists. Their positions with respect to the evolutionary debunking argument are not the same, however. In his book, Joyce provides a forceful defense of moral skepticism, i.e. the view that the naturalistic origins of morality fail to justify our moral beliefs and judgments. On this view, knowing where our moral beliefs come from, we do not have any positive reason to consider them justified. Joyce’s claim is that a naturalist must be a moral skeptic. In other words, what he calls “moral naturalism” (which can be defined as the view that there are moral properties while acknowledging their naturalistic foundations) is an untenable doctrine. Street’s position, as I understand it, is quite different. In her paper, she presents the “value realist” with the following dilemma. Either the value realist contends that there is no relationship (causal or otherwise) between independent moral truths and evolutionary influences on our attitudes; in this case, it is highly unlikely, barring some extraordinary coincidence, that our beliefs about moral truths are correct. Or there is indeed a relationship between moral truths and evolutionary influences on our attitudes; in this case, however, the value realist must hold that natural selection has selected our attitudes for their ability to track the moral truth, a claim which cannot be supported on an evolutionary basis. Despite this “Darwinian dilemma”, Street’s conclusion is not moral skepticism. In a recent contribution to a book dedicated to discussing OWM’s sixth part, Street argues that we can be anti-realist about values while denying that skeptical conclusions are unavoidable.

I have had a hard time making sense of this whole discussion because Parfit, Joyce and Street do not always seem to speak of the same thing. Parfit cautiously avoids using the term “realism” while Street makes it her major focus. Joyce opposes moral realism to moral naturalism, but his use of the latter does not seem to be incompatible with some form of realism. Moreover, Joyce’s notion of naturalism is broad and not fully consistent with Parfit’s use of the term. Ultimately, I have arrived at the following understanding. Parfit’s opposition between objectivism and subjectivism corresponds to Street’s opposition between realism and anti-realism. It concerns the issue of whether normative reasons are attitude-independent or attitude-dependent. The most satisfactory understanding I have of Joyce’s distinction between moral realism and moral naturalism is that it does not exclude in principle (“analytically”) that there is an area of overlap between the two. Indeed, I interpret Joyce as saying that we can have three different metaethical stances:

  1. Moral realism with attitude-independent normative reasons: since this view is not naturalist, Joyce takes it to be irrelevant.
  2. Moral naturalism with attitude-dependent normative reasons: Joyce argues that this must lead to moral skepticism.
  3. Realism-compatible moral naturalism: normative reasons are attitude-dependent and naturalistic and normative properties are ultimately identical, but normative properties are still part of our world. Joyce argues that this view is untenable because the attitude-dependent normative reasons are actually not really normative (i.e. lack a form of practical authority).

I have no space to present and discuss in detail Joyce’s argument against this “hybrid” form of moral naturalism, but my point is that it seems to correspond to the form of anti-realist value naturalism that Street endorses. The table below summarizes the various metaethical positions with respect to the evolutionary debunking argument, using Parfit’s way of classifying views:

[Table omitted: summary of the metaethical positions with respect to the evolutionary debunking argument]

If one is worried about moral skepticism, then this table indicates that the alternatives are either to endorse a strong form of moral realism, grounded on the combination of non-naturalism and objectivism, or to defend a form of non-analytic naturalism. The combination of objectivism (thus claiming the existence of attitude-independent normative reasons) and non-analytic naturalism would answer Joyce’s skeptical critique. However, such a position seems hard to entertain, if not contradictory. The issue is then whether an anti-realist value naturalism is substantively (rather than conceptually) possible. But this is a topic for another post.

Personal Identity and the Rationality of Anticipating Others’ Experiences

Consider the two following cases:

Fission – Your right cerebral hemisphere and your left cerebral hemisphere are simultaneously and separately transplanted into the empty skulls of your two identical triplet siblings. The rest of your body is destroyed. Each hemisphere functions normally and both siblings are endowed with normal cognitive functions.

Replication – Your entire body is scanned by a machine which records all the information about the type and location of its molecules. The machine uses this information to construct two replicas of you. Your body is entirely vaporized in the process.

In his recent article “Personal Identity, Substantial Change and the Significance of Becoming”, the philosopher Michael Otsuka asks whether it is rational for you to anticipate and to grant significance to the experiences of your two siblings in the Fission Case while denying that it is rational to do so in the Replication Case. Otsuka argues for a positive answer on the basis of a criterion of “substantial change”. In the Fission Case, you become each of two other persons, while in the Replication Case, you are merely replaced by two other persons. Arguably, in the Fission Case, there is a substantial change in your identity because two streams of consciousness are brought into existence instead of the single one that existed before the transplantation. There is however a “substance-connection” that makes it rational for you – before the transplantation – to care about the experiences that your two siblings will have after the transplantation. In the Replication Case, no such substance-connection prevails, and therefore it may be argued that what will happen to the replicas is rationally irrelevant for you.

Otsuka contrasts his account with other views about personal identity, especially Parfit’s reductionist account. According to Parfit, psychological continuity and connectedness are the relevant criteria to employ when asking “what matters” in the prudential sense. On these criteria, Fission and Replication should be dealt with in the same way: assuming that your two replicas have memories of your past experiences, especially those just preceding the vaporization of your body, you should care about their experiences almost as much as about yours. The divergence between Otsuka’s substance account and Parfit’s reductionist account lies in the fact that the former is grounded in an ontology of things while the latter depends on an ontology of events and processes. Which one is the most convincing is of course a difficult question to answer, but it seems that most of us – as Otsuka suggests – will not regard the Fission Case and the Replication Case as perfectly symmetric. This provides a reason – not necessarily a decisive one, though – to favor Otsuka’s view.

Consider however a third case:

Downloading – Your stream of consciousness is uploaded and stored on non-biological hardware and almost instantly downloaded into a new biological brain placed inside a human body with no direct genetic relatedness to your original body. Your original body is destroyed. You have an intact memory of your experiences before the upload and download operations.

This case is a science-fiction staple and is, for instance, the one that Netflix’s series Altered Carbon uses in its narrative plot (though in the series the main character’s stream of consciousness is “reactivated” 250 years after having been uploaded). By assumption, psychological continuity and connectedness are satisfied, and hence an account like Parfit’s would treat this case in the same way as the two preceding ones. What about Otsuka’s? This probably depends on what is taken as the appropriate view of the mind-body problem. On the dominant, materialist and functionalist views, there is probably a case for considering that there is some form of substance-connection between your previous and current selves. After all, if consciousness is the result of functional operations whose realization is independent of the “hardware” on which they are “running” (in the sense that they could be realized on a different hardware), it may be argued that the appropriate sort of connection prevails. Consider however:

Multiple Downloading – Your stream of consciousness is uploaded and stored on non-biological hardware and almost instantly downloaded into a new biological brain placed inside a human body with no direct genetic relatedness to your original body. Your original body is destroyed. You have an intact memory of your experiences before the upload and download operations. Due to a mistake, the operation has been carried out twice, with two non-genetically related bodies.

Multiple Downloading is qualitatively similar to Replication, except for the fact that in the former the two selves that result from the operation have bodies that are genetically unrelated to yours. This difference can only reinforce Otsuka’s conclusion that you should not care about the two new selves’ experiences. I regard this result as counterintuitive, as it is not clear why and how a mistake leading to the duplication of your stream of consciousness should be prudentially relevant. There is no obvious reason for treating Downloading and Multiple Downloading differently. Someone rejecting the functionalist view of the mind-body problem may still argue that in Downloading the appropriate connection between the two selves does not prevail. One may then deny that the future selves’ experiences are prudentially relevant in both Downloading and Multiple Downloading. This is consistent with the ontology of things that Otsuka is defending. This conclusion still strikes me as counterintuitive, but that is probably due to my functionalist leanings!

How to Produce and Justify Knowledge in Ethics?

As I have been working on the relationships between economics and ethics for a couple of years now, I have had several opportunities to reflect on the way scholars produce knowledge on ethical issues. In a former blog post, I was already contemplating the role played by moral intuitions in Derek Parfit’s reasoning on population ethics. As I am now reading Parfit’s huge masterpiece On What Matters and Lazari-Radek and Singer’s book The Point of View of the Universe, this issue has once again been brought to my attention, as both Parfit and Lazari-Radek and Singer explicitly tackle it.

My current readings have led me to somewhat revise my view on this issue. Comparing the way economists (especially social choice theorists) and philosophers deal with ethical problems, I used to make a distinction between what can be called an ‘axiomatic approach’ and a ‘constructivist approach’ to ethical problems. The former tackles ethical issues first by identifying basic principles (‘axioms’) which are thought of as requirements that any moral doctrine or proposition must satisfy, and then by determining (most often through logical reasoning) implications regarding what is morally necessary, possible, permissible, forbidden and so on. The latter deals with ethical issues through thought experiments which most often consist in more or less artificial decision problems. There is an abundance of examples in philosophy: from Rawls’s ‘veil of ignorance’ to Parfit’s various spectrum and teleportation thought experiments and variants of the so-called ‘trolley problem’, philosophers routinely construct decision problems to determine what is intuitively regarded as morally permitted, mandatory or forbidden. A good illustration of these two approaches is provided by John Harsanyi’s two utilitarian theorems: his ‘impartial observer’ theorem and his ‘aggregation theorem’. The former corresponds to a constructivist approach and builds on a thought experiment using the veil of ignorance device. Harsanyi asked which society a rational agent put behind a thin veil of ignorance would choose to live in. Behind such a veil, the agent would ignore both his social position and his personal identity, including his personal preferences. Harsanyi famously argued that behind this veil, a rational agent should ascribe the same probability to being any member of the population and should therefore choose the society that maximizes average utility, i.e. the expected utility of the ‘impartial observer’. Harsanyi’s aggregation theorem also provides a defense of utilitarianism, but in quite a different way. It shows that if the members of the population have preferences over prospects (i.e. societies) that satisfy the axioms of expected utility theory, if a ‘benevolent dictator’ also has preferences satisfying these axioms, and if the relationship between both sets of preferences satisfies a Pareto condition, then the benevolent dictator’s preferences can be represented by an additive (weighted-sum) social welfare function.
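Schematically (my own compressed rendering, with $U_i$ the von Neumann–Morgenstern utility of member $i$ of an $n$-person population and $x$ a social prospect), the two results deliver criteria of the form:

$$W_{\text{IO}}(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} U_i(x) \quad \text{(impartial observer)}, \qquad W_{\text{AGG}}(x) \;=\; \sum_{i=1}^{n} a_i\, U_i(x), \ a_i \ge 0 \quad \text{(aggregation)}.$$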

Moral philosophers generally refer to another distinction that I originally thought was essentially equivalent to the axiomatic/constructivist one. Considering how moral claims can be justified, moral philosophers are divided between ‘foundationalists’ and ‘intuitionists’. Intuitionism is grounded on the method that Rawls labeled ‘reflective equilibrium’. Basically, it consists in considering that our moral intuitions provide both the starting point of moral reasoning and the ultimate datum against which moral claims should be evaluated. Starting from such intuitions, moral reasoning may lead to claims that contradict our initial intuitions. Intuitions and moral reasoning are then both iteratively revised until they ultimately match. Foundationalism proceeds in a quite different way. Here, moral claims are defended and accepted on the basis of basic, self-evident principles from which moral implications are deduced. Parfit’s discussion of issues related to personal identity or Larry Temkin’s critique of the transitivity principle in moral reasoning are instances of accounts that proceed along intuitionist lines. By contrast, Sidgwick’s defense of utilitarianism was rather foundationalist in essence, as it depended on a set of ‘axioms’ (justice, rational benevolence, prudence) from which utilitarian conclusions were derived.

There is an apparent affinity between the axiomatic approach and foundationalism on the one hand, and between the constructivist approach and intuitionism on the other hand. Until recently, I considered that the former pair was essentially characteristic of the way normative economists and social choice theorists tackle ethical issues, while the latter was rather consistent with the way philosophers proceed. However, I now realize that if this affinity is indeed real, it cannot be due to the mere fact that the axiomatic/constructivist and intuitionist/foundationalist distinctions are isomorphic. Indeed, it now seems to me that they do not concern the same aspect of moral reasoning: the former distinction concerns the issue of how ethical knowledge is produced, the latter concerns the issue of how moral claims are justified. While production and justification are somehow related, they are still quite different things. Therefore, there is no a priori reason for rejecting the possibility of combining foundationalism with constructivism and (perhaps less obviously) intuitionism with the axiomatic approach. We would then have the following four possibilities:

[Table omitted: the four combinations of the axiomatic/constructivist and foundationalist/intuitionist distinctions]

I think that ‘Axiomatic Foundationalism’ and ‘Constructivist Intuitionism’ are unproblematic categories. Examples of the former are Harsanyi’s aggregation theorem, John Broome’s utilitarian account based on separability assumptions or, at least as initially understood, Arrow’s impossibility theorem. All build on an axiomatic approach to derive moral/ethical/social choice results taking the form either of necessity claims (Harsanyi, Broome) or impossibility claims (Arrow). Moreover, these examples are interesting precisely because they lead to essentially counterintuitive results and have been argued by their proponents to require us to give up our original intuitions. Examples of ‘Constructivist Intuitionism’ are abundant in moral philosophy. As mentioned above, Temkin’s claims against transitivity and aggregation and Parfit’s reductionist account of personhood are great examples of a constructivist approach. They build on thought experiments about decision problems and essentially ask us to consider which solution is consistent with our intuitions. These are also instances of intuitionism because, though intuitions fuel moral reasoning from the start, the possibility of reconsidering them is left open (at least in principle).

Harsanyi’s impartial observer theorem is an instance of ‘Constructivist Foundationalism’. Harsanyi’s use of the veil of ignorance device makes it correspond to a constructivist approach. At the same time, Harsanyi also assumes that choosing in accordance with the criteria of expected utility theory should be taken as a foundational assumption of moral reasoning. It is the combination of this foundational assumption with the construction of a highly artificial decision problem that leads to the utilitarian conclusion. Finally, we may wonder whether there really are cases of ‘Axiomatic Intuitionism’. I would suggest that Sen’s Paretian liberal paradox may be interpreted this way. Admittedly, the Paretian liberal paradox could also be seen as a case of Axiomatic Foundationalism, as Sen’s initial intention was to lead economists to reconsider their intuitions regarding the consistency of freedom and efficiency. However, the discussion that has followed Sen’s result, rather than endorsing the claim of an inconsistency between freedom and efficiency, has focused on redefining the way freedom was axiomatically characterized by Sen, in such a way that the initial intuition was preserved. It remains true that the contrast between Axiomatic Foundationalism and Axiomatic Intuitionism is not that sharp. This probably reflects the fact that, as more and more moral philosophers recognize, the distinction between intuitionism and foundationalism has been historically exaggerated. However, I would suggest that the constructivist/axiomatic distinction is a more solid and transparent one.

Is it Rational to be Bayesian Rational?

Most economists and many decision theorists equate the notion of rationality with Bayesian rationality. While the assumption that individuals actually are Bayesian rational has been largely disputed and is now virtually rejected, the conviction that Bayesianism defines the normative standard of rational behavior remains fairly entrenched among economists. However, even the normative relevance of Bayesianism has been questioned. In this post, I briefly survey one interesting kind of argument that has been developed in particular by the decision theorist Itzhak Gilboa, with different co-authors, in several papers.

First, it is useful to start with a definition of Bayesianism in the context of economic theory: the doctrine according to which it is always rational to behave according to the axioms of Bayesian decision theory. Bayesianism is a broad church with many competing views (e.g. radical subjectivism, objective Bayesianism, imprecise Bayesianism…), but it will be sufficient to retain a generic characterization through the following two principles:

Probabilism: Bayesian rational agents have beliefs that can be characterized through a probability function whose domain is some state space.

Expected Utility Maximization: The choices of Bayesian rational agents can be represented by the maximization of the expectation of a utility function according to some probability function.
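In compact notation (a schematic rendering, assuming for simplicity a finite state space $S$, a set of acts $F$ mapping states to outcomes, a probability measure $p$ and a utility function $u$), the two principles amount to:

$$p : 2^{S} \to [0,1] \ \text{is a probability measure}, \qquad f^{*} \in \arg\max_{f \in F} \; \sum_{s \in S} p(s)\, u\!\left(f(s)\right).$$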

Gilboa’s critique of Bayesianism is solely concerned with probabilism, though some of its aspects could easily be extended to the expected utility maximization principle. Probabilism can itself be characterized as the conjunction of three tenets:

(i) Grand State Space: each atom (“state of nature”) in the state space is assumed to resolve all uncertainty, i.e. everything that is relevant for the modeler is specified, including all causal relationships. Though in Savage’s version of Bayesian decision theory states of nature were understood as “small worlds” corresponding to some coarse partition of the state space, in practice most economists implicitly interpret states of nature as “large worlds”, i.e. as resulting from the finest partition of the state space.

(ii) Prior Probability: Rational agents have probabilistic beliefs over the state space which are captured by a single probability measure.

(iii) Bayesian updating: In light of new information, rational agents update their prior to a posterior belief according to Bayes’s rule.
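Formally, upon learning that an event $E \subseteq S$ with $p(E) > 0$ has occurred, Bayes’s rule requires the posterior to be the prior conditioned on $E$:

$$p(A \mid E) \;=\; \frac{p(A \cap E)}{p(E)} \qquad \text{for every event } A \subseteq S.$$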

While the third tenet may be disputed, even from within the realm of Bayesianism (see for instance Jeffrey’s probability kinematics or views entertained by some objective Bayesians), it is the first two that are targeted by Gilboa. More exactly, while each tenet taken separately seems pretty reasonable normatively speaking, problems arise as soon as one decides to combine them.

Consider an arbitrary decision problem where it is assumed (as economists routinely do) that all uncertainty is captured through a Grand State Space. Say you have to decide between betting on what is presented to you as a fair coin landing on heads and betting on the next winner of the US presidential election being a Republican. There seem to be only four obvious states of nature: [Heads, Republican], [Heads, Democrat], [Tails, Republican], [Tails, Democrat]. Depending on your prior beliefs that the coin will land on heads (maybe 1:1 odds) and that the next US president will be a Republican (and assuming monotonic preferences in money), your choice will reveal your preference for one of the two bets. Even if ascribing probabilities to some of the events may be difficult, the requirements of Bayesian rationality cannot be said to be unreasonable here. But matters are actually more complicated because there are many things that may causally affect the likelihood of each event. For instance, while you have been told that the coin is fair, maybe you have reason to doubt this claim. This will depend, for instance, on who has made the statement. Obviously, the result of the next US presidential election will depend on many factual and counterfactual events that may happen. To form a belief about the result of the election, you not only have to form beliefs over these events but also over the nature of the causal relationships between them and the result of the election. Computationally, the task quickly becomes tremendous as the number of states of nature to consider explodes. Assuming that a rational agent should be able to assign a prior over all of them is normatively unreasonable.
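A small sketch of the point (my own illustration; the probabilities and the list of ‘causally relevant’ factors are made up): with the naive four-state description the expected-value comparison is trivial, but every additional binary consideration doubles the number of states over which a single prior must be specified.

```python
from itertools import product

# Naive four-state description: expected payoff of the two bets (payoff 100 if
# the bet wins, 0 otherwise). The probabilities are made-up illustrations.
def expected_values(prob_heads, prob_republican, payoff=100):
    return prob_heads * payoff, prob_republican * payoff

print(expected_values(0.5, 0.45))  # (50.0, 45.0): the coin bet looks better

# Refining the description: is the coin really fair? Is the person who told you
# trustworthy? Which counterfactual events affect the election? Each additional
# binary factor doubles the state space over which a prior must be assigned.
base_factors = ["coin lands heads", "Republican wins"]
extra_factors = ["coin actually fair", "informant truthful",
                 "economic downturn", "third-party run", "major scandal"]

for k in range(len(extra_factors) + 1):
    factors = base_factors + extra_factors[:k]
    n_states = len(list(product([True, False], repeat=len(factors))))
    print(f"{len(factors)} binary factors -> {n_states} states")
```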

An obvious answer (at least for economists and behaviorist-minded philosophers) is to remark that prior beliefs need not be “in the mind” of the decision-maker. What matters is that the betting behavior of the decision-maker reveals preferences over prospects that can be represented by a unique probability measure over as large a state space as is needed to make sense of it. There are many things to be said against this standard defense, but for the sake of the argument we may momentarily accept it. What happens, however, if the behavior of the agent fails to reveal the adequate preferences? Must we then conclude that the decision-maker is irrational? A well-known case leading to such questions is Ellsberg’s paradox. Under a plausible interpretation, the latter indicates that most actual agents reveal through their choices an aversion to probabilistic ambiguity which directly leads to a violation of the independence axiom of Bayesian decision theory. In this case, the choice behavior of agents cannot be consistently represented by a unique probability measure. Rather than arguing that such choice behavior is irrational, a solution (which I have already discussed here) is to adopt the Grand State Space approach. It is then possible to show that with an augmented state space there is nothing “paradoxical” in Ellsberg’s paradox. The problem with this strategy is however twofold. On the one hand, many choices are “unobservable” by definition, which fits uneasily with the behaviorist interpretation of the Bayesian axioms. On the other hand, it downplays the reasons that explain the choices that actual agents are actually making.
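To illustrate why no single probability measure can rationalize the modal Ellsberg choices, here is a minimal sketch based on the standard single-urn version (30 red balls and 60 balls that are black or yellow in unknown proportion; the typical pattern is to prefer betting on red rather than on black, but on black-or-yellow rather than on red-or-yellow):

```python
# Scan candidate probabilities for 'black' and check whether any single
# probability, combined with expected utility (utility increasing in money),
# rationalizes the typical Ellsberg pattern of choices.

P_RED = 1 / 3  # 30 red balls out of 90

def rationalizes_typical_pattern(p_black):
    p_yellow = 1 - P_RED - p_black
    bet_red_over_black = P_RED > p_black                         # first choice
    bet_black_yellow_over_red_yellow = (p_black + p_yellow
                                        > P_RED + p_yellow)      # second choice
    return bet_red_over_black and bet_black_yellow_over_red_yellow

candidates = [i / 1000 for i in range(0, 668)]  # p_black ranges over [0, 2/3]
print(any(rationalizes_typical_pattern(p) for p in candidates))  # False
```

The second preference simplifies to p_black > 1/3 while the first requires p_black < 1/3, so the two choices jointly rule out any single prior; this is the sense in which the behavior cannot be represented by a unique probability measure.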

To understand this last point about the role of reasons, it must be acknowledged that Bayesianism defines rationality merely in terms of consistency with respect to a set of axioms. As a result, such an approach completely disregards the way agents form their beliefs (as well as their preferences) and – more importantly – abstains from making any normative statement regarding the content of beliefs. “Irrational” beliefs are merely beliefs that fail to qualify for a representation through a unique probability measure. Now, consider whether it is irrational to fail or to refuse to have such beliefs in cases where some alternatives but not others suffer from probabilistic ambiguity. Also, consider whether it is irrational to firmly believe (possibly to degree 1) that smoking presents no risk for health. Standard Bayesianism will answer positively in the first case but negatively in the second. Not only is this unintuitive, it also seems pretty unreasonable. Consider the following alternative definition of rationality proposed by Gilboa:

A mode of behavior is irrational for a decision maker, if, when the latter is exposed to the analysis of her choices, she would have liked to change her decision, or to make different choices in similar future circumstances.

This definition of rationality appeals to the reflexive abilities of human agents and, crucially, to our capacity to motivate our choices through reasons. It suggests, first, that the axioms of Bayesian decision theory can be submitted both as reasons to make specific choices and as the subject of normative evaluation. It also indicates that, whatever may be thought of these axioms, Bayesianism lacks an adequate account of belief formation. In other words, Bayesianism cannot pretend to constitute a normative theory of rationality because it offers no justification either for the way an agent should partition the state space or for deciding which prior to adopt. The larger the state space is made in order to capture all the relevant features explaining an agent’s prior, the less reasonable it seems to expect rational agents to be able or willing to entertain such a prior.

Behavioral Welfare Economics and the ‘View from Nowhere’

As Richard Thaler has just received a well-deserved ‘Nobel prize’ for his pioneering contributions to behavioral economics and behavioral finance, many commentators are reflecting on the scientific and ethical significance of Thaler’s work and, more generally, of behavioral economics regarding policy matters. Thaler is of course well known for having developed, with the legal scholar Cass Sunstein, the whole nudge idea as well as the seemingly oxymoronic notion of “libertarian paternalism”. In a somewhat challenging review of Thaler’s contribution, Kevin Bryan expresses some worries regarding the ethical implications of the nudging practice:

“Let’s discuss ethics first. Simply arguing that organizations “must” make a choice (as Thaler and Sunstein do) is insufficient; we would not say a firm that defaults consumers into an autorenewal for a product they rarely renew when making an active choice is acting “neutrally”. Nudges can be used for “good” or “evil”. Worse, whether a nudge is good or evil depends on the planner’s evaluation of the agent’s “inner rational self”, as Infante and Sugden, among others, have noted many times. That is, claiming paternalism is “only a nudge” does not excuse the paternalist from the usual moral philosophic critiques! Indeed, as Chetty and friends have argued, the more you believe behavioral biases exist and are “nudgeable”, the more careful you need to be as a policymaker about inadvertently reducing welfare. There is, I think, less controversy when we use nudges rather than coercion to reach some policy goal. For instance, if a policymaker wants to reduce energy usage, and is worried about distortionary taxation, nudges may (depending on how you think about social welfare with non-rational preferences!) be a better way to achieve the desired outcomes. But this goal is very different that common justification that nudges somehow are pushing people toward policies they actually like in their heart of hearts. Carroll et al have a very nice theoretical paper trying to untangle exactly what “better” means for behavioral agents, and exactly when the imprecision of nudges or defaults given our imperfect knowledge of individual’s heterogeneous preferences makes attempts at libertarian paternalism worse than laissez faire.”

As Noah Smith however rightly notes, this is not a problem peculiar to the nudge approach nor, more generally, to behavioral welfare economics:

“There are, indeed, very real problems with behavioral welfare economics. But the same is true of standard welfare economics. Should we treat utilities as cardinal, and sum them to get our welfare function, when analyzing a typical non-behavioral model? Should we sum the utilities nonlinearly? Should we consider only the worst-off individual in society, as John Rawls might have us do?
Those are nontrivial questions. And they apply to pretty much every economic policy question in existence. But for some reason, Kevin chooses to raise ethical concerns only for behavioral econ. Do we see Kevin worrying about whether efficient contracts will lead to inequality that’s unacceptable from a welfare perspective? No. Kevin seems to be very very very worried about paternalism, and generally pretty cavalier about inequality.”

According to Robert Sugden, what standard and behavioral welfare economics have in common is that they endorse – if implicitly – the ‘view from nowhere’ in ethics. The latter – whose name was coined by Thomas Nagel – is the view that goodness or rightness is to be judged according to criteria set by some exogenous impartial or benevolent dictator. In welfare economics, the criteria imposed by the benevolent dictator are instantiated through an (Arrovian or Bergsonian) social welfare function (SWF). An SWF is itself traditionally obtained through the definition of the relevant informational basis (which kind of information should be taken into account in the normative analysis) and an aggregation rule (how to use this information to make social evaluations and comparisons).
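Schematically (my own illustration), with informational basis $(u_1,\dots,u_n)$ over a social alternative $x$, two familiar aggregation rules are the utilitarian sum and the Rawlsian maximin mentioned in Smith’s quote above:

$$W_{U}(x) \;=\; \sum_{i=1}^{n} u_i(x), \qquad\qquad W_{R}(x) \;=\; \min_{i \in \{1,\dots,n\}} u_i(x).$$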

From this perspective, it is right that standard and behavioral welfare economics share a feature that some may regard as problematic: the very definition of the relevant SWF is left to a putatively impartial and benevolent being who is thought to lie outside the group of persons to whom the normative evaluation is addressed. The problem with the view from nowhere is that it creates a divide between the one making the impartial ethical judgments and evaluations and the persons whose welfare, rights and so on are the objects of these judgments and evaluations. That means that welfare economics as a whole is somehow paternalistic in its very foundations. Arguably, there is a difference between standard and behavioral welfare economics: the latter is more restrictive regarding the relevant informational basis. In the classical preference-satisfaction account of welfare that most welfare economists endorse (including most but not all behavioral economists), preferences, whatever their content, are considered relevant from a welfare point of view. Behavioral welfare economists argue however that it is legitimate to ignore preferences that are revealed by choices resulting from cognitive biases, lack of awareness, errors and so on. This only strengthens the paternalistic tendencies of welfare economics. In other words, the difference between standard and behavioral welfare economics is not one of kind but “merely” one of degree.

Ultimately, it is important to acknowledge that welfare economics as a whole is not well suited to discuss most issues related to (libertarian) paternalism, especially the problems of manipulation and autonomy. Welfare economics is nowadays essentially a theoretical framework for making social evaluations given exogenous welfare criteria, but it cannot be a substitute for moral and ethical reasoning (though the related social choice approach can be a way to reflect on ethical problems).

Parfit on How to Avoid the Repugnant Conclusion (And Some Additional Personal Considerations)

Derek Parfit, one of the most influential contemporary philosophers, died last January. The day before his death, he submitted what seems to be his last paper to the philosophy journal Philosophy and Public Affairs. In this paper, Parfit tackles the famous “non-identity problem” that he himself set out in Reasons and Persons almost 35 years ago. Though unfinished, the paper is quite interesting because it appears to offer a way to avoid the no less famous “repugnant conclusion”. I describe below Parfit’s tentative solution and also add some comments on the role played by moral intuitions in Parfit’s (and other moral philosophers’) argumentation.

Parfit is concerned with cases where we have to compare the goodness of two or more outcomes in which different people exist. Start first with Same Number cases, i.e. cases where at least one person exists in one outcome but not in the other, while the total number of people is the same. Example 1 is an instance of such a case (numbers denote quality of life according to some cardinal and interpersonally comparable measure):

Example 1

Outcome A: Ann 80, Bob 60, Chris —
Outcome B: Ann —, Bob 70, Chris 20
Outcome C: Ann 20, Bob —, Chris 30

 

How should we compare these three outcomes? Many moral philosophers entertain one kind or another of “person-affecting principle” according to which betterness or worseness necessarily depends on some persons being better (or worse) off in one outcome than in another. Consider in particular the Weak Narrow Principle:

Weak Narrow Principle: One of two outcomes would be in one way worse if this outcome would be worse for people.

 

Since it is generally accepted that we cannot make someone worse off by not making her exist, outcome A should be regarded as worse (in one way) than outcome B by the Weak Narrow Principle. Indeed, Bob is worse off in A than in B, while the fact that Ann does not exist in B cannot make her worse off than in A (even though Ann would have a pretty good life if A were to happen). By the same reasoning, C should be considered worse than A, and B worse than C. Thus the ‘worse than’ relation is not transitive. Lack of transitivity may be seen as dubious but is not in itself sufficient to reject the Weak Narrow Principle. Note though that if we have to compare the goodness of the three outcomes together, we are left without any determinate answer. Consider however:

Example 2

Outcome D: Dani 70, Matt 50, Luke —, Jessica —
Outcome E: Dani —, Matt 60, Luke 30, Jessica —
Outcome F: Dani —, Matt —, Luke 35, Jessica 10

 

According to the Weak Narrow Principle, D is worse than E and E is worse than F. If we impose transitivity on the ‘worse than’ relation, then D is worse than F. Parfit regards this kind of conclusion as implausible. Even if we deny transitivity, the conclusion that E is worse than F is also hard to accept.
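Going back to Example 1, the cycle generated by the Weak Narrow Principle can be checked mechanically. Here is a small sketch (my own encoding, with None marking non-existence; a non-existent person is never counted as worse off):

```python
# Encode Example 1 and apply the Weak Narrow Principle pairwise: an outcome is
# (in one way) worse than another if some person who exists in both outcomes
# is worse off in it. Non-existence is never treated as being worse off.

outcomes = {
    "A": {"Ann": 80, "Bob": 60, "Chris": None},
    "B": {"Ann": None, "Bob": 70, "Chris": 20},
    "C": {"Ann": 20, "Bob": None, "Chris": 30},
}

def worse_than(x, y):
    """True if outcome x is in one way worse than outcome y (Weak Narrow)."""
    for person in outcomes[x]:
        level_x, level_y = outcomes[x][person], outcomes[y][person]
        if level_x is not None and level_y is not None and level_x < level_y:
            return True
    return False

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} worse than {y}: {worse_than(x, y)}")
# Prints True three times: the 'worse than' relation cycles (A, B, C, back to A).
```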

Given that the Weak Narrow Principle leads to implausible conclusions in Same Number cases, it is desirable to find alternative principles. In Reasons and Persons, Parfit suggested adopting impersonal principles that do not appeal to facts about what would affect particular people. For instance:

Impersonal Principle: In Same Number cases, it would be worse if the people who existed would be people whose quality of life would be lower.

 

According to this principle, we can claim that F is worse than E which is worse than D. Obviously, ‘worse than’ is transitive. What about Different Number cases (i.e. when the number of people who exist in one outcome is higher or lower than in another one)? In Reasons and Persons, Parfit originally explored an extension of the Impersonal Principle:

The Impersonal Total Principle: It would always be better if there was a greater sum of well-being.

 

Parfit ultimately rejected this last principle because it leads to the Repugnant Conclusion:

The Repugnant Conclusion: Compared with the existence of many people whose quality of life would be very high, there is some much larger number of people whose existence would be better, even though these people’s lives would be barely worth living.

 

In his book Rethinking the Good, the philosopher Larry Temkin suggests avoiding the repugnant conclusion by arguing that the ‘all things considered better than’ relation is essentially comparative. In other words, the goodness of a given outcome depends on the set of outcomes with which it is compared. But this has the obvious consequence that the ‘better than’ relation is not necessarily transitive (Temkin claims that transitivity applies only to a limited part of our normative realm). Parfit instead sticks to the view that goodness is intrinsic and suggests an alternative approach through another principle:

Wide Dual Person-Affecting Principle: One of two outcomes would be in one way better if this outcome would together benefit people more, and in another way better if this outcome would benefit each person more.

 

Compare outcomes G and H on the basis of this principle:

Outcome G: N persons will exist and each will live a life whose quality is at 80.

Outcome H: 2N persons will exist and each will live a life whose quality is at 50.

 

According to the Wide Dual Person-Affecting Principle, G is better than H in at least one way because it benefits each person more, assuming that you cannot be made worse off by not existing. H may be argued to be better than G in another way, by benefiting people more, at least on the basis of some additive rule. Which outcome is all things considered better remains debatable. But consider:

Outcome I: N persons will exist and each will live a life whose quality is at 100.

Outcome J: 1000N persons will exist and each will live a life whose quality is at 1.

 

Here, although each outcome is better than the other in one respect, it may plausibly be claimed that I is better all things considered because the lives in J are barely worth living. This may be regarded as sufficient to more than compensate for the fact that the sum of well-being is far greater in J than in I. This leads to the following conclusion:

Analogous Conclusion: Compared with the existence of many people whose lives would be barely worth living, there is some much higher quality of life whose being had by everyone would be better, even though the number of people who exist would be much smaller.

This conclusion is consistent with the view that goodness is intrinsic and obviously avoids the repugnant conclusion.
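For concreteness, the arithmetic behind the two pairwise comparisons above, assuming a simple additive reading of ‘benefiting people more’:

$$\text{Total}(G) = 80N < 100N = \text{Total}(H), \qquad \text{Total}(I) = 100N < 1000N = \text{Total}(J),$$

while per-person quality of life is $80 > 50$ in the first pair and $100 \gg 1$ in the second, which is what drives the Analogous Conclusion.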

 

I would like to end this post with some remarks on the role played by moral intuitions in Parfit’s reasoning. This issue had already come to my mind when reading Parfit’s Reasons and Persons as well as Temkin’s Rethinking the Good. Basically, both Parfit and Temkin (and many other moral philosophers) ground their moral reasoning on intuitions about what is good/bad or right/wrong. For instance, Parfit’s initial rejection of impersonal principles in Reasons and Persons was entirely grounded on the fact that they seem to lead to the repugnant conclusion, which Parfit regarded as morally unacceptable. The same is true of Temkin’s arguments against the transitivity of the ‘all things considered better than’ relation. Moral philosophers seem mostly to use a form of backward reasoning about moral matters: take some conclusions as intuitively acceptable/unacceptable or plausible/implausible and then try to find principles that may rationalize our intuitions about these conclusions.

As a scholar in economics & philosophy with the background of an economist, I find this way of reasoning somewhat surprising. Economists who think about moral matters generally do so from a social choice perspective. The latter almost completely turns the philosopher’s reasoning on its head. Basically, a social choice theorist will start from a small set of axioms that encapsulate basic principles that may plausibly be regarded as constraints that should bind any acceptable moral view. For instance, Pareto principles are generally imposed because we take as a basic moral constraint that if everyone is better off (in some sense) in one outcome than in another, then the former is better than the latter. The social choice approach then consists in determining which social choice functions (i.e. moral views) are compatible with these constraints. In most cases, this approach will not be able to tell which moral view is obligatory; but it will tell which moral views are and are not permissible given our accepted set of constraints. The repugnant conclusion provides a good illustration: in one of the best social choice treatments of issues related to population ethics, John Broome (a philosopher but a former economist) rightly notes that if the “repugnant” conclusion follows from acceptable premises, then we should not reject it on the ground that we regard it as counterintuitive. The same is true of transitivity: the fact that it entails counterintuitive conclusions is not sufficient to reject it (at least, independent arguments for rejection are needed).

There are two ways to justify the social choice approach to moral matters. The first is the fact that we generally have a better understanding of “basic principles” than of more complex conclusions that depend on a (not always well-identified) set of premises. It is far easier to discuss the plausibility of transitivity or of Pareto principles in general than to assess moral views and their more or less counterintuitive implications. Of course, we may also have a poor understanding of basic principles, but the attractiveness of the social choice approach is precisely that it helps to focus the discussion on axioms (think of the literature on Arrow’s impossibility theorem). The second reason to endorse the social choice approach on moral issues is that we are now starting to understand where our moral intuitions and judgments come from. Moral psychology and experimental philosophy tend to indicate that our moral views are deeply rooted in our evolutionary history. Far from vindicating them, this should on the contrary encourage us to be skeptical about their truth-value. Modern forms of moral skepticism point out that, whatever the ontological status of morality, the naturalistic origins of moral judgments do not guarantee – and actually make highly doubtful – that whatever we believe about morality is epistemically well grounded.

 

Hard Obscurantism and Unrealistic Models in Economics

The philosopher and social scientist Jon Elster is well-known for his critical and insightful views about the (ir)relevance of rational choice theory (RCT) in the social sciences. Among his recent writings on the subject, Elster published last year a paper in the philosophy journal Synthese concerning what he calls “hard obscurantism” in economic modeling (gated version here). By hard obscurantism, Elster essentially refers to a practice where “ends and procedures become ends in themselves, dissociated from their explanatory functions” (p. 2163). This includes many rational choice models, but also parts of agent-based modeling, behavioral economics and statistical analysis in economics.

Elster’s paper focuses on the case of rational choice models and builds on several “case studies” that are thought to illustrate the practice of hard obscurantism. These case studies include Akerlof & Dickens’s and Rabin’s use of cognitive dissonance theory, Becker and Mulligan’s accounts of altruism, as well as Acemoglu & Robinson’s theory of political transitions. Beyond these examples, Elster underlines two general problems with rational choice models and more generally with RCT: first, the theory is indeterminate; second, it ignores the irrationality of agents. Indeterminacy is indeed a well-known problem that is partly related to (though not equivalent to) the existence of multiple equilibria in many rational choice models. According to Elster, it has three sources: (i) the fact that the determination of the optimal amount of information leads to an infinite regress (computing the marginal value of a piece of information requires collecting it, but deciding whether or not to collect it requires knowing its marginal value), (ii) brute and strategic uncertainty (the latter is of course closely related to the existence of multiple equilibria) and (iii) the agents’ cognitive limitations. The latter is regarded by Elster as the most important source and is somewhat related to the irrationality problem. In Elster’s words,

“How can we impute to real-life agents the capacity to make in real time the calculations that occupy many pages of mathematical appendixes in the leading journals and that can be acquired only through years of professional training?” (p. 2166)

Elster’s objection is hardly new and many different responses have been developed. It is not my intention to survey them. I shall rather focus on one issue that follows from Elster’s critique: can we learn anything with unrealistic models, and how? There is an empirical disagreement among economists regarding the degree to which individual agents are truly irrational. Against the behavioral economists’ claim that individuals’ behavior and reasoning exhibit a long list of biases, other economists claim that this depends on the institutional setting in which individuals’ choices take place (for instance, it is probably not true that hyperbolic discounting is dominant in many markets, and many biases seem to diminish in importance when agents have the opportunity to learn). It is a fact, however, that individuals’ behaviors do not have the consistency properties that most rational choice models assume they have. Moreover, most rational choice models are unrealistic beyond their “behavioral” assumptions about agents’ reasoning abilities. They also make rather unrealistic “structural” assumptions regarding, for instance, the number of players, the homogeneity of their preferences, the fact that the features of the game are common knowledge, and so on. A good example among the cases discussed by Elster is Acemoglu & Robinson’s theory of political transitions. The latter builds on a game-theoretic model with only two players, each thought to be representative of a group of actors: the elites and the citizens. The preferences of the members of each group are assumed to be homogeneous and, for the citizens’ group, to correspond to the median voter’s preferences. The model also makes several strong assumptions regarding what the players know.

So, can we learn anything about real-world mechanisms from such unrealistic models? The philosopher of social science Harold Kincaid has recently made an interesting suggestion for a (partially) positive answer. Kincaid rightly starts by indicating that it is vain to search for a general defense of unrealistic models in the social sciences and that each evaluation must be made on a case-by-case basis. Regarding perfect competition and game-theoretic models, Kincaid argues that they may offer relevant explanations in spite of the fact that they build on highly unrealistic assumptions:

“The insight is that assumptions of the perfect competition and game theory models may just be assumptions the analyst – the economist or political scientist – uses to identify equilibria. However, in certain empirical applications, the explanations are equilibrium explanations that make no commitment to what process leads individuals to find equilibrium”

In my view, this account of the relevance of unrealistic models works particularly well in the case of mechanism design, which is at once a highly theoretical and a highly applied branch of microeconomics. A typical approach in mechanism design is to consider that the right institutional design will entail equilibrium play from the players, even if the designer does not know the players’ actual preferences. The modeler does not make any commitment regarding how the players will find their way to the equilibrium. The model simply indicates that if the institutional setup has such and such characteristics (e.g. a continuous double auction), then the outcome will have such and such characteristics (e.g. allocative efficiency). It is then possible to check this conjecture through experiments.
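As an illustration of this institution-level reading, here is a minimal simulation sketch in the spirit of Gode and Sunder’s “zero-intelligence” trader experiments. The market rules, the quoting rules and all the numbers are simplified assumptions of mine, not a description of any actual mechanism-design result: traders quote random prices constrained only by their valuations or costs, and the code merely reports how much of the maximum available surplus the double-auction rules extract, with no story at all about how anyone “finds” the equilibrium.

```python
import random

# Toy "zero-intelligence" double auction (all numbers are hypothetical).
random.seed(0)
N = 20
values = [random.uniform(1, 10) for _ in range(N)]   # buyers' valuations
costs = [random.uniform(1, 10) for _ in range(N)]    # sellers' unit costs

# Maximum (efficient) surplus: match highest valuations with lowest costs.
max_surplus = sum(v - c
                  for v, c in zip(sorted(values, reverse=True), sorted(costs))
                  if v > c)

def simulate(rounds=5000):
    buyers, sellers = list(values), list(costs)
    best_bid = best_ask = None            # a one-unit order book
    bid_owner = ask_owner = None
    realized = 0.0
    for _ in range(rounds):
        if buyers and random.random() < 0.5:
            i = random.randrange(len(buyers))
            bid = random.uniform(0, buyers[i])        # never bid above value
            if best_bid is None or bid > best_bid:
                best_bid, bid_owner = bid, i
        elif sellers:
            j = random.randrange(len(sellers))
            ask = random.uniform(sellers[j], 10)      # never ask below cost
            if best_ask is None or ask < best_ask:
                best_ask, ask_owner = ask, j
        # A trade occurs whenever the standing quotes cross; the book is then cleared.
        if best_bid is not None and best_ask is not None and best_bid >= best_ask:
            realized += buyers[bid_owner] - sellers[ask_owner]
            buyers.pop(bid_owner)
            sellers.pop(ask_owner)
            best_bid = best_ask = bid_owner = ask_owner = None
    return realized

print(f"share of maximum surplus realized: {simulate() / max_surplus:.2f}")
```

Whether such a crude trading institution really extracts most of the available surplus is exactly the kind of conjecture that, as noted above, can be checked experimentally.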

On this account, the model is thus merely a device to identify the equilibrium and makes no claim about the mechanism through which the equilibrium is reached. It is not clear, however, that this account applies to rational choice models used in other settings, especially if experiments are impossible. For instance, Acemoglu & Robinson’s model highlights the importance of commitment in explaining political transitions. Indeed, their theory aims at accounting for the change from a dictatorial equilibrium toward a democratic one. The elites’ ability to credibly commit to future redistributive concessions is the key feature that determines whether or not the political transition will occur. The model thus suggests that a highly general mechanism is at play, but it is unclear how much confidence we can have in this explanation given the highly unrealistic assumptions on which it builds. An alternative defense would be that the model’s value comes from the fact that it highlights a mechanism that may partially explain political transitions. Thanks to the model, we fully understand how this mechanism works, even though we cannot be sure that it is actually responsible for the relevant phenomenon to be explained. In other words, the relevance of the model comes from the fact that it depicts a possible world which we are able to fully explore and that this world bears some (even remote) resemblance to the actual world. As I have argued elsewhere, many models in economics seem to be valued for this reason.
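To make this commitment mechanism a bit more concrete, here is a deliberately toy backward-induction sketch. It is not Acemoglu and Robinson’s model: the game tree and the payoff numbers are hypothetical, chosen only so that the elites’ (in)ability to commit to promised redistribution is what determines whether democratization occurs.

```python
# A deliberately stylized sketch (not Acemoglu & Robinson's actual model):
# hypothetical payoffs chosen only to show how the elites' (in)ability to
# commit to future redistribution can drive the democratization decision.

# Payoffs (elite, citizens) for each terminal outcome of the toy game tree.
PAYOFFS = {
    "democratize": (2, 8),   # median voter sets future taxes
    "revolt":      (0, 6),   # costly revolution
    "honor":       (5, 7),   # promised redistribution is kept
    "renege":      (9, 1),   # promise is broken once the threat recedes
}

def solve(elite_can_commit: bool):
    """Backward induction on the toy game tree."""
    # Last stage: if the promise was accepted, what do the elites do later?
    if elite_can_commit:
        ex_post = "honor"                                      # commitment binds
    else:
        ex_post = max(("honor", "renege"), key=lambda a: PAYOFFS[a][0])

    # Citizens anticipate the ex-post choice when deciding accept vs revolt.
    citizens_move = "accept" if PAYOFFS[ex_post][1] >= PAYOFFS["revolt"][1] else "revolt"
    promise_outcome = ex_post if citizens_move == "accept" else "revolt"

    # Elites compare democratizing now with promising redistribution.
    elite_choice = max(
        ("democratize", "promise"),
        key=lambda a: PAYOFFS["democratize"][0] if a == "democratize"
                      else PAYOFFS[promise_outcome][0],
    )
    outcome = "democratize" if elite_choice == "democratize" else promise_outcome
    return elite_choice, outcome

for can_commit in (True, False):
    choice, outcome = solve(can_commit)
    print(f"commitment={can_commit}: elites choose {choice!r}, outcome {outcome!r}")
```

With these illustrative payoffs, binding promises let the elites keep power by promising redistribution, whereas without commitment the citizens anticipate reneging and the elites prefer to democratize: democratization acts here as a commitment device.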

The problem with this “possible worlds” account is that, while it may explain why economists give credence to rational choice models, it is highly unlikely to convince skeptics like Elster that these models are explanatorily relevant. Indeed, as Elster has argued elsewhere, the academic value given to these models may itself result from the fact that the economics profession is trapped in a bad equilibrium.

Recent Working Papers

You will find below several working papers I have written recently on different (but somewhat related) topics. Comments are welcome!

A Bayesian Conundrum: From Pragmatism to Mentalism in Bayesian Decision and Game Theory

Abstract: This paper discusses the implications for Bayesian game theory of the behaviorism-versus-mentalism debate regarding the understanding of foundational notions of decision theory. I argue that the dominant view among decision theorists and economists is actually neither mentalism nor behaviorism, but rather pragmatism. Pragmatism takes preferences as primitives and builds on three claims: i) preferences and choices are analytically distinguishable, ii) qualitative attitudes have priority over quantitative attitudes and iii) practical reason has priority over theoretical reason. Crucially, the plausibility of pragmatism depends on the availability of the representation theorems of Bayesian decision theory. As an extension of decision-theoretic principles to the study of strategic interactions, Bayesian game theory also essentially endorses the pragmatist view. However, I claim that the fact that representation theorems are not available in games makes this view implausible. Moreover, I argue that pragmatism cannot properly account for the generation of belief hierarchies in games. If the epistemic program in game theory is to be pursued, this should probably be along mentalistic lines.

Keywords: Bayesian synthesis – Bayesian game theory – Pragmatism – Mentalism – Preferences

 

Neo-Samuelsonian Welfare Economics: From Economic to Normative Agency

Abstract: This paper explores possible foundations and directions for “Neo-Samuelsonian Welfare Economics” (NSWE). I argue that neo-Samuelsonian economics entails a reconciliation problem between positive and normative economics, because it severs the relationship between economic agency (i.e. what and who the economic agent is) and normative agency (i.e. what should be the locus of welfare analysis). Developing a NSWE thus requires finding a way to articulate economic and normative agency. I explore two possibilities and argue that both are attractive but have radically different implications for the status of normative economics. The first possibility consists in fully endorsing a normative approach in terms of “formal welfarism”, which is completely neutral regarding both the locus and the unit of measurement of welfare analysis. The main implication is then to make welfare economics a branch of positive economics. The second possibility is to consider that human persons should be regarded as axiologically relevant because, while they are not prototypical economic agents, they have the ability to represent themselves both to themselves and to others as reasonable and reliable beings through narrative construction processes. This provides a justification for viewing well-being as being constituted by persons’ preferences, but only because these preferences are grounded in reasons and values that define the identity of the persons. This view is broadly compatible with recent accounts of well-being in terms of value-based life satisfaction and implies a substantial reconsideration of the foundations of welfare economics.

Keywords: Neo-Samuelsonian economics – Welfare Economics – Revealed preference theory – Preference-satisfaction view of welfare – Economic agency

 

History, Analytic Narratives and the Rules-in-Equilibrium View of Institutions

Abstract: Analytic narratives (AN) are case studies of historical events and/or institutions that combine the narrative method characteristic of historical and historiographical works with analytic tools, especially game theory, traditionally used in economics and political science. The purpose of this paper is to give a philosophy-of-science view of the relevance of analytic narratives for institutional analysis. The main claim is that the AN methodology is especially appealing in the context of a non-behaviorist and non-individualist account of institutions. Such an account is fully compatible with the “rules-in-equilibrium” view of institutions. On this basis, two supporting claims are made: first, I argue that within analytic narratives game-theoretic models play a key role in the identification of institutional mechanisms as the explanans for economic phenomena, the latter being irreducible to so-called “micro-foundations”. Second, I claim that the “rules-in-equilibrium” view of institutions provides a justification for the importance given to non-observables in institutional analysis. Hence, institutional analysis building on analytic narratives typically emphasizes the role of derived (i.e. not directly observed) intentional states (preferences, intentions, beliefs).

Keywords: Analytic narratives – Rules-in-equilibrium view of institutions – Institutional analysis – Game theory

Accounting for Choices in Economics

Economics is sometimes characterized as the “science of rational choices over the allocation of scarce resources” or, even more straightforwardly, as the “science of choices”. In a recent blog post, Chris Dillow makes some interesting remarks about people’s economic behavior. He notes that our behavior is often partially unconscious and/or habit-based. Moreover, the set of available options is quite frequently severely restricted, such that there is little room to make voluntary choices. Finally, many decisions are actually more or less random and grounded in social norms, conventions and other factors on which we barely reflect. The conclusion is then that

“when we ask “why did he do that?” we must look beyond “max U” stories about mythical individuals abstracted from society and look at the role of habit, cultural persistence and constraints.”

These are interesting and important remarks because they directly concern the scope of economics as well as the meaning of the key concept of choice. It seems that Dillow is using the choice concept according to its folk meaning. According to the latter, to properly say “she chooses x” requires at least that (a) she has several available options at her disposal to choose between and (b) she opts for one of the available options consciously and voluntarily. However, I would argue that this is not how economists generally use and understand the choice concept. They rather use a concept of choice* in a technical sense. To put it using some jargon, in economics choices* are basically behavioral patterns that correlate with changes in opportunity costs. In other words, when we say that economics is the science of choices*, what is actually meant is that it studies how some particular variable, reflecting for instance the consumption level of a given good, changes as the good’s relative price or consumers’ information changes. This definition of choice* has at least two noteworthy implications:

1) Economists are not interested in individual choices per se. Economists almost always work at some aggregate level and do not aim at explaining the choices made by specific individuals or firms. They are rather interested in the properties of aggregate demand and supply.

2) Economists are agnostic regarding the specific mechanisms through which economic agents make choices. In particular, there is no presumption that these choices are conscious rather than habit-based. The U-Max framework only assumes that individual choices are responsive to changes in opportunity costs, not how or why they are responsive.

These two implications work in conjunction. Choices* need not be conscious, nor based on any form of complex calculation, but they are nonetheless intentional: choices (in both the folk and the technical meanings) are about something and they are the product of the agents’ intentional states (desires, beliefs, wants…). As philosophers of mind have emphasized, there is nothing paradoxical in the combination of unconsciousness and intentionality. The U-Max framework, as well as decision and game theory as a whole, are tools particularly well suited to the study of intentional behavior, whether conscious or not. These tools indeed assume that individual choices are responsive to changes in opportunity costs, which, in special cases (e.g. addictive behavior), may not be true. However, this is mostly irrelevant as long as responsiveness is preserved at some market level. Gary Becker’s paper “Irrational Behavior and Economic Theory” provides an extreme example of this point. It shows how we can derive “well-behaved” demand and supply functions from individual agents (households and firms) using “irrational” decision rules. This result is by no means a necessity: there are cases where irrational behavior will lead to unconventional demand and supply functions, and because of income effects even rational behavior at the individual level can generate upward-sloping demand curves. Generally speaking, institutions matter: the way exchanges are organized will determine the aggregate outcome for a given profile of preferences and production costs.
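To see the kind of result Becker has in mind, here is a minimal sketch with made-up numbers (a stylized rendering, not Becker’s own setup): each household picks a point at random on its budget line, so no individual maximizes anything, yet average demand for a good still falls when its price rises, simply because a higher price shrinks the feasible set.

```python
import random

# Stylized version of Becker's (1962) point; all numbers are hypothetical.
random.seed(1)
INCOME = 100.0            # hypothetical household income
N_HOUSEHOLDS = 10_000     # the price of the other good is held fixed throughout

def random_quantity_good1(p1):
    """'Irrational' rule: spend a uniformly random share of income on good 1."""
    share = random.random()
    return share * INCOME / p1

for p1 in (1.0, 2.0, 4.0, 8.0):
    avg = sum(random_quantity_good1(p1) for _ in range(N_HOUSEHOLDS)) / N_HOUSEHOLDS
    print(f"p1 = {p1:>3}: average demand for good 1 = {avg:6.1f}")
```

In expectation each household spends half of its income on each good, so average demand for good 1 is roughly INCOME / (2 × p1), which slopes downward in p1 even though no individual household is “rational” in the usual sense.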

All of this depends on the claim that economists are not interested in explaining individual choices. Economists with the strongest revealed-preference stance are likely to agree with this claim. But many economists are likely to disagree, considering that accounting for individual choices is necessary to understand aggregate outcomes such as a financial crisis. More generally, I would argue that attempting to explain individual choices can hardly be avoided in the numerous cases where multiple equilibria exist. The point is that to explain why a given equilibrium has been selected, it will most of the time be necessary to understand how individuals make choices. Here, whether choices are habit- or calculus-based, conscious or automatic, and so on, may matter. For instance, Thomas Schelling famously pointed out in The Strategy of Conflict the importance of focal points in accounting for the way people are able to coordinate without communicating. As Schelling made clear, focal points are determined neither by the mathematical properties of the game nor by purely instrumental considerations; they depend on cultural, social and aesthetic features.

A slightly more complex but even more relevant example, especially in industrial organization, is the existence of multiple (perfect Bayesian) equilibria in incomplete information games. In such games, one player (the “principal”) does not know the other player’s (the “agent’s”) type. The agent’s choice may sometimes convey information to the principal and help him identify the agent’s type. Such games typically have multiple equilibria, some of them separating and others pooling. Which equilibrium is implemented is partially determined by the way the principal interprets the agent’s choice. Under a separating equilibrium, the principal interprets the agent’s choice in such a way that it provides him with information about the agent’s type. This is not the case under a pooling equilibrium. Of course, since under a pooling equilibrium all agents behave the same way whatever their type, observed behavior cannot serve as a basis to infer the agent’s type. But the fact that all agents behave the same way is itself a rational response to their own understanding of the way the principal will interpret their choice in equilibrium.
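The distinction can be stated a bit more formally; what follows is a generic two-type formulation, not tied to any particular model. Let the agent’s type be $t \in \{L, H\}$ with prior $\mu_0 = \Pr(t = H)$, let $m^*(t)$ denote the agent’s equilibrium choice, and let $\mu(\cdot \mid m)$ be the principal’s posterior belief after observing choice $m$:

```latex
% Separating equilibrium: types choose differently, so the choice reveals the type.
\text{Separating: } m^*(L) \neq m^*(H) \;\Rightarrow\; \mu\bigl(H \mid m^*(H)\bigr) = 1, \quad \mu\bigl(H \mid m^*(L)\bigr) = 0
% Pooling equilibrium: types choose alike, so the on-path posterior equals the prior.
\text{Pooling: } m^*(L) = m^*(H) = \bar{m} \;\Rightarrow\; \mu\bigl(H \mid \bar{m}\bigr) = \mu_0
```

In the separating case the observed choice reveals the type; in the pooling case the on-path posterior is just the prior and the choice is uninformative, yet it remains a best response given how each type expects the principal to interpret it.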

My point is thus that in strategic interactions where players have to think about how other players are thinking, it is less clear that economists can safely ignore how people make choices. Given the same set of “fundamentals” (preferences, technology, the distribution of information), different behavioral patterns may arise, and these differences are likely to be due to the way individual agents choose.