What Are Rational Preferences

Scott Sumner has an interesting post on Econlog about economists’ use of what can be called the “Max U” framework, i.e. the approach that describes and/or explains people’s behavior as utility maximization. As he points out, there are many behaviors (offering gifts at Christmas, voting, buying lottery tickets, smoking) that most economists are ready to deem “irrational” even though they seem amenable to some kind of rationalization. Sumner then argues that the problem does not lie with the Max U framework but rather with economists’ “lack of imagination” regarding the ways people can derive utility.

Sumner’s post singles out an issue that has been at the heart of economic theory since the “Marginalist revolution”: what is the nature of utility and of the related concept of preferences? I will not return here to the fascinating history of this issue, which runs from Pareto’s ordinalist reinterpretation of the utility concept to Samuelson’s revealed preference account, whose purpose was to recast the ordinalist framework in purely behaviorist terms. These debates also had much influence on normative economics, as they underlie Robbins’ argument for the rejection of interpersonal comparisons of utility, which ultimately led to Arrow’s impossibility theorem and the somewhat premature announcement of the “death” of welfare economics. From a more contemporary point of view, this issue is directly relevant for modern economics and in particular for the fashionable behavioral economics research program, especially now that it has taken a normative direction. Richard Thaler’s reaction to Sumner’s post on Twitter is thus no surprise:

Richard H Thaler (@R_Thaler), December 26, 2015: “Yes. This version of economics is unfalsifiable. If people can ‘prefer’ $5 to $10 then what are preferences?” (https://twitter.com/R_Thaler/status/680831304175202305)

Thaler’s point is clear: if we are to accept that all the examples given by Sumner are actual cases of utility maximization, then virtually any kind of behavior can be seen as utility maximization. Equivalently, any behavior can be explained by an appropriate set of “rational” preferences with the required properties of consistency and continuity. This point is of course far from new: many scholars have already argued that rational choice theory (whether formulated in terms of utility functions [decision theory for certain and uncertain decision problems] or of choice functions [revealed preference theory]) is unfalsifiable: it is virtually always possible to change the description of a decision problem so as to make the observed behavior consistent with some set of axioms. In the context of revealed preference theory, this point is wonderfully made by Bhattacharyya et al. on the basis of Amartya Sen’s long-standing critique of the rationality-as-consistency approach. As they point out, revealed preference theory suffers from an underdetermination problem: for any set of inconsistent choices (according to some consistency axiom), it is in practice impossible to know whether the inconsistency is due to “true”, intrinsic irrationality or is just the result of an improper specification of the decision problem. In the context of expected utility theory, John Broome’s discussion of the Allais paradox clearly shows that reconciliation is in principle possible on the basis of a redefinition of the outcome space.

Therefore, the fact that rational choice theory may be unfalsifiable is widely acknowledged. Is this a problem? Not so much if we acknowledge that falsification is no longer regarded as the undisputed demarcation criterion for defining science (as physicists are currently discovering). But even if we set this point of philosophy of science aside, the answer to the above question also depends on what we consider to be the relevant purpose of rational choice theory (and, more generally, of economics) and, relatedly, on what the scientific meaning of the utility and preference concepts should be. In particular, a key issue is whether or not a theory of individual rationality should be part of economics. Three positions seem possible: the “not at all” thesis, the “weakly positive” thesis and the “strongly positive” thesis:

A) Not at all thesis: Economics is not concerned with individual rationality and therefore does not need a theory of individual rationality. Preferences and utility are concepts used to describe choices (actual or counterfactual) made by economic agents through formal (mathematical) statements useful for dealing with authentic economic issues (e.g. under what conditions does an equilibrium with such and such properties exist?).

B) Weakly positive thesis: Economics builds on a theory of individual rationality, but this theory is purely formal. It equates rationality with consistency of choices and/or preferences. Therefore, it does not specify the content of rational preferences but sets minimal formal conditions that the preference relation or the choice function should satisfy. Preferences and utility are more likely (though not necessarily) to be defined in terms of choices.

C) Strongly positive thesis: Economics builds on a theory of individual rationality, and parts of economics actually consist in developing such a theory. The theory is substantive: it should state what rational preferences are, not only define consistency properties for the preference relation. Preferences, and in particular utility, cannot be defined exclusively in terms of choices; they should refer to inner states of mind (e.g. “experienced utility”) which are in principle accessible through psychological and neurological techniques and methods.

Intuitively, I would say that, if asked, most economists would entertain something like view (B). Interestingly however, this is probably the only view that is completely unsustainable after careful inspection! The problem is the one emphasized by Thaler and others: if rational choice theory is a theory of individual rationality, then it is empirically empty. The only way to circumvent the problem is the following. Consider any decision problem Di faced by some agent i. Denote T the theory or model used by the economist to describe this decision problem (T can be formulated either in an expected utility framework or in a revealed preference framework). A theory T specifies, for any Di, the permissible implications in terms of behavior (i.e. what i can do given the minimal conditions and constraints defined in T). Denote I the set of such implications and S any subset of these implications. Then a theory T corresponds to a mapping T: D → I, with D the set of all decision problems, or, equivalently, T(Di) = S. Suppose that for a theory T and a decision problem Di we observe a behavior b such that b is not in S. This is not exceptional, as any behavioral economist will tell you. What can we do? The first solution is the (naïve) Popperian one: discard T and adopt an alternative theory T’. This is the behavioral economists’ solution when they defend cumulative prospect theory against expected utility theory. The other solution is to stipulate that i is actually not facing decision problem Di but rather decision problem Di’, where T(Di’) = S’ and b ∈ S’. If we adopt this solution, then the only way to make T falsifiable is to limit the range of admissible redefinitions of any decision problem. If theory T is unable to account for some behavior b under the whole range of admissible descriptions, then it is falsified. However, it is clear that defining such a range of admissible descriptions requires substantive assumptions about what rationalizable preferences are. Hence, this leads one toward view (C)!
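To make the structure of this argument concrete, here is a minimal sketch in code. The decision problems, the permitted-behavior sets and the admissible redescriptions are all hypothetical placeholders of my own, not drawn from any actual theory; the point is only to show why falsifiability hinges on restricting the admissible descriptions.

```python
# Minimal sketch of the falsification logic discussed above.
# A "theory" maps a decision problem (here just a label) to the set S of
# behaviors it permits. All names and data below are illustrative only.

def theory(decision_problem):
    """Hypothetical theory T: returns the set of permitted behaviors."""
    permitted = {
        "D_i": {"take $10"},                    # the original description
        "D_i_prime": {"take $10", "take $5"},   # a redescribed problem
    }
    return permitted.get(decision_problem, set())

def is_falsified(observed_behavior, admissible_descriptions):
    """T is falsified only if no admissible redescription accounts for b."""
    return all(observed_behavior not in theory(d) for d in admissible_descriptions)

# With an unrestricted range of redescriptions, the observed behavior
# "take $5" can always be rationalized, so T is never falsified:
print(is_falsified("take $5", ["D_i", "D_i_prime"]))  # False
# Restricting the admissible descriptions restores falsifiability:
print(is_falsified("take $5", ["D_i"]))               # True
```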

Views (A) and (C) are clearly incompatible. The former has been defended by contemporary proponents of variants of revealed preference theory such as Ken Binmore or Gul and Pesendorfer. Don Ross provides the most sophisticated philosophical defense of this view. View (C) is more likely to be endorsed by behavioral economists and also by some heterodox economists. Both however have a major (and, for some scholars, problematic) implication once the rationality concept is no longer understood positively (are people rational?) but from an evaluative and normative perspective (what is it to be rational?). Indeed, one virtue of view (B) is that it nicely ties together positive and normative economics. In particular, if it appears that people are sufficiently rational, then the consumer sovereignty principle makes it possible to form welfare judgments on the basis of people’s choices. But this is no longer true under views (A) and (C). Under the former, it is not clear why we should grant any normative significance to the fact that economic agents make consistent choices, in particular because these agents need not be flesh-and-bones persons (they can be temporal selves). Welfare judgments can still be formally made, but they are not grounded in any theory of rationality. A normative account of agency and personality is likely to be required to make any convincing normative claim. View (C) obviously cannot build on the consumer sovereignty principle once it is recognized that people do not always choose in their personal interest. Indeed, this is the very point of so-called “libertarian paternalism” and, more broadly, of the normative turn of behavioral economics. It has to face, however, the difficulty that positive economics today does not offer any theory of “substantively rational preferences”. The latter is rather to be found in moral philosophy and possibly in the natural sciences. In any case, economics cannot do the job alone.

Christmas, Economics and the Impossibility of Unexpected Events


Each year, as Christmas approaches, economists like to remind everyone that making gifts is socially inefficient. The infamous “Christmas deadweight loss” refers to the fact that the allocation of resources is suboptimal: had people been given the equivalent value in cash, they would have bought different things from the ones they received as gifts at Christmas. This is a provocative result, but it follows from straightforward (though clearly shortsighted) economic reasoning. I would like here to point out another disturbing result that comes from economic theory. Though it is not specific to the Christmas period, it is rather less straightforward, which makes it much more interesting. It concerns the (im)possibility of surprising people.

I will take for granted that one of the points of a Christmas present is to try to surprise the person you are giving the gift to. Of course, many people make wish lists, but the point is precisely that 1) one will rarely expect to receive all the items one has put on one’s list and 2) the list may be fairly open, or at least give others an idea of the kind of presents one wishes to receive without being too specific. In any case, apart from Christmas, there are several other social institutions whose value partially derives from the possibility of surprising people (think of April fools). However, on the basis of the standard rationality assumptions made in economics, it is clear that surprising people is simply impossible, indeed nonsensical.

I start with some definitions. An event is a set of states of the world in which each person behaves in a certain way (e.g. makes some specific gifts to others) and holds some specific conjectures or beliefs about what others are doing and believing. I call an unexpected event an event to which at least one person attributes a null prior probability of occurring. An event is impossible if it is inconsistent with the people’s theory (or model) of the situation they are in. The well-known “surprise exam paradox” gives a great illustration of these definitions. A version of it runs as follows:

The Surprise Exam Paradox: At day D0, the teacher T announces to his students S that he will give them a surprise exam either at D1 or at D2. Denote En the event “the exam is given at day Dn” (n = 1, 2) and assume that the students S believe the teacher T’s announcement. They also know that T really wants to surprise them, and they know that he knows that. Finally, we assume that S and T have common knowledge of their reasoning abilities. On this basis, the students reason in the following way:

SR1: If the exam is not given at D1, it will necessarily be given at D2 (i.e. E2 has probability 1 according to S if not E1). Hence, S will not be surprised.
SR2: S knows that T knows SR1.
SR3: Therefore, T will give the exam at D1 (i.e. E1 has probability 1 according to S). Hence, S will not be surprised.
SR4: S knows that T knows SR3.
SR5: S knows that T knows SR1-SR4; hence the initial announcement is impossible.

The final step of S’s reasoning (SR5) indicates that there is no event En that is both unexpected and consistent with S’s theory of the situation as represented by the assumptions stated in the description of the case. Still, suppose that T gives the exam at D2; then the students will indeed be surprised, but in a very different sense from the one we have been discussing. The surprise exam paradox is a paradox because whatever T decides to do is inconsistent with at least one of the premises constitutive of the theory of the situation. In other words, the students are surprised because they have the wrong theory of the situation, but this is quite “unfair” since the theory is the one the modeler has given to them.
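The students’ reasoning can be rendered as a simple elimination argument. The sketch below is only my own illustration of that argument (the representation of days and the elimination rule are not taken from any formal treatment in the post):

```python
# Illustrative sketch of the students' elimination reasoning (SR1-SR5).
# A day can host a surprise exam only if, once it is the last candidate left,
# the students do not already expect the exam there with probability 1.

def surprise_days(days):
    """Days on which the exam could still come as a surprise."""
    candidates = list(days)
    while candidates:
        # The last remaining candidate cannot be a surprise: if the exam has
        # not been given before, the students predict it for sure
        # (SR1 eliminates D2, then SR3 eliminates D1, and so on for longer horizons).
        eliminated = candidates.pop()
        print(f"{eliminated} eliminated: the exam would be fully expected there")
    return candidates

print(surprise_days(["D1", "D2"]))
# Output ends with []: no day survives, so the announcement is inconsistent
# with the students' theory of the situation.
```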

Now, the point is that surprise is similarly impossible in economics under the standard assumption of rational expectations. Actually, this follows directly from the way this assumption is stated in macroeconomics: an agent’s expectations are rational if they correspond to the actual state of the world on average. The clause “on average” means that, for any given variable X, the difference between the agent’s expectation of the value of X and the actual value of X is captured by a random error term of mean 0. This error term is assumed to follow a probability distribution that is known by the agent. Hence, while the agent’s rational expectation may actually be wrong, he will never be surprised, whatever the actual value of X. This is because he knows the probability distribution of the error term: he expects to be wrong according to this distribution, even though he expects to be right on average.
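In symbols (a standard way of writing the hypothesis, with my own notation): the agent’s expectation $X^e$ satisfies $X = X^e + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$, and the distribution of $\varepsilon$ is known to the agent. Every realization of $X$ thus falls within a distribution the agent already entertains, so no realization receives prior probability zero: the agent can be wrong, but never surprised in the sense defined above.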

However, things are more interesting in the strategic case, i.e. when the value of X depends on the behavior of each person in the population, the latter itself depending on one’s expectations about others’ behavior and expectations. Then, the rational expectations hypothesis amounts to assuming some kind of consistency between the persons’ conjectures (see this previous post on this point). At the most general level, we assume that the value of X depends (deterministically or stochastically) on the profile of actions s = (s1, s2, …, sn) of the n agents in the population, i.e. X = f(s). We also assume that it is mutual knowledge that each person is rational: she chooses the action that maximizes her expected utility given her beliefs about others’ actions, hence si = si(bi) for every agent i in the population, with bi agent i’s conjecture about others’ actions. It follows that it is mutual knowledge that X = f(s1(b1), s2(b2), …, sn(bn)). An agent i’s conjecture is rational if bi* = (s1*, …, si-1*, si+1*, …, sn*), with sj* the actual behavior of agent j. Denote s* = (s1*(b1*), s2*(b2*), …, sn*(bn*)) the resulting strategy profile. Since there is mutual knowledge of rationality, the fact that one knows s* implies that one knows each bi* (assuming a one-to-one mapping between conjectures and actions); hence the profile of rational conjectures b* = (b1*, b2*, …, bn*) is also mutually known. By the same reasoning, k-order mutual knowledge of rationality entails k-order mutual knowledge of b*, and common knowledge of rationality entails common knowledge of b*. Therefore, everyone correctly predicts X, and this is common knowledge.
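As a toy illustration of this fixed-point logic, consider a Cournot duopoly (an example of my own devising, not taken from the literature discussed here): each firm’s output depends on its conjecture about the other’s output, and when conjectures are rational in the sense just defined, both firms predict the market price X exactly.

```python
# Toy illustration (my own example): with rational conjectures, every agent
# predicts X exactly. Cournot duopoly with inverse demand P = a - (s1 + s2)
# and constant marginal cost c for both firms.

a, c = 10.0, 1.0

def best_response(conjecture_about_rival):
    """Profit-maximizing quantity given a conjecture about the rival's output."""
    return max(0.0, (a - c - conjecture_about_rival) / 2)

# Rational conjectures form a fixed point: each conjecture equals the rival's
# actual choice. Iterated best responses converge to that fixed point here.
b1 = b2 = 0.0
for _ in range(200):
    s1, s2 = best_response(b1), best_response(b2)
    b1, b2 = s2, s1  # each firm's conjecture tracks the other's actual output

s1, s2 = best_response(b1), best_response(b2)
X_actual = a - (s1 + s2)        # realized price
X_predicted_1 = a - (s1 + b1)   # price firm 1 expects, given its conjecture
X_predicted_2 = a - (b2 + s2)   # price firm 2 expects, given its conjecture
print(X_actual, X_predicted_1, X_predicted_2)  # numerically equal: no surprise
```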

Another way to put this point is proposed by Robert Aumann and Jacques Dreze in an important paper where they show the formal equivalence between the common prior assumption and the rational expectations hypothesis. Basically, they show that a rational expectations equilibrium is equivalent to a correlated equilibrium, i.e. a (mixed-)strategy profile determined by the probability distribution of some random device and in which players maximize expected utility. As shown in another important paper by Aumann, two sufficient conditions for obtaining a correlated equilibrium are common knowledge of Bayesian rationality and a common prior over the strategy profiles that can be implemented (the common prior reflects the commonly known probability distribution of the random device). This ultimately leads to another important result proved by Aumann: persons with a common prior and common knowledge of their ex post conjectures cannot “agree to disagree”. In a world where people have a common prior over some state space and common knowledge of their rationality or of their ex post conjectures (which here is the same thing), unexpected events are simply impossible. One already knows everything that can happen and thus ascribes a strictly positive probability to any possible event. This is nothing but the rational expectations hypothesis.
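Aumann’s “agreeing to disagree” result can be stated compactly (standard formulation, my notation): if two agents share a common prior $p$ over a state space $\Omega$ and if, at some state $\omega$, their posteriors about an event $E$, $q_1 = p(E \mid \mathcal{P}_1(\omega))$ and $q_2 = p(E \mid \mathcal{P}_2(\omega))$, are common knowledge, then $q_1 = q_2$, where $\mathcal{P}_i(\omega)$ is the cell of agent $i$’s information partition containing $\omega$.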

Logicians and game theorists who have dealt with Aumann’s theorems have shown that the latter build on a formal structure equivalent to the well-known S5 system of modal logic. The axioms of this system imply, among other things, logical omniscience (an agent knows all logical truths and all the logical implications of what he knows) and, more controversially, negative introspection (when one does not know something, one knows that one does not know it). Added to the fact that everything is captured in terms of knowledge (i.e. true belief), it is intuitive that such a system is unable to deal with unexpected events and surprise. From a logical point of view, this problem can be answered simply by changing the axioms and assumptions of the formal system. Consider the surprise exam story once again. The paradox seems to disappear if we give up the assumption of common knowledge of reasoning abilities. For instance, we may suppose that the teacher knows the reasoning abilities of the students but not that the students know that he knows them. In this case, steps SR2, SR3 and SR4 cannot occur. Or we may suppose that the teacher knows the reasoning abilities of the students and that the students know that he knows them, but that the teacher does not know that they know that he knows. In this case, step SR5 in the students’ reasoning cannot occur. In both cases, the announcement is no longer inconsistent with the students’ and the teacher’s knowledge. This is not completely satisfactory, however, for at least two reasons: first, the plausibility of the result depends on completely ad hoc epistemic assumptions. Second, the very nature of the formal systems of standard modal logic implies that the agent’s theory of a given situation captures everything that is necessarily true. In the revised version of the surprise exam example above, it is necessarily true that an exam will be given either at day D1 or at day D2, so everyone must know that, and the exam is not a surprise in the sense of an unexpected event.
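Before turning to how these difficulties can be avoided, it may help to recall the relevant S5 principles in standard notation, where $K\varphi$ reads “the agent knows $\varphi$”:

Distribution (K): $K(\varphi \to \psi) \to (K\varphi \to K\psi)$
Truth (T): $K\varphi \to \varphi$
Positive introspection (4): $K\varphi \to KK\varphi$
Negative introspection (5): $\neg K\varphi \to K\neg K\varphi$
Necessitation: from $\vdash \varphi$, infer $\vdash K\varphi$

Necessitation and Distribution together deliver logical omniscience, and axiom 5 is the negative introspection property mentioned above.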

The only way to avoid these difficulties is to enter the fascinating but quite complex realm of non-monotonic modal logic and belief revision theories. In practice, this consists in giving up the assumption that agents are logically omniscient, in the sense that they may not know something that is necessarily true. Faced with an inconsistency, an agent will adopt a belief revision procedure so as to make his beliefs and knowledge consistent with an unexpected event. In other words, though the agent does not expect to be surprised, it is possible to account for how he deals with unexpected information. As far as I know, there have been very few attempts in economics to build on such non-monotonic formalizations to tackle expectation formation and revision, in spite of the growing importance of the macroeconomic literature on learning. Game theorists have been more willing to enter this territory (see this paper by Michael Bacharach for instance), but much remains to be done.