What Are Rational Preferences

Scott Sumner has an interesting post on Econlog about economists’ use of what can be called the “Max U” framework, i.e. the approach of describing and/or explaining people’s behavior as utility maximization. As he points out, there are many behaviors (offering gifts at Christmas, voting, buying lottery tickets, smoking) that most economists are ready to deem “irrational” even though they seem amenable to some kind of rationalization. Sumner then argues that the problem lies not with the Max U framework itself but rather in economists’ “lack of imagination” regarding the ways people can derive utility.

Sumner’s post singles out an issue that has lain at the heart of economic theory since the “Marginalist revolution”: what is the nature of utility and of the related concept of preferences? I will not revisit here the fascinating history of this issue, which runs from Pareto’s ordinalist reinterpretation of the utility concept to Samuelson’s revealed preference account, whose purpose was to frame the ordinalist framework in purely behaviorist terms. These debates also had much influence on normative economics, as they underlie Robbins’ argument for the rejection of interpersonal comparisons of utility, which ultimately led to Arrow’s impossibility theorem and the somewhat premature announcement of the “death” of welfare economics. From a more contemporary point of view, this issue is directly relevant to modern economics and in particular to the fashionable behavioral economics research program, especially now that it has taken a normative direction. Richard Thaler’s reaction to Sumner’s post on Twitter is thus no surprise:

<blockquote class="twitter-tweet" lang="fr"><p lang="en" dir="ltr">Yes. This version of economics is unfalsifiable. If people can &quot;prefer&quot; $5 to $10 then what are preferences? <a href="https://t.co/Cn1XQoIzsh">https://t.co/Cn1XQoIzsh</a></p>&mdash; Richard H Thaler (@R_Thaler) <a href="https://twitter.com/R_Thaler/status/680831304175202305">26 December 2015</a></blockquote>

Thaler’s point is clear: if we are to accept that all the examples given by Sumner are actual cases of utility maximization, then virtually any kind of behavior can be seen as utility maximization. Equivalently, any behavior can be explained by an appropriate set of “rational” preferences with the required properties of consistency and continuity. This point is of course far from new: many scholars have already argued that rational choice theory (whether formulated in terms of utility functions [decision theory for certain and uncertain decision problems] or of choice functions [revealed preference theory]) is unfalsifiable: it is virtually always possible to change the description of a decision problem so as to make the observed behavior consistent with some set of axioms. In the context of revealed preference theory, this point is wonderfully made by Bhattacharyya et al. on the basis of Amartya Sen’s long-standing critique of the rationality-as-consistency approach. As they point out, revealed preference theory suffers from an underdetermination problem: for any set of inconsistent choices (according to some consistency axiom), it is in practice impossible to know whether the inconsistency is due to “true” and intrinsic irrationality or is just the result of an improper specification of the decision problem. In the context of expected utility theory, John Broome’s discussion of the Allais paradox clearly shows that reconciliation is in principle possible on the basis of a redefinition of the outcome space.

Therefore, the fact that rational choice theory may be unfalsifiable is widely acknowledged. Is this a problem? Not so much if we accept that falsification is no longer regarded as the undisputed demarcation criterion for defining science (as physicists are currently discovering). But even if we set this philosophy-of-science point aside, the answer to the above question also depends on what we consider to be the relevant purpose of rational choice theory (and more generally of economics) and, relatedly, on what the scientific meaning of the utility and preference concepts should be. In particular, a key issue is whether or not a theory of individual rationality should be part of economics. Three positions seem possible: the “not at all” thesis, the “weakly positive” thesis and the “strongly positive” thesis:

A) Not at all thesis: Economics is not concerned with individual rationality and therefore does not need a theory of individual rationality. Preferences and utility are concepts used to describe the choices (actual or counterfactual) made by economic agents through formal (mathematical) statements useful for dealing with authentic economic issues (e.g. under what conditions does an equilibrium with such and such properties exist?).

B) Weakly positive thesis: Economics builds on a theory of individual rationality, but this theory is purely formal. It equates rationality with consistency of choices and/or preferences. Therefore, it does not specify the content of rational preferences but sets minimal formal conditions that the preference relation or the choice function should satisfy. Preferences and utility are more likely (though not necessarily) to be defined in terms of choices.

C) Strongly positive thesis: Economics builds on a theory of individual rationality, and parts of economics actually consist in developing such a theory. The theory is substantive: it should state what rational preferences are, not merely define consistency properties for the preference relation. Preferences and in particular utility cannot be defined exclusively in terms of choices; they should refer to inner states of mind (e.g. “experienced utility”) which are in principle accessible through psychological and neurological techniques and methods.

Intuitively, I would say that if asked, most economists would entertain something like view (B). Interestingly, however, this is probably the only view that is completely unsustainable after careful inspection! The problem is the one emphasized by Thaler and others: if rational choice theory is a theory of individual rationality, then it is empirically empty. The only way to circumvent the problem is the following: consider any decision problem Di faced by some agent i. Denote T the theory or model used by the economist to describe this decision problem (T can be formulated either in an expected utility framework or in a revealed preference framework). A theory T specifies, for any Di, the permissible implications in terms of behavior (i.e. what i can do given the minimal conditions and constraints defined in T). Denote I the set of such implications and S any subset of these implications. Then a theory T corresponds to a mapping T: D → I, with D the set of all decision problems, or, equivalently, T(Di) = S. Suppose that for a theory T and a decision problem Di we observe a behavior b such that b is not in S. This is not exceptional, as any behavioral economist will tell you. What can we do? The first solution is the (naïve) Popperian one: discard T and adopt an alternative theory T’. This is the behavioral economists’ solution when they defend cumulative prospect theory against expected utility theory. The other solution is to stipulate that i is actually not facing decision problem Di but rather decision problem Di’, where T(Di’) = S’ and b ∈ S’. If we adopt this solution, then the only way to make T falsifiable is to limit the range of admissible redefinitions of any decision problem. If theory T is unable to account for some behavior b under the full range of admissible descriptions, then it will be falsified.
However, it is clear that defining such a range of admissible descriptions requires making substantive assumptions about what rationalizable preferences are. Hence, this leads one toward view (C)!
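The redescription move can be made concrete with a toy sketch. The numbers and the “warm glow” component below are my own illustrative assumptions (echoing Thaler’s “prefer $5 to $10” jibe), not anything taken from the exchange itself:

```python
# Sketch: the same observed choice is irrational or rational depending on
# how the outcome space of the decision problem is described.

def argmax(options, utility):
    """Return the option with the highest utility."""
    return max(options, key=utility)

# Description D: outcomes are amounts of money only.
def u_money(x):
    return x

# Under D, the theory predicts taking $10; an observed choice of $5
# would "falsify" utility maximization.
print(argmax([5, 10], u_money))  # -> 10

# Description D': outcomes also carry a non-monetary component, say the
# warm glow of leaving the larger amount to someone else. The pair is
# (money, warm_glow); the weight 7 is purely illustrative.
def u_rich(outcome):
    money, warm_glow = outcome
    return money + 7 * warm_glow

options = [(5, 1), (10, 0)]       # taking $5 yields warm glow, $10 does not
print(argmax(options, u_rich))   # -> (5, 1): the same agent now maximizes
                                  # utility by "preferring" $5 to $10
```

As long as the modeler is free to enrich the outcome space this way, any behavior can be rationalized after the fact, which is precisely why restricting the admissible descriptions matters.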

Views (A) and (C) are clearly incompatible. The former has been defended by contemporary proponents of variants of revealed preference theory such as Ken Binmore or Gul and Pesendorfer. Don Ross provides the most sophisticated philosophical defense of this view. View (C) is more likely to be endorsed by behavioral economists and also by some heterodox economists. Both views, however, have a major (and, for some scholars, problematic) implication once the rationality concept is no longer understood positively (are people rational?) but from an evaluative and normative perspective (what is it to be rational?). Indeed, one virtue of view (B) is that it nicely ties together positive and normative economics. In particular, if it appears that people are sufficiently rational, then the consumer’s sovereignty principle makes it possible to make welfare judgments on the basis of people’s choices. But this is no longer true under views (A) and (C). Under the former, it is not clear why we should grant any normative significance to the fact that economic agents make consistent choices, in particular because these agents need not be flesh-and-bones persons (they can be temporal selves). Welfare judgments can still be formally made, but they are not grounded in any theory of rationality. A normative account of agency and personality is likely to be required to make any convincing normative claim. View (C) obviously cannot build on the consumer’s sovereignty principle once it is recognized that people do not always choose in their personal interest. Indeed, this is the very point of so-called “libertarian paternalism” and more broadly of the normative turn of behavioral economics. It faces, however, the difficulty that positive economics today does not offer any theory of “substantively rational preferences”. The latter is rather to be found in moral philosophy and possibly in the natural sciences. In any case, economics cannot do the job alone.


Christmas, Economics and the Impossibility of Unexpected Events


Each year, as Christmas approaches, economists like to remind everyone that making gifts is socially inefficient. The infamous “Christmas deadweight loss” corresponds to the fact that the allocation of resources is suboptimal because people would have bought different things than the ones they received as gifts at Christmas had they been given the equivalent value in cash. This is a provocative result, but it follows from straightforward (though clearly shortsighted) economic reasoning. I would like here to point out another disturbing result that comes from economic theory. Though it is not specific to the Christmas period, it is far less straightforward, which makes it much more interesting. It is related to the (im)possibility of surprising people.

I will take for granted that one of the points of a Christmas present is to try to surprise the person you are giving the gift to. Of course, many people make wish lists, but the point is precisely that 1) one will rarely expect to receive all the items one has indicated on one’s list and 2) the list may be fairly open, or at least give others an idea of the kind of presents one wishes to receive without being too specific. In any case, apart from Christmas, there are several other social institutions whose value is partially derived from the possibility of surprising people (think of April fools). However, on the basis of the standard rationality assumptions made in economics, it is clear that surprising people is simply impossible and even nonsensical.

I start with some definitions. An event is a set of states of the world in which each person behaves in a certain way (e.g. makes some specific gifts to others) and holds some specific conjectures or beliefs about what others are doing and believing. I call an unexpected event an event to which at least one person assigns a prior probability of zero. An event is impossible if it is inconsistent with the people’s theory (or model) of the situation they are in. The well-known “surprise exam paradox” gives a great illustration of these definitions. A version of this example is as follows:

The Surprise Exam Paradox: At day D0, the teacher T announces to his students S that he will give them a surprise exam either at D1 or at D2. Denote En the event “the exam is given at day Dn” (n = 1, 2) and assume that the students S believe the teacher T’s announcement. They also know that T really wants to surprise them, and they know that he knows that. Finally, we assume that S and T have common knowledge of their reasoning abilities. On this basis, the students reason in the following way:

SR1: If the exam is not given at D1, it will be necessarily given at D2 (i.e. E2 has probability 1 according to S if not E1). Hence, S will not be surprised.
SR2: S knows that T knows SR1.
SR3: Therefore, T will give the exam at D1 (i.e. E1 has probability 1 according to S). Hence, S will not be surprised.
SR4: S knows that T knows SR3.
SR5: S knows that T knows SR1-SR4, hence the initial announcement is impossible.

The final step of S’s reasoning (SR5) indicates that there is no event En that is both unexpected and consistent with S’s theory of the situation as represented by the assumptions stated in the description of the case. Still, suppose that T gives the exam at D2; then indeed the students will be surprised, but in a very different sense than the one we have figured out. The surprise exam paradox is a paradox because whatever T decides to do is inconsistent with at least one of the premises constitutive of the theory of the situation. In other words, the students are surprised because they have the wrong theory of the situation, but this is quite “unfair” since the theory is the one the modeler has given to them.

Now, the point is that surprise is similarly impossible in economics under the standard assumption of rational expectations. Actually, this directly follows from how this assumption is stated in macroeconomics: an agent’s expectations are rational if they correspond to the actual state of the world on average. The last clause “on average” means that for any given variable X, the difference between the agent’s expectation of the value of X and the actual value of X is captured by a random error variable of mean 0. This variable is assumed to follow some probabilistic distribution that is known by the agent. Hence, while the agent’s rational expectation may actually be wrong, he will never be surprised, whatever the actual value of X. This is due to the fact that he knows the probability distribution of the error term and hence expects to be wrong according to this probability distribution even though he expects to be right on average.
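A minimal simulation of this point (the true value and the standard deviation are illustrative assumptions): the agent’s forecast error has a known distribution with mean 0, so each realization is “wrong”, yet none falls outside what the agent already assigns positive probability to.

```python
import random

random.seed(0)

# Sketch: a rational expectation is right "on average". The agent knows the
# distribution of the error term, so no realization of X is unexpected.
true_x = 100.0   # actual value of the variable X (illustrative)
sigma = 5.0      # known standard deviation of the error term (illustrative)

errors = [random.gauss(0, sigma) for _ in range(100_000)]
forecasts = [true_x + e for e in errors]

# Wrong in (almost) every period, right on average:
mean_error = sum(errors) / len(errors)
print(round(mean_error, 2))  # close to 0

# A "surprise" would be a realization the agent deemed impossible; here,
# errors beyond 6 sigma are essentially never drawn, and in any case the
# agent assigns them positive probability density under the known Gaussian.
surprising = [e for e in errors if abs(e) > 6 * sigma]
print(len(surprising))
```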

However, things are more interesting in the strategic case, i.e. when the value of X depends on the behavior of each person in the population, the latter itself depending on one’s expectations about others’ behavior and expectations. Then the rational expectations hypothesis is akin to assuming some kind of consistency between the persons’ conjectures (see this previous post on this point). At the most general level, we assume that the value of X (deterministically or stochastically) depends on the profile of actions s = (s1, s2, …, sn) of the n agents in the population, i.e. X = f(s). We also assume that there is mutual knowledge that each person is rational: she chooses the action that maximizes her expected utility given her beliefs about others’ actions, hence si = si(bi) for each agent i in the population, with bi agent i’s conjecture about others’ actions. It follows that it is mutual knowledge that X = f(b1, b2, …, bn). An agent i’s conjecture is rational if bi* = (s1*, …, si-1*, si+1*, …, sn*), with sj* the actual behavior of agent j. Denote s* = (s1*(b1*), s2*(b2*), …, sn*(bn*)) the resulting strategy profile. Since there is mutual knowledge of rationality, the fact that one knows s* implies that one knows each bi* (assuming that there is a one-to-one mapping between conjectures and actions); hence the profile of rational conjectures b* = (b1*, b2*, …, bn*) is also mutually known. By the same reasoning, k orders of mutual knowledge of rationality entail k orders of mutual knowledge of b*, and common knowledge of rationality entails common knowledge of b*. Therefore, everyone correctly predicts X and this is common knowledge.
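One simple way to see the fixed-point logic is a “beauty contest” game (my example, not in the post), where each agent plays two-thirds of the conjectured average action: a profile of conjectures is rational exactly when it reproduces the behavior it induces.

```python
# Sketch of rational conjectures in a strategic setting, using an
# illustrative "beauty contest" game: each agent chooses
# s_i = (2/3) * (conjectured average action).

def best_response(conjectured_avg):
    """The action induced by a conjecture about the average action."""
    return (2 / 3) * conjectured_avg

def iterate_conjectures(initial_avg, rounds):
    """Repeatedly replace the conjecture by the behavior it induces."""
    avg = initial_avg
    for _ in range(rounds):
        avg = best_response(avg)
    return avg

# Starting from any conjecture, the revision process converges to the unique
# profile where conjectures and actions agree (here, everyone plays 0), i.e.
# the rational expectations fixed point:
print(iterate_conjectures(50.0, 200))  # ~0.0
fixed_point = 0.0
print(best_response(fixed_point) == fixed_point)  # the conjecture is correct
```

The iteration is only a heuristic way of locating the fixed point; the argument in the text is about the consistency condition itself, not about any particular adjustment process.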

Another way to put this point is proposed by Robert Aumann and Jacques Drèze in an important paper where they show the formal equivalence between the common prior assumption and the rational expectations hypothesis. Basically, they show that a rational expectations equilibrium is equivalent to a correlated equilibrium, i.e. a (mixed-)strategy profile determined by the probabilistic distribution of some random device and where players maximize expected utility. As shown in another important paper by Aumann, two sufficient conditions for obtaining a correlated equilibrium are common knowledge of Bayesian rationality and a common prior over the strategy profiles that can be implemented (the common prior reflects the commonly known probabilistic distribution of the random device). This ultimately leads to another important result proved by Aumann: persons with a common prior and common knowledge of their ex post conjectures cannot “agree to disagree”. In a world where people have a common prior over some state space and common knowledge of their rationality or of their ex post conjectures (which here is the same thing), unexpected events are simply impossible. One already knows everything that can happen and thus will ascribe a strictly positive probability to any possible event. This is nothing but the rational expectations hypothesis.

Logicians and game theorists who have dealt with Aumann’s theorems have proven that the latter build on a formal structure equivalent to the well-known S5 system in modal logic. The axioms of this formal system imply, among other things, logical omniscience (an agent knows all logical truths and the logical implications of what he knows) and, more controversially, negative introspection (when one does not know something, he knows that he does not know it). Added to the fact that everything is captured in terms of knowledge (i.e. true beliefs), it is intuitive that such a system is unable to deal with unexpected events and surprise. From a logical point of view, this problem can be answered simply by changing the axioms and assumptions of the formal system. Consider the surprise exam story once again. The paradox seems to disappear if we give up the assumption of common knowledge of reasoning abilities. For instance, we may suppose that the teacher knows the reasoning abilities of the students but not that the students know that he knows them. In this case, steps SR2, SR3 and SR4 cannot occur. Or we may suppose that the teacher knows the reasoning abilities of the students and that the students know that he knows them, but that the teacher does not know that they know that he knows. In this case, step SR5 in the students’ reasoning cannot occur. In both cases, the announcement is no longer inconsistent with the students’ and teacher’s knowledge. This is not completely satisfactory, however, for at least two reasons: first, the plausibility of the result depends on epistemic assumptions which are completely ad hoc. Second, the very nature of the formal systems of standard modal logic implies that the agent’s theory of a given situation captures everything that is necessarily true.
In the revised version of the surprise exam example above, it is necessarily true that an exam will be given either at day D1 or D2, and thus everyone must know that, and so the exam is not a surprise in the sense of an unexpected event.
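The S5 structure can be made concrete with a standard partition model of knowledge (the states and the partition below are illustrative). Partitional knowledge validates negative introspection mechanically:

```python
# Sketch: knowledge as a partition over states validates the S5 axioms,
# including negative introspection. States and partition are illustrative.

states = {1, 2, 3, 4}
partition = [{1, 2}, {3, 4}]   # the agent cannot distinguish states in a cell

def cell(w):
    """The partition cell containing state w."""
    return next(c for c in partition if w in c)

def K(event):
    """States in which the agent knows the event (her cell is inside it)."""
    return {w for w in states if cell(w) <= event}

# Negative introspection: whenever the agent does not know E, she knows
# that she does not know E, i.e. (states - K(E)) is a subset of K(states - K(E)).
for event in [{1}, {1, 2}, {1, 3}, {2, 3, 4}, states]:
    not_knowing = states - K(event)
    assert not_knowing <= K(not_knowing)
print("negative introspection holds in every case checked")
```

It is precisely this partitional rigidity that rules out genuinely unexpected events: every state the agent might ever face is already in her state space.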

The only way to avoid these difficulties is to enter the fascinating but quite complex realm of non-monotonic modal logic and belief revision theories. In practice, this consists in giving up the assumption that agents are logically omniscient, in the sense that they may not know something that is necessarily true. Faced with an inconsistency, an agent will adopt a belief revision procedure so as to make his beliefs and knowledge consistent with an unexpected event. In other words, though the agent does not expect to be surprised, it is possible to account for how he deals with unexpected information. As far as I know, there have been very few attempts in economics to build on such non-monotonic formalizations to tackle expectations formation and revision, in spite of the growing importance of the macroeconomic literature on learning. Game theorists have been more prone to enter this territory (see this paper by Michael Bacharach for instance), but much remains to be done.

Philosophy of Mind and the Case Against Methodological Individualism

David Glasner wrote an interesting post a few weeks ago about the relationship between the Neoclassical synthesis and the mind-body problem in the philosophy of mind. Glasner contends that the mind-body problem vindicates methodological individualism (MI) in economics. He also argues that because the mind-body problem does not imply reductionism (i.e. mental states are identical to brain states), the representative agent assumption in macroeconomics is dubious: the latter basically reduces aggregate phenomena to the optimal plans of some rational agent.

This is quite interesting, and I think Glasner is right in his overall parallel. There is indeed much for economists to learn from the mind-body problem and more generally from the philosophy of mind. However, I do not fully agree with the details of Glasner’s argument, and this may have larger implications for his overall conclusion about the representative agent assumption. I start with a remark about the terminology. Glasner argues that the mind-body problem and MI in economics share a non-commitment toward reductionism because they recognize the reality of “higher-level” entities and phenomena such as beliefs, desires, business cycles and so on. This is not completely true, because there are actually approaches in the philosophy of mind arguing that mental states do not exist and/or are merely epiphenomenal. These “eliminativist” approaches claim that we should simply stop using the notions and concepts of folk psychology in scientific discussions. Some would argue that the representative agent assumption is more eliminativist than reductionist: it eliminates higher-level phenomena in the sense that they simply are identical to the choices of a representative agent. On the contrary, some versions of MI as well as some treatments of the mind-body problem are indeed reductionist: they recognize the existence of higher-level entities but claim that they can be fully explained by lower-level entities. A truly microfounded macroeconomics (i.e. without the representative agent assumption) would probably be of this kind.

This is a complex debate because we should distinguish between ontological and explanatory reductionism (in particular, the former does not imply the latter). Moreover, there is much to be said about the relevance of reductionism in science in general. I will not discuss these points here. More relevant to the problem at stake are treatments of the mind-body problem that are both materialist (i.e. mental events are realized by physical events) and non-reductionist. Functionalism, which is currently the dominant paradigm in the philosophy of mind, is of this latter kind. Though there are many variants, they all recognize the basic fact of multiple realizability: the same mental events may be physically realized by different lower-level events (e.g. the firing of different neural areas). More generally, a basic postulate of functionalism is that the same software can be implemented in different hardwares. A second key assumption of functionalism is that mental states are defined by their functions, in terms of their causal relations with other states and external factors: when we say that “Mike believes that it rains”, we are saying that there is some physical event in Mike’s brain that is caused by certain sorts of external stimuli and that causes a certain behavioral response. The function of Mike’s belief is then to cause this behavioral response given the appropriate set of external stimuli. It is then easy to see why functionalism does not necessarily imply reductionism: the same set of causal relations can be realized by a variety of physical hardwares, not only brains but also machines for instance.
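The software/hardware postulate is easy to illustrate with a toy sketch (the stimuli, responses and class names are made up for the occasion): two different lower-level mechanisms realize exactly the same functional role.

```python
# Sketch of multiple realizability: the same functional role (mapping an
# external stimulus to a behavioral response) realized by two different
# "hardwares". The stimulus/response vocabulary is illustrative.

class BrainRealizer:
    """One mechanism realizing the belief 'it rains': branching logic."""
    def respond(self, stimulus):
        if stimulus == "rain":
            return "take umbrella"
        return "leave umbrella"

class SiliconRealizer:
    """A different lower-level realization of the very same role: a lookup."""
    def __init__(self):
        self._table = {"rain": "take umbrella"}
    def respond(self, stimulus):
        return self._table.get(stimulus, "leave umbrella")

# Functionally, the two systems instantiate the same mental state: identical
# causal relations between external stimuli and behavioral responses.
for stimulus in ["rain", "sun"]:
    assert BrainRealizer().respond(stimulus) == SiliconRealizer().respond(stimulus)
print("same function, different realizations")
```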

What are the implications for economics and in particular for MI? The economist and philosopher Don Ross has argued in recent writings that a peculiar kind of functionalism, Daniel Dennett’s intentional-stance functionalism, entails the rejection of MI. The point is that from a revealed preference perspective, what matters is that we can define a well-behaved choice function given a set of data about choices on the market or in any other institutional setting. What the “hardware” or the “vehicle” of those choices is, however, is irrelevant: these can be flesh-and-bones persons but also intra-personal selves (as in dual- or multiple-selves models) or aggregate demand functions. It is at this point that Glasner’s claim that the mind-body problem entails a rejection of the representative agent hypothesis is unconvincing: at least in Ross’s interpretation of functionalism, the philosophy of mind on the contrary makes such a hypothesis permissible. The only requirement is that market demand reveals consistent choices and preferences on the basis of some consistency axiom. From the point of view of functionalism, the only function of an agent’s intentional states is to trigger a behavioral response given some set of circumstances. If this behavioral response and these circumstances are described in terms of market data (supply, demand, prices), then there is nothing wrong with assimilating market demand to a unique representative agent, provided that the consistency requirements are fulfilled.

We may argue over this of course. More straightforward, however, is the fact that functionalism and MI do not go well together. This can be shown by a simple example. Suppose I am interested in accounting for a particular economic fact, say the over-exploitation of some non-renewable resource and the way this problem is mitigated by some community. As an economist, I frame this fact as an instance of the collective action problem. In trying to produce a theoretical and empirically testable explanation for this fact, I build a game-theoretic model in which I specify who the players are, what strategies are at their disposal, their utility functions (and thus their preferences) and possibly some information structure (e.g. who knows what about others’ actions and rationality). Suppose that my model fits the facts, in the sense that there is an equilibrium (possibly among others) where the collective pattern generates the observed level of use of the resource. I will thus consider that my model is successful in providing an explanation for the fact that interested me. What is my model representing? It is actually representing an institution (or a set of institutions) that, as a whole, is responsible for the level of use of the resource. We can see this institution as some kind of “machine” that triggers a behavioral response (the behavioral pattern and the associated use of the resource) given a set of circumstances that are implemented in the values of the model’s parameters. Many economists would claim that game-theoretic models in general, and this one in particular, are an instance of MI. But this is clearly wrong from the point of view of functionalism: I have not explained the economic fact in virtue of the players’ behavior and other properties; my explanation is provided by the whole “machine” (the institution) I have modeled.
This machine is a set of formal (functional) relationships that represent a set of causal relationships between given circumstances and a behavioral pattern. Each particular mental state that can be attributed to the players (e.g. their beliefs) takes its meaning from its relationships with the other elements of the larger system that corresponds to the institution. This is not MI as traditionally understood.
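A stripped-down version of such a model shows the “machine” at work (the payoff function, the stock size and the extraction cap are my own illustrative choices): given the parameters, the equilibria of the game deliver the over-exploitation pattern as a property of the whole system, not of any single player.

```python
from itertools import product

# A minimal common-pool resource game: each of two players picks an
# extraction level, and her payoff is her extraction times the remaining
# stock. All numbers are illustrative.

STOCK = 20
LEVELS = range(11)   # extraction capped at 10 for simplicity

def payoff(own, other):
    return own * max(STOCK - own - other, 0)

def is_nash(x1, x2):
    """No player can gain by unilaterally changing her extraction level."""
    best1 = max(payoff(d, x2) for d in LEVELS)
    best2 = max(payoff(d, x1) for d in LEVELS)
    return payoff(x1, x2) == best1 and payoff(x2, x1) == best2

equilibria = [(x1, x2) for x1, x2 in product(LEVELS, LEVELS) if is_nash(x1, x2)]
print(equilibria)  # [(6, 7), (7, 6), (7, 7)]

# Joint surplus (x1 + x2) * (STOCK - x1 - x2) peaks when total extraction
# is STOCK / 2 = 10; every equilibrium extracts more than that, i.e. the
# institution as a whole produces over-exploitation.
assert all(x1 + x2 > STOCK // 2 for x1, x2 in equilibria)
```

Note that the explanatory work is done by the whole structure (payoffs, strategy sets, the equilibrium condition), which is exactly the point made above.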

Is the Choice of a Welfare Criterion a Matter of Opinion?

Noah Smith has a post on Gul and Pesendorfer’s much-discussed 2005 manifesto on the methodology of economics, especially regarding the rise of neuroeconomics. Philosophers of economics tend to compare Gul and Pesendorfer’s essay to Friedman’s 1953 article, which provides an instrumentalist defense of the rationality assumption. Their article has generated a significant literature, notably a book edited by Caplin and Schotter. A recent discussion of the significance of Gul and Pesendorfer’s essay regarding the relationship between economics and psychology is provided by Don Ross in his book Philosophy of Economics.

Noah Smith highlights two points. First, Gul and Pesendorfer claim that only choice data are relevant in economics. Second, they argue against the idea that neuroeconomics and behavioral economics are relevant for welfare economics. Regarding the second point, he writes (my emphasis):

To be blunt, all welfare criteria seem fairly arbitrary and made-up to me. Data on choices do not automatically give you a welfare measure – you have to decide how to aggregate those choices. Why simply add up people’s utilities with equal weights to get welfare? Why not use the utility of the minimum-utility individual (a Rawlsian welfare function)? Or why not use a Nash welfare function? There seems no objective principle to select from the vast menu of welfare criteria already available. The selection of a welfare criterion thus seems like a matter of opinion – i.e., a normative question, or what GP call “social activism”. So why not include happiness among the possible welfare criteria? Why restrict our set of possible welfare criteria to choice-based criteria? I don’t see any reason, other than pure tradition and habit.
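The aggregation rules Smith lists are easy to state precisely (the utility numbers below are made up for illustration), and the same utility profiles can be ranked differently by each criterion:

```python
import math

# The welfare criteria mentioned in the quote, applied to hypothetical
# utility profiles. All numbers are illustrative.

def utilitarian(u):
    """Sum of utilities with equal weights."""
    return sum(u)

def rawlsian(u):
    """Utility of the minimum-utility individual."""
    return min(u)

def nash_welfare(u):
    """Product of utilities (assuming a zero-utility baseline)."""
    return math.prod(u)

profile_a = [4.0, 1.0, 9.0]
profile_b = [5.0, 2.0, 6.0]

print(utilitarian(profile_a), utilitarian(profile_b))    # 14.0 13.0
print(rawlsian(profile_a), rawlsian(profile_b))          # 1.0 2.0
print(nash_welfare(profile_a), nash_welfare(profile_b))  # 36.0 60.0

# The utilitarian criterion prefers A, while the Rawlsian and Nash criteria
# both prefer B: the ranking depends entirely on the chosen criterion.
```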

The notion that the choice of a welfare criterion is a matter of “opinion” seems to me somewhat odd. This is even more the case when “opinion” seems to be synonymous with “normative question”. Are all normative issues matters of opinion? Is normative economics devoid of any objective or scientific basis? Or is science as a whole nothing but a matter of opinion?

This issue is even more difficult to discuss in the context of Gul and Pesendorfer’s essay because the latter are somewhat heterodox in their view of welfare economics. Contrary to the received view, Gul and Pesendorfer claim that standard welfare economics is not normative economics. Rather, welfare analysis is “a tool for analyzing economic institutions and models” (p. 25 in Caplin and Schotter’s book). They actually reduce welfare analysis to revealed preference and the Pareto criterion because “it is the only criterion that can be integrated with positive economic analysis” (p. 25). According to them, economists “use welfare analysis to identify the interest of economic agents and to ask whether the understanding of the institutional constraints on policies remains incomplete. This use of welfare analysis requires the standard definition of welfare” (p. 25, my emphasis).

In other words, what Gul and Pesendorfer are doing is completely grounding welfare economics in positive, revealed preference analysis. Welfare economics has no justification outside positive economics. I said above that this is somewhat heterodox because the choice of a welfare criterion is generally regarded as outside the domain of positive economics, or at least this was the view of the pioneers of the “new” welfare economics like Samuelson. However, the suggestion that positive economics imposes constraints on the choice of a welfare criterion is not new: in the 1930s, the rejection of the then-standard Utilitarian view of welfare was made on the basis of the “positive” claim that interpersonal comparisons of utility lie outside the realm of science and instead depend on “value judgments”.

As I understand it, Noah Smith’s claim that the selection of a welfare criterion is “a matter of opinion” means that we cannot tie the latter to scientific theories and facts. We can study facts on the basis of the scientific method and reject scientific theories on the basis of evidence, but normative issues cannot be settled in this way. This sounds reasonable, and indeed it has been the received view in the philosophy of science for decades. The problem is that this view relies on a distinction between the positive and the normative (or between facts and values) which is too strong. Consider Gul and Pesendorfer’s argument that links welfare analysis to the revealed preference approach of positive economics. What is problematic, or at least debatable, is not the idea that welfare analysis should be coherent with or informed by positive analysis. The problem is the tacit presupposition that revealed preference and choice data are the only acceptable scientific input in the analysis (Noah Smith’s first point). What is the status of this presupposition? Well, at the most basic level, it is purely conventional in the sense that it reflects some consensus among economists. In itself, however, it is not empirically testable and seems to be nothing more than “a matter of opinion”.

Now, consider the other way around: can (and should) normative economics inform and constrain positive economics, as Amartya Sen (among others) contends? Given that positive economics is also grounded on conventional choices, there is absolutely no reason to reject this possibility. But wait: if everything is a matter of opinion (i.e. is conventional), what remains of science? The answer is simply that we must recognize that the scientific endeavor is constituted by values and conventional choices. The point is that these choices are not totally arbitrary, though, and that we are constantly arguing about them (philosophers of economics more explicitly than others). From this point of view, the choice of a welfare criterion is no more a matter of opinion than the choice of the relevant data for positive analysis. But it is, or should be, a “justified” or “reasonably argued” opinion based on facts and theories, but also on ethical considerations. This is no more “social activism” (Gul and Pesendorfer’s term) than a methodological manifesto like Gul and Pesendorfer’s.

Rules and Possible Worlds

Following up on the discussion of the rule concept that I started in previous posts, I will here briefly explore an intriguing possibility: conceptualizing rules on the basis of “possible worlds” semantics. More specifically, I will define rules as (soft) constraints on possible worlds. For the interested reader, this approach is pursued in more detail in several papers by the philosopher Jaap Hage (see here and here).

In the previous posts, I concluded that Searle’s distinction between constitutive rules and regulative rules is a linguistic rather than an ontological one.[1] I also suggested that Epstein’s frame principles are formally analyzable as rules, already on the basis of an (informal) possible worlds reasoning. As a reminder, Searle and Epstein respectively suggest the following syntax for their constitutive rules and frame principles:

SCR         This X counts as Y in circumstances C

EFP         For any z, the fact “z is X” grounds the fact “z is Y”

By contrast, regulative rules have the generic “if… then” form:

R              If X, then Y

One may easily be led astray by the fact that, though formally equivalent, these definitions leave several things implicit. Consider EFP first. Its statement includes what I will call an object (z), facts or states of affairs (“z is X” and “z is Y”) and properties (X and Y). By contrast, SCR only contains objects (X and Y) as well as a set of conditions C (which is only implicit in EFP). Finally, R is only about facts, though there must be an implicit statement about the set of conditions under which R obtains.

The disentanglement of objects, facts and properties is required to see the formal identity of these three definitions. If I use uppercase letters X, Y, Z to denote properties and lowercase letters a, b, c to denote objects, then we have something like

∀a[X(a) → Y(a)]

This statement reads as ‘for any object a, if a has property X then it has property Y’. Consider for instance these two examples:

(a) Pieces of paper that have been engraved by the Federal Reserve are money.

(b) John has made six fouls, therefore he is fouled out.

In example (a), objects are (some) pieces of paper and properties are ‘to have been engraved by the Federal Reserve’ and ‘to be money’. In example (b), the object is John and the properties are ‘to be ascribed six fouls’ and ‘to be fouled out’. Moreover, each X(a) and Y(a) denotes a fact of the type ‘this object has this property’. Finally, both statements are expressed against a background of conditions (e.g. in (a), we assume that we are in the USA; in (b), that John participates in a basketball game). This indicates that rules only work as parts of larger institutions, i.e. they are connected with other rules that may specify conditions and implications. For instance, the property “being fouled out” implies something like “one is forbidden to return to the court for the game”. Ultimately, all rules are reducible to statements about what is possible and necessary (and conversely, impossible). It is at this point that it is useful to introduce a possible worlds framework.
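The universally quantified conditional ∀a[X(a) → Y(a)] can be given a minimal computational reading. The sketch below is purely illustrative (the function name `rule` and the property labels are my own, not part of any formal apparatus discussed here): a rule pairs an antecedent property with a consequent property, and a ‘world’, represented as an assignment of properties to objects, satisfies the rule when every object bearing the antecedent property also bears the consequent one.

```python
def rule(antecedent, consequent):
    """Build the rule 'for all a: X(a) -> Y(a)' as a checker over worlds.

    A world is a dict mapping each object to the set of properties it has.
    """
    def holds_in(world):
        # The rule holds iff every object with the antecedent property
        # also has the consequent property.
        return all(consequent in props
                   for props in world.values()
                   if antecedent in props)
    return holds_in

# Hypothetical property labels for the basketball example.
six_fouls_rule = rule("has_six_fouls", "fouled_out")

# A world where John has six fouls and is fouled out: the rule is satisfied.
w1 = {"John": {"has_six_fouls", "fouled_out"}, "Mary": {"playing"}}
# A world where John has six fouls but is still playing: the rule is violated.
w2 = {"John": {"has_six_fouls", "playing"}}

print(six_fouls_rule(w1))  # True
print(six_fouls_rule(w2))  # False
```

Note that objects lacking the antecedent property (Mary in w1) are simply irrelevant to the rule, exactly as the material conditional requires.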

Informally, a possible world can be seen as an exhaustive description of some counterfactual reality[2]. More formally, a possible world corresponds to a set of sentences (propositions and formulae) describing states of affairs, each sentence being ascribed a truth value (e.g. in some world, it may be true that the Golden State Warriors are the 2015 NBA champions but false that the finalists were the Cleveland Cavaliers; both statements are of course true in the actual world). Several kinds of constraints may restrict the range of possible worlds[3]. The most obvious ones are logical constraints: for instance, it is not possible for p and not-p to both be true in the same world. More generally, logical constraints are defined by the various axioms that are imposed on the underlying syntax. Similarly, physical constraints (e.g. a piece of metal cannot be heated without expanding) and conceptual constraints (e.g. an object cannot be a triangle and a circle at the same time) may be imposed on possible worlds. The point is that a set of constraints determines whether or not two or more states of affairs are compatible in a given world. Now consider the general statement of a rule

∀a[X(a) → Y(a)]

Suppose we define an axiom such that if the statement is valid (i.e. the rule actually exists), then the whole sentence combining the antecedent and the consequent is necessarily true (i.e. true in all possible worlds). For instance, take the following rule, which is supposed to hold in any official basketball game:

“Any player who makes six fouls is fouled-out”.

Because this sentence is necessarily true, there cannot be a possible world where a player has made six fouls but is still playing the game. The combination of these two states of affairs is ruled out because we have imposed as an axiom that if the above rule exists, then they cannot hold together. We can be more subtle, however, by adding an accessibility relation between possible worlds. The interpretation of the accessibility relation depends on the kind of logic we are using, but in the present case it would state that any world w’ that is accessible from a world w should have the same set of valid rules (the reverse need not be true – it depends on the properties of the accessibility relation). Worlds that are not accessible from w could have a completely different set of rules.
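This accessibility constraint, namely that a world accessible from w must preserve all the rules valid at w, can be sketched as a simple subset check. The world labels and rule names below are made up for the purpose of illustration:

```python
def accessible(rules_at):
    """Return the accessibility check induced by a world -> rules mapping.

    w' is accessible from w only if every rule valid at w is also valid
    at w' (w' may add further rules, but cannot drop any of w's rules).
    """
    def acc(w, w_prime):
        return rules_at[w] <= rules_at[w_prime]  # set inclusion
    return acc

# Hypothetical worlds and rule sets.
rules_at = {
    "w":  {"six_fouls_out"},
    "w1": {"six_fouls_out", "shot_clock_24"},  # keeps w's rules: accessible
    "w2": set(),                               # drops the rule: not accessible
}

acc = accessible(rules_at)
print(acc("w", "w1"))  # True
print(acc("w", "w2"))  # False
print(acc("w1", "w"))  # False: the relation need not be symmetric
```

The asymmetry in the last case illustrates the point in the text: whether the reverse direction holds depends on the properties (symmetry, reflexivity, transitivity) we impose on the accessibility relation.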

This formal approach is thus helpful to see that virtually all rules work as constraints on possible worlds, irrespective of their syntax. It can also serve as a basis to study the conditions for a set of rules to be consistent (see the papers linked to above). This is an important issue if we acknowledge that institutions are sets of rules. It is likely that for any given institution, some rules cannot be changed easily without risking inconsistency, while others may be modified without compromising the coherence of the whole institutional edifice.
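The consistency question can be made concrete with a brute-force semantic check: a set of rules is consistent if and only if at least one possible world (here, a truth assignment over atomic sentences) satisfies all of them. A minimal sketch, with made-up atomic sentences and rules:

```python
from itertools import product

def consistent(atoms, rules):
    """True iff some truth assignment over the atoms satisfies every rule.

    Each rule is a predicate over a world (a dict of atom -> truth value).
    """
    for values in product([True, False], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(r(world) for r in rules):
            return True  # found a possible world where all rules hold
    return False

atoms = ["six_fouls", "playing"]
# six fouls -> not playing (the material conditional, as a predicate)
r_foul = lambda w: (not w["six_fouls"]) or not w["playing"]
r_a = lambda w: w["six_fouls"]   # 'the player has six fouls'
r_b = lambda w: w["playing"]     # 'the player is playing'

print(consistent(atoms, [r_foul, r_a]))       # True
print(consistent(atoms, [r_foul, r_a, r_b]))  # False
```

The last line mirrors the basketball example: no world can make ‘six fouls’, ‘still playing’ and the fouling-out rule jointly true, so adding r_b to the institution yields an inconsistent rule set.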


[1] In his book Making the Social World, Searle suggests an alternative criterion for distinguishing between these two kinds of rules. Regulative rules are identified with “standing directives” while constitutive rules consist in “standing declarations”. The former have the sole function of bringing about some form of behavior, while the latter make something the case by representing it as being the case. The distinction is interesting as an account of the different ways rules may be generated. It is not so clear that it is helpful for distinguishing different forms of rules, though, if we consider that being self-referential is more or less a necessary condition for any rule to hold.

[2] Philosophers disagree regarding the nature of possible worlds. David Lewis was the most prominent proponent of a realist account according to which possible worlds are true worlds that exist in some alternative reality and can be discovered. Others like Saul Kripke hold that possible worlds are theoretical constructions in which what is true according to them is stipulated.

[3] Because they are built on the basis of a truth value function, possible worlds models are said to be ‘semantic’. Most of the time (though not so much in economics), they are combined with a language (a syntax) governed by a logic articulated around several axioms: modal logic, deontic logic, and epistemic or doxastic logic are the most discussed.

September Issue of the Journal of Institutional Economics on Institutions, Rules and Equilibria

The latest issue of the Journal of Institutional Economics features an interesting and stimulating set of articles on how to account for institutions in game theory (note that all articles are currently ungated). In the main article, “Institutions, rules, and equilibria: a unified theory”, the philosophers F. Hindriks and F. Guala attempt to unify three different accounts of institutions: the equilibrium account, the rule account and Searle’s constitutive rule account. They argue that the solution concept of correlated equilibrium is the key concept for such a unification. The latter retains the notion that institutions can only persist if they correspond to an equilibrium, but at the same time it emphasizes that institutions can be understood as correlating devices based on humans’ ability to symbolically represent rules (as a sidenote, I make a similar point in this forthcoming paper as well as in this working paper [a significantly different version of the latter is currently under submission]). The authors also argue that Searle’s constitutive rules are reducible to regulative rules (I have presented the argument here).

Several short articles by Vernon Smith, Robert Sugden, Ken Binmore, Masahiko Aoki, John Searle and Geoffrey Hodgson reflect on Hindriks and Guala’s paper. They are all interesting, but I would especially recommend Sugden’s paper, because it tackles a key issue in the philosophy of science (i.e. whether or not scientific concepts should reflect “common-sense ontology”), as well as Searle’s response. I find the latter essentially misguided (it is not clear whether Searle understands game-theoretic concepts, and it makes the surprising claim that “if… then” (regulative) rules have no deontic component), but it still makes some interesting points regarding the fact that some institutions, such as promise-keeping, exist (and create obligations) even though they are not always followed.