September Issue of the Journal of Institutional Economics on Institutions, Rules and Equilibria

The latest issue of the Journal of Institutional Economics features an interesting and stimulating set of articles on how to account for institutions in game theory (note that all articles are currently ungated). In the main article, “Institutions, rules, and equilibria: a unified theory”, the philosophers F. Hindriks and F. Guala attempt to unify three different accounts of institutions: the equilibrium account, the rule account and Searle’s constitutive rule account. They argue that the solution concept of correlated equilibrium is the key to such a unification. The latter retains the notion that institutions can only persist if they correspond to an equilibrium, but at the same time it emphasizes that institutions can be understood as correlating devices based on humans’ ability to symbolically represent rules (as a sidenote, I make a similar point in this forthcoming paper as well as in this working paper [a significantly different version of the latter is currently under submission]). The authors also argue that Searle’s constitutive rules are reducible to regulative rules (I have presented the argument here).

Several short articles by Vernon Smith, Robert Sugden, Ken Binmore, Masahiko Aoki, John Searle and Geoffrey Hodgson reflect on Hindriks and Guala’s paper. They are all interesting, but I would especially recommend Sugden’s paper, because it tackles a key issue in the philosophy of science (i.e. whether or not scientific concepts should reflect “common-sense ontology”), and Searle’s response. I find the latter essentially misguided (it is not clear whether Searle understands game-theoretic concepts, and it makes the surprising claim that “if… then” (regulative) rules have no deontic component) but it still makes some interesting points regarding the fact that some institutions, such as promise-keeping, exist (and create obligations) even though they are not always followed.


Rational Expectations and the Standard Model of Social Ontology

Noah Smith has an interesting post where he refers to an article by Charles Manski about the rational expectations hypothesis (REH). Manski points out that in a stochastic environment it is highly unlikely that expectations are rational in the sense of the REH. However, he ultimately concludes that there is no better alternative. In this post, I want to point out that the REH is actually well in line with what the philosopher Francesco Guala calls in an article the “Standard Model of Social Ontology” (SMOSO), including the fact that it lacks empirical support. This somehow echoes Noah Smith’s conclusion that “rational Expectations can’t be challenged on data grounds”.

Guala characterizes the SMOSO by the following three elements:

1) Reflexivity: Guala defines this as the fact that “social entities are constituted by beliefs about beliefs” (p. 961). A more general way to characterize reflexivity is that individuals form attitudes (mainly, beliefs) about the systems they are part of and thus attitudes about others’ attitudes. If it is assumed that these attitudes determine people’s actions and in turn, these actions determine the state of the system, then people’s attitudes determine the system. This may lead to the widely discussed phenomenon of self-fulfilling prophecies where the agents’ beliefs about others’ beliefs about the (future) state of the system bring the system to that state.

2) Performativity: it can be defined as the fact that the social reality is literally made by the agents’ attitudes and actions. The classical example is language: performative utterances like “I promise that Y” or “I make you man and wife” not only describe the social reality, they (in the appropriate circumstances) make it by creating a state of affairs that makes the utterance true. Other cases are for instance the fact that some pieces of paper are collectively regarded as money or the fact that raising one’s hand is regarded as a vote in favor of some proposition or candidate.

3) Collective intentionality: attitudes (in particular beliefs) constitutive of the social reality are in some way or another “collective”. Depending on the specific model, collective intentionality can refer to a set of individual attitudes (intentions, beliefs) generally augmented by an epistemic condition (usually, mutual or common knowledge of these attitudes) or a distinct collective attitude of the form “we intend to” or “we believe that”.

The three elements constitutive of the SMOSO are common to almost all the theories and models developed in social ontology and the philosophy of social science over the last thirty years. That does not mean that they fully determine the content of these theories and models: there are several mutually exclusive accounts of collective intentionality, just as there are different ways to account for performativity and reflexivity. Now, I want to suggest that many economic models using the REH fall within this general model of social ontology. The REH states that economic agents do not make systematic errors in the prediction of the future value of relevant economic variables. In other words, they make correct predictions on average. Denote X(t) the value of any economic variable you want (price, inflation, …) at time t and X(t+1)^ei the expected value of X at time t+1 according to agent i. Formally, a rational expectation corresponds to X(t+1)^ei = E[X(t+1) | I(t)^i], with I(t)^i the information available at t for i and E the expectation operator. The REH is the assumption that X(t+1) = X(t+1)^ei + u, where u is an error term with mean 0. The proximity of the REH to the three elements of the SMOSO is more or less difficult to see but is nevertheless real.
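To see what the REH amounts to in practice, here is a minimal numerical sketch (my own construction, not taken from any of the papers discussed): the variable is assumed to follow a hypothetical AR(1) process, the agent’s rational expectation is the true conditional mean given the information I(t) = {X(t)}, and the forecast errors u average out to zero.

```python
import random

random.seed(42)

# Hypothetical data-generating process (an assumption for illustration):
# X(t+1) = rho * X(t) + eps(t+1), with eps drawn from N(0, 1).
# Under the REH, the agent's expectation is the true conditional mean
# E[X(t+1) | X(t)] = rho * X(t), so the forecast error
# u = X(t+1) - X(t+1)^e has mean zero: no systematic error.
rho = 0.9
x = 0.0
errors = []
for t in range(100_000):
    expectation = rho * x                # rational expectation of X(t+1)
    x_next = rho * x + random.gauss(0, 1)
    errors.append(x_next - expectation)  # the error term u, mean-zero
    x = x_next

mean_error = sum(errors) / len(errors)
print(abs(mean_error) < 0.05)  # no systematic forecast error
```

The same simulation with a misspecified expectation (say, a forecast biased upward by a constant) would produce a systematically non-zero mean error, which is exactly what the REH rules out.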

The relationship between the REH and reflexivity is the easiest to state, because the discussions of rational expectations in the 1950s find their roots in the treatment of the reflexivity issue, which itself originates in Oskar Morgenstern’s discussion of the “Holmes-Moriarty paradox”. Morgenstern was concerned with the fact that if the state of affairs that realizes depends on one’s beliefs about others’ beliefs about which state of affairs will realize, then it may be impossible to predict states of affairs. In the 1950s, papers by Simon and by Modigliani and Grunberg tackled this problem. Using fixed-point techniques, they show that under some conditions there is at least one solution x* = F(x*) such that the prediction x* regarding the value of some variable is self-confirmed by the actual value F(x*). In his article on rational expectations, Muth mentions as one of the characteristics of the REH the fact that a public prediction in the sense of Grunberg and Modigliani “will have no substantial effect on the operation of the economic system (unless it is based on inside information)”. So, the point is that a “rational prediction” should not change the state of the system.
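The fixed-point logic of a self-confirming prediction can be sketched in a few lines. The reaction function F below is purely hypothetical (chosen to be a contraction so that iteration converges); it stands for the mapping from a published prediction to the realized value of the variable.

```python
# Illustrative sketch in the spirit of Grunberg-Modigliani: F maps a
# published prediction x to the value that actually realizes once agents
# have reacted to the prediction. F is an assumed reaction function,
# picked to be a contraction so fixed-point iteration converges.
def F(x):
    return 0.5 * x + 1.0  # hypothetical reaction to the prediction x

# Iterate x <- F(x): the sequence converges to the unique fixed point
# x* = F(x*) = 2.0, i.e. the prediction that confirms itself.
x = 0.0
for _ in range(100):
    x = F(x)

print(x)        # -> 2.0
print(F(x) == x)  # -> True: the prediction does not change the outcome
```

The fixed point is precisely Muth’s “rational prediction”: announcing x* = 2.0 has no effect on the operation of the system, since the realized value F(x*) is x* itself.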

The relationship of the REH with performativity and collective intentionality is more difficult to characterize. Things are somewhat clearer, however, once we realize that the REH implies mutual consistency of the agents’ beliefs and actions (see this old post by economist Rajiv Sethi which makes this point clearly). This is due to the fact that in an economic system, the value X(t+1) of some economic variable at time t+1 will depend on the decisions si made by thousands of agents at t, i.e. X(t+1) = f(s1(t), s2(t), …, sn(t)). Assuming that these agents are rational (i.e. they maximize expected utility), the agents’ decisions depend on their conjectures X(t+1)^ei about the future value of the variable. But then this implies that one’s conjecture X(t+1)^ei is a conjecture about others’ decisions (s1(t), …, si-1(t), si+1(t), …, sn(t)) for any given functional relation f, and thus (assuming that rationality is common knowledge) a conjecture about others’ conjectures (X(t+1)^e1, …, X(t+1)^ei-1, X(t+1)^ei+1, …, X(t+1)^en). Since others’ conjectures are also conjectures about conjectures, we have an infinite chain of iterated conjectures about conjectures. Mutual consistency implies that everyone maximizes his utility given others’ behavior. In general, this will also imply that everyone forms the same, correct conjecture, which is identical to the REH in the special case where all agents have the same information, since we then have X(t+1) = X(t+1)^ei for all agents i (up to the mean-zero error term u). As Sethi indicates in his post, this is equivalent to what Robert Aumann called the “Harsanyi doctrine” or, more simply, the common prior assumption: any disagreement between agents must come from differences in information.

In itself, the relationship between the REH and the common prior assumption is interesting. Notably, if we consider that the common prior assumption is difficult to defend on empirical grounds, this should lead us to regard the REH with suspicion. But it also helps to make the link with the SMOSO. Regarding performativity, we have to give up the assumption (standard in macroeconomics) that the equilibrium is unique, i.e. suppose there are at least two values X(t+1)* and X(t+1)** for which the agents’ plans and conjectures are mutually consistent. Now, any public announcement of the kind “the variable will take value X(t+1)* (resp. X(t+1)**)” is self-confirming. Moreover, this is common knowledge.[1] The public announcement plays the role of a “choreographer” (Herbert Gintis’ term) that coordinates the agents’ plans. This makes the link with collective intentionality. It is tempting to interpret the common prior assumption as some kind of “common mind hypothesis”, as if the economic agents collectively shared a worldview. Of course, as indicated above, it is also possible to adopt a less controversial interpretation by seeing this assumption as some kind of tacit agreement involving nothing but a set of individual attitudes. The way some macroeconomists defend the REH suggests a third interpretation: economic agents are able to learn about the economic world, and this learning generates a common background. In game-theoretic terms, we could also say that agents are learning to play a Nash equilibrium (or a correlated equilibrium).
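The “choreographer” idea can be made concrete with a toy sketch (entirely my own construction; the labels, values and number of agents are arbitrary): with two mutually consistent values, whichever one is publicly announced is the one that gets realized, so either announcement is self-confirming.

```python
# Two hypothetical mutually consistent values, X(t+1)* and X(t+1)**.
EQUILIBRIA = {"low": 1.0, "high": 2.0}

def realize(plans):
    # The aggregate outcome is the value all agents coordinated on;
    # mutual consistency means everyone holds the same plan.
    assert len(set(plans)) == 1, "plans must be mutually consistent"
    return plans[0]

def announce(label, n_agents=5):
    # Every agent conditions its plan on the public announcement,
    # which therefore acts as a correlating device ("choreographer").
    value = EQUILIBRIA[label]
    plans = [value] * n_agents
    return realize(plans)

print(announce("low") == EQUILIBRIA["low"])    # announcement self-confirms
print(announce("high") == EQUILIBRIA["high"])  # either equilibrium can be selected
```

The point of the sketch is only that, with multiple equilibria, the announcement selects the outcome rather than merely describing it, which is the performativity at work in the SMOSO.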

This last point is interesting when put in perspective with Guala’s critique of the SMOSO. Guala criticizes the SMOSO for its lack of empirical grounding. For instance, discussions about collective intentionality are typically conceptual, but almost never build on empirical evidence. Most critics of the REH in economics make a similar point: the REH is made for several reasons (essentially conceptual and theoretical) but has no empirical foundations. The case of learning is particularly interesting: since the 1970s, one of the “empirical” defenses of the REH has been the casual claim that “you can’t fool people systematically”. This amounts to saying that, over a more or less short horizon, people learn how the economy works. This is a pretty weak defense, to say the least. Economists actually do not know how economic agents learn, what the rate of the learning process is, and so on. Recently, a literature on learning and expectations has been developing, establishing for instance the conditions of convergence to rational expectations. As far as I can tell, this literature is essentially theoretical, but it is a first step toward providing more solid foundations for the REH… or dismissing it. The problem of the empirical foundations for any assumption regarding how agents form expectations is likely to remain, though.


[1] Going a little further, it can be shown that if the public announcement is made on the basis of a probability distribution p, where each equilibrium is announced with probability p(X(t+1)*), then p also defines a correlated equilibrium in the underlying game, i.e. agents behave as if they were playing a mixed strategy defined by p.

Frame Principles and the Grounding of Social Facts

I am currently reading Brian Epstein’s book The Ant Trap (Oxford University Press). Epstein is Assistant Professor of Philosophy at Tufts University and a specialist of social ontology and, more generally, the philosophy of social science. Though I do not like the subtitle at all (“Rebuilding the Foundations of the Social Sciences”), the book provides an interesting and stimulating attempt to build a metaphysical framework for studying the social world. Epstein is mainly interested in working out the metaphysical reasons that ground social facts, i.e. what it is that makes facts like “I have a $20 bill in my pocket” or “Barack Obama is the President of the United States of America” possible. The book has two parts: the first develops the metaphysical framework on the basis of a critique of the “standard model of social ontology”. The second part applies the framework to the specific topic of groups and what grounds facts about groups. A recurring theme throughout the book is the critique of ontological individualism, i.e. the claim that only facts about individuals ground social facts, including facts about groups.

In this post, I will only discuss Epstein’s key concept of frame principles. Epstein offers this concept as an alternative to Searle’s constitutive rules, and it is instructive to see if and how it avoids the problems I discussed in my preceding post. Epstein’s framework builds on a key distinction between anchoring and grounding. This distinction is not essential here but helps to better understand both the critique of ontological individualism and Epstein’s points about the nature of social facts. Grounding is a relation between two facts through what the author calls a frame principle: it states the conditions (the “metaphysical reasons”) for a fact to generate another (social) fact. For instance, through a given frame principle, the (physical) fact that I raise my hand at some time and some place grounds the (social) fact that I have voted for some candidate in an election. Anchoring is different: it is “a relation between a set of facts and a frame principle” (p. 82). An anchor is what makes a given frame principle hold in some population. The nature of the anchor may vary depending on one’s favorite model of social ontology. For instance, in Searle’s account of institutional facts, the anchor is the collective acceptance or recognition of some constitutive rules. Epstein does not discuss anchoring much, but he argues convincingly against the “conjunctivists” (scholars who conflate grounding and anchoring) that the distinction is important because it is the only way to avoid falling into an infinite regress.

For the rest of this post, I will ignore issues related to anchoring. The grounding relation is more significant because Epstein suggests that it is an alternative to Searle’s account of constitutive rules (which, according to the author, are “neither constitutive nor are they rules” (p. 77)). As said above, the grounding relation is a relation between two facts or set of facts. More exactly, it is established through a frame principle that articulates a link between a grounding (set of) condition(s) X and a grounded fact of type Y. This gives the following formula for a frame principle (p. 76):

For any z, the fact “z is X” grounds the fact “z is Y”.

Consider for instance the following frame principle:

For all z, the fact “z is a bill printed by the Bureau of Printing and Engraving” grounds the fact “z is a dollar”.

Given this frame principle, any fact of the type “this particular bill z* is printed by the Bureau of Printing and Engraving” grounds the social fact “this particular bill z* is a dollar”. A frame principle can also be formulated in a possible-worlds semantic model: a frame is simply a set of possible worlds P where the grounding conditions for social facts are fixed. Denote w(z-X) and w(z-Y) the propositions “in world w, the fact ‘z is X’ holds” and “in world w, the fact ‘z is Y’ holds” respectively. Then, if zX is the event “z is X” (i.e. the subset of possible worlds where the proposition w(z-X) is true) and zY the event “z is Y”, the frame principle states that zX ∩ P ⊆ zY. In words, in any possible world where the frame holds, whenever z has property X, it also has property Y.
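The possible-worlds formulation can be checked mechanically with sets. The worlds and facts below are hypothetical placeholders; the point is just that, restricted to the frame P, the event zX is included in the event zY.

```python
# Toy possible-worlds model of a frame principle. The worlds w1..w5 and
# the membership of each event are assumptions made up for illustration.
P = {"w1", "w2", "w3"}           # worlds where the frame holds
zX = {"w1", "w2", "w4"}          # event "z is printed by the Bureau"
zY = {"w1", "w2", "w4", "w5"}    # event "z is a dollar"

# The frame principle: within P, every X-world is a Y-world,
# i.e. (zX intersected with P) is a subset of zY.
frame_principle_holds = (zX & P) <= zY
print(frame_principle_holds)  # -> True
```

Note that the inclusion is only required inside P: a world outside the frame (here w4 happens to satisfy both, but need not) is not constrained by the principle, which is exactly what distinguishes a frame from a universally valid implication.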

Epstein does not state clearly why his formulation is superior to Searle’s. One advantage is that it may help to make the distinction between grounding and anchoring more salient. In particular, it appears clearly that grounding is captured by a “possible worlds/unique frame” model, while anchoring corresponds to a “unique world/possible frames” model. Another advantage is that it seems to rule out all the debates over the nature of rules. Epstein’s frame principles are not (necessarily) rules, so asking whether they are regulative or constitutive seems meaningless. Still, as far as I can see, the “linguistic argument” presented in the preceding post is still valid. The issues of the nature of (regulative) rules and of their differences with frame principles remain in Epstein’s framework. Does a regulative rule of the kind “in Britain, drive on the left side of the road” also count as a frame principle? At first sight, it seems not. But the problem is that it is not difficult to reformulate any frame principle as a regulative rule along exactly the same lines as the one discussed in the preceding post. This is not surprising: Epstein’s frame principles are semantically identical to Searle’s constitutive rules (i.e. the underlying semantic model is the same). And any regulative rule can be captured by a similar semantic model.[1] So the question of the nature of the grounding relation is not completely answered. Epstein’s account nevertheless suggests a possible direction to look at: it is possible that the more or less “constitutive” nature of frame principles depends on the conditions of their anchoring. That is a possibility that could be worth exploring.


[1] Consider a regulative rule “if z is X, then z is Y” (e.g. “if Bill is under 18, then Bill cannot vote”). Now, using the same notation as above, in all possible worlds w ∈ P where the rule holds, whenever z is X, z is also Y, i.e. zX ∩ P ⊆ zY.

Are There Constitutive Rules?

The philosopher John Searle is well-known for his work in the philosophy of language and in the philosophy of mind (see in particular the “Chinese room” thought experiment). He has also made an important contribution to social ontology with his books The Construction of Social Reality (1995) and Making the Social World (2010). An important feature of Searle’s account of the nature of social reality is his distinction between constitutive and regulative rules. Actually, he already made this distinction in 1969 in his work on speech acts. Searle’s point is that some rules are straightforward statements of the kind “do X” or “if Y, then X”. Other rules, however, are of the form “X counts as Y in (circumstances) C”. The former are regulative rules, the latter are constitutive rules. The key difference is that constitutive rules make some kinds of actions or facts possible, while regulative rules only regulate a practice that is not logically tied to the rule.

Consider the following facts: “I have been checkmated”, “Bill hits a home run”, “I have a $20 bill in my wallet”. All these facts depend on constitutive rules that define what counts as a checkmate, a home run or a $20 bill. Without these rules, the above facts cannot exist. Now, contrast this with the fact “in Britain, people drive on the left side of the road”. This fact only depends on a regulative rule (“if you’re in Britain, then drive on the left side of the road”); the very practice of driving does not seem to depend on the particular content of the rule.

The distinction has some intuitive appeal. It also seems significant because constitutive rules, unlike regulative rules, have the ability to create institutional reality. This is reflected in the fact that the “counts as” locution generates what Searle calls status functions: a constitutive rule attributes to some entity X (which can be a person, an object or anything else) a status defined in terms of deontic powers. For instance, the rule “such and such pieces of paper count as dollars in the United States of America” gives these pieces of paper the power to buy things. Once the rule is collectively accepted in some community, pieces of paper with the appropriate characteristics actually have this property. So, constitutive rules generate institutional facts.

However, Searle’s account of constitutive rules has been widely criticized. The most significant critique has been that the distinction is a false one: all rules are both constitutive and regulative. Whether or not all regulative rules are also constitutive is a complex debate. In some ways, the rule “if in Britain, then drive on the left side of the road” is constitutive of the practice “to drive in Britain”: assume some possible world identical to the actual world except for the fact that people in Britain drive on the right because the rule says so. Then, the practice “to drive in Britain” would not be the same. This involves a problem of identity to which I will briefly return at the end of this post. The reverse is easier to analyze: all constitutive rules can be reformulated as regulative rules. This point is forcefully made by Frank Hindriks in this paper (see also here). He shows that all constitutive rules actually consist in the conjunction of two propositions corresponding respectively to a “base rule” and a “status rule”. Consider for instance the case of a proto-institution we call property*, which is defined by the constitutive rule “X[this piece of land l] counts as Y[property* of person p] in C[p was the first person to claim so and such and such other conditions obtain]”. The base rule states the conditions for the piece of land to be owned by person p:

Base Rule: if the set of conditions c obtain, then l is property* of p.

Suppose that the proto-institution of property* grants the right of exclusive use and nothing else. Then, the status rule is:

Status Rule: if l is property* of p, then p has the right of exclusive use of l.

It should be noted that both the base rule and the status rule are of the form “if… then”, i.e. they are regulative rules. Now, it is easy to show that Searle’s distinction between constitutive and regulative rules is merely a linguistic one, rather than logical or ontological. Indeed, we can define the rule for property* by combining the base rule and the status rule:

Rule for Property*: if the set of conditions c obtain, then p has the right of exclusive use of l.

Once again, the statement of the rule is of the form “if… then”. Moreover, the rule is stated without any reference to the institution of property* itself. It seems that we have reduced a constitutive rule to a regulative rule, and thus that there is nothing specific about constitutive rules. They are merely a linguistic artifact.

One may think that this is only due to Searle’s specific account of the distinction and that it may be possible to defend it in some other way. For instance, in his paper “Two Concepts of Rules”, John Rawls seems to offer an alternative way to account for constitutive rules. Rawls distinguishes between what he calls the “summary conception” and the “practice conception” of rules. The former defines a rule as a mere behavioral pattern, a summary of the acts persons have performed because of their efficiency. According to the latter, a rule defines a practice in the sense that the practice consists in following the rule. For instance, “hitting a home run” or, more generally, the practice of “playing baseball” consists precisely in following some set of rules.

However, the same problem remains, as shown by David Lewis in his article “Scorekeeping in a Language Game” (note that Lewis does not make reference to Rawls’ article). Consider any well-run baseball game G (either a professional game or an informal game between friends). At any stage t of G, there is a score S which is defined by the septuple of numbers <rv, rh, h, i, s, b, o>, with rv and rh the number of runs of the visiting team and the home team respectively, h the half (first or second) of the inning, i the inning, s the number of strikes, b the number of balls and o the number of outs. According to Lewis, a codification of the rules of baseball would consist in the conjunction of four kinds of rules:

1) Rules specifying the evolution of S: if S(t) is the score at stage t, and if between t and t’ the players behave in a manner m, then the score S(t’) is determined in a certain way by both S(t) and m.

2) Specifications of correct play: for any score S(t) and any other stage t’, there is a set M of manners of behaving which correspond to correct play.

3) Directives concerning correct play: throughout G (i.e. for all stages t, t’), players ought to adopt manners of behaving belonging to M.

4) Directives concerning scores: players ought to behave so that their team scores the maximum number of runs and the opposing team the minimum.

Lewis notes that sets of rules 1) and 2) correspond to constitutive rules, while sets of rules 3) and 4) rather correspond to regulative rules. Consider in particular the rules about the evolution of the score S. That these rules cannot be seen as a mere summary of past behaviors is reflected by the fact that the evolution of the score is a function both of how the players behave, m, and of the current score S(t), i.e. S(t’) = f(S(t), m). The function f encompasses a set of constitutive rules regarding, for instance, what counts as a strike. The way the players behave does not seem sufficient as such to make the score evolve.
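The score-evolution function S(t’) = f(S(t), m) can be sketched in code (a drastic simplification of my own, covering only strikes, balls and outs; the tuple layout follows the septuple above). The sketch shows why f looks constitutive: the next score depends on the current score, not just on the behavior m.

```python
# Minimal sketch of Lewis's f: the score is the septuple
# (rv, rh, h, i, s, b, o) and f maps (current score, behavior m)
# to the next score. Only "strike" and "ball" are modeled here.
def f(score, m):
    rv, rh, h, i, s, b, o = score
    if m == "strike":
        s += 1
        if s == 3:          # three strikes count as an out
            s, o = 0, o + 1
    elif m == "ball":
        b += 1
    return (rv, rh, h, i, s, b, o)

S = (0, 0, 1, 1, 0, 0, 0)   # start of the game
for m in ["strike", "strike", "strike"]:
    S = f(S, m)
print(S)  # -> (0, 0, 1, 1, 0, 0, 1): a strikeout has produced an out
```

Note that the third “strike” produces an out only because the count already stood at two strikes: the same behavior m yields a different next score depending on S(t), which is why f cannot be read off the players’ behavior alone.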

But this is clearly a linguistic artifact, again. As Lewis states, “[o]nce score and correct play are defined in terms of the players’ behavior, then we may eliminate the defined terms in the directive requiring correct play and the directive concerning scores”. In other words, constitutive rules can be reformulated as regulative rules, which themselves can be stated as summaries of (past) behaviors. The implication seems to be that there is no social reality beyond the actions of persons: institutions are reducible to individual behavior, and there is nothing more to social reality.

There may be several ways to avoid this conclusion, however. A first possibility is suggested by Hindriks in the article I linked to: the fact that the distinction between constitutive and regulative rules is linguistic does not mean that it is ontologically and scientifically irrelevant. The debates over reductionism in science provide a good illustration. In principle, all facts about the economy (“interest rates are rising”, “growth is slowing”, “Amazon is losing money”, “Bill buys a car”) can be described without any economic terms and concepts. It could be possible to describe all these facts as facts about the movement of atoms and molecules. But not only would this be extremely complicated, it would also be unhelpful for explaining and predicting economic phenomena. The distinction between constitutive and regulative rules can thus be grounded not in ontology, but rather in a pragmatic account of scientific explanation and prediction.

Another alternative consists in acknowledging that the distinction is not clear-cut. It may well be that all rules are both constitutive and regulative. But our attitudes toward rules may vary. In a given community and at a given time, it may be a fact that one rule is regarded as being constitutive of some practice or institution, while others are not. Compare for instance a proposal to create a four-point basket in basketball with another one making tackles permissible. Both rules would change the nature of the game, but while the former would probably not be rejected on the ground that “this is not basketball”, the latter probably would. The point is that rules are constitutive not per se, but through (or because of) our practices and attitudes toward them. Behind these attitudes and practices lies the difficult issue of identity: what is it for an institution to be this institution and nothing else according to some community? This is a subject for another post.

“Philosophy of Economics” or “Philosophical Economics”

The economist and philosopher Erik Angner has an interesting post on his blog about the proper label to give to academic works at the intersection between economics and philosophy. He claims that “philosophical economics” is the appropriate label, rather than the more usual “philosophy of economics” or “economic philosophy”. In some way, Angner is right that both philosophy of economics and economic philosophy are too restrictive. Not only do they suggest that most of the work is done by philosophers from a philosophical point of view; “economic philosophy” seems to refer to a particular kind of philosophy, while “philosophy of economics” tends to indicate that all the studies belong to a subfield of the philosophy of science. Clearly, this does not correspond to actual practice: a great part of the work located at the intersection between economics and philosophy is done by economists; this work is not committed to a particular kind of philosophy; and a substantial part of it has nothing to do with philosophy of science.

Still, Angner’s suggestion sounds awkward to me, in part because it is also restrictive. Maybe a better way to approach the problem is to look at the work done by economists and philosophers that can reasonably be considered as being at the intersection between economics and philosophy and to try to make some kind of typology. Here is my proposal. I see four not mutually exclusive types of work belonging to this intersection. A first category can be referred to as the “economic analysis of philosophical issues” (in the same way that we have the economic analysis of law). In this category, we find all the studies that use economic tools and theories to study what are generally considered philosophical problems. The most significant examples are all the works using game theory (bargaining game theory, evolutionary game theory) and rational choice theory to study issues like the evolution of morality and moral norms. The references are too numerous to be cited (see some articles and books by Binmore, Gauthier, Skyrms, Alexander, Young, …). Also in this domain fall the works belonging to what is called the “economics of scientific knowledge”.

A second category is what is traditionally called “economic methodology”, i.e. the study of the methods and practices economists use to produce knowledge. Economic methodology is generally divided into two subparts: “Methodology” with a capital “M” and “methodology”. The former tackles the big questions that are standard in the philosophy of science (causality, scientific progress, measurement) but with an application to economics. The latter deals with more specific issues related to the practice of economists (e.g. how economists build and use models, specific issues related to the use of econometrics or natural experiments, …). The third category corresponds to what I would call the “ontology of economics”: the inquiry into the nature of economic kinds and objects (e.g. the work of Uskali Mäki). Writings in social ontology also fall into this category, since they generally tackle the same issues. Finally, the fourth category corresponds to what I would call “philosophical investigations of economic topics” or maybe “philosophical economics”. This may sound similar to economic methodology, but the two are not identical. Works belonging to this category attempt either to clarify and refine the economists’ use of some concepts (e.g. Dan Hausman’s work on the concept of preference), to assess the normative commitments of economists (e.g. the explosion of writings – most of them critical – about libertarian paternalism and the nudge approach) or, more generally, to investigate the ethical dimension of economic theories and categories (e.g. many of Sen’s writings on ethics and economics).

I said above that these four categories are not mutually exclusive. Indeed, they probably overlap. A Venn diagram may help to visualize this:


What about the intersections? First, note that the size of the intersections in the diagram is arbitrary and does not necessarily reflect the amount or the importance of the works belonging to them. There are nine intersections, so I will not comment on them all. Maybe some of them are actually empty. For instance, I am not able to cite works belonging to intersection I, except maybe the writings of Don Ross on economic agency and the relationship between economics and the other social sciences (Ross uses game theory to explain the emergence of economic agency, builds on a theory of intentionality, makes claims about the proper interpretation of economic concepts such as preferences and choices, …). Other intersections are clearly relevant. For instance, Dan Hausman’s work on the preference concept mentioned above belongs both to economic methodology and to what I call “philosophical investigations of economic topics” (intersection B). The economics of scientific knowledge arguably belongs both to the economic analysis of philosophical issues and to economic methodology (intersection A). As an illustration of intersection C, I would modestly cite my own work on salience and on constitutive rules in game theory. Tony Lawson’s and some of Mäki’s writings are clearly instances of intersection D or even E. Michael Bacharach’s and Robert Sugden’s work on team reasoning seems to me a great example of intersection F. And so on. Note that the construction of the diagram assumes that some intersections are logically impossible: economic analysis of philosophical issues/philosophical investigations of economic topics, and ontology of economics/economic methodology. Maybe this is wrong, but it seems to me that in both cases a third category is required to make sense of such intersections.

In the end, what should we call the whole field represented by the four circles and their nine intersections? Maybe simply “Economics and philosophy” is the most appropriate name, as it points to the intersection between two fields without any commitment regarding the priority of one field over the other or a preference for any particular perspective. By the way, this is also the name of one of the major academic journals in this domain – if not the major one.

Ullmann-Margalit’s Game-Theoretic Account of Social Norms


Game theory is now a fairly standard tool in the study of social norms and institutions. Among the pioneering game-theoretic accounts of norms figures Edna Ullmann-Margalit’s book The Emergence of Norms. Originally published in 1977, it has recently been reissued by Oxford University Press. This offers an opportunity to rediscover an interesting study which anticipates several developments in the analysis of social norms.

The title of the book seems to indicate that Ullmann-Margalit is interested in the way norms have appeared and evolved, i.e. what today we would rather call the evolution of norms. Her approach, however, is “structural” rather than “historical”: types of norms and their emergence are related to their functions in different kinds of strategic interactions which differ in their properties. This gives Ullmann-Margalit’s account a strong functionalist stance which she explicitly recognizes: the existence of norms is explained by their functions. I will return to this point at the end of this post, but it can already be noted that it makes the title of the book slightly misleading. Ullmann-Margalit’s account is far more convincing if viewed as an account of the way norms work (i.e. determine individuals’ behavior) than as an account of the mechanisms by which norms evolve.

The book distinguishes between three kinds of strategic interactions whose specific features give rise to three kinds of social norms: prisoner’s dilemma (PD) norms, coordination norms and norms of partiality. Ullmann-Margalit’s analysis of coordination norms partially builds on Schelling’s and Lewis’ game-theoretic accounts. Coordination norms are defined as solutions to coordination problems, i.e. interactions where the players’ interests are perfectly aligned and where at least two profiles of actions fully satisfy them. Like Schelling and Lewis, she points out that salience is the most general way through which a recurrent coordination problem is solved. Once a solution has been singled out by the participants, the repetition of the interaction gives rise to a social norm. For novel coordination problems, however, or at least some of them, she argues that the solution most likely comes from a norm dictated and enforced by some external authority. Moreover, contrary to Schelling and Lewis, Ullmann-Margalit suggests that coordination norms are “real” norms and not simply conventions. That means that coordination norms have a normative force: social pressure and/or moral obligation, rather than simply convergent expectations, contribute to explaining why people conform to some specific norm.

As their name suggests, PD norms are solutions to prisoner’s-dilemma-type interactions. Basically, PD norms help to foster cooperation in strategic interactions where defection is the dominant strategy. Ullmann-Margalit’s account can be seen as one of the numerous attempts made by social scientists and philosophers to show that it can be rational to cooperate in a PD. However, contrary to other philosophers (for instance, David Gauthier with his theory of constrained maximization), Ullmann-Margalit does not attempt to argue that playing dominated strategies is rational. Rather, she suggests that PD norms foster cooperation on the basis of several “payoff-transforming” mechanisms depending on external sanctions and/or moral commitments. The payoff matrix then no longer corresponds to a PD but to a game where mutual cooperation is an equilibrium (possibly the only one). On this point, there is some similarity between Ullmann-Margalit’s account and the theory of social norms recently developed by Cristina Bicchieri. Bicchieri suggests that social norms rely on a conditional preference for conformity which in some cases transforms a PD into a coordination game.
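The payoff-transformation idea can be made concrete with a small sketch. The payoff numbers and the size of the norm-compliance cost below are my own illustrative assumptions, not figures from Ullmann-Margalit’s or Bicchieri’s texts; the point is only to show how subtracting a cost for defecting against a cooperator turns a PD into a coordination game:

```python
# Row player's payoffs in a standard symmetric PD.
# Actions: "C" (cooperate), "D" (defect).
PD = {("C", "C"): 3, ("C", "D"): 0,
      ("D", "C"): 4, ("D", "D"): 1}

def transformed(payoffs, k):
    """Subtract a norm-compliance cost k from defecting against a cooperator."""
    out = dict(payoffs)
    out[("D", "C")] -= k
    return out

def best_reply(payoffs, other_action):
    """The action maximizing the row payoff against the other's action."""
    return max(("C", "D"), key=lambda a: payoffs[(a, other_action)])

# In the original PD, defection is dominant...
assert best_reply(PD, "C") == "D" and best_reply(PD, "D") == "D"

# ...but with a sufficiently strong norm (here k = 2), cooperating becomes
# the best reply to cooperation: the game is now a coordination game with
# two symmetric equilibria, (C, C) and (D, D).
G = transformed(PD, k=2)
assert best_reply(G, "C") == "C" and best_reply(G, "D") == "D"
```

Nothing hangs on the particular numbers: any cost large enough to make defecting against a cooperator pay less than mutual cooperation produces the same structural change.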

Ullmann-Margalit’s study of the third kind of norms – norms of partiality – is the most original and intriguing. Norms of partiality stabilize situations where some parties are favored and others disfavored. More exactly, they legitimize a status quo of inequality. The analysis builds on an interesting (though not totally convincing) distinction between equilibrium and strategic stability. The former corresponds to the standard Nash equilibrium solution concept and follows from the fact that each player rationally seeks to improve his absolute position. An equilibrium is simply a state of affairs where no player can improve his absolute position by unilaterally changing his behavior. Strategic stability matters as soon as we assume that the players are also concerned with their relative position (assuming of course that this concern is not already incorporated into the payoff matrix). In a situation of inequality, a disfavored party may seek to reduce the level of inequality even if this leads to a worsening of her absolute position. The threat of such a move becomes credible once it is realized that, in some cases, the favored party’s rational response to this move leads to an improvement in both the disfavored party’s relative and absolute position. In this case, a state of affairs may be an equilibrium but still be strategically unstable: one may want to change his behavior even though it temporarily worsens his absolute position. According to Ullmann-Margalit, the function of norms of partiality is to stabilize states of affairs which are strategically unstable even though they correspond to a game-theoretic equilibrium. The matrix below corresponds to the paradigmatic illustration:


Assume that R1-C1 is the status quo. Though it is an equilibrium, it is not strategically stable since the column player may try to convince the row player that he will play C2 in order to improve his relative position. The row player’s best response will then be to switch to R2, leading to R2-C2 with a reversal of fortune for the two parties. Norms of partiality prevent such kinds of strategic moves by generating some kind of normatively binding constraint.
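For readers who want to check the logic, here is a hypothetical payoff matrix with my own numbers, chosen only to match the structure described above (the figure itself did not survive in this version of the text), together with a mechanical verification of the equilibrium and instability claims:

```python
# Hypothetical payoffs consistent with the description in the text:
# each cell gives (row payoff, column payoff). These numbers are
# illustrative assumptions, not Ullmann-Margalit's own matrix.
payoffs = {
    ("R1", "C1"): (4, 1),   # status quo: equilibrium favoring the row player
    ("R1", "C2"): (0, 0),   # the column player's costly threat
    ("R2", "C1"): (0, 0),
    ("R2", "C2"): (1, 4),   # reversal of fortune
}

def best_reply_row(col):
    return max(("R1", "R2"), key=lambda r: payoffs[(r, col)][0])

def best_reply_col(row):
    return max(("C1", "C2"), key=lambda c: payoffs[(row, c)][1])

def is_nash(row, col):
    return best_reply_row(col) == row and best_reply_col(row) == col

# The status quo R1-C1 is a Nash equilibrium...
assert is_nash("R1", "C1")

# ...yet strategically unstable: if the column player commits to C2, the
# row player's best reply is R2, and the resulting cell R2-C2 (also an
# equilibrium) leaves the column player better off than the status quo.
assert best_reply_row("C2") == "R2"
assert is_nash("R2", "C2")
assert payoffs[("R2", "C2")][1] > payoffs[("R1", "C1")][1]
```

The column player’s threat is credible precisely because, once believed, it triggers a chain of best replies that ends at an equilibrium he prefers.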

Though ingenious, this functional account of the role of norms in stabilizing situations of inequality is not totally convincing because of a lack of concrete examples (which the author herself points out). As noted by Cass Sunstein in his review of Ullmann-Margalit’s book, the examples it cites (property rights, inheritance rights) are enforced not by norms but rather by law (at least in developed countries). In other cases, obvious situations of inequality (for instance between men and women regarding wages) seem to lack any broad normative or moral support but still endure thanks to other social mechanisms. It is therefore not clear whether norms of partiality really have an empirical counterpart.

This leads me to the last point of this post. As I briefly noted above, Ullmann-Margalit’s study is not really an account of the emergence of norms. Such an account would propose one or several causal mechanisms for the creation and the evolution of norms. Since the 1980s, evolutionary game-theoretic accounts of norms have been developed. They remain largely unconvincing, however, because the emergence problem is fundamentally an empirical one. In the very first lines of her book, Ullmann-Margalit states clearly that her essay belongs to “speculative sociology” (nowadays, we would rather call it “social ontology”). She claims that she intends to propose a “rational reconstruction” of norms rather than a historical account. By this, she means that her goal is to provide a list of reasons or features that may explain why norms exist. As noted above, her approach is functionalist because she tries to relate the existence of norms to the functions they fulfill. But as argued by Jon Elster, functional explanations are no explanations at all; only causal explanations are. The title thus wrongly suggests that the book provides a causal explanation (either theoretical or historical) of the emergence of norms, which is not the case. However, Ullmann-Margalit’s work is highly valuable if taken as an account of the way norms work, i.e. how they actually affect people’s behavior. Because she insists on the functionalist stance of her approach, Ullmann-Margalit does not make this point clearly enough. However, for each kind of norm, one or several mechanisms are suggested to explain why people cooperate or succeed in coordinating: commitment (strategic or moral), framing effects, social pressure, and so on are all hinted at as explanations for the working of norms. From this point of view, The Emergence of Norms anticipates many contemporary developments in social ontology, game theory and experimental economics and for this reason remains a valuable read.

Irrational Consumers, Market Demand and the Link Between Positive and Normative Economics

Note: This post was originally published at the blog “Rationalité Limitée”.

As a follow-up to my last post, I would like to return briefly to the claim that the representative agent assumption is alive and well in microeconomics, not only in macroeconomics. As Wade Hands explains in several papers I linked to in the previous post, the representative agent assumption in microeconomics finds its roots in the study of consumer choice and more generally in demand theory. This may seem surprising (at least to non-professional economists) since in virtually all microeconomics textbooks the study of demand theory starts from the analysis of the individual consumer’s choice. This reflects the fact that ordinal utility theory was initially conceived as a theory of the rational individual consumer, with market demand simply derived through the horizontal summation of the individual demand curves. Similarly, in his seminal article on revealed preference theory, Samuelson started with well-behaved individual demand functions on the basis of which he derived a consistency axiom nowadays known as the weak axiom of revealed preference.

However, as is well known, the idea that the individual consumer is rational in the specific sense of ordinal utility theory (i.e. the consumer’s preferences over bundles of goods form a complete ordering) or revealed preference theory (i.e. the consumer’s choices are consistent in the sense of some axiom) is a disputed one, inside and outside economics. A foundational issue for economics has been, and still is, the relationship between individual rationality and what can be called “market” or “collective” rationality. To ask the question in these terms already marks a theoretical and even an ontological commitment: it presupposes that the rationality criteria we apply to individual agents are also relevant for the study of collective behavior. Some economists have always resisted this commitment. Still, once we acknowledge that individual consumers may not be rational, the issue is obviously an important one for the validity of many theoretical results in economics, particularly those concerning the properties of market competition.

An important paper from this point of view is Gary Becker’s “Irrational Behavior and Economic Theory”, published in 1962 in the Journal of Political Economy. Becker established that the so-called law of demand (i.e. demand is a decreasing function of price) is preserved even with irrational consumers who either randomly choose a bundle of goods among the bundles in the budget set or who, because of inertia, always consume the same bundle if it is still available after a change in the price ratio. The simplest case, random choice, is easy to illustrate. Becker assumes a population of irrational consumers who choose a bundle on their budget hyperplane according to a uniform distribution. In other words, a given consumer is equally likely to choose any bundle which exhausts his monetary income. Consider the figure below:

[Figure: Becker’s budget sets OAB and OCD, with average bundles a and b]

Consider the budget set OAB first. An irrational consumer will pick any bundle on the budget line AB with the same probability. Since the distribution is uniform, the bundle a (which corresponds to the mid-point between A and B) will be chosen on average in the population. Now, suppose that good x becomes more expensive relative to good y. CD corresponds to the resulting compensated budget line (i.e. the budget line defined by the new price ratio assuming the same real income as before the price change). For the new budget set OCD, the bundle b is now the one that will be chosen on average. Therefore, the compensated demand for good x has decreased following the increase in its price. This implies that the substitution effect is negative, which is a necessary condition for the law of demand to hold. Note, however, that this does not imply that each consumer is maximizing his utility under a budget constraint. Quite the contrary: an individual consumer may perfectly well violate the law of demand through a positive substitution effect. For instance, one may choose a bundle near point A with budget set OAB and a bundle near point D with budget set OCD, in which case the consumption of x increases with its price.
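Becker’s argument is easy to reproduce numerically. The following is a minimal Monte Carlo sketch; the prices, income, and sample size are illustrative assumptions of mine, and the compensation is implemented Slutsky-style (the new budget line is made to pass through the old average bundle):

```python
import random

def mean_demand_x(px, py, m, n=100_000, seed=0):
    """Average consumption of good x when each consumer picks a bundle
    uniformly at random on the budget line px*x + py*y = m."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += rng.uniform(0, m / px)  # any affordable x, income exhausted
    return total / n

px, py, m = 1.0, 1.0, 100.0
x_before = mean_demand_x(px, py, m)          # close to 50: the mid-point bundle a

# Raise the price of x and compensate income so that the new budget line
# passes through the old average bundle (Slutsky compensation).
new_px = 3.0
y_before = (m - px * x_before) / py
m_comp = new_px * x_before + py * y_before
x_after = mean_demand_x(new_px, py, m_comp)  # close to 33.3: the bundle b

# Average compensated demand for x falls when its price rises, even
# though no individual consumer maximizes anything.
assert x_after < x_before
```

The average over the population behaves exactly as a well-behaved compensated demand curve would, which is the whole point of Becker’s argument: the law of demand at the market level needs only the budget constraint and aggregation, not individual rationality.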

Clearly, there is nothing mysterious in this result, which simply follows from a probabilistic mechanism. Becker used it as an “as-if” defense of the rationality assumption for consumer choice. Even if consumers are not really rational, one can safely assume the contrary because it leads to the right prediction regarding the shape of market demand. As Moscati and Tubaro note in an interesting historical and methodological discussion, most of the experimental studies based on Becker’s theoretical argument have focused on the individual rationality issue: are consumers really rational or not? As the authors show, it turns out that Becker’s article and the subsequent experimental studies offer only a weak defense of the rationality assumption because they only show that rational choice is a plausible explanation for demand behavior, not the best explanation.

Surprisingly, however, the most significant economic implications of Becker’s argument seem to have been largely ignored: the fact that individual rationality is of secondary importance for the study of market demand and that only “collective” rationality matters. This idea has recently been developed in several places, for instance in Gul and Pesendorfer’s critique of neuroeconomics and in Don Ross’ writings on the scope of economics. The latter offer a sophisticated account of agency according to which an economic agent is anything that fulfills some consistency requirements. Contrary to Becker, Ross’ approach is grounded not in an instrumentalist philosophy but in a realist one, and it provides a strong defense of the representative agent assumption in microeconomics.

The problem with this approach lies not only in the fact that it excludes from the scope of economics all issues related to individual rationality. More significantly, as I already noted in the preceding post, it has important implications for the relationship between positive and normative economics. Welfare analysis has traditionally been grounded in the preference-satisfaction criterion. The latter is justified by the fact that there is an obvious link between the preferences of an (individual) agent and the welfare of a person. This link is lost under the more abstract definition of agency because there is no reason to grant any normative significance to some abstract market demand function, despite its formal properties. Added to the fact that the irrationality of consumers makes the preference-satisfaction criterion meaningless, this makes it necessary to rethink the whole link between positive and normative economics.