Recent Working Papers

You will find below several working papers I have written recently on different (but somewhat related) topics. Comments are welcome!

A Bayesian Conundrum: From Pragmatism to Mentalism in Bayesian Decision and Game Theory

Abstract: This paper discusses the implications for Bayesian game theory of the behaviorism-versus-mentalism debate regarding the understanding of foundational notions of decision theory. I argue that the dominant view among decision theorists and economists is actually neither mentalism nor behaviorism, but rather pragmatism. Pragmatism takes preferences as primitives and builds on three claims: i) preferences and choices are analytically distinguishable, ii) qualitative attitudes have priority over quantitative attitudes and iii) practical reason has priority over theoretical reason. Crucially, the plausibility of pragmatism depends on the availability of the representation theorems of Bayesian decision theory. As an extension of decision-theoretic principles to the study of strategic interactions, Bayesian game theory also essentially endorses the pragmatist view. However, I claim that the fact that representation theorems are not available in games makes this view implausible. Moreover, I argue that pragmatism cannot properly account for the generation of belief hierarchies in games. If the epistemic program in game theory is to be pursued, this should probably be along mentalistic lines.

Keywords: Bayesian synthesis – Bayesian game theory – Pragmatism – Mentalism – Preferences


Neo-Samuelsonian Welfare Economics: From Economic to Normative Agency

Abstract: This paper explores possible foundations and directions for “Neo-Samuelsonian Welfare Economics” (NSWE). I argue that neo-Samuelsonian economics entails a reconciliation problem between positive and normative economics because it severs the relationship between economic agency (i.e. what and who the economic agent is) and normative agency (i.e. what should be the locus of welfare analysis). Developing a NSWE thus requires finding a way to articulate economic and normative agency. I explore two possibilities and argue that both are attractive but have radically different implications for the status of normative economics. The first possibility consists in fully endorsing a normative approach in terms of “formal welfarism”, which is completely neutral regarding both the locus and the unit measure of welfare analysis. The main implication is then to make welfare economics a branch of positive economics. The second possibility is to consider that human persons should be regarded as axiologically relevant because, while they are not prototypical economic agents, they have the ability to represent themselves both to themselves and to others as reasonable and reliable beings through narrative construction processes. This gives a justification for viewing well-being as being constituted by persons’ preferences, but only because these preferences are grounded on reasons and values defining the identity of the persons. This view is broadly compatible with recent accounts of well-being in terms of value-based life satisfaction and implies a significant reconsideration of the foundations of welfare economics.

Keywords: Neo-Samuelsonian economics – Welfare Economics – Revealed preference theory – Preference-satisfaction view of welfare – Economic agency


History, Analytic Narratives and the Rules-in-Equilibrium View of Institutions

Abstract: Analytic narratives are case studies of historical events and/or institutions that combine the narrative method characteristic of historical and historiographical works with analytic tools, especially game theory, traditionally used in economics and political science. The purpose of this paper is to give a philosophy-of-science view of the relevance of analytic narratives for institutional analysis. The main claim is that the analytic narratives methodology is especially appealing in the context of a non-behaviorist and non-individualist account of institutions. Such an account is fully compatible with the “rules-in-equilibrium” view of institutions. On this basis, two supporting claims are made: first, I argue that within analytic narratives game-theoretic models play a key role in the identification of institutional mechanisms as the explanans for economic phenomena, the latter being irreducible to so-called “micro-foundations”. Second, I claim that the “rules-in-equilibrium” view of institutions justifies the importance given to non-observables in institutional analysis. Hence, institutional analysis building on analytic narratives typically emphasizes the role of derived (i.e. not directly observed) intentional states (preferences, intentions, beliefs).

Keywords: Analytic narratives – Rules-in-equilibrium view of institutions – Institutional analysis – Game theory


Review of “Understanding Institutions. The Science and Philosophy of Living Together”, Francesco Guala, Princeton University Press, 2016

The following is a (long) review of Francesco Guala’s recent book Understanding Institutions. The Science and Philosophy of Living Together (Princeton University Press, 2016).

Twenty years ago, John Searle published his influential account of the nature of institutions and institutional facts (Searle 1995). Searle’s book has been a focal point for philosophers and social scientists interested in social ontology, and its claims and arguments continue to be hotly disputed today. Francesco Guala, a professor at the University of Milan and a philosopher with a strong interest in economics, has written a book that can in many ways be considered both a legitimate successor to and a thoroughly argued critique of Searle’s pioneering work. Understanding Institutions is a compact articulation of Guala’s thoughts about institutions and social ontology, developed over several publications in economics and philosophy journals. It is a legitimate successor to Searle’s book because all the central themes in social ontology that Searle discussed are also discussed by Guala. But it is also a strong critique of Searle’s general approach to social ontology: while the latter relies on an almost complete (and explicit) rejection of the social sciences and their methods, Guala instead argues for a naturalistic approach to social ontology that combines the insights of philosophers with the theoretical and empirical results of the social sciences. Economics, and especially game theory, plays a major role in this naturalistic endeavor.

The book is divided into two parts of six chapters each, with an “interlude” of two additional chapters. The first part presents and argues for an original “rules-in-equilibrium” account of institutions that Guala has recently developed in several articles, some of them co-authored with Frank Hindriks. Two classical accounts of institutions have traditionally been endorsed in the literature. On the institutions-as-rules account, “institutions are the rules of the game in a society… the humanly devised constraints that shape human interactions” (North 1990, 3-4). Searle’s own account in terms of constitutive rules is a subspecies of the institutions-as-rules approach where institutional facts are regarded as the products of the assignment of status functions through performative utterances of the kind “this X counts as Y in circumstances C”. The institutions-as-equilibria account has been essentially endorsed by economists and game theorists. It identifies institutions with equilibria in games, especially in coordination games. In this perspective, institutions are best seen as devices that solve the classical problem of multiple equilibria, as they select one strategy profile over which the players’ beliefs and actions converge. Guala’s major claim in this part is that properly accounting for institutions calls for the merging of these two approaches. This is done through the key concept of correlated equilibrium: institutions are pictured as “choreographers” coordinating the players’ choices on the basis of public (or semi-public) signals indicating to each player what she should do. Institutions then take the form of lists of indicative conditionals, i.e. statements of the form “if X, then Y”. Formally, institutions materialize as statistically correlated patterns of behavior with the equilibrium property that no one has an interest in unilaterally changing her behavior.

The motivation for this new approach follows from the insufficiencies of the institutions-as-rules and institutions-as-equilibria accounts, but also from the need to answer fundamental questions regarding the nature of the social world. Regarding the former, it has been widely acknowledged that one of the main defects of the institutions-as-rules account is that it lacks a convincing explanation of why people are motivated to follow rules. The institutions-as-equilibria approach, for its part, is unable to account for the specificity of human beings regarding their ability to reflect on the rules and the corresponding behavioral patterns that are implemented. Playing equilibria is far from being specific to humans, as evolutionary biologists recognized long ago. However, being able to explain why one is following some rule, or to communicate through a language about the rules that are followed, are capacities that only humans have. There are also strong reasons to think that the mental operations and intentional attitudes that sustain equilibrium play in human populations are far more complex than in any other animal population. Maybe the most striking result of this original account of institutions is that Searle’s well-known distinction between constitutive and regulative rules collapses. Indeed, building on a powerful argument made by Frank Hindriks (2009), Guala shows that Searle’s “this X counts as Y in C” formula reduces to a conjunction of “if X then Y” conditionals corresponding to regulative rules. “Money”, “property” or “marriage” are theoretical terms that are ultimately expressible through regulative rules.

The second part of the book explores the implications of the rules-in-equilibrium account of institutions for a set of related philosophical issues about reflexivity, realism and fallibilism in social ontology. This exploration comes after a useful two-chapter interlude where Guala successively discusses the topics of mindreading and collective intentionality. In these two chapters, Guala contends, following the pioneering work of David Lewis (1969), that the ability of institutions to solve coordination problems depends on the formation of iterated chains of mutual expectations of the kind “I believe that you believe that I believe…” and so on ad infinitum. It is suggested that the formation of such chains is generally the product of a simulation process where each player forms expectations about the behavior of others by simulating their reasoning, on the assumption that others are reasoning like her. In particular, following the work of Morton (2003), Guala suggests that coordination is often reached through “solution thinking”, i.e. a reasoning process where each player first asks which is the most obvious or natural way to tackle the problem and then assumes that others are reasoning toward the same conclusion as she is. The second part provides a broad defense of realism and fallibilism in social ontology. Here, Guala’s target is no longer Searle, as the latter also endorses realism (though Searle’s writings on this point are ambiguous and sometimes contradictory, as Guala shows), but rather various forms of social constructionism. The latter hold that the social realm and the natural realm are intrinsically different because of a fundamental relation between how the social world works and how humans (and especially social scientists) reflect on how it works. Such a relationship is deemed to be unknown to the natural sciences and the natural world, and therefore, the argument goes, “social kinds” differ considerably from natural kinds.
The most extreme forms of social constructionism hold the view that we cannot be wrong about social kinds and objects as the latter are fully constituted by our mental attitudes about them.

The general problem tackled by Guala in this part is what he characterizes as the dependence between mental representations of social kinds and social kinds. The dependence can be both causal and constitutive. As Guala shows, the former is indeed a feature of the social world but is unproblematic in the rules-in-equilibrium account. Causal dependency merely reflects the fact that equilibrium selection is belief-dependent, i.e. when there are several equilibria, which one is selected depends on the players’ beliefs about which equilibrium will be selected. Constitutive dependency is a trickier issue. It assumes that an ontological dependence holds between a statement “Necessarily (X is K)” and a statement “We collectively accept that (X is K)”. For instance, on this view, a specific piece of paper (X) is money (K) if and only if it is collectively accepted that this is the case. It is then easy to see why we cannot be wrong about social kinds. Guala claims that constitutive dependence is false on the basis of a strong form of non-cognitivism that makes a radical distinction between folk classifications of social objects and what these objects are really doing in the social world: “Folk classificatory practices are in principle quite irrelevant. What matters is not what type of beliefs people have about a certain class of entities (the conditions they think the entities ought to satisfy to belong to that class) but what they do with them in the course of social interactions” (p. 170). Guala strengthens his point in the penultimate chapter building on semantic externalism, i.e. the view that meaning is not intrinsic but depends on how the world actually is. Externalism implies that the meaning of institutional terms is determined by people’s practices, not by their folk theories. An illustration of the implication of this view is given in the last two chapters through the case of the institution of marriage. 
Guala argues for a distinction between scientific considerations about what marriage is and normative considerations regarding what marriage should be.

Guala’s book is entertaining, stimulating and thought-provoking. Moreover, as it is targeted at a wide audience of social scientists and philosophers, it is written in plain language and devoid of unnecessary technicalities. Without doubt, it will quickly become a reference work for anyone who believes that naturalism is the right way to approach social ontology. Given the span of the book (and its relatively short length – 222 pages in total), there are however many claims that would call for more extensive arguments to be completely convincing. Each chapter contains a useful “further readings” section that helps the interested reader to go further. Still, there are several points where I consider that Guala’s discussion should be qualified. I will briefly mention three of them. The first one concerns the very core of Guala’s “rules-in-equilibrium” account of institutions. As the author notes himself, the idea is not wholly new, as it has been suggested several times in the literature. Guala’s contribution however resides in his handling of the conceptual view that institutions are both rules and equilibria with an underlying game-theoretic framework that has been explored and formalized by Herbert Gintis (2009) and, even before, by Peter Vanderschraaf (1995). Vanderschraaf was the first to suggest that Lewis’ conventions should be formalized as correlated equilibria, and Gintis has expanded this view to social norms. By departing from the institutions-as-equilibria account, Guala endorses a view of institutions that eschews the behaviorism characterizing most of the game-theoretic literature on institutions, where institutions are simply conceived as behavioral patterns. The concept of correlated equilibrium indeed allows for a “thicker” view of institutions as sets of (regulative) rules having the form of indicative conditionals.
I think however that this departure from behaviorism is insufficient, as it fails to acknowledge the fact that institutions also rely on subjunctive (and not merely indicative) conditionals. Subjunctive conditionals are of the form “Were X, then Y” or “Had X, then Y” (in the latter case, they correspond to counterfactuals). The use of subjunctive conditionals to characterize institutions is not needed if rules are what Guala calls “observer-rules”, i.e. devices used by social scientists to describe an institutional practice. The reason is that if the institution is working properly, we will never observe behavior off the equilibrium path. But this is no longer true if rules are “agent-rules”, i.e. devices used by the players themselves to coordinate. In this case, the players must use (if only tacitly) counterfactual reasoning to form beliefs about what would happen in events that cannot happen at the equilibrium. This point is obscured by the twofold fact that Guala only considers simple normal-form games and does not explicitly formalize the epistemic models that underlie the correlated equilibria in the coordination games he discusses. However, as several game theorists have pointed out, we cannot avoid dealing with counterfactuals when we want to account for the way rational players reason to achieve equilibrium outcomes, especially in dynamic games. Avner Greif’s (2006) discussion of the role of “cultural beliefs” in his influential work on the economic institutions of the Maghribi traders emphasizes the importance of counterfactuals in the working of institutions. Indeed, Greif shows that differences in the players’ beliefs at nodes that are off the equilibrium path may result in significantly different behavioral patterns.

A second, related point on which I would slightly amend Guala’s discussion concerns his argument that public (i.e. self-evident) events are unnecessary for the generation of common beliefs (see his chapter 7 about mindreading). Here, Guala follows claims made by game theorists like Ken Binmore (2008) regarding the scarcity of such events, concluding that institutions cannot depend on their existence. Guala indeed argues that neither Morton’s “solution thinking” nor Lewis’ “symmetric reasoning” relies on the existence of this kind of event. I would qualify this claim for three reasons. First, if public events are defined as publicly observable events, then their role in the social world is an empirical issue that is far from settled. Chwe (2001) has for instance argued for their importance in many societies, including modern ones. Arguably, modern communication technologies make such events more common, if anything. Second, Guala rightly notes in his discussion of Lewis’ account of the generation of common beliefs (or common reason to believe) that common belief in some state of affairs or event R (where R is for instance any behavioral pattern) depends on a state of affairs or event P and on the fact that people are symmetric reasoners with respect to P. Guala suggests however that in Lewis’ account, P should be a public event. This is not quite right, as it is sufficient for P to be mutually believed up to the second order (i.e. everyone believes P and everyone believes that everyone believes P). However, the fact that everyone is a symmetric reasoner with respect to P has to be commonly believed (Sillari 2008). The issue is thus what grounds this common belief. Finally, if knowledge and belief are set-theoretically defined, then for any common knowledge event R there must be a public event P. I would argue in this case that rather than characterizing public events in terms of observability, it is better to characterize them in terms of mutual accessibility, i.e.
in a given society, there are events that everyone comes to know or believe even if she cannot directly observe them simply because they are assumed to be self-evident.

My last remark concerns Guala’s defense of realism and fallibilism about social kinds. I think that Guala is fundamentally right regarding the falsehood of constitutive dependence. However, his argument ultimately relies on a functionalist account of institutions: institutions are not what people take them to be but are rather defined by the functions they fulfill in general in human societies. To make sense of this claim, one should be able to distinguish between “type-institutions” and “token-institutions” and claim that the functions associated with the former can be fulfilled in several ways by the latter. Crucially, for any type-institution I, the historical forms taken by the various token-institutions I cannot serve as a basis for characterizing what I is or should be. To argue the contrary would condemn one to some form of traditionalism forbidding the evolution of an institution (think of same-sex marriage). The problem with this argument is that while it may be true that the way people represent a type-institution I at a given time and location through a token-institution I cannot define what I is, it remains to be determined how the functions of I are to be established. Another way to state the problem is the following: while one (especially the social scientist) may legitimately identify I with a class of games it solves, thus determining its functions, it is not clear why we could not identify I with another (not necessarily mutually exclusive) class of games. Fallibilism about social kinds supposes that we can identify the functions of an institution, but is this very identification not itself grounded on collective representations and acceptance? If it is, then some work remains to be done to fully establish realism and fallibilism about social kinds.


Binmore, Ken. 2008. “Do Conventions Need to Be Common Knowledge?” Topoi 27 (1–2): 17.

Chwe, Michael Suk-Young. 2001. Rational Ritual: Culture, Coordination, and Common Knowledge. Princeton University Press.

Gintis, Herbert. 2009. The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences. Princeton University Press.

Greif, Avner. 2006. Institutions and the Path to the Modern Economy: Lessons from Medieval Trade. Cambridge University Press.

Hindriks, Frank. 2009. “Constitutive Rules, Language, and Ontology.” Erkenntnis 71 (2): 253–75.

Lewis, David. 1969. Convention: A Philosophical Study. John Wiley & Sons.

Morton, Adam. 2003. The Importance of Being Understood: Folk Psychology as Ethics. Routledge.

North, Douglass C. 1990. Institutions, Institutional Change and Economic Performance. Cambridge University Press.

Searle, John R. 1995. The Construction of Social Reality. Simon and Schuster.

Sillari, Giacomo. 2008. “Common Knowledge and Convention.” Topoi 27 (1–2): 29–39.

Vanderschraaf, Peter. 1995. “Convention as Correlated Equilibrium.” Erkenntnis 42 (1): 65–87.

Isaac Levi on Rationality, Deliberation and Prediction (3/3)

This is the last of a three-part post on the philosopher Isaac Levi’s account of the relationship between deliberation and prediction in decision theory, an account which is an essential part of Levi’s more general theory of rationality. Levi’s views potentially have tremendous implications for economists, especially regarding the current use of game theory. These views are developed in particular in several essays collected in his book The Covenant of Reason, especially “Rationality, prediction and autonomous choice”, “Consequentialism and sequential choice” and “Prediction, deliberation and correlated equilibrium”. The first post presented and discussed Levi’s main thesis that “deliberation crowds out prediction”. The second post discussed some implications of this thesis for decision theory and game theory, specifically the equivalence between games in dynamic form and in normal form. On the same basis, this post evaluates the relevance of the correlated equilibrium concept for Bayesianism in the context of strategic interactions. The three posts are collected in a single pdf file here.


In his important article “Correlated Equilibrium as an Expression of Bayesian Rationality”, Robert Aumann argues that Bayesian rationality in strategic interactions makes correlated equilibrium the natural solution concept for game theory. Aumann’s claim is a direct and explicit answer to a short paper by Kadane and Larkey which argues that the extension of Bayesian decision theory to strategic interactions leads to a fundamental indetermination at the theoretical level regarding how Bayesian rational players will/should play. For these authors, the way a game will be played depends on contextual and empirical features about which the theorist has little to say. Aumann’s aim is clearly to show that the game theorist endorsing Bayesianism is not committed to such nihilism. Aumann’s paper was one of the first contributions to what is nowadays sometimes called the “epistemic program” in game theory. The epistemic program can be described as an attempt to characterize various solution concepts for normal- and extensive-form games (Nash equilibrium, correlated equilibrium, rationalizability, subgame perfection, …) in terms of sufficient epistemic conditions regarding the players’ rationality and their beliefs and knowledge about others’ choices, rationality and beliefs. While classical game theory in the tradition inspired by Nash has followed a “top-down” approach consisting in determining which strategy profiles in a game correspond to a given solution concept, the epistemic approach rather follows a “bottom-up” perspective and asks under what conditions a given solution concept will be implemented by the players. While Levi’s essay “Prediction, deliberation and correlated equilibrium” focuses on Aumann’s defense of the correlated equilibrium solution concept, its main points are essentially relevant for the epistemic program as a whole, as I will try to show below.

Before delving into the details of Levi’s argument, it might be useful to first provide a semi-formal definition of the correlated equilibrium solution concept. Denote A = A1 × … × An the joint action space, i.e. the Cartesian product of the sets of pure strategies of the n players in a game. Assume that each player i = 1, …, n has a cardinal utility function ui(.) representing his preferences over the set of outcomes (i.e. strategy profiles) determined by A. Finally, let Γ be some probability space of signals. A function f: Γ –> A defines a correlated equilibrium if, for any signal γ, f(γ) = a is a strategy profile such that each player maximizes his expected utility conditional on the strategy he is playing:

For all i and all strategies ai’ ≠ ai, Eui(ai | ai) ≥ Eui(ai’ | ai)

Correspondingly, the numbers Prob{f-1(a)} define the correlated distribution over A that is implemented in the correlated equilibrium. The set of correlated equilibria in any given game is always at least as large as the set of Nash equilibria. Indeed, every Nash equilibrium is a correlated equilibrium, and so is every probability mixture in the convex hull of the Nash equilibria; the set of correlated equilibria may even extend beyond this convex hull. As an illustration, consider the famous hawk-dove game (Row’s payoff listed first):

                     Column
                     C          D
Row        C       5 ; 5      3 ; 7
           D       7 ; 3      2 ; 2

This game has two Nash equilibria in pure strategies (i.e. [D, C] and [C, D]) and one Nash equilibrium in mixed strategies where each player plays C with probability 1/3. There are however many more correlated equilibria in this game. One of them is trivially given by the following correlated distribution, where a public fair coin toss selects either [C, D] or [D, C] (entries are the probabilities of each strategy profile):

                     Column
                     C          D
Row        C         0         1/2
           D        1/2         0
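To make the incentive constraints of the definition concrete, here is a minimal sketch in Python that checks whether a given distribution over strategy profiles is a correlated equilibrium of the hawk-dove game. The payoff numbers come from the matrix above; the data layout and function name are mine, not from the original post.

```python
# Payoffs u[player][(row_action, column_action)], taken from the hawk-dove matrix.
payoffs = {
    0: {('C', 'C'): 5, ('C', 'D'): 3, ('D', 'C'): 7, ('D', 'D'): 2},  # Row
    1: {('C', 'C'): 5, ('C', 'D'): 7, ('D', 'C'): 3, ('D', 'D'): 2},  # Column
}

def is_correlated_equilibrium(dist, payoffs, actions=('C', 'D')):
    """dist maps strategy profiles (row_action, column_action) to probabilities.

    A distribution is a correlated equilibrium if, for each player and each
    recommended action, no deviation raises expected utility conditional on
    that recommendation. We compare unnormalized conditional sums, since the
    normalizing probability is a common positive factor."""
    for i in (0, 1):  # 0 = Row, 1 = Column
        for rec in actions:          # the action the signal recommends to player i
            for dev in actions:      # a candidate deviation
                obey = deviate = 0.0
                for other in actions:
                    profile = (rec, other) if i == 0 else (other, rec)
                    dev_profile = (dev, other) if i == 0 else (other, dev)
                    obey += dist.get(profile, 0.0) * payoffs[i][profile]
                    deviate += dist.get(profile, 0.0) * payoffs[i][dev_profile]
                if deviate > obey + 1e-9:
                    return False
    return True

# A public fair coin selecting [C, D] or [D, C] is a correlated equilibrium:
print(is_correlated_equilibrium({('C', 'D'): 0.5, ('D', 'C'): 0.5}, payoffs))  # True

# Always recommending [C, C] is not: each player would deviate to D.
print(is_correlated_equilibrium({('C', 'C'): 1.0}, payoffs))  # False
```

The same check confirms that any Nash equilibrium of the game, pure or mixed (viewed as a product distribution over profiles), passes the test, since every Nash equilibrium is a correlated equilibrium.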

In his paper, Aumann establishes the following important theorem:

Aumann’s theorem – For any game, if

(i)        Each player i has a probability measure pi(.) over the joint-action space A;

(ii)       These probability measures coincide (a common prior), i.e. p1(.) = … = pn(.) = p(.);

(iii)      The players are Bayesian rational (i.e. maximize expected utility) and this is common knowledge;

then, the players implement a correlated equilibrium corresponding to a function f with the correlated distribution defined by Prob{f-1(a)} = p(a).

The theorem thus shows that Bayesian rational players endowed with common knowledge of their rationality and a common prior belief over the joint-action space must implement a correlated equilibrium. Therefore, it seems that Kadane and Larkey were indeed too pessimistic in claiming that nothing can be said regarding what will happen in a game with Bayesian decision makers.

Levi attacks Aumann’s conclusion by rejecting all of its premises. Once again, this rejection is grounded on the “deliberation crowds out prediction” thesis. Actually, Levi makes two distinct and relatively independent criticisms of Aumann’s assumptions. The first concerns an assumption that I have left implicit, while the second targets premises (i)-(iii) together. I will consider them in turn.

Implicit in Aumann’s theorem is an assumption that Levi calls “ratifiability”. To understand what this means, it is useful to recall that a Bayesian decision maker maximizes expected utility using conditional probabilities over states given acts. In other words, a Bayesian decision maker has to account for the possibility that his choice may reveal and/or influence the likelihood that a given state is the actual state. Evidential decision theorists like Richard Jeffrey claim in particular that it is right to see one’s choice as evidence for the truth-value of various state-propositions even in cases where no obvious causal relationship seems to hold between one’s choice and the states of nature. This point is particularly significant in a game-theoretic context where, while the players make their choices independently (in a normal-form game), some kind of correlation between choices and beliefs may be seen as plausible. The most extreme case is provided by the prisoner’s dilemma, which Levi discusses at length in his essay (Row’s payoff listed first):

                     Column
                     C          D
Row        C       5 ; 5      1 ; 6
           D       6 ; 1      2 ; 2

The prisoner’s dilemma has a unique Nash equilibrium: [D, D]. Clearly, given the definition above, this strategy profile is also the sole correlated equilibrium. However, from a Bayesian perspective, Levi argues that it is perfectly fine for Row to reason along the following lines:

“Given what I know and believe about the situation and the other player, I believe almost for sure that if I play D, Column will also play D. However, I also believe that if I play C, there is a significant chance that Column will play C”.

Suppose that Row’s conditional probabilities are p(Column plays D|I play D) = 1 and p(Column plays C|I play C) = ½. Then, Row’s expected utilities are respectively Eu(D) = 2 and Eu(C) = 3. As a consequence, being Bayesian rational, Row should play C, i.e. should choose to play a dominated strategy. Is there anything wrong with Row reasoning this way? The definition of the correlated equilibrium solution concept excludes this kind of reasoning because, for any action ai, the computation of expected utilities for each alternative action ai’ should be made using the conditional probabilities p(.|ai). This corresponds indeed to the standard definition of ratifiability in decision theory as put by Jeffrey: “A ratifiable decision is a decision to perform an act of maximum estimated desirability relative to the probability matrix the agent thinks he would have if he finally decided to perform that act.” In the prisoner’s dilemma, it is easy to see that only D is ratifiable: were Row to consider playing C with the conditional probabilities given above, he would do better by playing D; indeed, Eu(D|Column plays C with probability ½) > Eu(C|Column plays C with probability ½).
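Levi’s numbers are easy to verify. The following sketch in Python (the function names are mine, not Levi’s) computes Row’s evidential expected utilities with these conditional probabilities and then runs Jeffrey’s ratifiability comparison:

```python
# Row's payoffs u(row_action, column_action) in the prisoner's dilemma above.
u_row = {('C', 'C'): 5, ('C', 'D'): 1, ('D', 'C'): 6, ('D', 'D'): 2}

# Row's conditional beliefs about Column's action given Row's own choice:
# p(Column plays D | I play D) = 1 and p(Column plays C | I play C) = 1/2.
p_col_given_row = {
    'D': {'C': 0.0, 'D': 1.0},
    'C': {'C': 0.5, 'D': 0.5},
}

def expected_utility(my_action):
    """Evidential expected utility: probabilities are conditioned on my own act."""
    return sum(p * u_row[(my_action, col)]
               for col, p in p_col_given_row[my_action].items())

print(expected_utility('D'))  # 2.0
print(expected_utility('C'))  # 3.0 -> the dominated strategy C maximizes EU

def eu_if_settled_on(final_choice, act):
    """Expected utility of act under the probability matrix Row would have
    if he finally decided on final_choice (Jeffrey's ratifiability test)."""
    return sum(p * u_row[(act, col)]
               for col, p in p_col_given_row[final_choice].items())

# Having settled on C, Row would do better switching to D: C is not ratifiable.
print(eu_if_settled_on('C', 'D'))  # 4.0 > 3.0
# Having settled on D, no switch helps: D is the only ratifiable choice.
print(eu_if_settled_on('D', 'C'))  # 1.0 < 2.0
```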

As Levi recognizes, the addition of ratifiability as a criterion of rational choice leads de facto to exclude the possibility that the players in a game may rationally believe that some form of causal dependence holds between their choices. Indeed, as formally shown by Oliver Board, Aumann’s framework tacitly builds upon an assumption of causal independence but also of common belief in causal independence. For some philosophers and game theorists, this is unproblematic and indeed required since it is constitutive of game-theoretic reasoning (see for instance this paper of Robert Stalnaker). Quite the contrary, Levi regards this exclusion as illegitimate at least on a Bayesian ground.

Levi’s rejection of premises (i)-(iii) is more directly related to his “deliberation crowds out prediction” thesis. Actually, we may even focus on premises (i) and (iii), as premise (ii) depends on (i). Consider first the assumption that the players have a probability measure over the joint-action space. Contrary to a standard Bayesian decision problem, where the probability measure is defined over a set of states that is distinct from the set of acts, in a game-theoretic context the domain of the probability measures encompasses each player’s own strategy choice. In other words, this leads the game theorist to assume that each player ascribes an unconditional probability to his own choice. I have already explained why Levi regards this assumption as unacceptable if one wants to account for the way decision makers reason and deliberate.* The common prior assumption (ii) is of course even less commendable in this perspective, especially if we consider that such an assumption pushes us outside the realm of strict Bayesianism. Regarding assumption (iii), Levi’s complaint is similar: common knowledge of Bayesian rationality implies that each player knows that he is rational. However, if a player knows that he is rational before making his choice, then he already regards as feasible only admissible acts (recall Levi’s claims 1 and 2). Hence, no deliberation has to take place.

Levi’s critique of premises (i)-(iii) seems to extend to the epistemic program as a whole. What is at stake here is the epistemological and methodological status of the theoretical models built by game theorists. The question is the following: what is the modeler trying to establish regarding the behavior of players in strategic interactions? There are two obvious possibilities. The first is that, as an outside observer, the modeler is trying to make sense of (i.e. to describe and explain) players’ choices after having observed them. Relatedly, still as an outside observer, he may try to predict players’ choices before they are made. The second possibility is to regard game-theoretic models as tools to account for the players’ reasoning process prospectively, i.e. how players deliberate to make their choices. Levi’s “deliberation crowds out prediction” thesis could grant some relevance to the first possibility but not to the second. However, he contends that Aumann’s argument for correlated equilibrium cannot be only retrospective but must also be prospective.** If Levi is right, the epistemic program as a whole is affected by this argument, though fortunately there is room for alternative approaches, as illustrated by Bonanno’s paper mentioned in the preceding post.


* The joint-action space assumption results from a technical constraint: if we want to exclude the player’s own choice from the action space, we then have to account for the fact that each player has a different action space over which he forms beliefs. In principle, this can be dealt with even though this would lead to more cumbersome formalizations.

** Interestingly, in a recent paper with Jacques Dreze which makes use of the correlated equilibrium solution concept, Aumann indeed argues that the use of Bayesian decision theory in a game-theoretic context has such prospective relevance.

Working Paper: “Game Theory, Game Situations and Rational Expectations: A Dennettian View”

I have just finished a new working paper entitled “Game Theory, Game Situations and Rational Expectations: A Dennettian View” which I will present at the 16th international conference of the Charles Gide Association for the Study of Economic Thought. The paper is a bit esoteric as it discusses the formalization of rational expectations in a game-theoretic and epistemic framework on the basis of the philosophy of mind, and especially Daniel Dennett’s intentional-stance functionalism. As usual, comments are welcome.

Christmas, Economics and the Impossibility of Unexpected Events


Each year, as Christmas approaches, economists like to remind everyone that making gifts is socially inefficient. The infamous “Christmas deadweight loss” corresponds to the fact that the allocation of resources is suboptimal because people would have chosen to buy different things than the ones they receive as gifts at Christmas had they been given the equivalent value in cash. This is a provocative result, but it follows from straightforward (though clearly shortsighted) economic reasoning. I would like here to point out another disturbing result that comes from economic theory. Though it is not specific to the Christmas period, it is rather less straightforward, which makes it much more interesting. It is related to the (im)possibility of surprising people.

I will take for granted that one of the points of a Christmas present is to try to surprise the person you are making the gift to. Of course, many people make wish lists, but the point is precisely that 1) one will rarely expect to receive all the items indicated on one’s list and 2) the list may be fairly open, or at least give others an idea of the kind of presents one wishes to receive without being too specific. In any case, apart from Christmas, there are several other social institutions whose value is partially derived from the possibility of surprising people (think of April fools). However, on the basis of the standard rationality assumptions made in economics, it is clear that surprising people is simply impossible, and even nonsense.

I start with some definitions. An event is a set of states of the world in which each person behaves in a certain way (e.g. makes some specific gifts to others) and holds some specific conjectures or beliefs about what others are doing and believing. I call an event unexpected if at least one person attributes to it a null prior probability of realizing. An event is impossible if it is inconsistent with the people’s theory (or model) of the situation they are in. The well-known “surprise exam paradox” gives a great illustration of these definitions. A version of this example is as follows:

The Surprise Exam Paradox: At day D0, the teacher T announces to his students S that he will give them a surprise exam either at D1 or at D2. Denote En the event “the exam is given at day Dn” (n = 1, 2) and assume that the students S believe the teacher T’s announcement. They also know that T really wants to surprise them, and they know that he knows that. Finally, we assume that S and T have common knowledge of their reasoning abilities. On this basis, the students reason in the following way:

SR1: If the exam is not given at D1, it will necessarily be given at D2 (i.e. E2 has probability 1 according to S if not E1). Hence, S will not be surprised.
SR2: S knows that T knows SR1.
SR3: Therefore, T will give the exam at D1 (i.e. E1 has probability 1 according to S). Hence, S will not be surprised.
SR4: S knows that T knows SR3.
SR5: S knows that T knows SR1-SR4, hence the initial announcement is impossible.

The final step of S’s reasoning (SR5) indicates that there is no event En that is both unexpected and consistent with S’s theory of the situation as represented by the assumptions stated in the description of the case. Still, suppose that T gives the exam at D2; then indeed the students will be surprised, but in a very different sense than the one we have figured out. The surprise exam paradox is a paradox because whatever T decides to do is inconsistent with at least one of the premises constitutive of the theory of the situation. In other words, the students are surprised because they have the wrong theory of the situation, but this is quite “unfair” since the theory is the one the modeler has given to them.
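The students’ elimination reasoning (SR1-SR5) can be sketched as a small backward-elimination procedure. This is a minimal sketch rather than a full epistemic model: the only assumption encoded is that a day can host a surprise exam only if it is not the sole remaining candidate when it arrives.

```python
# Backward elimination in the spirit of SR1-SR5: if, when day d arrives,
# d is the only candidate day left, the exam at d is predictable and d is
# eliminated. Iterating this elimination empties the candidate set.
def surprise_days(days):
    candidates = set(days)
    changed = True
    while changed:
        changed = False
        # Iterate over a snapshot so we can safely remove during the loop.
        for d in sorted(candidates, reverse=True):
            remaining_at_d = {e for e in candidates if e >= d}
            if remaining_at_d == {d}:  # d would be the only option left (SR1, SR3)
                candidates.remove(d)
                changed = True
    return candidates

print(surprise_days([1, 2]))     # set(): the announcement is inconsistent
print(surprise_days([1, 2, 3]))  # set(): adding days does not help
```

The empty result for any number of days mirrors SR5: within this theory of the situation, no day can host an unexpected exam, which is precisely why the announcement is impossible rather than merely improbable.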

Now, the point is that surprise is similarly impossible in economics under the standard assumption of rational expectations. Actually, this directly follows from how this assumption is stated in macroeconomics: an agent’s expectations are rational if they correspond to the actual state of the world on average. The last clause “on average” means that for any given variable X, the difference between the agent’s expectation of the value of X and the actual value of X is captured by a random error variable of mean 0. This variable is assumed to follow some probabilistic distribution that is known by the agent. Hence, while the agent’s rational expectation may actually be wrong, he will never be surprised, whatever the actual value of X. This is due to the fact that he knows the probability distribution of the error term; hence he expects to be wrong according to this probability distribution, even though he expects to be right on average.
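The “on average” clause can be illustrated by a short simulation; the true value and the error distribution below are arbitrary assumptions for illustration.

```python
# Rational expectations "on average": the forecast misses the true value
# in almost every draw, but the misses come from a known mean-zero
# distribution, so the agent is right on average and never surprised.
import random

random.seed(0)
true_X = 100.0
sigma = 5.0  # known standard deviation of the error term (an assumption)

errors = [random.gauss(0, sigma) for _ in range(100_000)]
forecasts = [true_X + e for e in errors]

mean_forecast = sum(forecasts) / len(forecasts)
mean_abs_error = sum(abs(e) for e in errors) / len(errors)
print(round(mean_forecast, 1))   # close to 100.0: right on average
print(round(mean_abs_error, 1))  # around 4: wrong in almost every draw
```

The point of the sketch is the coexistence of the two printed numbers: a large typical error alongside an unbiased average, which is exactly why no realization of X can count as a surprise for the agent.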

However, things are more interesting in the strategic case, i.e. when the value of X depends on the behavior of each person in the population, the latter depending itself on each person’s expectations about others’ behavior and expectations. Then, the rational expectations hypothesis is akin to assuming some kind of consistency between the persons’ conjectures (see this previous post on this point). At the most general level, we assume that the value of X (deterministically or stochastically) depends on the profile of actions s = (s1, s2, …, sn) of the n agents in the population, i.e. X = f(s). We also assume that there is mutual knowledge that each person is rational: she chooses the action that maximizes her expected utility given her beliefs about others’ actions, hence si = si(bi) for all agents i in the population, with bi agent i’s conjecture about others’ actions. It follows that it is mutual knowledge that X = f(b1, b2, …, bn). An agent i’s conjecture is rational if bi* = (s1*, …, si-1*, si+1*, …, sn*), with sj* the actual behavior of agent j. Denote s* = (s1*(b1*), s2*(b2*), …, sn*(bn*)) the resulting strategy profile. Since there is mutual knowledge of rationality, the fact that one knows s* implies that he knows each bi* (assuming that there is a one-to-one mapping between conjectures and actions); hence the profile of rational conjectures b* = (b1*, b2*, …, bn*) is also mutually known. By the same reasoning, k-th order mutual knowledge of rationality entails k-th order mutual knowledge of b*, and common knowledge of rationality entails common knowledge of b*. Therefore, everyone correctly predicts X and this is common knowledge.
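A rational profile of conjectures is thus a fixed point: each agent’s conjecture equals the behavior it induces. A minimal sketch, using an assumed symmetric linear best-reply function (a beauty-contest specification that is my illustration, not part of the argument above):

```python
# Mutually consistent conjectures in a symmetric linear game: each
# agent's best reply is a + b times the conjectured average action of
# the others. The coefficients are illustrative assumptions (|b| < 1).
a, b = 2.0, 0.5

def best_reply(conjectured_avg):
    return a + b * conjectured_avg

# Start from an arbitrary common conjecture and iterate: everyone
# best-replies to last round's average, then revises the conjecture to
# the realized average (symmetry makes the average equal the action).
conjecture = 0.0
for _ in range(50):
    conjecture = best_reply(conjecture)

print(round(conjecture, 6))  # 4.0 = a / (1 - b): the self-confirming conjecture
```

At the fixed point, the conjecture and the behavior it induces coincide, so everyone correctly predicts the aggregate outcome, which is the deterministic analogue of the rational expectations condition stated above.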

Another way to put this point is proposed by Robert Aumann and Jacques Dreze in an important paper where they show the formal equivalence between the common prior assumption and the rational expectations hypothesis. Basically, they show that a rational expectations equilibrium is equivalent to a correlated equilibrium, i.e. a (mixed-)strategy profile determined by the probabilistic distribution of some random device and where players maximize expected utility. As shown in another important paper by Aumann, two sufficient conditions for obtaining a correlated equilibrium are common knowledge of Bayesian rationality and a common prior over the strategy profiles that can be implemented (the common prior reflects the commonly known probabilistic distribution of the random device). This ultimately leads to another important result proved by Aumann: persons with a common prior and common knowledge of their ex post conjectures cannot “agree to disagree”. In a world where people have a common prior over some state space and common knowledge of their rationality or of their ex post conjectures (which here is the same thing), unexpected events are simply impossible. One already knows all that can happen and thus will ascribe a strictly positive probability to any possible event. This is nothing but the rational expectations hypothesis.

Logicians and game theorists who have dealt with Aumann’s theorems have proven that the latter build on a formal structure that is equivalent to the well-known S5 system in modal logic. The axioms of this system imply, among other things, logical omniscience (an agent knows all logical truths and the logical implications of what he knows) and, more controversially, negative introspection (when one does not know something, he knows that he does not know it). Added to the fact that everything is captured in terms of knowledge (i.e. true beliefs), it is intuitive that such a system is unable to deal with unexpected events and surprise. From a logical point of view, this problem can be answered simply by changing the axioms and assumptions of the formal system. Consider the surprise exam story once again. The paradox seems to disappear if we give up the assumption of common knowledge of reasoning abilities. For instance, we may suppose that the teacher knows the reasoning abilities of the students but not that the students know that he knows that. In this case, steps SR2, SR3 and SR4 cannot occur. Or we may suppose that the teacher knows the reasoning abilities of the students and that the students know that he knows that, but that the teacher does not know that they know that he knows. In this case, step SR5 in the students’ reasoning cannot occur. In both cases, the announcement is no longer inconsistent with the students’ and teacher’s knowledge. This is not completely satisfactory, however, for at least two reasons: first, the plausibility of the result depends on epistemic assumptions which are completely ad hoc. Second, the very nature of the formal systems of standard modal logic implies that the agent’s theory of a given situation captures everything that is necessarily true. In the revised version of the surprise exam example above, it is necessarily true that an exam will be given either at day D1 or D2; thus everyone must know that, and so the exam is not a surprise in the sense of an unexpected event.

The only way to avoid these difficulties is to enter the fascinating but quite complex realm of non-monotonic modal logic and belief revision theories. In practice, this consists in giving up the assumption that agents are logically omniscient, in the sense that they may not know something that is necessarily true. Faced with an inconsistency, an agent will adopt a belief revision procedure so as to make his beliefs and knowledge consistent with an unexpected event. In other words, though the agent does not expect to be surprised, it is possible to account for how he deals with unexpected information. As far as I know, there have been very few attempts in economics to build on such kinds of non-monotonic formalization to tackle expectations formation and revision, in spite of the growing importance of the macroeconomic literature on learning. Game theorists have been more prone to enter this territory (see this paper by Michael Bacharach for instance) but much remains to be done.

September Issue of the Journal of Institutional Economics on Institutions, Rules and Equilibria

The latest issue of the Journal of Institutional Economics features an interesting and stimulating set of articles on how to account for institutions in game theory (note that all articles are currently ungated). In the main article, “Institutions, rules, and equilibria: a unified theory”, the philosophers F. Hindriks and F. Guala attempt to unify three different accounts of institutions: the equilibrium account, the rule account and Searle’s constitutive rule account. They argue that the solution concept of correlated equilibrium is the key to such a unification. The latter retains the notion that institutions can only persist if they correspond to an equilibrium, but at the same time it emphasizes that institutions can be understood as correlating devices based on humans’ ability to symbolically represent rules (as a sidenote, I make a similar point in this forthcoming paper as well as in this working paper [a significantly different version of the latter is currently under submission]). The authors also argue that Searle’s constitutive rules are reducible to regulative rules (I have presented the argument here).

Several short articles by Vernon Smith, Robert Sugden, Ken Binmore, Masahiko Aoki, John Searle and Geoffrey Hodgson reflect on Hindriks and Guala’s paper. They are all interesting, but I would essentially recommend Sugden’s paper, because it tackles a key issue in the philosophy of science (i.e. whether or not scientific concepts should reflect “common-sense ontology”), and Searle’s response. I find the latter essentially misguided (it is not clear whether Searle understands game-theoretic concepts, and it makes the surprising claim that “if… then” (regulative) rules have no deontic component), but it still makes some interesting points regarding the fact that some institutions, such as promise-keeping, exist (and create obligations) even though they are not always followed.

Rational Expectations and the Standard Model of Social Ontology

Noah Smith has an interesting post where he refers to an article by Charles Manski about the rational expectations hypothesis (REH). Manski points out that in a stochastic environment it is highly unlikely that expectations are rational in the sense of the REH. However, he ultimately concludes that there is no better alternative. In this post, I want to point out that the REH is actually well in line with what the philosopher Francesco Guala calls in an article the “Standard Model of Social Ontology” (SMOSO), including the fact that it lacks empirical support. This somehow echoes Noah Smith’s conclusion that “Rational Expectations can’t be challenged on data grounds”.

Guala characterizes the SMOSO by the following three elements:

1) Reflexivity: Guala defines this as the fact that “social entities are constituted by beliefs about beliefs” (p. 961). A more general way to characterize reflexivity is that individuals form attitudes (mainly, beliefs) about the systems they are part of and thus attitudes about others’ attitudes. If it is assumed that these attitudes determine people’s actions and in turn, these actions determine the state of the system, then people’s attitudes determine the system. This may lead to the widely discussed phenomenon of self-fulfilling prophecies where the agents’ beliefs about others’ beliefs about the (future) state of the system bring the system to that state.

2) Performativity: it can be defined as the fact that the social reality is literally made by the agents’ attitudes and actions. The classical example is language: performative utterances like “I promise that Y” or “I make you man and wife” not only describe the social reality, they (in the appropriate circumstances) make it by creating a state of affairs that makes the utterance true. Other cases are for instance the fact that some pieces of paper are collectively regarded as money or the fact that raising one’s hand is regarded as a vote in favor of some proposition or candidate.

3) Collective intentionality: attitudes (in particular beliefs) constitutive of the social reality are in some way or another “collective”. Depending on the specific model, collective intentionality can refer to a set of individual attitudes (intentions, beliefs) generally augmented by an epistemic condition (usually, mutual or common knowledge of these attitudes) or a distinct collective attitude of the form “we intend to” or “we believe that”.

The three elements constitutive of the SMOSO are common to almost all the theories and models developed in social ontology and the philosophy of social science over the last thirty years. That does not mean that they fully determine the content of these theories and models: there are several mutually exclusive accounts of collective intentionality, as well as different ways to account for performativity and reflexivity. Now, I want to suggest that many economic models using the REH fall within this general model of social ontology. The REH states that economic agents do not make systematic errors in the prediction of the future value of relevant economic variables. In other words, they make correct predictions on average. Denote X(t) the value of any economic variable you want (price, inflation, …) at time t and X(t+1)^ei the expected value of X at time t+1 according to agent i. Formally, an expectation corresponds to X(t+1)^ei = E[X(t+1)|I(t)^i], with I(t)^i the information available at t for i and E the expectation operator. The REH is the assumption that X(t+1) = X(t+1)^ei + u, where u is an error term of mean 0. The proximity of the REH to the three elements of the SMOSO is more or less difficult to see but is nevertheless real.

The relationship between the REH and reflexivity is the easiest to state, because discussions of rational expectations in the 1950s find their roots in the treatment of the reflexivity issue, which itself originates in Oskar Morgenstern’s discussion of the “Holmes-Moriarty paradox”. Morgenstern was concerned with the fact that if the state of affairs that realizes depends on one’s beliefs about others’ beliefs about which state of affairs will realize, then it may be impossible to predict states of affairs. In the 1950s, papers by Simon and by Modigliani and Grunberg tackled this problem. Using fixed-point techniques, they show that under some conditions there is at least one solution x* = F(x*) such that the prediction x* regarding the value of some variable is self-confirmed by the actual value F(x*). In his article on rational expectations, Muth mentions as one of the characteristics of the REH the fact that a public prediction in the sense of Grunberg and Modigliani “will have no substantial effect on the operation of the economic system (unless it is based on inside information)”. So, the point is that a “rational prediction” should not change the state of the system.
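The fixed-point idea can be sketched numerically. The reaction function F below, mapping a published prediction to the resulting outcome, is an illustrative assumption (a partial-adjustment form guaranteeing a unique fixed point), not the specification used by Modigliani and Grunberg.

```python
# A self-confirming public prediction: if the economy's reaction to a
# published prediction x is F(x), a prediction is self-confirming when
# x* = F(x*). F here is an illustrative assumption.
def F(x):
    fundamental = 10.0
    adjustment = 0.6  # agents partially adjust toward the announcement
    return (1 - adjustment) * fundamental + adjustment * x

def fixed_point(f, x0=0.0, tol=1e-10, max_iter=1000):
    """Iterate x -> f(x) until it stabilizes (works since |F'| < 1)."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no convergence")

x_star = fixed_point(F)
print(round(x_star, 6))  # 10.0: announcing x* leaves the outcome at x*
```

Announcing any other value would move the outcome away from the announcement; only x* has Muth’s property that the public prediction “has no substantial effect on the operation of the economic system”.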

The relationship of the REH with performativity and collective intentionality is more difficult to characterize. Things are somewhat clearer, however, once we realize that the REH implies mutual consistency of the agents’ beliefs and actions (see this old post by economist Rajiv Sethi which makes this point clearly). This is due to the fact that in an economic system, the value X(t+1) of some economic variable at time t+1 will depend on the decisions si made by thousands of agents at t, i.e. X(t+1) = f(s1(t), s2(t), …, sn(t)). Assuming that these agents are rational (i.e. they maximize expected utility), the agents’ decisions depend on their conjectures X(t+1)^ei about the future value of the variable. But then this implies that one’s conjecture X(t+1)^ei is a conjecture about others’ decisions (s1(t), …, si-1(t), si+1(t), …, sn(t)) for any given functional relation f, and thus (assuming that rationality is common knowledge) a conjecture about others’ conjectures (X(t+1)^e1, …, X(t+1)^ei-1, X(t+1)^ei+1, …, X(t+1)^en). Since others’ conjectures are also conjectures about conjectures, we have an infinite chain of iterated conjectures about conjectures. Mutual consistency implies that everyone maximizes his utility given others’ behavior. In general, this will also imply that everyone forms the same, correct conjecture, which is identical to the REH in the special case where all agents have the same information, since we then have X(t+1) = X(t+1)^ei for all agents i. As Sethi indicates in his post, this is equivalent to what Robert Aumann called the “Harsanyi doctrine” or, more simply, the common prior assumption: any disagreement between agents must come from differences in information.

In itself, the relationship between the REH and the common prior assumption is interesting. Notably, if we consider that the common prior assumption is difficult to defend on empirical grounds, this should lead us to regard the REH with suspicion. But it also helps to make the link with the SMOSO. Regarding performativity, we have to give up the assumption (standard in macroeconomics) that the equilibrium is unique, i.e. there are at least two values X(t+1)* and X(t+1)** for which the agents’ plans and conjectures are mutually consistent. Now, any public announcement of the kind “the variable will take value X(t+1)* (resp. X(t+1)**)” is self-confirming. Moreover, this is common knowledge.[1] The public announcement plays the role of a “choreographer” (Herbert Gintis’ term) that coordinates the agents’ plans. This makes the link with collective intentionality. It is tempting to interpret the common prior assumption as some kind of “common mind hypothesis”, as if the economic agents were collectively sharing a worldview. Of course, as indicated above, it is also possible to adopt a less controversial interpretation by seeing this assumption as some kind of tacit agreement involving nothing but a set of individual attitudes. The way some macroeconomists defend the REH suggests a third interpretation: economic agents are able to learn about the economic world, and this learning generates a common background. In game-theoretic terms, we could also say that agents learn to play a Nash equilibrium (or a correlated equilibrium).

This last point is interesting when put in perspective with Guala’s critique of the SMOSO. Guala criticizes the SMOSO for its lack of empirical grounding. For instance, discussions about collective intentionality are typically conceptual but almost never build on empirical evidence. Most critics of the REH in economics make a similar point: the REH is adopted for several reasons (essentially conceptual and theoretical) but has no empirical foundations. The case of learning is particularly interesting: since the 1970s, one of the “empirical” defenses of the REH has been the casual claim that “you can’t fool people systematically”. This is the same as saying that over a more or less short term, people learn how the economy works. This is a pretty weak defense, to say the least. Economists actually do not know how economic agents learn, what the rate of the learning process is, and so on. Recently, a literature on learning and expectations has been developing, establishing for instance the conditions of convergence to rational expectations. As far as I can tell, this literature is essentially theoretical, but it is a first step toward providing more solid foundations for the REH… or dismissing it. The problem of the empirical foundations for any assumption regarding how agents form expectations is likely to remain, though.


[1] Going a little further, it can be shown that if the public announcement is made on the basis of a probabilistic distribution p where each equilibrium is announced with probability p(X(t+1)*), then p also defines a correlated equilibrium in the underlying game, i.e. agents behave as if they were playing a mixed strategy defined by p.
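The footnote’s claim can be checked in a minimal example. The coordination-game payoffs and the announcement probability below are illustrative assumptions of mine; the check is the standard obedience condition for a correlated equilibrium.

```python
# Public randomization over the two pure equilibria of a coordination
# game as a correlated equilibrium: after each announcement, obeying it
# must be a best reply for both players, given that the other obeys.
u = {("A", "A"): (2, 2), ("B", "B"): (1, 1),
     ("A", "B"): (0, 0), ("B", "A"): (0, 0)}
p_announce_A = 0.7  # probability the public device announces (A, A)

def is_correlated_equilibrium():
    for signal in ("A", "B"):
        deviation = "B" if signal == "A" else "A"
        for player in (0, 1):
            obey = u[(signal, signal)][player]
            profile = (deviation, signal) if player == 0 else (signal, deviation)
            if obey < u[profile][player]:
                return False
    return True

print(is_correlated_equilibrium())  # True: obedience is self-enforcing

# Ex ante expected payoff of obeying the device (for either player):
ex_ante = (p_announce_A * u[("A", "A")][0]
           + (1 - p_announce_A) * u[("B", "B")][0])
print(ex_ante)  # 1.7, a convex combination of the two equilibrium payoffs
```

Note that the obedience check holds signal by signal, so any announcement probability p defines a correlated equilibrium here, which is exactly the footnote’s point: the distribution p of the public announcement is itself the correlating device.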