Isaac Levi on Rationality, Deliberation and Prediction (3/3)

This is the last of a three-part post on the philosopher Isaac Levi’s account of the relationship between deliberation and prediction in decision theory, an account which is an essential part of Levi’s more general theory of rationality. Levi’s views potentially have tremendous implications for economists, especially regarding the current use of game theory. These views are more particularly developed in several essays collected in his book The Covenant of Reason, especially “Rationality, prediction and autonomous choice”, “Consequentialism and sequential choice” and “Prediction, deliberation and correlated equilibrium”. The first post presented and discussed Levi’s main thesis that “deliberation crowds out prediction”. The second post discussed some implications of this thesis for decision theory and game theory, specifically the equivalence between games in dynamic form and in normal form. On the same basis, this post evaluates the relevance of the correlated equilibrium concept for Bayesianism in the context of strategic interactions. The three posts are collected in a single pdf file here.

 

In his important article “Correlated Equilibrium as an Expression of Bayesian Rationality”, Robert Aumann argues that Bayesian rationality in strategic interactions makes correlated equilibrium the natural solution concept for game theory. Aumann’s claim is a direct and explicit answer to a short paper by Kadane and Larkey arguing that the extension of Bayesian decision theory to strategic interactions leads to a fundamental indeterminacy at the theoretical level regarding how Bayesian rational players will/should play. For these authors, the way a game will be played depends on contextual and empirical features about which the theorist has little to say. Aumann’s aim is clearly to show that the game theorist endorsing Bayesianism is not committed to such nihilism. Aumann’s paper was one of the first contributions to what is nowadays sometimes called the “epistemic program” in game theory. The epistemic program can be characterized as an attempt to identify, for various solution concepts for normal- and extensive-form games (Nash equilibrium, correlated equilibrium, rationalizability, subgame perfection, …), sufficient epistemic conditions regarding the players’ rationality and their beliefs and knowledge about others’ choices, rationality and beliefs. While classical game theory in the tradition inspired by Nash has followed a “top-down” approach, determining which strategy profiles in a game correspond to a given solution concept, the epistemic approach follows a “bottom-up” perspective and asks under which conditions a given solution concept will be implemented by the players. While Levi’s essay “Prediction, deliberation and correlated equilibrium” focuses on Aumann’s defense of the correlated equilibrium solution concept, its main points are relevant for the epistemic program as a whole, as I will try to show below.

Before delving into the details of Levi’s argument, it might be useful to first provide a semi-formal definition of the correlated equilibrium solution concept. Denote by A = A1 × … × An the joint action-space, i.e. the Cartesian product of the sets of pure strategies of the n players in a game. Assume that each player i = 1, …, n has a cardinal utility function ui(.) representing his preferences over the set of outcomes (i.e. strategy profiles) determined by A. Finally, let Γ be some probability space of signals. A function f: Γ –> A defines a correlated equilibrium if, for any signal γ, f(γ) = a is a strategy profile such that each player maximizes his expected utility conditional on the strategy he is recommended to play:

For all i and every alternative strategy ai’ ≠ ai:   Eui(ai | ai) ≥ Eui(ai’ | ai)

Correspondingly, the numbers Prob{f-1(a)} define the correlated distribution over A that is implemented in the correlated equilibrium. The set of correlated equilibria in any given game is always at least as large as the set of Nash equilibria. Indeed, every Nash equilibrium is also a correlated equilibrium, and the set of correlated equilibria contains the convex hull of the Nash equilibria (it may even be strictly larger). As an illustration, consider for instance the famous hawk-dove game:

            Column
             C       D
Row    C   5 ; 5   3 ; 7
       D   7 ; 3   2 ; 2

This game has two Nash equilibria in pure strategies (i.e. [D, C] and [C, D]) and one Nash equilibrium in mixed strategies where each player plays C with probability 1/3. There are however many more correlated equilibria in this game. One of them is trivially given for instance by the following correlated distribution:

            Column
             C       D
Row    C     0      1/2
       D    1/2      0
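
As a side note for readers who want to verify the incentive constraints behind this definition, here is a minimal computational sketch (not taken from Levi’s or Aumann’s papers; the names ACTIONS, PAYOFFS and is_correlated_equilibrium are merely illustrative). It checks the conditional expected-utility inequalities of the definition above for the hawk-dove game: the 50/50 distribution just given passes, as does the mixed Nash equilibrium viewed as a product distribution, while putting all the mass on [C, C] does not.

```python
# A minimal sketch (not from Levi's or Aumann's papers): checking the
# correlated equilibrium condition of the definition above for the hawk-dove
# game. The names ACTIONS, PAYOFFS and is_correlated_equilibrium are
# illustrative choices.

from itertools import product

ACTIONS = ["C", "D"]

# Payoffs (Row, Column) for the hawk-dove game of the text.
PAYOFFS = {
    ("C", "C"): (5, 5), ("C", "D"): (3, 7),
    ("D", "C"): (7, 3), ("D", "D"): (2, 2),
}

def is_correlated_equilibrium(dist):
    """dist maps strategy profiles (row_action, column_action) to probabilities.

    For each player and each recommended action a_i played with positive
    probability, check that no deviation yields a higher expected payoff
    conditional on the recommendation a_i.
    """
    for player in (0, 1):  # 0 = Row, 1 = Column
        for a_i in ACTIONS:
            # Joint probabilities of the opponent's actions when i is told a_i.
            # Comparing expectations weighted by these joint probabilities is
            # equivalent to using conditional probabilities, since both sides
            # would be divided by the same marginal probability of a_i.
            weights = {a_j: dist.get((a_i, a_j) if player == 0 else (a_j, a_i), 0)
                       for a_j in ACTIONS}
            if sum(weights.values()) == 0:
                continue  # a_i is never recommended
            def payoff(a, a_j):
                profile = (a, a_j) if player == 0 else (a_j, a)
                return PAYOFFS[profile][player]
            def expected(a):
                return sum(p * payoff(a, a_j) for a_j, p in weights.items())
            if any(expected(dev) > expected(a_i) + 1e-9 for dev in ACTIONS):
                return False
    return True

# The correlated distribution of the text: probability 1/2 on [C, D] and [D, C].
half_half = {("C", "D"): 0.5, ("D", "C"): 0.5}

# The mixed Nash equilibrium (each plays C with probability 1/3) viewed as a
# product distribution, and "both always play C" as a counterexample.
mixed_nash = {(a, b): (1/3 if a == "C" else 2/3) * (1/3 if b == "C" else 2/3)
              for a, b in product(ACTIONS, ACTIONS)}
all_C = {("C", "C"): 1.0}

print(is_correlated_equilibrium(half_half))   # True
print(is_correlated_equilibrium(mixed_nash))  # True
print(is_correlated_equilibrium(all_C))       # False: deviating to D pays 7 > 5
```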

In his paper, Aumann establishes the following important theorem:

Aumann’s theorem – For any game, if

(i) each player i has a probability measure pi(.) over the joint-action space A;

(ii) this probability measure is the same for all players, i.e. p1(.) = … = pn(.) = p(.);

(iii) the players are Bayesian rational (i.e. they maximize expected utility) and this is common knowledge;

then the players implement a correlated equilibrium corresponding to a function f with the correlated distribution defined by Prob{f-1(a)} = p(a).

The theorem thus shows that Bayesian rational players endowed with common knowledge of their rationality and a common prior belief over the joint-action space must implement a correlated equilibrium. Therefore, it seems that Kadane and Larkey were indeed too pessimistic in claiming that nothing can be said regarding what will happen in a game with Bayesian decision makers.

Levi attacks Aumann’s conclusion by rejecting all of its premises. Once again, this rejection is grounded in the “deliberation crowds out prediction” thesis. Actually, Levi makes two distinct and relatively independent criticisms of Aumann’s assumptions. The first concerns an assumption that I have left implicit so far, while the second targets premises (i)-(iii) together. I will consider them in turn.

Implicit in Aumann’s theorem is an assumption that Levi calls “ratifiability”. To understand what this means, it is useful to recall that a Bayesian decision-maker maximizes expected utility using conditional probabilities over states given acts. In other words, a Bayesian decision maker has to account for the possibility that his choice may reveal and/or influence the likelihood that a given state is the actual state. Evidential decision theorists like Richard Jeffrey claim in particular that it is right to see one’s choice as evidence for the truth-value of various state-propositions, even in cases where no obvious causal relationship seems to hold between one’s choice and the states of nature. This point is particularly significant in a game-theoretic context where, while the players make their choices independently (in a normal-form game), some kind of correlation between choices and beliefs may be seen as plausible. The most extreme case is provided by the prisoner’s dilemma, which Levi discusses at length in his essay:

            Column
             C       D
Row    C   5 ; 5   1 ; 6
       D   6 ; 1   2 ; 2

The prisoner’s dilemma has a unique Nash equilibrium: [D, D]. Clearly, given the definition above, this strategy profile is also the sole correlated equilibrium. However, from a Bayesian perspective, Levi argues that it is perfectly fine for Row to reason along the following lines:

“Given what I know and believe about the situation and the other player, I believe almost for sure that if I play D, Column will also play D. However, I also believe that if I play C, there is a significant chance that Column will play C”.

Suppose that Row’s conditional probabilities are p(Column plays D|I play D) = 1 and p(Column plays C|I play C) = ½. Then, Row’s expected utilities are respectively Eu(D) = 2 and Eu(C) = 3. As a consequence, being Bayesian rational, Row should play C, i.e. should choose to play a dominated strategy. Is there anything wrong with Row reasoning this way? The definition of the correlated equilibrium solution concept excludes this kind of reasoning because, for any action ai, the computation of expected utilities for each alternative action ai’ should be made using the conditional probabilities p(.|ai). This corresponds indeed to the standard definition of ratifiability in decision theory as put by Jeffrey: “A ratifiable decision is a decision to perform an act of maximum estimated desirability relative to the probability matrix the agent thinks he would have if he finally decided to perform that act.” In the prisoner’s dilemma, it is easy to see that only D is ratifiable: when considering playing C with the conditional probabilities given above, Row would do better by playing D; indeed, Eu(D|Column plays C with probability ½) = 4 > Eu(C|Column plays C with probability ½) = 3.
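
To make the arithmetic explicit, here is a small sketch (again merely illustrative and not drawn from Levi’s essay; the names ROW_PAYOFF, BELIEF and is_ratifiable are mine) that reproduces Row’s expected utilities under the conditional probabilities above and checks which acts are ratifiable in Jeffrey’s sense.

```python
# A minimal sketch (not drawn from Levi's essay): Row's expected utilities in
# the prisoner's dilemma under the conditional beliefs discussed above, and a
# check of which acts are ratifiable in Jeffrey's sense. The names ROW_PAYOFF,
# BELIEF and is_ratifiable are illustrative.

# Row's payoffs in the prisoner's dilemma of the text.
ROW_PAYOFF = {
    ("C", "C"): 5, ("C", "D"): 1,
    ("D", "C"): 6, ("D", "D"): 2,
}

# Row's conditional beliefs about Column's action, given Row's own choice:
# p(Column = D | Row = D) = 1 and p(Column = C | Row = C) = 1/2.
BELIEF = {
    "C": {"C": 0.5, "D": 0.5},
    "D": {"C": 0.0, "D": 1.0},
}

def expected_utility(act, belief):
    """Row's expected utility of `act` under a belief about Column's action."""
    return sum(p * ROW_PAYOFF[(act, col)] for col, p in belief.items())

# Levi's Bayesian comparison: evaluate each act under the beliefs Row would
# have if he chose that very act.
for act in ("C", "D"):
    print(act, expected_utility(act, BELIEF[act]))
# Prints C 3.0 and D 2.0: the comparison favours the dominated strategy C.

# Ratifiability: an act is ratifiable if it still maximizes expected utility
# relative to the beliefs Row would hold having finally decided on that act.
def is_ratifiable(act):
    belief_if_chosen = BELIEF[act]
    return all(expected_utility(act, belief_if_chosen)
               >= expected_utility(other, belief_if_chosen)
               for other in ("C", "D"))

print({act: is_ratifiable(act) for act in ("C", "D")})
# {'C': False, 'D': True}: having decided on C, Row would do better by
# switching to D (4.0 > 3.0), so only D is ratifiable.
```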

As Levi recognizes, adding ratifiability as a criterion of rational choice de facto excludes the possibility that the players in a game may rationally believe that some form of causal dependence holds between their choices. Indeed, as formally shown by Oliver Board, Aumann’s framework tacitly builds not only upon an assumption of causal independence but also upon common belief in causal independence. For some philosophers and game theorists, this is unproblematic and indeed required, since it is constitutive of game-theoretic reasoning (see for instance this paper by Robert Stalnaker). Levi, quite on the contrary, regards this exclusion as illegitimate, at least on Bayesian grounds.

Levi’s rejection of premises (i)-(iii) is more directly related to his “deliberation crowds out prediction” thesis. Actually, we may even focus on premises (i) and (iii), as premise (ii) depends on (i). Consider first the assumption that the players have a probability measure over the joint-action space. Contrary to a standard Bayesian decision problem, where the probability measure is defined over a set of states that is distinct from the set of acts, in a game-theoretic context the domain of the probability measures encompasses each player’s own strategy choice. In other words, this leads the game theorist to assume that each player ascribes an unconditional probability to his own choice. I have already explained why Levi regards this assumption as unacceptable if one wants to account for the way decision makers reason and deliberate.* The common prior assumption (ii) is of course even less defensible in this perspective, especially if we consider that such an assumption pushes us outside the realm of strict Bayesianism. Regarding assumption (iii), Levi’s complaint is similar: common knowledge of Bayesian rationality implies that each player knows that he is rational. However, if a player knows that he is rational before making his choice, then he already regards only admissible acts as feasible (recall Levi’s claims 1 and 2). Hence, no deliberation has to take place.

Levi’s critique of premises (i)-(iii) seems to extend to the epistemic program as a whole. What is at stake here is the epistemological and methodological status of the theoretical models built by game theorists. The question is the following: what is the modeler trying to establish regarding the behavior of players in strategic interactions? There are two obvious possibilities. The first is that, as an outside observer, the modeler is trying to make sense (i.e. to describe and to explain) of players’ choices after having observed them. Relatedly, still as an outside observer, he may try to predict players’ choices before they are made. The second possibility is to see game-theoretic models as tools to account for the players’ reasoning process prospectively, i.e. for how players deliberate to make their choices. Levi’s “deliberation crowds out prediction” thesis could grant some relevance to the first possibility, but not to the second. However, he contends that Aumann’s argument for correlated equilibrium cannot be only retrospective but must also be prospective.** If Levi is right, the epistemic program as a whole is affected by this argument, though fortunately there is room for alternative approaches, as illustrated by Bonanno’s paper mentioned in the preceding post.

Notes

* The joint-action space assumption results from a technical constraint: if we want to exclude the player’s own choice from the action space, we then have to account for the fact that each player has a different action space over which he forms beliefs. In principle, this can be dealt with even though this would lead to more cumbersome formalizations.

** Interestingly, in a recent paper with Jacques Dreze which makes use of the correlated equilibrium solution concept, Aumann indeed argues that the use of Bayesian decision theory in a game-theoretic context has such prospective relevance.