*This is the first of a three-part post on the philosopher Isaac Levi’s account of the relationship between deliberation and prediction in decision theory, which is an essential part of Levi’s more general theory of rationality. Levi’s views potentially have tremendous implications for economists, especially regarding the current use of game theory. These views are developed in several essays collected in his book* The Covenant of Reason, *especially “Rationality, prediction and autonomous choice”, “Consequentialism and sequential choice” and “Prediction, deliberation and correlated equilibrium”. The first post presents and discusses Levi’s main thesis that “deliberation crowds out prediction”. The next two posts will discuss some implications of this thesis for decision theory and game theory, specifically (i) the equivalence between games in dynamic form and in normal form and (ii) the relevance of the correlated equilibrium concept for Bayesianism in the context of strategic interactions. The three posts are collected in a single pdf file here.*

The determination of principles of rational choice has been the main subject of decision theory since its early development at the beginning of the 20^{th} century. Since its beginnings, decision theory has pursued two different and somewhat conflicting goals: on the one hand, to describe and explain how people actually make choices and, on the other hand, to determine how people *should* make choices and what choices they should make. While the former goal corresponds to what can be called “positive decision theory”, the latter is constitutive of “normative decision theory”. Most decision theorists, especially the proponents of “Bayesian” decision theory, have agreed that decision theory cannot but be at least partially normative. Indeed, while Bayesian decision theory is today generally not regarded as an accurate account of how individuals actually make choices, most decision theorists remain convinced that it is still relevant as a normative theory of rational decision-making. It is in this context that Isaac Levi’s claim that “deliberation crowds out prediction” should be discussed.

In this post, I will confine the discussion to the restrictive framework of Bayesian decision theory, though Levi’s account applies more generally to any form of decision theory that adheres to *consequentialism*. Consequentialism will be more fully discussed in the second post of this series. Consider any decision problem **D** in which an agent DM has to make a choice over a set of options whose consequences are not necessarily fully known for sure. Bayesians will generally model **D** as a triple < *A*, *S*, *C* > where *A* is the set of *acts* *a*, *S* the set of *states of nature* *s* and *C* the set of *consequences* *c*. In the most general form of Bayesian decision theory, any *a*, *s* and *c* may be regarded as a proposition to which truth-values might be assigned. In Savage’s specific version of Bayesian decision theory, acts are conceived as functions from states to consequences, i.e. *a*: *S* → *C* or *c* = *a*(*s*). In this framework, it is useful to see acts as DM’s objects of choice, i.e. the elements over which he has direct control, while states may be interpreted as every feature of **D** over which DM has no *direct* control. Consequences are simply the result of the combination of an act (chosen by DM) and a state (not chosen by DM). Still following Savage, it is standard to assume that DM has (subjective) beliefs over which state *s* actually holds. These beliefs are captured by a probability function *p*(.) with ∑_{s} *p*(*s*) = 1 for a finite state space. Moreover, each consequence *c* is assigned a utility value *u*(*c*) representing DM’s preferences over the consequences. A Bayesian DM will then choose the act that maximizes his expected utility given his subjective beliefs and his preferences, i.e.

Max_{a} *Eu*(*a*) = ∑_{s} *p*(*s*|*a*) *u*(*a*(*s*)) = ∑_{s} *p*(*s*|*a*) *u*(*c*).
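As a concrete illustration of this maximization, here is a minimal sketch of a finite decision problem. All acts, states, probabilities and utilities below are invented for the example; nothing here is taken from Levi or Savage beyond the general < *A*, *S*, *C* > structure.

```python
# Hypothetical decision problem with two acts and two states.
acts = ["a1", "a2"]
states = ["s1", "s2"]

# Conditional probabilities p(s | a): one distribution over states per act.
p = {
    "a1": {"s1": 0.7, "s2": 0.3},
    "a2": {"s1": 0.4, "s2": 0.6},
}

# Savage-style consequence function c = a(s), and utilities u(c).
consequence = {
    ("a1", "s1"): "c1", ("a1", "s2"): "c2",
    ("a2", "s1"): "c3", ("a2", "s2"): "c4",
}
u = {"c1": 10, "c2": 0, "c3": 6, "c4": 4}

def expected_utility(a):
    # Eu(a) = sum over s of p(s | a) * u(a(s))
    return sum(p[a][s] * u[consequence[(a, s)]] for s in states)

# A Bayesian DM picks the act with maximal expected utility.
best = max(acts, key=expected_utility)
```

With these made-up numbers, *Eu*(a1) = 0.7·10 + 0.3·0 = 7 and *Eu*(a2) = 0.4·6 + 0.6·4 = 4.8, so a1 is the unique admissible act.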

Two things are worth noting. First, the probabilities that enter into the expected utility computation are *conditional* probabilities of states given acts. We should indeed account for the possibility that the probabilities of states depend on the act performed. The nature of the relationship between states and acts represented by these conditional probabilities is the main subject of the conflict between *causal* and *evidential* decision theorists. Second, as is well known, in Savage’s version of Bayesian decision theory we start with a full ordering representing DM’s preferences over *acts*, and given a set of axioms it is shown that we can derive a unique probability function *p*(.) and a cardinal utility function *u*(.) unique up to a positive affine transformation. It is important to recognize that Savage’s account is essentially behaviorist: it merely shows that if DM’s preferences and beliefs satisfy some properties, then his choice can be *represented* as the maximization of some function with some uniqueness property. Not all Bayesian decision theorists necessarily share Savage’s behaviorist commitment.

I have just stated that in Savage’s account, DM ascribes probabilities to states, utilities to consequences and hence expected utilities to acts. However, if acts, states and consequences are all understood as propositions (as argued by Richard Jeffrey and Levi among others), then there is nothing in principle prohibiting us from ascribing utilities to states and probabilities to both consequences *and* acts. It is this last possibility (ascribing probabilities to acts) that is the focus of Levi’s claim that deliberation crowds out prediction. In particular, does it make sense for DM to have *unconditional* probabilities over the set *A*? How could having such probabilities be interpreted from the perspective of DM’s deliberation in **D**? If we take a third-person perspective, ascribing probabilities to DM’s objects of choice does not seem particularly contentious. It makes perfect sense for me to say, for instance, “I believe that you will start smoking again before the end of the month with probability *p*”. Ascribing probabilities to others’ choices is an essential part of our daily activity of predicting others’ choices. Moreover, probability ascription may be a way to explain and rationalize others’ behavior. The point of course is that these are *my* probabilities, not *yours*. The issue here is whether a deliberating agent has to, or even can, ascribe such probabilities to his own actions, acknowledging that such probabilities are in any case not relevant in the expected utility computation.

Levi has been (with Wolfgang Spohn) the most forceful opponent of such a possibility. He basically claims that the principles of rationality that underlie any theory of decision-making (including Bayesian ones) cannot at the same time serve as explanatory and predictive tools and as normative principles guiding rational behavior. In other words, insofar as the deliberating agent is using rationality principles to make the best choice, he cannot at the same time use these principles to predict his own behavior *at the very moment he is making his choice*.* This is the essence of the “deliberation crowds out prediction” slogan. To understand Levi’s position, it is necessary to delve into some technical details underlying the general argument. A paper by the philosopher Wlodek Rabinowicz does a great job of reconstructing this argument (see also this paper by James Joyce). A crucial premise is that, following de Finetti, Levi considers belief ascription as fully constituted by the elicitation of betting rates: DM’s belief in some event E is determined by what DM would regard as the fair price of a gamble on E.** Consider this example: I propose that you pay *y*$ (the cost or price of the bet) to participate in the following bet: if Spain wins the Olympic gold medal in basketball at Rio this month, I pay you *x*+*y*$; otherwise I pay you nothing. Therefore, *x* is the net gain of the bet and *x*+*y* is called the *stake* of the bet. Now, the fair price *y**$ of the bet is the amount for which you are indifferent between taking and not taking the bet. Suppose that *x* = 100 and that *y** = 5. Your betting rate for this gamble is then *y**/(*x*+*y**) = 5/105 ≈ 0.048, i.e. you believe that Spain will win with probability slightly less than 0.05. This is the traditional way beliefs are elicited in Bayesian decision theory.
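The betting-rate arithmetic from the Spain example can be checked in a couple of lines (the numbers are those of the example above; the function name is of course just illustrative):

```python
def betting_rate(net_gain_x, fair_price_y):
    """De Finetti-style betting rate: the fair price divided by the stake,
    where the stake is the net gain plus the price paid to enter the bet."""
    stake = net_gain_x + fair_price_y
    return fair_price_y / stake

# Spain example: net gain x = 100, elicited fair price y* = 5.
rate = betting_rate(100, 5)  # 5 / 105, slightly below 0.05
```

The elicited probability is the price-to-stake ratio, which is why pushing the fair price toward the full stake (as in Levi’s argument below) pushes the probability toward 1.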
Now, Levi’s argument is that such a procedure cannot be applied in the case of beliefs over acts on pain of inconsistency. The argument relies on two claims:

(1) If DM is certain that he will not perform some action *a*, then *a* is not regarded as part of the feasible acts by DM.

(2) If DM assigns probabilities to acts, then he must assign probability 0 to acts he regards as inadmissible, i.e. which do not maximize expected utility.

Clearly, (1) and (2) together entail that all feasible acts (those figuring in the set *A*) are admissible (maximize expected utility), in which case deliberation is unnecessary for DM. If that is the case, principles of rationality cannot be used as normative principles in the deliberation process. While claim (1) is relatively transparent (even if disputable), claim (2) is less straightforward. Consider therefore the following illustration.

DM has a choice between two feasible acts *a* and *b* with *Eu*(*a*) > *Eu*(*b*), i.e. only *a* is admissible. Suppose that DM assigns probabilities *p*(*a*) and *p*(*b*) according to the procedure presented above. We present DM with a fair bet B on *a* where the price is *y** and the stake is *x*+*y**. As the bet is fair, *y** is the fair price and *y**/(*x*+*y**) = *p*(*a*) is the betting rate measuring DM’s belief. Now, DM has four feasible options:

Take the bet and choose *a* (B&*a*)

Do not take the bet and choose *a* (notB&*a*)

Take the bet and choose *b* (B&*b*)

Do not take the bet and choose *b* (notB&*b*)

As taking the bet and choosing *a* guarantees a sure gain of *x* to DM, it is easy to see that B&*a* strictly dominates notB&*a*. Similarly, as taking the bet and choosing *b* guarantees a sure loss of *y**, notB&*b* strictly dominates B&*b*. The choice is therefore between B&*a* and notB&*b*, and clearly *Eu*(*a*) + *x* > *Eu*(*b*). It follows that the fair price for B equals the whole stake *x*+*y**, so that the betting rate is 1 and hence *p*(*a*) = 1 and *p*(*b*) = 1 – *p*(*a*) = 0. The inadmissible option *b* has probability 0 and is thus regarded as unfeasible by DM (claim (1)). No deliberation is needed for DM if he predicts his choice, since only *a* is regarded as feasible.
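The dominance reasoning in this illustration can be made concrete with invented numbers (the utilities and bet parameters below are mine, chosen only so that *Eu*(*a*) > *Eu*(*b*); the text itself works abstractly):

```python
# Illustrative values, not from the text.
Eu_a, Eu_b = 10, 6   # Eu(a) > Eu(b): only a is admissible
x, y_star = 3, 1     # net gain x and price y* of the bet B on a

# Value of each compound option, assuming DM's own choice of act
# settles whether the bet on a is won (utilities and money added
# directly, as in the text's comparison Eu(a) + x > Eu(b)).
options = {
    "B&a":    Eu_a + x,       # bet won: sure gain of x
    "notB&a": Eu_a,
    "B&b":    Eu_b - y_star,  # bet lost: sure loss of y*
    "notB&b": Eu_b,
}

# B&a strictly dominates notB&a, and notB&b strictly dominates B&b,
# so the real choice is between B&a and notB&b.
assert options["B&a"] > options["notB&a"]
assert options["notB&b"] > options["B&b"]
best = max(options, key=options.get)  # B&a, since Eu(a) + x > Eu(b)
```

Whatever positive values are chosen for *x* and *y**, B&*a* comes out on top, which is what drives the fair price of B up to the full stake and *p*(*a*) up to 1.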

Levi’s argument is by no means indisputable, and the papers by Rabinowicz and Joyce referred to above do a great job of showing its weaknesses. In the next two posts, I will however take it for granted and discuss some of its implications for decision theory and game theory.

**Notes**

* As I will discuss in the second post, Levi considers that there is nothing contradictory or problematic in the assumption that one may be able to predict his *future* choices.

** A gamble’s fair price is the price at which DM is indifferent between buying the bet and selling the bet.