Is it Rational to be Bayesian Rational?

Most economists and many decision theorists equate the notion of rationality with Bayesian rationality. While the assumption that individuals actually are Bayesian rational has been widely disputed and is now virtually rejected, the conviction that Bayesianism defines the normative standard of rational behavior remains fairly entrenched among economists. However, even the normative relevance of Bayesianism has been questioned. In this post, I briefly survey one interesting kind of argument that has been developed in particular by the decision theorist Itzhak Gilboa, with various co-authors, in several papers.

First, it is useful to start with a definition of Bayesianism in the context of economic theory: the doctrine according to which it is always rational to behave according to the axioms of Bayesian decision theory. Bayesianism is a broad church with many competing views (e.g. radical subjectivism, objective Bayesianism, imprecise Bayesianism…), but it will be sufficient to retain a generic characterization through the following two principles:

Probabilism: Bayesian rational agents have beliefs that can be characterized through a probability function whose domain is some state space.

Expected Utility Maximization: The choices of Bayesian rational agents can be represented as the maximization of the expectation of a utility function with respect to some probability function.
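To fix ideas, here is a minimal sketch of what the second principle amounts to; the states, prior, acts and utility function below are invented purely for illustration.

```python
# Minimal illustration of expected utility maximization.
# States, prior, acts and payoffs are hypothetical numbers.

states = ["s1", "s2", "s3"]
prior = {"s1": 0.5, "s2": 0.3, "s3": 0.2}   # a single subjective probability measure

# An act maps each state to a (monetary) consequence.
acts = {
    "bet_A": {"s1": 100, "s2": 0,  "s3": 0},
    "bet_B": {"s1": 0,   "s2": 50, "s3": 50},
}

def utility(x):
    # Any increasing utility function will do; square root is a common toy choice.
    return x ** 0.5

def expected_utility(act):
    return sum(prior[s] * utility(acts[act][s]) for s in states)

# A Bayesian rational agent chooses the act with the highest expected utility.
best = max(acts, key=expected_utility)
print({a: round(expected_utility(a), 3) for a in acts}, "->", best)
```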

Gilboa’s critique of Bayesianism is exclusively concerned with probabilism, though some of its aspects could easily be extended to the expected utility maximization principle. Probabilism can itself be characterized as the conjunction of three tenets:

(i) Grand State Space: each atom (“state of nature”) in the state space is assumed to resolve all uncertainty, i.e. everything that is relevant for the modeler is specified, including all causal relationships. Though in Savage’s version of Bayesian decision theory states of nature were understood as “small worlds” corresponding to some coarse partition of the state space, in practice most economists implicitly interpret states of nature as “large worlds”, i.e. as resulting from the finest partition of the state space.

(ii) Prior Probability: Rational agents have probabilistic beliefs over the state space which are captured by a single probability measure.

(iii) Bayesian updating: In light of new information, rational agents update their prior to a posterior belief according to Bayes’s rule.
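A toy illustration of the third tenet may help; the hypotheses, prior and likelihoods below are invented numbers.

```python
# Toy illustration of Bayesian updating (tenet iii), with invented numbers.
# Two hypotheses about a coin: it is fair, or it is biased towards heads.
prior = {"fair": 0.8, "biased": 0.2}
likelihood_heads = {"fair": 0.5, "biased": 0.9}   # P(heads | hypothesis)

# Observe one heads; apply Bayes's rule: P(h | heads) is proportional to P(heads | h) * P(h).
unnormalized = {h: likelihood_heads[h] * prior[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: unnormalized[h] / evidence for h in prior}

print(posterior)   # {'fair': ~0.69, 'biased': ~0.31}
```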

While the third tenet may be disputed even within the realm of Bayesianism (see for instance Jeffrey’s probability kinematics or the views entertained by some objective Bayesians), it is the first two that are targeted by Gilboa. More exactly, while each tenet taken separately seems fairly reasonable normatively speaking, problems arise as soon as one decides to combine them.

Consider an arbitrary decision problem where it is assumed (as economists routinely do) that all uncertainty is captured through a Grand State Space. Say you have to decide between betting on what is presented to you as a fair coin falling on heads and betting on the next winner of the US presidential election being a Republican. There seem to be only four obvious states of nature: [Heads, Republican], [Heads, Democrat], [Tails, Republican], [Tails, Democrat]. Depending on your prior beliefs that the coin will fall on heads (maybe 1:1 odds) and that the next US president will be a Republican (and assuming preferences monotonic in money), your choice will reveal your preference for one of the two bets. Even if ascribing probabilities to some of the events may be difficult, the requirements of Bayesian rationality cannot be said to be unreasonable here. But matters are actually more complicated, because there are many things that may causally affect the likelihood of each event. For instance, while you have been told that the coin is fair, maybe you have reasons to doubt this claim; this will depend, for instance, on who made the statement. Obviously, the result of the next US presidential election will depend on many factual and counterfactual events that may happen. To form a belief about the result of the election, you have to form beliefs not only over these events but also over the nature of the causal relationships between them and the election’s outcome. Computationally, the task quickly becomes tremendous, as the number of states of nature to consider grows exponentially with each consideration added. Assuming that a rational agent should be able to assign a prior over all of them is normatively unreasonable.
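The following sketch, with a handful of made-up background considerations each treated as a binary proposition, gives an idea of how quickly the grand state space blows up.

```python
from itertools import product

# The four "obvious" states of nature for the two bets.
coin = ["Heads", "Tails"]
election = ["Republican", "Democrat"]
obvious_states = list(product(coin, election))
print(len(obvious_states), obvious_states)   # 4 states

# Hypothetical background considerations, each modelled as a binary proposition
# (it either obtains or it does not), purely for illustration.
background_factors = [
    "the coin is actually fair",
    "the person vouching for the coin is trustworthy",
    "the economy enters a recession before the election",
    "a major scandal hits the incumbent party",
]

# Each binary consideration added to the description doubles the number of atoms.
for k in range(len(background_factors) + 1):
    print(f"{k} background factors -> {len(obvious_states) * 2 ** k} states of nature")

# Thirty such considerations already yield more than four billion states,
# each of which would need to receive a prior probability.
print(len(obvious_states) * 2 ** 30)
```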

An obvious answer (at least for economists and behaviorist-minded philosophers) is to remark that prior beliefs need not be “in the mind” of the decision-maker. What matters is that the betting behavior of the decision-maker reveals preferences over prospects that can be represented by a unique probability measure over as large a state space as needed to make sense of it. There are many things to be said against this standard defense, but for the sake of the argument we may momentarily accept it. What happens, however, if the behavior of agents fails to reveal the adequate preferences? Must we then conclude that the decision-maker is irrational? A well-known case leading to such questions is Ellsberg’s paradox. Under a plausible interpretation, the latter indicates that most actual agents reveal through their choices an aversion to probabilistic ambiguity, which directly leads to a violation of the independence axiom of Bayesian decision theory. In this case, the choice behavior of agents cannot be consistently represented by a unique probability measure. Rather than arguing that such choice behavior is irrational, a solution (which I have already discussed here) is to adopt the Grand State Space approach. It is then possible to show that, with an augmented state space, there is nothing “paradoxical” in Ellsberg’s paradox. The problem with this strategy is however twofold. On the one hand, many choices are “unobservable” by definition, which fits uneasily with the behaviorist interpretation of Bayesian axioms. On the other hand, it downplays the reasons that explain the choices that agents actually make.
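For concreteness, here is a minimal check based on the classic three-colour Ellsberg urn (30 red balls, 60 black or yellow in unknown proportion): no single probability assigned to “black” makes the modal pattern of choices (betting on red rather than black, but on black-or-yellow rather than red-or-yellow) expected-utility maximizing.

```python
# The three-colour Ellsberg urn: 30 red balls, 60 black-or-yellow in unknown proportion.
# Modal choices: bet on red over black, but bet on black-or-yellow over red-or-yellow.
# Check whether any single probability for "black" rationalizes both choices
# (payoffs are 1 or 0, so expected utility reduces to the probability of winning).

p_red = 1 / 3  # 30 out of 90 balls

def rationalizes_both(p_black):
    p_yellow = 1 - p_red - p_black
    prefers_red_to_black = p_red > p_black                            # first choice
    prefers_black_or_yellow = p_black + p_yellow > p_red + p_yellow   # second choice
    return prefers_red_to_black and prefers_black_or_yellow

# Scan candidate values of p_black on a fine grid between 0 and 2/3.
candidates = [i / 1000 * (1 - p_red) for i in range(1001)]
print(any(rationalizes_both(p) for p in candidates))   # False: no single prior works
```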

To understand this last point about reasons, it must be acknowledged that Bayesianism defines rationality merely in terms of consistency with respect to a set of axioms. As a result, such an approach completely disregards the way agents form their beliefs (as well as their preferences) and – more importantly – abstains from making any normative statement regarding the content of beliefs. “Irrational” beliefs are merely beliefs that fail to qualify for a representation through a unique probability measure. Now, consider whether it is irrational to fail or to refuse to have such beliefs in cases where some alternatives but not others suffer from probabilistic ambiguity. Also, consider whether it is irrational to firmly believe (possibly to degree 1) that smoking presents no risk to health. Standard Bayesianism will answer positively in the first case but negatively in the second. Not only is this unintuitive, it also seems pretty unreasonable. Consider the following alternative definition of rationality proposed by Gilboa:

A mode of behavior is irrational for a decision maker, if, when the latter is exposed to the analysis of her choices, she would have liked to change her decision, or to make different choices in similar future circumstances.

This definition of rationality appeals to the reflexive abilities of human agents and, crucially, to our capacity to motivate our choices through reasons. This suggests, first, that the axioms of Bayesian decision theory can be put forward as reasons to make specific choices but can also themselves be made the subject of normative evaluation. This also indicates that, whatever may be thought of these axioms, Bayesianism lacks an adequate account of belief formation. In other words, Bayesianism cannot claim to constitute a normative theory of rationality because it offers no justification either for the way an agent should partition the state space or for which prior to adopt. The larger the state space is made in order to capture all the relevant features explaining an agent’s prior, the less reasonable it seems to expect rational agents to be able or willing to entertain such a prior.
