Each year, as Christmas approaches, economists like to remind everyone that gift-giving is socially inefficient. The infamous “Christmas deadweight loss” refers to the fact that the allocation of resources is suboptimal: had people been given the equivalent value in cash, they would have bought different things than the ones they received as gifts. This is a provocative result, but it follows from straightforward (though clearly shortsighted) economic reasoning. I would like here to point out another disturbing result that comes from economic theory. Though it is not specific to the Christmas period, it is rather less straightforward, which makes it much more interesting. It is related to the (im)possibility of surprising people.

I will take for granted that one of the points of a Christmas present is to try to surprise the person you are giving it to. Of course, many people make wish lists, but the point is precisely that 1) one will rarely expect to receive all the items indicated on the list and 2) the list may be fairly open, or at least give others an idea of the kind of presents one wishes to receive without being too specific. In any case, apart from Christmas, there are several other social institutions whose value partially derives from the possibility of surprising people (think of April fools). However, on the basis of the standard rationality assumptions made in economics, it is clear that surprising people is simply impossible and even nonsense.

I start with some definitions. An event is a set of states of the world in which each person behaves in a certain way (e.g. makes some specific gifts to others) and holds some specific conjectures or beliefs about what others are doing and believing. I call an event unexpected if at least one person assigns it a prior probability of zero. An event is impossible if it is inconsistent with the people’s theory (or model) of the situation they are in. The well-known “surprise exam paradox” gives a great illustration of these definitions. A version of it runs as follows:

**The Surprise Exam Paradox:** At day D0, the teacher T announces to his students S that he will give them a surprise exam either at D1 or at D2. Denote En the event “the exam is given at day Dn” (n = 1, 2) and assume that the students S believe the teacher T’s announcement. They also know that T really wants to surprise them, and they know that he knows that. Finally, we assume that S and T have common knowledge of their reasoning abilities. On this basis, the students reason in the following way:

SR1: If the exam is not given at D1, it will necessarily be given at D2 (i.e. E2 has probability 1 according to S if not E1). Hence, S will not be surprised.

SR2: S knows that T knows SR1.

SR3: Therefore, T will give the exam at D1 (i.e. E1 has probability 1 according to S). Hence, S will not be surprised.

SR4: S knows that T knows SR3.

SR5: S knows that T knows SR1-SR4, hence the initial announcement is impossible.

The final step of S’s reasoning (SR5) indicates that there is no event En that is both unexpected and consistent with S’s theory of the situation, as represented by the assumptions stated in the description of the case. Still, suppose that T gives the exam at D2; then the students will indeed be surprised, but in a very different sense than the one we have just worked out. The surprise exam paradox is a paradox because whatever T decides to do is inconsistent with at least one of the premises constituting the theory of the situation. In other words, the students are surprised because they have the wrong theory of the situation, but this is quite “unfair” since that theory is the one the modeler has given them.
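The students’ reasoning SR1–SR5 can be sketched as an iterated elimination over candidate exam days. This is only a toy rendering of the induction, under the assumption that a day can carry a surprise only if the students do not already expect the exam on that day with probability 1:

```python
# Toy sketch of the students' elimination reasoning (SR1-SR5).
# Assumption: a day can deliver a surprise exam only if the students
# do not assign it probability 1 beforehand.

def days_that_can_surprise(days):
    """Return the days on which an exam could still surprise the students."""
    candidates = list(days)
    while candidates:
        # SR1-type step: if the exam has not occurred before the last
        # remaining candidate day, it is expected there with certainty,
        # so that day cannot surprise. Remove it and iterate (SR3, SR5).
        candidates.pop()
    return candidates

print(days_that_can_surprise(["D1", "D2"]))  # -> []
```

The empty result is exactly SR5: no day is both possible and unexpected, so the announcement is inconsistent with the students’ theory.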

Now, the point is that surprise is similarly impossible in economics under the standard assumption of rational expectations. Actually, this directly follows from how the assumption is stated in macroeconomics: an agent’s expectations are rational if they correspond to the actual state of the world on average. The clause “on average” means that for any given variable X, the difference between the agent’s expectation of the value of X and the actual value of X is captured by a random error term with mean 0. This error term is assumed to follow some probability distribution that is known to the agent. Hence, while the agent’s rational expectation may actually be wrong, he will never be surprised, whatever the actual value of X. This is because he knows the probability distribution of the error term: he expects to be wrong according to this distribution even though he expects to be right on average.
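A minimal numerical sketch of this claim, with an assumed Gaussian error term, shows both halves: the point forecast is wrong in almost every realization, yet the errors average out to zero, and every realization has positive density under the agent’s own model:

```python
import random

# Sketch: a rational expectation of X with a known, mean-zero error term
# (the Gaussian error and the forecast value are illustrative assumptions).

random.seed(0)
forecast = 10.0                                   # the agent's expectation of X
errors = [random.gauss(0, 1) for _ in range(100_000)]
realizations = [forecast + e for e in errors]

avg_error = sum(errors) / len(errors)
print(abs(avg_error) < 0.05)                      # right "on average"
print(any(r == forecast for r in realizations))   # yet almost never exactly right

# Since the agent knows the error distribution, every realized value of X
# had positive density under his model: no realization is an unexpected
# (zero-probability) event, hence no surprise.
```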

However, things are more interesting in the strategic case, i.e. when the value of X depends on the behavior of each person in the population, that behavior itself depending on each person’s expectations about others’ behavior and expectations. The rational expectations hypothesis is then akin to assuming some kind of consistency between the persons’ conjectures (see this previous post on this point). At the most general level, we assume that the value of X depends (deterministically or stochastically) on the profile of actions s = (s1, s2, …, sn) of the n agents in the population, i.e. X = f(s). We also assume that it is mutual knowledge that each person is rational: she chooses the action that maximizes her expected utility given her beliefs about others’ actions, hence si = si(bi) for each agent i, with bi agent i’s conjecture about the others’ actions. It follows that it is mutual knowledge that X = f(s1(b1), s2(b2), …, sn(bn)). An agent i’s conjecture is rational if bi* = (s1*, …, si-1*, si+1*, …, sn*), with sj* the actual behavior of agent j. Denote s* = (s1(b1*), s2(b2*), …, sn(bn*)) the resulting strategy profile. Since there is mutual knowledge of rationality, whoever knows s* also knows each bi* (assuming a one-to-one mapping between conjectures and actions); hence the profile of rational conjectures b* = (b1*, b2*, …, bn*) is also mutually known. By the same reasoning, k orders of mutual knowledge of rationality entail k orders of mutual knowledge of b*, and common knowledge of rationality entails common knowledge of b*. Therefore, everyone correctly predicts X and this is common knowledge.
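The consistency requirement on conjectures can be made concrete in a two-player example. The sketch below uses an assumed 2x2 coordination game: a profile of rational conjectures is a fixed point where each player’s conjecture equals the action the other actually takes, and each action is a best reply to that conjecture:

```python
import itertools

# Sketch: rational conjectures as a fixed point in a 2-player game.
# Payoffs for a simple coordination game (assumed for illustration).
A = {("L", "L"): 2, ("L", "R"): 0, ("R", "L"): 0, ("R", "R"): 1}  # player 1
B = {("L", "L"): 2, ("L", "R"): 0, ("R", "L"): 0, ("R", "R"): 1}  # player 2
actions = ["L", "R"]

def best_reply_1(b1):
    """Player 1's best reply to his conjecture b1 about player 2's action."""
    return max(actions, key=lambda a: A[(a, b1)])

def best_reply_2(b2):
    """Player 2's best reply to his conjecture b2 about player 1's action."""
    return max(actions, key=lambda a: B[(b2, a)])

# Rational conjectures: each player's conjecture is the other's actual
# action, and each action maximizes utility given that conjecture.
consistent = [(s1, s2) for s1, s2 in itertools.product(actions, actions)
              if s1 == best_reply_1(s2) and s2 == best_reply_2(s1)]
print(consistent)  # -> [('L', 'L'), ('R', 'R')]
```

At either fixed point, any X = f(s) is correctly predicted by both players, since each knows the profile that will actually be played.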

Another way to put this point is proposed by Robert Aumann and Jacques Drèze in an important paper where they show the formal equivalence between the common prior assumption and the rational expectations hypothesis. Basically, they show that a rational expectations equilibrium is equivalent to a correlated equilibrium, i.e. a (mixed-)strategy profile determined by the probability distribution of some random device, where players maximize expected utility. As shown in another important paper by Aumann, two sufficient conditions for obtaining a correlated equilibrium are common knowledge of Bayesian rationality and a common prior over the strategy profiles that can be implemented (the common prior reflects the commonly known probability distribution of the random device). This ultimately leads to another important result proved by Aumann: persons with a common prior and common knowledge of their ex post conjectures cannot “agree to disagree”. In a world where people have a common prior over some state space and common knowledge of their rationality or of their ex post conjectures (which here is the same thing), unexpected events are simply impossible. One already knows everything that can happen and thus ascribes a strictly positive probability to any possible event. This is nothing but the rational expectations hypothesis.
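The logic of a correlated equilibrium can be checked mechanically. The sketch below assumes the textbook game of Chicken with illustrative payoffs and a device that draws one of three profiles with equal probability; it verifies that obeying the device’s private recommendation is optimal for both players:

```python
from fractions import Fraction

# Sketch: checking the incentive constraints of a correlated equilibrium
# in the game of Chicken (payoffs and device are assumed for illustration).
payoff = {  # (row action, col action) -> (row payoff, col payoff)
    ("C", "C"): (6, 6), ("C", "D"): (2, 7),
    ("D", "C"): (7, 2), ("D", "D"): (0, 0),
}
device = {("C", "C"): Fraction(1, 3),   # the commonly known random device
          ("C", "D"): Fraction(1, 3),
          ("D", "C"): Fraction(1, 3)}

def obedient(player):
    """True if following every recommendation maximizes expected payoff."""
    for rec in ("C", "D"):
        # Profiles in which `player` is recommended to play `rec`.
        cond = {p: q for p, q in device.items() if p[player] == rec}
        if not cond:
            continue
        def value(action):
            ev = Fraction(0)
            for p, q in cond.items():
                prof = list(p)
                prof[player] = action
                ev += q * payoff[tuple(prof)][player]
            return ev  # unnormalized conditional expectation suffices
        if any(value(dev) > value(rec) for dev in ("C", "D")):
            return False
    return True

print(obedient(0) and obedient(1))  # -> True
```

The common prior over the device’s draws plays exactly the role described above: every profile the device can select has strictly positive probability, so nothing that happens in equilibrium is unexpected.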

Logicians and game theorists who have dealt with Aumann’s theorems have shown that the latter build on a formal structure equivalent to the well-known S5 system of modal logic. The axioms of this system imply, among other things, logical omniscience (an agent knows all logical truths and all logical implications of what he knows) and, more controversially, negative introspection (when one does not know something, one knows that one does not know it). Added to the fact that everything is captured in terms of knowledge (i.e. true belief), it is intuitive that such a system is unable to deal with unexpected events and surprise. From a logical point of view, this problem can be answered simply by changing the axioms and assumptions of the formal system. Consider the surprise exam story once again. The paradox seems to disappear if we give up the assumption of common knowledge of reasoning abilities. For instance, we may suppose that the teacher knows the reasoning abilities of the students but that the students do not know that he knows them. In this case, steps SR2, SR3 and SR4 cannot occur. Or we may suppose that the teacher knows the reasoning abilities of the students and that the students know that he knows them, but that the teacher does not know that they know that he knows. In this case, step SR5 in the students’ reasoning cannot occur. In either case, the announcement is no longer inconsistent with the students’ and teacher’s knowledge. This is not completely satisfactory, however, for at least two reasons: first, the plausibility of the result depends on epistemic assumptions which are completely ad hoc. Second, the very nature of the formal systems of standard modal logic implies that the agents’ theory of a given situation captures everything that is necessarily true. In the revised version of the surprise exam example above, it is necessarily true that an exam will be given either at D1 or at D2; everyone must therefore know that, and so the exam is not a surprise in the sense of an unexpected event.
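The S5 structure behind these results is just knowledge as a partition of the state space, and properties like negative introspection can be verified directly. The states and partition below are assumed purely for illustration:

```python
# Sketch: knowledge as a partition (the S5 structure behind Aumann's theorems).
# The agent knows event E at state w iff his whole partition cell at w is in E.

states = {1, 2, 3, 4}
partition = [{1, 2}, {3, 4}]   # the agent cannot tell 1 from 2, nor 3 from 4

def cell(w):
    """The agent's information cell at state w."""
    return next(c for c in partition if w in c)

def K(E):
    """The event 'the agent knows E'."""
    return {w for w in states if cell(w) <= E}

E = {1, 2, 3}
not_K_E = states - K(E)        # states where the agent does not know E
# Negative introspection: whenever the agent does not know E, he knows
# that he does not know it, i.e. ~K(E) is contained in K(~K(E)).
print(not_K_E <= K(not_K_E))   # -> True
```

Because the cells exhaust the state space, every state the agent might reach is one he can already reason about: the structure leaves no room for genuinely unexpected events.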

The only way to avoid these difficulties is to enter the fascinating but quite complex realm of non-monotonic modal logic and belief revision theories. In practice, this consists in giving up the assumption that agents are logically omniscient, in the sense that they may not know something that is necessarily true. Faced with an inconsistency, an agent will adopt a belief revision procedure so as to make his beliefs and knowledge consistent with an unexpected event. In other words, though the agent does not expect to be surprised, it is possible to account for how he deals with unexpected information. As far as I know, there have been very few attempts in economics to build on this kind of non-monotonic formalization to tackle expectations formation and revision, in spite of the growing importance of the macroeconomic literature on learning. Game theorists have been more willing to enter this territory (see this paper by Michael Bacharach for instance), but much remains to be done.
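To see why revision machinery is needed at all, note that Bayesian conditioning is simply undefined on a zero-probability event. The toy sketch below (all names and numbers are illustrative, and the fallback rule is only a crude stand-in for a proper belief revision operator) contrasts conditioning with a revision step that falls back on a secondary plausibility ranking:

```python
# Sketch: unexpected events break Bayesian conditioning; a toy revision
# rule recovers a posterior (all distributions here are illustrative).

prior = {"exam_D1": 0.6, "exam_D2": 0.4, "no_exam": 0.0}   # "no exam" is unexpected

def condition(belief, event):
    """Standard Bayesian conditioning; undefined on zero-probability events."""
    total = sum(p for w, p in belief.items() if w in event)
    if total == 0:
        return None  # conditioning breaks down: the event had probability 0
    return {w: (p / total if w in event else 0.0) for w, p in belief.items()}

def revise(belief, event, fallback):
    """Condition if possible; otherwise fall back on a secondary
    plausibility ranking (a crude stand-in for an AGM-style revision)."""
    return condition(belief, event) or condition(fallback, event)

fallback = {"exam_D1": 0.3, "exam_D2": 0.3, "no_exam": 0.4}  # illustrative
print(condition(prior, {"no_exam"}))         # -> None: the agent is stuck
print(revise(prior, {"no_exam"}, fallback))  # a usable posterior on "no_exam"
```

This is the sense in which a non-omniscient agent, though he never expects to be surprised, can still process an unexpected observation instead of collapsing into inconsistency.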