I have just finished a new working paper entitled “Game Theory, Game Situations and Rational Expectations: A Dennettian View”, which I will present at the 16th international conference of the Charles Gide Association for the Study of Economic Thought. The paper is a bit esoteric, as it discusses the formalization of rational expectations in a game-theoretic and epistemic framework on the basis of the philosophy of mind, and especially Daniel Dennett’s intentional-stance functionalism. As usual, comments are welcome.
David Glasner wrote an interesting post a few weeks ago about the relationship between the Neoclassical synthesis and the mind-body problem in the philosophy of mind. Glasner contends that the mind-body problem vindicates methodological individualism (MI) in economics. He also argues that because the mind-body problem does not imply reductionism (i.e. the thesis that mental states are identical to brain states), the representative agent assumption in macroeconomics is dubious: the latter basically reduces aggregate phenomena to the optimal plans of some rational agent.
This is quite interesting and I think Glasner is right in his overall parallel. There is indeed much for economists to learn from the mind-body problem and, more generally, from the philosophy of mind. However, I do not fully agree with the details of Glasner’s argument, and this may have larger implications for his overall conclusion about the representative agent assumption. I start with a remark about terminology. Glasner argues that the mind-body problem and MI in economics share a non-commitment toward reductionism because they recognize the reality of “higher-level” entities and phenomena such as beliefs, desires, business cycles and so on. This is not completely true, because there are approaches in the philosophy of mind which argue that mental states do not exist and/or are merely epiphenomenal. These “eliminativist” approaches claim that we should simply stop using the notions and concepts of folk psychology in scientific discussions. Some would argue that the representative agent assumption is more eliminativist than reductionist: it eliminates the higher-level phenomena in the sense that they simply are the choices of a representative agent. Conversely, some versions of MI, as well as some treatments of the mind-body problem, are indeed reductionist: they recognize the existence of higher-level entities but claim that these can be fully explained by lower-level entities. A truly microfounded macroeconomics (i.e. one without the representative agent assumption) would probably be of this kind.
This is a complex debate, because we should distinguish between ontological and explanatory reductionism (in particular, the former does not imply the latter). Moreover, there is much to be said about the relevance of reductionism in science in general. I will not discuss these points here. More relevant to the problem at stake are treatments of the mind-body problem that are both materialist (i.e. mental events are realized by physical events) and non-reductionist. Functionalism, which is currently the dominant paradigm in the philosophy of mind, is of this latter kind. Though there are many variants, they all recognize the basic fact of multiple realizability: the same mental events may be physically realized by different lower-level events (e.g. the firing of different neural areas). More generally, a basic postulate of functionalism is that the same software can be implemented on different kinds of hardware. A second key assumption of functionalism is that mental states are defined by their functions, in terms of their causal relations with other states and with external factors: when we say that “Mike believes that it rains”, we are saying that there is some physical event in Mike’s brain that is caused by certain sorts of external stimuli and that in turn causes a certain behavioral response. The function of Mike’s belief is then to cause this behavioral response given the appropriate set of external stimuli. It is then easy to see why functionalism does not necessarily imply reductionism: the same set of causal relations can be realized by a variety of physical hardware, not only brains but also machines, for instance.
What are the implications for economics and in particular for MI? The economist and philosopher Don Ross has argued in recent writings that a particular kind of functionalism, Daniel Dennett’s intentional-stance functionalism, entails the rejection of MI. The point is that, from a revealed preference perspective, what matters is that we can define a well-behaved choice function given a set of data about choices on the market or in any other institutional setting. What the “hardware” or the “vehicle” for those choices is, however, is irrelevant: it can be flesh-and-bones persons, but also intra-personal selves (as in dual- or multiple-selves models) or aggregate demand functions. It is at this point that Glasner’s claim that the mind-body problem entails a rejection of the representative agent hypothesis is unconvincing: at least on Ross’ interpretation of functionalism, the philosophy of mind on the contrary makes such a hypothesis permissible. The only requirement is that market demand reveals consistent choices and preferences on the basis of some consistency axiom. From the point of view of functionalism, the only function of an agent’s intentional states is to trigger a behavioral response given some set of circumstances. If this behavioral response and these circumstances are described in terms of market data (supply, demand, prices), then there is nothing wrong with assimilating market demand to a unique representative agent, provided that the consistency requirements are fulfilled.
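To make the consistency requirement concrete, here is a minimal sketch of what checking one standard consistency axiom, the Weak Axiom of Revealed Preference, would look like on price-and-quantity data. The data and the function name are purely illustrative assumptions of mine, not part of Ross’ or Glasner’s argument; the point is only that the check applies to any “vehicle” of choice, whether a person or an aggregate demand.

```python
# Sketch: checking the Weak Axiom of Revealed Preference (WARP) on
# hypothetical "choice" data. Each observation is (prices, chosen bundle).
# Nothing here depends on whether the chooser is a person or an aggregate.

def cost(prices, bundle):
    """Expenditure on a bundle at given prices."""
    return sum(p * q for p, q in zip(prices, bundle))

def satisfies_warp(observations):
    """x is revealed preferred to y if x was chosen while y was affordable.
    WARP fails if y is then chosen in a situation where x was also affordable."""
    for p_i, x_i in observations:
        for p_j, x_j in observations:
            if x_i == x_j:
                continue
            # x_i was chosen at prices p_i while x_j was affordable ...
            if cost(p_i, x_j) <= cost(p_i, x_i):
                # ... so x_j must not be chosen when x_i is affordable
                if cost(p_j, x_i) <= cost(p_j, x_j):
                    return False
    return True

# Two hypothetical data sets (prices, chosen bundle):
consistent = [((1.0, 2.0), (4.0, 1.0)), ((2.0, 1.0), (1.0, 4.0))]
inconsistent = [((1.0, 1.0), (1.0, 1.0)), ((2.0, 1.0), (2.0, 0.0))]
```

If `satisfies_warp` holds, the data can be treated “as if” generated by a single well-behaved choice function, which is exactly what the representative agent reading requires.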
We may argue over this, of course. More straightforward, however, is the fact that functionalism and MI do not go well together. This can be shown by a simple example. Suppose I am interested in accounting for a particular economic fact, say the over-exploitation of some non-renewable resource and the way this problem is mitigated by some community. As an economist, I frame this fact as an instance of the collective action problem. In trying to produce a theoretical and empirically testable explanation for this fact, I build a game-theoretic model where I specify who the players are, what strategies are at their disposal, their utility functions (and thus their preferences) and possibly some information structure (e.g. who knows what about others’ actions and rationality). Suppose that my model fits the facts, in the sense that there is an equilibrium (possibly among others) where the collective pattern generates the observed level of use of the resource. I will thus consider that my model is successful in providing an explanation for the fact that interested me. What is my model representing? It is actually representing an institution (or a set of institutions) that, as a whole, is responsible for the level of use of the resource. We can see this institution as some kind of “machine” that triggers a behavioral response (the behavioral pattern and the associated use of the resource) given a set of circumstances that are implemented in the values of the model’s parameters. Many economists would claim that game-theoretic models in general, and this one in particular, are an instance of MI. But this is clearly wrong from the point of view of functionalism: I have not explained the economic fact in virtue of the players’ behavior and other properties; my explanation is provided by the whole “machine” (the institution) I have modeled.
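The kind of model just described can be sketched in a few lines. The sketch below is my own toy version, under simplifying assumptions (two symmetric players, two extraction levels, illustrative payoffs with a prisoner’s-dilemma structure); it is not a calibrated model of any actual community, but it makes visible the “machine” reading: circumstances enter as payoff parameters, and the behavioral pattern falls out as an equilibrium.

```python
# Toy common-pool resource game: the whole specification (players,
# strategies, payoffs) is the "machine" that maps circumstances to a
# behavioral pattern. All numbers are illustrative assumptions.

STRATEGIES = ["restrain", "overuse"]

# PAYOFFS[(s1, s2)] = (payoff to player 1, payoff to player 2).
# Overusing against a restrained opponent pays off individually,
# while mutual overuse depletes the resource.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "overuse"):  (0, 4),
    ("overuse",  "restrain"): (4, 0),
    ("overuse",  "overuse"):  (1, 1),
}

def is_nash(s1, s2):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    u1, u2 = PAYOFFS[(s1, s2)]
    if any(PAYOFFS[(d, s2)][0] > u1 for d in STRATEGIES):
        return False
    if any(PAYOFFS[(s1, d)][1] > u2 for d in STRATEGIES):
        return False
    return True

equilibria = [(s1, s2) for s1 in STRATEGIES
              for s2 in STRATEGIES if is_nash(s1, s2)]
```

With these payoffs the unique equilibrium is mutual overuse; changing the payoff parameters, which is how the model represents different institutional circumstances, changes which behavioral pattern is an equilibrium. The explanatory work is done by the whole payoff-and-best-response structure, not by any player taken in isolation.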
This machine is a set of formal (functional) relationships that represents a set of causal relationships between given circumstances and a behavioral pattern. Each particular mental state that can be attributed to the players (e.g. their beliefs) takes its meaning from its relations with the other elements of the larger system that corresponds to the institution. This is not MI as it is traditionally understood.