Hard Obscurantism and Unrealistic Models in Economics

The philosopher and social scientist Jon Elster is well-known for his critical and insightful views about the (ir)relevance of rational choice theory (RCT) in the social sciences. Among his recent writings on the subject, Elster published a paper last year in the philosophy journal Synthese concerning what he calls “hard obscurantism” in economic modeling (gated version here). By hard obscurantism, Elster essentially refers to a practice where “ends and procedures become ends in themselves, dissociated from their explanatory functions” (p. 2163). This includes many rational choice models, but also parts of agent-based modeling, behavioral economics and statistical analysis in economics.

Elster’s paper focuses on the case of rational choice models and builds on several “case studies” that are meant to illustrate the practice of hard obscurantism. These case studies include Akerlof & Dickens’s and Rabin’s use of cognitive dissonance theory, Becker and Mulligan’s accounts of altruism, as well as Acemoglu & Robinson’s theory of political transitions. Beyond these examples, Elster underlines two general problems with rational choice models and, more generally, with RCT: first, the theory is indeterminate; second, it ignores the irrationality of the agents. Indeterminacy is indeed a well-known problem that is partly (though not equivalently) related to the existence of multiple equilibria in many rational choice models. According to Elster, it has three sources: (i) the fact that the determination of the optimal amount of information leads to an infinite regress (computing the marginal utility of information requires collecting the information, but deciding whether or not to collect it requires knowing its marginal utility), (ii) brute and strategic uncertainty (the latter is of course closely related to the existence of multiple equilibria) and (iii) the agents’ cognitive limitations. The last is regarded by Elster as the most important source and is somewhat related to the irrationality problem. In Elster’s words,

“How can we impute to real-life agents the capacity to make in real time the calculations that occupy many pages of mathematical appendixes in the leading journals and that can be acquired only through years of professional training?” (p. 2166)

Elster’s objection is hardly new and many different responses have been developed. It is not my intention to survey them. I shall rather focus on one issue that follows from Elster’s critique: can we learn anything from unrealistic models and, if so, how? There is an empirical disagreement among economists regarding the degree to which individual agents are truly irrational. Against the behavioral economists’ claim that individuals’ behavior and reasoning exhibit a long list of biases, other economists argue that this depends on the institutional setting in which individuals’ choices take place (for instance, it is probably not true that hyperbolic discounting is dominant in many markets, and many biases seem to diminish in importance if agents have the opportunity to learn). It is a fact, however, that individuals’ behaviors do not have the consistency properties that most rational choice models assume they have. Moreover, most rational choice models are unrealistic beyond their “behavioral” assumptions about agents’ reasoning abilities. They also make rather unrealistic “structural” assumptions concerning, for instance, the number of players, the homogeneity of their preferences, the fact that the features of the game are common knowledge, and so on. A good example among the cases discussed by Elster is Acemoglu & Robinson’s theory of political transitions. The latter builds on a game-theoretic model with only two players, who are taken to be representative of two groups of actors, the elites and the citizens. The preferences of the members of each group are assumed to be homogeneous and, for the citizens, to correspond to the median voter’s preferences. The model also makes several strong assumptions regarding what the players know.

So, can we learn anything about real-world mechanisms from such unrealistic models? The philosopher of social science Harold Kincaid has recently made an interesting suggestion for a (partially) positive answer. Kincaid rightly starts by indicating that it is vain to search for a general defense of unrealistic models in the social sciences and that each evaluation must be made on a case-by-case basis. Regarding perfect competition and game-theoretic models, Kincaid argues that they may offer relevant explanations in spite of the fact that they build on highly unrealistic assumptions:

“The insight is that assumptions of the perfect competition and game theory models may just be assumptions the analyst – the economist or political scientist – uses to identify equilibria. However, in certain empirical applications, the explanations are equilibrium explanations that make no commitment to what process leads individuals to find equilibrium”

In my view, this account of the relevance of unrealistic models works particularly well in the case of mechanism design, which is at once a highly theoretical and an applied branch of microeconomics. A typical approach in mechanism design is to consider that the right institutional design will entail equilibrium play from the players, even if the designer does not know the players’ actual preferences. The modeler does not make any commitment regarding how the players will find their way to the equilibrium. The model simply indicates that if the institutional setup has such and such characteristics (e.g. a continuous double auction), then the outcome will have such and such characteristics (e.g. allocative efficiency). It is then possible to test this conjecture through experiments.
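
This “institution-first” logic is easy to illustrate with a toy simulation (my own illustration, inspired by Gode and Sunder’s famous “zero-intelligence traders” experiments, which are not discussed in the post). In the sketch below, traders quote random prices subject only to a no-loss constraint; the continuous double auction nevertheless extracts most of the available surplus, whatever the traders’ (absent) reasoning:

```python
import random

def simulate_double_auction(n=50, rounds=5000, seed=1):
    """Toy continuous double auction with 'zero-intelligence' traders:
    quotes are random, constrained only so that no one trades at a loss."""
    rng = random.Random(seed)
    values = [rng.uniform(0, 1) for _ in range(n)]  # buyers' private valuations
    costs = [rng.uniform(0, 1) for _ in range(n)]   # sellers' private costs
    buyers, sellers = list(range(n)), list(range(n))
    best_bid = best_ask = None                      # standing quotes: (price, trader)
    surplus = 0.0
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        if rng.random() < 0.5:
            i = rng.choice(buyers)
            bid = rng.uniform(0, values[i])         # never bid above one's value
            if best_bid is None or bid > best_bid[0]:
                best_bid = (bid, i)
        else:
            j = rng.choice(sellers)
            ask = rng.uniform(costs[j], 1)          # never ask below one's cost
            if best_ask is None or ask < best_ask[0]:
                best_ask = (ask, j)
        if best_bid and best_ask and best_bid[0] >= best_ask[0]:
            i, j = best_bid[1], best_ask[1]         # crossing quotes: a trade occurs
            surplus += values[i] - costs[j]
            buyers.remove(i)
            sellers.remove(j)
            best_bid = best_ask = None
    # benchmark: maximal surplus from matching high values with low costs
    optimum = sum(v - c for v, c in
                  zip(sorted(values, reverse=True), sorted(costs)) if v > c)
    return surplus / optimum

print(f"allocative efficiency: {simulate_double_auction():.2f}")
```

The point of the sketch is exactly Kincaid’s: allocative efficiency here is a property of the institutional setup, not of any assumption about how agents find the equilibrium.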

On this account, the model is thus merely a device to identify the equilibrium but has no use for explaining the mechanism through which the equilibrium is reached. It is not clear, however, that this account applies to rational choice models used in other settings, especially if experiments are impossible. For instance, Acemoglu & Robinson’s model highlights the importance of commitment to explain political transitions. Indeed, their theory aims at accounting for the change from a dictatorial equilibrium toward a democratic equilibrium. The elites’ ability to commit not to raise taxes in the future is the key feature that determines whether or not the political transition will occur. The model thus suggests that a highly general mechanism is at play, but it is unclear how much confidence we can have in this explanation given the highly unrealistic assumptions on which it builds. An alternative defense would be that the model’s value comes from the fact that it highlights a mechanism that may partially explain political transitions. Thanks to the model, we understand perfectly how this mechanism works, even though we cannot be sure that this mechanism is actually responsible for the relevant phenomenon to be explained. In other words, the relevance of the model comes from the fact that it depicts a possible world which we are able to fully explore, and that this world bears some (even remote) resemblance to the actual world. As I have argued elsewhere, many models in economics seem to be valued for this reason.

The problem with this last account is that, while it may explain why economists give credence to rational choice models, it is highly unlikely to convince skeptics like Elster that these models are explanatorily relevant. Indeed, as Elster has argued elsewhere, the academic value given to these models may itself result from the fact that the economics profession is trapped in a bad equilibrium.

Recent Working Papers

You will find below several working papers I have written recently on different (but somewhat related) topics. Comments are welcome!

A Bayesian Conundrum: From Pragmatism to Mentalism in Bayesian Decision and Game Theory

Abstract: This paper discusses the implications for Bayesian game theory of the behaviorism-versus-mentalism debate regarding the understanding of foundational notions of decision theory. I argue that the dominant view among decision theorists and economists is actually neither mentalism nor behaviorism, but rather pragmatism. Pragmatism takes preferences as primitives and builds on three claims: i) preferences and choices are analytically distinguishable, ii) qualitative attitudes have priority over quantitative attitudes and iii) practical reason has priority over theoretical reason. Crucially, the plausibility of pragmatism depends on the availability of the representation theorems of Bayesian decision theory. As an extension of decision-theoretic principles to the study of strategic interactions, Bayesian game theory also essentially endorses the pragmatist view. However, I claim that the fact that representation theorems are not available in games makes this view implausible. Moreover, I argue that pragmatism cannot properly account for the generation of belief hierarchies in games. If the epistemic program in game theory is to be pursued, this should probably be along mentalistic lines.

Keywords: Bayesian synthesis – Bayesian game theory – Pragmatism – Mentalism – Preferences

 

Neo-Samuelsonian Welfare Economics: From Economic to Normative Agency

Abstract: This paper explores possible foundations and directions for “Neo-Samuelsonian Welfare Economics” (NSWE). I argue that neo-Samuelsonian economics entails a reconciliation problem between positive and normative economics due to the fact that it cuts the relationship between economic agency (i.e. what and who the economic agent is) and normative agency (i.e. what should be the locus of welfare analysis). Developing a NSWE thus implies finding a way to articulate economic and normative agency. I explore two possibilities and argue that both are attractive but have radically different implications for the status of normative economics. The first possibility consists in fully endorsing a normative approach in terms of “formal welfarism”, which is completely neutral regarding both the locus and the unit measure of welfare analysis. The main implication is then to make welfare economics a branch of positive economics. The second possibility is to consider that human persons should be regarded as axiologically relevant because, while they are not prototypical economic agents, they have the ability to represent themselves, both to themselves and to others, as reasonable and reliable beings through narrative construction processes. This gives a justification for viewing well-being as being constituted by the persons’ preferences, but only because these preferences are grounded in reasons and values defining the identity of the persons. This view is broadly compatible with recent accounts of well-being in terms of value-based life satisfaction and implies a significant reconsideration of the foundations of welfare economics.

Keywords: Neo-Samuelsonian economics – Welfare Economics – Revealed preference theory – Preference-satisfaction view of welfare – Economic agency

 

History, Analytic Narratives and the Rules-in-Equilibrium View of Institutions

Abstract: Analytic narratives are case studies of historical events and/or institutions that combine the narrative method characteristic of historical and historiographical works with analytic tools, especially game theory, traditionally used in economics and political science. The purpose of this paper is to give a philosophy-of-science view of the relevance of analytic narratives for institutional analysis. The main claim is that the analytic narrative methodology is especially appealing in the context of a non-behaviorist and non-individualist account of institutions. Such an account is fully compatible with the “rules-in-equilibrium” view of institutions. On this basis, two supporting claims are made: first, I argue that within analytic narratives game-theoretic models play a key role in the identification of institutional mechanisms as the explanans for economic phenomena, mechanisms that are irreducible to so-called “micro-foundations”. Second, I claim that the “rules-in-equilibrium” view of institutions provides a justification for the importance given to non-observables in institutional analysis. Hence, institutional analysis building on analytic narratives typically emphasizes the role of derived (i.e. not directly observed) intentional states (preferences, intentions, beliefs).

Keywords: Analytic narratives – Rules-in-equilibrium view of institutions – Institutional analysis – Game theory

Accounting for Choices in Economics

Economics is sometimes characterized as the “science of rational choices over the allocation of scarce resources” or even more straightforwardly as the “science of choices”. In a recent blog post, Chris Dillow makes some interesting remarks about people’s economic behavior. He notes that our behavior is often partially unconscious and/or habit-based. Moreover, the set of available options is quite frequently so severely restricted that there is little room to make voluntary choices. Finally, many decisions are actually more or less random and grounded in social norms, conventions and other factors on which we barely reflect. The conclusion is then that

“when we ask “why did he do that?” we must look beyond “max U” stories about mythical individuals abstracted from society and look at the role of habit, cultural persistence and constraints.”

These are interesting and important remarks because they directly concern the scope of economics as well as the meaning of the key concept of choice. It seems that Dillow is using the choice concept according to its folk meaning. According to the latter, to properly say “she chooses x” requires at least that (a) one has several available options at her disposal to choose between and (b) she opts for one of the available options consciously and voluntarily. However, I would argue that this is not how economists generally use and understand the choice concept. They rather use a concept of choice* in a technical sense. To put it using some jargon, in economics choices* are basically behavioral patterns that correlate with changes in opportunity costs. In other words, when we say that economics is the science of choices*, what is actually meant is that it studies how some particular variable, reflecting for instance the consumption level of a given good, changes as the good’s relative price or consumers’ information changes. This definition of choice* has at least two noteworthy implications:

1) Economists are not interested in individual choices per se. Economists almost always work at some aggregated level and they do not aim at explaining the choices made by specific individuals or firms. They are rather interested in the properties of aggregate demand and supply.

2) Economists are agnostic regarding the specific mechanisms through which economic agents make choices. In particular, there is no presumption that these choices are conscious rather than habit-based. The U-Max framework only assumes that individual choices are responsive to changes in opportunity costs, not how and why they are responsive.

These two implications work in conjunction. Choices* need not be conscious nor based on any form of complex calculation, but they are nevertheless intentional: choices (in both the folk and technical meanings) are about something and they are the product of the agents’ intentional states (desires, beliefs, wants…). As philosophers of mind have emphasized, there is nothing paradoxical in the combination of unconsciousness and intentionality. The U-Max framework, as well as decision and game theory as a whole, are tools that are particularly well-fitted to study intentional behavior, whether conscious or not. These tools indeed assume that individual choices are responsive to changes in opportunity costs, which in special cases (e.g. addictive behavior) may not be true. However, this is mostly irrelevant as long as responsiveness is preserved at some market level. Gary Becker’s paper “Irrational Behavior and Economic Theory” provides an extreme example of this point. It shows how we can derive “well-behaved” demand and supply functions from individual agents (households and firms) using “irrational” decision rules. This result is by no means a necessity: there are cases where irrational behavior will lead to unconventional demand and supply functions, and because of income effects even rational behavior at the individual level can generate upward-sloping demand curves. Generally speaking, institutions matter: the way exchanges are organized will determine the aggregate outcome for a given profile of preferences and production costs.
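
Becker’s result is easy to reproduce numerically. In the minimal sketch below (my own toy version of the idea, not Becker’s original setup), each consumer picks her consumption bundle at random on the budget line; mean demand for a good nevertheless slopes downward in its price, because a higher price mechanically shrinks the budget set:

```python
import random

def mean_demand(price_x, income=100.0, n_consumers=100_000, seed=0):
    """Average demand for good x when each consumer picks a random point
    on the budget line p_x * x + p_y * y = income (with p_y = 1).
    Choices are 'irrational' (uniform random), yet demand slopes down."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_consumers):
        share = rng.random()                  # random budget share spent on x
        total += share * income / price_x     # quantity of x purchased
    return total / n_consumers

for p in (1.0, 2.0, 4.0):
    print(f"p_x = {p}: mean demand = {mean_demand(p):.1f}")
# mean demand is approximately income / (2 * p_x): 50.0, 25.0, 12.5
```

Responsiveness to opportunity costs here is entirely a budget-set effect; no assumption about conscious deliberation is needed.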

All of this depends on the claim that economists are not interested in explaining individual choices. Economists with the strongest revealed-preference stance are likely to agree with this claim. But there are many economists who are likely to disagree, considering that accounting for individual choices is necessary to understand aggregate outcomes such as a financial crisis. More generally, I would argue that attempting to explain individual choices can hardly be avoided in the numerous cases where multiple equilibria exist. The point is that to explain why a given equilibrium has been selected, it will most of the time be necessary to understand how individuals make choices. Here, whether choices are habit- or calculation-based, conscious or automatic, and so on, may matter. For instance, Thomas Schelling famously pointed out in The Strategy of Conflict the importance of focal points in accounting for the way people are able to coordinate without communicating. As Schelling made clear, focal points are determined neither by the mathematical properties of the game nor by purely instrumental considerations. They depend on cultural, social and aesthetic features.

A slightly more complex but even more relevant example, especially in industrial organization, is the existence of multiple (perfect Bayesian) equilibria in incomplete information games. In such games, one player (the “principal”) does not know the other player’s (the “agent’s”) type. The agent’s choice may sometimes convey information to the principal and help him identify the agent’s type. These games typically have multiple equilibria, some of them separating and others pooling. Which equilibrium is implemented is partially determined by the way the principal interprets the agent’s choice. Under a separating equilibrium, the principal interprets the agent’s choice in such a way that it provides him with information about the agent’s type. This is not the case under a pooling equilibrium. Of course, since under a pooling equilibrium all agents behave the same way whatever their type, observed behavior cannot serve as a basis to infer agents’ types. But the fact that all agents behave the same is itself a rational response to their own understanding of the way the principal will interpret their choice at the equilibrium.
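
To see concretely how the principal’s interpretation does the selection work, consider a minimal Spence-style education-signaling example (my illustration; the post does not work through a specific game). Assume a worker of type L or H with productivity 1 or 2, an education level e costing e/t for type t, and a competitive employer paying the expected productivity:

```latex
% Separating equilibrium: type L chooses e = 0, type H chooses e^*,
% with wages w(0) = 1 and w(e^*) = 2. Incentive compatibility requires:
1 \;\ge\; 2 - \frac{e^*}{1} \iff e^* \ge 1
  \quad\text{(type $L$ does not mimic $H$)},
\qquad
2 - \frac{e^*}{2} \;\ge\; 1 \iff e^* \le 2
  \quad\text{(type $H$ finds signaling worthwhile)}.
% Pooling equilibrium: both types choose e = 0 and receive the wage
% w(0) = 2 Pr(H) + 1 Pr(L), sustained by the off-path belief that any
% e > 0 comes from type L. Education then conveys no information at all.
```

Both equilibria rest on the same fundamentals; what differs is how the principal (here, the employer) interprets the agent’s observed and, crucially, unobserved choices.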

My point is thus that in strategic interactions where players have to think about how other players are thinking, it is less clear that economists can safely ignore how people make choices. Given the same set of “fundamentals” (preferences, technology, information distribution), different behavioral patterns may arise and these differences are likely to be due to the way individual agents are choosing.

Bayesian Rationality and Utilitarianism

In a recent blog post, Bryan Caplan gives his critical views about the “rationality community”, i.e. a group of people and organizations who are actively developing ideas related to cognitive bias, signaling and rationality. Basically, members of the rationality community are applying the rationality norms of Bayesianism to a large range of issues related to individual and social choices. Among Caplan’s complaints is the alleged propensity of the community’s members to endorse consequentialist ethics, and more specifically utilitarianism, essentially for “aesthetic” reasons. In a related Twitter exchange, Caplan states that by utilitarianism he refers to the doctrine that one’s duty is to act so as to maximize the sum of happiness in society. This corresponds to what is generally called hedonic utilitarianism.

Hedonic utilitarianism faces many problems well-known to moral philosophers. I do not know whether the members of the rationality community are hedonic utilitarians, but there is another route for Bayesians to be utilitarians. This route is logical rather than aesthetic and is grounded in a theorem proved by the economist John Harsanyi in the 1950s and since largely discussed by philosophically-minded economists and mathematically-minded philosophers. Harsanyi’s initial demonstration was grounded in von Neumann and Morgenstern’s axioms (actually Marschak’s version of them) for decision under risk, but has since been extended to other versions of decision theory, especially Savage’s axioms for decision under uncertainty. The theorem can be briefly stated in the following way. Denote by S the set of states of nature, i.e. morally-relevant features that are outside the control of the decision-makers, and by O the set of outcomes. Intuitively, an outcome is a possible world specifying everything that is morally relevant for the individuals: their wealth, their health, their history, and so on. Finally, denote by X the set of “prospects”, i.e. social alternatives or public policies mapping any state s onto an outcome o. We assume that the n members of the population have preferences over the set of prospects and that these preferences satisfy Savage’s axioms. Therefore, the preferences of any individual i can be represented by an expectational utility function: each prospect x is ascribed a utility number ui(x) that cardinally represents i’s preferences. ui(x) corresponds to the probability-weighted sum of the utilities of all possible outcomes (which correspond to “sure” prospects). Hence, each individual also has beliefs regarding the likelihood of the states of nature, captured by a probability function pi(.).

Given the individuals’ preferences, each prospect x is assigned a vector of utility numbers (u1(x), …, un(x)). Now, we assume that there is a “benevolent dictator” k (possibly one of the members of the population) whose preferences over X also satisfy Savage’s axioms. It follows that the dictator’s preferences can also be represented by an expectational utility function, with each prospect x mapped into a number uk(x). Last assumption: the individuals’ and the dictator’s preferences over X are related by a Pareto principle: if every individual prefers prospect x to prospect y (resp. is indifferent between them), then the dictator prefers x to y (resp. is indifferent between them). Harsanyi’s theorem states that the dictator’s preferences can then be represented by a utility function corresponding to the weighted sum of the individuals’ utilities for any prospect x. Suppose moreover that utilities are interpersonally comparable and that the dictator’s preferences are impartial (they do not arbitrarily weight one person’s utility more than another’s); then for any x

uk(x) = u1(x) + … + un(x).

Of course, this is the utilitarian formula, but stated in utility rather than hedonic terms. Note that here utility does not correspond to happiness or pleasure but rather to preference-satisfaction. Harsanyi’s utilitarianism is preference-based. The point of the theorem is to show that consistent Bayesians should be utilitarians in this sense.
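
For readers who want the structure of the result, here is a compact schematic statement (a standard textbook formulation, with the aggregation weights made explicit):

```latex
% Harsanyi's aggregation theorem (schematic). If every individual i and the
% dictator k have preferences over prospects satisfying Savage's axioms,
% represented by expected utility functions u_i and u_k, and the Pareto
% principle holds, then there exist weights a_i >= 0 and a constant b with
u_k(x) \;=\; \sum_{i=1}^{n} a_i\, u_i(x) \;+\; b
\qquad \text{for every prospect } x.
% Interpersonal comparability plus impartiality allow the normalization
% a_i = 1 and b = 0, which yields the unweighted sum given above.
```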

It should be acknowledged that what the theorem demonstrates is actually far weaker. A first reason (discussed by Sen among others) is that the cardinal representation of the individuals’ preferences is not imposed by Savage’s theorem. Obviously, the use of other representations of individuals’ preferences will have the effect of making the additive structure unable to represent the dictator’s preferences. Some authors like John Broome have argued, however, that the expectational representation is the most natural one and fits well with some notion of goodness. There is another, different kind of difficulty related to the Pareto principle. It can be shown that the assumption that the dictator’s preferences are transitive (which is imposed by Savage’s axioms), combined with the Pareto principle, implies “probabilistic agreement”, i.e. that all individuals agree in their probabilistic assessments of the likelihood of the states of nature. Otherwise, probabilistic disagreement and the Pareto principle would lead to cases where the dictator’s preferences are inconsistent and thus unamenable to a utility representation. Probabilistic agreement is of course a very strong assumption, though no doubt one that Harsanyi would have been ready to defend (see the “Harsanyi doctrine” in game theory). Objective Bayesians may indeed argue that rationality entails a unique correct probabilistic assessment. But subjective Bayesians will of course disagree.

What happens if we give up the Pareto principle for prospects (though not for outcomes)? Then the dictator’s preferences are amenable to being represented by an ex post prioritarian social welfare function such that

uk(x) = ∑s pk(s) ∑i v(ui(x(s)))

where v(.) is a strictly increasing and concave function, the sums range over states s and individuals i, and x(s) is the outcome that prospect x yields in state s. This corresponds to what Derek Parfit called the “priority view” and leads to giving priority to the satisfaction of the preferences of the less well-off in the population.
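
A small numerical example (mine, purely for illustration) shows how the concavity of v encodes the priority given to the worse-off. Take v(u) = √u and two individuals with sure utilities 1 and 9:

```latex
v(1) + v(9) \;=\; 1 + 3 \;=\; 4,
\qquad
v(5) + v(5) \;=\; 2\sqrt{5} \;\approx\; 4.47 .
% Total utility is 10 in both profiles, yet the prioritarian function
% strictly prefers the equalized profile (5, 5): a unit of utility
% counts for more when it goes to the worse-off individual.
```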

Review of “Understanding Institutions. The Science and Philosophy of Living Together”, Francesco Guala, Princeton University Press, 2016

The following is a (long) review of Francesco Guala’s recent book Understanding Institutions. The Science and Philosophy of Living Together (Princeton University Press, 2016).

Twenty years ago, John Searle published his influential account of the nature of institutions and institutional facts (Searle 1995). Searle’s book has been a focal point for philosophers and social scientists interested in social ontology, and its claims and arguments continue to be hotly disputed today. Francesco Guala, a professor at the University of Milan and a philosopher with a strong interest in economics, has written a book that can in many ways be considered both a legitimate successor to and a thoroughly argued critique of Searle’s pioneering work. Understanding Institutions is a compact articulation of Guala’s thoughts about institutions and social ontology, developed over several publications in economics and philosophy journals. It is a legitimate successor to Searle’s book in that all the central themes in social ontology that Searle discussed are also discussed by Guala. But it is also a strong critique of Searle’s general approach to social ontology: while the latter relies on an almost complete (and explicit) rejection of the social sciences and their methods, Guala instead argues for a naturalistic approach to social ontology combining the insights of philosophers with the theoretical and empirical results of the social sciences. Economics, and especially game theory, plays a major role in this naturalistic endeavor.

The book is divided into two parts of six chapters each, with an “interlude” of two additional chapters. The first part presents and argues for an original “rules-in-equilibrium” account of institutions that Guala has recently developed in several articles, some of them co-authored with Frank Hindriks. Two classical accounts of institutions have traditionally been endorsed in the literature. On the institutions-as-rules account, “institutions are the rules of the game in a society… the humanly devised constraints that shape human interactions” (North 1990, 3-4). Searle’s own account in terms of constitutive rules is a subspecies of the institutions-as-rules approach where institutional facts are regarded as the products of the assignment of status functions through performative utterances of the kind “this X counts as Y in circumstances C”. The institutions-as-equilibria account has been essentially endorsed by economists and game theorists. It identifies institutions with equilibria in games, especially in coordination games. In this perspective, institutions are best seen as devices solving the classical problem of multiple equilibria, as they select one strategy profile over which the players’ beliefs and actions converge. Guala’s major claim in this part is that the relevant way to account for institutions calls for the merging of these two approaches. This is done through the key concept of correlated equilibrium: institutions are pictured as playing the role of “choreographers” coordinating the players’ choices on the basis of public (or semi-public) signals indicating to each player what she should do. Institutions then take the form of lists of indicative conditionals, i.e. statements of the form “if X, then Y”. Formally, institutions materialize as statistically correlated patterns of behavior with the equilibrium property that no one has an interest in unilaterally changing her behavior.
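
The choreographer metaphor is easy to make concrete. The Python sketch below (my own toy example: a “crossroads” game with a traffic light, payoff numbers chosen purely for illustration) verifies that obeying the signal is a correlated equilibrium: conditional on the recommendation she receives, no driver gains by deviating:

```python
# Crossroads game: each driver plays Go or Stop.
# Payoffs (row player, column player); values are illustrative.
GO, STOP = 0, 1
payoff = {
    (GO, GO): (-5, -5),    # crash
    (GO, STOP): (2, 0),    # row driver passes
    (STOP, GO): (0, 2),    # column driver passes
    (STOP, STOP): (1, 1),  # both wait, mild frustration
}

# The "choreographer" (traffic light) draws one of two action profiles,
# each with probability 1/2, and tells each driver only her own action.
signal_dist = {(GO, STOP): 0.5, (STOP, GO): 0.5}

def is_correlated_equilibrium(dist):
    """Check that following one's signal is a best response,
    conditional on the information the signal conveys."""
    for player in (0, 1):
        for told in (GO, STOP):
            # opponent's recommendations, conditional on being told 'told'
            cond = {prof: p for prof, p in dist.items() if prof[player] == told}
            total = sum(cond.values())
            if total == 0:
                continue
            for alt in (GO, STOP):
                obey = deviate = 0.0
                for prof, p in cond.items():
                    opp = prof[1 - player]
                    pair = (told, opp) if player == 0 else (opp, told)
                    alt_pair = (alt, opp) if player == 0 else (opp, alt)
                    obey += (p / total) * payoff[pair][player]
                    deviate += (p / total) * payoff[alt_pair][player]
                if deviate > obey + 1e-12:
                    return False
    return True

print(is_correlated_equilibrium(signal_dist))  # True
```

Note how the institution shows up twice, as a list of conditionals (“if green, go; if red, stop”) and as an equilibrium property of the resulting behavioral pattern, which is precisely Guala’s point.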

The motivation for this new approach follows from the insufficiencies of the institutions-as-rules and institutions-as-equilibria accounts, but also from the need to answer fundamental questions regarding the nature of the social world. Regarding the former, it has been widely acknowledged that one of the main defects of the institutions-as-rules account is that it lacks a convincing explanation of why people are motivated to follow rules. The institutions-as-equilibria approach, for its part, is unable to account for the specificity of human beings regarding their ability to reflect on the rules and the corresponding behavioral patterns that are implemented. Playing equilibria is far from being human-specific, as evolutionary biologists recognized long ago. However, being able to explain why one is following some rule, or to communicate through a language about the rules that are followed, are capacities that only humans have. There are also strong reasons to think that the mental operations and intentional attitudes that sustain equilibrium play in human populations are far more complex than in any other animal population. Maybe the most striking result of this original account of institutions is that Searle’s well-known distinction between constitutive and regulative rules collapses. Indeed, building on a powerful argument made by Frank Hindriks (2009), Guala shows that Searle’s “this X counts as Y in C” formula reduces to a conjunction of “if X then Y” conditionals corresponding to regulative rules. “Money”, “property” or “marriage” are theoretical terms that are ultimately expressible through regulative rules.

The second part of the book explores the implications of the rules-in-equilibrium account of institutions for a set of related philosophical issues about reflexivity, realism and fallibilism in social ontology. This exploration comes after a useful two-chapter interlude where Guala successively discusses the topics of mindreading and collective intentionality. In these two chapters, Guala contends, following the pioneering work of David Lewis (1969), that the ability of institutions to solve coordination problems depends on the formation of iterated chains of mutual expectations of the kind “I believe that you believe that I believe…” and so on, ad infinitum. It is suggested that the formation of such chains is generally the product of a simulation reasoning process where each player forms expectations about the behavior of others by simulating their reasoning, on the assumption that others reason as she does. In particular, following the work of Morton (2003), Guala suggests that coordination is often reached through “solution thinking”, i.e. a reasoning process where each player first asks what is the most obvious or natural way to tackle the problem and then assumes that others are reasoning toward the same conclusion as she is. The second part then provides a broad defense of realism and fallibilism in social ontology. Here, Guala’s target is no longer Searle, as the latter also endorses realism (though Searle’s writings on this point are ambiguous and sometimes contradictory, as Guala shows), but rather various forms of social constructionism. The latter hold that the social realm and the natural realm are intrinsically different because of a fundamental relation between how the social world works and how humans (and especially social scientists) reflect on how it works. Such a relationship is deemed to be unknown to the natural sciences and the natural world, and therefore, the argument goes, “social kinds” differ considerably from natural kinds. The most extreme forms of social constructionism hold that we cannot be wrong about social kinds and objects, as the latter are fully constituted by our mental attitudes about them.

The general problem tackled by Guala in this part is what he characterizes as the dependence between mental representations of social kinds and social kinds. The dependence can be both causal and constitutive. As Guala shows, the former is indeed a feature of the social world but is unproblematic in the rules-in-equilibrium account. Causal dependency merely reflects the fact that equilibrium selection is belief-dependent, i.e. when there are several equilibria, which one is selected depends on the players’ beliefs about which equilibrium will be selected. Constitutive dependency is a trickier issue. It assumes that an ontological dependence holds between a statement “Necessarily (X is K)” and a statement “We collectively accept that (X is K)”. For instance, on this view, a specific piece of paper (X) is money (K) if and only if it is collectively accepted that this is the case. It is then easy to see why we cannot be wrong about social kinds. Guala claims that constitutive dependence is false on the basis of a strong form of non-cognitivism that makes a radical distinction between folk classifications of social objects and what these objects are really doing in the social world: “Folk classificatory practices are in principle quite irrelevant. What matters is not what type of beliefs people have about a certain class of entities (the conditions they think the entities ought to satisfy to belong to that class) but what they do with them in the course of social interactions” (p. 170). Guala strengthens his point in the penultimate chapter building on semantic externalism, i.e. the view that meaning is not intrinsic but depends on how the world actually is. Externalism implies that the meaning of institutional terms is determined by people’s practices, not by their folk theories. An illustration of the implication of this view is given in the last two chapters through the case of the institution of marriage. Guala argues for a distinction between scientific considerations about what marriage is and normative considerations regarding what marriage should be.

Guala’s book is entertaining, stimulating and thought-provoking. Moreover, as it is targeted at a wide audience of social scientists and philosophers, it is written in plain language and devoid of unnecessary technicalities. Without doubt, it will quickly become a reference work for anyone who believes that naturalism is the right way to approach social ontology. Given the span of the book (and its relatively short length – 222 pages in total), there are however many claims that would call for more extensive arguments to be completely convincing. Each chapter contains a useful “further readings” section that helps the interested reader go further. Still, there are several points where I consider that Guala’s discussion should be qualified. I will briefly mention three of them. The first one concerns the very core of Guala’s “rules-in-equilibrium” account of institutions. As the author notes himself, the idea is not wholly new, as it has been suggested several times in the literature. Guala’s contribution, however, resides in his handling of the conceptual view that institutions are both rules and equilibria within an underlying game-theoretic framework that has been explored and formalized by Herbert Gintis (2009) and, even before, by Peter Vanderschraaf (1995). Vanderschraaf was the first to suggest that Lewis’s conventions should be formalized as correlated equilibria, and Gintis has expanded this view to social norms. By departing from the institutions-as-equilibria account, Guala endorses a view of institutions that eschews the behaviorism characterizing most of the game-theoretic literature on institutions, where institutions are simply conceived as behavioral patterns. The concept of correlated equilibrium indeed allows for a “thicker” view of institutions as sets of (regulative) rules having the form of indicative conditionals. I think however that this departure from behaviorism is insufficient, as it fails to acknowledge the fact that institutions also rely on subjunctive (and not merely indicative) conditionals. Subjunctive conditionals are of the form “Were X, then Y” or “Had X, then Y” (in the latter case, they correspond to counterfactuals). The use of subjunctive conditionals to characterize institutions is not needed if rules are what Guala calls “observer-rules”, i.e. devices used by social scientists to describe an institutional practice. The reason is that if the institution is working properly, we will never observe behavior off the equilibrium path. But this is no longer true if rules are “agent-rules”, i.e. devices used by the players themselves to coordinate. In this case, the players must use (if only tacitly) counterfactual reasoning to form beliefs about what would happen in events that cannot happen at the equilibrium. This point is obscured by the twofold fact that Guala only considers simple normal-form games and does not explicitly formalize the epistemic models that underlie the correlated equilibria in the coordination games he discusses. However, as several game theorists have pointed out, we cannot avoid dealing with counterfactuals when we want to account for the way rational players reason to achieve equilibrium outcomes, especially in dynamic games. Avner Greif’s (2006) discussion of the role of “cultural beliefs” in his influential work on the economic institutions of the Maghribi traders emphasizes the importance of counterfactuals in the working of institutions. Indeed, Greif shows that differences in the players’ beliefs at nodes that are off the equilibrium path may result in significantly different behavioral patterns.

A second, related point on which I would slightly amend Guala’s discussion concerns his argument that public (i.e. self-evident) events are unnecessary for the generation of common beliefs (see his chapter 7 on mindreading). Here, Guala follows claims made by game theorists like Ken Binmore (2008) regarding the scarcity of such events, concluding that institutions cannot depend on their existence. Guala indeed argues that neither Morton’s “solution thinking” nor Lewis’s “symmetric reasoning” relies on the existence of this kind of event. I would qualify this claim for three reasons. First, if public events are defined as publicly observable events, then their role in the social world is an empirical issue that is far from settled. Chwe (2001) has for instance argued for their importance in many societies, including modern ones. Arguably, modern communication technologies make such events more common, if anything. Second, Guala rightly notes in his discussion of Lewis’s account of the generation of common beliefs (or common reason to believe) that common belief of some state of affairs or event R (where R is for instance any behavioral pattern) depends on a state of affairs or event P and on the fact that people are symmetric reasoners with respect to P. Guala suggests, however, that in Lewis’s account P should be a public event. This is not quite right, as it is sufficient for P to be second-order mutual belief (i.e. everyone believes P and everyone believes that everyone believes P). However, the fact that everyone is a symmetric reasoner with respect to P has to be commonly believed (Sillari 2008). The issue is thus what grounds this common belief. Finally, if knowledge and belief are set-theoretically defined, then for any common knowledge event R there must be a public event P. I would argue that in this case, rather than characterizing public events in terms of observability, it is better to characterize them in terms of mutual accessibility: in a given society, there are events that everyone comes to know or believe, even without directly observing them, simply because they are assumed to be self-evident.
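
The set-theoretic point in this last remark can be stated precisely in the standard Aumann-style partition framework (standard definitions, not notation used in the book):

```latex
% States Omega; agent i's information partition has cells P_i(w).
K_i(E) \;=\; \{\omega \in \Omega : P_i(\omega) \subseteq E\}
\quad \text{($i$ knows $E$)},
\qquad
E \ \text{is self-evident to } i \iff K_i(E) = E .
% An event R is common knowledge at w iff there exists an event P,
% self-evident to every agent, with w in P and P a subset of R.
% This P is exactly the "public event" whose existence is asserted above;
% mutual accessibility, not observability, is what the formalism requires.
```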

My last remark concerns Guala’s defense of realism and fallibilism about social kinds. I think that Guala is fundamentally right regarding the falsehood of constitutive dependence. However, his argument ultimately relies on a functionalist account of institutions: institutions are not what people take them to be but are rather defined by the functions they fulfill in general in human societies. To make sense of this claim, one should be able to distinguish between “type-institutions” and “token-institutions” and claim that the functions associated with the former can be fulfilled in several ways by the latter. Crucially, for any type-institution I, the historical forms taken by the various token-institutions cannot serve as a basis for characterizing what I is or should be. To argue the contrary would condemn one to a form of traditionalism forbidding the evolution of an institution (think of same-sex marriage). The problem with this argument is that while it may be true that the way people represent a type-institution I, at a given time and location, through a token-institution cannot define what I is, it remains to be determined how the functions of I are to be established. Another way to state the problem is the following: while one (especially the social scientist) may legitimately identify I with a class of games it solves, thus determining its functions, it is not clear why we could not identify I with another (not necessarily mutually exclusive) class of games. Fallibilism about social kinds supposes that we can identify the functions of an institution, but is this very identification not itself grounded in collective representations and acceptance? If it is, then some work remains to be done to fully establish realism and fallibilism about social kinds.

References

Binmore, Ken. 2008. “Do Conventions Need to Be Common Knowledge?” Topoi 27 (1–2): 17–27.

Chwe, Michael Suk-Young. 2001. Rational Ritual: Culture, Coordination, and Common Knowledge. Princeton University Press.

Gintis, Herbert. 2009. The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences. Princeton University Press.

Greif, Avner. 2006. Institutions and the Path to the Modern Economy: Lessons from Medieval Trade. Cambridge University Press.

Hindriks, Frank. 2009. “Constitutive Rules, Language, and Ontology.” Erkenntnis 71 (2): 253–75.

Lewis, David. 1969. Convention: A Philosophical Study. John Wiley & Sons.

Morton, Adam. 2003. The Importance of Being Understood: Folk Psychology as Ethics. Routledge.

North, Douglass C. 1990. Institutions, Institutional Change and Economic Performance. Cambridge University Press.

Searle, John R. 1995. The Construction of Social Reality. Simon and Schuster.

Sillari, Giacomo. 2008. “Common Knowledge and Convention.” Topoi 27 (1–2): 29–39.

Vanderschraaf, Peter. 1995. “Convention as Correlated Equilibrium.” Erkenntnis 42 (1): 65–87.

Consequentialism and Formalism in Rational and Social Choice Theory

Rational choice theory and social choice theory (RCT and SCT respectively) in economics are broadly consequentialist. Consequentialism can be characterized as the view that all choice alternatives should be evaluated in terms of their consequences and that the best alternatives are those with the best consequences. This is a very general view which allows for many different approaches and frameworks. In SCT, for example, welfarism is a particular form of consequentialism that is largely dominant in economics, and utilitarianism is a specific instance of welfarism. In RCT, expected utility theory and revealed preference theory are two accounts of rational decision-making that assume that choices are made on the basis of their consequences.

Consequentialism is also characterized by a variety of principles or axioms that take different and more or less strong forms depending on the specific domain of application. The most important are the following:

Complete ordering (CO): The elements of any set A of alternatives can be completely ordered on the basis of a reflexive, transitive and complete binary relation ≥.

Independence (I): The ranking of any pair of alternatives is unaffected by a change in the likelihood of consequences which are identical across the two alternatives.

Normal/sequential form equivalence (NSE): The ordering of alternatives is the same whether the decision problem is represented in normal form (the alternative is directly associated with a consequence or a probability distribution over consequences) or in sequential form (the alternative is a sequence of actions leading to a terminal node associated with a consequence or a probability distribution over consequences).

Sequential separability (SS): For any decision tree T and any subtree Tn starting at node n of T, the ordering of the subset of consequences accessible in Tn is the same in T as in Tn.

Pareto (P): If two alternatives have the same or equivalent consequences across some set of locations (events, persons), then there must be indifference between the two alternatives.

Independence of irrelevant alternatives (IIA): The ordering of any pair of alternatives is independent of the set of available alternatives.

All these axioms are used either in RCT or in SCT, sometimes in both. CO, I, NSE, SS and IIA are almost always imposed on individual choice as criteria of rationality. CO and IIA, together with P, are generally regarded as conditions that Arrowian social welfare functions must satisfy. I is also sometimes considered as a requirement for social welfare functionals, especially in the context of discussions over utilitarianism and prioritarianism.

It should be noted that these axioms are not completely independent: for instance, CO will generally require the satisfaction of IIA or of NSE. Regarding the former, define a choice function C(.) such that, for any set S of alternatives, C(S) = {x | x ≥ y for all y ∈ S}, i.e. the alternatives that can be chosen are those and only those which are not ranked below any other alternative in terms of their consequences. Consider a set of three alternatives x, y, z and suppose that C({x, y}) = {x} but C({x, y, z}) = {y, z}. This is a violation of IIA since while x ≥ y and not (y ≥ x) when S = {x, y}, we have y ≥ x and not (x ≥ y) when S = {x, y, z}. Now suppose that C({x, z}) = {z}. We have a violation of the transitivity of the negation of the binary relation ≥ since while we have not (z ≥ y) and not (y ≥ x), we nevertheless have z ≥ x. However, this is not possible if CO is satisfied.
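
The example can be checked mechanically. The short script below (my own verification, not part of the original argument) enumerates every complete and transitive relation ≥ on {x, y, z} and confirms that none of them generates the three choice sets just described:

```python
from itertools import product

items = ['x', 'y', 'z']
pairs = [(a, b) for a in items for b in items if a != b]

def choice(rel, menu):
    """C(S) = alternatives weakly preferred (>=) to every other element of S."""
    return {a for a in menu if all((a, b) in rel for b in menu if b != a)}

observed = [({'x', 'y'}, {'x'}),
            ({'x', 'y', 'z'}, {'y', 'z'}),
            ({'x', 'z'}, {'z'})]

def transitive(rel):
    return all((a, c) in rel
               for (a, b) in rel for (b2, c) in rel if b == b2)

rationalizable = False
# enumerate every relation >= on {x, y, z} via its off-diagonal part
for bits in product([True, False], repeat=len(pairs)):
    rel = {p for p, keep in zip(pairs, bits) if keep}
    rel |= {(a, a) for a in items}                      # reflexivity
    if not all((a, b) in rel or (b, a) in rel for (a, b) in pairs):
        continue                                        # completeness
    if not transitive(rel):
        continue
    if all(choice(rel, menu) == chosen for menu, chosen in observed):
        rationalizable = True

print("rationalizable:", rationalizable)  # prints: rationalizable: False
```

The culprit is already in the first two observations: C({x, y}) = {x} requires not (y ≥ x), while y ∈ C({x, y, z}) requires y ≥ x.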

All these axioms have traditionally been given a normative interpretation. By this, I mean that they are seen as normative criteria of individual and collective rationality: a rational agent should or must have completely ordered preferences over the set of all available alternatives, he cannot, on pain of inconsistency, violate I or NSE, and so on. Similarly, collective rationality entails that any aggregation of the individuals’ evaluations of the available alternatives generates a complete ordering satisfying P and IIA, and possibly I. Understood this way, these axioms characterize consequentialism as a normative doctrine setting constraints on rational and social choices. For instance, in the moral realm, consequentialism rules out various forms of egalitarian accounts which violate I and sometimes P. In the domain of individual choice, it will regard criteria such as minimization of maximum regret or maximin as irrational. Consequentialists have to face several problems, however. The first and most evident one is that reasonable individuals regularly fail to meet the criteria of rationality imposed by consequentialism. This has been well documented in economics, starting with the violations of axiom I in the Allais and Ellsberg paradoxes. A second problem is that the axioms of consequentialism sometimes lead to counterintuitive and disturbing moral implications. It has accordingly been suggested that criteria of individual rationality should not apply to collective rationality, especially CO and I (but also P and IIA).

These difficulties have led consequentialists to develop defensive strategies to preserve most of the axioms. Most of these strategies rely on what I will call formalism: in a nutshell, they consist in regarding the axioms as structural or formal constraints for representing, rather than assessing, individual and collective choices. In other words, rather than a normative doctrine, consequentialism is best viewed as a methodological and theoretical framework to account for the underlying values that ground individual and collective choices. As this may sound quite abstract, I will discuss two examples, one related to individual rational choice, the other to social choice, both concerned with axiom I. The first example is simply the well-known Ellsberg paradox. Assume you are presented with two consecutive decision problems, each time involving a pair of alternatives. In the first one, we suppose that an urn contains 30 red balls and 60 other balls which can be either black or yellow. You are presented with two alternatives: alternative A gives you $100 in case a red ball is drawn and alternative B gives you $100 in case a black ball is drawn. In the second decision problem, the content of the urn is assumed to be the same, but this time alternative C gives you $100 in case you draw either a red or a yellow ball and alternative D gives you $100 in case you draw either a black or a yellow ball.

Alternative | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn
A | $100 | $0 | $0
B | $0 | $100 | $0

Alternative | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn
C | $100 | $0 | $100
D | $0 | $100 | $100

Axiom I entails that if the decision-maker prefers A to B, then he should prefer C to D. The intuition is that if one prefers A to B, the decision-maker must ascribe a higher probability to event E1 than to event E2. Since the content of the urn is assumed to be the same in both decision problems, this should imply that the expected gain of C (measured either in money or in utility) is higher than D’s. The decision-maker’s ranking of the alternatives should be independent of what happens in case event E3 holds, since in each decision problem the alternatives have the same outcome under E3. However, as Ellsberg’s experiment shows, while most persons prefer A to B, they prefer D to C, which is sometimes interpreted as the result of ambiguity aversion.
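
The violation is immediate once spelled out in expected-utility terms. For any single subjective probability p over the three events, and normalizing u($0) = 0 (my notation):

```latex
A \succ B \;\iff\; p(\mathrm{red})\,u(100) > p(\mathrm{black})\,u(100)
          \;\iff\; p(\mathrm{red}) > p(\mathrm{black}),
\\[4pt]
C \succ D \;\iff\; \bigl(p(\mathrm{red}) + p(\mathrm{yellow})\bigr)\,u(100)
          > \bigl(p(\mathrm{black}) + p(\mathrm{yellow})\bigr)\,u(100)
          \;\iff\; p(\mathrm{red}) > p(\mathrm{black}).
% Hence A > B together with D > C is inconsistent with any single prior:
% the yellow column cancels out, which is exactly what axiom I asserts.
```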

The second example was suggested by Peter Diamond in a discussion of John Harsanyi’s utilitarian aggregation theorem. Suppose a doctor has two patients waiting for a kidney transplant. Unfortunately, only one kidney is available and no other is expected for quite some time. We assume that the doctor, endorsing the social preference of the society, is indifferent between giving the kidney to one or the other patient. The doctor is considering a choice between three allocation mechanisms: mechanism S1 gives the kidney to patient 1 for sure, mechanism S2 gives the kidney to patient 2 for sure, while under mechanism R he tosses a fair coin and gives the kidney to patient 1 if tails but to patient 2 if heads.

Alternative | E1: Coin lands tails | E2: Coin lands heads
S1 | Kidney is given to patient 1 | Kidney is given to patient 1
S2 | Kidney is given to patient 2 | Kidney is given to patient 2
R | Kidney is given to patient 1 | Kidney is given to patient 2

Given that it is assumed that the society (and the doctor) is indifferent between giving the kidney to patient 1 or 2, axiom I implies that the three alternatives should be ranked as indifferent. Most people have the strong intuition, however, that allocation mechanism R is better because it is fairer.

Instead of giving up axiom I, several consequentialists have suggested reconciling our intuitions with consequentialism through a refinement of the description of outcomes. The basic idea is that, following consequentialism, everything that matters in the individual or collective choice should be featured in the description of outcomes. Consider the Ellsberg paradox first. If we assume that the violation of I is due to the decision-makers’ aversion to probabilistic ambiguity, then we modify the tables in the following way:

Alternative | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn
A | $100 + sure to have a 1/3 probability of winning | $0 + sure to have a 1/3 probability of winning | $0 + sure to have a 1/3 probability of winning
B | $0 + unsure of the probability of winning | $100 + unsure of the probability of winning | $0 + unsure of the probability of winning

Alternative | E1: Red ball is drawn | E2: Black ball is drawn | E3: Yellow ball is drawn
C | $100 + unsure of the probability of winning | $0 + unsure of the probability of winning | $100 + unsure of the probability of winning
D | $0 + sure to have a 2/3 probability of winning | $100 + sure to have a 2/3 probability of winning | $100 + sure to have a 2/3 probability of winning

The point is simple. If we consider that being unsure of one’s probability of winning the $100 is something that makes an alternative less desirable, everything else being equal, then this has to be reflected in the description and valuation of outcomes. It is then easy to see that ranking A over B but D over C no longer entails a violation of I, because the outcomes associated with event E3 are no longer the same in each pair of alternatives. A similar logic can be applied to the second example. If the fairness of the allocation mechanism is collectively considered something valuable, then this must be reflected in the description of outcomes. Then, we have

Alternative | E1: Coin lands tails | E2: Coin lands heads
S1 | Kidney is given to patient 1 | Kidney is given to patient 1
S2 | Kidney is given to patient 2 | Kidney is given to patient 2
R | Kidney is given to patient 1 + both patients are fairly treated | Kidney is given to patient 2 + both patients are fairly treated

Once again, this new description makes it possible to rank R strictly above S1 and S2 without violating I. Hence, the consequentialist’s motto, in all the cases where an axiom seems problematic, is simply “get the outcome descriptions right!”.

A natural objection to this strategy is of course that it seems to make things too easy for the consequentialist. On the one hand, it makes the axioms virtually unfalsifiable, as any choice behavior can be trivially accounted for by a sufficiently fine-grained partition of the outcome space. On the other hand, all moral intuitions and principles can be made compatible with a consequentialist perspective, once again provided that we have the right partition of the outcome space. However, one can argue that this is precisely the point of the formalist strategy. The consequentialist will argue that this is unproblematic as long as consequentialism is not seen as a normative doctrine about rationality and morality, but rather as a methodological and theoretical framework to account for the implications of various values and principles for rational and social choices. More precisely, what can be called formal consequentialism can be seen as a framework to uncover the principles and values underlying our moral and rational behavior and judgments.

Of course, this defense is not completely satisfactory. Indeed, most consequentialists will not be comfortable with the removal of all normative content from their approach. As a consequentialist, one wants to be able to say what it is rational to do and what morality commends in specific circumstances. If one wants to preserve some normative content, then the only solution is to impose normative constraints on the permissible partitions of the outcome space. This is indeed what John Broome has suggested in several of his writings with the notion of “individuation of outcomes by justifiers”: the partition of the outcome space should distinguish outcomes if and only if they differ in a way that makes it rational not to be indifferent between them. It follows that theories of rational and social choice are in need of a substantive account of rational preferences and goodness. Such an account is notoriously difficult to conceive. A second difficulty is that the formalist strategy will sometimes be implausible or may even lead to some form of inconsistency. For instance, in the context of expected utility theory, Broome’s individuation of outcomes depends on the crucial and implausible assumption that all “constant acts” are available. This leads to a “richness” axiom (made by Savage for instance) according to which all probability distributions over outcomes should figure in the set of available alternatives, including logically or materially impossible alternatives (e.g. being dead and in good health). In sequential decision problems, the formalist strategy is bound to fail as soon as the path taken to reach a given outcome is relevant for the decision-maker. In this case, including the path taken in the description of outcomes will not always be possible without producing inconsistent descriptions of what is supposed to be the same outcome.

These difficulties indicate that formalism cannot fully vindicate consequentialism. Still, it remains an interesting perspective in both rational and social choice theory.

Capitalist Economies, Commodification and Cooperation

Branko Milanovic has an interesting post on the topic of commodification and the nature of economic relations in capitalist economies. Milanovic argues that commodification (by which he roughly means the extension of market relations, i.e. price-governed relations, to social activities that were historically outside the realm of markets) works against the development of cooperative behavior based on “repeated games”. Milanovic’s main point is that while non-altruistic cooperative behavior may indeed be rational and optimal when interactions are repeated with a sufficiently high probability, the commodification process makes economic relations more anonymous and ephemeral:

Commodification of what was hitherto a non-commercial resource makes each of us do many jobs and even, as in the renting of apartments, capitalists. But saying that I work many jobs is the same thing as saying that workers do not hold durably individual jobs and that the labor market is fully "flexible" with people getting in and out of jobs at a very high rate. Thus workers indeed become, from the point of view of the employer, fully interchangeable "agents". Each of them stays in a job a few weeks or months: everyone is equally good or bad as everyone else. We are indeed coming close to the dream world of neoclassical economics where individuals, with their true characteristics, no longer exist because they have been replaced by "agents".

The problem with this kind of commodification and flexibilization is that it undermines human relations and trust that are needed for the smooth functioning of an economy. When there are repeated games we try to establish relationships of trust with people with whom we interact. But if we move from one place to another with high frequency, change jobs every couple of weeks, and everybody else does the same, then there are no repeated games because we do not interact with the same people. If there are no repeated games, our behavior adjusts to expecting to play just a single game, a single interaction. And this new behavior is very different.
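Milanovic's premise here is the textbook repeated-game result, and it can be made precise with the standard grim-trigger condition in a repeated prisoner's dilemma (a stock derivation, not his own formulation). With stage-game payoffs T > R > P > S and a probability δ that the relationship continues after each round:

```latex
% Cooperating forever yields R/(1-\delta); a one-shot deviation yields T now
% and the punishment payoff P in every later round. Grim-trigger cooperation
% is thus sustainable if and only if:
\[
\frac{R}{1-\delta} \;\geq\; T + \frac{\delta P}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\geq\; \frac{T-R}{T-P}.
\]
```

On this reading, commodification lowers δ: when jobs and partners turn over every few weeks, δ falls below the threshold (T−R)/(T−P) and the one-shot logic of defection takes over, which is exactly the behavioral shift Milanovic describes.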

Milanovic's claim can be seen as a variant of Karl Polanyi's old "disembeddedness thesis", according to which commodification, through the institutionalization of "fictitious commodities" (land, money, labor), has led to a separation between economic relations and the sociocultural institutions in which they were historically embedded. As is well known, Polanyi considered this the major cause of the rise of totalitarianism in the 20th century. Though less dramatic, Milanovic's claim similarly holds that by changing the structure of social relations, commodification leads to less cooperative behavior, in particular because it creates opportunity costs that did not previously exist and because it favors anonymity. Is that completely true? In my view, there are two separate issues here: the "monetization" of social relations and the "anonymization" of social relations. Regarding the former, it now seems well established that the introduction of (monetary) opportunity costs may change people's behavior and their underlying preferences. This is the so-called "crowding-out effect" well documented by behavioral economists and others. Basically, the fact that opportunity costs can be measured in monetary units favors economic behaviors based on "extrinsic preferences" (i.e. the maximization of monetary gains) and weakens "intrinsic preferences" related, for instance, to a sense of civic duty. It is unclear to what extent this crowding-out effect has had a cultural impact on Western societies from a macrosocial perspective, but at a more micro level the effect seems hard to dismiss.

I am less convinced by the "anonymization thesis". It is indeed quite usual in sociology and in economics to characterize market relations as anonymous and ephemeral, in contrast with family and other kinds of "communitarian" relations that are assumed to be more personal and durable. To some extent this is probably right, and it would be absurd to deny that there is a difference between giving the kids some money to buy a meal from an anonymous employee and cooking the meal myself. Still, the picture of the anonymous and ephemeral market relationship mostly corresponds to the idealized Walrasian model of the perfectly competitive market. Such a market, as famously argued by the philosopher David Gauthier, is a "morally free zone". But every economist will recognize that actual markets are imperfect and that their functioning leads to many kinds of failures: asymmetric information and externalities in particular are the cause of many suboptimal market outcomes. It is at this point that the "anonymization thesis" becomes unsustainable. Because of market failures and imperfections, market relations cannot be fully anonymous and ephemeral if they are to survive. Quite the contrary: mechanisms favoring the stability of these relations and making them more personal are required. The examples of Uber and Airbnb are a case in point: the economic model of these companies is precisely based on the possibility (and indeed the necessity) for their users to provide information to the whole community regarding the quality of the service provided by the other party. Reputation (i.e. information regarding one's and others' "good standing"), segmentation (i.e. one's ability to choose one's partner) and retaliation (i.e. one's ability to sanction, directly or indirectly, uncooperative behavior) are all mechanisms that favor cooperation in market relations, and they are indeed central to the kind of social relations promoted by companies like Uber. Moreover, new technologies considerably reduce the cost of these mechanisms for economic agents, as giving one's opinion about the quality of the service carries almost no opportunity cost (though that may lead to a different problem regarding the quality of the information).
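As a toy illustration of how reputation and segmentation can sustain cooperation even between strangers who never meet twice, here is a minimal simulation sketch (all parameter values, names and the rating rule are my own assumptions; this is not a model of Uber's actual system):

```python
import random

# Toy model: one-shot prisoner's dilemmas between randomly matched strangers.
# A public rating substitutes for repeated interaction: badly rated agents
# are refused as partners (segmentation) and ratings fall after defection
# (reputation/retaliation). All numbers below are illustrative assumptions.

T, R, P, S = 5, 3, 1, 0        # standard PD payoffs: T > R > P > S
N_AGENTS, N_ROUNDS = 200, 5000
MIN_RATING = 0.3               # partners rated below this are refused

class Agent:
    def __init__(self, defector):
        self.defector = defector       # fixed type: always defect or cooperate
        self.good, self.total = 1, 1   # rating history (seeded to avoid /0)
        self.payoff = 0

    @property
    def rating(self):                  # public reputation score in [0, 1]
        return self.good / self.total

agents = [Agent(defector=(i % 2 == 0)) for i in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    a, b = random.sample(agents, 2)
    # Segmentation: either side walks away from a badly rated partner.
    if a.rating < MIN_RATING or b.rating < MIN_RATING:
        continue
    coop_a, coop_b = not a.defector, not b.defector
    payoffs = {(True, True): (R, R), (True, False): (S, T),
               (False, True): (T, S), (False, False): (P, P)}
    pa, pb = payoffs[(coop_a, coop_b)]
    a.payoff += pa
    b.payoff += pb
    # Reputation: every transaction is rated and updates the public score.
    for agent, coop in ((a, coop_a), (b, coop_b)):
        agent.good += int(coop)
        agent.total += 1

for label, flag in (("cooperators", False), ("defectors", True)):
    group = [x.payoff for x in agents if x.defector == flag]
    print(f"{label}: mean payoff = {sum(group) / len(group):.1f}")
```

In runs of this sketch, defectors collect a few temptation payoffs, see their rating collapse below the threshold, and are then excluded from trade, while cooperators keep transacting: reputation makes the anonymous one-shot market behave like a repeated game.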

Once again, the point is not to deny that there is a difference between providing a service through the market and within the family. But it is important to recognize that market relations have to be cooperative to be efficient. From this perspective, trust and other kinds of social bonds are very much needed in capitalist economies. Complete anonymity is the enemy, not the constitutive characteristic, of market institutions.