
APPENDIX C
MAPPING PARADIGMS: A PRIMER ON TODAY’S MAINSTREAM ECONOMICS
by Christian Arnsperger

CONTENTS:
C.1. Three examples
C.2. The mirage of neutrality
C.3. The inevitability of paradigms
C.4. Paradigms and their basic dynamics
C.5. The workings of Traditional Economics
C.6. The twofold reductionism of Traditional Economics
C.6.1. The economy as an interactive system
C.6.2. Traditional Economics and the loss of process
C.6.3. Two versions of equilibrium
C.7. Beyond Traditional Economics: The post-neoclassical paradigm
C.7.1. Instrumental rationality: From parametric to strategic
C.7.2. Non-cooperative games and the dynamics of interaction
C.7.3. The Nash solution and the return of the independent-agent approximation
C.7.4. The flaws of game theory and the advent of bounded rationality
C.7.5. Elements of complexity economics
C.7.6. Elements of behavioral and neuroeconomics
C.8. Beyond the post-neoclassical paradigm: Integrating ecological economics, monetary behaviorism, and critical political economy
C.8.1. Making the post-neoclassical paradigm ecologically rational
C.8.2. Making the post-neoclassical paradigm sensitive to the behavioral efficaciousness of money
C.8.3. Opening the post-neoclassical paradigm to critical and existential rationality
C.9. Toward a new economic paradigm?

Let me start out with a cautionary note. This Appendix is clearly very long but it is, in another sense, way too concise. I will, for obvious reasons of space, not be able to deal with the whole landscape of approaches in economics. Therefore, I have made a choice: I will confine most of my discussion—that is, the core sections C.5 to C.8—to mainstream economics, i.e., that subset of the discipline that has been and still is taught and practiced predominantly in the vast majority of the world’s economics departments. Consequently—and with apologies to many of my colleagues—I will not touch upon the frequently used distinction between “orthodox” and “heterodox” economics.
I will leave out many so-called “alternative” approaches, although sections C.8 and C.9 will quite definitely sound “alternative” to my more mainstream colleagues. If you enroll as a student in a contemporary economics department, chances are you won’t hear much about paradigms. You will most likely be taught that economics is a scientific discipline—with its own specific problems, such as “inexactitude”1—based on assumptions that allow the construction of deductive models. To this end, you will be instructed to learn mathematics and logic so that you can develop a capacity to “translate” most economic theories (whether it be that of Karl Marx, Vilfredo Pareto, Léon Walras,

1 See Daniel M. Hausman, The Separate and Inexact Science of Economics (Cambridge: Cambridge University Press, 1992).


Thorstein Veblen, John Maynard Keynes, or Nicholas Georgescu-Roegen) into the appropriate formal language. We might call this the hypothetico-deductive method of modeling and testing, although in most economics classes it isn’t even called that. (You’ll come across that expression only if you take a course in economic methodology or epistemology. This subject matter isn’t systematically available at all departments, even in the top-ranking ones.) It is routinely presented as a thoroughly non-dogmatic, non-doctrinaire endeavor—a mere “linguistic” attempt at casting various thinkers’ theories in the “rigorous” terms needed to test their conceptual coherence and/or their empirical validity. Let me give three examples.

C.1. Three examples

The first example is the so-called “neoclassical synthesis” of Keynes’s theory. In the 1980s, economists using the assumptions and tools of neoclassical economics came up with a version of Keynesianism that became known as the “new Keynesian” approach. Taking their cue from prestigious predecessors such as John Hicks (the creator of the infamous “IS-LM” model in the late 1930s) and Don Patinkin, the new Keynesians built a hypothetico-deductive approach to liquidity traps and structural unemployment, essentially relying on the language of instrumental rationality, decentralized markets, sticky prices, and coordination failures. This gave birth to a cottage industry of formalized neoclassical models yielding “Keynesian” conclusions—and often blaming the economy’s problems on the very agents (obstructive trade unions, interventionist governments, or ill-guided monetary authorities) who in Keynes’s worldview were to be the saviors of capitalism. In the process of “translating” Keynes into a language that was alien to Keynes’s own intellectual universe, new Keynesianism arguably lost most of the complexity-related as well as the political-economy aspects that made Keynes’s theory so revolutionary. Accordingly, the neoclassical synthesis came under attack both from process- and dynamics-oriented Keynesians (such as Robert Clower and Axel Leijonhufvud) and from more politically radical post-Keynesians (such as Joan Robinson). Both groups had entirely different views on what working within Keynes’s intellectual heritage meant.

The second example is the self-styled school of “analytical Marxism,” which emerged in the 1980s as a result of analytical philosophers (Gerald Cohen, Jon Elster, Philippe Van Parijs) and neoclassical economists (John Roemer, Samuel Bowles) seeking to revive Karl Marx’s intellectual heritage. They intended to salvage it from the “bullshit Marxism” that had used the all too imprecise language of dialectics, alienation, and historical materialism, and to translate it into neoclassical categories including methodological individualism, instrumental rationality, and equilibrium analysis. Revolutionary movements—so analytical Marxists claimed—could be analyzed using non-cooperative game models, and the workings of socialism could be modeled with the tools of general equilibrium analysis. These “no bullshit” tools would provide the clarity and rigor that Marx had foregone by being both Hegelian and German—a twofold handicap that could only be remedied by recasting his grand theory in terms that any Anglo-American logician could grasp. Here too, a cottage industry of formalized neoclassical models yielding “Marxian” conclusions arose—systematically relying on assumptions about agents such that socialism became a variant of (pro-regulation) liberalism and revolution became a quest for the right “principles of justice.” In the process, many more traditional Marxists felt the central elements of Marx’s original view of the world and of human rationality were lost in translation.

While my first two examples are clear illustrations of the principle traduttore, traditore (“the translator is always a traitor”), my third one goes even further in erasing any explicit



reference to paradigms. I have in mind the gradually developing area of so-called “axiomatic” collective choice theory, pioneered by such brilliant minds as Hervé Moulin, William Thomson, Marc Fleurbaey, and François Maniquet. The basic idea here is that the economist—in fact, the particular economist trained in the neoclassical hypothetico-deductive method—is a mere translator of collective values into formal axioms, with a view to testing their mutual coherence and to suggesting possible “solution concepts” that might be applied to resource allocation problems. Suppose the aim of the political decision maker is to combat poverty or unemployment, or to equitably distribute the access to a given natural resource such as water or forest land. What, the axiomatic collective choice theorists ask, are the values you want to promote? Non-dictatorship? (Surely yes…) Pareto optimality? Equal treatment of equals? No envy between agents? Undominated diversity? Etc. Given the list of basic axioms thus established, they then proceed to delineate the set of solution concepts—or allocation mechanisms—that are compatible with these axioms, supposing the latter are themselves mutually compatible. Should we implement a competitive auction based on an initially equal split of access rights? Should we create a tradable quota mechanism, or a non-market mechanism that allocates resources to agents in proportion to the “relative intensity” of their preferences? This is a highly technical and formal area of research, basically a branch of social engineering in the tradition of the “mechanism design” approach pioneered by Leonid Hurwicz and Eric Maskin. 
It has, once again, given rise to a cottage industry of heavily formalized neoclassical models purporting to enlighten politicians, and citizens in general, as to the true content of their often muddled and imprecise normative criteria—but using neoclassical assumptions about agents such that, within the models themselves, no politician or citizen actually cares about normative issues at all.2

C.2. The mirage of neutrality

I have personally had the good fortune of being exposed to these three examples while studying economics at one of the better European departments (at Université catholique de Louvain in Belgium, seat of the famous Center for Operations Research and Econometrics, or CORE) and of experiencing them from the inside. What do they have in common? Not much, except for one absolutely essential aspect: They all consist of using a scientific paradigm to “translate” other paradigms into the “right” language, while denying that there even are such things as paradigms. Although the analogy has its limits, this is a bit as if you translated all the world’s literature, both prose and poetry, into English and then claimed that there is actually no plurality of languages—just a bunch of muddled and imprecise tongues (German, French, Greek, Japanese, Mandarin) and one language, English, that allows one to express in clear and understandable grammatical form what these tongues try to express in inconsistent, flawed ways. Any linguist in his right mind would find this completely preposterous.

In fact, the teaching of economics these days proceeds in much the same way. What students are mainly—and sometimes exclusively—taught under the generic, apparently neutral label of “economics” is a language, a set of grammatical tools that are supposed to frame ex ante any debate about ideas and theories. This language is actually very specific and carries, as we shall see below, quite a few strong presuppositions.
In other words, like any language, it is not only a set of grammatical rules for composing ideas, but it is also a set of semantic tools for giving specific meaning to the world—but it is routinely presented as “the economic method” without any historical, semantic or conceptual earmarks whatsoever. This

2 For a detailed explanation of this—to me—fundamental criticism, see Christian Arnsperger, Critical Political Economy: Complexity, Rationality and the Logic of Post-Orthodox Pluralism (London: Routledge, 2008).



misleads both students and those who instruct them into believing that the “language of economics” does not contain a worldview but, rather, forms the pre-condition for any acceptable worldview: However interested you might spontaneously be in the intellectual and political cosmos of Marx, Veblen, Hayek, Rawls or Georgescu-Roegen, you must be wary of your own enthusiasm, for the only parts of their worldviews (if any) that you can actually use to carry on a debate are those that can be made intelligible through the “language of economics.”

The reason most economics instructors nowadays can get away with this sort of thing is that the notion of paradigm is no longer part of the toolbox of the economist. And the reason for that, in turn, is that grammar and method have been divorced from semantics and worldview. The general attitude—which reigns unreflectively in the minds of most economists—is that the formal rules of composition of a scientific discourse (i.e., the rules that dictate how you are supposed to say things) are neutral with respect to the substantive content of that discourse (i.e., what the rules of discourse allow you to say, make you say, or prevent you from saying). Hence the almost exclusive focus on the language, methods, and tools that the student must use in order to be part of the circle of people who can legitimately address each other as “economists.” However, when discussing the three examples in the previous section, we saw that this sort of neutrality is a mirage. The language in which you try to express Keynesian or Marxian ideas, for instance, will profoundly affect the extent to which you can even claim to still be Keynesian or Marxian.
Seeking to “translate” Keynes’s or Marx’s theory—their respective views of the world and of human agency, as well as their underlying conceptions of freedom, of society, and other things—into neoclassical language is not a merely grammatical endeavor: It colors the brand of Keynesianism or Marxism you put forward, to such an extent that many careful readers of Keynes or Marx may claim that you are, in fact, betraying the very core of their thought. Purporting to make Keynes’s or Marx’s thought more rigorous by casting it in the “language of economics” actually means that you are using part of one paradigm p—the language L(p) in which it instructs you to express all your theories—in order to express a theory T whose core assumptions c(T) actually belong to a distinct paradigm p’. What lurks behind any claim of “neutrality” is therefore, actually, often a form of intellectual hijacking.

More precisely, and less polemically, someone who claims to be a neutral or merely “more rigorous” translator of a certain theory T into a certain language L may, in actual fact (and perhaps unconsciously), be acting as if L were tied to no paradigm at all—making it a seemingly neutral “means of exchange” between theories. In this way, the translator is actually concealing the fact that he or she is using a paradigm p to critically reformulate theories coming from another paradigm p’. This amounts to acting as if adopting p were a necessary precondition for any valid utterance within p’, and it implicitly conveys the notion that p is not really a paradigm, but rather a meta-paradigm used to filter and judge all paradigms. Every time you claim to be neutral, you’re in fact hiding your particular options behind a veil and making people believe that they, regardless of their own options, have no option but to adopt your options… That’s why insisting that economics is necessarily and inevitably a paradigm-rooted science is so essential. But what, in fact, is a paradigm?

C.3. The inevitability of paradigms

When we take economics as an object of reflection—as we do in epistemology—we necessarily hit upon a collective dimension: There is a group of individuals out there (and/or in here…) that we can stake out as the group of all those who, regardless of whether or not



they agree with each other on the use of the term, call themselves “economists.” Economics is the set of all mental objects produced and circulated by the members of the class of all individuals who are self-declared economists. Of course, within that class of individuals there is a very substantial amount of conflict going on about who can legitimately declare him/herself an economist, and who on the contrary is a fraud. That conflictuality is part of the internal politics of the class of economists. It is, to a very large extent, an irreducible conflict that goes on in a context of very asymmetrical power relationships and very differentiated institutional circumstances.

Each proper subset—or “sub-class”—of the class of economists purports to offer a paradigm for the practice of economics as it sees it. A paradigm is a set of assumptions, worldviews, and rules of practice (including rules on which methods to use, which techniques to use, and how to use them in the right way) that defines how “we”—as a collective of “economists”—do economics. As the renowned philosopher of science Thomas Kuhn has put it, by a paradigm we should understand

… some accepted examples of actual scientific practice—examples which include law, theory, application, and instrumentation together—[which] provide models from which spring particular coherent traditions of scientific research. […] The study of paradigms […] is what mainly prepares the student for membership in the particular scientific community with which he will later practice. Because he there joins men who learned the bases of their field from the same concrete models, his subsequent practice will seldom evoke overt disagreement over fundamentals. Men whose research is based on shared paradigms are committed to the same rules and standards for scientific practice.3

More formally, we could say that if E is the set of all individuals who, from one perspective or another, declare themselves to be economists, then we can break this set down into a number P of distinct paradigms p = 1, …, P, so that E ≡ {E1, …, EP}. Thus, Ep is the collective of individuals who declare themselves to be economists according to the criteria of paradigm p. We could say they are not simply economists, but more precisely “p-economists.” Notice from the quote that for Kuhn, any paradigm p is a “concrete model”: it is embodied in individuals who “provide models” by teaching and practicing p-economics according to certain theoretical conceptions [θp] of economic reality, to certain rules on what formal tools [φp] to use to model that economic reality, and to certain regulations on what techniques [τp] to use to empirically grasp it. Within any paradigm p, there is a triplet 〈θp, φp, τp〉 that summarizes the paradigm’s knowledge-producing technology. But beyond this technology, there is a very personalized, even individualized aspect of the paradigm: it is a way of intellectual life carried by paradigmatic individuals who serve as “models” for the upcoming generation; therefore, a crucial aspect of a paradigm is the dimension of intergenerational transmission by which younger people learn the tools of the trade and become members of a community—what Kuhn calls “membership in the particular scientific community with which he will later practice.”

One of the main upshots of this is that no economist can speak from anywhere but inside a paradigm.
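For readers who like to see the notation gathered in one place, the definitions just given can be restated compactly; this is merely a summary of the text above, introducing nothing new:

```latex
% E   : the set of all self-declared economists
% E_p : the collective of those who are economists by the criteria of paradigm p
\[
  E \;\equiv\; \{E_1, \ldots, E_P\}, \qquad p = 1, \ldots, P .
\]
% Each paradigm p carries a knowledge-producing technology, the triplet
%   \theta_p  : theoretical conceptions of economic reality
%   \varphi_p : formal modeling tools
%   \tau_p    : empirical techniques
\[
  p \;\longmapsto\; \langle \theta_p,\, \varphi_p,\, \tau_p \rangle .
\]
```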
He can change paradigms, but he can never stand without any paradigm at all—that would mean having no structuring theoretical conceptions, no formal toolbox, and no empirically oriented techniques; in short, it would mean this economist would claim to be producing his knowledge out of nothing but pure and immediate “reality.” Such a position of naive realism—which claims that knowledge can be obtained without any tools other than our five “objective” senses and our “natural” capacities for reasoning—will be rejected in this book. We will adopt a position that could be called paradigmatic perspectivism. This means simply that while an independent reality is indeed postulated to exist, the perspective you

3 Thomas Kuhn, The Structure of Scientific Revolutions (1962), new edition (Chicago: University of Chicago Press, 1970), p. 11.



come to adopt on economic reality can never be independent of some paradigm, i.e., some theoretical, formal, and technical toolbox. In line with Kuhn, Ken Wilber has summarized the general notion of “good science” with three basic principles which underlie any well-constituted scientific paradigm:

1. A practical injunction or exemplar. […] “facts” are not lying around waiting for all and sundry to see. If you want to know this, you must do this—an experiment, an injunction, a paradigmatic series of engagements, a social practice: these lie behind most of good science. This is actually the meaning of Kuhn’s notion of “paradigm,” which does not mean a super-theory but an exemplar or actual practice.

2. An apprehension, illumination, or experience. Once you perform the experiment or follow the injunction—once you pragmatically engage the world—then you will be introduced to a series of experiences or apprehensions that are brought forth by the injunction. These experiences are known as data. […] All good science […] is anchored to some degree in data, or experiential evidence.

3. Communal checking (either rejection or confirmation). Once we engage the paradigm (or social practice) and bring forth a series of experiences and evidence (or data), it helps if we can check these experiences with others who have also completed the injunction and seen the evidence. A community of peers—or those who have already completed the first two strands (injunction and data)—is perhaps the best check possible, and all good science tends to turn to a community of the adequate for confirmation or rejection. […]4

Whether any existing economic paradigm, especially the currently dominant ones, actually conforms fully to these three principles will be examined later in this book. However, the idea that a paradigm is a set of practical, exemplary injunctions on how to engage economic reality has a very important, immediate implication. Discussing the reasons why she believes the late, great heterodox economist John Kenneth Galbraith was, to her mind, not an “economist,” one recent writer on contemporary economics has put it this way:

The reason many economists think Galbraith isn’t really one of us lies in his methodology. His work covers the terrain of economics—the operation of business, growth and wealth, running public services, inflation, and so on—but it uses the methods of sociology and history. The point isn’t just that Galbraith’s books are literary and contain no equations; Paul Krugman is another wonderful writer and popularizer, in the same political spectrum as Galbraith, but we in the profession count Krugman as a bona fide economist. By contrast many of us spurn Galbraith because he wasn’t a modeler. Models don’t have to be expressed in mathematical equations, but the thought process a modeler brings to trying to understand the world involves trying to select a small number of variables and relationships which can perhaps, with elegance and economy, explain the phenomena we observe. So we modelers can read The Affluent Society [one of Galbraith’s most influential books], and even agree with it, without finding it persuasive. It gives us no grip on how to confront its hypotheses and claims with empirical evidence. Economics isn’t defined by its subject matter but by its way of thinking.5

When Diane Coyle claims that a non-mainstream economist like Galbraith was “not one of us economists,” because he was not using the “right” methods, what she is really saying is this: I, Diane Coyle, have come to believe that paradigm p is the best way to produce knowledge about the economy; therefore, to my mind, “economics” has become synonymous with “p-adequate economics”; since Galbraith did q-adequate economics and since I believe paradigm q to be unsatisfactory, I equate this with saying that he was “not an economist,” when all I can really claim is that he was not a p-adequate economist—which, of course, is true but hardly a basis for rejection. In other words, any individual’s self-declaration as an economist must be “indexed” by some paradigm p that characterizes what kind of economist that individual is. Correlatively,

4 Ken Wilber, A Theory of Everything: An Integral Vision for Business, Politics, Science, and Spirituality (Boston: Shambhala, 2000), p. 75.
5 Diane Coyle, The Soulful Science: What Economists Really Do and Why It Matters (Princeton: Princeton University Press, 2007), pp. 231-232.



erasing the perspectivist dimension and claiming that “economics ≡ p-adequate economics” implies an abuse of power by the members of Ep. In a sense, the plurality of paradigms that exists in economics corresponds to the plurality of perspectives that exist on the economy. No economist can claim that he or she is speaking from nowhere. The paradigm he or she chooses to work with represents the intellectual and political cosmos within which he or she is developing his or her ideas, and it is therefore the place he or she is speaking from.6

Some paradigms are rather like political parties, with very clear hierarchies and sanction mechanisms—the criteria for a “good” publication, the standards for “rigorous” discourse, and the procedures for the designation of the “right” colleagues to academic positions. Clearly, the long-dominant paradigm of neoclassical economics has been characterized by this sort of centralized, often authoritarian structure. So was the paradigm of Marxist-Leninist economics, which reigned supreme in Soviet Russia for decades and caused economists working with other worldviews and tools to be disparaged as bourgeois class enemies. Other paradigms are more like social movements, emerging spontaneously and haphazardly from the crumbling remains of a previous paradigm. I would claim that the currently emerging post-neoclassical paradigm—which comprises complexity economics, behavioral economics, neuroeconomics, and experimental economics—is of this nature.7 So are many paradigms that languish at the fringe of the mainstream, such as the French “régulation” and “conventions” schools. In fact, one might think that paradigms start out as social movements and then, sometimes, become more like political parties or even like small banana republics.
The latter is the case when a small group of individuals champion a self-declared “paradigm” and behave like sectarian autocrats looking for allegiances—mainly through publication in specific journals—even though their paradigm is still so small and insignificant within the overall scientific landscape that they might profit from more openness and ecumenism. More generally, all paradigms, as collective human structures, are prone to the weaknesses and imperfections of all human communities. They are nevertheless inevitable as the cognitive structures within which any economist has to develop his or her ideas.

C.4. Paradigms and their basic dynamics

Neoclassical economics—or what, in line with Eric Beinhocker’s analysis,8 we have in Money and Sustainability called the Traditional Economics paradigm—has been, for many decades, the dominant paradigm in the profession. To say that it is “dominant” immediately indicates that there have always been, alongside it, other paradigms that were dominated by it. A paradigm rarely, if ever, stands alone in the landscape; paradigms do not succeed each other like points on a line, one at a time, each paradigm neatly following on its predecessor. A paradigm, let us insist, is not just thoughts but people practicing thoughts with methods, tools, and validation injunctions. A paradigm is, before anything else, an exemplar-producing community: no p-worldview would have any existence if it were not for real people taking up that worldview and making it the “engine” of their everyday intellectual and institutional lives. Now, this does not mean they have necessarily freely chosen their community—they

6 For a detailed analysis of the issues of pluralism in contemporary economics, see Christian Arnsperger, Critical Political Economy, op. cit.
7 I have carried out a detailed analysis of the neoclassical and post-neoclassical paradigms in Christian Arnsperger, Full-Spectrum Economics: Toward an Inclusive and Emancipatory Social Science (London: Routledge, 2010). I will provide elements of this detailed analysis further down in this Appendix.
8 See Eric Beinhocker, The Origin of Wealth: Evolution, Complexity and the Radical Remaking of Economics (New York: Random House, 2006).



may have become members out of habit, mindless opportunism, or fear—or that they deeply know why they are using the toolbox they are using—they may have learned the tools mechanically or even out of laziness, not bothering to search elsewhere; these are standard perversions in any community. However, that does not make the paradigm as a whole just arbitrary or random: key members of the community—often the so-called “elite” members—do know why they have adopted the paradigm p, what its strengths but also its weaknesses are, and what the limitations of the tools they use are. They may not make their thoughts on those matters public, and especially they may not teach these second thoughts and limitations to the younger members—another standard phenomenon in communities.

Like any human community, a paradigm seeks to preserve its internal cohesion as long as possible. That is the reason for the various injunctions to which members of the paradigm—especially the junior members, whose minds are still malleable—are subjected: injunctions on which “core principles” of knowledge construction to start from; injunctions about which theoretical tools to use when building upward from these core principles; injunctions about how to approach empirical data and how to process them; injunctions about what techniques to use in order to validate theories with data; and injunctions on how to formulate, present, communicate, and write up one’s research results. Since paradigms have a definite intergenerational dimension, these various injunctions are definitely part of any p-community’s education and “disciplining” strategy.
None of this, however, implies that a paradigm is nothing but a sociological power structure from which scientificity and the connection to Truth are absent; that can, and does, happen in extreme cases, but in actual fact power and recognition within a paradigm are obtained via the right application of these injunctions, whose ultimate goal is the revelation of scientific Truth. The highest ideal of a scientist is to be able to completely sacrifice his or her first-person, subjective sides (linked to the Good and the Beautiful) to the production of “pure” knowledge. Thus, the instruments which p-economists use to exercise power over one another and over the younger generation are “well-meaning” instruments: They aim to reflect the highest ideals of truth-seeking and to translate them into p-injunctions. Such injunctions are, by definition, “dogmas”: so it does not help at all to criticize anyone by accusing them of being “dogmatic,” since this is inevitable within a paradigm. It is part of what Kuhn calls the process of “normal science,” a process characterized by the rigid, disciplined, and therefore largely non-reflexive, application of the paradigm’s injunctions.

In periods of normal science, the p-community exercises strong internal discipline and views any critique from outside as irrelevant; internal critique can only be accepted if it respects the paradigm’s basic tenets: If you’re a p-economist, do p-economics or leave! Such an attitude, Kuhn claims, is a normal feature of a paradigm that is seeking to “flesh itself out,” to perfect its toolbox and its methods for using those tools, and to investigate how far its core principles will go in permitting the production of true knowledge. Truth is measured against the world—that is, against empirical data. A paradigm possesses internal truth-validation technologies, and these have to be used consistently if the build-up of internal criticism is to be legitimate.
As the period of normal science gets longer, more and more evidence may accumulate of the type, “We’ve been using the prescribed methods of data-oriented testing and of validation, and alas, our theories and models—built up from our core principles and worldviews—seem to be falsified more and more often by the data.” Two key elements usually precipitate the end of periods of normal science:

• Accumulating falsification, which according to Karl Popper means the empirical invalidation of p-theory-based utterances. When too many falsifications have accumulated, and when the paradigm’s “core” worldview, theoretical methods, and practical tools and techniques are no longer able to supply “quick fixes” (patching up

- C.8 -


models by adding ad hoc “sub-models,” adding some new assumptions, inventing new equilibrium concepts, etc.), then more and more p-economists tend to come to the conclusion that p has run its course, and they look for the next paradigm p’. Elite dissatisfaction, which according to Colander, Holt, and Rosser is a main driving force for internal shifts even while the practices of normal science are still in full swing. High-ranking p-economists may, largely on intuition and also because they are more curious and open-minded than the average p-economist, lend an ear to the critical and constructive work done by brilliant—usually younger—q- or k-economists (including some who were members of the p-paradigm but left). The basic attitude is, “Well, that’s non-p, but pretty interesting.”

Accumulating falsification and elite dissatisfaction combine to create what Kuhn has called “scientific revolutions.” This expression has sometimes been misunderstood as a sort of Marxist moment where the palace gets stormed, as it were, and the old paradigm is burned on the public square, surrounded by the cheering crowd of its former members. Reality is not quite as spectacular, because a dying—and even dead—paradigm can remain propped up for a long time by its most mediocre members, who are simply too afraid to venture out into the new paradigms. This is all the more so if, as often happens, the now dead p-paradigm was massively dominant for decades, in which case the aftermath can linger for many more decades as power games keep it fictitiously alive within enduring p-institutions (university departments, schools, etc.). In other words, a “scientific revolution” is a more or less invisible and gradual event, and the dissatisfied elite that drives the revolution will often be decried as “traitors” or as “senile” for a long time before the paradigm finally crumbles. At any rate, paradigms always coexist to a significant degree: p and its “internally generated” post-p rival, p’, usually coexist; and both of them usually coexist with q, k, etc., which are and have always been non-p paradigms; it may be, but need not be, that p’ has taken up some features from q or k. How all these paradigms coexist is a matter of how educational and research institutions are organized; there is no general empirical pattern, and general normative theories of paradigm coexistence are not yet very widespread in economic epistemology.

C.5. The workings of Traditional Economics

What we have termed in this book the Traditional Economics paradigm has frequently been called neoclassical economics within the profession.
The word “neoclassical” is said to have been coined by the economist Thorstein Veblen to characterize the emerging Marginalist School of the 1870s, with people such as Stanley Jevons, Carl Menger, and Léon Walras. (Similarly, it appears to be Karl Marx who, in the 1840s, popularized the description of the work of Adam Smith, David Ricardo, and Thomas Malthus as “classical” economics.9) By a “neo-classical” approach, Veblen obviously meant something that took root in classical doctrine but superseded it with a new vision. Essentially, what Veblen objected to in the marginalist economists was that they neglected some deep traits of human nature—such as our tendency to want to dominate others, to want to “show off” with our wealth, etc.—and that, at the level of method, they built a formalistic approach that neglected evolution and dynamics. So, in a sense, Veblen was nostalgic for the good old days of classical dynamics and the preoccupation with growth, even though (perhaps less so than Marx) he also had strong reservations about Smith’s or Ricardo’s worldview. We could summarize the “neo-classical” approach as a formalistic, mechanistic, and static approach—three properties which, when put together, generated a type of economics that Veblen disliked because it left out some of the most important insights of the classics and kept some of their most important errors.

9 See David C. Colander, Richard P.F. Holt and J.B. Rosser, The Changing Face of Economics: Conversations with Cutting Edge Economists (Ann Arbor: University of Michigan Press, 2004), p. 8.

We will see whether Veblen’s assessment is warranted; to lay my cards on the table, I believe he was essentially right, and that post-neoclassical methods (to be studied in section C.7) are first and foremost an attempt, by the neoclassicals themselves, to take into account the sort of criticism that Veblen had already begun to articulate in the 1890s. According to Ernesto Screpanti and Stefano Zamagni,10 the Traditional Economics paradigm—henceforth called the TE-paradigm—succeeded in displacing the classical orthodoxy for both “internal” and “external” reasons. These offer a good illustration of the (messy) dynamics of paradigms. The internal reasons were (a) “the inability of the classical orthodoxy to solve a series of theoretical problems” (in particular, the defects of Ricardo’s labor theory of value and of his theory of the cost of production, which John Stuart Mill strongly criticized) and (b) the fact that “the classical economists had not managed to produce a satisfactory theory of income distribution” (in particular, the controversy over whether, as Malthus claimed, wages were bound to be pushed down to subsistence level as the population grew—an idea which Jevons criticized strongly and for which the Ricardians offered no convincing alternative). The external reason was (c) the increasing identification of socialist-minded economists with the Marxian paradigm, which people like Wicksteed, Böhm-Bawerk, and Pareto felt was deeply defective in its analysis of value and of agents’ rationality.
Thus, the combination of a defective classical theory and a perceived need for a “scientific” alternative to Marxian socialism seems to explain why more and more classically-oriented economists shifted towards new tools, new methods, and a new worldview. This led to the gradual emergence of the TE-paradigm. According to the excellent analysis offered by Screpanti and Zamagni, this new paradigm offered six crucial innovations.

(1) It focused much less on evolutionary systems dynamics than on the careful analysis of how given resources get distributed at a given instant in time. This led to a fundamental recasting of the “core” worldview guiding economic analysis:

In the analysis of the conditions ensuring the optimal allocation of given resources among alternative uses, the neoclassical economists identified a universally valid principle, one which was able, alone, to embrace the entire economic reality. As Robbins said: “Scarcity of means to satisfy ends of varying importance is an almost ubiquitous condition of human behaviour. Here, then, is the unity of subject of Economic Science, the forms assumed by human behaviour in disposing of scarce means” […]. The tendency to extend the basic model to every branch of economic investigation was reinforced during the course of the [twentieth] century until it culminated in the argument of P.A. Samuelson that there is a simple principle at the heart of all economic problems: a mathematical function to maximize under constraints.11
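Samuelson’s dictum—“a mathematical function to maximize under constraints”—can be made concrete with a minimal numerical sketch of my own (it is an illustration, not drawn from Screpanti and Zamagni or from Samuelson): a consumer with a Cobb-Douglas utility function maximizing under a budget constraint, where the well-known closed-form solution spends the share a of income on the first good.

```python
# Minimal sketch (my own illustration) of "a mathematical function to
# maximize under constraints": a consumer with Cobb-Douglas utility
#   u(x, y) = x**a * y**(1 - a),  0 < a < 1,
# maximizes subject to the budget constraint px*x + py*y = m.
# Closed-form demands: x* = a*m/px and y* = (1 - a)*m/py.

def cobb_douglas_demand(a, px, py, m):
    """Utility-maximizing bundle under the budget constraint."""
    return a * m / px, (1 - a) * m / py

def utility(x, y, a):
    return x ** a * y ** (1 - a)

a, px, py, m = 0.5, 2.0, 1.0, 100.0
x_star, y_star = cobb_douglas_demand(a, px, py, m)

# Sanity check: no other affordable bundle on a fine grid does better.
grid_best = max(
    utility(x, (m - px * x) / py, a)
    for x in (m / px * i / 1000 for i in range(1, 1000))
)
assert utility(x_star, y_star, a) >= grid_best - 1e-9
print(x_star, y_star)  # 25.0 50.0
```

The point of the toy example is precisely the one Screpanti and Zamagni attribute to the TE-paradigm: once preferences and constraints are written down, the entire economic “problem” collapses into one optimization.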

(2) The TE-paradigm accepted utility-maximization, i.e., the idea that what individuals pursue is exclusively the purposeful satisfaction of their tastes and preferences:

… human behaviour is exclusively reducible to rational calculation aimed at the maximization of utility. They considered this principle to be universally valid: alone, it would have allowed the understanding of the entire economic reality.12

(3) The TE-paradigm rests on a methodology—absent from classical economics—of explaining individual economic decisions by “substitution,” or arbitrage through the continuous variation of proportions:

The analysis is carried out in terms of the alternative possibilities among which the subjects, both consumers and producers, can choose. And the objective is the same: to search for the conditions under which the optimal alternative is chosen. This method presupposes that the alternatives at stake are “open” and that the decisions taken are reversible […].13

10 Ernesto Screpanti and Stefano Zamagni, An Outline of the History of Economic Thought (Oxford: Oxford University Press, 2005), pp. 170-172.
11 Ibid., pp. 165-166.
12 Ibid., p. 166.

(4) Within the TE-paradigm, methodological individualism gradually became all-pervasive. The fundamental entities of scientific explanation, so it was claimed, had to be as small as possible, which leads to individualistic reductionism:

If they are subjects able to make rational decisions with a view to maximizing an individual goal, such as utility or profit, they must be individuals or, at the most, ‘minimum’ social aggregates characterized by the individuality of the decision-making unit, such as households and companies. Thus the collective agents, the social classes and the “political bodies,” which […] the classical economists and Marx had placed at the centre of their theoretical systems, disappear from the scene.14

(5) The TE-paradigm claims that the “laws” it discovers with the help of its methodological tenets (3) and (4) are a-historical. They are not connected to any specific, historically located social relations or political-cultural background:

Economics was likened to the natural sciences, physics in particular, and economic laws finally assumed that absolute and objective characteristic of natural laws. […] [F]or this to make sense, it is necessary to remove social relations from the field of economics, exorcizing them as superstitions, a waste of time, a subject not in line with the new scientific achievements. With the marginalist revolution also originated that reductionist project of economics which has marked all the successive neoclassical thought, a project according to which economics has no other field of research than technical relationships (the relationships between man and nature). Thus, while individualistic reductionism had led to the elimination of social classes, the anti-historicist reduction led to the elimination of social relations—which also meant that the study of their change also lost importance.15

(6) Finally, closely connected to tenet (2) is the idea that, according to the TE-paradigm, since agents have preferences which they maximize under constraints, and since it is these preferences that jointly determine the value of things, all values in society are both individual and subjective:

“Individual” means that they are considered always as the ends of particular individuals. On the other hand, values are “subjective” in that they arise from a process of choice: an object has value if it is desired by at least somebody. […] In the opposite conception, that of objective value, values exist independently of individual choices [one of these objectivist approaches being the Smith-Ricardo-Marx approach of labor-related value which occupied center stage in classical economics].16

Thus, the TE-paradigm emerged in response to flaws in the older, classical paradigm. But, as Screpanti and Zamagni correctly note, Veblen actually coined the expression “neoclassical” with respect to Alfred Marshall’s work, about which he had many reservations; the prefix “neo” did not, in his mind, refer to the whole of what Marx, a few decades earlier, had termed “classical.” So there is little point in trying to pin every feature of the TE-paradigm against a hypothetical corresponding feature of classical economics. Still, in broad brush, there does seem to have been a scientific revolution in Kuhnian terms, since after 1900 hardly any significant remains of the classical period subsisted, and the TE-paradigm gradually evolved to take over virtually the whole of the academic establishment in Britain and in the US—a hegemony which became synonymous with a monopoly on young, brilliant minds.17

13 Ibid.
14 Ibid.
15 Ibid., p. 167.
16 Ibid.

The six tenets singled out by Screpanti and Zamagni cover the ground fairly well, although they neglect to make explicit one very important concept: equilibrium and its practical correlate, the tool of “comparative statics.” (It may in part be recovered through their tenet (5), by spelling out the way traditional economists construct their “laws.”) For the rest of this section, I would like to offer a more concise characterization of the TE-paradigm based on three “pillars”18: methodological individualism, methodological instrumentalism, and methodological equilibration. Traditional Economics can, I claim, be circumscribed fairly exhaustively by its axiomatic imposition, in any move of knowledge construction about the world, of instrumental rationality at the agent level and of static equilibrium at the inter-agent level. In fact, Traditional Economists never ask themselves what correlates in agents’ organisms and brains go along with the functioning of the economic system. And this functioning is analyzed as the coordination between instrumentally “rational” particles indirectly connected to each other through equilibrium “variables.” The “core” methodology of the TE-paradigm therefore has three main components:

a) Methodological individualism (henceforth M.Ind) says that, at the level of explanation, any phenomenon must be analytically broken down into the actions and interactions of the individual atoms that make the phenomenon emerge;

b) Methodological instrumentalism (henceforth M.Inst) says that any individual atom’s action can be rationalized as the result of optimization (i.e., rational, purposeful interest fulfillment) subject to perceived constraints that either pre-exist the interaction or emerge from the interaction itself.
c) Methodological equilibration (henceforth M.Eq) says that any social phenomenon can be rationalized as an equilibrium in the interaction of individual atoms.

The TE-paradigm is therefore designed to explain all social phenomena exclusively as equilibria computed from the mutually compatible actions of optimizing individual atoms. (Note carefully that I have just written “computed from” rather than “emerging from,” and “mutually compatible actions” rather than “interaction.”) It is hard to imagine how any Traditional Economist could deny that this is what he or she is doing, and has been doing, in every single piece of research he or she has ever offered to the scientific community. Whether it is general equilibrium theory, non-cooperative game theory, non-Walrasian equilibrium theory, social choice theory, industrial economics, economic geography, political economics, analytical Marxism, or public economics—all of these approaches differ widely in numerous aspects that can be, and actually are, hotly debated, but none of them strays from explaining all phenomena exclusively as equilibria computed from the mutually compatible actions of optimizing individuals.

In a significant sense, these three “core” principles of the TE-paradigm can be seen as nested axioms: individualism is broader than instrumentalism, which in turn is broader than equilibration. To be more precise, M.Ind does not prescribe that all individuals be instrumental maximizers; in turn, M.Inst does not prescribe that all phenomena be the instantaneous, computable result of mutually compatible actions. The M.Eq axiom is extremely stringent in that it imposes that there be, in fact, no interaction—only actions which are already in equilibrium, so that any interaction which may have occurred can only be assumed, and remains forever invisible.
The M.Inst axiom, too, is extremely stringent in that it imposes that there be, in fact, only self-centered individuals, in the sense of individuals who are purposefully pursuing their ends, towards which they mobilize whatever means their environment makes available to them.

17 See e.g. Christian Arnsperger and Yanis Varoufakis, “Neoclassical Economics: Three Identifying Features,” in Edward Fullbrook (ed.), Pluralist Economics (London: Zed Books, 2008), pp. 13-25.

As Jason Potts has argued recently, these three axioms, and especially the third one, make the TE-paradigm into what he calls a “complete field” approach.19 The economic situations analyzed by the TE-paradigm are always already equilibrium situations: all the relations between the agents, all their connections, are assumed to have already been discovered and established, with the implication that their gradual emergence through interaction need not be analyzed; moreover, these connections between agents are assumed to be completely covering and instantly informative, so that whatever any agent does is supposed to be transmitted instantly—in the form of “information”—to all the other agents, with the implication that the actual interactions between agents need not be analyzed, either. Thus, Traditional Economists methodologically summarize all interactive processes of search for connections, and all processes of transmission of information between agents, as equilibrium states.

C.6. The twofold reductionism of Traditional Economics

The TE-paradigm is a reductionist sub-variant of an already in itself strongly reductionist approach that has pervaded economics almost from its inception.

C.6.1. The economy as an interactive system

Ever since it originated in the writings and practices, as well as the policy recommendations, of the seventeenth- and eighteenth-century economists—physiocrats, mercantilists, classics—economics has been a discipline oriented through and through towards a system-reductionist stance.
From the very beginning, the various schools of economic analysis and governance that developed in England, France, and Germany had a fundamental interest in viewing “the economy” as a huge machine, or organism, made up of smaller machines or organs, all the way down to the individual “cells” that are considered by Traditional Economists to be the individual building blocks of social reality. True enough, the deep individualism of the TE-paradigm was slow in coming. Initially, the pre-classical and classical approaches were much more holistic and functionalist; they saw the economic system as a whole, and when they did decompose it, they did so into broad subsystems such as sectors, industries, classes, “bodies,” or types of agents, or into broad mechanisms such as flows, sectorial inputs and outputs, and so on.

The physiocrat Quesnay saw the economy essentially as a system of accounts with inflows and outflows between sectors, which he attempted to synthesize statistically in his famous Tableau Économique. Ricardo believed that he could, through his theorization of labor-value and the gravitation of market prices around natural prices, rely only on macro-laws based implicitly on the diversity of individual rationalities—but rationalities rendered irrelevant in the aggregate by a “law of large numbers” analogous to the one later used in statistical physics. The “scientific” later Marx of Capital can be interpreted as having proposed an explicitly interactive model with heterogeneous agents (since he was interested in how class struggle drives the dynamics of history), but he used a quite coarse-grained typology of agents, with at most two or three categories of representative or “average” agents (capitalists, proletarians, and rentiers);

19 Jason Potts, The New Evolutionary Microeconomics: Complexity, Competence and Adaptive Behaviour (Cheltenham: Edward Elgar, 2000), pp. 11-30. See also Sunny Y. Auyang, Foundations of Complex-System Theories in Economics, Evolutionary Biology, and Statistical Physics (Cambridge: Cambridge University Press, 1998), pp. 115-140.



moreover, he considered the rationality of these classes of agents to be rather simplistic (reducible to the desire to appropriate material surplus) and he established between them very simplistic connections (reducible to the desire to own what the other is taking), which he considered sufficient to generate his interactive theory of historical change—in fact, he eventually ended up considering even such simplified interactions as subsumable into the macro-laws of history, thanks to a fairly un-analytical and formally underspecified notion of “dialectics.” Thus, classical economists were much more systemic holists than methodological individualists.

This “whole systems” heritage is deep and tenacious. Even Alfred Marshall, at whom Veblen leveled the adjective “neoclassical” in the first place, borrowed from evolutionary biology and Darwinism his organic, holistic image of an industry as a forest and of individual firms as trees that grow or wither along the competitive process. Traces of an incomplete individualism can still be found in the TE-paradigm when it treats firms as optimizing agents even though they are, quite obviously, composite agents made up of individuals—and it is this last remnant of holism inside Traditional Economics which has been attacked and destroyed by the Coase-Williamson approach to firms as aggregate mechanisms that minimize the transaction costs incurred by their individual members.

Thus, even though Traditional Economics has grown gradually more individualistic and less interactive, it is clear that the basic interest of economics as a discipline has always been—across paradigms—the analysis and comprehension of systems-in-process, that is, of the economy (or any subset of the economy) as a system of interacting components whose mutual relations are strong compared to their relations with any component of the system’s environment. In systems vocabulary, such systems are called complex adaptive systems (CASs).
They are, and have always been, the basic object of interest of any economist. As a result, economics has from its very inception been haunted by the temptation to mimic the natural systems sciences—physics or biology.20 The ambition of economics as a sort of “social physics” dates back to the very beginning, and economists’ perpetual search for formal tools of conceptual analysis and empirical mastery of systems is explained mainly by this ambition.

C.6.2. Traditional Economics and the loss of process

The classical economists relied on the reduction to dynamic systems in order to study the phenomena of growth and, more generally, of wealth generation. This can certainly be explained by the fact that growth, rather than the distribution of whatever had been produced, was considered in the eighteenth and nineteenth centuries the main goal of economic management: increase the size of the pie before investigating how any pie ought to be divided. As Screpanti and Zamagni emphasized above, the neoclassical economists gradually gravitated towards a less dynamic—and eventually totally static—analysis of resource distribution. This had, of course, also been on the agenda of Smith, Ricardo, Malthus, or Marx, but their preoccupation with first overcoming aggregate scarcity had overshadowed their analysis of how to manage scarcity, so that their distributional theories seemed in need of improvement.

This shift in preoccupation has meant an additional reductionist move within the already reductionist discipline of economics: Traditional Economics focused on what systems theorists have called “synthetic microanalysis” within a methodology of “independent-agent approximation.” This has led the paradigm to a gradual loss of the interactive components of

20 For a detailed and enlightening discussion of the attraction of economists—mainly classical and neoclassical—to mathematics and physics, see Philip Mirowski, More Heat Than Light: Economics as Social Physics, Physics as Nature’s Economics (Cambridge: Cambridge University Press, 1989).



the complex adaptive economic system, and also to a loss of the aspect of mechanical and/or organic dynamics that was so central for the classics. The TE-paradigm created a massive shift of focus from process to state, and from interaction to isolated action.

In fact, when we speak of “isolated” action, we need to be careful. The TE-paradigm is individualistic but not micro-reductionist: it does not reduce all economic reality to the actions of individuals at the micro-level, as if there were no macro-level at which those actions needed to be aggregated. Macro-aggregation is crucial for the TE-paradigm, since it is by aggregating optimal individual decisions that it hopes to be able to explain all phenomena in the economy (and, more broadly, in society). However, what makes the TE-paradigm special is its particular method for micro-to-macro aggregation—a method known as synthetic microanalysis under an assumption of independent agents. How can independent micro-agents generate interdependent actions that “feed back” to the macro-level to generate aggregate phenomena? This is where the concept of equilibrium, in its neoclassical sense, becomes central. As we already saw earlier, Sunny Auyang has characterized the method of synthetic microanalysis as follows:

Suppose we imagine the description of systems and the description of their constituents as distinct two-dimensional conceptual planes, variously called the macro- and microplanes. Microreductionism rejects the macroplane, holism rejects the microplane, isolationism rejects the connection between the planes. They are all simplistic and deficient. Actual scientific theories of composition are more sophisticated. They open a three-dimensional conceptual space that encompasses both the micro- and the macroplanes and aims to consolidate them by filling the void between them. This is the approach of synthetic microanalysis.21

The properties of the economy are located on a distinct—or “higher”—level compared to the properties of the agents, yet you do need to know the mechanisms of action choice of the agents, as well as the mechanisms of interaction between them, to be able to deduce through (re-)composition the operation of the economy. So (a) there is a macroplane, which corresponds to the aggregate phenomena occurring in the economy; (b) there is a microplane, which corresponds to the agents of the economy once it is analyzed into its “parts”; and (c) there is a “bridge” between the two planes, which corresponds to the effects that emerge from the interaction of the agents, once they have been “re-synthesized,” so to speak, and the economy has started to operate. This emergence of aggregate phenomena from interactions between individuals is what makes most systems into complex interactive systems:

Large-scale composition is especially interesting because it produces high complexity and limitless possibility. […] Myriad individuals organize themselves into a dynamic, volatile, and adaptive system that, although responsive to the external environment, evolves mainly according to its intricate internal structure generated by the relations among its constituents. In the sea of possibilities produced by large-scale composition, the scope of even our most general theories is like a vessel. […] Large composite systems are variegated and full of surprises. Perhaps the most wonderful is that despite their complexity on the small scale, sometimes they crystallize into large-scale patterns that can be conceptualized rather simply […]. These salient patterns are the emergent properties of compounds. Emergent properties manifest not so much the material bases of compounds as how the material is organized. Belonging to the structural aspect of the compounds, they are totally disparate from the properties of the constituents, and the concepts about them are paradoxical when applied to the constituents.22

Such a system can sometimes be reduced to a non-interactive system by an operation called an independent-agent approximation:

In short, independent individual models replace familiar relations among individuals by the response of each individual to a common situation, or statements of the form “Each individual xi has character Ci and engages in relation Rij to every individual xj other than itself” by statements of the form “Each individual xi has situated character C*i and responds to the situation Si, which is a rule generated by and common to all individuals in the system.” The replacement eliminates the double indices in Rij, which signify binary relations and cause most technical difficulties.23

21 Sunny Y. Auyang, Foundations of Complex-System Theories, op. cit., p. 55.
22 Ibid., pp. 1-2.
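Auyang’s substitution of binary relations Rij by responses to a common situation Si can be made concrete with a toy numerical sketch of my own (the linear interaction rule and the function names are illustrative assumptions, not Auyang’s code). For a linear rule the reduction is exact: each agent’s reactions to all pairwise relations collapse into a single reaction to one aggregate variable.

```python
# Toy sketch (my own construction) of the independent-agent
# approximation. In the interactive model each agent i reacts to every
# other agent j through a pairwise relation R_ij; in the reduced model
# each agent reacts only to a common situational variable S (here, the
# population average). For this linear rule the two models coincide.

def interactive_step(state):
    """O(n^2) model: agent i moves a small step toward each j (R_ij)."""
    n = len(state)
    return [
        w + sum(0.1 * (state[j] - w) / n for j in range(n) if j != i)
        for i, w in enumerate(state)
    ]

def independent_step(state):
    """O(n) independent-agent model: each agent responds to S alone."""
    n = len(state)
    S = sum(state) / n  # the shared "situation"
    return [w + 0.1 * (S - w) for w in state]

state = [1.0, 2.0, 4.0]
a = interactive_step(state)
b = independent_step(state)
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))
```

The double index in Rij—the source, as Auyang says, of “most technical difficulties”—disappears: the reduced model touches each agent once, against the shared variable S, instead of n − 1 times.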

A typical and well-known instance of this is the Walrasian model of general market equilibrium, in which the direct interaction of agents, bargaining, “higgling and haggling” about prices and/or quantities in pairwise negotiations, searching for the best price-quality ratio by trial and error, and so on, is replaced by a non-interactive setting in which each agent, in isolation, reacts to a price vector p. If this price vector happens to be the Walrasian equilibrium vector p*, no matter how it was attained—and even if it was just “thrown in” by fiat by some external agent without there having been any previous interactions out of equilibrium—then each agent will be able to buy or sell the quantities she desires to buy or sell at those prices: no one will experience any rationing, and the economy will “be in equilibrium.” (Be careful: you cannot say it has reached equilibrium, because there is no process involved.) The independent-agent approximation allows for completely static setups in which agents just receive the equilibrium prices that, in a truly dynamic interactive system, would have had to emerge from their interactions. At best, we could say that in the Walrasian approach, each agent i endowed with her preferences and her initial resources, and all agents responding in parallel, instantaneously generate the equilibrium price vector p*, which is such that, when it is “fed back” to each agent, it elicits the set of “optimal” choices that generated it in the first place … All this happens in an instant, without interactions in real time (figure C.2).

[Figure C.1 – Interactive model (with interdependent agents)]

[Figure C.2 – Non-interactive model (with independent agents)]
In figure C.2, the agents do not interact; each agent reacts in isolation to a shared “situational variable,” the unique market price vector p (each reaction is represented by a small upward arrow). In equilibrium—which exists under certain technical assumptions about the agents’ preferences, about the technologies of production, and so on—the value taken by this shared variable will be p*, and when this is “fed back” to each agent (through the big downward arrow) it leads each agent to “replicate” the reaction he had previously “sent up.” In figure 1, there are multilateral interactions—assuming, here, that all agents are already connected to all others, which is a very specific assumption akin to what Potts calls a “complete field.” These interactions generate pairwise prices (one for each double arrow), and it is only by amazing chance that these “local” prices will, when gathered together, coincide with p*. This sort of setup, which is part of the core of the TE-paradigm, assumes that agents have parametric rationality: in her choice of action, each agent takes her environment as a parameter which she cannot affect, and she includes in that parametric environment the                                                                                                               23

23 Ibid., p. 119.



actions of all the other agents. Such an assumption really makes sense only if we take each agent to be of "Lebesgue measure zero," i.e., to be a point on a continuous line segment.24 With just four agents as in figure C.2, this makes no clear sense, since such agents would surely perceive their bargaining power and the possibility of seeking out each other agent individually in order to negotiate a deal at a higher- or lower-than-Walrasian price. Outside of the Walrasian setup with an "infinite" number of traders, the assumption of parametric rationality is not plausible. However, as we will see later, even non-cooperative game theory, which introduces strategic rationality—and is the basis for recasting a lot of non-Walrasian, "imperfect-competition," and/or "imperfect-information" economics in a post-neoclassical format—has recourse to the independent-agent fiction when it assumes common knowledge of (instrumental) rationality and uses the concept of Nash equilibrium. One quite important issue connected with figure C.2 is whether the shared "situational variable"—in this case, the price vector p*—that all agents face in isolation is not, in fact, provided by some "hidden" institution. Many economists within the TE-paradigm have come to the conclusion that the independent-agent fiction can only make sense if the agents are again made mutually interdependent through a centralized agency such as the Walrasian "auctioneer." Thus, whereas figure C.1 represents an economy with decentralized interdependence between agents, figure C.2 would really represent an economy with centralized interdependence between agents who have no direct contact with one another.
Some, such as Abba Lerner and Oskar Lange in the 1930s, have gone as far as saying that the independent-agent approximation is really a covert model of a planned economic system in which the constituent components are “steered” by some central agency—or central computer—into computing the “right” values of the shared situational variable so that the system will solve its set of multiple demand-and-supply equations. This insight provided a rationale for many National Planning Agencies—and it already indicates that TE-type systems analysis is by no means to be equated with “neoliberal” or “free-market” doctrine. Indeed, recall that as Screpanti and Zamagni emphasize, the TE-paradigm emerged in part as an attempt to “salvage” socialist ideas and ideals from the Marxist way of treating these ideals. Hence, it is no surprise that there are very many Left-wing, even Marxist, Traditional Economists: Their position is that even Left-wing ideas can be formalized and promoted with the toolbox of the TE-paradigm. To the idea that the independent-agents approximation presupposes a central coordinating agency, Sunny Auyang replies the following: The self-consistently determined situation [such as the equilibrium vector p*] is characterized in its own terms, which are qualitatively different from the concepts we use to describe the individuals. Despite its endogenous nature, its direct effect on the independent individuals is similar to that of external forces. Since an individual in a large system is insignificant compared to the aggregate effects of all the rest, the situation constrains individual behaviors and appears to be dominant. […] Since the competitive market model is an equilibrium theory that neglects dynamics, an auctioneer is invented to initiate and control price movements, enhancing the image of exogenous domination. […] The individuals are easily mistaken for isolated beings and the situation is mistaken for an externally imposed institution or norm. 
Ideological dispute follows.25

What she is thus suggesting is that figures C.1 and C.2 should not be taken as the representations of two distinct economic systems, but rather figure C.2 should be seen as an "as if" reduction of figure C.1: agents really are interacting, the price vector really is endogenous and really does emerge gradually from these interactions, but for purposes of

24 See Robert J. Aumann, "Markets with a Continuum of Traders", Econometrica 32 (1964): 39-50, and "Existence of Competitive Equilibria in Markets with a Continuum of Traders", Econometrica 34 (1966): 3-27. The "Lebesgue measure" is a criterion for evaluating the "size" of a subset of Euclidean space; a point on a continuous line, or continuum, is the simplest example of a set that has a Lebesgue measure of zero.
25 Sunny Y. Auyang, Foundations of Complex-System Theories, op. cit., p. 121.



simplification we can subsume this interactivity into a static, non-interactive model. As long as we are conscious of what we are doing, she seems to say, no harm is done and the TE-paradigm's massive recourse to methodological equilibration is vindicated as a valid "shortcut." (Note in passing that Auyang herself is prone to conflating Traditional Economics with "economics" proper, as is evident from the title of her book and from her discussions of economics.) Thus, the Lange-Lerner suggestion that the centerpiece of the TE-paradigm—i.e., the system of decentralized Walrasian markets and its self-consistent prices—is really a centralized planned economy becomes, for her, an "ideological dispute" that misses the point. This sort of position has been frequent in the TE-paradigm: It says that completely unrealistic assumptions—such as independent, perfectly informed agents facing an anonymous vector of market prices—can be made as long as the results of these assumptions, the predictions and explanations they make possible, are valid and acceptable.26

C.6.3. Two versions of equilibrium

Since the TE-paradigm locates itself within systems analysis and seeks—as we saw in the previous section—to gain "insights into" the (still external) operation of the system, its supporting community of Traditional Economists believe that the underlying concepts of individual rationality and systemic equilibrium are absolutely unavoidable. Indeed, they claim that without these concepts economic science itself would be impossible—by which they actually mean that their own preferred, TE-framed conception of science would break down. And they are, of course, quite correct. Without these two notions, the "as if" reasoning suggested by Auyang could not get off the ground. In fact, these two notions are two sides of the same coin: in equilibrium, the agents do not really make decisions; their "optimal" decisions are pre-inscribed into them by the system's need to be in equilibrium.
In other words, rational individuals in the TE-setup are individuals who are functionally constructed so as to respond only to equilibrium values of the situational parameters. This reminds us of the German philosopher Leibniz, who saw all of Reality as composed of self-enclosed "monads" which aggregate and disaggregate by obeying what he called a "pre-established harmony": since Reality exists (rather than nothing) and since moreover Reality's way of existing is to be what it is (and not some other reality), it necessarily follows that some "principle of sufficient reason" must be at work which constantly coordinates the billions of monads into equilibrium—they have no choice but to form the Reality-That-Is! … Even though this may sound like remote metaphysics, it is in fact what the TE-paradigm's core axiom of methodological equilibration says: since the world exists, and since there are economic events and occurrences out there, something—or some things—must be "in equilibrium" at all times; if not, reality would not be self-consistent and would not exist. In fact, there is some deep truth to this. Since the analysis of collective systems is essentially an analysis of the causes of rational collective order, it is indeed the case that, from a systems-analysis perspective, the absence of order makes no sense at all, since disorder is synonymous with external invisibility! What does not respect some ordering principles simply does not appear in external reality. Thus, if we equate "equilibrium" with "consistency-creating order," there is indeed no way that Reality cannot be an "equilibrium." This is, in fact, what Karl Popper meant by his "rationality principle":27 if one is going to

26 For a detailed defense of this sort of position, usually called "instrumentalist," see Milton Friedman, "The Methodology of Positive Economics", in Milton Friedman (ed.), Essays in Positive Economics (Chicago: University of Chicago Press, 1953), pp. 3-43.
27 See Karl Popper, "The Logic of the Social Sciences" (1962), reprinted in K. Popper, In Search of a Better World: Lectures and Essays from Thirty Years (London: Routledge, 1994).



construct a scientific explanation of how reality is structured, one cannot avoid imposing on one's concepts some notion of macro-consistency (i.e., systemic "equilibrium") as well as some notion of micro-consistency (i.e., sub-systemic, individual "rationality") that is congruent with the requirements of macro-consistency. This is also what Sunny Auyang was saying when she earlier described the general method of synthetic microanalysis. In that sense, we indeed cannot live without the concepts of individual rationality and systemic equilibrium. The question, however, remains as to whether the TE-paradigm's way of conceiving of rationality and equilibrium is the best way. Here the difference between figures C.1 and C.2 becomes crucial. They can only be seen as interchangeable if we can be certain that the complex interactive process of figure C.1 eventually converges on the situation portrayed in figure C.2. In other words, as long as we do not have a mediating mechanism that "links" these two figures, we are not certain at all that figure C.2 can be serenely taken to be the "summary" description of anything externally visible! This is one of the oldest and most puzzling questions in the TE-paradigm: under what conditions can the independent-agent approximation of figure C.2 be seen as the plausible outcome of a real-time, dynamic process such as that of figure C.1? Now, we should note immediately that this question is not confined to the Walrasian model where p* is a market-price vector. The exact same question can be put to the non-Walrasian model of "fixed-price equilibria" where p* is replaced by a quantity vector q* reached through so-called "quantitative rationing schemes,"28 and to any equilibrium model—possibly imperfectly competitive—that treats agents' interactions as secondary. To make sense of this whole discussion, we need to distinguish two senses in which both rationality and equilibrium can be understood:

(A) Weakly functionalist ("basic-reality") interpretation: Since reality exists, there has to be order somewhere rather than total chaos. In other words, some things have to adjust at various places in the system of reality at any moment in time, so that the world appears to us and is intelligible. All entities in reality have to adjust in some way, at all times, to ensure the macro-coherence of That-Which-Is (independently of what it is or whether it is acceptable, etc.). Therefore, micro-entities have to obey a rationality that makes them fit into the overall scheme of reality, which they de facto make up at all instants of time.

(B) Strongly functionalist ("system-at-rest") interpretation: Within existing reality, there are situations of restful order in which all interactions have stopped because, given the micro-entities' rationality, all opportunities to "optimize" have been exhausted. Such situations are characterized by a fixed-point situation: given the values x* of the adjustment variables, each micro-entity has as its "optimal" choice a vector of actions a*i such that x* → [a*1, a*2, …, a*n] → x*. This implies that in the absence of any exogenous change in some or all micro-entities' environments, none of these micro-entities has any incentive (given its rationality) to change its optimal choices.
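The difference between an (A)-style process and a (B)-style fixed point can be made concrete with a toy simulation. The sketch below is purely illustrative: the two-good exchange economy, the Cobb-Douglas preferences, and all numerical values are my own assumptions, not part of the text's argument. A tâtonnement loop (an (A)-style adjustment in time) gropes toward a price p* at which feeding the price back to the agents elicits exactly the demands that sustain it (a (B)-style fixed point, with excess demand of zero):

```python
# Illustrative tatonnement in an assumed toy two-good exchange economy.
# Good 2 is the numeraire; p is the price of good 1.

def excess_demand(p, agents):
    """Aggregate excess demand for good 1 at price p.

    Each agent has Cobb-Douglas preferences: she spends a share
    alpha of her wealth p*e1 + e2 on good 1, so she demands
    alpha * (p*e1 + e2) / p units of it.
    """
    total = 0.0
    for alpha, e1, e2 in agents:
        wealth = p * e1 + e2
        total += alpha * wealth / p - e1
    return total

# (alpha, endowment of good 1, endowment of good 2) for each agent
agents = [
    (0.5, 10.0, 0.0),
    (0.25, 0.0, 10.0),
]

# (A)-style process: the price adjusts step by step, in "real time"...
p = 2.0
for _ in range(10_000):
    p += 0.05 * excess_demand(p, agents)

# ...until it lands on a (B)-style fixed point: at p*, feeding the price
# back to the agents reproduces exactly the demands that sustain it.
print(round(p, 4))
print(abs(excess_demand(p, agents)) < 1e-6)
```

The point of the sketch is the logical structure, not the economics: the loop is a process unfolding in time, while its terminal condition (zero excess demand at p*) is the process-free situation that the strongly functionalist interpretation takes as its sole object of interest.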

Clearly, version (A) is compatible with figure C.1 as well as with figure C.2, while version (B) is compatible only with figure C.2. (B) selects within (A) those particular situations for which interaction, search, exploration, connection-building, etc., can be neglected because they are assumed to have "played themselves out" completely, so that only something from outside the system—affecting some or all agents' micro-environments—can generate new interaction, search, exploration, connection-building, etc. It is because of this quite restrictive

28 See Jean-Pascal Benassy, The Economics of Market Disequilibrium (New York: Academic Press, 1982).



interpretation of the notions of rationality and equilibrium that, almost as a pun, we can say that in "perfect competition" as the TE-paradigm understands it, there is no competition at all. The main reason why the TE-paradigm has strengthened its core concepts of rationality and equilibrium from (A) toward (B) is that it has long been obsessed with the project of emulating physics and coveting the prestigious status of classical mechanics. In fact, both classical and neoclassical economists were obsessed with the idea that economics should be made into a "social physics." This is, of course, due to a quite narrow interpretation of what systems analysis means. For the Traditional Economist, a system's operation is of interest only at its (B)-equilibrium states; thus, what sequence of (A)-equilibria might make the system go from one (B)-equilibrium to the next has long been deemed irrelevant, and for many decades this has been impressed upon students and junior researchers through the all-pervasive recourse to the method of comparative statics, which is also part of the TE-paradigm's "core" toolbox. To see how a change in the system's exogenous variables affects the system, one needs to compare the static (B)-equilibrium before the exogenous "shock" to the static (B)-equilibrium after the shock. The totally amazing part of the story is that this methodology has actually served as the basis for decades of econometric estimates and economic-policy advice. Now, to avoid any misunderstanding, let us emphasize yet again that one can perfectly well maintain at the same time the absolute necessity of version (A) while rejecting the necessity of the more restrictive version (B). When Traditional Economists vocally defend "their" paradigm by arguing that the notions of rationality and equilibrium cannot be dispensed with because of Popper's "principle of rationality," they are conveniently confusing (B) with (A).
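What comparative statics amounts to can be shown in a few lines. The sketch below is an assumed toy example (two goods, Cobb-Douglas agents), not a model from the text: it computes the (B)-equilibrium price before and after an exogenous endowment shock and compares the two endpoints, while remaining, by construction, entirely silent about the out-of-equilibrium path between them.

```python
# Minimal comparative-statics exercise in the TE-paradigm's sense:
# compare two static (B)-equilibria, ignore the transition.
# The economy (Cobb-Douglas agents, two goods) is an assumed toy example.

def equilibrium_price(agents):
    """Analytic p* for good 1 (good 2 as numeraire) with Cobb-Douglas agents.

    Setting aggregate excess demand for good 1 to zero yields
    p* = sum(alpha_i * e2_i) / sum((1 - alpha_i) * e1_i).
    """
    num = sum(alpha * e2 for alpha, e1, e2 in agents)
    den = sum((1 - alpha) * e1 for alpha, e1, e2 in agents)
    return num / den

before = [(0.5, 10.0, 0.0), (0.25, 0.0, 10.0)]
# Exogenous "shock": the second agent's endowment of good 2 doubles.
after = [(0.5, 10.0, 0.0), (0.25, 0.0, 20.0)]

p_before = equilibrium_price(before)  # 2.5 / 5.0 = 0.5
p_after = equilibrium_price(after)    # 5.0 / 5.0 = 1.0
print(p_before, p_after)
```

Note that nothing in the computation says how, or whether, the system actually travels from p_before to p_after; that is precisely the gap the text is pointing at.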
In reality, it is perfectly thinkable that there is an orderly reality, in the sense of (A), that never, ever converges on a restful reality, in the sense of (B). One can have a dynamic economic reality in which, at all moments, complex interactions between myriads of agents generate the emergence of order-creating variables (as in (A)) which are nevertheless not interaction-stopping equilibria (as in (B)). This is indeed a very trivial insight once one retrieves it, but it has been obscured by the TE-paradigm's obsession with classical mechanics and the naïve, "comparative-statics" analysis of economic systems. As we will see later on, the majority of Traditional Economists only came around to this problem starting in the 1990s. It is an open question whether they did so because more and more of their elite members (such as Kenneth Arrow or Alan Kirman) had started to listen to, and interact with, young mavericks such as Brian Arthur, Samuel Bowles, or H. Peyton Young and thought their ideas were good—or whether the shift happened first and foremost because of a dramatic improvement in computer technology and in simulation techniques. In any case, the non-Traditional economists who were working more in the tradition of Schumpeter and Hayek had seen the problem very much earlier—which testifies to the plausibility of Kuhn's notion of "normal science." In fact, already in 1945, the conservative economist Friedrich A. Hayek, very much a skeptic vis-à-vis the fledgling TE-paradigm, which he called "Mathematical Economics," explained why he believed it would fail: Any approach, such as that of much of mathematical economics with its simultaneous equations, which in effect starts from the assumption that people's knowledge corresponds with the objective facts of the situation, systematically leaves out what is our main task to explain. I am far from denying that in our system equilibrium analysis has a useful function to perform.
But when it comes to the point where it misleads some of our leading thinkers into believing that the situation which it describes has direct relevance



to the solution of practical problems, it is high time that we remember that it does not deal with the social process at all and that it is no more than a useful preliminary to the study of the main problem.29

None of what Hayek said in 1945 is incompatible with (A). His message is, in essence: Equilibrium yes, comparative statics no; rationality yes, independent-agent approximations no! In more contemporary terms, which we will flesh out in section C.7, complex adaptive systems or "dissipative structures" have equilibria, too, though not of the (B)-type. Thus neither methodological individualism nor systemic equilibrium per se imply the absolute need for Traditional Economics; the specific "core" axioms of the TE-paradigm can be dispensed with—and have been since then—without rationality and equilibrium in themselves being at issue.

C.7. Beyond Traditional Economics: The post-neoclassical paradigm

In the wake of the TE-paradigm, a relatively diverse constellation has started to emerge. I choose to call it "post-neoclassical" economics, even though some of its strands have kept a strong connection to the core of the TE-paradigm—to the point of looking merely like a fleshed-out Traditional Economics. In one sense or other, each of these extensions can be seen as a development called for by the TE-paradigm itself in its effort to maintain itself while adapting to the demands of some of its most brilliant elite members. In her recent survey of the developments in "economics" after the 1980s, Diane Coyle has written: The key elements of economic methodology, unchanged from the classical days, are the status of rational choice and the use of equilibrium as a modeling concept. If these are limitations, so be it: every subject has core restrictions in its methodology, which in fact represent its strengths and distinctive insights. It's not that we believe that everybody chooses rationally all the time—on the contrary, the most orthodox of economists is interested in learning from behavioral research. Nor do we think the economy is always in equilibrium. That would be just as silly. Nevertheless, both elements are core to our way of thinking.
[…] [T]he paradigm is unchanged, if that means the essential elements of economic methodology, but economists are unified by a new consensus as to what economics is about. Not the study of competitive markets, but rather an understanding of society as the aggregation of millions of individual decisions, in specific contexts shaped by history and geography, and by our own evolutionary history.30

To fully understand her point, we need to remember our key distinction between directly interdependent agents and the approximation through "independent"—that is, indirectly interdependent—agents. Coyle is in fact claiming that while equilibrium has been abandoned in the strongly functionalist sense of a system at rest, it has been kept on in the weakly functionalist sense of an ordered reality. The "new consensus" of which she is speaking is, to a large extent, the result of the TE-paradigm's basic methodological stance as expressed by Hayek at the end of section C.6 above: equilibrium yes, comparative statics no; rationality yes, independent-agent approximations no. "Aggregation" and "heterogeneity" are indeed new buzzwords at the front line of Traditional Economics, and Coyle is correct when, in her book, she explains the key role played in this evolution by the emergence of new computer technologies and new simulation techniques. The "new" economics is out to explain how social situations emerge as ordered patterns out of the direct interactions of rational individuals; it wants to eschew both parametric rationality and static equilibrium and replace them with strategic rationality and ordered but out-of-equilibrium dynamics. In the "new"

29 Friedrich A. Hayek, "The Use of Knowledge in Society", American Economic Review 35 (1945): 519–30; reprinted in F.A. Hayek, Individualism and Economic Order (Chicago: University of Chicago Press, 1948), pp. 77–91.
30 Diane Coyle, The Soulful Science, op. cit., pp. 251-253 passim.



view—which is really the older, classical view resurrected—the economy is a mechanistic system endowed with order by agents’ rationality, but rendered “restless” by their strategic interactions and by the imperfections in information and calculation. In that sense, it is reasonable to talk about a “post-neoclassical” movement, even though—as we will see—the remnants of the usual TE-type of reasoning are strong. In particular, independent-agent approximations have not completely lost their appeal. The development of game theory in the 1940s and 50s, and its “invasion” of the economics territory after the 1970s, has been heralded by some as a fundamental scientific revolution in the sense of Kuhn. This is, alas, both true and untrue—it all depends on which core axioms one is looking at. As we will try to show, game theory has revolutionized economists’ views about instrumental rationality and some of the basic characteristics of social “optimality,” but it has done very little, if anything, to revolutionize their views about equilibrium. C.7.1. Instrumental rationality: From parametric to strategic When Coyle states that the core axioms of the TE-paradigm have not changed, she is basically correct: game theory, just like Traditional Economics, relies entirely on the notion of agents’ instrumental rationality. In that sense, one could even claim that game theory merely represents a more sophisticated version of the exact same M.Inst axiom. Indeed, Coyle’s claim that mainstream economists do not “believe that everybody chooses rationally all the time” is not true; being essentially pessimistic liberals, they do believe that whatever people do is intended to be instrumental: you and I are, according to M.Inst, constantly looking out for our best interests as we conceive them and for this purpose we are constantly attempting to mobilize all the means available in our environment. 
This is the key idea of the M.Inst axiom: Means-to-ends rationality, using whatever your environment makes available to you to realize your valued personal objectives, expressed as "preferences." The question is: What sort of elements are available? Rationality will be said to be parametric if the agent takes her environment as a set of parameters—including not only the preferences, but also the actions of all the other agents. In independent-agent approximations, where interdependence is indirect or mediated, environmental parameters are usually assumed to be "summarized" by one (or a very small number of) situational variable(s) such as market price. However, one could also assume—if the agent had more detailed information as to who is contained in her relevant environment—that the agent perceives not just a "summary" constraint but a detailed vector of all other people's actions and uses this to compute her own optimal action. This could still count as an independent-agent approximation, if we assume that all agents get to know the vector of all agents' actions, including their own, so that the publicized vector of all agents' actions would be the same for everyone. Graphically, we would have figure C.3 below.



Figure C.3 – Non-interactive model with detailed action vector (each agent A, B, C, D, E reacts in isolation to the shared vector a = (aA, aB, aC, aD, aE))

However, it is immediately apparent that in such a system of indirect interdependence, "rationality" is meaningless: the agent sends out an action ai which, gathered with all other agents' actions, is "sent back" to her. There is no guarantee at all that, given the actions of all others, she can still consider her initial action as optimal. In other words, once it is other people's actions themselves, and not a "summary" of them, which gets sent back towards agents, the question of how they are to be coordinated becomes even more difficult. In fact, moving from a summary variable to a vector of actions uncovers an important aspect of the "hidden structure" of Traditional Economics: By working with independent-agent approximations, it actually forbids agents from reacting directly to each other's actions and fully taking into account their de facto interdependence; mediated interdependence presupposes that agents either are unable to, or agree not to, face up to their immediate interdependence. If they are unable to—as is the case for very large, complex systems such as the capitalist market system—then summarizing all actions into an aggregate adjustment variable is a way for everyone to economize on information gathering and transaction costs; this is a point which transaction-cost economics has exploited to the full, while "Austrian" economics has rejected it by rejecting the relevance of the TE-paradigm's notion of equilibrium.
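The coordination failure just described can be illustrated with a minimal best-response loop. The game (a matching-pennies interaction in which A wants to match B while B wants to mismatch A) and its payoffs are assumptions for illustration only: when the full action vector is publicized and each agent re-optimizes against it in isolation, the profile cycles forever and is never confirmed by the feedback.

```python
# Toy illustration of why broadcasting the full action vector
# (as in figure C.3) gives no guarantee of consistency: under
# simultaneous best responses, a matching-pennies interaction
# never reproduces the action profile that was "sent up".

def best_responses(profile):
    """Simultaneous best responses to the publicized profile (a_A, a_B).

    A's best response is to match B's publicized action;
    B's best response is to mismatch A's publicized action.
    """
    a, b = profile
    return (b, 1 - a)

profile = (0, 0)
trajectory = []
for _ in range(8):
    trajectory.append(profile)
    new_profile = best_responses(profile)
    # A fixed point would mean the fed-back vector confirms every choice.
    if new_profile == profile:
        print("fixed point reached:", profile)
        break
    profile = new_profile
else:
    print("no fixed point; trajectory:", trajectory)
```

Running the loop shows a cycle of length four through the four pure profiles: no profile, once fed back, is a best response to itself, which is exactly the sense in which "rationality" loses its grip in this setting (a stable point exists only in mixed strategies, which is where the Nash apparatus discussed later comes in).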
If agents have agreed not to face up to their immediate interdependence, this must mean that there is some hidden institutional constraint that prescribes specific "price taking"—or, more generally, "summary-variable-taking"—behavior to all participants in the system; this has led to the fiction of the "Walrasian auctioneer" and the idea that so-called perfect competition is, in fact, a centrally engineered coordination device using ultra-speed adjustment technologies so that the real-time adjustments required by the "back-and-forth" between the auctioneer and the agents can be neglected for practical purposes. The absurdity of these scenarios is apparent: parametric rationality makes sense only if every agent is effectively and fully separate from all other agents—in other words, if each agent lives in an environment where no one's actions except her own have any impact on the satisfaction of her preferences. Such fully autarchic situations are rare enough in life, and in economic systems they are nonsensical—indeed, it is the very inter-dependence of people's fates that creates and maintains the economy as a system of systems! Thus, all that we can say about the assumption of parametric instrumental rationality in economic contexts is that it is a degenerate case where each agent treats other agents' actions as if they were irrelevant to her



own choice of action. This leads us back to the idea of “zero Lebesgue measure” discussed earlier: Only if agents are effectively part of an infinite continuum of points can this “as if” reasoning be viewed as a conceptually satisfactory approximation. For practical purposes, this means that in economics—and, more generally in social science—the assumption of parametric rationality makes sense only in trivial cases where all interdependence has been resolved into an equilibrium situational variable. Although somewhat surprising, this insight is a deep one: if one assumes parametric rationality for agents, one is de facto locating one’s whole analysis within an already settled “equilibrium” situation, in the strongly functionalist sense discussed earlier. In other words, the neoclassical assumptions about rationality and about equilibrium are not separate: In Traditional Economics, axiom M.Inst is functionally subordinated to axiom M.Eq; the latter is the reason why the former is adopted. The immediate corollary of this is that if one gives up the restriction to strongly functionalist equilibrium, i.e., if one abandons the fiction of modeling people’s “optimal” choices within a system already at rest, then it becomes more problematic to assume that each agent treats all other agents’ actions as mere parameters in her choice. The reasoning is as follows: if each i knows that other people’s actions are directly relevant to her instrumental optimization, surely she must also admit that her own action is, in principle, directly relevant to all other people’s instrumental optimization; thus i ought to instrumentally take into account other agents’ reactions to her action. 
Thus, we arrive at the idea of strategic instrumental rationality: People form beliefs about other people’s actions and reactions, take actions on the basis of those beliefs, observe the discrepancy between the result they had expected and the result that actually materializes, learn by inference from that discrepancy, and adapt their beliefs, take new actions, and so on. Of course, generally speaking, such a dynamic sequence of action, observation, and cognitive adaptation could also exist in a completely parametric world where no parameter’s value depends on anyone else’s actions; learning from the discrepancy between our expected utility on the basis of our prior knowledge of the environment and our actually realized utility is something we do quite often, even when “the environment” is made up entirely of objects. In other words, imperfect information (due to unobservable elements in reality) and uncertainty (due to stochastic elements in reality) are part even of a purely material, non-human environment; so why focus here on human interdependence? The answer is that an agent’s relation to her unknown and/or uncertain non-human environment can hardly be called “strategic”: contrary to the case where you are fighting a battle against a human opponent, when you are battling material nature you do not devise strategies; you build techniques and tools to master nature, but you do not enter into strategic interaction with nature. You may, like the Chinese Taoists in their effort to “go with the flow” of their environment, come to treat human opponents like natural obstacles; but the reverse, i.e., treating natural obstacles like human opponents, is called clinical madness … Natural obstacles can pose problems of mastery due to lack of knowledge, but they are not conscious opponents—strategic interaction has an “infinite-mirror” aspect and occurs between conscious-reflexive individuals. 
(Perhaps it makes sense to say I strategically interact with a panther or a gorilla, which are non-human but conscious-reflexive animals, but such issues are not central to economics and social science more broadly.) So strategic instrumental rationality is the most general meaning of the M.Inst axiom. What is the relationship between strategic and parametric rationality? Basically, parametric rationality is a degenerate case of strategic rationality when the environment is made up of “non-reactive” parameters. Let Ei be agent i’s environment and let Mi(Ei) be the “model” she uses to formalize her environment. Finally, let M.Inst(p) and M.Inst(s) denote the parametric and strategic versions of the M.Inst axiom. We then get the following conversion formula:



[M.Inst(s) + parametric Mi(Ei)] ⇒ M.Inst(p)

Thus, what determines the parametric content of instrumental rationality is the purely parametric character of the agent's model of her environment. Independent-agent approximations are a way of making each agent's environment "artificially" and trivially parametric—in the case of the Walrasian model, we get simply Mi(Ei) = p* for all i. So it is not so much that agents' instrumental rationality is parametric but, rather, that their strategic instrumental rationality gets applied to a non-strategic environment.

C.7.2. Non-cooperative games and the dynamics of interaction

Non-cooperative games are the most natural extension of the idea that people in the economy are in interactive, hence strategic, situations. In non-cooperative games, agents do not cooperate; they do not communicate with each other or perform any prior coordination of their decisions. All agents are assumed to live in a decentralized world where they simply maximize their payoff given their knowledge of the overall situation, called the "game situation." Competitive interaction is a large part of non-cooperative games: The more I obtain, the less you will have, and vice versa. (A very specific case of competitive games is that of "zero-sum" games where the payoffs add up to zero in all possible outcomes.) However, non-cooperative games are also used to try to understand how rule-guided cooperation might emerge from initial decentralized non-cooperation. The essence of strategic interaction is "groping along in the dark": one starts to act, observes the result, learns from it by revising one's beliefs about others and perhaps even revising one's knowledge about who the others are in the first place, acts again, observes again, infers again, and so on. Each of these actions can be modeled, at the level of each agent, as a "game" against the environment, given the agent's knowledge and beliefs.
In other words, each strategically rational individual will, at the very best, construct her own representation of what she believes the “game” she is playing in looks like—either drawing up an extensive form given what she knows and believes (who is playing against her, what the opponents’ payoffs are as they conceive them, how rational she believes these opponents are and what she believes they believe about her own rationality and about her own beliefs, and so on), or summarizing this extensive form in a strategic-form game matrix that associates to every n-tuple of strategies an n-tuple of payoffs. The agent may then “solve” this, her own perceived game, in order to get an idea as to how she ought to act in her best interest. Let Gi(t) be the game as perceived by agent i at moment t, and let pi[Gi(t), σi(t)] be the optimal payoff which i expects to get in t given what she knows and believes and given the solution concept σi(t) she uses. Thus we get the following decision sequence guided by instrumental rationality: [Gi(t), σi(t)] → pi(t) → a*i(t) → ρi(t) ≠ pi(t)

(C.1)

This decision is then confronted with the reality of the strategic interaction, which—given what all the other actual agents in the game, some of whom i may not even be aware of in t, have similarly decided to do—yields for the agent some payoff ρi ≠ pi. Since by axiom M.Inst(s) all this agent cares about is her payoff, she will use this discrepancy to revise her beliefs and/or to search for and acquire new elements of knowledge about the game situation: who is playing along, what sort of decision criteria they use, and so on. This revision process, called learning, may be carried out via Bayesian methods or non-Bayesian ones; what matters most is that it does occur but may itself be very “imperfect,” in the sense that it may take a very long time for i to arrive—by exploration and connection-building—at the stage where she is



perfectly informed about all relevant aspects of the interaction. That is, it may take a near-infinite number of periods until we arrive at a stage where [Gi(t), σi(t) | pi(t–1), ρi(t–1)] → pi(t) → a*i(t) → ρi(t) = pi(t)

(C.2)

This would be a stage where, given the past discrepancy between expected and realized payoff, the agent has revised her game form and her solution concept in such a way that now, her expected payoff leads her to act in a completely self-fulfilling way given what all other agents do (which is captured in her knowledge of G and σ). Of course, even this very complicated sequential process is based on the assumption that “all” the agent has to do is discover the structure of the game which she and others are playing over repeated time periods—a structure assumed to be preexistent and already fully formed when the exploration process starts. In actual fact, as complexity and evolutionary economics have emphasized, the game form G itself changes and shifts as the various agents i explore and build it… This means that the process is not just one of discovery but of construction. The system of strategic interaction is not just sitting “out there” waiting to be discovered by each agent; it changes as the agents interact. Strategic interaction is so complicated and anxiety-generating that it is just crying out for some theoretical “pain killer” to be offered by game theorists. In fact, just as was the case for Traditional Economics earlier, one major source of appeal of game theory as a sub-discipline of applied mathematics was the need for decision-makers—for instance, army officials or corporate planners—to get their hands on analytical tools that would help put some order into the above process. In fact, if one looks at the two above formulae, one sees immediately that with a good knowledge of the structure of game G but with no idea as to how that game gets “solved” in interaction—i.e., without an idea of what the solution concept σ might be—the agent i is virtually “lost in translation.” Thus, offering a plausible solution concept for non-cooperative games has been regarded as a feat worthy of several Nobel Prizes in economics over the past two decades.
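The movement from equation (C.1) to equation (C.2) can be sketched as a minimal learning loop. The illustration below is my own (naive best-reply dynamics in a made-up 2×2 coordination game, a much cruder revision rule than Bayesian learning): each agent best-responds to her current belief, compares expected with realized payoff, and revises her belief until the expectation becomes self-fulfilling.

```python
# Sketch of the (C.1) -> (C.2) learning loop via naive best-reply dynamics
# in a 2x2 coordination game. All numbers are hypothetical illustrations.

payoff = [[2, 0],   # payoff[my_action][other_action]
          [0, 1]]

def best_reply(belief_other_plays_0):
    """Best response and expected payoff given a belief about the other."""
    b = belief_other_plays_0
    exp0 = b * payoff[0][0] + (1 - b) * payoff[0][1]
    exp1 = b * payoff[1][0] + (1 - b) * payoff[1][1]
    return (0, exp0) if exp0 >= exp1 else (1, exp1)

beliefs = [0.4, 0.9]        # each agent's initial belief about the other
history = []

for t in range(20):
    # Equation (C.1): act on current beliefs, each expecting payoff p_i(t)...
    acts, expected = zip(*(best_reply(b) for b in beliefs))
    realized = (payoff[acts[0]][acts[1]], payoff[acts[1]][acts[0]])
    history.append((acts, expected, realized))
    # ...equation (C.2): stop once expectations are self-fulfilling.
    if expected == realized:
        break
    # Otherwise revise: believe the other will repeat her observed action.
    beliefs = [1.0 if acts[1] == 0 else 0.0,
               1.0 if acts[0] == 0 else 0.0]
```

In this friendly coordination game one revision suffices; as the text stresses, in general the loop may run for a near-infinite number of periods, or the game form G may itself shift while the agents learn.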
By far the most widely used solution concept is the so-called Nash solution, pioneered by John Nash in the late 1940s and early 1950s.

C.7.3. The Nash solution and the return of the independent-agent approximation

In 1950, Nash published a two-page note in which he changed the face of Traditional Economics forever. As I have suggested above, what non-cooperative game theory has done is to uncover and put out in the open the “hidden truth” of Traditional Economics—namely, that no economic agent is ever really a parametric optimizer and that most economic situations are in fact non-cooperative games in which agents confront each other with their strategically rational actions. Here is how Nash himself described his key concepts:

One may define a concept of an n-person game in which each player has a finite set of pure strategies and in which a definite set of payments to the n players corresponds to each n-tuple of pure strategies, one strategy being taken for each player. […] Any n-tuple of strategies, one for each player, may be regarded as a point in the product space obtained by multiplying the n strategy spaces of the players. One such n-tuple counters another if the strategy of each player in the countering n-tuple yields the highest obtainable expectation for its player against the n–1 strategies of the other players in the countered n-tuple. A self-countering n-tuple is called an equilibrium point.31

Holt and Roth offer the by now standard translation of this into the language of game theory:

31 John F. Nash, “Equilibrium Points in n-Person Games”, Proceedings of the National Academy of Sciences of the United States of America 36 (1950): 48–49.



… a Nash equilibrium is a set of strategies, one for each of the n players of a game, that has the property that each player’s choice is his best response to the choices of the n–1 other players. It would survive an announcement test: if all players announced their strategies simultaneously, nobody would want to reconsider.32

Graphically, this means the following, as illustrated in figure C.4: the four agents A, B, C, and D play the mutually consistent strategy profile s* = (s*A, s*B, s*C, s*D).

[Figure C.4 – Nash equilibrium and the “announcement test”]

Contrary to what was the case in figure C.3, here the agents’ actions are all self-consistent, as expressed in Holt and Roth’s idea of an “announcement test”: if after having computed the Nash solution of the game we were to tell each agent, who does not know any features of the overall game situation, her Nash strategy, and if all agents were to play their Nash strategies simultaneously, the payment received would be such that no agent would feel she ought to have played another, different strategy. Equation (C.2) would be realized at once for each i. As the authors express it,

When the goal is to give advice to all of the players in a game (i.e., to advise each player what strategy to choose), any advice that was not an equilibrium would have the unsettling property that there would always be some agent for whom the advice was bad, in the sense that, if all other players followed the parts of the advice directed to them, it would be better for some player to do differently than he was advised. If the advice is an equilibrium, however, this will not be the case, because the advice to each player is the best response to the advice given to the other players. This point of view is sometimes also used to derive predictions of what players would do, if they can be approximated as “perfectly rational” players who can make whatever calculations are necessary and so are in the position of deriving the relevant advice for themselves.33
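The announcement test lends itself to a direct computational check. The sketch below is my own illustration, using the standard prisoner’s-dilemma payoffs rather than an example from the text: it enumerates the strategy profiles of a finite two-player game and keeps those from which no player would want to reconsider.

```python
from itertools import product

# Sketch: the "announcement test" for a finite strategic-form game.
# `payoffs` maps a strategy profile (one strategy per player) to a payoff
# tuple. Payoff numbers are the textbook prisoner's dilemma, not the text's.

def passes_announcement_test(payoffs, profile, strategy_sets):
    """True if no player gains by unilaterally deviating from `profile`."""
    for i, strategies in enumerate(strategy_sets):
        for s in strategies:
            deviation = profile[:i] + (s,) + profile[i + 1:]
            if payoffs[deviation][i] > payoffs[profile][i]:
                return False        # player i would want to reconsider
    return True

# Prisoner's dilemma: strategies C(ooperate), D(efect); payoffs (row, col).
payoffs = {
    ('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0), ('D', 'D'): (1, 1),
}
strategy_sets = [('C', 'D'), ('C', 'D')]

equilibria = [p for p in product(*strategy_sets)
              if passes_announcement_test(payoffs, p, strategy_sets)]
# equilibria == [('D', 'D')]: only mutual defection survives the test
```

Mutual cooperation fails the test precisely because each player, told the profile in advance, would want to reconsider and defect; the unique Nash equilibrium is the profile from which nobody deviates.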

This is an extremely important point. It draws attention to the fact that Nash equilibrium is not just a weakly functionalist, but a strongly functionalist notion of equilibrium: it assumes that a “frustrated” or “groping” agent [in the sense of equation (C.1) above] would do everything to unsettle the game’s result, so that only a “game situation at rest” can be in equilibrium. As we saw earlier, this is indicative of an independent-agent approximation: Holt and Roth’s “announcement test” is, in fact, such an approximation since it states that, were we to tell each agent in isolation to play her part of the Nash strategy vector s*, the outcome of the game would be immediate mutual compatibility of all strategies. In fact, as was the case for the Walrasian equilibrium price vector p* earlier, here also there is nothing else but s*: given the assumptions necessary to have an equilibrium, non-equilibrium strategies—in the strong sense of equation (C.1)—are simply not part of intelligible reality. That is the main reason why, rather astonishingly, Robert Aumann is able to simply equate the notion of Nash equilibrium with the notion of strategic instrumental rationality, confirming our earlier point

32 Charles A. Holt and Alvin E. Roth, “The Nash Equilibrium: A Perspective”, Proceedings of the National Academy of Sciences of the United States of America 101 (2004): 3999–4002, p. 3999.
33 Ibid.



that from the point of view of independent-agent approximations, axiom M.Inst is functionally subordinated to axiom M.Eq: The Nash equilibrium is the embodiment of the idea that economic agents are rational; that they simultaneously act to maximize their utility. If there is any idea that can be considered the driving force of economic theory, that is it. Thus in a sense, Nash equilibrium embodies the most important and fundamental idea of economics, that people act in accordance with their incentives.34

Aumann’s candid conflation of axioms M.Inst and M.Eq is the direct consequence of the fact that the Nash solution concept is an independent-agent approximation of an interdependent-agent situation. Let us be careful, however. We saw above that while such an approximation makes sense when agents can be imagined to receive—through some centralized agency or through the economy’s information-dissemination mechanisms—a “summary variable” such as price, it makes no sense when agents are directly taking into account each other’s actions. Thus, in figure C.4, there seems to be something crucial missing to justify the independent-agent approximation, or—equivalently—the centralized interdependence inherent in the Nash solution. The missing element is to be found in what is one of the most crucial assumptions in post-neoclassical game theory, namely the assumption of common knowledge of rationality (CKR). Recall that in order to imagine a gradual process of convergence from equation (C.1) towards equation (C.2), there had to be many complicated revisions, by the agent, of her beliefs concerning what others will do, as well as concerning what they believe she believes, etc. The CKR assumption cuts this very long story short:

… expectations regarding what others will do are likely to influence what it is (instrumentally) rational for you to do. Thus fixing the beliefs that rational agents hold about each other is likely to provide the key to the analysis of rational action in games. The contribution of CKR in this respect comes in the following way. If you want to form an expectation about what somebody does, what could be more natural than to model what determines their behavior and then use the model to predict what they will do in the circumstances that interest you?
You could assume the person is an idiot or a robot or whatever, but most of the time you will be playing games with people who are instrumentally rational like yourself and so it will make sense to model your opponent as instrumentally rational. This is the idea that is built into the analysis of games to cover how players form expectations: We assume that there is common knowledge of rationality held by the players [which means that] I know that you are instrumentally rational and since you are rational and know that I am rational you will also know that I know that you are rational and since I know that you are rational and that you know that I am rational I will also know that you know that I know that you are rational and so on… […] Formally it is an infinite chain as follows:
(a) each person is instrumentally rational
(b) each person knows (a)
(c) each person knows (b)
(d) each person knows (c)
…and so on ad infinitum.35
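The CKR chain can be made operational: each further level of “I know that you know…” licenses one more round of eliminating strictly dominated strategies. The sketch below is my own construction, with made-up payoffs (not an example from the text): a 2×3 game in which successive rounds of elimination, each relying on one more level of the chain, leave a unique surviving outcome.

```python
# Sketch: iterated elimination of strictly dominated strategies, a classic
# operationalization of the CKR chain. Payoff numbers are made up.

def strictly_dominated(payoff, mine, others):
    """Strategies in `mine` strictly dominated by some other own strategy."""
    dominated = set()
    for s in mine:
        for t in mine:
            if t != s and all(payoff(t, o) > payoff(s, o) for o in others):
                dominated.add(s)
    return dominated

def iterated_elimination(p1_payoff, p2_payoff, s1, s2):
    s1, s2 = set(s1), set(s2)
    while True:
        d1 = strictly_dominated(p1_payoff, s1, s2)
        d2 = strictly_dominated(p2_payoff, s2, s1)
        if not d1 and not d2:
            return s1, s2        # nothing more is eliminable at any level
        s1 -= d1                 # each deeper pass uses one more level of
        s2 -= d2                 # "I know that you know that I know..."

# A 2x3 example: row player picks U/D, column player picks L/M/R.
P1 = {('U', 'L'): 1, ('U', 'M'): 1, ('U', 'R'): 0,
      ('D', 'L'): 0, ('D', 'M'): 0, ('D', 'R'): 2}
P2 = {('U', 'L'): 3, ('U', 'M'): 2, ('U', 'R'): 1,
      ('D', 'L'): 1, ('D', 'M'): 2, ('D', 'R'): 0}

p1 = lambda s, o: P1[(s, o)]     # player 1's payoff: (own row, other's column)
p2 = lambda s, o: P2[(o, s)]     # player 2's payoff: (own column, other's row)

surviving = iterated_elimination(p1, p2, {'U', 'D'}, {'L', 'M', 'R'})
# surviving == ({'U'}, {'L'}) after three elimination rounds: R, then D, then M
```

Without common knowledge the later rounds would not be licensed: player 1 can drop D only because she knows player 2 is rational enough to have dropped R, and so on up the chain.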

Together with the equally crucial assumption that each player knows the extensive—or at least the strategic—form of the “game situation,” CKR generates an independent-agent approximation based on two very strong, but nearly always implicit ideas: first, that somehow all agents have been taught the form G, and second, that all agents have been immersed in a “culture of shared instrumental rationality.” Apart from that, there is nothing that differentiates figure C.4 from figure C.3. In both cases, we have a centralized interdependence (either through an “auctioneer” or through extensive shared knowledge) masquerading as

34 Robert J. Aumann, “What is Game Theory Trying to Accomplish?” (1985), reprinted in R. J. Aumann, The Collected Papers, volume 1, Cambridge, MA: MIT Press, 2000, pp. 5–46, quote from p. 19.
35 Shaun Hargreaves-Heap and Yanis Varoufakis, Game Theory: A Critical Text (London: Routledge, 2004), p. 27.



mutual independence. Despite being initially prepared to take explicit account of direct, face-to-face interactivity between agents, non-cooperative game theory has ended up sacrificing once more the full implications of the M.Inst axiom to a strongly functionalistic version of the M.Eq axiom. The main reason why this has occurred should be familiar to us by now, and it is well expressed in Holt and Roth’s earlier idea that for purposes of prediction and empirical-analytical mastery of social reality, agents may “be approximated as ‘perfectly rational’ players who can make whatever calculations are necessary and so are in the position of deriving the relevant advice for themselves.” An M.Inst axiom functionally subordinated to the M.Eq axiom is instrumental to game theory’s pretension of being a tool for management-oriented comprehension; this is what Aumann has in mind when he writes, somewhat cryptically, that game theory purports to describe not Homo sapiens, but Homo rationalis; and that it actually is descriptive of Homo sapiens only to the extent that he can be modeled by Homo rationalis. On the other hand, when we come to advise people, it is clear that we should give them rational, utility-maximizing advice, i.e., precisely what Homo rationalis would do; so that the two aspects [descriptive and normative] are in this sense quite close.36

In a sense, the Nash solution concept is merely an expression of the “hidden truth” of Traditional Economics and its focus on perfect competition—a focus that was due to the fact that Traditional Economists believe a “perfect” market to be one in which no direct interaction takes place so that there is never any strategic reasoning. This focus has been shown to be abusive by game theorists, who have driven home the very important point that such “perfect” markets are highly specific, and usually degenerate, sub-cases of more general non-cooperative games. It is this characteristic of Nash equilibrium being a “generalizer” of much of neoclassical theorizing, and more generally of the TE view of society, that has led Roger Myerson to make a grand claim: … Nash’s theory of noncooperative games should now be recognized as one of the outstanding intellectual advances of the twentieth century. The formulation of Nash equilibrium has had a fundamental and pervasive impact in economics and the social sciences which is comparable to that of the discovery of the DNA double helix in the biological sciences.37

Similarly, Holt and Roth claim that “game theory, with the Nash equilibrium as its centerpiece, is becoming the most prominent unifying theory of social science.”38

C.7.4. The flaws of game theory and the advent of bounded rationality

The realization that strategic rationality is one of the “bedrocks” of interaction in any economic system has come at a cost—that of having to reduce and eliminate the interactivity in order to focus on the solution of the interaction, the “rest” situation that would be attained if we could assume that all truly strategic interactions had played themselves out (this was equation (C.1)) and had led to a situation of perfect mutual compatibility of actions, beliefs about actions, beliefs about beliefs, and so on (this was equation (C.2)). Symptomatically, this has led even cutting-edge game theorists such as Joseph Greenberg to focus on what they call “social situations”39 rather than on social systems and processes—the difference being that a

36 Robert J. Aumann, “What is Game Theory Trying to Accomplish?”, loc. cit., p. 14.
37 Roger B. Myerson, “Nash Equilibrium and the History of Economic Theory”, Journal of Economic Literature 37 (1999): 1067–82, p. 1067.
38 Charles A. Holt and Alvin E. Roth, “The Nash Equilibrium: A Perspective”, art. cit., p. 3999.
39 Joseph H. Greenberg, The Theory of Social Situations: An Alternative Game-Theoretic Approach (Cambridge: Cambridge University Press, 1990).



“situation” is a cross-section of a process in which all the relevant variables have fully adjusted. What non-cooperative game theory has allowed us to discover is that once one takes into account the strategic nature of instrumental rationality, any “situation”—such as the one modeled through the solution concept of Nash equilibrium—is in fact a momentary step in a process. The difficulty with usual game-theoretic settings, and this shows that they are still very much heirs to the TE-paradigm, is that the strongly functionalist notion of equilibrium employed allows basically only two ways to introduce “unrest”: either an exogenous shock to the players’ preferences or to the game’s structure (leading in both cases to an exogenous change in the structure of the game’s payoffs), or a repetition of the game at given preferences and structure. The exogenous-shock approach leads to the well-known practice of comparative statics: what is the “post-shock” Nash equilibrium and how does it compare with the “pre-shock” one? The repetition approach leads to a dynamic path of sorts, but a largely exogenous one: unless the interaction itself generates the motivations for players to repeat the game, all we will have is a modeler-engineered sequence of the same game played over and over again. This makes sense only if one assumes that there is an (again) exogenous social process forcing the agents, so to speak, to go through repetitions of standardized games. To summarize, the way game theory has tended to view situations and their possible succession has been—somewhat like the “temporary equilibrium” approach within Traditional Economics—characterized by a lack of emphasis on both (a) the endogeneity of the process of mutual adjustment and (b) the endogeneity of the interaction structure itself. Comparative statics and repeated games are clear symptoms of this neglect of endogeneity. 
What economists focus on is how a given interaction structure, defined by a game form G, “resolves itself” into a profile of mutually compatible actions, beliefs, and meta-beliefs of n degrees. This profile has to be shown to exist (which explains the space taken up in formal economic analysis by “existence theorems”), and the analysis then goes on to assume that reality is structured “as if” the existing equilibrium were in place, sui generis. The endogeneity of mutually adjusting actions can thus be “summarized” in an instant equilibrium: actual interactivity is relinquished and time is viewed as a sequence of exogenized “nutshell instants” connected by exogenized “propelling technologies”—either exogenous shocks or exogenous instructions to repeat the game. Alan Kirman has had the following critical things to say about this state of affairs—and note that his point exactly mimics the difference we established earlier between equations (C.1) and (C.2): There are a number of ways of moving from considering a static equilibrium notion to that of studying the dynamic evolution of […] systems […]. One approach is to think of a repeated series of markets or games, each of which is the same as its predecessor and in which the payoffs of any specific action in the current game are unaffected by the players’ actions in previous rounds. […] [Another approach is to ask] under what circumstances players will, by simply learning from previous experience, converge to some particular state which would correspond to the equilibrium that would have been obtained by much more complicated game theoretic reasoning. The difference between the two approaches, which [can be] described as “eductive” and “evolutive,” should not be underestimated. […] When agents take account of the consequences of their own actions and those of other agents for current and future payoffs the situation becomes highly complex. There are two ways of dealing with this. 
Either one can try to solve the full-blown equilibrium problem ab initio or, alternatively, one might ask whether players would learn or adjust from one period to the other and whether such behavior would converge to any specific outcome. Then one can compare the limit point with an equilibrium that might have occurred if all the actors had solved the problem at the outset. The idea that individuals learn in relatively simple ways from their own experience and that of



others is more persuasive than the alternative idea that they solve highly complicated maximization problems involving calculations as to their own course of action and that of others.40

Notice here again that Kirman, just like Aumann earlier, conflates rational action and equilibrium action: the reference point—the ab initio equilibrium—is “an equilibrium that might have occurred if all the actors had solved the problem at the outset,” and this makes sense only if one assumes “that they solve highly complicated maximization problems involving calculations as to their own course of action and that of others.” In other words, it is the demands of equilibrium analysis that make agents’ supposed “calculations” so difficult: they need to be assumed to be as smart as the economist himself (!) in order for their “rational” actions to make sense. M.Inst(s) functionally subordinated to M.Eq implies that each agent i immediately plays her i-coordinate s*i of s*, and this puts very stringent demands on agents’ calculative capacities if we want to assume, in accordance with CKR, that they “can make whatever calculations are necessary and so are in the position of deriving the relevant advice for themselves” (in the previously quoted words of Holt and Roth). What is the main problem here, according to Kirman? In both Traditional Economics and in game theory, agents are taken to be “too rational” because they are assumed to be theorists with an economist’s or a game theorist’s mind: they can solve games in such virtuoso ways that it seems they have the whole system inside their minds and can act in isolation from one another, so that actual hands-on interaction becomes a secondary feature of existence for them. In fact, actual interaction is forbidden in most models of strategic interaction! That is because the strongly functionalist version of M.Eq has taken priority over the complex dynamics that would be generated by the actual, full-blown exercise of M.Inst(s). 
In essence, both Traditional Economists and game theorists are telling us, through their fundamental explanatory strategy, that reality can be considered rationally intelligible only if one assumes that individuals are merely the functional “servants” to an overall order that has been pre-inscribed into them: they are, in fact, not really individuals but “monads” in the sense of Leibniz, formal analytical entities deduced from the M.Eq axiom in order for the Whole to make sense. This explains the strong criticism expressed by Hayek against Traditional Economics (a criticism he would no doubt have leveled at game theory as well): a science based on functionally subordinating M.Inst to M.Eq is actually not an individualistic but a holistic science—so that Traditional Economics and game theory might in fact be considered to violate the M.Ind axiom! The notions of systemic order imposed by Traditional Economics and by Nash-game theory are simply too stringent and make individuals into mere puppets of the system’s “need for equilibrium.” Individual rationality is really “rationality-for-equilibrium.” This offers us the occasion to insist again on an important, and often neglected, point: One can speak routinely of “individuals” and still be reductionist. Just because economists speak of “individuals” when explaining the working of an economic system does not mean they are exiting the mechanistic realm; in Traditional Economics and game theory, the way in which such “individuals” are modeled is simply dictated by the imperative of intra-systemic explanation. The homo economicus is therefore not an individual in the moral and political sense. He is a formal entity constructed for the purposes of respecting an ontology of equilibrium-as-system-at-rest that molds the way agents must act.
No wonder a significant minority of Traditional Economists, including Kirman himself, have long suspected that the TE-paradigm as well as the Nash solution concept are shortcuts to the theory of a centrally planned

40 Alan P. Kirman, “The Economy as an Interactive System”, in W. Brian Arthur, Steven N. Durlauf and David A. Lane (eds), The Economy as an Evolving Complex System II (Cambridge, MA: Perseus, 1997), pp. 491–531, quote from pp. 493–494 passim.



economy. A “truly” individualistic society—and economy—is one in which aggregate reality emerges from, rather than being the precondition for, individual rationality. This implies, however, that a weaker notion of equilibrium, a weakly functionalist version of M.Eq has to be found—one that no longer “pre-formats” M.Inst(s) and therefore becomes truly compatible with M.Ind. The answer has been found in the alternative approaches that developed after the 1950s but gained prominence only after the 1990s, namely the approaches gathered under the name of bounded rationality and adaptive rationality. One of the pioneers of this direction has been Herbert Simon, who was awarded the Nobel Prize in 1978. One of his many contributions is to have offered the notion of “bounded rationality,” rooted in the criterion of “satisficing,” as an alternative to the usual interpretation of the M.Inst(s) axiom as Nash-equilibrium-optimization. Simon’s basic idea is that any economic phenomenon arises out of a multitude of actual interactions which are inscribed within what he calls “artificial” systems—by which he simply means man-made, as opposed to natural, systems. He is therefore completely in line with the cybernetic, engineering view of economics; and he was in fact himself educated in engineering and did a lot of work in the areas of computing and Artificial Intelligence. Individuals are viewed—correctly but partially—as “intelligent systems” and the economy as an artificial coordination system. An absolutely crucial assumption in the whole bounded-rationality literature is that, in line with the Hayekian critique, individuals do not have, and do not attempt to acquire, “global” knowledge of the world but are restricted to “local” environments. 
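Simon’s “satisficing” criterion is easy to contrast with full optimization in code. The toy sketch below is my own construction (the option values and the aspiration level are made up): the satisficer searches sequentially and stops at the first option that clears her aspiration level, while the maximizer must inspect every option.

```python
# Toy sketch: satisficing vs. maximizing over a sequential search.
# All numbers are illustrative, not drawn from Simon's own examples.

def satisfice(options, aspiration):
    """Stop at the first option meeting the aspiration level (bounded search)."""
    inspected = 0
    for value in options:
        inspected += 1
        if value >= aspiration:
            return value, inspected
    return None, inspected          # aspiration never met: search exhausted

def maximize(options):
    """Global optimization: must inspect every option before deciding."""
    options = list(options)
    return max(options), len(options)

payoffs = [3, 7, 2, 9, 5, 8, 1, 6]

sat_value, sat_cost = satisfice(payoffs, aspiration=6)   # settles for 7
max_value, max_cost = maximize(payoffs)                  # finds 9

# The satisficer accepts 7 after inspecting only 2 options; the maximizer
# finds 9 but only after inspecting all 8.
```

The contrast is informational: the satisficer economizes on exactly the scarce resources (inspection and computation) that bounded rationality takes seriously, at the price of forgoing the global optimum.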
Agents use their reason in three sorts of ways: to discover the main features of their environment, they use cost- and cognition-constrained exploration devices; to discover the kind of adaptive behavior suited for their environment, they use their procedural rationality; and to actually adjust to their environment through calculation and decision, they use their substantive rationality. Thus, individual rational action is no longer just the immediate “application” of a “contemplative” knowledge of the world—of a world that manifests as intelligible equilibrium. Here, on the contrary, agents are essentially exploration devices which initially can make almost no sense of what is around them; they grope around and very slowly gather up the limited information which their limited capacities for reasoning and computation allow them to process. Here is how Simon formulates his point:

Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behavior over time is largely a reflection of the complexity of the environment in which we find ourselves. […] [B]ehavior is adapted to goals, hence is artificial, hence reveals only those characteristics of the behaving system that limit the adaptation. […] The evidence is overwhelming that the [human information-processing] system is basically serial in its operation: that it can process only a few symbols at a time and that the symbols being processed must be held in special, limited memory structures whose content can be changed rapidly. […] The claim [is] that the human cognitive system is basically serial […].41

We notice immediately that Simon is claiming to be basing his ideas on brain science and cognitive science. This is indeed a central characteristic of the more recent advances in the post-neoclassical paradigm. Let us briefly investigate the implications of Simon’s position. First of all, notice that he qualifies the cognitive functioning of humans as that of “behaving systems”—not, for instance, of behaving animals or of behaving organisms. In fact, the focus on artificial systems means that Simon (who admits to this quite lucidly) is reducing biological-organic features to computational ones. While he views the economic systems as artificial because they are obviously man-made (if not man-mastered), he also reduces the cognitive organism to an artificial system by reinterpreting the functioning of the brain as that of a Turing machine which sequentially accesses bits of environmental information; in this

41 Herbert A. Simon, The Sciences of the Artificial, 3rd ed. (Cambridge: MIT Press, 1996), pp. 80–81.



way, Simon attempts to ground economics in a generalized systems science. While he certainly does not literally believe that the human brain is a digital computer, he nevertheless completely artificializes the biological brain—and the derived behaviors—by inserting it into a generalized science of artificial (i.e., man-made and/or man-interpreted) systems. As JeanPierre Dupuy has quite rightly noted, this means that the underlying scientific project is that of a “mechanization of the mind”: The mind, or rather each of the various faculties that make it up […], was conceived as a Turing machine operating on the formulas of a private, internal language analogous to a formal language in logic. The symbols—the marks written on the tape of the machine—enjoy a triple mode of existence: they are physical objects (being embodied in a neurophysiological system) and therefore subject to the laws of physics (in the first instance those of neurophysiology, which is supposed to be reducible to physics); they have form, by virtue of which they are governed by syntactic rules (analogous to the rules of inference in a formal system in the logical sense); and, finally, they are meaningful, and therefore can be assigned a semantic value or interpretation. The gap that would appear to separate the physical world from the world of meaning is able to be bridged thanks to the intermediate level constituted by syntax, which is to say the world of mechanical processes—precisely the world in which the abstraction described by the Turing machine operates. The parallelism between physical processes subject to causal laws and mechanical processes carrying out computations, or inferential or syntactical operations, ceases to seem mysterious once one comes round to the view that the material world contains physical versions of Turing machines: computers, of course, but also every natural process that can be regarded as recursive. 
[…] Turing-style functionalism constitutes the heart of what is called “cognitivism,” which remains today the dominant paradigm of the sciences of cognition [and which claims] that thought, mental activity, this faculty of the mind that has knowledge as its object, is in the last analysis nothing other than a rule-governed mechanical process, a “blind”—might one go so far as to say “stupid”?—automatism.42

Thus, in Simon’s perspective, the economy is indeed a system, but one whose individual components are themselves to be viewed as systems: as thinking automata who use their powers of symbolic computation for the sake of adapting to their local environments in “satisficing” ways. In other words, given the feeble powers of the human computer and its sequential access to data with a very limited memory, bounded rationality has to replace (Nash) equilibrium rationality viewed as the agent’s ability to compute “on his own” the equilibrium of the games in which he acts. But even though the Simon-type automata are characterized by so-called “bounded” rationality, they are in fact significantly more sophisticated than the TE-type automata because they explore their environment and discover interactive connections through adaptive interaction with other, similar automata—whereas the TE-type automaton receives the data from its environment. The Simon-type agent is still a programmed automaton—it has an “internal model” created by its programmer—but it is much less pre-programmed than its TE counterpart, so that it is much more sophisticated in its behavioral patterns. However, in standard competitive-market settings the aggregate results of the interaction of such “stupid” little machines closely mimic what happens in independent-agent approximations: There have been many recent laboratory experiments on market behavior, sometimes with human subjects, sometimes with computer programs as simulated subjects. Experimental markets in which the simulated traders are “stupid” sellers, knowing only a minimum price below which they should not sell, and “stupid” buyers, knowing only a maximum price above which they should not buy, move toward equilibrium almost as rapidly as markets whose agents are rational in the classical sense.43
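The kind of experiment Simon cites can be sketched along the lines of Gode and Sunder’s “zero-intelligence” traders. The code below is a loose illustration, not the actual experimental design: the valuations, price cap, and function names are invented for the sketch. Budget constraints alone, with no optimization and no learning, pull transaction prices toward the competitive level.

```python
import random

random.seed(1)

def zi_double_auction(values, costs, rounds=2000):
    # "Stupid" traders: a buyer bids any price below his private value,
    # a seller asks any price above her private cost; a trade happens
    # whenever a randomly drawn bid meets a randomly drawn ask.
    prices = []
    for _ in range(rounds):
        v = random.choice(values)          # this round's buyer
        c = random.choice(costs)           # this round's seller
        bid = random.uniform(0, v)         # budget constraint, nothing more
        ask = random.uniform(c, 200)       # 200 is an arbitrary price ceiling
        if bid >= ask:
            prices.append((bid + ask) / 2)
    return prices

# Valuations chosen so that supply and demand cross near a price of 100.
values = [150, 140, 130, 120, 110]
costs = [50, 60, 70, 80, 90]
prices = zi_double_auction(values, costs)
mean_price = sum(prices) / len(prices)
print(round(mean_price, 1))    # typically close to the competitive price
```

The point of the sketch is the one Simon makes: no trader knows anything beyond a single reservation price, yet the aggregate outcome mimics what “classically rational” agents would produce.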

Actually, it is puzzling that Simon should present such results (drawn in particular from laboratory experiments, about which we will say more in section C.7.6) as surprising or interesting. In fact, it all depends on what we mean by “rational in the classical sense.” Agents who react mechanically to prices that lie within some interval of values are more, but barely more, sophisticated than agents whose supposed “rationality” consists—as in the TE-approach—in reacting to one single preordained “equilibrium” price. Thus, if by “rational in the classical sense” Simon means agents who directly act in equilibrium, the experimental results with “stupid” agents are not really surprising: such agents are doing barely more calculating than “already-in-equilibrium” agents. However, if by “rational in the classical sense” he means agents who have beliefs about the game form and beliefs about the solution concept, and who engage in a real-time dynamic process of learning about and revising their beliefs (as in equation (C.2) above), then what is surprising is the fact that repeated market games with “stupid” agents converge on the equilibrium “almost as rapidly” as, and not much more rapidly than, the games where there is real dynamics. Hence, under both meanings of “rational in the classical sense,” the fact that stupid agents converge almost as quickly as classical agents is not surprising—it just confirms the idea that independent-agent approximations are more conceptually comfortable and analytically tractable than truly interactive models… The significance of Simon’s approach is that it uncovers another “hidden truth” of the TE-paradigm. As we saw earlier, game theory uncovered the hidden truth that in the TE-paradigm instrumental rationality is never parametric and is always strategic, and that it is only agents’ perception of their environment as parametric that makes their strategic rationality seem parametric.

42 Jean-Pierre Dupuy, The Mechanization of the Mind: On the Origins of Cognitive Science (Princeton: Princeton University Press, 2002), pp. 38-39.
43 Herbert A. Simon, The Sciences of the Artificial, op. cit., p. 32.
Now, we see that in the TE-paradigm instrumental rationality is never infinite and complete and is always bounded and incomplete, and that it is only the independent-agent approximation—which allows boundedly rational agents to react to one single parameter computed for them by the all-knowing economist—that makes their bounded rationality seem unbounded. To put it differently, in Traditional Economics as well as in the Nash-solution approach, the enforcing of an independent-agent approximation makes bounded and unbounded rationality observationally equivalent:

[IAA, “in-equilibrium action”] ⇒ [bounded rationality ⇔ unbounded rationality]

With a strongly functionalist, timeless equilibrium approach such as that which has pervaded Traditional Economics all the way into its game-theoretic extensions, one can treat boundedly rational or even “stupid” agents as if they had by chance “stumbled upon” the full equilibrium solution computed for them by an economist—or, if the latter is also too “stupid,” by a powerful computer. Once this scenario is in place, it even allows economists and game theorists to entertain the fiction that, to use Holt’s and Roth’s phrase yet again, “‘perfectly rational’ players […] can make whatever calculations are necessary and so are in the position of deriving the relevant advice for themselves.” This fiction crumbles to dust as soon as we ask—as Hayek, Simon, and their followers have asked—how these agents could arrive at that solution alone, in real time, without an economist or a central computer looking out for the self-coherence of their experienced environment. The implication has been yet another deep revision of the basic interpretation of the axioms M.Inst(s) and M.Eq. The most basic feature of bounded-rationality economics is its reinterpretation of strategic instrumental rationality in a very modest, “local” way that replaces Nash calculation by a “satisficing,” “myopic,” adaptive way of thinking. As Simon expresses it,

Finding a local maximum is usually easy: walk uphill until there is no place to walk. Finding the global maximum, on the other hand, is usually exceedingly complex unless the terrain has very special properties (no local maxima). The world of economic affairs is replete with local maxima. It is quite easy to devise systems in which each subsystem is optimally adapted to the other subsystems around it, but in which the equilibrium is only local, and quite inferior to distant equilibria that cannot be reached by the up-hill climb of evolution. […] [F]rom the fact that an economic system is evolving, one cannot conclude that it has reached or is likely to reach a position that bears any resemblance to the equilibria found in the theory of perfect competition. Each species in the ecosystem is adapting to an environment of other species evolving simultaneously with it.44
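Simon’s “walk uphill until there is no place to walk” can be made concrete with a small sketch (purely illustrative code; the landscape, names, and numbers are my own, not Simon’s): greedy uphill movement halts at whatever peak is nearest, however inferior it is to distant ones.

```python
def hill_climb(f, x, lo, hi):
    # "Walk uphill until there is no place to walk": move to a better
    # neighbor while one exists, then stop.
    while True:
        neighbors = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x          # no uphill step left: a (possibly local) maximum
        x = best

# A terrain with an inferior local peak at x=3 and the global peak at x=12.
landscape = [0, 2, 4, 5, 3, 1, 0, 2, 4, 6, 7, 8, 9, 7, 0]
f = lambda x: landscape[x]

print(hill_climb(f, 1, 0, 14))   # 3: the climber is stuck on the local maximum
print(hill_climb(f, 10, 0, 14))  # 12: the global peak, reached only from nearby
```

Where the climber ends up depends entirely on where it starts, which is exactly Simon’s point against reading evolutionary adaptation as convergence to the global optimum of perfect-competition theory.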

Thus, Simon’s work allows us to write down the following crucial idea: a strategic rationality that is bounded becomes a form of adaptive rationality.

[bounded rationality] ⇒ [M.Inst(s) ≡ adaptive instrumental rationality]

Now Simon was no adversary of markets. In fact, he viewed them as formidable, evolutionarily emerging “solutions to the central problem of accommodating to our bounded rationality”.45 What he rejected, however, just like Hayek, is the idea that the way in which markets coordinate agents’ actions can be modeled through independent-agent approximations:

… the evolution […] of economies does not lead to any easily predictable equilibrium, much less an optimum, but is a complex process, probably continuing indefinitely, that is probably best understood through an examination of its history. As in any dynamic system that has propensities for following diverging paths from almost identical starting points, equilibrium theories of an economy can tell us little about either its present state or its future.46

This means effectively that once we accept that strategic rationality is fundamentally adaptive and that economic institutions are “solutions” to this “defect,” we must replace the independent-agent approximation—which is really a form of indirect interdependence—by interdependent-agent description.

[M.Inst(s) ≡ adaptive instrumental rationality] ⇒ [IndAA’s replaced by IntAD’s]

This shift has two aspects. (a) We replace “independence as indirect interdependence” (Ind) by “direct interdependence” (Int), which is what game theory already wanted to do but shied away from, as we saw. (b) As a corollary, we shift from an “approximation” (A) to a “description” (D): the models of strongly functionalist equilibrium (= system at rest) have to be replaced by models of … something else, which has to respect the basic need for weakly functionalist equilibrium (= ordered system) while giving up the idea that in equilibrium no agent has an incentive to change his behavior. What is this “something else”? To put it briefly, it will be called emergence from adaptation. It is a concept of “out-of-equilibrium order” that relies on one key idea: once rationality is adaptive within a myopically experienced environment, any number of agents may have an incentive—a strong desire even—to leave the situation that has emerged from their interactions with others. Failure, crushed expectations, unfulfilled aspirations—all this plays a crucial role in actual economic life and accounts for the dynamics of innovation and competition which, for a pessimistic liberal, are the core of social interaction. This, however, implies something quite significant which Traditional Economics as well as Nash game theory have consistently ignored: market outcomes, whether perfectly or imperfectly competitive, emerge as messy interactive processes rather than “pop into the world” as neat equilibrium situations. The weakly functionalist idea of equilibrium as order of course still requires something to adjust so that reality can make sense; reality, that is, cannot be self-contradictory; it has to be an ordered, consistent reality—but what adjusts along gradual processes of emergence is not a small set of centralized equilibrium variables undergoing exogenous shocks. Rather, complex processes of emergence are propelled by learning, error, suffering, desire, and so on: our individual perceptions of the world are what mainly adjusts to the chaos of disequilibrium prices and quantities, of failed products and inadequate technologies, and so on.

Thus, the new research in economics launched by Simon’s bounded-rationality program amounts to breaking down the functional subordination of M.Inst to M.Eq. Simon’s starting point is that if we are going to take the limits of agents’ rationality seriously, we need to reform the content of the M.Inst(s) axiom, independently of what we would like reality to look like; and this, then, implies that we give up the too stringent content of the M.Eq axiom: “order” can be, and has to be, conceptualized without independent-agent approximations that tie the actions people take to the sort of “equilibrium” we—as economists—want them to generate. Instead, recognizing that M.Inst(s) is in fact an axiom of adaptive rationality, which implies myopic agents in environments that are initially too large and complex for them, forces us to replace “M.Eq as Nash equilibrium” by “M.Eq as emergence”:

[M.Inst(s) ≡ “local” adaptive rationality] ⇒ [M.Eq ≡ emergence processes, evolutionary paths]

44 Ibid., p. 47.
45 Ibid., p. 49.
46 Ibid., p. 48.

These crucial shifts in conception have given rise to a thriving research domain which, for lack of a better expression, can be called “evolutionary complexity economics.” It has many strands and cannot possibly be studied exhaustively in all its details in the short time and scant space we have here. However, we do need to gain some understanding of this project because it is currently spreading and becoming the “front line” of much of post-neoclassical research.47

C.7.5. Elements of complexity economics

The turn of economics to complexity theory is due in large part to the increasing influence of both cognitive science in biology and psychology and “emergence” approaches in the explanation of aggregate phenomena, whether they are natural or social. As we saw with the development of Simon’s work, there has been a strong impact of cognitive science on Traditional Economics. Local cognition and adaptive rationality have crowded out unbounded rationality. The complexity approach has today become intimately connected with the name of the Santa Fe Institute in New Mexico and its exploration of complexity in all its dimensions. A small but fast-growing group of post-neoclassical economists is seeking to model emerging phenomena and qualitative leaps in dynamic trajectories. The intent is to move away from independent-agent approximations and to really—finally—take the heterogeneity of agents and their direct interactions seriously. The independent-agent formalism can in fact support neither interactive resource allocation nor interactive exploration; but as Jason Potts has perceptively shown,48 allocation can be mimicked by a non-interactive, “full-field” fiction (which is the essence of the independent-agent approximation), whereas exploration is intrinsically an “incomplete-network” affair. Both interactive resource allocation and interactive exploration require a network view of economic life.49 Kirman argues that this is indeed how truly interactive models should be built:

One way of bringing back finiteness of dependence and, thus, to restore the unique relationship between individual and aggregate probability laws is by assuming that agents are influenced by a finite number of “neighbors”; but this requires the specification of a graph-like structure on the agents and the study of local interaction […].50

47 For analyses of how the complexity project fits into today’s overall landscape in front-line economics, see the studies gathered in David C. Colander (ed.), The Complexity Vision and the Teaching of Economics (Cheltenham: Edward Elgar, 2000).
48 Jason Potts, The New Evolutionary Microeconomics, op. cit.
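Kirman’s graph-theoretic suggestion can be given a minimal sketch (illustrative code, with an assumed ring network and a crude imitation rule of my own choosing): agents linked only to their nearest neighbors end up locally coordinated, an aggregate pattern that no single agent aims at.

```python
import random

random.seed(0)

def local_interaction(n=30, steps=2000, k=1):
    # Agents sit on a ring, each linked only to its k nearest neighbors
    # on either side (Kirman's "graph-like structure on the agents").
    # Each step, one randomly chosen agent adopts its local majority
    # choice; on a tie it keeps its current choice (inertia).
    state = [random.choice([0, 1]) for _ in range(n)]
    for _ in range(steps):
        i = random.randrange(n)
        nbrs = [state[(i + d) % n] for d in range(-k, k + 1) if d != 0]
        if sum(nbrs) * 2 != len(nbrs):
            state[i] = int(sum(nbrs) * 2 > len(nbrs))
    return state

final = local_interaction()
# After enough rounds, no agent disagrees with both of its neighbors:
# choices have organized into locally coherent clusters.
walls = sum(final[i] != final[(i + 1) % 30] for i in range(30))
print(walls)
```

Nothing global is computed anywhere in the loop; whatever coordination appears is produced entirely by the finite-neighborhood structure of the graph.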

If we look back to figure C.1, this sort of graph is what Potts and Kirman have in mind: a graph in which each agent can be represented as a node connected to some—but not necessarily all—other nodes by links that represent the agent’s connections within the social network in which he is evolving. Adaptive instrumental rationality means that these connections have to be looked for locally at first, then constructed and consolidated, then generalized through other connections, and so forth. The “fully covering” network drawn in figure C.1 is already itself a limit case of an initially much less tight network. Thus we have two main components of the truly interactive model: (1) Given each “local” network nk, the interactions between agents generate emergent phenomena specific to that network. Within the set of all networks, N = {n1, …, nK}, there are thus K simultaneously arising emergences; these may be prices, quantities, worldviews, or anything else that “comes out of” agents’ interactions in a network. (2) Agents also launch explorations across networks, and this exploration generates network-extension phenomena, which means the creation of a modified set N, and so on. The cycle going from (1) to (2) is what propels the general dynamics of the economy: intra-network emergences combined with cross-network extensions. Emergence is a very specific phenomenon of ordering, linked to the idea that individual constituents in a system often generate aggregate results whose properties could not be predicted from the properties of the constituents. Sunny Auyang has described it as follows: Large-scale composition is especially interesting because it produces high complexity and limitless possibility. […] Myriad individuals organize themselves into a dynamic, volatile, and adaptive system that, although responsive to the external environment, evolves mainly according to its intricate internal structure generated by the relations among its constituents. 
In the sea of possibilities produced by large-scale composition, the scope of even our most general theories is like a vessel.[…] Large composite systems are variegated and full of surprises. Perhaps the most wonderful is that despite their complexity on the small scale, sometimes they crystallize into large-scale patterns that can be conceptualized rather simply […]. These salient patterns are the emergent properties of compounds. Emergent properties manifest not so much the material bases of compounds as how the material is organized. Belonging to the structural aspect of the compounds, they are totally disparate from the properties of the constituents, and the concepts about them are paradoxical when applied to the constituents.51

What propels the dynamics of emergence and extension is the use, by individuals, of simple rules of adaptive behavior based on Simon’s idea that “human thought processes are simple”.52 In a groundbreaking book, Robert Axelrod and Michael Cohen claim that today’s overarching concept for an economic context is that of a complex adaptive system:

49 For an introduction to social networks, see Alain Degenne and Michel Forsé, Les réseaux sociaux (Paris: Armand Colin, 1994). A more detailed and elaborate treatment can be found in Stanley Wasserman and Katherine Faust, Social Network Analysis: Methods and Applications (Cambridge: Cambridge University Press, 1994).
50 Alan P. Kirman, “The Economy as an Interactive System,” loc. cit., p. 502.
51 Sunny Y. Auyang, Foundations of Complex-System Theories, op. cit., pp. 1-2.
52 Herbert A. Simon, The Sciences of the Artificial, op. cit., p. 85.



Whether or not we are aware of it, we all intervene in complex systems. We design new objects or new strategies for action. […] Whether simple or sophisticated, such actions change the world and […] lead to consequences that may be hard to imagine in advance. […] The complexity of the world is real. We do not know how to make it disappear. […] For us, “complexity” does not simply denote “many moving parts.” Instead, complexity indicates that the system consists of parts which interact in ways that heavily influence the probabilities of later events. Complexity often results in features, called emergent properties, which are properties of the system that the separate parts do not have.53

Recapitulating some of the pioneering research of John Holland54 and others at Santa Fe, the authors offer the following definition of such a system: Agents, of a variety of types, use their strategies, in patterned interaction, with each other and with artifacts. Performance measures on the resulting events drive the selection of agents and/or strategies through processes of error-prone copying and recombination, thus changing the frequencies of the types in the system.55

The agents that make up the nodes of the social graph interact through strong connections by individually using various elements drawn from three sets of rules. There is a set R1 of interaction rules, which are particular ways of realizing, activating, or deactivating the various available connections between vertices. There is a set R2 of credit-attribution rules, which allow the agent to evaluate the relative success or failure of her actions. Finally, there is a set R3 of revision rules, by which the agent modifies her interaction rules in the hope of obtaining higher credit in the next round of interactions. Such a system is complex if the interactions between agents generate so-called emergent properties that are more than just the sum of all individual-level properties. In other words, individual actions aggregate in nonlinear fashion through interaction. The system is adaptive in a twofold sense: (a) the emergent properties of interaction generate certain credit measures (profit, fitness, etc.) which may allow the agents to learn from mistakes and successes, and (b) perceived lack or loss of credit may trigger adaptation, i.e., a change in agents’ interaction rules. To the post-neoclassical economists, a key element in such a system is that nontrivial aggregate behavior—or systemic behavior—can be generated by trivial rules in R1, R2, and/or R3. Agents may be endowed with bounded rationality, such as a myopic horizon, limited calculative abilities, simple routines, or rules of thumb. They may base their behavior on expectations which may not at all be consistent with past observations. They may, for all practical purposes, be downright “idiots” in the neutral sense of the word—entities limited to extremely narrow procedures and routines. And still, they may interact to produce very elaborate-looking patterns in certain aggregate variables such as share prices, inflation rates, or attendance of a nightclub. 
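A classic concretization of the “attendance of a nightclub” example is W. Brian Arthur’s El Farol bar problem, developed at Santa Fe. Below is a toy version (the predictor rules, capacity, and all numbers are illustrative choices of mine, not Arthur’s specification): agents equipped with crude rule-of-thumb predictors (R1), a cumulative-error score (R2), and a switch-to-the-best-predictor rule (R3) generate attendance that hovers around capacity without anyone computing an equilibrium.

```python
import random

random.seed(2)

N, CAPACITY, WEEKS = 100, 60, 300

def make_predictor():
    # Each predictor is a crude rule of thumb about next week's crowd.
    kind = random.choice(["same", "mirror", "average", "trend"])
    def predict(history):
        if kind == "same":
            return history[-1]
        if kind == "mirror":
            return N - history[-1]
        if kind == "average":
            recent = history[-4:]
            return sum(recent) / len(recent)
        return max(0, min(N, 2 * history[-1] - history[-2]))  # trend
    return predict

agents = [[make_predictor() for _ in range(3)] for _ in range(N)]
scores = [[0.0] * 3 for _ in range(N)]       # cumulative prediction error
history = [44, 78]                           # arbitrary starting weeks

for _ in range(WEEKS):
    going = 0
    for a in range(N):
        best = min(range(3), key=lambda j: scores[a][j])
        if agents[a][best](history) < CAPACITY:   # go only if a bearable
            going += 1                            # crowd is expected
    for a in range(N):                            # credit attribution:
        for j in range(3):                        # punish bad predictors
            scores[a][j] += abs(agents[a][j](history) - going)
    history.append(going)

avg = sum(history[-100:]) / 100
print(round(avg, 1))     # mean attendance over the final weeks
```

No predictor is correct, and no agent knows what the others expect; the rough self-organization around capacity is an emergent property of the credit-driven switching among bad rules.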
Complexity theorists pride themselves on having dispelled the illusion that, to generate such intricate patterns as can be viewed in nature or in society, there has to be either an omniscient designer or a cognitively sophisticated population. How does adaptive learning work? The two keys to understanding how agents adapt in a complex adaptive system are the notions of an internal model and of credit attribution. They are closely linked, in the sense that the agent is assumed to use his internal model to undertake actions whose high or low credit value subsequently may lead him to partly modify the internal model. An internal model is an evolving module in any agent’s cognitive setup. It represents that agent’s more or less formalized, more or less exhaustive, and more or less rigorously scientific view of the array of situations in which he believes he can find himself. This array

53 Robert Axelrod and Michael D. Cohen, Harnessing Complexity: Organizational Implications of a Scientific Frontier (New York: Basic Books, 2000), pp. 1-2 and 15.
54 See e.g. John H. Holland, Hidden Order: How Adaptation Builds Complexity (Cambridge: Perseus, 1995).
55 Robert Axelrod and Michael D. Cohen, Harnessing Complexity, op. cit., p. 154.



is formed either on the basis of the agent’s own past experience in the interaction, or on the basis of the experience communicated by other agents through the interaction. A boundedly rational agent will obviously not deduce his decision rules from an “overall model” of the whole economy, be it only because of the cognitive limitations he experiences or of the enormous cost of acquisition of expertise on this full-scale model, if it existed. Moreover, such an agent will not usually submit his partial internal model to the most extensive intersubjective testing available at any moment, since experience is itself a way of spreading the testing costs over time, and economizing on testing as long as things do not go too wrong. An action that “goes too wrong” means simply an action that fails to fulfill the agent’s aim as measured by his own interests. This failure implies that the action is given a low credit value. The agent’s interest consists in accumulating high credit, or at least sufficient credit (as in Simon’s approach of “satisficing”), in order to survive according to the system’s norms of evaluation. Therefore, a low credit value signals to the agent that he must learn quickly if he wants to stay on board. There are essentially three ways in which learning will affect the internal model: (1) Suppose the model is used by the agent as a credit prediction device. This means the internal model is a cognitive tool that yields an expectation of the credit value of the agent’s action in a perfectly defined momentary context. In that case, failure implies that the internal model has to be modified in congruence with what the agent believes will be his future (also perfectly defined) context. In other words, he needs to change his internal model contingent on his expectation of the new situation with which it will have to help him cope. (The extent to which this will have to be done obviously depends in part on how “flexible” or “open-ended” the internal model is.) 
(2) Suppose the internal model is used by the agent as a situation interpretation device. This means it is a hermeneutic tool that yields a description of the momentary context in which the agent will have to maximize a perfectly defined credit function. In that case, failure implies that the internal model has to be modified so as to provide a less incongruous description of the future context—independently of how the credit function to be maximized will evolve into the next period. (3) Suppose finally, as is most likely, that the internal model is used to do both (1) and (2). This means that neither the situational context nor the expectation function is well-defined. In that case, subtle practical judgment is required to understand the reasons for the failure and, hence, the direction in which to look for improvement. Each of the three improvements is carried out through direct or indirect interaction with other, similarly struggling agents. Experiences are shared, “best practices” are pooled, and so on. It is crucial to realize that the agent’s perception of his action’s credit value is clearly a situated perception. So is his perception of his consequent need for a modified internal model. Maybe he truly does not realize that the locally perceived signal comes to him from a subjectless, emergent aggregate phenomenon. Or maybe he does realize it, but decides he has no choice anyway. In either case, the adaptive consequences are exactly the same. This means that the agent’s awareness of the “origin” of the perceived failure or success in no way affects the subsequent adaptive sequence. The reason is that not only the triggering signal, but also the ensuing criteria for a better adaptation, are considered to flow from the emergent phenomenon. In the complexity approach, the agent’s behavior is passive in a paradoxical sense. 
He is passive because he is purposefully, actively, “busily” adapting to emergent signals whose genesis he uncritically accepts because these signals are “given.” This suggests that in a complex adaptive system you can, as an individual agent, be “busily passive.” It all depends on how, in your busyness, you stand towards the emergent



phenomena. Axelrod and Cohen define “harnessing complexity” as “seeking to improve but without being able to fully control.” They view it as “a device for channeling the complexity of a social system into desirable change, just as a harness focuses the energy of a horse into the useful motion of a wagon or a plow”.56 They suggest three key questions which, according to them, summarize these instrumental strategies:

1. What is the adequate balance between diversity and uniformity?
2. What should interact with what, and when?
3. Which agents or strategies should be copied and which should be destroyed?57

They then offer eight practical principles for the rational agent who—provided she has any power to intervene in the system at the right levels—wishes to optimally manipulate the prevailing complexity:

(I) Arrange organizational routines to generate a good balance between exploration and exploitation.
(II) Link processes that generate extreme variation to processes that select with few mistakes in the attribution of credit.
(III) Build networks of reciprocal interaction that foster trust and cooperation.
(IV) Assess strategies in light of how their consequences can spread.
(V) Promote effective neighborhoods.
(VI) Do not sow large failures when reaping small efficiencies.
(VII) Use social activity to support the growth and spread of valued criteria.
(VIII) Look for shorter-term, finer-grained measures of success that can usefully stand in for longer-run, broader goals.58

Axelrod and Cohen establish this list partly on an inductive basis, using concrete examples of successful instrumentalizations of a complex system, and partly on a deductive basis, using the general abstract properties of complex adaptive systems. In the survival-oriented adaptations reviewed here, the emergent signal is taken as an imperative “message” from the environment that you had better learn something quickly. This corresponds to what we could call opportunistic harnessing. The agent’s instrumental calculations boil down to mere adaptation to the “circumstances of change.” Today’s widespread discourse on “flexibility” in organizations relies on this kind of harnessing. A significant portion of the management literature even values opportunistic harnessing when it comes to normatively prescribing the “right” actions. An enduringly fascinating aspect of complex adaptive systems is that even such passive, mechanical, rule-following adaptation on the part of the individuals can generate aggregate emergents whose behavior is far from simplistic and does not just reproduce at the aggregate level the mechanical rules used at the sub-aggregate level. 
As is well known, this is in fact one of the hallmarks of complexity models: Agents adapt—they are not devoid of rationality—but they are not hyper-rational. They look around them, they gather information, and they act fairly sensibly on the basis of their information most of the time. In short, they are recognizably human. Even in such “low-rationality” environments, one can say a good deal about the institutions (equilibria) that emerge over time. In fact, these institutions are precisely those that

56 Ibid., pp. xvi and 2.
57 See ibid., pp. 22-23.
58 See ibid., pp. 156-158.



are predicted by high-rationality theories […]. In brief, evolutionary forces often substitute for high (and implausible) degrees of individual rationality when the adaptive process has enough time to unfold.59

Such models are adequate if one seeks parsimony in scientific explanation—especially if the behavior of the aggregate matters more to you than the understanding of individual particles’ trajectories. They are also adequate if, like Simon and most engineers converted to economics, one believes the central task of social science is to understand what can hold together a collection of blind or strongly myopic automata. This cognitive and reflexive minimalism is one of the cornerstones of post-neoclassical economics. Its core question is: What are the most idiotic and unsophisticated agents we can assume so that the aggregate phenomena we find important can be accounted for? In a sense, post-neoclassical economics perpetuates the enduring fascination with emergence out of interacting idiots. To the extent that the agent is treated like a sophisticated grain of sand or a self-propelled billiard ball, the notion of “critical mass” that complexity theorists want to convey is fascinating but ultimately technocratic. Reductionism has not been left behind by post-neoclassical approaches, quite the contrary. The mastering of scaling laws and pattern-generating mechanisms is mainly useful for city planners, park and museum designers, or other professional planning experts who have learnt that well set-up incentives are much less self-defeating than attempts at outright control. Does this mean complexity economics is useless? Of course not. It is true that a limited number of planning tasks really do require little or no knowledge about “deep human aspirations” and can be carried out much more easily if one can isolate the “idiotic automaton” (sometimes also called the “automatic pilot”) that coexists in each of us along with our more sophisticated aspects. Critically-minded people with a broad range of motivations may still maniacally invest in the stock market. Highly reflexive people with very diverse aims in mind may still routinely walk through Central Park every day. 
Social activists and humanists may still single-mindedly drive their cars through city avenues and into traffic jams. In these specific, limited contexts it does indeed seem useful to be able to explain the formation of a “critical mass” with the tools of cellular automata, of the mechanics of gases, or of condensed-matter physics. The striking thing about complexity economics is that it insists both on the importance of modeling interaction (so that it rejects traditional equilibrium theories’ strategy of independent-agent approximation) and on the possibility of dispensing with the interacting components’ reasoning and reflexivity. This gives the whole research area an aspect indeed well captured by the expression “social physics.” Society is a more or less sophisticated pile of grains of sand. The following two passages illustrate both the potential and the limitations of the approach: In a system of interacting agents, emergent properties are those that cannot be reduced to statements about the individual elements when studied in isolation. […] One important aspect of emergence is that it breaks any logical relationship between methodological individualism and reductionism. What I mean is that emergent properties cannot be understood through the individual elements of a system, as they are intrinsically collective. This is so even though the behaviors of these elements determine whether or not emergent properties are present.60 Think of a pile of sand on a table that has a continuous flow of sand falling on the top of the pile. For a while, the sand builds up into a large conical sandpile, but at periodic times, when the sandpile builds up to what Bak calls self-organized criticality, there is an avalanche or series of avalanches until the pile “relaxes”

59 H. Peyton Young, Individual Strategy and Social Structure: An Evolutionary Theory of Institutions (Princeton: Princeton University Press, 1998), p. 5.
60 Steven N. Durlauf, “A Framework for the Study of Individual Behavior and Social Interactions”, Sociological Methodology 31 (2002): 47–87, quote from pp. 71–72.



back to a state where avalanches cease. […] The distribution of sizes and “relaxation times” of these avalanches follows scaling law patterns, e.g., Zipf’s law, Pareto’s law, power law, etc. The study of complexity tries to understand the forces that underlie the patterns or scaling laws that develop.61

Critical mass, here, is an aggregate phenomenon which the theory seeks to explain without endowing the grains of sand with more than the physical properties (attraction, repulsion, and so on) required to explain the observed aggregate. This means that complexity economics will be interested essentially in empirically observable and “persistent” patterns, as characterized by John Holland: Emergence occurs in systems that are generated. The systems are composed of copies of a relatively small number of components that obey simple laws. Typically these copies are interconnected to form an array […] that may change over time under control of the transition function. The whole is more than the sum of the parts in these generated systems. The interactions between the parts are nonlinear, so the overall behavior cannot be obtained by summing the behaviors of the isolated components. Said another way, there are regularities in system behavior that are not revealed by direct inspection of the laws satisfied by the components. […] Emergent phenomena in generated systems are, typically, persistent patterns with changing components. Emergent phenomena recall the standing wave that forms in front of a rock in a fast-moving stream, where the water particles are constantly changing though the pattern persists […]. Persistent patterns often satisfy macrolaws. When a macrolaw can be formulated, the behavior of the whole pattern can be described without recourse to the microlaws (generators and constraints) that determine the behavior of its components. Macrolaws are typically simple relative to the behavioral details of the component elements.62
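Bak’s sandpile, described in the passage above, can be simulated directly. The following sketch (in Python; the lattice size, toppling threshold, and number of grains are illustrative choices of mine, not parameters from Bak’s or Brock’s texts) drops grains one at a time on a small lattice and topples any site holding four or more grains, recording the size of each avalanche. In the self-organized critical state these sizes span many scales:

```python
import random

random.seed(1)
SIZE, THRESHOLD = 11, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def topple():
    """Relax the pile; return the avalanche size (number of topplings)."""
    size = 0
    unstable = [(i, j) for i in range(SIZE) for j in range(SIZE)
                if grid[i][j] >= THRESHOLD]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD
        size += 1
        # One grain goes to each neighbor; grains at the edge fall off the
        # table, which guarantees that every avalanche eventually stops.
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
    return size

avalanches = []
for _ in range(20000):
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1          # a grain of sand falls on a random site
    s = topple()
    if s:
        avalanches.append(s)

print("avalanche events:", len(avalanches))
print("largest avalanche:", max(avalanches))
```

Plotting the frequency of avalanche sizes on log-log axes would display the approximate power-law pattern the quotation refers to; no grain “knows” anything beyond its local threshold rule.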

There is no denying that complexity economics has represented a breakthrough compared to the TE-paradigm and to game theory, by combining adaptive rationality and emergence within a dynamic, evolutionary perspective that takes explicit account of agents’ heterogeneity and of real-time, face-to-face as well as institution-mediated interactions. One of the pioneers of the approach, Joshua Epstein, has offered the expression “generative social science”63 to describe what he presents as a new, post-neoclassical paradigm. This seems close to an adequate characterization: the rationality axiom has been transformed beyond recognition by the “invasion” of strategic rationality by bounded rationality, which has generated the general concept of adaptive rationality; the equilibrium axiom has also been transformed beyond recognition by the replacement of spot-on coordination by complex emergence, which has generated the general concept of evolutionary trajectories. The M.Ind axiom remains essentially untouched and still rules in the background; in fact, as we indicated earlier, it is only within radically interactive and process-oriented models that the notion of “individual” really comes into its own—before, both in Traditional Economics and in Nash game theory, the stringent demands of the independent-agent approximations meant that what were presented rhetorically as individuals were, in fact, just functional entities pre-programmed to “play” equilibrium strategies “fed” to them by the economist. Epstein calls the individuals in the complexity approach “autonomous,” by which he means that “There is no central, or ‘top-down,’ control over individual behavior in agent-based models”.64 Thus, we may write the new axioms as follows:

[M.Ind + generative approach] →→→ [Autonomous agents]
[M.Inst(s) + generative approach] →→→ [“Simple” adaptive rationality]

61 William A. Brock, “Some Santa Fe Scenery”, in David C. Colander (ed.), The Complexity Vision and the Teaching of Economics, op. cit., pp. 29–49, quote from p. 30.
62 John H. Holland, Emergence: From Chaos to Order (Cambridge: Perseus, 1998), pp. 225–227 passim.
63 Joshua M. Epstein, Generative Social Science: Studies in Agent-Based Computational Modeling (Princeton: Princeton University Press, 2007).
64 Ibid., p. 6.



[M.Eq + generative approach] →→→ [Emergence, evolutionary trajectories]

In fact, the very demanding notions of rationality and equilibrium that were so central in Traditional Economics and in Nash game theory become just one, quite small, “province” of the overall conceptual landscape: To begin, the agents differ differently in the different models. [Sometimes] they differ by age and fertility. [In other models,] they have different decision rules. Some decide by tossing coins; some (very few!) decide like homo economicus; and some play a coordination game in their social networks, where the social networks are themselves heterogeneous and dynamic. [The agents may also] differ by what they store in their memory (their recent interactions), […] by dynamic search radius, [they may] exhibit diversity in ages and levels of accumulated wealth, [they may be] heterogeneous by economic hardship, as well as by local information and political grievance, both of which are dynamic, [or they may] face different and dynamic local environments. [Moreover, for the generative social scientist,] to explain a pattern, it does not suffice to demonstrate that […] if society is placed in [a specific] pattern, no (rational) individual would unilaterally depart (which is the Nash equilibrium condition). Rather, one must show how a population of boundedly rational (i.e., cognitively plausible) and heterogeneous agents, interacting locally in some space, could actually arrive at the pattern on time scales of interest—be it a wealth distribution, spatial settlement pattern, or pattern of violence. Hence, to explain macroscopic social patterns, we try to “grow” them in multi-agent models. The preceding critique applies even when the pattern to be explained is an equilibrium. But what if it isn’t? What if the social pattern of interest is itself a nonequilibrium dynamic? What if equilibrium exists, but is not attainable on acceptable time scales, or is unattainable outright? 
[…] [T]he agent-based generative approach can be explanatory even in such cases—where “the equilibrium approach,” if I may call it that, is either infeasible or devoid of explanatory significance.65

Epstein’s claims show how far the complexity paradigm has moved from the initial TE-paradigm. It offers a mix between reductionist brain science—with the claim that the human brain is basically a low-performance Turing machine—and highly sophisticated systems analysis. The idea of “growing” artificial societies and economies,66 just like one grows flowers or biological cells, is made possible by this specific combination of simplified individual cognition—which makes agents into so-called “finite-state automata”—and complex collective evolution: the economist can artificially create aggregate patterns as emergent phenomena from the automata’s interactions and then see how these patterns change when small changes in initial conditions are introduced. This can be of great help for the management of traffic flows and other planning and management tasks. As Diane Coyle has rightly emphasized, most of the work currently done by complexity economists on systems with simple but heterogeneous agents would have been totally unthinkable just a few decades ago because the computer technology was missing. In fact, Epstein and Axtell wish to view society as a computer and to consider each agent to be an autonomous processing node in a computer, the agent society. Individual agents (nodes) compute only what is best for themselves, not for the society of agents as a whole. Over time, the trade partner network describes the evolution of connections between these computational nodes. Thus there is a sense in which agent society acts as a massively parallel computer, its interconnections evolving through time.67
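The “growing” of artificial societies can be conveyed by a toy sketch. The following Python fragment is a drastically simplified, hypothetical variant of an Epstein–Axtell-style resource landscape: all numbers (lattice size, vision, metabolism, regrowth rate) are illustrative assumptions of mine, not parameters from Growing Artificial Societies. Simple, myopic agents on a resource lattice nonetheless generate a skewed wealth distribution as an aggregate outcome:

```python
import random

random.seed(0)
SIZE, N_AGENTS, STEPS = 20, 100, 200

# A resource landscape: each cell regrows "sugar" by 1 per step, up to capacity.
capacity = [[random.randint(1, 4) for _ in range(SIZE)] for _ in range(SIZE)]
sugar = [row[:] for row in capacity]

# Heterogeneous, myopic agents: position, vision radius, metabolism, wealth.
agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
           "vision": random.randint(1, 4), "metab": random.randint(1, 3),
           "wealth": 5, "alive": True} for _ in range(N_AGENTS)]

for _ in range(STEPS):
    for a in agents:
        if not a["alive"]:
            continue
        # Purely local rule: scan the four lattice axes up to `vision` cells
        # and move to the richest visible cell. No global optimization.
        best, best_val = (a["x"], a["y"]), sugar[a["x"]][a["y"]]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for d in range(1, a["vision"] + 1):
                nx, ny = (a["x"] + dx * d) % SIZE, (a["y"] + dy * d) % SIZE
                if sugar[nx][ny] > best_val:
                    best, best_val = (nx, ny), sugar[nx][ny]
        a["x"], a["y"] = best
        a["wealth"] += sugar[best[0]][best[1]] - a["metab"]  # harvest, then eat
        sugar[best[0]][best[1]] = 0
        if a["wealth"] < 0:
            a["alive"] = False  # starvation: the agent drops out
    for i in range(SIZE):
        for j in range(SIZE):
            sugar[i][j] = min(sugar[i][j] + 1, capacity[i][j])  # regrowth

wealths = sorted(a["wealth"] for a in agents if a["alive"])
print("survivors:", len(wealths))
if wealths:
    top = wealths[-max(1, len(wealths) // 10):]  # the richest decile
    print("top-decile wealth share: %.2f" % (sum(top) / max(1, sum(wealths))))
```

Nothing in the movement rule refers to aggregate wealth; whatever skew appears in the final distribution is an emergent property in Durlauf’s sense, determined by, but not stated in, the individual rules.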

To the question of what has allowed economics to change so much in its analysis of systems of interaction, Coyle offers two conjoint replies. One is that economics has evolved more towards what she calls an “economics for humans,” and this is connected to the topic of our section C.7.6: the incursion of economists into behavioral psychology, neurology, and

65 Ibid., pp. xiii, xvi passim.
66 See Joshua M. Epstein and Robert L. Axtell, Growing Artificial Societies: Social Science from the Bottom Up (Cambridge: MIT Press, 1996).
67 Ibid., p. 13.



laboratory experimentation more generally; it is this which has made it possible to move away from TE- and Nash-types of agents. The second reply is: The availability of cheap computer power. […] [C]omputers transformed economics, just as they transformed biology and geology and other sciences whose theoretical underpinnings were previously, as it turns out, limited by the amount of computation that was feasible. […] [O]ne of the appeals of the textbook model (rational, identical agents, linear equations, and so forth) was its analytical tractability. This kind of model can be solved and estimated econometrically. More recent theoretical approaches, whether in behavioral economics or drawing on network theory […] are not so neat analytically. But it doesn’t matter: it’s now possible to simulate the results of the new types of model. […] The simulations make it easy to see, visibly on the screen, the emergent properties of a nonlinear model with nonidentical individuals.68

In other words, where previously the economist was constrained by his or her own brain and the need to solve his or her models in reduced form on paper, now he or she is able to use sophisticated software that will compute the emergent patterns and display them. (Epstein’s 2007 book indeed contains a CD-ROM for the display of simulations.) There is thus less need than before to make drastically simplifying assumptions about preferences, technologies, and so forth. What is more, if a powerful computer is unable to converge on a solution, this implies that the model is not only analytically intractable, but in fact describes what Epstein calls a “hard social problem” that may possess a “nonconstructive” equilibrium—i.e., an equilibrium that can be said to exist in theory—but not a “constructive,” actually attainable one: [Let us] adopt the definition that social states are hard to attain if they are not effectively computable by agent society in polynomial time […] In a number of models, the analogous point applies to economic equilibria: There are nonconstructive proofs of their existence but computational arguments that their attainment requires time that scales exponentially in, for instance, the dimension of commodity space. In our tentative definition, then, computation of (attainment of) economic equilibria would qualify as another hard social problem.69

Computer simulation in fact makes it possible to create a sort of “artificial real time” in which, following an idea of Herbert Simon, time is defined as the number of elementary computational steps needed to solve a problem. This makes it possible to gain much more insight into the way an interactive system generates its evolutionary trajectories. To round out this discussion of complexity economics, we still need to reflect on what may be its most central issue, apart from the many fascinating technical advances it has produced: have generative social scientists and complexity-evolutionary economists really been able to get rid of all independent-agent approximations? And to the extent they have, how far from such approximations has the combination of “stupid” agents and sophisticated dynamics allowed them to move? There are two closely connected, but analytically separate issues here. First, in many dynamic, sequential, and interactive models economists still put agents in non-cooperative game situations. Is this not a blatant violation of Simon’s bounded-rationality principle? Not necessarily, unless the economist also imposes the Nash equilibrium as a solution concept for each game; in that case, the dynamic interactive model becomes nothing more than something we dismissed as irrelevant earlier—namely, a repeated game which, in every period, is played according to the Nash solution and whose only endogenous features are the agents’ “memory” of past strategies. This may perhaps allow us to explain some interesting things. However, it is not what we have had in mind here within complexity economics. The basic idea is, rather, that agents should be allowed to start anywhere in strategy space and

68 Diane Coyle, The Soulful Science, op. cit., pp. 240 and 244–245 passim.
69 Joshua M. Epstein, Generative Social Science, op. cit., p. 25.



gradually “discover” the stable solution(s) to the game—if the game form G is assumed to be invariant—or the game form and the corresponding solution(s)—if G itself changes over time. In other words, agents should be allowed to go through many steps such as equation (1) before anything like equation (2) can be postulated as an “equilibrium.” This means that, in a Prisoners’ Dilemma, agents who do not already know the game form and who are not endowed with CKR may start with completely self-defeating strategies and only very gradually learn (i) that they are perhaps better off playing other strategies, (ii) that they are playing a Prisoners’ Dilemma, and perhaps also (iii) that the form of the game they are playing is changing gradually as they play it. This sort of terribly complicated problem is the stuff that evolutionary game theory is, in principle, made of. As can be seen by glancing at any textbook on the subject, the actual advances of the discipline have been focused on still quite impressive, but less complicated issues. Game forms G are usually considered fixed and what varies is the agents who play it; in other words, the Prisoners’ Dilemma—or any other game, usually symmetrical, such as Hawk-Dove or Coordination—is viewed as the basic structure for the repeated interaction between “species” in a “population.” The basic idea, somewhat simplified to a two-type setting, is that whenever an agent of type A or B meets any other agent, also of type A or B, they play a Prisoners’ Dilemma. The difference, now, is that each “species” is assumed to be programmed to play a given strategy, based on a “behavioral code.” Initially “stuck” in their strategies, agents within species can nevertheless switch behavioral codes over time—for various reasons, perhaps due to a “trembling hand” (a haphazard error that may become beneficial) or to mere inquisitiveness (wanting to experience “the other thing”), but mainly through inductive learning. 
Anyway, agents play certain strategies, try out others, and depending on which strategies are played by the opponents, who are similarly programmed and similarly experimenting, they reap the payments in the corresponding box of the game’s matrix—which they do not know beforehand but which is assumed to remain invariant. In some cases, this can lead to mutations, in others it can lead to extinctions. The main issue tackled by evolutionary game theory is whether “at the end”—meaning, once learning and experimentation have stabilized—the equilibrium that would have been attained by the Nash solution will come into force as an evolutionary equilibrium, implying that the Nash strategies that eventually get “discovered” are evolutionarily stable strategies. There are thus two distinct notions of equilibrium involved:

• In each game played by an n-tuple of agents, with each of them programmed to play a given strategy or some of them trying out new strategies, “equilibrium” means—rather trivially—that some combination of strategies is played and can be observed. This is not the Nash combination, except by pure chance. It means simply equilibrium as order, or what we have called weakly functionalist equilibrium: “something” occurs, rather than “nothing.”

• As time goes by, more and more such games end up being “solved” through the evolutionary solution, i.e., more and more of the n-tuples of agents play the n-tuple of evolutionarily stable strategies—if they exist, which is not guaranteed in all cases. Thus, eventually, some equilibrium stabilizes and becomes dominant—it becomes “the” way interactions get ordered, and it may be called an “institution” or a “norm.” If we look at the economy at that end point in the process, we may be misled into thinking the agents always had the sort of knowledge and rationality that would make them coordinate on the evolutionary equilibrium immediately, but this is not so. The evolutionary equilibrium is simply a stabilized combination of strategies after a long process of emergence with errors, extinctions, learning, and so on. Still, it mimics what we have called strongly functionalist equilibrium: at the “resting point” of the system of interaction, what occurs is



not just “something” rather than “nothing,” but “something definite” generated by the history of the interactions. This is indeed a version of the general problem formulated earlier—namely, will a sequence of equations of type (1) converge on equation (2)? Along the way, mutation and learning combine to create convergence—in the best cases, since quite a few evolutionary games circle or cycle without converging. In a passage we already quoted in part, Peyton Young has explained the basic idea in terms quite similar to Epstein’s: Agents adapt—they are not devoid of rationality—but they are not hyper-rational. They look around them, they gather information, and they act fairly sensibly on the basis of their information most of the time. In short, they are recognizably human [or what Epstein calls “cognitively plausible”]. Even in such “low-rationality” environments, one can say a good deal about the institutions (equilibria) that emerge over time. In fact, these institutions are often precisely those that are predicted by high-rationality theories—the Nash bargaining solution, subgame perfect equilibrium, Pareto-efficient coordination equilibria, the iterated elimination of dominated strategies, and so forth. In brief, evolutionary forces substitute for high (and implausible) degrees of individual rationality when the adaptive process has enough time to unfold.70
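The gradual “discovery” of evolutionarily stable strategies can be illustrated with the textbook replicator dynamic. In the Python sketch below the game is Hawk-Dove with illustrative payoff parameters V = 2 and C = 4 (numbers of my own choosing, not from the text); the population share of Hawks converges to the evolutionarily stable mix V/C even though no individual agent knows the game form:

```python
V, C = 2.0, 4.0  # value of the contested resource, cost of an escalated fight
W0 = 2.0         # background fitness, keeps all fitnesses positive

def payoffs(p):
    """Expected game payoff to a Hawk and to a Dove when a share p plays Hawk."""
    f_hawk = p * (V - C) / 2 + (1 - p) * V
    f_dove = p * 0.0 + (1 - p) * V / 2
    return f_hawk, f_dove

p = 0.9  # start far from the evolutionarily stable mix
for _ in range(2000):
    fh, fd = payoffs(p)
    mean = p * (W0 + fh) + (1 - p) * (W0 + fd)
    # Discrete replicator dynamic: a strategy's share grows in proportion
    # to its fitness relative to the population average.
    p = p * (W0 + fh) / mean

print(round(p, 3))  # converges toward V/C = 0.5, the ESS share of Hawks
```

The resting point is reached as the stabilized outcome of a selection process, not computed in advance by any player, which is precisely the sense in which it mimics, rather than presupposes, the Nash solution.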

By stressing the central importance of learning, Young is representative of most post-neoclassical economists in building a bridge between the logic of independent agents and the logic of interaction. He shows how high-rationality solution concepts in game theory can emerge in a world populated by low-rationality agents [so that] the evolutionary approach is a means of reconstructing game theory with minimal requirements about knowledge and rationality.71

This means the answer to the question, “Have independent-agent approximations been relinquished?” is affirmative. However, there is still the second question: How far from such approximations have post-neoclassical economists been able to move? Here, the answer is: not very far… In fact, Young envisages four main learning mechanisms for “low-rationality agents”: natural selection, reinforcement, imitation, and optimal response.72 Of these four, only the latter two are really interactive, while the former two are closer to an independent reaction of each “isolated” agent facing a common situation that emerges from the whole set of interactions. As regards the two interactive mechanisms, we could call them ultra-low density interactive mechanisms. The reason is that neither the imitation of others nor the hypothetical calculation of what I would do if others did x (this is revealingly called “fictitious play”) requires any strong social connections. Nor do they require any dense communication network. In fact, most ultra-low density interaction mechanisms can easily be transformed into mechanisms with isolated agents, either through emergent phenomena such as fashion or through purely additive phenomena such as a publicly available list of agents having pursued “winning” or “losing” strategies (bad bill payers, debtors, successful entrepreneurs, and so on). The actual role of dense, direct interaction is quite limited, or even nonexistent. Independent-agent approximations are no longer literally there, but they loom in the background in the form of ultra-low density interactive mechanisms that can be—in whole or in part—rendered anonymous and that can be redefined in terms of shared situational variables. 
True, these are no longer always “equilibrium” situations, so the strongly functionalist notion of equilibrium is gone; but the difficulty of having close, real-time interaction within n-tuples still shows in the way Peyton Young conceives “learning.”
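Young’s “optimal response” mechanism, best-replying to the empirical frequency of others’ past play (the core of “fictitious play”), can be sketched in a few lines of Python. The coordination-game payoffs below are illustrative assumptions of mine, not an example taken from Young:

```python
# Coordination game: both players get 2 if they match on A, 1 if they match
# on B, and 0 if they miscoordinate. (Illustrative payoffs, not from the text.)
payoff = {("A", "A"): 2, ("B", "B"): 1, ("A", "B"): 0, ("B", "A"): 0}

counts = [{"A": 0, "B": 0}, {"A": 0, "B": 0}]  # each player's tally of the other

def best_reply(opp_counts):
    """Best response to the opponent's empirical frequency of past moves."""
    total = opp_counts["A"] + opp_counts["B"]
    if total == 0:
        return "A"  # arbitrary first move
    ev = {a: sum(opp_counts[b] / total * payoff[(a, b)] for b in "AB")
          for a in "AB"}
    return max(ev, key=ev.get)

history = []
moves = ["B", "A"]  # a miscoordinated start
for t in range(50):
    counts[0][moves[1]] += 1  # player 0 records player 1's move, and vice versa
    counts[1][moves[0]] += 1
    moves = [best_reply(counts[0]), best_reply(counts[1])]
    history.append(tuple(moves))

print(history[-1])  # the process settles on a pure Nash equilibrium, here (A, A)
```

Note how thin the interaction is: each player consults only an anonymous tally of past moves, never the other player directly, which is exactly the ultra-low density point made above.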

70 H. Peyton Young, Individual Strategy and Social Structure, op. cit., p. 5.
71 Ibid., p. 144.
72 Ibid., pp. 27–28.



Such ultra-low density interactive mechanisms are about as far as complexity-evolutionary economics currently goes in terms of actual communication between people and in terms of their shared reflection on the economic system in which they live. In short, there is no intersubjectivity and the inside of the collective is nonexistent. Thus, although complexity economics is truly post-neoclassical in having gotten rid of independent-agent approximations, it is bound to remain stuck in a view of economic interaction with no intersubjectivity and, even more so, no subjectivity.

C.7.6. Elements of behavioral and neuroeconomics

Whereas Nash-style game theory has remained firmly rooted in a (sophisticated) version of the TE-paradigm, complexity economics does indeed appear to have inaugurated a new, genuinely post-neoclassical paradigm. As we saw earlier, one of the main reasons for this radical novelty is that, notably under the influence of Herbert Simon, and also thanks to significant improvements in computer technology, complexity economics has been able to develop a truly “bottom-up” way of modeling which has become known as agent-based modeling and is being used in “generative social science” to understand the emergence of economic phenomena out of the real-time interactions between boundedly rational individuals. Replacing M.Eq as “system at rest” with M.Eq as emergence, and thus decoupling M.Inst(s) from the Nash requirements and accepting a truly autonomous, adaptive rationality—these two new features mean that the frontier of mainstream economics has now definitely left the TE-paradigm and has established itself in the post-neoclassical paradigm. 
Individuals endowed with “recognizably human” bounded rationality practice two things simultaneously in their interactions: they apply their substantive rationality in order to satisfice within the environment in which they are immersed, and they apply their procedural rationality in order to learn about that environment. This means that within an objective—but, to the agents, unknown—environment, economic phenomena emerge out of the interaction between agents who hold a subjective “model” of their environment: each objective environment and the phenomena that emerge in it are a function of agents’ subjective model-beliefs about that environment. Whether the false subjective belief-models will all, in the end, through inductive learning converge on the true objective reality (which is itself a constantly emerging function of all these false subjective belief-models) is a deep and unsolved question—but we saw that the legitimacy of most “equilibrium” notions used in economics hangs on what answer we give to this question. A crucial feature of the new, post-neoclassical paradigm is that the M.Inst axiom is no longer functionally subordinated to the M.Eq axiom: on the contrary, whatever “equilibria” emerge from interactions cannot be predicted from any requirement of Nash equilibrium—emergence means order (the weakly functionalist version of “equilibrium”) but it does not mean a situation where every agent has reached his global optimum given the global optima of all other agents. Satisficing interactions mean that there is room for an individual’s desire to change, and for an individual’s exploration of the social network. However that gradual process of exploration, learning, and revision takes place, one thing is certain: post-neoclassical economics is genuinely individualistic, in the sense that the basic building block of the analysis is now individual instrumental rationality and no longer the search for (strong) equilibrium. 
This implies that individuals’ procedural as well as substantive rationality have to be studied for their own sake, in order to be able to construct the “right” agent-based models of emergence. Important issues arise: How exactly is rationality “bounded”? What is the meaning of “satisficing”? How does our “recognizably human” psychological structure impact on the way we reach our decisions?



As we witnessed when we discussed Simon’s automaton-based approach, these questions require that the post-neoclassical paradigm build a clear basis for its agent-based strategy. Not surprisingly, therefore, two closely related developments have coincided with the emergence of the new paradigm: on the one hand, theoretical work on economic psychology, behavioral economics, and so-called “neuroeconomics”; on the other hand, empirical work in experimental economics. These areas of economic research have literally exploded since the 1980s; as with the other domains of innovation, we will not be able to offer here an exhaustive account of the whole diversity of approaches and results. The aim, rather, is to understand how post-neoclassical economists have tried to construct their own reductionism—which they rightly view as a “broadening” compared to the previous narrower reductionism of the TE-paradigm. It is notable that most post-neoclassical economists identify “psychology” with behavioral psychology—that is, the subfield of psychology that focuses on observed behavior and on measurable motivations and actions. Psychoanalysis, for instance, is left out of the picture, as is most of clinical psychology and, more generally, “logotherapy,” which relies on a one-to-one, language- and interpretation-based relationship between a patient and a therapist. The focus on behavioral psychology is intimately bound up with the requirements of macro-management, as Colin Camerer, a leading voice of the behavioral economics movement, makes quite clear (and Herbert Simon would not have disagreed): “Behavioral economics” improves the realism of the psychological assumptions underlying economic theory, promising to reunify psychology and economics in the process. Reunification should lead to better predictions about economic behavior and better policy prescriptions. 
Because economics is the science of how resources are allocated by individuals and by collective institutions like firms and markets, the psychology of individual behavior should underlie and inform economics, much as physics informs chemistry; archaeology informs anthropology; or neuroscience informs cognitive psychology. […] A recent approach, “behavioral economics,” seeks to use psychology to inform economics, while maintaining the emphases on mathematical structure and explanation of field data that distinguish economics from other social sciences.73

There are two striking claims in this passage. First, there is Camerer’s claim that behavioral economics is just a “re-unification” of economics with psychology—the underlying idea being that such a unity used to exist and was gradually abandoned. The second claim is that the use of “psychology” to inform economics is still put in the service of hypothetico-deductive formalism—the underlying idea being that the way psychology is to be used has to remain consistent with mathematization and with formal modeling. The claim to reunification is based on the idea that behavioral economics is not “a brand new synthesis” but rather a return to “early thinking about economics [which] was shot through with psychological insight”.74 Unsurprisingly perhaps, the precursor here is considered to be Adam Smith—not so much the author of The Wealth of Nations as the author of The Theory of Moral Sentiments: Adam Smith’s psychological perspective in The Theory of Moral Sentiments is remarkably similar to “dual-process” frameworks advanced by psychologists […], neuroscientists […] and more recently by behavioral economists, based on behavioral data and detailed observations of brain functioning […]. It also anticipates a wide range of insights regarding phenomena such as loss aversion, willpower and fairness […] that have

73 Colin F. Camerer, “Behavioral Economics: Reunifying Psychology and Economics”, Proceedings of the National Academy of Sciences of the United States of America 96 (1999): 10575–7, quote from p. 10575.
74 Ibid.



been the focus of modern behavioral economics […]. The Theory of Moral Sentiments suggests promising directions for economic research that have not yet been exploited.75

The idea of a “dual-process perspective” means here the fact that, in Smith’s approach, “passions” and the “impartial spectator” coexist within individual agents’ observed behavior, hence their preferences, so that there is a balance to be struck between agents’ passionate facets which lead them to fear loss, to make wrong intertemporal choices, and to frequently display overconfidence, and their impartial facets which impel them to be altruistic, to demand fairness (for themselves and others), and to build and value trust in institutions. To take just the case of loss aversion and intertemporal illusion, Daniel Kahneman (who was awarded the Nobel in 2002) and Amos Tversky demonstrated in the late 1970s that otherwise rational individuals routinely display a behavioral asymmetry between their reactions to gain and their reactions to loss. “Modern research,” claim Ashraf, Camerer and Loewenstein, “has produced a wealth of evidence from human behavior supporting [loss aversion], and Capuchin monkeys also exhibit loss-aversion […]. Brain imaging technology has shown that losses and gains are processed in different regions of the brain […], suggesting that gains and losses may be processed in qualitatively different ways” (Ashraf, Camerer, and Loewenstein 2005: 133).76 Such properties may help explain observed aggregate patterns on labor markets, financial markets, or housing markets. This is all, indeed, old stuff which only a century of the TE-paradigm’s obsession with equilibrium-induced rationality could obscure. Note, however, Ashraf’s, Camerer’s and Loewenstein’s insistence on “behavioral data and detailed observations of brain functioning.” This sort of approach is partly different from the one inaugurated by Simon, which, as we saw, aimed at modeling human agents as finite-state automata, i.e., small machines with internal “states” composed of a set of parameters that can take on a finite number of values.
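The loss aversion that Kahneman and Tversky documented is easy to make concrete. The following sketch is purely illustrative: the function name is mine, and the parameter values (α = β = 0.88, λ = 2.25) are the median estimates Tversky and Kahneman reported in their 1992 follow-up work, not anything the text above commits to.

```python
# Illustrative prospect-theory value function: the subjective value of a
# loss of a given size is steeper than that of an equal-sized gain.

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha          # concave over gains
    return -lam * ((-x) ** beta)   # steeper (loss-averse) over losses

gain = prospect_value(100)
loss = prospect_value(-100)
print(round(gain, 2), round(loss, 2))  # the loss is about 2.25 times as intense
assert abs(loss) > gain
```

With identical curvature over gains and losses, the asymmetry is governed entirely by λ; this is the behavioral asymmetry that, as noted above, may help explain aggregate patterns on labor, financial, or housing markets.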
Here, there is more emphasis on laboratory work and bio-magnetic brain imagery—something which Simon could hardly envisage in the 1950s and 60s, when the digital computer was generating great enthusiasm and today’s sophisticated, non-invasive PET-scan machines did not yet exist. What the current behavioral approach and Simon’s strictly computational approach have in common is a focus on individual exteriors: Brain states and behavioral patterns, yes; feelings, emotions, and states of consciousness, no. The reason, of course, is that there is still an ultimate privilege given to “mathematical structure and explanation of field data,” and this requires objectivizable (i.e., quantitative and/or easily quantifiable qualitative) concepts—for which brain waves and behavioral patterns are much better suited than subjective, messy, verbal expressions of emotions, intuitions, and feelings. None of this means, of course, that emotions or feelings may not become objects for behavioral economics. What it does mean, however, is that they will become such objects only to the extent that they can be measured in ways that allow their insertion into hypothetico-deductive, formal models. In other words, emotions and feelings will become objects of behavioral economics to the extent that subjectivity can be reduced to brain states—or, which comes to the same, subjectivity-related aspects not reducible to brain states can be neglected. Emotions, so it is argued, are mental states based on beliefs, and these beliefs trigger behaviors in interaction, which in turn can be modeled in classical ways such as with the toolbox of evolutionary game theory. Another leading figure of the behavioral economics movement, Richard Thaler, has offered the following (actually quite classic) example of a behavioral study of the consequences of “emotion”:

75 Nava Ashraf, Colin F. Camerer and George Loewenstein, “Adam Smith, Behavioral Economist”, Journal of Economic Perspectives 19 (2005): 131–145, quote from p. 132.
76 Ibid., p. 133.



How can emotions be incorporated into economic analyses? The ultimatum game offers one simple example. In the ultimatum game one player, the Proposer, is given a sum of money, say $10, and makes an offer of some portion of the money, x, to the other player, the Responder. The Responder can either accept the offer, in which case the Responder gets x and the Proposer gets $10–x, or reject the offer in which case both players get nothing. Experimental results reveal that very low offers (less than 20 percent of the pie) are often rejected. Speaking very generally, one can say that Responders react emotionally to very low offers. We might get more specific and say they react indignantly. What is certain is that Responders do not act to maximize their own payoffs, since they turn down offers in which they receive a small share of the pie and take zero instead.77
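Thaler’s ultimatum game is simple enough to write down directly. The sketch below is a minimal illustration, not anything from Thaler’s own work: the function names are mine, and the 20-percent rejection threshold merely encodes the experimental regularity quoted above. It contrasts a payoff-maximizing Responder with the “indignant” Responder actually observed.

```python
# Minimal sketch of the ultimatum game: Proposer offers x out of a $10 pie;
# Responder accepts (payoffs: 10 - x, x) or rejects (payoffs: 0, 0).

PIE = 10.0  # the Proposer divides $10

def responder_selfish(x):
    """A payoff-maximizing Responder accepts any positive offer."""
    return x > 0

def responder_indignant(x, threshold=0.2):
    """An observed Responder rejects offers below 20% of the pie."""
    return x >= threshold * PIE

def play(offer, responder):
    """Return (proposer_payoff, responder_payoff) under a response rule."""
    if responder(offer):
        return PIE - offer, offer
    return 0.0, 0.0

# A low offer of $1: the selfish model predicts acceptance; the data show rejection.
print(play(1.0, responder_selfish))    # (9.0, 1.0)
print(play(1.0, responder_indignant))  # (0.0, 0.0)
```

The “emotion” the economist reports lives precisely in the gap between the two predicted outcomes, which is the point developed in the commentary that follows.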

It is important to note that the “emotions” underlying the agents’ behaviors—such as “reacting indignantly”—are not mediated by language and interpretation. In fact, the economist observing this Proposer and this Responder has no idea at all of what is going on in their minds. Emotion is induced from the observed behavior; it might in addition be induced from brain imagery; but it is not—except by accident—based on the agents’ own words. We might call this a differential induction of emotivity, based on the implicit assumption that emotion appears in the “gap” between how a fully self-interested person would have behaved (accepting any x>0) and how the actual person behaved (refusing even an offer of 20 percent of the pie). In essence, behavioral economics is—well, about behavior, meaning the directly visible effects of whatever goes on in agents’ brains. Motivations are inferred, or induced, from what one observes, just as in Traditional Economics preferences could sometimes be viewed as being “revealed” by observed choices. This follows a basic stance that has been, in economics, attributed to William Stanley Jevons, who is supposed to have claimed in 1871 that “men will never have the means of measuring directly the feelings of the human heart. It is from the quantitative effects of the feelings that we must estimate their comparative amounts.” Our earlier remarks on “emotions” in the Ultimatum Game went in the same direction: behavior is the quantitatively visible result of alleged brain processes and also—as Jevons rightly remarks—feelings “of the human heart,” meaning the psyche or the spirit. In other words, for the Marginalist economists who built neoclassical economics, as well as for their successors all the way into behavioral economics, the human brain and heart are “black boxes.” This is a limitation which the newly arising “neuroeconomics” claims to have overcome.
As Camerer, Loewenstein, and Prelec put it in a passage worth quoting at length:

Neuroscience uses imaging of brain activity and other techniques to infer details about how the brain works. […] [N]euroscience has proved Jevons’s pessimistic prediction wrong; the study of the brain and nervous system is beginning to allow direct measurement of thoughts and feelings. […] Neuroscience […] points to an entirely new set of constructs to underlie economic decision making. The standard economic theory of constrained utility maximization is most naturally interpreted either as the result of learning based on consumption experience […] or careful deliberation—a balancing of the costs and benefits of different options—as might characterize complex decision making […]. While not denying that deliberation is part of human decision making, neuroscience points out two generic inadequacies of this approach—its inability to handle the crucial roles of automatic and emotional processing. First, much of the brain implements “automatic” processes, which are faster than conscious deliberations and which occur with little or no awareness or feeling of effort […]. Second, our behavior is strongly influenced by finely tuned affective (emotion) systems whose basic design is common to humans and many animals. […] Human behavior thus requires a fluid interaction between controlled and automatic processes, and between cognitive and affective systems. However, many behaviors that emerge from this interplay are routinely and falsely interpreted as being the product of cognitive deliberation alone […]. These results […] suggest that introspective accounts of the basis for choice should be taken with a grain of salt. Because automatic processes are designed to keep behavior “off-line” and below consciousness, we have far more

77 Richard H. Thaler, “From Homo Economicus to Homo Sapiens”, Journal of Economic Perspectives 14 (2000): 133–41, quote from pp. 139-140.



introspective access to controlled than to automatic processes. Since we see only the top of the automatic iceberg, we naturally tend to exaggerate the importance of control.78

In other words, we are back with a Simon-type project of modeling sophisticated automata—though no longer solely on the basis of computer science and “naïve” computational views, but now supplemented by the newly acquired insights of state-of-the-art neuroscience, which views the brain as a sort of “biocomputer.” In a sense, the bias toward cognitive science, which was inherent in early cybernetics, is now being replaced by a bias toward neuropsychology. This is certainly more intriguing than the mere “intelligent systems” view initially propounded by Simon, which treats the brain as a calculator, or the purely behavioral approach, which treats the brain as a black box. Camerer, Loewenstein, and Prelec identify several new techniques that have reshaped economic psychology (in the same way as new mathematical tools and new computer techniques have reshaped economics as a whole in the more or less recent past): brain imaging (PET scans, functional MRI), “single-neuron measurement,” electrical brain stimulation (EBS), cognitive psychopathology and the study of brain damage (especially “transcranial magnetic stimulation” [TMS]), psychophysical measurement (heart rate, sweating, muscle micro-movements), and “diffusion tensor imaging” (DTI). These tools, they claim, are useful not just to get an image of what zones of the brain “light up” under what behavioral circumstances; they are aimed, ultimately, at understanding how the human brain uses its neural wiring to solve decision problems:

… the long-run goal of neuroscience is to provide more than a map of the mind. By tracking what parts of the brain are activated by different tasks, and especially by looking for overlap between diverse tasks, neuroscientists are gaining an understanding of what different parts of the brain do, how the parts interact in “circuitry,” and, hence, how the brain solves different types of problems.79

This neuropsychological approach “views consciousness as anchored in neural systems, neurotransmitters, and organic brain mechanisms. Unlike cognitive science, which is often based on computer science and is consequently vague about how consciousness is actually related to organic brain structures, neuropsychology is a more biologically based approach. Anchored in neuroscience more than computer science, it views consciousness as intrinsically residing in organic neural systems of sufficient complexity.”80 In fact, neuroeconomists take a relatively balanced approach combining cognitive/computational aspects (= the brain as a digital computer) and affective/emotional aspects (= the brain as an elaborate organ/biocomputer). Cognitive processes and affective processes coexist in the brain; roughly, they correspond to David Hume’s old split between “reason” and “passion”—or to the split between deliberation and reflex, perhaps even between the conscious and the subconscious. Camerer, Loewenstein, and Prelec suggest their own “four-quadrant” model:

78 Colin F. Camerer, George Loewenstein and Drazen Prelec, “Neuroeconomics: How Neuroscience Can Inform Economics”, Journal of Economic Literature 43 (2005): 9–64, quote from pp. 9-11 passim.
79 Ibid., p. 14.
80 Ken Wilber, “An Integral Theory of Consciousness” (1997), reprinted in Ken Wilber, Collected Works, vol. 7, Boston, MA: Shambhala, 2000, pp. 369–402, quote from p. 370.



                         Cognitive processes    Affective processes

Controlled processes             I                      II

Automatic processes             III                     IV

Here is how they briefly summarize the meaning of the quadrants:

Quadrant I is in charge when you deliberate whether to refinance your house, poring over present-value calculations; quadrant II is undoubtedly the rarest in pure form. It is used by method actors who imagine previous emotional experiences so as to actually experience those emotions during a performance; quadrant III governs the movement of your hand as you return serve; and quadrant IV makes you jump when somebody says “Boo!”81

Neuroeconomics and behavioral economics are very closely related in their basic theoretical aim, which is to give various post-neoclassical as well as older-style TE-approaches (complexity theory and evolutionary game theory in the post-neoclassical paradigm, conventional “Nash” behavioral game theory in the TE-paradigm) a mix of cognitive and neuropsychological foundations. We have seen repeatedly that neuroeconomics, but above all behavioral economics, rely very heavily on experimental data—from field research sometimes, but mainly from the laboratory. How are these data generated? This is the third, more empirically oriented component of the post-neoclassical paradigm—namely, so-called “experimental economics,” which seeks to ground the basic behavioral assumptions used in behavioral economics. Vernon Smith (Nobel laureate in 2002) has pioneered the experimental approach to economic modeling. An experiment, in his vocabulary, is a combination of several elements.82 Within a “laboratory” context—which is, in itself, a heresy for many social scientists who believe social reality cannot be replicated as it can be in physics or in chemistry—one draws up an environment which specifies agents’ endowments, their preferences and the costs they have in exchanging. Usually, creating this environment involves monetary rewards calibrated for the purposes of the experiment. One also creates an institution which specifies the sorts of messages to be exchanged and how they are to be communicated, as well as the rules under which they can become binding contracts. Invariably, creating this institution involves drawing up experimental instructions to be given to the “participants” in the laboratory. Most experiments are computer-driven, but some also have human participants in addition. Finally, one observes the participants’ behavior within the environment and institution.
This combination of environment, institution, and behavior basically takes literally Herbert Simon’s notion of an economic system being an “artificial” system, and Hayek’s idea that, after all, economic institutions and modes of interaction are created by humans through a long and winding process of evolution—but it stylizes these insights into the idea that artificial systems can be created in an instant within a confined setting, without the long evolutionary path that makes real-world artificial systems emerge.
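Smith’s triad of environment, institution, and behavior can be made concrete in a few lines. The sketch below is my own toy illustration, not a reconstruction of any specific experiment of Smith’s: the valuations, costs, shading rule, and uniform-price call auction are all assumed for the purpose of showing how the three elements fit together.

```python
import random

random.seed(0)

# --- Environment: agents' private valuations and costs (illustrative numbers)
buyer_values = [10, 9, 8, 7, 6]
seller_costs = [2, 3, 4, 5, 6]

# --- Behavior: boundedly rational shading of bids and asks
def make_bids(values):
    return [v * random.uniform(0.7, 1.0) for v in values]   # bid below value

def make_asks(costs):
    return [c * random.uniform(1.0, 1.3) for c in costs]    # ask above cost

# --- Institution: a uniform-price call auction
def clear(bids, asks):
    """Match highest bids with lowest asks; the price is the midpoint of
    the marginal pair. Returns (quantity traded, clearing price)."""
    bids, asks = sorted(bids, reverse=True), sorted(asks)
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k == 0:
        return 0, None
    return k, (bids[k - 1] + asks[k - 1]) / 2

quantity, price = clear(make_bids(buyer_values), make_asks(seller_costs))
print(quantity, price)
```

The point of the exercise mirrors Smith’s: one observes what the institution produces out of the behavior, rather than deducing the outcome from first principles; here, trade occurs and the clearing price falls between the lowest cost and the highest valuation.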

81 Colin F. Camerer, George Loewenstein and Drazen Prelec, “Neuroeconomics: How Neuroscience Can Inform Economics”, loc. cit., p. 19.
82 The following paragraph draws on Vernon L. Smith, “Economics in the Laboratory”, Journal of Economic Perspectives 8 (1994): 113–31.



When Smith started his pioneering work in the 1960s, it was the heyday of neoclassical general equilibrium theory and Simon’s work, though already prestigious, was not yet influential. Smith believed that to study the functioning of markets, the Walrasian as well as the non-Walrasian independent-agent approximations were far from sufficient, and one had to study actual interactions; short of being able to conduct large-scale ethnological surveys, one could at least try to replicate experimentally the functioning of markets, viewed as institutions and not just formal coordination devices. Part of Smith’s impetus came from the lack of data-based falsification and of inductive—as opposed to deductive—reasoning in standard neoclassical economics:

Economics as currently learned and taught in graduate school and practiced afterward is more theory-intensive and less observation-intensive than perhaps any other science. I think the statement [by Milgrom and Roberts in 1987] that “no mere fact ever was a match in economics for a consistent theory” accurately describes the prevailing attitude in the profession […]. This is because the training of economists conditions us to think of economics as an a priori science, and not as an observational science in which the interplay between theory and observation is paramount. Consequently, we come to believe that economic problems can be understood fully just by thinking about them. After the thinking has produced sufficient technical rigor, internal coherence and interpersonal agreement, economists can then apply the results to the world of data. But experimentation changes the way you think about economics. If you do experiments you soon find that a number of important experimental results can be replicated by yourself and by others. As a consequence, economics begins to represent concepts and propositions capable of being or failing to be demonstrated. Observation starts to loom large as the centerpiece of economics.
Now the purpose of theory must be to track, but also predict new observations, not just “explain” facts, ex post hoc, as in traditional economic practice, where mere facts may be little more than stylized stories.83

Experimental economics locates itself firmly within the bounded-rationality paradigm and aims to understand how “normal” humans interact and what emerges from their interactions—with the idea that it is what emerges that has to be made sense of by theory, and not theory that has to use experiments to validate itself ex post. “Aprioristic” economics of the hypothetico-deductive sort has to be replaced—at least according to Smith—by “aposterioristic” economics in which theory is guided by experimentally observed facts. Joining in with Camerer, Thaler, and others, Vernon Smith pays homage to Adam Smith, to Bernard Mandeville and to David Hume, those “Scottish philosophers” who had originally, so he claims, offered an approach to rationality quite distinct from what the TE-paradigm made of it:

… experimental economists have reported mixed results on rationality: people are often better (e.g., in two-person anonymous interactions), in agreement with (e.g., in flow supply and demand markets), or worse (e.g., in asset trading), in achieving gains for themselves and others than is predicted by rational analysis. Patterns in these contradictions and confirmations provide important clues to the implicit rules or norms that people may follow, and can motivate new theoretical hypotheses for examination in both the field and the laboratory. The pattern of results greatly modifies the prevailing, and I believe misguided, rational SSSM [= standard socioeconomic science model], and richly modernizes the unadulterated message of the Scottish philosophers.84

In other words, taking seriously the actual results of behavioral experiments, rather than viewing all of reality through the lenses of the M.Inst(s) and M.Eq axioms of the TE-paradigm, will lead to an across-the-board abandonment of that TE-paradigm—and to a return to “the unadulterated message of the Scottish philosophers,” which is the one we have already encountered: when boundedly rational persons interact, the result is an “ecological”

83 Vernon L. Smith, “Theory, Experiment and Economics”, Journal of Economic Perspectives 3 (1989): 151–170, quote from pp. 169-170.
84 Vernon L. Smith, “Constructivist and Ecological Rationality in Economics”, American Economic Review 93 (2003): 465–508, quote from p. 466.



emergence from limited adaptation through rules of behavior guided by low-capacity human brains. It is worth quoting Vernon Smith on this, since what he says of the deep aims of experimental economics ties in perfectly with complexity economics and its behavioralist foundations:

Ecological rationality uses reason—rational reconstruction—to examine the behavior of individuals based on their experience and folk knowledge, who are “naïve” in their ability to apply constructivist [i.e., TE-style and Nash-style] tools to the decisions they make; to understand the emergent order in human cultures; to discover the possible intelligence embodied in the rules, norms, and institutions of our cultural and biological heritage that are created from human interactions but not by deliberate human design. People follow rules without being able to articulate them, but they can be discovered. This is the intellectual heritage of the Scottish philosophers, who described and interpreted the social and economic order they observed. […] In experimental economics the eighteenth-century Scottish tradition is revealed in the observation of emergent phenomena in numerous studies of existing market institutions […].85
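The emergence of a rule that nobody designed can itself be illustrated with a toy simulation of the kind experimentalists feed into computers. In the sketch below (my own illustrative assumptions throughout: the population size, the imitation rule, the names), agents merely adapt to what most others are doing, and a single convention emerges from their interaction without any deliberate design.

```python
import random

random.seed(1)

# 21 agents, each initially committed at random to rule "A" or rule "B"
N = 21
choices = [random.choice("AB") for _ in range(N)]

def majority_among_others(choices, i):
    """The rule most common among everyone except agent i (None on a tie)."""
    others = choices[:i] + choices[i + 1:]
    a = others.count("A")
    b = len(others) - a
    if a == b:
        return None
    return "A" if a > b else "B"

# Each agent in turn adapts to what most others are doing. No one designs
# the final rule, yet one full pass suffices for a single convention to
# emerge (with an odd population, the initial majority only ever grows).
for i in range(N):
    m = majority_among_others(choices, i)
    if m is not None:
        choices[i] = m

assert len(set(choices)) == 1
print("emergent convention:", choices[0])
```

Which convention wins depends on the accidents of the initial draw, not on any agent’s intention; the economist’s “ecological rationality” consists precisely in reconstructing, after the fact, why such an undesigned order was reachable at all.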

The a posteriori attitude here consists in asking oneself what sort of rationality—probably not TE-style or Nash-style, which are usually contradicted in experiments—agents may have used in order to interact in such a way as to generate emergence. To study the operation of adaptive instrumental rationality, experimentalists ask: “What is the subjects’ perception of the problem they are trying to solve?”86—and then they feed their findings into computers and run simulations, rather than analytically solving a hypothetico-deductive model on paper. Sometimes, experimental insights allow us to falsify not only a model as a whole, but the core assumptions. Thus, it is no surprise that the core axioms of the TE-paradigm have finally been destroyed not so much by theoretical critique as by the persistent evidence coming from the experimental arena that axiom M.Inst(s) was untenable in its strongly functionalistic version subordinated to axiom M.Eq. But once the causes of a theory’s failure have been diagnosed, experimental work can “range beyond the confines of current theory to establish empirical regularities which can enable theorists to see in advance what are the difficult problems on which it is worth their while to work.”87

C.8. Beyond the post-neoclassical paradigm: Integrating ecological economics, monetary behaviorism, and critical political economy

Sections C.5, C.6, and C.7 have offered a detailed discussion of today’s two mainstream paradigms—the one that is on the way out, called Traditional Economics (TE), and the one that is on the way in, termed the post-neoclassical paradigm. For both historical and conceptual reasons, despite the fact that the latter pretty much “revolutionized” the former, these two paradigms remain closely linked and share some common limitations.
This is why, in Money and Sustainability: The Missing Link, Bernard Lietaer, Stefan Brunnhuber, Sally Goerner, and I have endeavored to promote yet another approach, based on three crucial elements we find missing in even the post-neoclassical paradigm:

• Much of complexity science and behavioral psychology, as used and applied in mainstream economics, continues to neglect the fact that the real-world economy is actually an open system: It draws resources (some of them non-renewable, others renewable but at finite rates) from a universe that is essentially a closed system; and it transforms those resources in ways that inevitably produce wastes. In other words, the

85 Ibid., pp. 469-470.
86 Ibid., p. 471.
87 Vernon L. Smith, “Theory, Experiment and Economics”, loc. cit., p. 114.



economy as an open system draws on upstream sources and fills downstream sinks. The laws of thermodynamics rule the whole economic process, meaning that one cannot do economics as if the agents who satisfice, learn, and adapt over time do so within an “empty world.” As Money and Sustainability emphasizes, the issue of building an economic system that can function on a permanent basis over time is not merely a side issue for economists: It is part and parcel of any economist’s work, to the extent that he or she has come to terms with the absolutely binding character of the biosphere’s finiteness. In that sense, any future paradigm in economics cannot avoid embracing ecological economics.

• Both the TE-paradigm and its post-neoclassical successor share a neglect of the behavioral efficaciousness of economic and social institutions. Changing institutional environments may lead agents to satisfice, learn, and adapt more or less “easily” over time, accumulating fitness credit in different manners and through different channels, but the basic content of people’s relationships to one another is assumed not to change. In other words, institutions are supposed to affect people’s choice sets—i.e., the “bundles” of goods they can reach for through their interactions—but not the type of social relations these people engage in or the qualitative ways in which they produce and exchange goods and services. In particular, money (defined, let us recall from Money and Sustainability, as any means of exchange that members of a community agree to accept in their mutual operations) is assumed to be behaviorally neutral: It is taken as a mere technical tool to facilitate exchanges and transactions that, by assumption, would have occurred in any case. One central message of Money and Sustainability is that this assumption—which lies at the very core of the TE- and post-neoclassical paradigms—is wrong.
One cannot do economics as if the monetary or non-monetary incentive systems that are in place did not affect people’s deeper aspirations and the kind of relations they cultivate—even the very kinds of goods and services they are prepared to provide to one another. Institutions, and money in particular, simply are behaviorally efficacious. In that sense, any future paradigm in economics cannot avoid embracing institutional, and in particular monetary, behaviorism.

• Nowhere in the post-neoclassical paradigm are economic agents assumed to be endowed with any capacities for critical reflection on the economy and for active analysis of the rules and institutions under which they would like to live their lives. Whether it is under the axiom of strongly functionalist equilibrium with the associated strong assumptions on (parametric or strategic) instrumental rationality, or under the axiom of weakly functionalist equilibrium and the associated assumptions on bounded and adaptive rationality, people are never assumed to be able to “look up” from their immediate tasks—whether it be all-embracing calculation or patchy, local adaptation—and deliberate on the mechanisms and constraints of the economy itself. Thus, no amount of “citizens’ economics” can be made sense of in the way complexity and behavioral economics model agents. However, another central message of Money and Sustainability is that this epistemological posture vis-à-vis individuals and groups is incorrect: In actual fact, citizens do think about, and then create, new economic tools and institutions. In fact, complementary currencies are one of the areas in which bottom-up citizen activity is extremely important.
Our concluding call for the involvement of businesses, citizens’ movements, and NGOs in the creation of an appropriate “monetary ecology” demonstrates that we conceive of economic agents as active participants in the medium- and long-term shaping of the economy itself, not just in its day-to-day functioning. One cannot do economics as if citizens were mere mechanical or bio-computers busying themselves solely with the

- C.55 -


tasks of strategic adaptation and narrow-range thinking. Economic agents, if inserted in the right sort of social and political environment, are willing and able to think critically and to form aspirations as to what “adaptation” and “thinking” are supposed to mean. In that sense, any future paradigm in economics cannot avoid embracing agents’ critical rationality and the construction of a “critical political economy” in which they actively exercise that rationality.

C.8.1. Making the post-neoclassical paradigm ecologically rational

While, for obvious reasons, all ecologists are sensitive to complexity science and behavioral psychology—since they study ecosystems which are, by their very nature, complex adaptive systems and, in many cases, complex flow systems—I should emphasize that not all economists preoccupied with complexity science and behavioral psychology within the post-neoclassical paradigm are also sensitive to ecological issues. In fact, as we saw in section C.7, what led to the emergence of the new paradigm was an enduring dissatisfaction with the TE-paradigm and its axioms—but Traditional Economics is, in and of itself, not at all attuned to ecological matters. The post-neoclassical paradigm started out from an impulse to revise the neoclassical notions of equilibrium and rationality, in order to bring process and interactivity into the picture, thus harking back to many intuitions harbored by the classical economists: Smith, Malthus, Ricardo, or Marx. But none of those prestigious eighteenth- and nineteenth-century thinkers were very much concerned with the economy being an open system within a closed biosphere. They were preoccupied by—at the time, quite legitimate—questions of population and economic growth, resource allocation, the development of industry and trade, technological progress, and so on.
When Vernon Smith talks about “ecological rationality,” he is actually attempting to create an alternative to what he calls “constructivist rationality,” which he sees as pervasive and problematic within Traditional Economics. He takes the opposition from Hayek. According to Smith,

The SSSM [standard socioeconomic science model] is an example of what Hayek has called constructivist rationality (or “constructivism”), which stems particularly from Descartes (also Bacon and Hobbes), who believed and argued that all worthwhile social institutions were and should be created by conscious deductive processes of human reason. […] Cartesian rationalism provisionally assumes or “requires” agents to possess complete payoff and other information—far more than could ever be given to one mind. […] These considerations lead to the second concept of rational order, as an undesigned ecological system that emerges out of cultural and biological evolutionary processes: homegrown principles of action, norms, traditions, and “morality.” Ecological rationality uses reason—rational reconstruction—to […] discover the possible intelligence embodied in the rules, norms, and institutions of our cultural and biological heritage that are created from human interactions but not by deliberate human design.88

Basically, ecological rationality as understood by Smith is a form of reason used by the economist—not the agents themselves—to reconstruct a posteriori the emergent genesis of social institutions and rules. It embodies all types of “invisible hand” arguments whereby an economist rationalizes an outcome or a set of rules by explaining how they might have been the result of the “blind” interaction of boundedly rational agents adopting rudimentary adaptive strategies. This sort of account of the genesis of outcomes and rules is “ecological” in a very broad and general sense: It mimics the idea of non-intentional evolution which tries, on the basis of simple survival strategies on the part of genes or individuals, to explain how elaborate aggregate patterns grew, unintended and unpredicted, out of simple or even simplistic individual patterns.

Vernon L. Smith, “Constructivist and Ecological Rationality in Economics”, loc. cit., pp. 466-470 passim.

Clearly, one can build models of bounded rationality in which interaction between agents generates the emergence of polluting industries or of steep economic growth paths. Adam Smith’s famous “invisible hand” model of a capitalist market economy is an ecological-rationality explanation of the emergence of an ecologically destructive economic system. Simply because one happens to be using the vocabulary of “evolution,” “adaptation,” “ecosystems,” and “organic emergence” does not mean one is also heeding the necessities of the thermodynamics of natural resources. One can build models of the “evolution” of heavy industries, of the “adaptation” of the oil industry to new market conditions, of “ecosystems” of chemical SMEs, or of the “organic emergence” of low-cost airlines. Such models are far from “ecological” in another sense—i.e., embodying a concern for the finiteness of natural resources and of natural sinks. In his otherwise pathbreaking book, Eric Beinhocker commits this exact same conflation of the two meanings of the word “ecological” when he explains that financial markets do not conform to the efficient-market thesis because they are “ecosystems”:

A fundamental claim of Traditional finance is that any patterns or signals in the market will be arbitraged away by ever vigilant and greedy investors. […] Traditional finance assumes that all investors have access to the same information, and if there are any patterns in stock prices, investors will see them and take them into account in pricing decisions, thus driving the market back to its random walk. [Physicist Doyne] Farmer found, however, that […] markets form a kind of evolving ecosystem. The markets are populated by heterogeneous traders and investors with a variety of mental models and strategies.
As those agents interact with each other over time, they constantly learn and adapt their strategies—in fact, one could say they deductively tinker their way through the Library of All Possible Investment Strategies. The complex interactions of these agents, their changing strategies, and new information from their environment cause patterns and trading opportunities to constantly appear and disappear over time. The Santa Fe Institute’s Brian Arthur once poetically called markets “ecosystems of expectations” to describe this interplay between agents and their strategies. […] The results of the Santa Fe model did a good job of replicating the key statistical characteristics of real-world markets, such as clustered volatility (i.e., punctuated equilibrium).89
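The adaptive-agent dynamic Beinhocker describes can be sketched in a deliberately minimal toy model; this is an illustration of the general idea, not the Santa Fe artificial stock market itself, and every name and parameter in it (the 5% revision probability, the price-impact coefficient, the noise scale) is an assumption chosen for the sketch. Agents follow either a fundamentalist rule (bet on reversion to fundamental value) or a chartist rule (follow the recent trend), and occasionally switch to whichever rule has just performed better:

```python
import random

def toy_market(steps=500, n_agents=100, fundamental=100.0, seed=1):
    """Toy adaptive-agent market (illustrative sketch only).
    Each agent is either a fundamentalist (True) or a chartist (False)
    and may imitate the rule that better predicted the last price move."""
    rng = random.Random(seed)
    prices = [fundamental, fundamental]
    strategies = [rng.random() < 0.5 for _ in range(n_agents)]
    for _ in range(steps):
        p, prev = prices[-1], prices[-2]
        # Each rule's desired position: +1 = buy, -1 = sell.
        fund_signal = 1 if p < fundamental else -1   # bet on reversion
        chart_signal = 1 if (p - prev) > 0 else -1   # follow the trend
        demand = sum(fund_signal if s else chart_signal for s in strategies)
        # Price moves with excess demand plus exogenous news (noise).
        new_p = max(p + 0.01 * demand + rng.gauss(0, 0.5), 0.01)
        prices.append(new_p)
        # Score each rule by whether it predicted the realized move,
        # and let a few agents imitate the better-performing rule.
        realized = new_p - p
        better = (fund_signal * realized) > (chart_signal * realized)
        for i in range(n_agents):
            if rng.random() < 0.05:  # occasional strategy revision
                strategies[i] = better
    return prices

prices = toy_market()
print(len(prices), round(min(prices), 2), round(max(prices), 2))
```

Because the population of strategies keeps shifting, any exploitable pattern tends to erode as agents pile onto it, which is the qualitative point of the “ecosystem of expectations” metaphor.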

While such formal properties of markets certainly can be compared to the similar properties harbored by actual ecosystems in nature, there is no connection at all here with what the newly emerging paradigm of ecological economics views as the central aspect of a genuinely scientific economics: … conventional economics sees the economy, the entire macroeconomy, as the whole. To the extent that nature and the environment are considered at all, they are thought of as parts or sectors of the macroeconomy—forests, fisheries, grasslands, mines, wells, ecotourist sites, and so on. Ecological economics, by contrast, envisions the macroeconomy as part of a larger enveloping and sustaining whole— namely, the Earth, its atmosphere, and its ecosystems. The economy is seen as an open subsystem of that larger “Earthsystem.” That larger system is finite, nongrowing, and materially closed, although open to solar energy. […] [I]f the economy is the whole, then it can expand without limit. It does not displace anything and therefore incurs no opportunity cost—nothing is given up as a result of physical expansion of the macroeconomy into unoccupied space. But if the macroeconomy is a part, then its physical growth encroaches on other parts of the finite and nongrowing whole, exacting a sacrifice of something—an opportunity cost, as economists would call it. […] The Earth-ecosystem is not a void; it is our sustaining, life-supportive envelope. It is therefore quite conceivable that at some point further growth of the macroeconomy could cost more than it is worth. […] Growth can be uneconomic as well as economic. There is an optimal scale of the macroeconomy relative to the ecosystem. How do we know we have not already reached or passed it?90

89 Eric Beinhocker, The Origin of Wealth, op. cit., pp. 391-393 passim.
90 Herman E. Daly and Joshua Farley, Ecological Economics: Principles and Applications, second edition (Washington, DC: Island Press, 2011), pp. 15-16.

In the context of ecological economics, the notion of “ecological rationality” implies that we—both as top-down policy makers and as bottom-up citizens—ask ourselves how we can manage the economy as a subsystem of the Earth-ecosystem, i.e., how we can use the powers of reason in order to build and operate an economy that will durably remain within the strict boundaries prescribed by the closed Earth-ecosystem. Being ecologically rational means building an ecologically literate economic paradigm which, while not rejecting the important achievements of the post-neoclassical paradigm, broadens and deepens it to include various ecologically crucial concepts and tools, both at the level of macro-communities (bio- and geoscientific knowledge, entropy thinking, systems thinking, resilience, intergenerational calculus, evaluation of ecosystem services, etc.) and at the level of micro-citizens (commons thinking, ecological intelligence, grounded economic awareness, etc.).91 Economic agents simply can no longer be assumed to be the myopic, narrowly adaptive entities postulated by the post-neoclassical paradigm. Nor can the economy as a whole be conceived as if the material limitations of sources and sinks did not exist. In fact, the basic thermodynamic facts of entropy have to become part of individual agents’ rationality, along with basic aspects of systems thinking and commons thinking. In other words, any economic paradigm building on today’s post-neoclassical mainstream will have to quite significantly alter its conception of people’s rationality and values.
This does not mean we have to naively postulate that all agents already care equally strongly about ecological issues—they do not, of course, as recent experimental fieldwork on denial attitudes demonstrates.92 However, we do have to include the notion of denial itself in our reflection on the economy: How is it that so many “rational” economic agents can be induced, by the very economic system within which they interact, to neglect their own critical potentials and forego a higher level of environmental awareness? What are the deep-seated fears and anxieties that lead so many agents to remain in denial vis-à-vis ecological threats, and how is the functioning of the economic system instrumental in solidifying these fears and anxieties? This crucial theme—which I view as part of a paradigm of existential ecological economics to be developed urgently—ties in with the topic of “critical political economy” to be developed in section C.8.3 below. Is ecological economics constructivist? Is it a threat to the “ecological rationality” put forward by Hayek and Vernon Smith? Yes and no. There is, indeed, a constructivist element in ecological economics, in that certain large environmental externalities can only be dealt with at an aggregate level through partly non-market mechanisms that involve planning and political consensus. Particularly because of the problem of denial, and because of the related dominance of short-term reasoning among agents in the current economic system (a feature which Money and Sustainability also connects with the functioning of our monetary system, as we shall see in section C.8.2 below), bottom-up emergence through market interactions or collective action cannot necessarily be counted on. The agents postulated in the mainstream post-neoclassical paradigm are way too unaware of global ecological issues to be entrusted with spontaneously creating the conditions for the survival of humankind within the Earth-ecosystem.
In that sense, a strong element of constructivism has to be present: Political decision makers, as well as more ecologically aware or enlightened citizens, need to take the lead in setting up mechanisms and institutions that will create incentives for even non-committed, non-enlightened agents to contribute to sustainability. (Some of the complementary currencies discussed in Money and Sustainability are driven by this realistic philosophy.)

91 See e.g. Arran Stibbe (ed.), The Handbook of Sustainability Literacy: Skills for a Changing World (Totnes: Green Books, 2009).
92 See, in particular, Kari Marie Norgaard, Living in Denial: Climate Change, Emotions, and Everyday Life (Cambridge: MIT Press, 2011).

However, bottom-up emergence is not thereby rendered useless. We saw that the agents postulated by the post-neoclassical paradigm do have a capacity, and a will, for interindividual learning. Therefore—and this, again, ties in with the “critical political economy” theme of section C.8.3 below—if the mechanisms and institutions initially constructed succeed in raising the level of sustainability awareness of a sufficient number of economic agents (consumers, entrepreneurs, investors, etc.), the spread of new ideas and practices may get triggered, and markets as well as NGOs may gradually become more and more conducive to sustainability generated from the bottom up. Ultimately, the distinction between top-down regulation and bottom-up emergence may vanish completely, as critically and ecologically rational economic agents end up internalizing by themselves the necessities of sustainability and cooperate politically to create the mechanisms and institutions subject to which they wish to interact. We would then have a framework in which to make sense of ecologically driven social complexity. Thus, intersecting the current post-neoclassical paradigm with the ecological economics paradigm is certainly an urgent task for any economist who wishes to contribute to a sustainable world while firmly remaining within the state-of-the-art canons of his or her profession.93 In Money and Sustainability we have not developed such an intersection fully at the methodological level, but we have endeavored to offer the reader sufficient operational elements in the area of monetary reform so that he or she can get a taste for what the new paradigm might feel like.

C.8.2. Making the post-neoclassical paradigm sensitive to the behavioral efficaciousness of money

It is striking that different findings from neuroscience seem to have opposing implications when it comes to the behavioral efficaciousness of institutions, and of money in particular.
Let me highlight three findings put forward by Colin Camerer, George Loewenstein, and Drazen Prelec.

1. First, “neuroscience findings raise questions about the usefulness of the most common constructs that economists commonly use, such as risk aversion, time preference, and altruism.”94 Degrees of risk aversion, preference for the present, or other-regarding behavior may differ across situations for a given individual, because of a property of the brain called “modularity,” which means that the brain “composes” our motivations out of a number of basic modules which get combined differently in different situations. All intertemporal tradeoffs, and all “me-or-him” tradeoffs, may not be the same even for the same agent.

2. Second, “the existence of specialized systems [within the brain] challenges standard assumptions about human information processing and suggests that intelligence and its opposite—bounded rationality—are likely to be highly domain-specific.”95 The basic idea there is that (to focus just on the cognitive half) our brains work with certain specialized quadrant-III subsystems which work incredibly fast and effortlessly, but also with certain serial and effortful quadrant-I processes; thus, to solve a given problem some agents may have an automatic process available and look like very rational individuals, while others may have to do the same task in a conscious, controlled way and look like clumsy flat-foots… Some agents have domain-specific expertise in some areas, so that using an

93 A good example of what is already being done in this area is Giuseppe Munda, “Conceptualising and Responding to Complexity”, EVE (Environmental Valuation in Europe), Policy Research Brief #2, Cambridge Research for the Environment, 2000.
94 Colin F. Camerer, George Loewenstein and Drazen Prelec, “Neuroeconomics: How Neuroscience Can Inform Economics”, loc. cit., p. 31.
95 Ibid., p. 32.

across-the-board assumption of bounded rationality or of unbounded rationality is unwarranted.

3. Third, “brain-scans conducted while people win or lose money suggest that money activates similar reward areas as do other ‘primary reinforcers’ like food and drugs, which implies money confers direct utility, rather than simply being valued only for what it can buy.”96 Findings seem to suggest that the same brain areas are activated when subjects perceive a beautiful face, a funny cartoon, a sports car, a drug, or money. This means that money enters agents’ utility functions not just indirectly, as purchasing power, but also directly—and that, therefore, parting with one’s money may be painful even if one is buying a pleasant or desired object. “The pain-of-paying may also explain why we are willing to pay less for a product if paying in cash than by credit card.”97

What is striking about the way in which these three findings are presented is that while the first two point toward a capacity of the human brain to be highly modular and to mobilize very contextual competences—i.e., a capacity to be very flexible depending on the overall situation in which the agent finds him- or herself—the third appears to put “money” on the same level as “a beautiful face” or “a drug.” Never do the researchers ask: What kind of money? Are we sure that what we are detecting in our experiment is the effect of “money per se,” or are we in fact seeing the neurological effects of monopolistic bank-debt money? Thus, money is presented as a generic “thing” on a par with other things that have fixed neurological effects: an addictive substance or a perception of beauty. This makes money—which in these contemporary experiments can only be monopolistic bank-debt money—into an all-purpose, neutral object. This reflects the commonly held view that “money” is a homogeneous substance. In fact, as we argue at great length in Money and Sustainability, this view is erroneous.
That it should slip even into the research presuppositions of high-ranking experimental and behavioral economists is revealing of how deep the problem is. Here is how Richard Douthwaite expresses this problem: Most people think that there’s only one type of money because one type is all they’ve ever known. […] Money is money, they think, regardless of the form it takes. Only the few who know a little monetary history, or are members of a Local Exchange Trading System (LETS), realize that this is not the case. There are, potentially at least, many different types of money, and each type can affect the economy, human society and the natural environment in a different way. Most economists think that there’s only one type of money too. That is when they think about it at all. […] David Hume, one of the founding fathers of economics, referred to money as “the oil which renders the motion of the wheels smooth and easy,” and this attitude persists to this day. Indeed, Paul Samuelson’s well-known economics textbook defines economics as “the study of how men and society choose, with or without money, [my italics] to employ scarce productive resources.” In other words, economists see money acting as a catalyst that eases and speeds up economic interactions that would have taken place anyway. […] However, very few seem to have ever considered the possibility that the particular type of monetary catalyst in use might be affecting the outcome of the economic interaction, and that if other forms of money were used the results might be quite different. […] [H]istory is littered with examples of monetary systems that operated on quite different lines to the one we know at present. If these systems had survived, they would have produced cultures most unlike today’s unsustainable, unstable global monoculture. […] Certainly, if we wish to live more ecologically, it would make sense to adopt monetary systems that make it easier for us to do so.98

Similarly, Thomas Greco writes that

96 Ibid.
97 Ibid., p. 37.
98 Richard Douthwaite, The Ecology of Money (Totnes: Green Books, 1999), pp. 9-10. (Italics added except for those already added by Douthwaite.)

Money is the vital medium within which we live our economic lives. It is the central element around which many of our interpersonal relationships are organized. It is no exaggeration to say that the quality and essence of our medium of exchange, our money, are crucial to the quality of our lives—our social interactions, our personal priorities, our relationship to the earth, and our ability to satisfy basic human needs. As water is to the fish, so money is to people. Though we are largely unconscious of it, its quality (as opposed to quantity) is crucial. When the water is polluted, the fish sicken and die; when money is “polluted,” our economy malfunctions, and people suffer as their material needs go unmet and social dynamics are distorted.99

There is a widespread consensus among monetary reformers that the unspoken reduction of monopolistic bank-debt money to a generic, unqualified entity called “money” is a major source of intellectual as well as political powerlessness. In fact, the very notion of “economic rationality,” with its postulated primacy of competitiveness over cooperativeness, is partly a by-product of a specific way of creating and circulating money, as Bernard Lietaer has suggested:

When the bank creates money by providing you with your £100,000 mortgage loan, it creates only the principal when it credits your account. However, it expects you to bring back £200,000 over the next twenty years or so. If you don’t, you will lose your house. Your bank does not create the interest; it sends you out into the world to battle against everyone else to bring back the second £100,000. Because all other banks do exactly the same thing, the system requires that some participants go bankrupt in order to provide you with this £100,000. To put it simply, when you pay back interest on your loan, you are using up someone else’s principal. […] In summary, the current monetary system obliges us to incur debt collectively, and to compete with others in the community, just to obtain the means to perform exchanges between us. No wonder “it is a tough world out there,” and that Darwin’s observation of the “survival of the fittest” was so readily accepted as self-evident truth by the 19th century English, as well as by any societies that have accepted, without question, the premises of the money system that they designed, such as we have today.100
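Lietaer's figures follow from standard amortization arithmetic. The sketch below uses the textbook annuity formula; the 8% annual rate is an illustrative assumption (not a figure from the quote), chosen because it makes the total repaid on a 20-year £100,000 loan come out at roughly £200,000:

```python
def total_repaid(principal, annual_rate, years):
    """Total paid on a fully amortizing loan with fixed monthly
    payments (standard annuity formula)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    payment = principal * r / (1 - (1 + r) ** -n)
    return payment * n

# A £100,000 mortgage over 20 years at an assumed 8% annual rate:
total = total_repaid(100_000, 0.08, 20)
interest = total - 100_000
print(round(total), round(interest))
# The bank created only the £100,000 principal; the extra interest,
# roughly another £100,000 here, must be obtained from money already
# circulating elsewhere in the system.
```

The exact rate at which repayment doubles depends on the term and compounding convention, but the structural point is rate-independent: the principal is created by the loan, the interest is not.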

The intense focus of individuals and groups—such as businesses—on growth is also a by-product of interest-bearing bank-debt money. So is, for instance, our postmodern civilization’s obsession with novelty and obsolescence, as well as our tendency to demand and consume low-quality, non-durable goods.101 By contrast, different ways of creating and circulating means of exchange have, in the past, led to substantially different cultures and civilizations. The Central Middle Ages are one example:

Two different types of currencies functioned in parallel to one another throughout much of Western Europe during the Central Middle Ages. One type of currency consisted of centralized royal coinage, with many features in common with present-day national currencies. Its usage was primarily for long-distance trading and for the purchase of luxury goods. The second type of currency consisted of an extensive network of different local currencies, used primarily for community exchanges. Many of the local currencies had a very peculiar feature—a demurrage charge. Similar to a negative interest on money, the demurrage feature functions like a parking fee, which is levied for holding onto the currency for too long without spending it. […] In technical terms, when demurrage is applied, money continues to function as a “medium of exchange” but no longer as a “store of value,” that is, something worth hoarding. Though saving was much encouraged, it was not done by storing currency, but took the form of productive assets. Examples of such investment were land improvements or high-quality maintenance of equipment such as water wheels or windmills, or enduring investments in the community such as the cathedrals. […] In effect, a pattern of longer-term investments became the norm rather than the exception. […] Demurrage-charged complementary currencies also help to explain the particular Central medieval economy.
Given that savings [through hoarding] were inherently discouraged by demurrage, these currencies would remain in

99 Thomas H. Greco, Money: Understanding and Creating Alternatives to Legal Tender (White River Junction: Chelsea Green, 2001), p. 3.
100 Bernard Lietaer, The Future of Money: Creating New Wealth, Work and a Wiser World (London: Century, 2001), pp. 51-52.
101 See e.g. Michael Rowbotham, The Grip of Death: A Study of Modern Money, Debt Slavery and Destructive Economics (Charlbury: John Carpenter, 1998).

circulation and were exchanged with far greater frequency at all levels of society, in contrast to other forms of money. The greater velocity of circulation […] enabled the less privileged classes to engage in substantially more transactions, which significantly improved their standard of living.102
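The behavioral contrast between interest-bearing and demurrage-charged money comes down to a one-line compounding calculation. The 5% rates below are illustrative assumptions, not historical figures:

```python
def hoarded_value(balance, rate_per_year, years):
    """Value of an idle cash balance after compounding.
    A positive rate models interest-bearing money; a negative rate
    models demurrage (the 'parking fee' on money held unspent)."""
    return balance * (1 + rate_per_year) ** years

# Illustrative rates: +5%/year interest vs. -5%/year demurrage,
# applied to 100 units of currency hoarded for 10 years.
with_interest = hoarded_value(100.0, +0.05, 10)   # hoarding rewarded
with_demurrage = hoarded_value(100.0, -0.05, 10)  # hoarding penalized
print(round(with_interest, 2), round(with_demurrage, 2))
```

Under interest the idle balance grows to about 163, so hoarding pays; under demurrage it shrinks to about 60, so the rational move is to spend or invest in productive assets, which is exactly the higher velocity of circulation the quoted passage describes.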

It is therefore time to move from what we could call monetary essentialism—i.e., the misguided view that “money” is a neutral, almost metaphysical entity whose effects remain the same regardless of the concrete shape it takes—to monetary behaviorism: the empirically correct view that money can take different concrete shapes and that, depending on the shape it takes, its effects will actually vary greatly. This second property is, as we saw, vindicated by the neuroscientific findings that tell us that the human brain is characterized by modularity and context-specific competences—with the implication that engineering new forms of money beyond the monopolistic bank-debt form is, in fact, likely to profoundly modify the behavioral basis of our culture and our civilization. Monetary behaviorism can and should be intersected—as we have begun to do in Money and Sustainability—with the “ecologically driven social complexity” approach. In doing so, we can promote a paradigm that recognizes the foundational character of money as an incentive mechanism while allowing us to focus some of those monetary incentives on sustainability issues within a complex economy.

C.8.3. Opening the post-neoclassical paradigm to critical and existential rationality

Intersecting the post-neoclassical paradigm with the ecological economics paradigm and with monetary behaviorism creates a rich methodology, which we want to promote through Money and Sustainability.
However, there is an element still missing—one we have already touched upon briefly in section C.8.1 when we discussed the two meanings of “ecological rationality”: is it the economist’s own rationality as a “thinker” who realizes the importance of ecological and sustainability issues and then constructs the “right” incentive mechanisms (from the top down, as it were), or is it also the citizens’ rationality as they gradually become aware of the ecological threats and urgencies weighing upon the planet and begin to take action (from the bottom up)? What we have found consistently disturbing in our detailed study of post-neoclassical economics in section C.7 is the deeply ingrained mechanicism that pervades even the most sophisticated approaches. Complexity, behavioral, and neuroeconomics have been found lacking because they consistently present collective and individual economic life as mindless—not as “brain-less,” of course, since quite to the contrary the role of the human brain’s chemistry, electricity, and emergence-generating complexity has become more and more central; but as mind-less, in the sense that individual systems embedded in larger collective systems are seen as highly sophisticated “problem-solving” devices, as individual biocomputers within collective sociocomputers.

The disturbing issue is: What if this is indeed a true description of what we are? Not true in an absolute sense, but true given the lives we lead and have been leading for centuries or even millennia? What if the majority of humanity is indeed, in its current state, a collection of “subjectless” entities scurrying around in a “blind” natural world, determined by “blind” natural processes, and generating the emergence of “subjectless” social processes? To put things differently, is not the way in which post-neoclassical economics portrays social and economic life simply the “hidden truth” of how we actually lead our lives?
Are we, in fact, more than mechanical entities interacting with other mechanical entities to generate emergent mechanical entities? What if exteriors without interiors are all there is? Recall, for instance, Camerer, Loewenstein and Prelec’s discussion of “affective” processes in their survey

102 Bernard Lietaer and Stefan Belgin, New Money for a New World (Boulder: Qiterra, 2012), pp. 68-70 passim.

of neuroscientific contributions to economics. They made it quite clear that what goes on in our affective neural processes is not conscious and is not linked to feelings or emotions as conscious experiences. I quoted them as saying the following: Most people undoubtedly associate affect with feeling states, and indeed most affect states do produce feeling states when they reach a threshold of intensity. However, most affect probably operates below the threshold of conscious awareness […]. As Rita Carter [in her book Mapping the Mind, 1999] comments, “the conscious appreciation of emotion is looking more and more like one quite small, and sometimes inessential, element of a system of survival mechanisms that mainly operate—even in adults—at an unconscious level.” For most affect researchers, the central feature of affect is not the feeling states associated with it, but its role in human motivation. […] Affective processes […] are those that address “go/no-go” questions—that motivate approach or avoidance behavior.103

This can be seen as a positive, “scientific” description of the way things are—or it can be seen as a critique of the habitual “mechanicalness” of human behavior in the world as we humans have shaped and consolidated it. In other words, it can be seen as scientific support for a critique of our fundamental spiritual alienation! Jeanne de Salzmann has written the following, which could be considered as a conscious subject’s pained reaction to the sort of neuroscientific data described by Camerer, Loewenstein, and Prelec: I have the power to rise above myself and to see myself freely … to be seen. My thought has the power to be free. But for this to take place, it must rid itself of all the associations which hold it captive, passive. […] Otherwise, our thoughts are just illusions, objects which enslave us, snares in which real thought loses its power of objectivity and intentional action. Confused by words, images, forms that attract it, it loses the capacity to see. It loses the sense of I. Then nothing remains but an organism adrift. A body deprived of intelligence. Without this inner look, I can only fall back into automatism, under the law of accident. […] Each time, the first step is the recognition of a lack. I feel the need for real thought. The need for a free thought turned toward myself so that I might become truly aware of my existence. An active thought, whose sole aim and sole object is I … to rediscover I. So my struggle is a struggle against the passivity of my ordinary thought. Without this struggle a greater consciousness will not be born. […] Without this effort, thought falls back into a sleep filled with words, images, preconceived notions, approximate knowledge, dreams, and perpetual drifting. This is the thought of a man without intelligence. It is terrible to suddenly realize that one has been living without a thought that is independent—a thought of one’s own—living without intelligence, without something that sees what is real […].104

In other words, what the neuroeconomists are describing is not false, but it might not be spiritually acceptable: if something here is “spiritually ill,” perhaps it is not the economics profession as such, with its focus on the post-neoclassical paradigm, but the psychosomatic and socioeconomic reality it apprehends via that paradigm. More precisely, it may be that post-neoclassical economists dismiss the autonomy of personal subjectivity and collective intentionality because, in actual fact, the economic agents they are modeling—you and I—are themselves virtually unaware of the existence of subjectivity and collectivity in their own ongoing awareness… This amounts to implicitly assuming that the agents themselves use a paradigm of knowledge that instructs them to “treat” their environment as if they were the sorts of agents which neuroeconomics conceives them to be—sophisticated, information-processing automata with no awareness of any “I” or any “We.” Thus, we would have two paradigms coexisting: the economist’s paradigm and the agents’ paradigm. From the point of view of the economist, the agents’ paradigm is something like the four-quadrant model set out by Camerer, Loewenstein, and Prelec in section C.7.6.

103 Colin F. Camerer, George Loewenstein and Drazen Prelec, “Neuroeconomics: How Neuroscience Can Inform Economics”, loc. cit., p. 18.
104 Jeanne de Salzmann, “The Awakening of Thought” (1958), reprinted in Jacob Needleman (ed.), The Inner Journey: Views from the Gurdjieff Work (Sandpoint: Morning Light Press, 2008), pp. 2–3.

The issue we are grappling with at the moment can now be reformulated as follows. Given the nature of what we could call the agents’ “spontaneous” paradigm, economists seem to be justified in structuring their scientific paradigm in the way they have—i.e., as a paradigm combining the agents’ paradigm with complexity economics. Why, then, should we insist that economists “open up” their approach to agents’ awareness of “I” and “We”? Is not everything conditioned by the agents’ own narrow and mechanical view of reality, which it is the duty of economists to study as such, and which we epistemologists seem to have no right to criticize them for?

Asking the question in this way is disturbing because it implies that, as long as the agents themselves do not, at their own level, open up their awareness to “I” and “We,” economics need not, at its own level, “force” empirical reality and study subjectivity- and culture-related issues which “real agents” are indifferent to. If neuroeconomists and the neurophysiologists who inspire them are empirically right that we humans and our societies are basically nothing more than sophisticated biocomputers without ultimate interiors, there is no rush to build an economics that denies this, is there?

However, powerful as it is, the objection does not ultimately hold water. In fact, what if awareness of oneself as more than a mere computer and of the world around oneself is, indeed, constantly active—though not always acknowledged nor even always “remembered”—in any agent’s awareness? Then the only way to make sense of this is to accept the idea that the agents observed by neuroeconomics, and combined with complexity economics in the post-neoclassical paradigm, are people who live their lives in actual “forgetfulness” of their own interiority and of the “big picture” surrounding them. These agents cannot be called anything but existentially and culturally alienated, though actually real, people.
Most spiritual traditions qualify such people as “asleep” or “unaware,” even “unintelligent,” as Jeanne de Salzmann’s text above shows (and contrary to Herbert Simon, who talks about adaptive economic agents as “intelligent systems”). Positive economics of a post-neoclassical variety can legitimately take such people and their interactions as its object, since these people are indeed real, as attested by the data collected by brain science and experimental economics. However, what such positive economics will thereby do is simply reinforce the empirical validity of alienation. By structuring economic knowledge along the lines of the post-neoclassical paradigm, we are bound to reinforce the “automaticity” or the “mechanicalness” with which most empirical economic agents spontaneously go about their economic lives, without an experienced sense of “I” and of “We”—spiritual traditions would say, without any “self-remembrance.”

How exactly does post-neoclassical economics reinforce the reality it studies? Simply by teaching and disseminating a theoretical and empirical picture of human biocomputers who seem to have lost all capacity to perform what de Salzmann calls “the first step,” which is “the recognition of a lack.” This lack, she says, is the fact that most of the time, most of us are oblivious to “free thought turned toward myself so that [we] might become truly aware of [our] existence”—both individually (which is what she dwells on most) and collectively. She also calls this the “inner look,” without which we “can only fall back into automatism.” Echoing this insight, and insisting on the fact that automatic action is—even when “rational”—connected only with exteriors, Piotr Ouspensky has said the following:

What does it mean that man is a machine? It means that he has no independent movements, inside or outside of himself. He is a machine which is brought into motion by external influences and external impacts. All his movements, actions, words, ideas, emotions, moods, and thoughts are produced by external influences.



By himself, he is just an automaton with a certain store of memories of previous experiences, and a certain amount of reserve energy.105

One could not dream of a more exact description of the sort of human agent complexity and neuroeconomists have in mind, and use successfully in experimental economics in order to explain real-world phenomena. The question is, Is this “real world” spiritually acceptable? Without the ability of these same agents actually to exercise free thought and to perform the “inner look” so as to validate that external reality existentially, who can say that the reality economists successfully describe is humanly adequate? Humans may be unconsciously, and without any awareness of it, living in a systemic hell of their own—mechanical—making. And they are the only ones who can, through subjective experience (“I”) and communication of that experience (“We”), find out whether it is hell and, if so, what to do about it. This requires ceasing to be mere machines:

In the English language there are no impersonal verbal forms which can be used in relation to human actions. So we must continue to say that man thinks, reads, writes, loves, hates, starts wars, fights, and so on. Actually, all this happens. Man cannot move, think, or speak of his own accord [i.e., man has no interior]. He is a marionette pulled here and there by invisible strings. If he understands this, he can learn more about himself, and possibly even things begin to change for him. But if he cannot realize and understand his utter mechanicalness, or if he does not wish to accept it as a fact, he can learn nothing more, and things cannot change for him. Man is a machine, but a very peculiar machine. He is a machine which, in right circumstances, and with right treatment, can know that he is a machine, and, having realized this, he may find the ways to cease to be a machine.106

The question is whether the post-neoclassical language, which speaks of “rational” action within a setting without any interior experience of “rationality,” can be of any help in “find[ing] the ways to cease to be a machine.” The answer is clearly negative: Without interiors explicitly taken into account, and not reduced to sophisticated sorts of exteriors, sophisticated biocomputers will remain what they are—and an economic science which presents them as such will consolidate the perception, by the users of economics, that agents are indeed mere biocomputers and can, especially for macro-management purposes, be treated as such.

So economists, often without being fully conscious of it, harbor an intellectually as well as socially perilous agenda; they devise macro- or micro-policies—to be implemented on us—subject to the “working assumption” that our interiors do not “really” matter. These are often harsh measures which neglect human frailty and ambiguity and make it sound as if, when you are unemployed or poor, there is something wrong with your functional fit into the system. “Incentive schemes” are built up so that, on the basis of simplistic behavioral stimulus-response models, you can be expected to react in such a way that it will be your rational choice to fit back into the system, regardless of whether the function then assigned to you is a humanizing, potential-deploying one. That is not part of social statistics, and it is not part of cutting-edge neuroeconomics and complexity economics, either. Keep It Simple is the standard economics instructor’s motto when educating budding economists—and it means: keep it mathematically and/or computationally tractable. Who needs interiors when exterior macro-management of the economy, and hence parsimonious “explanation-to-manage,” is what is ultimately involved?
As David Weissman (2000: 212) has so strikingly put it, “separable if distinguishable is our social policy.”107 And he adds, more explicitly:

105 Piotr D. Ouspensky, The Psychology of Man’s Possible Evolution (1950) (New York, NY: Vintage, 1974), p. 12.
106 Ibid., pp. 12–13.
107 David Weissman, A Social Ontology (New Haven: Yale University Press, 2000), p. 212.



Never doubting that we humans have a distinguishing character and privacy, I challenge the psychological and moral self-sufficiency ascribed to individual persons. The individuating conditions for persons—our separate bodies—are not in doubt. The conditions for our psychological, moral, and cultural identity are a different matter. Their development is a process of engagement and dependence, not one of monadic self-articulation. […] Our bodies are separate; yet, our psychic postures are distinguishable, not separable, from the postures of others.108

Economists who adopt such an ontology are, regardless of their intellectual abilities or professional integrity (which are not in question here), permanently stuck in mechanistic and top-down territory. Such an application of a purely systemic perspective to individuals who are themselves unaware of their own subjectivity and of the broad issues surrounding them is bound to reinforce the alienation of “unintelligent machines.” It may well be that contemporary economics is anti-humanistic not so much because it wrongly portrays us human agents as unreflexive automata, but because it correctly portrays us as such and, thereby, consolidates and reinforces the reality of our spiritual alienation. Only genuine access to our subjective and cultural dimensions can bring the hope for a less spiritually alienated economic reality. This means basically two things:

1. Economics should become post-post-neoclassical in the sense of introducing individually and collectively interior dimensions explicitly into its theories and models. This will make economics into an initially unrealistic discipline, since it will theorize possible but unactualized, rather than empirically observable, agents.

2. Economics should study the economic factors which have, up until today, tended to make us empirically into biocomputers. This will make economics into a critical discipline, since it will theorize on systemic factors which make us into the sorts of agents we are but ought not to be.

What sort of economics might we start building? We might certainly start from the recognition that the post-neoclassical paradigm’s being empirically relevant implies that we are a kind of individual we ought not to be. As de Salzmann and Ouspensky above, like so many other spiritual thinkers through history, have emphasized, an interior-less economic agent is an automaton that goes through routines and optimizes without any awareness or consciousness of what she is doing.
In particular, as long as we remain such agents we have no ability to form a judgment about what is wrong with our “sleepy” lives in the ongoing economy, or about what aspects of that economy should be changed if our lives are to be truer to what, through our religion, spirituality, and/or culture, we view as our fullest human potential—namely, an “awakened” awareness and a socially critical reflexivity. In other words, as automaton-agents we are capable of neither existential nor critical judgment, because we are devoid of a deliberate, conscious individual interior (subjective) and foreign to any deliberate, conscious collective interior (cultural). So, as a first step, economics has to make room, within the notion of “economic rationality,” for existential rationality and critical rationality. It hence must aim to understand

• an economic system’s existential performance: How do agents experience their deeper existential dimensions within the system?

• an economic system’s critical performance: To what extent does the system allow the agents within it to develop, and act upon, critical abilities?

108 Ibid., p. 21.



One of the main implications of this is that economics has to become evolutionary in a way that it is not yet today: it has to embrace our ability, and our (often stifled) desire, for individual work on what it means to be “economically rational” and for collective work on what a truly “human-potential-enhancing” economic system would be. In short, post-post-neoclassical economics has to fully honor our capacity for conscious evolution, while at the same time explaining why we have remained so deeply “asleep” and so “mechanical,” with so little conscious desire for such evolution. By contrast, today’s post-neoclassical complexity economics is still stuck within a narrowly adaptive and computational form of consciousness and allows reflection and ideology to intervene only to explain agents’ low-consciousness “adaptation” to “changing circumstances” through time.

Intersecting the post-neoclassical paradigm with an approach that allows for the exercise of agents’ critical and existential rationality—this is, in my view, one of the most urgent tasks at hand for economists. I call it the creation of a “critical political economy”: the model of an economic system within which economic agents are (a) aware of their own personal interior and of the quest for meaning that goes on there, and (b) willing and able to reflect critically on the kind of system they inhabit and on the kinds of rules and mechanisms under which they want to live their lives and carry out their economic activities. The agents modeled by the post-neoclassical paradigm, with their bounded rationality of a very specific kind, are by definition devoid of these “reflexive” features.
In Money and Rationality, we have begun to introduce these features by showing that, under the right circumstances, some citizens will mobilize from the bottom up, out of actual concern for the world around them—because they have reflected critically on the meaning of the economic system and on the urgent demands of its sustainability—in order to create complementary currencies. At the same time, we are aware that not all citizens will spontaneously mobilize in this way—many of us are still “asleep” and “mechanical” in de Salzmann’s and Ouspensky’s sense. Therefore, top-down governance—in the form of the government imposing taxes in certain complementary currencies, or legally imposing the circulation of certain currencies such as the ECOs—is also a part of the setup.

C.9. Toward a new economic paradigm?

As I emphasized at the very beginning, despite its length and detail this Appendix is desperately partial and fragmentary. Many economists, especially at the more “heterodox” fringes of the profession, will feel their paradigmatic approaches have not been done justice, and they are right. I have chosen—in sections C.5, C.6, and C.7—to stay within the currently developing mainstream and to investigate the directions in which that mainstream could be further developed. Of course, in the process, I have suggested—in section C.8—additions and extensions which venture into resolutely heterodox territory. This is, as we saw in section C.4, part and parcel of the very dynamics of paradigms.

The general orientation of my discussion is that the currently developing post-neoclassical paradigm still possesses some features in common with its neoclassical, or Traditional Economics, predecessor which need to be overcome. This is why I have offered up a threefold intersection.
Essentially, the post-neoclassical paradigm—which encompasses complexity economics, behavioral economics, neuroeconomics, and experimental economics—should be intersected with (a) ecological economics, in order to integrate structurally into its analytic tools the laws of thermodynamics and the concepts of entropy and finiteness; (b) monetary behaviorism, in order to acknowledge the primacy of the creation and circulation of means of exchange not only as incentives but as shapers of people’s aspirations and social relations; and (c) critical political economy, so as to come to grips with the fact that economic agents, as citizens of a cultural and political community, do not only react to incentives like lab rats but are actually active in creating and legitimizing the incentives to which they agree to submit.

This new paradigm, which for lack of a better expression I would call complexity-based, ecologically oriented monetary behaviorism with critically rational agents, is the paradigm we have tried to put forward in Money and Sustainability. Our approach still has things in common with key features of the post-neoclassical paradigm. Because it recognizes inescapable social complexity as well as the complexity inherent in natural ecosystems, it is not completely hypothetico-deductive. It can use analogy and intuition to sketch out predictions—for instance, what might happen if citizens adopt such and such a complementary currency—but it does not bank on the possibility of “resolving” formal models in any simple way. Consequently, in keeping with the tenets of experimental economics, our paradigm relies heavily on social experimentation—with the difference that the experiments being looked at are full-scale, real-life ones built by citizens, companies, or governments with the help of experts. (Bernard Lietaer, in particular, is often called upon to assist communities in setting up special-purpose payment systems or complementary currencies.)
In that sense, our approach is even more heavily inductive and “aposterioristic” than what Vernon Smith suggests; it might therefore benefit from a more rigorous anchoring in the clinical-trial type of methodology pioneered in poverty economics by Esther Duflo and her colleagues.109 However, there remains a big difference between our approach and Duflo’s: many of the complementary currency experiments we look at would need to be so large-scale that the technique of clinical trials—which implies the creation of two groups, one that experiments and one that serves as a control group—will not necessarily be feasible. On the other hand, “control groups” already exist naturally in any monetary regime, in the sense that the default against which the use of any complementary currency is compared is the routine use of the established, official national bank-debt currency.

The ecological as well as monetary literacy of economic agents—entrepreneurs, consumers, workers, investors—is crucial to our approach. In fact, as Money and Sustainability argues throughout, the two types of literacy are closely linked in today’s world: without better knowledge of what money is about and how it works now and could work in the future, many opportunities for ecological sustainability remain untapped. The current monetary system acts as an intellectual and behavioral straitjacket. This is the reason why our proposed paradigm insists so much on agents’ critical (and existential) rationality: contrary to what is often the case in clinical-trial experiments dealing with basic education, basic health, or other poverty issues, where experts and political decision makers target disenfranchised populations, in the case of “new money” there is often a very vibrant and committed fringe of citizens who seek to launch experiments and who already have a fairly elaborate idea as to why they want a new monetary tool and what they wish to attain with it.
In fact, within a complex and non-linear economy, the learning processes triggered by local experiments can sometimes produce unexpected, large-scale effects. That is why Money and Sustainability calls on decision makers to facilitate monetary experimentation, to allow it to happen wherever possible, and to participate in the evaluation and generalization of what worked well. This is the very essence of any deliberate process of “harnessing complexity,” as described in detail in section C.7.5 above.

One of the key messages underlying our proposed paradigm is that money is not just an incentive tool to make existing institutions work better and to facilitate exchanges that would have taken place anyway. It is, in fact, much more than that—a tool that can be shaped so as to create new types of exchanges and new types of social relations that would not take place and could not exist under the current monetary regime. How do we know which new exchanges and which new types of social relations are desirable? The urgencies of long-term sustainability certainly impose themselves on anyone’s plans and decisions, but in a complex, free society the orientations of sustainability policy—hence the kinds of money that should be created—emerge from the interaction of experts, political decision makers, and citizen groups. Economics can become a science in the service of freely thinking, freely acting, self-aware, and world-oriented citizens.110 This is the picture that Money and Sustainability wants to offer.

109 See Abhijit V. Banerjee and Esther Duflo, Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty (New York: Public Affairs, 2011).

110 This statement, which informs this whole Appendix, is motivated in great detail in Christian Arnsperger, Critical Political Economy, op. cit.


Appendix C: Mapping Paradigms  

Appendix C to Money and Sustainability: The Missing Link
