Why DSGE is so Hard to Displace, and a Programme for its Displacement

I have finally got around to writing a more serious piece with references etc. It is on SSRN here. Not being an economist by profession it's a little scary. On the one hand you risk being humiliated; on the other hand you can be less indoctrinated in the fallacies of neo-classical economics and can offer insights from other fields, which is what I've tried to do here. It's an early draft, but I wanted to contribute to this very live debate about the 'core' fallacies of neo-classical economics, of which DSGE has become the grand core.

Andrew Lainton

April 2012

Abstract: DSGE models have come under increased criticism, but have not yet been displaced as the dominant core of current economic thinking. This paper asks why they became so popular. Popular because they were seen as a potential 'theory of everything', uniting micro and macroeconomics through the 'microfoundations' programme. The paper examines the validity of this claim and finds the claimed advantages to be non-existent. The paper argues this is because of an excessively narrow conception of the economic agent, and examines familiar concepts such as 'reflexivity' from the perspectives of wider social theory, modern decision theory and systems/control theory. A wider view of economic agency – with fallible individuals acting against uncertainty and creating systemic homeostasis effects – is found able to explain major aspects of economics which the 'microfounded' strong methodological individualism with perfect rational expectations cannot. Examples include failed entrepreneurial decisions, the madness of crowds and the excessive build-up of debt.

Keywords: DSGE, Structure, Agency, Reflexivity, Decision Theory, Realism, Economic Modelling, Stock Flow Consistent Economics, Radical Uncertainty, Control Theory, Robustness, Financial Stability

DSGE – A Dead or Dying Bird?

Birds have a particular place in our language about ideas. Nothing is deader than a Norwegian Blue parrot, though some may believe otherwise. Black swans shatter our conventional wisdom, and when semantics get in the way we say that if it quacks like a duck it passes the duck test. For portents of death, doom, peace and new dawns we have used birds for millennia. Perhaps our ancestors saw in birds a link to the heavens and fate.

From the hilarious new film The Pirates! In an Adventure with Scientists we have Polly the 'Not Parrot' – and clearly not (quite) extinct, because there are lots of parrots alive, aren't there? Clearly if you get your categories wrong, logic goes out of the window and humour flies (waddles) in. Of course pirates are a good topic for economics: is the model the isolated Robinson Crusoe living by his wits cut off from the world, or the rapacious mercantile band pillaging resources? Perhaps both are equally flawed universal models of how to run or model an economy.

In the recent debates about what is wrong with the 'core' of the current dominant paradigm of economics, the categorisation problem has been particularly acute.

Is the problem with the dominant class of mathematical models used in neo-classical economics – so-called Dynamic Stochastic General Equilibrium models (DSGE is much less of a mouthful)? A recent study used a database of 50 widely differing models of this class and of the older Hicksian IS/LM variety – none of them predicted the global financial crisis of 2007-8.

there is no reason to single out DSGE models, and favour more traditional [ISLM] style models that may still be more popular among business experts. In particular, Paul Krugman’s proposal to rely on such models for policy analysis in the financial crisis …is misplaced. Is there any hope left for economic forecasting and the use of modern structural models in this endeavour? (Wieland and Wolters 2012)

Krugman and Mark Thoma have argued on their blogs, in addition, that the more recent 'New Keynesian' models do not suffer from the weaknesses of the first generation of DSGE models – the 'new classical' models – and that New Keynesian models, with their added assumptions of 'sticky prices' etc., are a different beast altogether. Mark Thoma goes as far as claiming that the New Keynesian approach has moved beyond neoclassical economics. The argument would appear to be about semantics and categorisation – does New Keynesianism pass the duck test as being part of the DSGE family, or is it a new species altogether?

DSGE-based models have enjoyed only a very brief period of ascendancy; most central banks have only begun to use them in the last half a dozen years. Yet DSGE and its foundational assumptions now seem assaulted on all sides.

N. Gregory Mankiw, a pioneer of New Keynesian DSGE modelling, has expressed doubts[1]:

‘New classical and new Keynesian research has had little impact on practical macroeconomists who are charged with … policy. … From the standpoint of macroeconomic engineering, the work of the past several decades looks like an unfortunate wrong turn.’

Whilst Robert Solow, in testimony to Congress[2], has stated:

‘I do not think that the currently popular DSGE models pass the smell test. They take it for granted that the whole economy can be thought about as if it were a single, consistent person or dynasty carrying out a rationally designed, long-term plan, occasionally disturbed by unexpected shocks, but adapting to them in a rational, consistent way… The protagonists of this idea make a claim to respectability by asserting that it is founded on what we know about microeconomic behaviour, but I think that this claim is generally phony. The advocates no doubt believe what they say, but they seem to have stopped sniffing or to have lost their sense of smell altogether.’

It is no longer career-destroying heresy to question 'microfoundations' and 'rational explanations' – indeed in the last month we have seen a vigorous debate in the blogosphere on these issues (see Noahpinion for one round-up). Indeed one attack on the need for microfoundations has come from Krugman himself; a position which is rather hard to square with the claim that microfounded New Keynesian models are something of a new paradigm.

Rather than the duck test, the key test for DSGE is whether it passes the Dodo test: namely, is it empirically useful and is it founded on sound theoretical assumptions – and if not, should it be declared extinct as a useful idea? Writing from a broadly post-Keynesian perspective I would of course say it is extinct; its fans will say the parrot is alive despite all evidence to the contrary.

In the rest of this paper I will argue that both the predictive power and the degree of acceptance of a model are sociological phenomena, but of very different kinds. The argument draws on the concepts of agency and structure, and their interaction, in social theory. Few economic thinkers, with the exception perhaps of Varoufakis[3], have noted the importance of these concepts to economic theory.

The critical part of the argument is that neo-classical economics has a mistaken specification of what it means to be rational, indeed to be human, and that it consequently misses key aspects of human decision making – notably the misjudgement of the behaviour of other agents, entrepreneurship, and the evolution of policy issues such as interest rates. In omitting these aspects its models, notably DSGE, are mistaken, and this mistaken specification is the reason these models are flawed. Moreover, by encompassing this more modern view of human behaviour these 'black holes' in economic theory can be filled – the replacement to the neo-classical paradigm has richer explanatory power and should be embraced as a more compelling paradigm.

To our knowledge no-one has fully explored the consequences of this failure of human decision making for the inability of neoclassical economics to explain phenomena such as debt, herding, financial instability and the tendency to undertake unwise, exuberant economic decisions at the worst of moments. We will attempt to do so.

Finally the paper sets out a positive alternative approach towards the construction of reasonably robust models to inform policy.

The Turning Point in Faith in Neoclassical Economics

At the start of the banking crisis, the air was thick with the sound of lachrymose economists. How did they miss the biggest crash since 1929? Professors at the LSE were asked that very question by the Queen – and were too tongue-tied to reply. A better answer came from Alan Greenspan, until recently the most powerful economist on the planet, who went to Capitol Hill and confessed to a “flaw” in his model of the world. Clearly, the economic crisis was also a crisis of economics. (Chakrabortty)

Over the past few months, and even more so in a gathering tide over the last few weeks, it has become apparent that criticism of the foundations of neo-classical economics has moved from being a fringe heterodox activity to become a growing rallying cry, perhaps crystallised in the buzz surrounding INET Berlin, and especially the interest of young researchers.

In the last few months we have seen important works from Keen[4], Varoufakis[3] and Schiefler[5] challenging in different ways the foundations of neo-classical economics.

Meanwhile some of the most post-emeritus of the post-Keynesians, like Davidson or Leijonhufvud, have the spring in their step and the appetite for the chase of eager 21-year-old graduate students.

Of course the at times rather unseemly Keen/Krugman debate might be seen as a turning point, if only because it drew attention to the ideas on money and debt that the non-neoclassical schools see as central but that many in the freshwater or saltwater citadels had not before encountered.

Perhaps more importantly, some of the key figures within the 'New Keynesian' school have been reporting important results (Kocherlakota and Eggertsson et al.) suggesting that flexible prices are themselves destabilising – undermining both the SMD equilibrium core of DSGE-derived models and Hicksian IS/LM models, as well as the rhetorical claim that 'stickiness' is the essential differentiator between DSGE and New Keynesianism.

But although it is perceived as a wobbly structure – like an old wooden house with so many New Keynesian extensions that it threatens to fall down under its own weight – it hasn't yet, and perhaps needs to be pushed. That will not happen until the emerging generation of economic thinkers understands that the key advantages and tools they see in neo-classicism are sub-optimal and that a better alternative exists. That is the purpose of this paper.

DSGE – A Theory of Everything?

As a starting point it is important to ask why DSGE is viewed as superior knowledge by its followers.

Costas Azariadis & Leo Kaas (2007) tell us why:

Prescott has been laying the foundations for a theory of everything in macroeconomics that will stretch well beyond the frictionless environments treated in its early version. A theory of everything is an attempt to explain key empirical observations in nearly every subfield of macroeconomics from a simple, logically coherent conceptual platform with a minimum of institutional detail.

It is the same beguiling prospect of a ‘theory of everything’ that led physics to chase string theory for 20 years with ever decreasing returns, and finally a grudging acceptance (except from the most dogmatic acolytes) that other approaches also needed to be considered.

It was the introduction of microfoundations, with the added assumption of Muth's/Lucas's rational expectations, that led to DSGE displacing the earlier generation of 'old Keynesian' 'scratchpad' models and the data-driven models of past empirical relations between macroeconomic aggregates that had previously been used by central banks and national governments.

It was this sociological dynamic of the 'prize' that microfoundations offered that made it so beguiling. As David Glasner said of his former teacher Armen Alchian:

There was simply no problem that he could not attack, using the simple tools one learns in intermediate microeconomics, with a piece of chalk and a blackboard.

This prospect, of being able to attack every problem with these simple tools made the promise of microfounded models so beguiling. But Glasner refers to the dog in the manger of this approach:

 huge chunks of everyday economic activity, such as advertising, the holding of inventories, business firms, contracts, and labor unemployment, simply would not exist in the world characterized by perfect information and zero uncertainty assumed by general-equilibrium theory.

The justifications for microfounded models have generally fallen into several categories.

  1. The ability to avoid errors in macroeconomic models on the basis that the model runs counter to microeconomic theory;
  2. The ability to use the tools of microeconomics – in particular welfare economics concepts such as Pareto optimality;
  3. The ability to avoid the 'Lucas Critique'.

I will argue that no economic model should make unrealistic assumptions about the decisions of economic agents; microfounded models do, however, make such unrealistic assumptions, and worse, the term 'microfounded' miscasts the structural conditions under which such decisions are made. By recasting the foundations of economics we can also deal with issues 2 & 3 above simultaneously. There would then be no aesthetic preference or advantage for the 'microfoundations' approach.

Noah Smith

I see a whole bunch of microfoundations that would be rejected by any sort of empirical or experimental evidence … In other words, I see a bunch of crappy models of individual human behaviour being tossed into macro models. This has basically convinced me that the “microfounded” DSGE models we now use are only occasionally superior to aggregate-only models.

But macroeconomics alas is equally problematic.

Econospeak

Macroeconomics, alas, is everything micro isn’t.  The axiomatic structure is missing; much theoretical work is essentially ad hoc.  Data are thinner and models are at risk of failing the first out-of-sample challenge.  … leading macroeconomists often make predictions that are not simply wrong, but profoundly, cosmically wrong.  It’s a crap shoot.

The Social Science of Decisions

So what is the way forward?

The correct basis for understanding how human beings make economic, or any other, decisions is decision theory in a political economy setting, not utility theory or even prospect theory/behavioural economics.

The large majority of decisions are made on grounds other than the assessment of rates of return in circumstances where risks can be predicted and the opportunity costs of decisions assessed. Furthermore, contrary to Kahneman and Tversky[6], human beings make decisions in this way not because of some irrational psychological bias, but because the way we make day-to-day decisions is a rational response to an uncertain future.

Much of the conventional basis for the concept of 'rationality' is the positivist assumption that we have the time, comfort, space and resources to make considered decisions based on evidence and alternatives. However, the space within which such 'homo economicus' can operate is a fairly narrow one in economic history. If we don't have a claim on property and the basic means of existence, then raw territorial instincts of survival and the struggle for space and the means of life take their place. Even today we are, at any point – if, say, the fuel to take food to our supermarkets ran out – about five days by some calculations from such a reality. The reality of homo sapiens being a creature that takes a defensive approach to space without consideration of opportunity costs can be seen, for example, in the attitude of most individual consultation responses to urban planning decisions in their neighbourhood.

The increasingly influential idea of Gerd Gigerenzer[7] (whom I have written about on my blog several times) is that making decisions on the basis of full 'rationality' in the Cartesian sense is impossible: there is never enough time or information. The rationality model used in neoclassical economics assumes infinite time is available to collect information for decisions, with no downside to that collection. Rather, it makes more sense that decisions made on 'gut feelings', in an uncertain world, are very often better choices (rational ignorance). Gerd and his colleagues have shown that simple 'fast and frugal' rules – heuristics – frequently lead to better decisions than the theoretically optimal procedure.

Given that rationality properly conceived is not the optimisation of some utility function, can microeconomic modelling based on the calculus of variations survive? No. And if we move beyond an assumption of universal optimisation then we must also remove the assumption of a trend towards equilibrium.

Econospeak again

Disequilibrium dynamics are observed, but adjustment is the ill-behaved child of microeconomics, the one who smashes the furniture and is sent to bed early so that proper equilibrium conditions can hold forth

The Limits of Microfoundations

Should we then simply go for better microfoundations? This seems to be the approach of the emerging field of imperfect information economics, on which INET is focussing much of its funding effort. This is a useful area for exploration but it will never be enough. If the future is not something we can ever fully predict, because of radical uncertainty, then the foundational principle of rational expectations must go. And what, then, is the perfect information condition being used as a baseline? Unless we have a time machine we will never perfectly predict the future.

The task is not to remodel a flawed economics of perfect information with some slight modifications to deal with uncertain information, but to overturn it in favour of an economics based on decision making under uncertainty – an economics of 'ecological rationality', to use Gigerenzer's phrase, rather than of the Cartesian rationality which underlies neoclassical economics (where the ideal is a series of physical objects, like billiard balls, whose future you could predict if only you knew the starting conditions).

We know now that even if we are as correct as humanly possible about the starting conditions of a complex system, there will still be considerable variation in a model. This was discovered by physicist Jonathan Carter, who wanted to observe what happens to models when they're slightly flawed.

But doing so required having a perfect model to establish a baseline. So Carter set up a model that described the conditions of a hypothetical oil field, and simply declared the model to perfectly represent what would happen in that field–since the field was hypothetical, he could take the physics to be whatever the model said it was. Then he had his perfect model generate three years of data of what would happen. This data then represented perfect data. …

The next step was “calibrating” the model…Carter …used standard calibration techniques to match his perfect model to his perfect data…he assumed, reasonably, that the process would simply produce the same parameters that had been used to produce the data in the first place. But it didn’t. It turned out that there were many different sets of parameters that seemed to fit the historical data. And that made sense, he realized–given a mathematical expression with many terms and parameters in it, and thus many different ways to add up to the same single result, you’d expect there to be different ways to tweak the parameters so that they can produce similar sets of data over some limited time period.

The problem, of course, is that while these different versions of the model might all match the historical data, they would in general generate different predictions going forward–and sure enough, his calibrated model produced terrible predictions compared to the “reality” originally generated by the perfect model. Calibration–a standard procedure used by all modellers in all fields, including finance–had rendered a perfect model seriously flawed. … he continued his study, and found that having even tiny flaws in the model or the historical data made the situation far worse. “As far as I can tell, you’d have exactly the same situation with any model that has to be calibrated,”.
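
A toy sketch (my own construction, not Carter's actual oil-field experiment, and with invented numbers) of the same calibration trap: generate 'history' from a known model, add a tiny flaw to the data, calibrate several over-parameterised candidates to it, and compare how closely they match the history with how badly they forecast.

```python
import numpy as np

# Illustrative only: the 'perfect model', the data, and the candidate models
# are all made up for the sketch.
rng = np.random.default_rng(42)
truth = lambda t: 2.0 * np.exp(0.30 * t)                    # the 'perfect model'
t_hist, t_future = np.linspace(0, 3, 40), np.linspace(3, 6, 40)
history = truth(t_hist) + rng.normal(0, 0.01, t_hist.size)  # near-perfect historical data

for degree in (3, 6, 8):
    # calibrate an over-parameterised polynomial model to the history
    calibrated = np.poly1d(np.polyfit(t_hist, history, degree))
    fit_err = np.max(np.abs(calibrated(t_hist) - history))
    forecast_err = np.max(np.abs(calibrated(t_future) - truth(t_future)))
    print(f"degree {degree}: matches history to within {fit_err:.3f}, "
          f"but misses the 'future' by up to {forecast_err:.1f}")
```

Each calibrated model reproduces the historical data closely, yet once they leave the sample their forecasts diverge from the truth and from each other – curve fitting is not prediction.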

Is this fatal to all models? It is fatal to the conceit that economics could ever be a science that predicts every aspect of the future. But the finding is fundamentally about the issue of calibration, and the distinction between curve fitting and prediction of an uncertain future. For example, we may calibrate a DSGE model against data, but the conditions that invalidate the underlying micro assumptions may not yet have occurred. A more useful model, in terms of its underlying theory, may be less well calibrated.

Such a model may perform far less well in terms of curve fitting but perform spectacularly better in predicting a global financial crisis. A model might, for example, find that there is a problem of debt deflation that is unwinding; policy measures may slow it but not stop it; like the hot potato of the analogy it may hop across national borders, but so long as the debt burden problem remains it will inevitably play itself out. This relates to the key systems theory concepts of robustness and control, to which I shall return.

As Krugman noted in 2009

As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth… the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess.

So it is more than just a matter of tackling 'garbage in, garbage out' by saying our current microfoundations are garbage and we should get better ones. The way we conceive of microfoundations is flawed, as we can never get perfection in.

Partly we must get over the view that if only we had better information we would be more rational and make decisions more like the 'internally consistent' micro models of homo economicus. So what if micro is absolutely internally consistent? Theories can be consistent but invalid, or valid but inapplicable.

The Lonely Island of Methodological Individualism

The problem is the methodological individualist preconception of economics and the problems this creates for all social sciences. By this I mean the 'strong' view of the isolated nature of individual decisions, described by Hodgson as 'an archipelago of Robinson Crusoes'[8].

Blaug[9] noted

“it is helpful to note what methodological individualism strictly interpreted … would imply for economics. In effect, it would rule out all macroeconomic propositions that cannot be reduced to microeconomic ones, … this amounts to saying goodbye to almost the whole of received macroeconomics. There must be something wrong with a methodological principle that has such devastating implications.”

Macroeconomics contains many phenomena that only emerge at the macro scale. Like any complex system, as Anderson[10] notes in his classic paper, it has emergent properties that can only be understood at that scale.

We should note the origin of the term 'methodological individualism' in Joseph Schumpeter's first book of 1908, Das Wesen und der Hauptinhalt der theoretischen Nationalökonomie (The Nature and Essence of Theoretical Economics). In 1980 it was translated into pamphlet form with a short introduction from Hayek, who with some glee wrote:

Many of [Schumpeter’s] students will be surprised to learn that the enthusiast for macroeconomics and co-founder of the econometrics movement had once given one of the most explicit expositions of the Austrian school’s “methodological individualism.” He even appears to have named the principle and condemned the use of statistical aggregates as not belonging to economic theory.

That this first book of his was never translated is, I believe, due to his understandable reluctance to see a work distributed which, in part, expounded views in which he no longer believed.

But even in this early work Schumpeter considered

Whether it is better to choose the society as the starting point is merely a methodological question without any principal significance.

There is an additional problem if you assume methodological individualism but have only a single representative agent. You then ignore issues that can only be understood through the fallacies of composition and division. Since the effects of these include market-wide prices being driven by agents acting individually in their own interest in ways which may drive down wellbeing overall (such as saving, or planting a bumper crop when the weather is good), one wonders about the realism of models based on this assumption. The ways in which the effects of composition and division differ from the intended results of agency cover all of supply and demand, savings and investment; without these there is little left of economics. Yet the assumption that what is good for the individual actor will produce aligned results for social wellbeing is of course the foundational principle of DSGE and its children. Therefore one of the key arguments advanced for DSGE – that it enables the application of welfare economics techniques – is flawed, as its single producer/consumer representative agent model is likely to send false signals on welfare effects because of these compositional issues. It confuses the behaviour of part of an economic system with the whole of that system when comprised of interacting agents.

Agency and Structure, the Key Nexus in the Social Sciences which Economics has Ignored

Dan Hirschman – a sociologist – wonders if

Microfoundations is to Economics as ____ is to Sociology

..sociologists have been having similar debates for approximately forever … about agency and structure..I wonder to what extent a comparison of these debates in economics and sociology would prove fruitful?

The blank of course being the social theory concept of agency.

Hahn[11], a strong critic of the neoclassical approach to microfoundations, asked the key question:

If one asks “what does micro-economics contribute to our understanding and study of an economy?” the answer is that it furnishes us with a theory of the actions, and the interactions, of agents. Moreover, it seeks to account for any regularities in the aggregated behaviour of individuals.

In the broader social sciences the actions of an individual are known as agency – the capacity of humans to act as agents on their own behalf, either individually or collectively. Individuals act in relation to structures. Structure refers to all those factors that both limit and enable humans' ability to act as autonomous agents, and that reproduce power structures through which individuals exercise power over others, or have power exercised over them. These structures include social class, religion, gender, ethnicity, customs/norms, etc. As well as by social structures, individuals are constrained and enabled by spatial and environmental structures including geography, weather and bio-genetic factors.

Both structure and agency have their own causal powers and mechanisms. Though they are interdependent, it is reductionist to treat one as the function or passive result of the other.

We cannot reduce society to agency alone[12], assuming all economic phenomena emerge from the atomistic behaviour of individuals. Indeed, as we find for example in Braudel, the development of institutional structures has been essential to capitalism and modernity.

The key problem of an approach to economics which assumes methodological individualism is that it ignores the very forces which make different individuals behave differently when confronted by economic problems. Individuals don't land equally spaced on the surface of the earth with equal 'endowments' handed out by the imaginary 'pater familias' of Samuelson's fairy tale. Rather, individuals are often trapped within the context of the resources and power available to them in a structural setting not of their own choosing – limiting their ability to act without constraint – a property sociologists call habitus (after Bourdieu[13]): how individuals learn to act and distil knowledge of the world and the structures into which they are born, leading them to engage in what Bourdieu calls 'fields' – an activity, such as a high-paying profession, which demands certain learned competencies that actors must possess. These learned competencies Bourdieu calls 'capital', and they offer a tantalising glimpse of a widened conception of how differences in human capital, and the means of acquiring them, lead to economic inequality. It is not just a matter of skill, but of the social network, resources and exercise of power that enable those skills to be acquired and exercised.

‘all those who share a given social position are exposed to similar opportunities and necessities, and they tend to develop a similar habitus’, one that ‘encourages us to behave in ways that reproduce the existing practices and hence the existing structure of society’ (Elder Vass 2006[14]: 327).

The mutual relationship between structure and agency – structuration – has been a key driving force in social theory over the last 20 years[15]. It avoids the twin traps of assuming that all social arrangements are the culmination of freely acting social individuals, or the equal trap that all actions are the result of functionalist structures maintaining a system. The structure-agency discussion mimics the distinction in systems theory that neither part nor whole is reducible to the other.

Yet despite the importance of these arguments, and the syntheses that have resulted from them, there is hardly the slightest reference to them within the economics literature. It is as if economics existed in a hermetically sealed cell whilst developments in social theory passed it by. Neoclassical economics today uses the same antiquated theoretical tools as the sociology of Talcott Parsons in the 1950s.

The Trap of Relying Solely on Structure; Agency & Reflexivity

But an equal trap is the suggestion made by authors such as Colander[16] & Hahn[11] that we can replace a microfounded economics with a macrofounded one. It is important to realise why social theorists have considered an approach to social theory which leans solely on macro/structural issues to be too limited.

Of course habitus – structure in place – is not a one-way cause, which has led sociologists and social theorists to shape the concept of reflexivity: agents can conceive of changes to their condition, shaping and creating structure as well as simply being shaped by it.

Omar Lizardo

 Agency is simply freedom of cognition from objective aspects of the world, or more precisely the agent’s freedom…to conceptualize the world in alternative ways

Simply conceptualising the world is not enough to change it; reflexivity is, however, a key precondition for individuals to be able to do so.

Margaret Archer's writings on reflexivity have been highly influential[17][18][19], especially as she approaches the issue from a realist and not an idealist perspective. She theorises that reflexivity is an inherent ability that all agents share, and that it takes place through the self-reflexive dialogues that social actors have with themselves when they consider their own social position, identities, actions and thoughts in relation to other agents.

This critically allows for the ability to innovate and tread new ground: learning knowledge that their parents could not have known, and undertaking professions of which their parents have no experience[20].

Here we can see a clear link to the concept of entrepreneurship – reshaping and escaping habitus. Indeed the very conception of what entrepreneurship is, is not tractable without conceptions of field and habitus. Seeking new knowledge and information requires a conception of what already exists, which by definition is not new information. The innovation of the new requires a benchmark and competition with the old.

Reflexivity and the Lucas Critique

Reflexivity makes forecasting difficult. If you believe you can forecast the future then your agency will affect the future state of the system, both through your own behaviour and through your influence on the behaviour of others.

Let's say there is a car park with 10 spaces and 11 cars trying to use those spaces. You arrive at 9.05 every morning and find you have no space. So you observe the pattern of cars arriving and find the last car arrives at 8.59. So next day you arrive at 8.50. This annoys the person now forced to park a long way away, so they arrive at 8.45. But there is a driver who normally arrives at 8.46 every morning; you are now competing with that person for the last space. And so on and so forth.

Without knowing how all agents will react it becomes impossible to use a model to make a strong prediction.
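
A toy agent-based sketch (my own, with invented numbers) of the car park story: each day the driver who fails to park reacts to the observed pattern by arriving earlier, so the 'pattern' a forecaster might have extrapolated is destroyed by the agents' own reactions to it.

```python
import random

# Illustrative parameters only: 11 drivers chasing 10 spaces, 30 working days.
N_DRIVERS, N_SPACES, DAYS = 11, 10, 30
arrival = [540 + 5 * i for i in range(N_DRIVERS)]   # minutes after midnight: 9.00, 9.05, ...

for day in range(DAYS):
    order = sorted(range(N_DRIVERS), key=lambda i: arrival[i])
    losers = order[N_SPACES:]            # drivers who found no space today
    earliest = min(arrival)
    for i in losers:
        # reflexive response: try to beat the earliest arrival observed so far
        arrival[i] = earliest - random.randint(1, 10)
    if day % 10 == 0:
        h, m = divmod(min(arrival), 60)
        print(f"day {day:2d}: earliest arrival {h:02d}:{m:02d}")
```

The arrival times spiral ever earlier: any forecast based on yesterday's observed pattern is falsified by the very behaviour it induces.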

This of course was the basis of the Lucas critique[21] of aggregate based economics, which gave a huge boost to the urge to develop microfounded models.

Economics was swept along by Lucas's purported solution: that we should model the 'deep parameters' – relating to rational preferences, technologies and resource constraints – that govern each individual's behaviour.

But how uniquely 'deep' and immune to this critique are these parameters? If, as in the above example, technology and the resource level (parking spaces) are broadly fixed, then the only remaining deep parameter is rational preference. As has been pointed out by a number of writers, the assumption of 'rational expectations' assumes an agent who can foresee the future (see for example Keen 2011 p.248[4]). This undermines the initial assumption of a radically uncertain future, which required a model in the first place. If we know the future, do we need a model at all?

However the weakness goes beyond this. An uncertain future is created by actors today who are unable to accurately predict and respond to their fellows. So the assumption of rational expectations is really one of psychic expectations – of being able to predict the actions of your fellows. Yet in the car park example above what would this mean? Would no-one get out of bed in the morning because no-one could be sure of not being last? Would everyone arrive at exactly the same time and jam the entrance? Of course time is there to prevent everything happening at once. Would not the most rational thing to do be to roll a die on your arrival time, drive straight in and have a 10/11 chance of getting a space? Perhaps – but over what time period?

The close observer may have noted that this example is quite close to many problems in high-frequency trading. Trying to outguess your fellows by milliseconds at the margin, far from being rational and reinforcing stability, actually generates instability – given that uncertainty of action is not just a product of the future but a product of the unknowability of the decisions of others today, who in acting create the future.

The Lucas critique, then, was wrongly specified. The uncertainty generated by multiple reflexive actors is an issue whatever the assumption on rationality and whatever the degree of aggregation or lack of it.

Does that mean then that we cannot model? No – rather it is an issue of how we deal with the robustness of decision making. Soros, one of the very few economic thinkers who has tackled the issue of uncertainty, helps us here.

Soros on Reflexivity – A Radically Uncertain Future

The concept of reflexivity is more than Keynes's concept of radical uncertainty – that you cannot know the future even in a probabilistic sense. Rather it is an explanation of how people act in the face of uncertainty and how that very action helps create an uncertain future.

The ergodic axiom is central to neoclassical economics. To Samuelson, if you know that there is a pattern of behaviour in the past then those relationships can be projected into the future. Modelling is like a mirror: the future is reflected back at you with merely a change of axis. But radical uncertainty shatters that mirror; what lies beyond the axis cannot be known with certainty. The axis of the present is what Varoufakis[3] has called the wall of indeterminacy.

Soros extends Keynes's concept of radical uncertainty in two ways. Firstly, reflexivity is a key cause of radical uncertainty, putting agency in the frame and not simply the fog beyond the wall of uncertainty. Secondly, reflexivity requires the rejection of the ergodic axiom.

As Davidson comments

People change their behaviour mainly when things become unpredictable. If things are predictable then they can continue with their current behaviour.

Soros

Recognizing reflexivity has been sacrificed to the vain pursuit of certainty in human affairs, most notably in economics, and yet, uncertainty is the key feature of human affairs. Economic theory is built on the concept of equilibrium, and that concept is in direct contradiction with the concept of reflexivity.

Uncertainty inherent in Habitus not simply Reflexivity

Reflexivity is central to uncertainty, but uncertainty is not reducible to it. Soros, in his FT lecture on reflexivity, again:

reflexivity is not the only source of uncertainty in human affairs. Yes, reflexivity does introduce an element of uncertainty both into the participants views and the actual course of events, but other factors may also have the same effect

Structure can emerge even without reflexivity. A classic example here is the effect of homeostasis in creating 'invisible hand' effects, as I wrote in my articles here and here on homeostasis and the invisible hand.

intentional action is of secondary importance. In a population of foxes (predators) and rabbits (prey), an increase in the number of foxes will cause a reduction in the number of rabbits; a smaller rabbit population will be able to sustain fewer foxes, and the fox population will reduce. Foxes act, but the number of rabbits has nothing to do with the subjective views of foxes; it has everything to do with the initial conditions of 'reproduction' and the dynamic feedback of the system. The criticism of the idea of the 'invisible hand' as teleological ignores this whole-part interaction in complex systems….

[Homeostasis is] a phenomenon that does not even require humans or purposeful action – it equally applies to the beehive, the anthill, and the flock of birds.

All economic agents operate within, and remake, the market structure every day, every moment of the day. But that market has behaviour that cannot be ascribed to, or described as that of, an individual agent.

Robust Decisions beyond the Wall of Uncertainty

Let us return to the field of modern decision theory. One of Gigerenzer's favourite heuristics – which, according to the principle of ecological rationality[22], he claims we were born with, as evolution furnished us with tools to live in an uncertain world – is the tracking heuristic.

In this case the rule is simple: look up at 45 degrees and run towards a ball in flight. If the ball is above the line of sight, slow down; if below it, speed up.

He doesn't mention it, but this is a classic example of homeostasis – of the governor principle from control theory. The purpose of a governor is to achieve a steady rate of output in a system where that rate is subject to influences which are uncertain. If there are disequilibrating forces then a governor creates an equal and opposite force; this temporary equilibrium between the forces is then used to control the rate of output of the system, so that it oscillates around a point of equilibrium that is never precisely reached. Similarly, the equilibrium point of the governor is itself subject to constant oscillation – equilibrium can only be understood as a line of points in time which are constantly changing, not as a timeless point.
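
A minimal feedback-loop sketch (my own illustration, with invented numbers, not taken from Gigerenzer or the control literature) of the governor principle described above: a proportional corrective force keeps output oscillating around a set point it never exactly reaches, despite uncertain disturbances.

```python
import random

set_point = 100.0      # desired rate of output
output = 80.0          # starting rate
gain = 0.4             # strength of the corrective (equal-and-opposite) force

for step in range(50):
    disturbance = random.uniform(-5.0, 5.0)      # uncertain outside influence
    correction = gain * (set_point - output)     # governor pushes back towards the set point
    output += correction + disturbance
    if step % 10 == 0:
        print(f"step {step:2d}: output = {output:6.2f}")
```

Run repeatedly, the output hovers around 100 but never settles there – equilibrium as a constantly shifting line of points rather than a timeless state.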

This model of equilibrating controls guiding a system in a constant state of dynamic disequilibrium is a good conceptualisation of many economic conditions. The more developed the systems of control, the more stable the system; the weaker the systems of control, the more unstable.

This has two important consequences. Firstly, instability is then something we can mathematically model using the tools of systems engineering – Lyapunov stability.
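
For reference, the standard textbook statement (my addition, not in the original text): an equilibrium x* of a dynamical system ẋ = f(x) is Lyapunov stable if every trajectory that starts close enough to it stays close to it:

```latex
\[
\forall \varepsilon > 0 \;\; \exists \delta > 0 : \quad
\lVert x(0) - x^{*} \rVert < \delta
\;\Longrightarrow\;
\lVert x(t) - x^{*} \rVert < \varepsilon \quad \text{for all } t \ge 0 .
\]
```

An equilibrium which fails this test is unstable; asymptotic stability adds the requirement that x(t) approaches x* as t grows, and it is departures from these properties that the systems-engineering tools measure.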

Instability is not the same as uncertainty; rather it is a consequence of uncertainty and one of the several factors – including the unknowability of the decisions of others – that create uncertainty.

Secondly, even when a model deals with issues beyond the wall of uncertainty we can use techniques to help make models robust. Again, robustness is about more than just uncertainty, as a model may be wrong because of unknown causalities in the past and present, not just the future.

Philosopher of science Wimsatt[23] uses the term robustness to mean the stability of a result under different and independent forms of determination, such as methods of observation, measurement, experiment or mathematical derivation – or, put another way, the ability to withstand the kind of calibration and initial-condition accuracy problems highlighted by Carter.

Or as specified by control theorist Chandrasekharan (1996)[24]:

“Robust control refers to the control of unknown [systems] with unknown dynamics subject to unknown disturbances”

This might very well describe wrestling with an economic model used to explore forecasting while attempting to fit it to past data. A key aspect of control that humans possess is to build into their decisions the results of their learned experience and to estimate the consequences of making a wrong decision.

Marcellino and Salmon (2006)[25] are two of the very few economists to consider the application of robustness and control theory in economic theory.

Robust decision theory has recently emphasized a deterministic approach to modelling …(uncertainty)… whereas standard stochastic decision theory has employed probability models (risk). Essentially, the robust approach does not presume that economic agents are able to employ probability distributions. (p173)

The Cost of Error under Radical Uncertainty

People do not just test hypotheses or formulate decisions in a disinterested way, but assess the costs of different errors. This idea comes from evolutionary psychology. James Friedrich[26] suggests that people do not primarily aim at truth but try to avoid the most costly errors. For example, in job interviews one-sided questions are asked because the interviewer is focused on weeding out unsuitable candidates.

Yaacov Trope and Akiva Liberman[27] have refined this idea, assuming that people compare two different kinds of error – accepting a false hypothesis and rejecting a true hypothesis – and the consequences of each one.

This concept has great parallels with the ideas advanced in the 1950s by the economist G.L.S. Shackle, who put forward a non-probabilistic conceptualisation of decision making under uncertainty. He criticised conventional views of rational choice because they did not adequately deal with the issue of 'surprise' by events caused by our 'unknowledge'.

In Shackle's view, individual choices made in real world (non-experimental) conditions are non-replicable. You cannot undo the decision.

Shackle put forward the concept of potential surprise attached to the consequences of an action. The potential surprise values of the various outcomes do not add up to one – they are non-additive.

Let's give an example: suppose the entrepreneur is asked to make an exhaustive list of the specified distinct events which can affect the value of alternative investments, as required by the application of probability theory and critical rationalism. The entrepreneur, he contended,

“will in the end run out of time for its compiling, will realize that there is no end to such a task, and will be driven to finish off his list with a residual hypothesis, an acknowledgement that any one of the things he has listed can happen, and also any number of other things unthought of and incapable of being envisaged before the deadline of decision have come: a Pandora’s box of possibilities beyond reach of formulation.”

Of course if a hypothesis is based on a false heuristic – like the ponzi heuristic – the information provided by sales in bubble conditions will not enable agents, should they be so inclined to do statistical modelling, to distinguish between type I and type II errors.

Explaining Bad Economic Decisions

If, then, real world economic decisions are not based on probabilistic reasoning but on robust evidence from learned experience and the best guess of consequences, we will often be wrong. We will have taken the wrong lesson from our experience, wrongly calculated the consequences, or simply, given the constraints of time, supplemented reason by the information presented by the economic choices of others in view.

Neoclassical economics by contrast is about explaining good decisions in a world of perfect information in a perfectly predictable future.

But the world isn't like that, so we will be wrong. How then does neoclassical economics explain the poor economic decisions, and the disastrous ones, that led to the global financial crisis? An economics aimed at angels of perfection is no use to those dealing with flawed human beings.

Bad economic decisions need to be as central to economic theory as good ones. It is the bad decisions that drive the dynamics of much that is interesting about economic change.

'although entrepreneurs can..make errors, there is no tendency for entrepreneurial errors to be made.' (Kirzner 2007[28])

With the greatest respect for Kirzner, who has given us many insights into market process and entrepreneurship, this statement is nonsense.  There are such tendencies, and such tendencies are endogenous to the business cycle.

It is trivial to call a directionality in rising prices – a rising price must one day fall (falling prices are more difficult; they can fall to and below zero, a scrapping cost for outmoded goods). What is hard is calling when – getting the timing right.

What rules do market participants use to determine when to buy and sell?  I’m considering here the non-professional investor, acting without the benefit of sophisticated models.

Let us consider two main theories for this, both based on different axiomatic foundations.

The Neo-classical – that decision under uncertainty is the choice that maximises the expected value of a utility function (see Savage 1954[29]).

The Austrian – human action is rational, but ex post the actor may discover they have made an entrepreneurial error. So economics is not a theory of choice or decision making but of the social processes of coordination, which depend on the awareness the actors show in the exercise of entrepreneurship (see De Soto 2010[30]).

The problem is that both theories leave bad decisions unexplained; placed outside the field of analysis of ‘core’ economics.

The kind of bad decisions that saw many Albanians investing in pyramid schemes, leading to economic collapse in 1996. The kind of bad decisions that see people all over the world taking out credit to get in on housing bubbles, just before they pop.

If such actions are seen as outside the core concepts of economics then economics can tell us nothing.

Of course many decisions are not made on the basis of the calculation of a utility function; in fact, outside the blackjack card counter at a casino, very few are. To explain the 'deviations' from rationality and the 'biases' of decision making we have behavioural economics and behavioural finance. But this is putting lipstick on a pig: it does not explain the commonality and universality of decisions that vary from the maximising ideal.

The Austrian approach is less flawed – bad decisions are made by those who believe them to be rational – but it tells us nothing about, for example, why in periods of 'golden' growth most actors make good decisions, and why in periods of bubble and crisis decisions are bad ones. We often have a disjunction. When the entrepreneur profits they are a hero – they got it right. But when, owing to factors beyond their control, there are losses and a downturn – they got it wrong – and then Austrians contend the market gets it right in driving failures to the wall. We have a mish-mash here of methodological individualism and hints at some form of emergent process, acting above individual will, to restore economic prosperity.

Austrians cannot have it both ways; they lack a coherent theory of how multiple decisions are made with economic consequences. Entrepreneurship is seen as an impenetrable black box of the not-yet-foreseen and unknown. It tells us nothing about economic change other than that it is seemingly unknowable and unpredictable.

Consider a third, new approach – based on the decision theory approach explored in this paper, and as promoted by LiCalzi[31] and his co-writers.

This approach extends the familiar satisficing concept of Simon (1955)[32]:

the agent should establish some target t and then pick the first action d which meets the target

As follows:

This simple target-based model is not complete because there may be uncertainty about the target itself. For example commercial firms must usually view their customers' requirements as uncertain…Hence, we relax the assumption of a known target and replace t with a random consequence T. The ensuing target-based decision model prescribes that the agent should choose an action d which maximizes the probability v(d) = P(X_d ≥ T) of meeting an uncertain target.

LiCalzi shows that aiming for a probabilistic target in this way satisfies both Savage's and von Neumann and Morgenstern's axiomatizations. Indeed, rather than using the woolly 'language of utility' it enables use of the more objective 'language of targets' – and the 'indifference curve' between two different decisions can be interpreted (after Pratt, Raiffa and Schlaifer (1995)[33]) as 'indifference probabilities'. No understanding by the agent of cardinal utility, or indeed ordinal utility, is needed; only of whether one event is more likely than another.
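
A minimal Monte Carlo sketch (my own, with made-up payoff and target distributions, not LiCalzi's code) of the target-based rule quoted above: estimate v(d) = P(X_d ≥ T) for each available action and pick the action that meets the uncertain target most often.

```python
import random

random.seed(1)

def payoff(action):
    """Uncertain consequence X_d of an action; the (mean, spread) pairs are assumptions."""
    mean, spread = {"safe": (100, 10), "bold": (120, 60)}[action]
    return random.gauss(mean, spread)

def target():
    """Uncertain target T, e.g. the break-even revenue the firm must reach (assumed)."""
    return random.gauss(105, 15)

def v(action, trials=20000):
    """Estimated probability that the action meets the target."""
    return sum(payoff(action) >= target() for _ in range(trials)) / trials

for action in ("safe", "bold"):
    print(action, round(v(action), 3))
# The agent simply picks whichever action meets the target most often;
# no cardinal utility function is ever constructed.
```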

But if faced by a decision between multiple options, all that is needed is an assessment of which one is most likely to meet the target. It does not require the a priori subjective construction of a probability function.

Entrepreneurial decisions, at their most simplistic, are decisions about which choice will make the most profit. But that doesn't necessarily require a subjective maximising function, simply a choice of which option is likely to make the most. When time and information are available it might be appropriate to calculate a maximising function – as through operational research methods such as the travelling salesman problem for the delivery of goods – but most decisions are not made that way; such calculation is a special case of decision making under full information.

Most decisions are made on learned experience and gut instinct – ecological rationality. Indeed we know from psychological research that most decisions are not even made from a range of options at all. The experience of the decision maker will be used to rule out certain courses of action in advance. (See my article on action script decision theory – the decision maker has an idea of how things work based on the knowledge gained from experience; s/he compares the possible action against what is known to work.)

Explaining Typical Collective Dynamic Economic Phenomena

Consider the herding instincts and 'madness of crowds', 'irrational exuberance' and 'animal spirits' of markets. Some may have a simplistic decision rule: my mate has bought a house at a low price, that price rose, they made lots of money, I can too. In the language of Minsky this could be called a 'Ponzi investor'.

Many see that as a deviation from the rational actor presumed in neo-classical theory and look down their noses at Minsky's ideas as a result. But the fact that it is the simplest economic decision rule does not make it irrational. For many, getting into a market at the right time and right price was a very rational decision. But as a decision rule it is incomplete. Others with more experience of real estate markets might add to the decision rule 'unless the capital value of the house exceeds a valuation dependent on rental yields' – that is, one based on economic fundamentals. But these will always be in the minority in any market involving many actors who are not professional investors.

This of course is one of the reasons why the housing market is inclined to bubble behaviour to a greater extent than other markets, and why the distribution of different learned decision rules needs to be at the core of an explanatory and predictive economics. Markets do have differential tendencies to entrepreneurial error – errors which, upon study, are found to be systemic and predictable to a considerable degree.

Put at its most simplistic: a market based on a crude rising-price rule will rise exponentially until budget constraints are met; when they are, people will borrow until liquidity constraints are reached; then the market will turn.
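
A toy sketch (my own, every number invented) of that dynamic: prices driven by the crude rising-price rule grow while buyers can pay out of income, grow faster once they borrow, and turn when the liquidity ceiling is hit.

```python
income_limit = 120.0     # what buyers can pay out of income alone (assumption)
credit_limit = 200.0     # what they can pay once they borrow (assumption)
price, busted = 100.0, False
history = []

for year in range(25):
    history.append(round(price, 1))
    if busted or price >= credit_limit:
        busted = True
        price *= 0.90        # liquidity constraint hit: the market turns and unwinds
    elif price < income_limit:
        price *= 1.05        # demand funded out of income
    else:
        price *= 1.08        # budget constraint met: demand now funded by borrowing

print(history)
```

The point of inflection is produced by the interaction of the decision rule with the budget and liquidity constraints, not by any change in 'fundamentals'.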

Interest rate changes explain changes to the point of inflection, but not the causes of malinvestment. Austrian theories of the business cycle are one step removed from the agency-based process of crowd following – homeostasis – that drives the cycle. They gloss over the systemic entrepreneurial errors.

Explaining Credit Dynamics as Responses to Uncertainty

Neoclassical economics denies the importance of debt. To some this denial is both its central, most defining feature and its greatest weakness.

Imagine then an economy composed of Lucas agents who can perfectly predict the future, including the date of their own death. Consumption smoothing unto death alone would generate a modest interest rate. With an even population pyramid, steady interest rates, a single currency area and an exact match over time between the population and the products for which people wish to take out credit (such as housing), the cash payments of people paying off debt would exactly match the cash flows of new credit from banks. At that point credit = debt, the whole thing cancels, and it is forgotten about.

Of course in the real world none of these conditions holds, and this results in mismatches between the rate of credit creation and the rate at which loans are paid back. These changes in the rate of credit creation, by affecting the amount of money in circulation, will vary aggregate demand – just as will the translation into flows of money in circulation of the stocks of assets purchased and then sold with credit and used as counterparties to loans.
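
A minimal stock-flow sketch (my own, with invented figures) of the mismatch described above: when new lending runs ahead of repayments the money stock – and with it nominal demand – expands, and it contracts when lending falls below repayments.

```python
money_stock = 1000.0
outstanding_debt = 500.0
repayment_rate = 0.10                      # share of outstanding debt repaid each period (assumption)

for period, new_lending in enumerate([50, 80, 120, 150, 60, 20, 10]):
    repayments = repayment_rate * outstanding_debt
    outstanding_debt += new_lending - repayments
    money_stock += new_lending - repayments     # net credit creation adds to money in circulation
    print(f"period {period}: net credit {new_lending - repayments:7.1f}, "
          f"money stock {money_stock:7.1f}")
```

Only when new lending exactly matches repayments does the credit system 'cancel out' in the way the frictionless thought experiment above assumes.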

Just what the forward selling price will be of assets we purchase with credit is subject to radical uncertainty. Because we get it wrong, there is an additional economic rent on money created by banks and other financial intermediaries to account for their perceived risk of default. But because the future is not fully amenable to risk assessment because of uncertainty, and because the actions of agents themselves create that uncertainty, there will be entrepreneurial error in credit creation – especially, of course, when people demand credit on the basis of simplistic decision rules, such as the ponzi heuristic above, and when creditors expand credit in this climate of homeostatic growth of uncertainty, assessing 'credit risk' on an individual basis rather than 'credit uncertainty' as an emergent phenomenon.

Institutional Agents and Collective Economic Decisions

A third area that this expanded view of economic agency can help inform is how and why institutions, and not just individuals, make decisions. Indeed, as has often been commented, the world of the all-seeing Lucas agent is incompatible with the neoclassical Coase theory of the firm as a response to imperfect information – so in that world why does the firm exist at all?

From Penrose's classic 1959 study[34]:

“All the evidence we have indicates that the growth of firms is connected with the attempts of a particular group of human beings to do something”

Working in a firm is a classic habitus, and the firm itself is a structure that can extend globally. The structuration process – the reproduction of the structure, the firm, by the agency contributing towards the decisions of that firm – is not reducible to modelling as if it were a single rational actor. The CEO will make a small proportion of the decisions made by the firm, and his or her decisions will be constrained and shaped by his or her perception of how those decisions will be received by the board and shareholders. This class of agents will be concerned about the performance and decisions of the firm as a firm and will judge the performance of the CEO as a result. The survival of the firm trumps the survival of any individual, and the survival of shareholder value trumps the survival of the firm.

Most non-investment decisions within firms are not taken by weighing up the profit-maximisation potential of different choices. The market is made, and decisions are based on theories and plans about how the market will develop. So decisions are instead based on goals set by management, which serve as encapsulations of the actions that will achieve profit. Of course these theories and goals can be misplaced through not understanding how the production process can be optimised to produce value – see much of Goldratt’s work, and the ideas of Throughput Accounting, for example.
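For reference (the post does not spell these out), the throughput accounting identities in Goldratt’s tradition are usually stated as:

$$ T = S - TVC, \qquad \text{Net Profit} = T - OE, \qquad ROI = \frac{T - OE}{I} $$

where $S$ is sales revenue, $TVC$ totally variable cost (typically raw materials), $OE$ operating expense and $I$ investment. The point is that decisions inside the firm are judged against these goal measures rather than by a full profit-maximising calculus over every choice.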

Again we turn back to how economic actors do not make decisions within the framework of so-called ‘rational choice theory’ but, as Bourdieu (2005)[35] puts it, through their “feel for the game” – the “feel” being their image (to use Boulding’s term[36]) of the habitus, and the “game” being the field, and more particularly the range of decisions that field presents to economic agents within the firm. This is not, of course, to suggest that ‘game-changing’ decisions – which reconceptualise the habitus of the firm and the fields within it – cannot be made.

This approach can also help endogenise within economic theory the decisions made by political institutions such as central banks, though not from the narrow ‘rational choice’ perspective of public choice theory. For example, decisions made by central banks in setting interest rates may target ‘goals’ based on bad economics, within a mandate based on terrible economics.

So, for example, a wider economic model might have to incorporate a bad-economics module for the central bank – a module the modeller knows incorporates false assumptions, included on grounds of institutional reality. Indeed the cynic might say the definition of a central bank is a bad-economics module.

Arrows that miss the target

The Lucas microfoundations revolution in economics largely stems from his suggestion that chapter 7 of Debreu (1959) be adopted as the core axioms of general equilibrium modelling, but with the insertion of one further axiom: an agent with perfect powers of predicting the future – rational expectations.

Debreu’s conception of commodities as date- and place-stamped – so that a smoothie sold in New York in February is a different commodity from one sold at Miami Beach in July – was a crucial insight.

But almost never discussed is the implication that commodities, consumers and producers exist across a surface, rather than in an aspatial and atemporal spot decision.

As soon as you have prices across a surface you will get differential economic rent, especially as, following Von Thünen, the cost of travel can be seen as a deduction from that rent, creating an ever-shifting surface of economic possibilities – a surface potholed by the frequency of spatial reswitching[37][38] and other spatial Wicksell effects.
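In its textbook form (not given in the post), the Von Thünen bid-rent per unit of land at distance $d$ from the market is

$$ R(d) = Y\,(p - c) - Y\,f\,d $$

where $Y$ is yield per unit of land, $p$ the market price, $c$ production cost per unit of output and $f$ the freight rate per unit of output per unit of distance. Rent falls with distance and vanishes at the margin of cultivation $d^{*} = (p-c)/f$, so every shift in prices or transport costs redraws the whole rent surface.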

Given an economic surface of rent variation you can never get perfect competition unless all economic decisions are made on the same head of the same pin.

Without this the SMD conditions are breached and, contrary to Farmer, the result is multiple equilibria separated by disequilibria and instability, rather than a foundation which demonstrates general equilibrium.

Another of Debreu’s insights was that a production plan does not need an appraisal of risk – although his assumption, as we show above, that firms’ plans are made to maximise the value of their shares is too simplistic.

Consumers and producers will be taking economic decisions to buy and sell at a price under the LiCalzi target/goal-based approach set out above. And the decision arrows, fired into an uncertain future, may miss their intended target.
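Roughly, in the Bordley–LiCalzi target-based formulation[31] (my informal summary of their result), maximising expected utility over a prospect $X$ is equivalent to maximising the probability of reaching an uncertain target $T$ drawn independently of $X$:

$$ \mathbb{E}[u(X)] = \Pr(X \ge T), \qquad \text{where } u(x) = \Pr(T \le x) $$

The ‘arrow fired at a target’ image is therefore quite literal: the decision maker aims at a target whose exact position is itself uncertain.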

For example, an ice lolly maker may make 300 ice lollies to sell on the beach the next day and it may then rain for a week (in England it usually does). The outlay becomes a sunk cost on the firm’s balance sheet and the unsold lollies become increased inventory with a storage cost. Indeed, from the Fisher/Minsky ‘dual price’ perspective, all finished and intermediate goods, once produced, become assets. From the perspective of a consumer entering a shop there is no ‘supply curve’, only a demand curve.

Because production takes time you cannot model exchange across this Von Thünen/Debreu surface as a single matrix operation between production decisions made by producers and consumption decisions made by consumers. Rather there are at least two separate decisions made at two points in time: the investment/production/marketing decisions made by firms – creating a shadow price – and the purchase decisions at market prices made by consumers. Such decisions, separated in time, are also likely to be regulated by contracts – or, in Leijonhufvud‘s term, ‘an unstable web of contracts’. (Note that from this perspective the imputation theory of Wieser and the classical cost-plus approach are compatible, as it is within the monetary circuit that cost prices gradually adjust to imputed monetary flows.)

The shifts in firms’ stock levels then send market signals to firms on how to reshape their production. Indeed the equilibration process of Ricardo and the classical authors, recently revived, makes more sense than point-price general equilibrium, with all decisions made at once by what Buiter calls ‘a friendly auctioneer at the end of time – a God-like father figure – who makes sure that nothing untoward happens with long-term price expectations…[these] are not models of decentralised market economies, but models of a centrally planned economy’ (Buiter 2009[39]).

Firms will of course act on information about the market movements of other firms. The market becomes a Stackelberg game, not a Nash game, and again chasing the market produces disequilibration.

This then produces a surface of non-Walrasian prices, all the time and everywhere, sending information signals to economic agents about potential arbitrage opportunities. Indeed, in the perfect-information, perfect-competition world of neoclassical microfoundations, why would any producer rationally produce anything? It is a profit-free, arbitrage-free world.

As Richard Serlin comments, microfoundations-based modelling is

trying to fit a square peg into a round hole. You’re trying to fit perfect optimizing behaviour of individuals (“internal consistency”) to the behaviour of aggregates that did NOT, in fact, result from perfect optimizing behaviour of individuals. They resulted from very imperfect optimization of very imperfect individuals

Economic Modelling as Piecewise Engineering

“The simple observation that most models are oversimplified, approximate, incomplete, and in other ways [false] gives little reason for using them. Their widespread use suggests that there must be other reasons. It is not enough to say that we cannot deal with the complexities of the real world, so simple models are all that we can work with, for unless they could help us do something in the task of investigating … there would be no reason for choosing model building over astrology or mystic revelation as a source of knowledge”

“All theories, even the best, make idealizations or other false assumptions that fail as correct descriptions of the world. The opponents of scientific realism argue that the success or failure of these theories must therefore be independent of or at least not solely a product of how well they describe the world. If theories have this problematic status, models must be even worse, for models are usually assumed to be mere heuristic tools to be used in making predictions or as an aid in the search for explanations, and which only occasionally are promoted to the status of theories when they are found not to be as false as we had assumed. While this rough caricature of philosophical opinion may have some truth behind it, these or similar views have lead most writers to ignore the role that false models can have in improving our descriptions and explanations of the world.”

Wimsatt 2007[40]

False models, in Wimsatt’s pragmatic-realist ‘piecewise engineering’ view of theorising and model building, help us see flaws and so build more robust models.

DGSE has served its purpose. It has shown us flaws which enable us to build a replacement – not a theory of everything in one model, but a framework of theory that enables us to explore everything.

In Wimsatt’s phrase, this has to be based on ‘heuristics all the way down’, to the level at which change is the product of natural selection. Heuristics-based modelling is prior to axiomatisation or the use of algorithms.

Heuristic principles …they are retuned, remodulated, reconceptualised and often newly reconnected piecemeal rearrangements of existing adaptations or expectations and they encourage us to do likewise with whatever we reconstruct (Wimsatt 2007 p10)

Modelling Time and Resources

The typical graduate macroeconomics and monetary economics training received at Anglo-American universities during the past 30 years or so, may have set back by decades serious investigations of aggregate economic behaviour and economic policy-relevant understanding. It was a privately and socially costly waste of time and other resources… (Willem Buiter 2009[39])

We have looked at how economic theory founded on structuration (the mutual determination of structure and agency) and on decision theory and ecological rationality can be much richer than one founded on methodological individualism and Cartesian rational choice theory.

We have also looked at the claims made for the advantages of microfoundations, including:

  1. The ability to avoid errors in macroeconomic models on the basis that the model runs counter to microeconomic theory;
  2. The ability to use the tools of microeconomics – in particular welfare economics concepts such as Pareto optimality;
  3. The ability to avoid the ‘Lucas Critique’.

And we have found them to be false. Indeed the structure–agency approach to modelling can bridge the link between micro and macro in ways that make these terms redundant; indeed it would be better to talk of economics as meso-economics. By this I mean the political-economy scope, predominant in classical economics, where all economic decisions were seen as being taken within the context of an economy – an economy which could be modelled, but a conceptually real economy nonetheless. This is the field of issues of great interest in economics: the relationship between individual agency decisions and the reproduction of economic structures. As Charles Goodhart said of DGSE, “It excludes everything I am interested in”.

This structure–agency approach avoids the Lucas critique because it endogenises reflexivity. Indeed Lucas’s own solution to the critique turns out to have been a false one, as it handwaves away uncertainty – the very reason agents are reflexive – and is based on assumptions that are incompatible with the SMD conditions for general equilibrium.

We have also found that the hope of using the tools of welfare economics was a phantom: you cannot assess welfare in a model with a single agent and no distribution. Welfare requires an assessment of winners and losers.

‘Microfounded’ DGSE then, can fulfil none of these promises, but an alternative approach can. How then can this approach be modelled?

I contend that a replacement modelling approach must have a number of key components in order to be useful at each level of aggregation.

  1. It should be able to model at various levels of aggregation – including the potential for agent-based modules;
  2. It should be able to model at least the major components of the sectoral balances – such as private and public sectors, state and central bank, and domestic and foreign sectors – and the balance sheets of all units modelled;
  3. It should be able to model the distributional impacts of wages, profits and rents, taxes and transfer payments, and enable the calculation of welfare costs and benefits by level of aggregation.

These three components by themselves imply, mathematically, a state-machine simulation, but they are not sufficient to explain changes of state. That also requires:

  1. It should be able to model movements and levels of money, base inputs and assets in terms of flows and stocks;
  2. It must be fully accounting-consistent, including the joint treatment of assets and their accompanying liabilities (money assets included). This will enable the proper use of all accounting tools and methods within economics, including proper analysis of money, credits, debts, counterparty-secured assets, capital depreciation and investment.

This kind of model would avoid the problems listed in this article because it is a simulation of the economy, not an abstraction of it – in particular, a simulation driven by flows of money and the revaluation of assets.
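As a minimal sketch of what the accounting-consistency requirement above means in practice (a toy illustration of my own, with invented numbers, not a model from the literature), consider a three-sector transactions-flow matrix in the Godley & Lavoie style: every payment by one sector is a receipt of another, so each flow row sums to zero, and once changes in financial stocks are recorded alongside income flows each sector’s column sums to zero as well.

```python
# Toy transactions-flow matrix (illustrative numbers only), Godley & Lavoie style.
# Columns: households, firms, banks.  '+' is a receipt/source of funds for the
# sector, '-' a payment/use of funds.

SECTORS = ("households", "firms", "banks")

flows = {
    "wages":              (+80.0, -80.0,   0.0),
    "consumption":        (-70.0, +70.0,   0.0),
    "interest on loans":  ( -2.0,  -3.0,  +5.0),
    "new bank loans":     ( +5.0, +10.0, -15.0),
    "change in deposits": (-13.0,  +3.0, +10.0),
}

# Every flow is someone's outflow and someone else's inflow ...
for name, row in flows.items():
    assert abs(sum(row)) < 1e-9, f"row '{name}' does not sum to zero"

# ... and because changes in financial stocks (loans, deposits) are recorded
# alongside income flows, each sector's uses of funds equal its sources.
column_sums = [sum(col) for col in zip(*flows.values())]
print(dict(zip(SECTORS, column_sums)))          # all zero in a consistent matrix
assert all(abs(s) < 1e-9 for s in column_sums)
```

A leakage anywhere in such a matrix shows up immediately as a row or column that fails to sum to zero – which is the practical content of ‘fully accounting-consistent’.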

The category of models which fits these criteria, and is suitable for this kind of ‘piecewise engineering’ modelling, is the stock-flow consistent modelling using social accounting matrices pioneered by Wynne Godley & Marc Lavoie[41] and now being used and developed by figures such as Stephen Kinsella[42] and Steve Keen[4]. Keen’s approach is of particular interest because, in as yet unpublished work (though the lecture is online), it uses engineering mathematics to derive a pricing formula for individual goods that depends on balancing flows in and out to achieve the desired level of consumption of the good – that formula being the same as the Kalecki pricing formula. The criticism often levelled at these models, that they are not founded on a theory of price, therefore does not hold.
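For reference, Kalecki’s pricing rule in the form usually quoted (not reproduced in the post) sets a firm’s price as a markup over unit prime cost, moderated by the average price charged in the industry:

$$ p = m\,u + n\,\bar{p}, \qquad m > 0,\; 0 \le n < 1 $$

where $u$ is unit prime (direct) cost and $\bar{p}$ the weighted average price of the industry. The claim above is that Keen’s flow-balancing derivation recovers a formula of this kind.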

This approach also allows the mathematics of robustness and stability estimation to be used, though only the first tentative steps in this direction have been taken so far. It is perhaps not too ambitious to imagine that in five years’ time models will, as a matter of course, evaluate their stability as you model, as well as evaluating the robustness of initial and modelling assumptions by calculating variations of the model in the cloud and presenting a statistical report to the modeller.
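A sketch of what such a check might look like (purely illustrative; the model, parameters and ranges are invented): linearise the model around a steady state, sample the uncertain behavioural parameters, and report how often the resulting transition matrix is stable, i.e. has spectral radius below one.

```python
# Illustrative robustness/stability check for a linearised two-state model
# x_{t+1} = A(theta) x_t: sample the uncertain parameters theta and report
# the share of draws for which the spectral radius of A is below one.

import numpy as np

rng = np.random.default_rng(0)

def transition_matrix(propensity_to_consume: float, inventory_adjust: float) -> np.ndarray:
    """Toy two-state model: [household wealth, firm inventories]."""
    return np.array([
        [1.0 - propensity_to_consume, 0.1],
        [inventory_adjust,            0.7],
    ])

def is_stable(A: np.ndarray) -> bool:
    return float(np.max(np.abs(np.linalg.eigvals(A)))) < 1.0

draws = [(rng.uniform(0.05, 0.95), rng.uniform(0.0, 0.5)) for _ in range(10_000)]
share = np.mean([is_stable(transition_matrix(c, k)) for c, k in draws])
print(f"share of parameter draws giving a stable model: {share:.1%}")
```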

Kinsella[42] on how these models deal with expectations:

Stock flow models tend to incorporate expectations in the following way…agents set themselves norms and targets, and act in accordance with these norms and targets, and with the expectations that they may hold about the future. These norms and targets ensure agents are rarely ever right about things, because they focus on past performance mainly. Mistakes in any period brought about by mistaken expectations in the last period create gluts or shortages of stocks in the form of inventories, money balances, or wealth. These stock buildups function as feedback mechanisms that change behaviour in the next period, for example a firm having a sale if inventories get too high above their target level
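A minimal sketch of that mechanism (my illustration, not drawn from any particular stock-flow model): the firm’s norm is to expect this period’s sales to equal last period’s, it produces to meet expected sales plus a partial adjustment of inventories toward a target, and the expectational mistake accumulates as an inventory glut or shortage that changes behaviour in the next period.

```python
# Toy target/norm-based expectations with inventory feedback (illustrative only).

def simulate(demand, inventory_target=50.0, adjust_speed=0.5, periods=12):
    expected_sales = demand(0)           # adaptive norm: expect last period's sales
    inventory = inventory_target
    history = []
    for t in range(periods):
        actual = demand(t)
        production = expected_sales + adjust_speed * (inventory_target - inventory)
        inventory += production - actual          # the mistake accumulates as stock
        history.append((t, round(production, 1), round(inventory, 1)))
        expected_sales = actual                   # norm updated for next period
    return history

# Demand falls unexpectedly at t = 5: inventories pile up, then output is cut
# back toward the target -- the glut-then-correction feedback described above.
demand = lambda t: 100.0 if t < 5 else 80.0
for row in simulate(demand):
    print(row)
```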

Taking Wing

This approach is bleeding-edge new, but its development offers, for the first time, a potential set of theory and tools to displace ‘micro-founded’ neoclassical economics from its dominant position in economic thought. It is not yet a fully working, fully comprehensive set of tools, but given Taleb’s comment that

 we would have great jumps in knowledge if we avoided teaching these models, and replaced them with anything, even gardening classes.

It is never too soon to get rid of DGSE.

One hopes that a new generation of researchers, and an older generation that never drank the DGSE kool-aid, has now built up sufficient scale and momentum to develop this new approach to the point where it achieves its potential. We are no longer talking about ‘heterodox’ economics, but a new monetary mainstream economics.

References

1.    Mankiw, N.G., The Macroeconomist as Scientist and Engineer. The Journal of Economic Perspectives, 2006. 20(4): p. 29-46.

2.    Solow, R., “Building a Science of Economics for the Real World”, prepared statement to the House Committee on Science and Technology, Subcommittee on Investigations and Oversight. July 20, 2010: Washington.

3.    Varoufakis, Y. (2012) A Most Peculiar Failure: On the dynamic mechanism by which the inescapable theoretical failures of neoclassical economics reinforce its dominance.

4.    Keen, S., Debunking Economics. 2nd Revised and Expanded Edition ed. 2011, London: Zed.

5.    Schlefer, J., The Assumptions Economists Make. 2012, Harvard: Harvard University Press.

6.    Kahneman, D. and A. Tversky, Prospect Theory: An Analysis of Decision under Risk. Econometrica, March 1979. 47(2): p. 263-292.

7.    Lovins, A.B., L.H. Lovins, and P. Hawken, A Road Map for Natural Capitalism. Harvard Business Review, 1999(May-June).

8.    Hodgson, G., Meanings of Methodological Individualism. Journal of Economic Methodology, 2007. 14: p. 211-26.

9.    Blaug, M., The Methodology of Economics: Or, How Economists Explain. 1992: Cambridge University Press.

10.    Anderson, P.W., More is Different. Science, 1972. 177(4047): p. 393-396.

11.    Hahn, F., Macro foundations of micro-economics. Economic Theory, 2002. 21: p. 227-232.

12.    Giddens, A., The Constitution of Society: Outline of the Theory of Structuration. 1985, Cambridge: Polity.

13.    Bourdieu, P., The Logic of Practice. 1990, Cambridge: Polity Press

14.    Elder-Vass, D., Reconciling Archer and Bourdieu in an Emergentist Theory of Action. Sociological Theory, 2007. 25(4): p. 325-346.

15.    Kemp, C., Building Bridges between Structure and Agency: Exploring the Theoretical Potential for a Synthesis between Habitus and Reflexivity Essex Graduate Journal of Sociology, 2010. 10: p. 4-12.

16.    Colander, D., The Macrofoundations of Micro. Eastern Economic Journal, Fall 1993. 19(4): p. 447-457.

17.    Archer, M., Realist Social Theory: The Morphogenetic Approach. 1995, Cambridge: Cambridge University Press.

18.    Archer, M., Being Human: The Problem of Agency. 2000, Cambridge: Cambridge University Press.

19.    Archer, M., Structure, Agency and the Internal Conversation. 2003, Cambridge: Cambridge University Press.

20.    Archer, M., Making Our Way Through the World: Human Reflexivity and Social Mobility. 2007, Cambridge: Cambridge University Press.

21.    Lucas, R., Econometric Policy Evaluation: A Critique, in The Phillips Curve and Labor Markets, Carnegie-Rochester Conference Series on Public Policy, K. Brunner and A. Meltzer, Editors. 1976, American Elsevier: New York. p. 19-46.

22.    Goldstein, D.G. and G. Gigerenzer, Models of Ecological Rationality: The Recognition Heuristic. Psychological Review, 2002. 109(1): p. 75-90.

23.    Wimsatt, W.C., False Models as Means to Truer Theories, in Neutral Models in Biology, M.H. Nitecki and A. Hoffman, Editors. 2006, Oxford University Press.

24.    Chandrasekharan, P.C., Robust Control of Linear Dynamical Systems. 1996: Academic Press.

25.    Marcellino, M. and M. Salmon, Robust Decision Theory and the Lucas Critique. 2001.

26.    Friedrich, J., Primary error detection and minimization (PEDMIN) strategies in social cognition: A reinterpretation of confirmation bias phenomena. Psychological Review, 1993. 100(2): p. 298-319.

27.    Trope, Y. and A. Liberman, Social hypothesis testing: cognitive and motivational mechanisms, in Social Psychology: Handbook of Basic Principles, E.T. Higgins and A.W. Kruglanski, Editors. 1996, Guilford Press: New York. p. 91–93.

28.    Kirzner, I.M., Entrepreneurial Discovery and the Competitive Market Process: An Austrian Approach. Journal of Economic Literature, Mar 1997. 35(1): p. 60-65.

29.    Savage, L.J., The Foundations of Statistics. 1972: Dover Publications.

30.    de Soto, J.H., Socialism, Economic Calculation and Entrepreneurship. 2010: Edward Elgar / Institute of Economic Affairs.

31.    Bordley, R. and M. LiCalzi, Decision analysis using targets instead of utility functions. Decisions in Economics and Finance, 2000. 23: p. 53-74.

32.    Simon, H.A., A Behavioral Model of Rational Choice. The Quarterly Journal of Economics, Feb 1955. 69(1): p. 99-118.

33.    Pratt, J.W., H. Raiffa, and R. Schlaifer, Introduction to Statistical Decision Theory. 1995: MIT Press.

34.    Penrose, E.T. and C. Pitelis, The Theory of the Growth of the Firm. 2009: Oxford University Press.

35.    Bourdieu, P., The Social Structures Of The Economy. 2005: Polity.

36.    Boulding, K.E., The Image: Knowledge in Life and Society. 1961: University of Michigan Press.

37.    Pavlik, C., Technical reswitching: a spatial case. Environment and Planning A. 22(8): p. 1024-1034.

38.    Sheppard, E.S., T.J. Barnes, and C. Pavlik, The Capitalist Space Economy: Geographical Analysis After Ricardo, Marx and Sraffa. 1990: Unwin Hyman.

39.    Buiter, W., The unfortunate uselessness of most ‘state of the art’ academic monetary economics, in ft.com/maverecon. 2009, Financial Times.

40.    Wimsatt, W.C., Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality. 2007: Harvard University Press.

41.    Godley, W. and M. Lavoie, Monetary economics: an integrated approach to credit, money, income, production and wealth. 2007: Palgrave Macmillan.

42.    Kinsella, S., Words to the Wise: Stock Flow Consistent Modeling of Financial Instability. SSRN eLibrary, 2011.

4 thoughts on “Why DGSE is so hard to Displace, and a Programme for its Displacement”


  1. This is a minor point, but this quote “Marcellino and Salmon (2006)[25] are two of the very few economists to consider the application of robustness & control theory in economic theory.” is incredibly wrong. Tom Sargent – who just won the Economics sort-of Nobel – co-authored a whole book on robust control in economics, and there are a bunch of attempts to use the idea. It’s not the dominant approach, but it’s not like no one does it, or no one has heard of it.

