Micro Assumptions in Macro Models – Any Scope for NK/PK Convergence?

With major orthodox figures such as Paul Romer attacking the ‘lazy’ approach to microfoundations in DSGE models, and major figures such as Michael Eichenbaum questioning central pillars of DSGE (such as the Euler equation and, by implication, rational expectations), it is opportune to examine – in the light of recent theoretical developments – whether a stronger foundation for the economic behavior of individuals may emerge.

Romer made a number of points which apply to any system of modelling, including the identification problem, which applies equally to SFC models. His key point, however, was that these techniques have masked the lack of progress of the DSGE approach to macro over the last 20 years, including its ‘lazy’ approach to microfoundations which ignores system effects and emergent phenomena.

Suppose an economist thought that traffic congestion is a metaphor for macro fluctuations or a literal cause of such fluctuations. The obvious way to proceed would be to recognize that drivers make decisions about when to drive and how to drive. From the interaction of these decisions, seemingly random aggregate fluctuations in traffic throughput will emerge. This is a sensible way to think about a fluctuation.

Better modelling requires a better model of the agent and of the interactions between agents. Behavioral research and empirical observation are crucial, but to be useful for forecasting they need to be abstracted into a model.

Hence Romer’s call for taking microfoundations seriously, which is the purpose of this piece. Having said that, I hate the term. I refer here to ‘micro assumptions’ rather than ‘micro foundations’, as the use of the term ‘foundations’ represents a reductionist fallacy: it excludes system-level properties that only reveal themselves at a system-wide scale, as emergent properties of individual agent decision making. Precisely the point Romer was making.

To simplify matters I will focus on only one of the three legs of the DSGE framework: the household sector.

In all modern versions of this there is a household consumption function which maximises the net present value of utility over a lifetime, with the optimum characterised by an Euler equation (which effectively defines an IS curve). The use of rational expectations assumes perfect foresight, where agents are immune to the ‘illusion’ that current price levels are a reliable guide to future action.
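For concreteness, the standard consumption Euler equation with CRRA utility (written in my own notation rather than taken from any particular paper) equates the marginal utility cost of saving a little more today with its discounted expected return tomorrow:

\[ c_t^{-\sigma} = \beta \, (1 + r_{t+1}) \, E_t\!\left[ c_{t+1}^{-\sigma} \right] \]

Consumption growth then responds to the gap between the real interest rate and the household’s rate of time preference.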

From a Ramsey consumption model, the residual is savings. This then feeds into a Cass-Koopmans growth model (as used in RBC models, the foundation of DSGE models) where the change in the capital stock equals investment less depreciation, with investment equal to savings. On this point the divergence between NK and PK is small, as all the PK models I know use a similar definition of the ‘capital stock’ derived from Kalecki. Again the term is misleading: it is a flow not a stock (the classical term ‘capital advanced’ is far superior); it only becomes a stock if you freeze and roll up accounts after a fixed but arbitrary period, such as a calendar year. Capital here is defined in purely monetary terms, and with only a single ‘representative agent’ in DSGE models no complications from capital theory and the inapplicability of marginal theories of distribution (which PKs might otherwise object to) arise.

So the real difference between NK and PK models is over the household consumption function. Here many PK models have a lot to answer for, making very simplistic and unfounded assumptions such as savings being a fixed proportion of income. That isn’t good enough.
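In accounting terms the accumulation identity behind this (my notation, but standard in both camps) is:

\[ K_{t+1} = K_t + I_t - \delta K_t, \qquad I_t = S_t = Y_t - C_t \]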

Recent years have seen a number of theoretical developments which offer the potential to replace the RE/Representative Agent/Euler equation approach.

Firstly, the growing use of extrapolative expectations: the hypothesis that agents extrapolate current conditions into the future. It has growing empirical support, as Noah Smith summarises.
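A minimal way to write such a rule (an illustrative form of my own, not a canonical specification) is that the expected value of a variable next period is today’s value plus some fraction of the recent trend:

\[ E_t^{\text{extrap}}[x_{t+1}] = x_t + \lambda\,(x_t - x_{t-1}), \qquad 0 < \lambda \le 1 \]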

The problem with the Euler equation is that, in equilibrium, the interest rate implied by the Euler equation (from observed consumption growth) should be the same as the money market interest rate (strictly, minus any Wicksell effect). They are not: not only are they not positively correlated, many studies show they are negatively correlated. The same phenomenon underlies the ‘equity premium puzzle’ and the ‘risk-free rate puzzle’ (of which more below, as it now seems there is a solution to these puzzles).
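The implied rate comes from log-linearising the Euler equation above (again in my notation, with σ the CRRA coefficient and ρ the rate of time preference), which is what the empirical studies test against money market rates:

\[ E_t\!\left[\Delta \ln c_{t+1}\right] \approx \frac{1}{\sigma}\,(r_{t+1} - \rho) \quad\Longleftrightarrow\quad r^{\text{Euler}}_{t+1} \approx \rho + \sigma\,E_t\!\left[\Delta \ln c_{t+1}\right] \]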

The Euler equation implies that an increased interest rate should lead to higher savings and the deferment of consumption from the present to the future. However, two factors seem to be at work. Firstly, households are liquidity constrained, not simply budget constrained: if debtors suffer a reduction in disposable income from higher interest rates they may prioritise essential current consumption over savings. As Keynesians stress, the marginal propensities to save and to consume reflect different decisions. Secondly, interest rates in an inflation-targeting monetary regime are highly correlated with inflation, so consumers may buy now to avoid higher prices later.

The solution to both of these issues seems to lie in abandoning the representative consumer and dividing households into several groups. This can be done, for example, by class (those relying on wages only and those on investment income) and by degree of indebtedness. Liquidity constraints vary along these dimensions, as does the degree to which future savings are ‘planned’.

Of course few households will plan their savings by optimising through a Ramsey model. But many will invest in pension funds and the like, which do run sophisticated financial models. For everything else, ‘fast and frugal’ heuristics (rules of thumb) can be assumed, based on current spending patterns.
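A sketch of what such a household block might look like (entirely hypothetical names and parameters, just to make the idea concrete): a planner type whose saving rises with the interest rate, a rule-of-thumb type who saves whatever is left after habitual spending, and a liquidity-constrained debtor whose saving is squeezed when debt service rises.

```python
# Hypothetical sketch of three household types with different saving rules,
# replacing a single representative agent. Names and parameters are my own.

def saving_planner(income, r, beta=0.97, sigma=2.0):
    """Forward-looking saver: saving rises with the interest rate,
    a crude stand-in for an Euler-equation / pension-fund plan."""
    target_rate = 0.1 + 0.5 * (beta * (1 + r) - 1) / sigma
    return max(target_rate, 0.0) * income

def saving_rule_of_thumb(income, habit_spending):
    """Fast-and-frugal heuristic: spend what you usually spend, save the rest."""
    return max(income - habit_spending, 0.0)

def saving_constrained_debtor(income, r, debt, essential_spending):
    """Liquidity-constrained debtor: debt service and essential consumption
    come first, so higher r can push saving to zero or below."""
    debt_service = r * debt
    return income - debt_service - essential_spending  # may be negative

for r in (0.01, 0.05):
    print(f"r = {r:.2f}")
    print("  planner saves       ", round(saving_planner(100, r), 2))
    print("  rule of thumb saves ", round(saving_rule_of_thumb(100, 85), 2))
    print("  debtor saves        ", round(saving_constrained_debtor(100, r, 200, 80), 2))
```

Run it and the debtor’s saving falls as the interest rate rises while the planner’s rises, which is the kind of sign pattern a single representative Euler-equation consumer cannot produce.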

Here extrapolative expectations come in. As a recent paper by Glaeser and Nathanson points out, it only takes the assumption that a minority of house buyers extrapolate current house price trends to capture speculation, bubbles and momentum. As Noah Smith summarises:

Glaeser and Nathanson’s model makes one crucial assumption — that investors rely on past prices to make guesses about demand, but fail to realize that other buyers are doing the exact same thing. When everyone is making guesses about price trends based on other people’s guesses about price trends, the whole thing can become a house of cards. The economists show how if homebuyers think this way, the result — predictably — is repeated housing bubbles and crashes. 
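A deliberately minimal toy (my own construction, not the Glaeser and Nathanson model) shows the mechanism: a minority of buyers bid on the assumption that the recent price trend will continue, the rest anchor on an unchanging fundamental value, and the market price is set by the average expectation plus noise. The extrapolating minority makes prices overshoot the fundamental and correct in recurring waves.

```python
# Toy housing market with a minority of trend-extrapolating buyers.
# The fundamental never moves, yet prices cycle around it.
import random

random.seed(1)
fundamental = 100.0
prices = [100.0, 101.0]      # two starting prices to define an initial trend
share_extrap = 0.3           # minority of trend-chasing buyers
lam = 3.0                    # how aggressively they project the trend forward

for t in range(60):
    trend = prices[-1] - prices[-2]
    extrap_bid = prices[-1] + lam * trend      # "the trend will continue"
    fundamental_bid = fundamental              # "prices revert to fundamentals"
    avg_expectation = share_extrap * extrap_bid + (1 - share_extrap) * fundamental_bid
    prices.append(avg_expectation + random.gauss(0, 0.5))

print(" ".join(f"{p:5.1f}" for p in prices))
```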

Of course, it is the division of agents into categories in this way which was precisely the foundation of Minsky’s Financial Instability Hypothesis.

For an agent to fully investigate current and future price movements is costly. Information takes time to gather and time to filter for noise, and the time spent filtering increases with the amount gathered, so costs rise steeply. ‘Sticky information’ models (Mankiw/Reis), in which agents update their information sets only infrequently, are a form of bounded rationality consistent with extrapolative expectations. Indeed, once you allow for this you get a non-vertical Phillips curve. Keynesianism is seemingly vindicated.
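The flavour of the Mankiw/Reis setup (my paraphrase of its structure, not their exact equations) is that in each period only a fraction λ of agents update to the current full-information forecast, so the economy-wide expectation is a geometrically weighted average of current and past forecasts:

\[ \bar{E}_t[x] = \lambda \sum_{j=0}^{\infty} (1-\lambda)^{j}\, E_{t-j}[x] \]

Old forecasts keep influencing current pricing decisions, which is why monetary shocks have real effects and the short-run Phillips curve is not vertical.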

The second major advance concerns the interpretation of the utility function given historical time. Here I will refer to the work of Ole Peters of the Santa Fe Institute.

Current utility theory has a ‘many worlds’ approach to events, similar to the one that has bogged down theoretical physics. So if you take many tosses of a coin you assemble them into an ‘ensemble’, from which you can estimate likelihoods and hence logarithmic utility. Peters has shown this to be flawed and offers a replacement approach which places events and decisions in historical time, based on past events. This approach dispenses with utility yet, for multiplicative wealth dynamics, is mathematically equivalent to logarithmic utility. Most excitingly it offers a basis for estimating the ‘rational leverage’ of both households and firms from past outcomes and future likelihoods, and a solution to the equity premium and other finance theory puzzles.

We resolve the puzzle by rejecting the underlying axiom that the expectation value of profit should be used to judge the desirability of [a] contract. Expectation values are averages over ensembles, but an individual signing [a] contract is not an ensemble. Individuals are not ensembles, but they do live across time. Averaging over time is therefore the appropriate way of removing randomness from the model, and the time-average growth rate is the object of interest.
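A toy simulation in the spirit of Peters’ argument (my own sketch, not code from his papers) makes the distinction concrete: wealth is multiplied by 1.5 on heads or 0.6 on tails, so the ensemble average grows 5% a round while the time-average growth factor is sqrt(1.5 × 0.6) ≈ 0.95, and the typical individual trajectory shrinks.

```python
# Ensemble average vs time average in a multiplicative coin-toss gamble.
import random
import statistics

random.seed(0)
ROUNDS, PEOPLE = 20, 100_000

def play(rounds):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

outcomes = [play(ROUNDS) for _ in range(PEOPLE)]

print("mean wealth (ensemble average):", round(statistics.mean(outcomes), 3))
print("theoretical ensemble average  :", round(1.05 ** ROUNDS, 3))
print("median individual wealth      :", round(statistics.median(outcomes), 3))
print("theoretical time-average path :", round((1.5 * 0.6) ** (ROUNDS / 2), 3))
print("share ending below start      :", sum(w < 1 for w in outcomes) / PEOPLE)
```

Most simulated individuals end below their starting wealth even though the mean across the ensemble grows, which is exactly the gap between the expectation value and what an individual living through time experiences.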

As Peters points out, Cochrane’s textbook on asset pricing manages to derive all of modern finance theory from a single basic equation, but one that makes a false assumption about ensemble averaging; correct that assumption and you get the whole of finance theory, corrected, as a free gift.
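The equation in question is Cochrane’s central pricing formula, price equals the expected discounted payoff under a stochastic discount factor; Peters’ objection is to the expectation here being an ensemble average rather than a time average:

\[ p_t = E_t\!\left[ m_{t+1}\, x_{t+1} \right], \qquad m_{t+1} = \beta\,\frac{u'(c_{t+1})}{u'(c_t)} \]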

So the proposal is to reconstruct the micro assumptions of household behavior based on extrapolative expectations and optimal leverage with liquidity constraints.

This requires the division of households into groups of agents based on wealth and leverage. Once you have wealth transfers and lending, however, it also requires a balance-sheet-based approach which can model stocks and flows. So the irony is that the current push to improve ‘microfoundations’ could bring NKs firmly towards the ideas and techniques pioneered in the PK community.
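To make the bookkeeping point concrete, here is a deliberately minimal sketch (my own toy, not any particular SFC model) of two household groups, debtors and creditors, linked by a loan. Period flows update the stocks, and because every interest or lending flow out of one group is a flow into the other, those flows cancel in the aggregate.

```python
# Two-group balance-sheet bookkeeping with a crude stock-flow consistency check.

debtors   = {"deposits": 10.0, "loans_owed": 50.0}
creditors = {"deposits": 40.0, "loans_held": 50.0}

def run_period(income_d, income_c, spend_d, spend_c, r=0.05, new_lending=5.0):
    interest = r * debtors["loans_owed"]
    debtors["deposits"]     += income_d - spend_d - interest + new_lending
    debtors["loans_owed"]   += new_lending
    creditors["deposits"]   += income_c - spend_c + interest - new_lending
    creditors["loans_held"] += new_lending

before = debtors["deposits"] + creditors["deposits"]
run_period(income_d=100, income_c=100, spend_d=98, spend_c=90)
after = debtors["deposits"] + creditors["deposits"]

# interest and lending wash out between the groups, so aggregate deposits
# change only by aggregate income minus aggregate spending ...
assert abs((after - before) - ((100 + 100) - (98 + 90))) < 1e-9
# ... and the loan stock stays an asset of one group and a liability of the other
assert debtors["loans_owed"] == creditors["loans_held"]
print("consistency checks passed; debtor deposits now", round(debtors["deposits"], 2))
```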
