Broxbourne Local Plan Lead – Green Belt Policy is Outdated

Herts Mercury

“Outdated” green belt policy needs addressing, according to one of the men responsible for a major planning blueprint.

The claim was made by Broxbourne Borough Council’s Paul Seeby, who is overseeing the area’s local plan, which will facilitate the building of 7,000 homes.

Green belt policy was established in the 1940s with the aim of preventing urban sprawl, but a shortage of housing has brought it into question.

The Government stipulates houses can only be built on green belt land if there are “very special circumstances”, but half of the sites laid out in the borough’s plan lie in the coveted land.

“There has been building taking place primarily on brownfield sites,” said Cllr Seeby.

“But building on green belt land shouldn’t be seen as a loss or taking something away.

“If you look at sites like High Leigh or Rosedale Park, a lot of green belt can be opened up for people to enjoy. It’s a trade off, there’s not enough brownfield sites.

“I think we need to ask ourselves what we are preserving. A piece of grass isn’t necessarily rich in biodiversity.

“When we are trying to go into the green belt we need to look at ways to open it up so people can enjoy the countryside, albeit with a few houses on the site.”

He added: “The green belt boundaries were quite random. They took London and drew a line round it and you can’t access the land.

“I think [the policy] is outdated and we need to ensure that people have access to green space.”

Local plans are currently being drawn up by every district council in the country.

Around 45 per cent of the 16,000 homes to be built in East Herts lie on the green belt, while in North Herts the figure is around 42 per cent of 12,000 homes.

Richard Bullen, the honorary treasurer of the Hertfordshire branch of the Campaign to Protect Rural England, passionately disagreed with Mr Seeby.

“The primary purpose of the green belt is to stop communities being joined together,” he said.

“This stems back to what happened between the wars with houses being built on the side of roads and wherever they wanted.

“It’s called green only because the Ordnance Survey use the colour green to show countryside. It’s there to make sure that we don’t get urban sprawl.

“In 1945 London and Los Angeles were about the same size. In 1947 the green belt policy was introduced.

“If London was the size of LA now it would extend from Bedford to Brighton. The whole purpose of the green belt, to prevent urban sprawl, has been very successful.”

Part of the East Herts District Plan would see 10,000 homes built in Gilston as a new garden town. Spike Hughes, of Stop Harlow North, is campaigning against the development.

On the issue of green belt policy, he said: “I absolutely disagree with Mr Seeby.

“You keep hearing this ‘we are going to build on the green belt because it’s not nice green belt’. What a load of bull.

“If you look at the green belt around Gilston, it is beautiful.”


London – Double Densities with 2,200 Tower Blocks Not Enough – 5% of Green Belt Still Needed

CoStar on Carter Jonas Report

Carter Jonas, the UK property consultancy, has outlined a three-pronged approach to address London’s housing crisis in its latest report, entitled ‘Solving London’s Housing Crisis’.

Through a combination of taller buildings, increased density and building innovation, Carter Jonas estimates that around 1.47 million homes could be delivered in the next 20 to 30 years.

Building up is essential to accommodate more homes in a smaller area and has the potential to deliver 820,000 additional homes. The proportion of residential supply approved through tall building developments has risen from 16% to 42% between 2004 and 2015, demonstrating a clear recognition that this efficient use of land could be instrumental in meeting housing targets.

By increasing the density of development within Housing Zones, Opportunity Areas and Intensification Areas, which cover 15% of the city, there is the scope to support more than 720,000 new homes – double the 360,000 units currently forecast.

London is the greenest city of its size with over a fifth of the capital’s land area classified as Green Belt. The second solution – Build Out – highlights that just a third of the city’s 15,300 acres of ‘non-green’ Green Belt land could accommodate 250,000 homes, at affordable price points. Green Belt boundaries should be reviewed over time by local authorities in line with the current National Planning Policy Framework and the recent Housing White Paper.

Carter Jonas estimates that by employing modern methods of construction to ‘Build Differently’ an extra 397,500 new homes could be delivered over the same period. Furthermore, if 10% of the 360,000 residential units planned across the Housing Zones, Opportunity Areas and Intensification Areas were developed as smaller high-quality units, this could allow for up to an additional 36,000 units to be delivered.

Continuation of office to residential conversions offers at least a further 15,000-25,000 new homes, with 16,600 units currently granted planning and in progress (approx. 9m sqft of office space).

Tim Shaw, Head of Development, Carter Jonas, said: “There is not a single panacea that will solve London’s housing crisis, but there is a solution if we adopt an approach that allows for increased development density, relaxation of the green belt definition and innovation in construction techniques. We estimate that this approach could deliver the 1.5 million new homes London needs by 2050, so whilst the target is clearly very ambitious, it is achievable.

“Naturally, delivering these will require a flurry of new high-rise developments and meeting the political challenge of reclassifying green belt land.

“What is clear is that we collectively need to embrace change using a targeted, innovative and effective plan – and the Government needs to give us the flexibility to do so. Aspects will be unpopular – the idea of 2,200 new towers won’t be to everyone’s taste – but the situation in London has reached such a critical point that we need to face up to this harsh reality if we are to have any hope of tackling the Capital’s housing needs.”

Wealden v Lewes/South Downs Parts of Core Strategy Quashed because of Cumulative SAC Impact

Ballilaw

The impact of the JCS alone would be only 190 vehicles a day through the Ashdown Forest SAC; however, combined with Wealden’s these topped 1,000 a day, the screening threshold. Natural England gave bad advice.

As Wealden’s challenge was out of time except as regards the SDNP area – which has no strategic sites and almost no impact on Ashdown Forest – the quashing applies only there. So the effect is to leave the SDNP part of Lewes district without a plan unless it carries out a hugely expensive study of development with marginal impact – what a waste of time.


Micro Assumptions in Macro Models – Any Scope for NK/PK Convergence?

With major orthodox figures such as Paul Romer attacking the ‘lazy’ approach to microfoundations in DSGE models, and major figures such as Martin Eichenbaum questioning major pillars of DSGE (such as the Euler equation and, by implication, rational expectations), it is opportune to examine – in the light of recent theoretical developments – whether a stronger foundation for the economic behavior of individuals may emerge.

Romer made a number of points which apply to any system of modelling, including the identification problem, which applies equally to SFC models. The key point he made, however, is that this has masked the lack of progress of the DSGE approach to macro over the last 20 years, including the ‘lazy’ approach to micro-foundations which ignores system effects and emergent phenomena:

Suppose an economist thought that traffic congestion is a metaphor for macro fluctuations or a literal cause of such fluctuations. The obvious way to proceed would be to recognize that drivers make decisions about when to drive and how to drive. From the interaction of these decisions, seemingly random aggregate fluctuations in traffic throughput will emerge. This is a sensible way to think about a fluctuation.

Better modelling requires a better model of the agent and of the interactions between agents. Behavioral research and empirical observations are crucial, but to be useful for forecasting they need to be abstracted into a model.

Hence Romer’s call for taking microfoundations seriously, which is the purpose of this piece. Having said that, I hate the term: I refer here to ‘micro assumptions’ rather than ‘microfoundations’, because the term ‘foundations’ embodies a reductionist fallacy that excludes system-level properties which only reveal themselves at a system-wide scale, as emergent properties of individual agent decision making – precisely the point Romer was making.

To simplify matters I will focus on only one of the three legs of the DSGE framework: the household sector.

In all modern versions of this there is a household consumption function which maximises the net present value of utility over a lifetime via an Euler equation (which effectively defines an IS curve). The use of rational expectations amounts to perfect foresight, with agents immune to the ‘illusion’ that current price levels are a reliable guide to future action.
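For readers who want the standard form, a minimal sketch (with period utility u, discount factor β and real interest rate r – generic textbook notation, not taken from any particular model) is

$$\max_{\{c_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{s.t.} \quad a_{t+1} = (1+r_t)\,a_t + y_t - c_t,$$

whose first-order condition is the consumption Euler equation

$$u'(c_t) = \beta\, E_t\!\left[(1+r_{t+1})\, u'(c_{t+1})\right].$$

Under rational expectations $E_t$ is the model-consistent expectation, which is exactly what the criticisms below take aim at.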

From a Ramsey consumption model the residual is savings. This then feeds into a Cass-Koopmans growth model (as used in RBC models, the foundation of DSGE models) in which the change in the capital stock equals savings minus depreciation, with savings equal to investment. On this point the divergence between NK and PK is small, as all the PK models I know use a similar definition of the ‘capital stock’ derived from Kalecki. Again the term is misleading – it is a flow, not a stock; the classical term ‘capital advanced’ is far superior, since it is only a stock if you freeze and roll up the accounts after a fixed but arbitrary period, such as a calendar year.

Capital here is defined in purely monetary terms, and with only a single ‘representative agent’ in DSGE models no complications arise from capital theory or the inapplicability of marginal theories of distribution – which PKs might otherwise object to. So the real difference between NK and PK models is over the household consumption function. Here many PK models have a lot to answer for, making very simplistic and unfounded assumptions such as savings being a fixed proportion of income. This isn’t good enough.
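To pin down the accounting in the first half of that paragraph (generic notation again, assumed here rather than quoted from any specific model):

$$K_{t+1} = K_t + I_t - \delta K_t, \qquad I_t = S_t = Y_t - C_t,$$

so the net change in the ‘capital stock’ is simply accumulated saving less depreciation over whatever accounting period you choose to freeze.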

Recent years have seen a number of theoretical developments which offer the potential to replace the RE/Representative Agent/Euler equation approach.

Firstly, the growing use of extrapolative expectations – the assumption that agents extrapolate current conditions into the future – a hypothesis with growing empirical support, as Noah Smith summarises.
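A minimal one-lag version of such a rule (the parameter λ and the single lag are illustrative assumptions, not a specification from any particular paper) is

$$E_t[x_{t+1}] = x_t + \lambda\,(x_t - x_{t-1}), \qquad 0 < \lambda \le 1,$$

i.e. agents project the most recent change forward rather than solving for the model-consistent expectation.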

The problem with the Euler equation is that, in equilibrium, the interest rate on savings it implies should be the same as the money market interest rate (to be strict, minus any Wicksell effect). They are not: not only are the two not positively correlated, many studies show they are negatively correlated. The same phenomenon underlies the ‘equity premium puzzle’ and the ‘risk-free rate puzzle’ (of which more below, as it now seems there is a solution to these puzzles).

The Euler equation implies that an increased interest rate should lead to higher savings and the deferment of consumption from the present to the future. However, two factors seem to be at work. Firstly, households are liquidity constrained, not simply budget constrained: if debtors suffer a reduction in disposable income from higher interest rates they may prioritise essential current consumption over savings, and as Keynesians stress the marginal propensity to save and the marginal propensity to consume reflect different decisions. Secondly, interest rates in an inflation-targeting monetary regime are highly correlated with inflation, so consumers may buy now to avoid higher prices later.

The solution to both of these issues seems to lie in abandoning the representative consumer and dividing households into several groups. This can be done, for example, by class (those relying on wages only and those with investment income) and by degree of indebtedness. Liquidity constraints can then vary across these groups, as can the degree to which future savings are ‘planned’.

Of course few households will plan their savings by optimizing over a Ramsey model. But many will invest in pension funds and the like, which do run sophisticated financial models. For everything else, ‘fast and frugal’ heuristics (rules of thumb) can be assumed, based on current spending patterns.
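As a purely illustrative sketch (in Python, with hypothetical parameter values chosen for the example, not figures from the literature), a ‘fast and frugal’ liquidity-constrained rule might look like this next to the fixed-propensity assumption criticised above:

```python
# Illustrative only: a rule-of-thumb household that covers essentials first,
# keeps a small cash buffer, and saves any remainder, versus the simplistic
# fixed-savings-rate assumption. All parameter values are hypothetical.

def rule_of_thumb(income, assets, essential_spend=15_000, buffer_months=3):
    """Spend essentials first, respect the liquidity constraint, save the rest."""
    target_buffer = buffer_months * essential_spend / 12
    spendable = income + max(assets - target_buffer, 0)    # liquidity constraint
    consumption = min(max(essential_spend, 0.9 * income), spendable)
    return consumption, income - consumption               # (consumption, saving)

def fixed_propensity(income, mps=0.2):
    """The shortcut criticised above: save a fixed share of income."""
    return (1 - mps) * income, mps * income

for y in (12_000, 40_000, 80_000):
    print(y, rule_of_thumb(y, assets=2_000), fixed_propensity(y))
```

The point of the toy is simply that the saving rate emerges from the constraint and the heuristic – it is zero for the low-income household in the example – rather than being imposed as a fixed proportion of income.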

Here extrapolative expectations come in. As a recent paper by Glaeser and Nathanson points out, it only takes the assumption that a minority of house buyers will extrapolate current house price trends to capture speculation, bubbles and momentum. As Noah Smith summarises:

Glaeser and Nathanson’s model makes one crucial assumption — that investors rely on past prices to make guesses about demand, but fail to realize that other buyers are doing the exact same thing. When everyone is making guesses about price trends based on other people’s guesses about price trends, the whole thing can become a house of cards. The economists show how if homebuyers think this way, the result — predictably — is repeated housing bubbles and crashes. 

Of course, the division of agents into categories in this way was precisely the foundation of Minsky’s Financial Instability Hypothesis.
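A toy simulation of the mechanism (this is not the Glaeser-Nathanson model, just a minimal sketch in which a hypothetical share of buyers bid on the projected trend and the rest on noisy fundamentals) shows how momentum and repeated overshooting emerge:

```python
# Toy sketch of price extrapolation: a share of buyers bid on the projected
# trend, the rest on (noisy) fundamental value; their interaction generates
# momentum and repeated overshooting around the fundamental.
import random

random.seed(0)
FUNDAMENTAL = 100.0
SHARE_EXTRAPOLATORS = 0.6   # hypothetical share of trend-chasing buyers
LAM = 1.6                   # hypothetical extrapolation strength

prices = [100.0, 101.0]     # two starting observations to extrapolate from
for t in range(100):
    trend_bid = prices[-1] + LAM * (prices[-1] - prices[-2])
    fundamental_bid = FUNDAMENTAL + random.gauss(0, 3)
    prices.append(SHARE_EXTRAPOLATORS * trend_bid
                  + (1 - SHARE_EXTRAPOLATORS) * fundamental_bid)

print(f"min {min(prices):.1f}  max {max(prices):.1f}  last {prices[-1]:.1f}")
```

With no extrapolators the price just tracks the noisy fundamental; add a trend-chasing minority and the series cycles well above and below it – the momentum the Glaeser-Nathanson paper formalises.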

For an agent to fully investigate current and future price movements is costly. Information takes time to gather and time to filter for noise, and the time needed to filter noise increases with the amount gathered, leading to exponential costs. ‘Sticky information’ models (Mankiw/Reis) are a form of bounded rationality based on extrapolative expectations. Indeed, once you allow for this you get a non-vertical Phillips curve: Keynesianism is seemingly vindicated.

The second major advance concerns the interpretation of the utility function given historical time. Here I will refer to the work of Ole Peters of the Santa Fe Institute.

Current utility theory takes a ‘many worlds’ approach to events, similar to the one which has bogged down theoretical physics. So if you take many tosses of a coin you assemble them into an ‘ensemble’, from which you can estimate likelihoods and hence logarithmic utility. Peters has shown this to be flawed and has proposed a replacement approach which places events and decisions in historical time, based on past events. This approach dispenses with utility but is mathematically equivalent to logarithmic utility. Most excitingly, it offers a basis for estimating the ‘rational leverage’ of both households and firms from past outcomes and future likelihoods – and a solution to the equity premium and other finance theory puzzles.

We resolve the puzzle by rejecting the underlying axiom that the expectation value of profit should be used to judge the desirability of [a] contract. Expectation values are averages over ensembles, but an individual signing [a] contract is not an ensemble. Individuals are not ensembles, but they do live across time. Averaging over time is therefore the appropriate way of removing randomness from the model, and the time-average growth rate is the object of interest.
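A minimal numerical sketch of the distinction, using the standard illustrative gamble from this literature (heads multiplies wealth by 1.5, tails by 0.6), makes the point:

```python
# Ensemble average vs. time average for a multiplicative coin-toss gamble.
# Per-round ensemble expectation: 0.5*1.5 + 0.5*0.6 = 1.05  -> looks attractive.
# Per-round time-average growth:  (1.5*0.6)**0.5 ≈ 0.95     -> wealth decays.
import random
import statistics

random.seed(1)

def play(rounds):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

# Ensemble view: average over many parallel "individuals" after 10 rounds.
outcomes = [play(10) for _ in range(100_000)]
print("ensemble mean:     ", statistics.mean(outcomes))    # ≈ 1.05**10 ≈ 1.63
print("median individual: ", statistics.median(outcomes))  # ≈ 0.59 – most lose

# Time view: one individual living through 1,000 successive rounds.
print("one long-run path: ", play(1_000))                  # effectively zero
```

The expectation value is pulled up by a handful of lucky paths, while the typical individual, who experiences the gamble sequentially through time, is almost certain to be ruined – which is the sense in which ‘individuals are not ensembles’.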

As Peters points out, Cochrane’s textbook on finance manages to derive all of modern finance theory from a single basic equation, but one that rests on a false assumption about ensemble expectations; correct that assumption and you get the whole of finance theory as a free gift.
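For reference, the single basic equation in question is the stochastic discount factor pricing equation at the heart of Cochrane’s Asset Pricing,

$$p_t = E_t\!\left[m_{t+1}\, x_{t+1}\right], \qquad m_{t+1} = \beta\,\frac{u'(c_{t+1})}{u'(c_t)},$$

where $p_t$ is the asset’s price, $x_{t+1}$ its payoff and $m_{t+1}$ the stochastic discount factor; the expectation $E_t$ is exactly the ensemble average that Peters objects to.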

So the proposal is to reconstruct the micro assumptions of household behavior based on extrapolative expectations and optimal leverage with liquidity constraints.
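In the simplest setting Peters analyses (a risky asset following geometric Brownian motion with drift μ and volatility σ, plus a risk-free rate r – assumptions of that stylised model, not of any particular household), the time-average growth rate of a portfolio with leverage l and its maximiser are

$$g(l) = r + l(\mu - r) - \tfrac{1}{2}\, l^{2} \sigma^{2}, \qquad l^{*} = \frac{\mu - r}{\sigma^{2}},$$

the continuous-time Kelly result – the sort of closed-form ‘rational leverage’ referred to above.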

Such a proposal requires dividing households into groups and agents based on wealth and leverage. Once you have wealth transfers and lending, however, this requires a balance-sheet-based approach which can model stocks and flows. So the irony is that the current push to improve ‘microfoundations’ could bring NKs firmly towards the ideas and techniques pioneered in the PK community.