Threat from Giant White Sharks Dominates London Mayoral Election

Ken Livingstone today gave an interview to the Times in which he said the difference between him and Boris is that he wants to protect London from great white shark attacks.

What can he be referring to?

Telegraph 2007

Boris Johnson has revealed the inspiration behind his ambition to run London – Larry Vaughn, the mayor in the film Jaws who demanded the beaches stay open despite the ominous presence of a great white shark.

Just 24 hours after entering the race to become the Tory candidate in the mayoral elections next year, it emerged that Mr Johnson had praised Mayor Vaughn’s “laissez faire” approach to public safety on several occasions.

The mayor’s order leads to the gruesome death of a young boy. However, the MP for Henley said Mr Vaughn is a role model to which all politicians should aspire.

“The real hero of Jaws is the mayor,” Mr Johnson said last year in a speech at Lloyd’s of London.

“A gigantic fish is eating all your constituents and he decides to keep the beaches open. OK, in that instance he was actually wrong. But in principle, we need more politicians like the mayor – we are often the only obstacle against all the nonsense which is really a massive conspiracy against the taxpayer.”

Of course Boris was being contrary – how could anyone consider slimeball Mayor Vaughn a hero? Indeed, in the book you find the real reason he didn't want to close the beaches: he was in the pay of the local mafia, who owned beachfront property.

The real allegory of the book was that three very unsympathetic guys have to come together to save their town – a Greek tragedy about the idea of the greater good. A good theme, then, for the Mayoral election.

Indeed, Steven Spielberg said he had to change the characters (apart from Quint, of course) as he found them so unsympathetic he wanted the shark to win. Well, you never know what might happen to the Mayoral candidate’s boat at the Jubilee regatta. Let’s hope Boris takes along his compressed air.

What is the meaning of ‘Persistent Underdelivery’? #NPPF

Literal Meaning – It doesn’t say chronic or acute, and that’s important. All dictionary definitions of chronic refer to something constant over some period and include recurrent events within the definition. The dictionary definitions of persistence all refer to reaching back from the present by a period of time, or (unhelpfully) define it as an indeterminate period of time. The degree of undershoot doesn’t come into it, as the framework does not say ‘acute and persistent’; it is only the time dimension, not the scale dimension, that matters in the literal meaning of the word. Again, because it does not say chronic, and because consistency is part of the dictionary definitions of persistency, to me it has to be a continuous record of underdelivery.

How Far Back? – Think about it logically: if two years ago you had underdelivery, and underdelivery then persisted into the following year, it had persisted, and by definition it was persistent, so you only have to go back two years. What if there was underdelivery in year -1 but not in years -2 or -3, though there had been in earlier years? That would be chronic underdelivery, as it is recurring, but not persistent. What if delivery was positive in years -1 and -2 but negative before that? That would be neither persistent nor chronic by the literal meanings. Of course if underdelivery recurred in year +1 it would be chronic, and if in year +2 as well, both persistent and chronic. We can’t tell whether something is persistent unless we have more than one year’s continuous evidence; a single bad year might simply be an outlier, and there is no evidence to say otherwise.

So logically it has to mean underdelivery, of any degree, over two or more continuous years running from the past up to now.

But what is ‘now’? OK, we are in the current accounting period. We all know, at DCLG’s instruction, that trajectories are measured from the first day of the next accounting period forward. You can’t include the current period, as that would require forecasting to the end of the year, and the concept of persistency is about past events, not forecast ones. So logically it is the two accounting years before the current one.

Finally, how do you measure ‘underdelivery’? The concept of delivery is of houses built, not projected supply at those points. So it has to refer to whether units completed in that year were below the target for that year.

How do you measure that target for a past year? PPS3 or NPPF? There is no text in the NPPF suggesting it be cast retrospectively in terms of calculating housing supply, so in my opinion it would be PPS3 in the 2011-2012 and 2010-2011 years and the NPPF in the 2012-2013 year forwards. This also makes it very easy to calculate – just look at the AMR. Past arguments over five year supply are irrelevant; what matters is whether the houses that needed to be completed in a year were completed or not.
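To pin the test down, here is a minimal sketch in Python of the reading set out above – completions checked against the annual target for a continuous run of undershoot over the two (or more) completed accounting years before the current one. The function and the figures are illustrative assumptions of mine, not drawn from any actual AMR.

```python
def persistent_underdelivery(completions, targets, years_required=2):
    """Literal-reading test sketched above: underdelivery of any degree in
    EVERY one of the most recent `years_required` completed accounting
    years (continuous, not merely recurrent). Lists run oldest to newest
    and end with the last completed accounting year, so the current
    (unfinished) year is excluded."""
    if min(len(completions), len(targets)) < years_required:
        raise ValueError("need at least two completed years of evidence")
    recent = zip(completions[-years_required:], targets[-years_required:])
    # Persistent = below target in every recent year; the degree of
    # undershoot is irrelevant on this reading.
    return all(built < target for built, target in recent)

# Illustrative figures only (say, PPS3-era targets for 2010-11 and 2011-12):
print(persistent_underdelivery([420, 390], [500, 500]))  # True: two continuous years
print(persistent_underdelivery([510, 390], [500, 500]))  # False: one year may be an outlier
```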

This, to my mind, is the logical literal meaning of the NPPF. If DCLG consider that it means something different – some complicated fuzzy assessment of both the acuteness and the persistency of delivery over some longer period of time – they should say so in guidance, or use more precise terms in the NPPF; otherwise no one will be able to construct a forward-looking trajectory, and the required level of new housing to meet it, unambiguously.

Jamie Carpenter – ‘Hiding behind Localism’ on meaning of #NPPF

Planning Blogs

In the case of the NPPF, there is growing frustration that the government has failed to provide clarity on the meaning of policies in the document which planning consultants and lawyers see as potential battlegrounds. Questions have been raised about how far local plans can conflict with the NPPF during the 12-month grace period councils have to bring plans into line with national policy (the NPPF says that councils with plans adopted after 2004 may give full weight to relevant policies in the plans even if there is a “limited degree of conflict” with the framework). Another area of potential confusion is how much extra land for housing councils need to plan for on top of the required five-year supply. The NPPF says that councils with a “record of persistent underdelivery of housing” should provide an extra 20 per cent, while those with a good record can plan for only five per cent.

So far, the message emerging from the DCLG has been that now the NPPF has been published it is up to councils to define its meaning. Speaking last week at a London seminar, Greg Clark said that the NPPF is a “framework for local decision-taking” and it is for councils to make judgements on its interpretation. At a separate conference last week, chief planner Steve Quartermain described the framework as a “control shift” to local authorities. He revealed that a government helpline set up to advise local authorities on the NPPF is not intended to help them interpret the meaning of the policies contained in the document. He said: “The advice is not geared at telling you: ‘This is what the policy means’.” Quartermain said that instead the helpline would give advice about the process councils can follow to test their policies for conformity against the NPPF.

It will be a while before the full implications of this approach become clear. But there are risks. Take the issue of housing land supply and what constitutes a “record of persistent underdelivery of housing”, for example. Clark said at the seminar last week that he “did not want to appear to offer an interpretation” on what would be considered persistent underdelivery, while a DCLG spokeswoman said that it is for councils to decide, “based on evidence of performance”. But the result of the DCLG deciding against what might look like “top-down” intervention in this area could well be a spate of appeals and delays.

As Ian Tant, senior partner at consultancy Barton Willmore, has recently pointed out, a key question raised by this particular section of the NPPF is how many councils will readily accept that they are persistently failing to deliver. …Given that there has clearly been a problem in the not-too-distant past with a sizeable proportion of local authorities being over-optimistic with their housing land supply figures, it would be helpful for the DCLG to spell out exactly what it means by a “record of persistent underdelivery of housing”. Localism is an admirable principle. But hiding behind the localism word when some guidance from central government could help to head off costly appeals looks like an abdication of responsibility.

Distributional Impacts of Money Creation/QE – A Factor Returns Approach

There has been some debate about Paul Krugman’s recent articles arguing that QE is not particularly benefitting bank profits and that therefore the Fed is not the tool of the banks.

If we are talking about the distributional impacts of a policy, it is helpful to think in terms of factor returns.

If there is demand for money it will have a price and its production can attract a profit.

Think of the factor return on outside (central bank) money creation as seigniorage, and on inside (bank) money as interest (the asset return from good loans – also known as bank profit). Though of course central banks can also lend directly through a variety of open market operations.

The asset purchase programmes conducted by the Fed, and in various other jurisdictions, involve the central bank buying long-term bonds and so lowering long-term interest rates.

As QE is expansionary it will increase the money in the bank balances of former holders of government bonds – a stock.

But as banks borrow short and lend long, the profit rates on individual loans – loan repayments plus interest are a flow – will be squeezed by QE. Banks will not reduce their own ‘natural’ interest rates if doing so squeezes their lending book to the point where it cannot make a profit. So commercial bank rates have stuck persistently above the Treasury zero-bound rate. To me this is confirmation of a Hawtreyan credit deadlock – lenders refusing to lend and borrowers refusing to borrow because of falls in aggregate demand – rather than a Hicksian liquidity trap; the liquidity trap is simply confirmation of the ineffectiveness of central banks once a credit deadlock has hit.
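A stylised way to see the squeeze (my own gloss, with illustrative numbers rather than actual rates): the net interest margin on new lending is roughly

\[
\text{NIM} \approx r_L - r_S
\]

where r_L is the long lending rate and r_S the short funding rate. With r_S already pinned near the zero bound, QE works precisely by pulling r_L down – say from 5% towards 3% – so the margin on new lending compresses from about five points to about three, and banks defend their own lending rates rather than pass the easing through.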

So if idle balances accumulate we have net saving in the economy cancelling out the monetary easing. Banks remain liquid but not particularly profitable – Krugman has a point here – but those balances are likely to seek out the highest returns, such as commodities in emerging markets.

Savers on fixed incomes will be hit by near-zero Treasury rates, as pension funds need to buy annuities to fund their liabilities, as well as by rising food and energy prices. However, this is to some extent offset by the rise in equity prices caused by the expansionary monetary policy.

So QE is not a wholly pro- or anti-bank measure. Rather it is a measure which keeps banks alive – barely – at a time of high systemic risk, in the eurozone in particular. Zero interest rate policies benefit debtors against savers – but this is offset by banks needing to maintain their own interest rates above the zero bound, and by the expanded money supply boosting equities.

Too much of the comment on the net assumes that all policy is explicitly pro-banking interests – it’s more complicated than that. The problem is that even with regulatory capture the policy that should be pursued is not agreed on, and states and banks have opposite interests on issues such as sovereign debt.

Manifold Destiny – Can the Non-Existence of General Equilibrium be Proved?

Yesterday I put an early draft of a paper on SSRN concerning DSGE.

In a short and almost throwaway section I looked at how the axiomatic assumptions imported into general equilibrium theory from Debreu didn’t demonstrate general equilibrium at all, because spatially and temporally stamped goods always exist on a surface of the earth with differential features, which will always generate von Thünen land rent.

The moment I had put the paper online, though, I thought surely someone had thought of this before, as it is pretty obvious: general equilibrium, if it exists, does not occur in an aspatial and atemporal prior ether but emerges from an existing disposition of resources, firms, property and individuals. In the jargon, there will be non-convexity.

Indeed it had been thought of before, in a result from 1978 called Starrett’s spatial impossibility theorem, which outside the very narrow field of spatial economics is little known.

Starrett considered a surface of islands, with the consumer on one island and the firm on another. It is like Robinson Crusoe’s archipelago, only with trade.

the consumer living in A and the firm locating in B cannot be an equilibrium. It turns out that the economic agents always want to move closer to each other, as opposed to being apart and having to bear the transport cost….

Or, the theorem formally put:

Suppose an economy with a finite number of locations and homogeneous space. If transportation consumes scarce resources (and preferences are locally non-satiated), there is no competitive equilibrium with positive transport costs…. This result is true for any number of islands and economic agents.

The simplifying assumption is homogeneous space, which enables the producer and consumer to swap positions with the same result still holding.
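A toy numerical check of the intuition (my own construction in Python, not Starrett’s proof): under homogeneous space only the transport bill distinguishes configurations, so with any positive transport cost the ‘apart’ configuration is strictly dominated by co-location and cannot be an equilibrium.

```python
# Toy two-island check of the spatial impossibility intuition (my own
# illustration, not Starrett's proof). Space is homogeneous: production
# cost and the consumer's valuation are identical on both islands, so the
# only thing a configuration changes is the transport bill.

PRODUCTION_COST = 10.0   # identical on islands A and B (homogeneous space)
VALUATION = 25.0         # consumer's value for the good, same everywhere

def surplus(consumer_island, firm_island, transport_cost):
    """Joint surplus of the trade: valuation minus production cost,
    minus transport if consumer and firm sit on different islands."""
    shipping = transport_cost if consumer_island != firm_island else 0.0
    return VALUATION - PRODUCTION_COST - shipping

for t in (1.0, 5.0, 14.0):
    apart = surplus("A", "B", t)
    together = surplus("A", "A", t)
    # With any t > 0 co-location strictly dominates, so 'apart' cannot be
    # an equilibrium: someone always gains by moving next to the other.
    print(f"t={t}: apart={apart}, together={together}")
```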

Now this result is very powerful, and it led subsequent spatial theorists to produce theories based on non-convexity – which of course enabled you to show comparative advantage, gains from trade, increasing returns to scale, economies of urban agglomeration, the growth of urban areas and so on – and with increasing returns, perfect competition must also go. So with the core assumptions of neoclassical economics gone we get realistic results. Indeed it directly led to the New Economic Geography, based on concepts of gains from trade, which of course led to Krugman getting his Nobel Prize.

From the New Palgrave chapter on Spatial Economics:

In the Theory of Value, Debreu (1959) answers affirmatively. A commodity is defined by all its characteristics including its location: the same good traded in different locations must be treated as different commodities. This ‘answer’ runs into serious problems, as pointed out most clearly by Starrett (1974). Consider the extreme case of homogenous space where firms face the same convex production set, and consumer preferences are the same (and locally not satiated). Transporting commodities between locations is costly. Then the spatial impossibility theorem states that, with a finite number of locations, consumers, and firms, no equilibrium involves transportation. The intuition behind this result is straightforward: since economic activities are perfectly divisible and agents have no objective reason to distinguish between locations, each location operates in autarky to save on transport costs. To avoid this very counterfactual result (no trade), one of the assumptions behind the spatial impossibility theorem needs to be relaxed. If one takes transport costs as an unavoidable fact of life, one must assume either some non-homogeneity of space or some non-convexity of production sets.

Now my instinct would be that if you relax the homogeneity assumption then you are forced to accept non-convexity, which implies dynamic equilibrium or disequilibrium rather than fixed general equilibrium. Without an even distribution of costs, how can you have topologically convex costs? Without convex costs, and with imperfect competition, the Arrow-Debreu route to proving the existence of general equilibrium is closed off – and so is DSGE. In focussing on DSGE I missed the wider potential result. Proving this, however, is another matter entirely.

It implies that the surface of all prices should be seen as a surface of rents and prices – a manifold which can be studied using the maths of non-smooth analysis. Price vectors can then be seen as like gravitational attraction on this manifold, towards points of maximum revenue or minimum cost.
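One hedged way to formalise that gravitational picture (my own gloss, not a result from the paper): let a firm’s location x drift down the cost surface by gradient flow,

\[
\dot{x} = -\nabla C(x)
\]

where C(x) is the combined production, rent and transport cost at location x. Stationary points with \nabla C(x) = 0 are then the candidate islands of stability, and where C is non-smooth – at a city edge, say – the gradient is replaced by a subdifferential, which is exactly where the maths of non-smooth analysis comes in.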

The consequence of the spatial impossibility theorem is that where there are islands of homogeneous costs for a firm they will be unstable, as firms will be seeking to move slightly to gain comparative advantage, balancing this against the costs of relocating. Seen like this, Hotelling’s famous result of ice cream sellers on a beach can be seen as a special case of this wider phenomenon.
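A minimal sketch of that special case (assumed best-response dynamics on a unit beach; the set-up and parameters are mine, for illustration): each seller repeatedly relocates to capture the larger share of custom, and the leapfrogging converges on the centre.

```python
# Hotelling's ice cream sellers as best-response dynamics (illustrative
# sketch, not a proof). Customers are uniform on a beach [0, 1] and buy
# from the nearest seller, so shares are set by the midpoint between them.

def share(own, rival):
    """Fraction of the beach closer to `own` than to `rival`."""
    if own == rival:
        return 0.5  # identical positions split the custom
    mid = (own + rival) / 2.0
    return mid if own < rival else 1.0 - mid

def best_response(rival, step=0.01):
    """Grid-search the position that maximises share against `rival`."""
    grid = [i * step for i in range(int(1 / step) + 1)]
    return max(grid, key=lambda pos: share(pos, rival))

a, b = 0.1, 0.9
for _ in range(25):       # alternate best responses until they settle
    a = best_response(b)
    b = best_response(a)
print(round(a, 2), round(b, 2))   # both converge to the centre: 0.5 0.5
```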

There may be islands of partial equilibrium in some markets – such as at a livestock market – but because receipts will be spent in other markets where buyers do not all gather at a point, nor are distributed homogeneously, and where, because of relocation costs, firms and consumers are not in constant movement, there will never be a general equilibrium in all markets all at once. What we may find are islands of relative stability on this manifold: settlements, cities, towns and villages. Indeed where transport costs relative to production are high we may find islands of autarky where no competitive markets exist at all. These are islands of relative Lyapunov stability, with constantly shifting points of attraction; where you have many small flexible firms they may achieve a near or quasi-dynamic equilibrium.

Again, because these islands – settlements, towns and cities – have agglomeration economies of scale, general equilibrium, or even island stability for more than a short period of time, is never fully reached: cities grow, and this shifts the comparative trade advantages of existing firms and whether they can outbid others for the same piece of land. This conception of the geographic landscape as a manifold of stability potentially looks beyond the New Economic Geography, through being better able to analyse disequilibrium phenomena such as explosive city growth and city decline, using tools of stability analysis on manifolds such as von Neumann/Fourier techniques, for which well-established mathematical methods are known.