The Perpetual Motion Assumption in New Keynesian Models

Note: There is nothing original in this article, as it relies on the work of Ping Chen – what I hope to do here is highlight, for those not of a mathematical bent, what I think is the biggest mistake in the history of macroeconomics.

The Problem – Reconciling Equilibrium and Business Cycles

Classical economics had a very clear view of the operation of the economy as a whole – the invisible hand.  Though individual agents would compete and accumulate, in free capital markets investment would flow to the most profitable occupations and unprofitable occupations would die off.  The analogy was of a process of ‘gravitation’.  The problem was reconciling this with business cycles and the irregular periods of crisis that capitalism underwent.  Hence such problems were treated as outside – exogenous to – the system, like bad weather, wars and the occasional disastrous economic decision by monarchs.

The Great Depression firmly shook the belief in stability and the 1930s were a fecund period for new ideas.  In this febrile climate an informal paper was presented to a conference that sought to explain this instability not as something within – endogenous to – capitalism but as something caused by the accumulation of random outside shocks.  This was hailed as a ‘Copernican’ revolution in economics by Samuelson; Lucas said “That was a hell of an idea. It was just a huge jump from what anyone had done.”  It formed the basic assumption of modern Real Business Cycle models and the NK (New Keynesian) models which arose from them: a basically stabilizing capitalist system buffeted by something outside the realm of economics.  The problem was that the originator of the theory – Ragnar Frisch – had made an error.  His promised journal write-up of the idea never appeared, and he did not even refer to it in his acceptance speech for the first Nobel Prize in Economics, even though it was referred to in his citation.  As the primary theoretical underpinning of the neoclassical model of the business cycle, it is probably the greatest mistake in economics; for in seeing the capitalist economy as basically stable and stabilizing it had nothing to say about the Great Recession.

Early Dynamic Theories of The Business Cycle – Marx, Tugan-Baranovsky and Aftalion 

Though in the era of classical economics there were occasional thinkers, such as Lauderdale and Malthus, who advanced the concept that cycles and crises were caused by a disproportionality between investment and consumption, the first thinker to advance a systematic and dynamic theory of this was Marx.  This may come as a surprise to many, but Frisch and Kalecki – who advanced ideas which were to become the foundation of all modern theories of the business cycle – took their cue from a line of thinking advanced by Marx.

Marx had several theories of crisis in several notebooks, none of which were published in his lifetime.  He never had the opportunity to draw together a unified theory of growth, the cycle and long-run crisis.  His main cyclical theories, however, are in Theories of Surplus Value (in the section on Ricardo, Ch. 17) and Capital Vol. 2.

For Marx – building on earlier work by Sismondi – the starting point was a rejection of Say’s Law: that producers trade with others demanding consumption goods, so that supply creates its own demand.  For Marx this may be true in a barter economy but not a monetary one, where the motivation of holders of money is not to consume but to accumulate.

“no one can sell unless someone else purchases.  But no one is forthwith bound to purchase just because he has sold”.

The potential for money to act as a brake on consumption, through non-circulation and the time gap between investment and consumption, was suggested by a number of authors, notably J.S. Mill.  Marx however criticized such purely monetary theories:

the separation between purchase and sale [in a monetary economy]… explain the possibility of crises… [but not] their actual occurrence.  They do not explain why the phases of the … come into such conflict

For Marx the driver was the tendency to overproduction in capitalism – for production to expand beyond the limits of the market.  As such he developed Ricardo’s concept of disproportion between production and consumption.  This was never worked out mathematically, leaving hints to later authors – notably his schemes of reproduction between consumption and production sectors.  Marx seems to have been developing an idea from Sismondi: the purchasing power available to purchase consumers’ goods is equal to last year’s income, either invested or spent on wages or capitalist consumption, so any increase in investment and capital intensity will result in a surplus of commodities.  Increases in machinery are therefore responsible for market gluts.

This idea, which also underlay Marx’s theory of the falling rate of profit, has been picked up by many authors from Major Douglas to Foster and Catchings, but in its simple form it is both incomplete and based on a fallacy.  It is incomplete because it explains a systematic downturn, but not upswings in the cycle.  Marx verbally explained upswings by increased opportunities for cheaper credit and investment at low interest rates, but never developed a unified theory of money and production.  It is fallacious because if production is continuous and even, or even expands evenly, and no money is hoarded, then the wages and profits from those producing intermediate goods, and expenditure on unproductive labour (luxury goods and services for capitalists), will exactly match any ‘gap’ in surplus value (profits) necessary to purchase final consumption goods – there is no cycle.  It was later authors who developed the theory in a more satisfactory form: because credit and investment are not even – the intuition being that the decision to invest and the decisions of consumers to spend are taken at different times – the possibility of cycles arises.
Again it was Marx who provided the hint, with his emphasis that it was capitalism’s own urge to expand which created these conditions.  As Schumpeter commented, the important point was not that Marx had the right answers but that he asked the right questions.

The next major leap was made by Tugan-Baranovsky, credited by Keynes as the ‘first and most original’ modern writer on business cycles.  His publication of ‘The Industrial Crises in England’ in 1894 caused something of a sensation, being translated into several European languages.  Tugan-Baranovsky took from Marx the centrality of investment as the driver of capitalism, developing Marx’s reproduction schemes (and adding the missing luxury goods and services sector) into a model in which investment is driven by credit, which he envisaged like a steam engine, building pressure until it is released and the cycle repeats – credit availability being the steam.

Panic is the death of credit. However, credit has the ability to return to life, and its life cycle is the modern industrial cycle. The first period of a credit cycle (the post-panic period) immediately follows the end of a panic. At this time, the discount rate becomes low and the supply of loan capital on the money market exceeds demand. (Tugan-Baranovskij, 2002a, pp. 39-40).

Tugan-Baranovsky was stronger in empirical elaboration of the cycle than in mathematical precision.  He did set out a primitive version of the multiplier theory, as additional free capital via banks was opened up by additional investment.  However, though this verbal model could be elaborated into a model of upswings prompting upswings and downswings prompting downswings, it did not explain turning points – how upswings turned to downswings and vice versa.

The breakthrough came from the French Jewish economist Albert Aftalion.  In Les crises périodiques de surproduction (1913) he developed the accelerator principle.  Following Marx and Tugan-Baranovsky, the driver of the cycle is investment in fixed capital and the time lag in bringing products to consumption.  Aftalion developed a function of total spending which included consumption of final consumption goods plus investment in fixed capital and intermediate goods.

In Aftalion’s model the starting point is simple reproduction: an economy which doesn’t grow, where the capital stock is simply replaced (depreciation).  In a growing economy, the rate of investment (assuming no change in capital/output ratios) is therefore proportionate to the rate of change of income.  A corollary of this is the importance of the second derivative of income – if the rate of increase of income slows (its acceleration), investment falls and there will be a downturn in income.  A second corollary is that a rise (or fall) in demand for consumer goods will result in a greater (or lesser) rise in demand for capital goods.  The existence of a time lag between production and consumption alone is able mathematically to generate cycles – even before consideration of the availability of real resources and credit – solving the puzzle of a demand shortfall from Sismondi and Marx.

Aftalion’s accelerator model was independently discovered and popularized by J.M. Clark in 1917.
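The claim that a lag alone can generate cycles can be sketched numerically.  The following is a minimal illustration in the spirit of the later Samuelson multiplier-accelerator formulation, not Aftalion's own equations, and the parameter values are my own assumptions chosen purely for illustration: consumption follows last period's income and investment responds to the change in consumption, so a one-off disturbance sets off oscillations in income.

```python
# Illustrative multiplier-accelerator sketch (hypothetical parameters):
# Y_t = G + c*(1+v)*Y_{t-1} - c*v*Y_{t-2}, the Samuelson (1939) form.
# With c*v < 1 and complex characteristic roots, income oscillates
# around equilibrium and the swings die away.

def simulate_income(c=0.8, v=0.5, G=100.0, periods=60, shock=10.0):
    y_star = G / (1 - c)             # stationary equilibrium income
    Y = [y_star, y_star + shock]     # a one-off disturbance in period 1
    for _ in range(periods - 2):
        Y.append(G + c * (1 + v) * Y[-1] - c * v * Y[-2])
    return Y, y_star

Y, y_star = simulate_income()
deviations = [y - y_star for y in Y]

# Count how often income crosses its equilibrium level (ignoring
# numerically negligible values near zero).
crossings = [d for d in deviations if abs(d) > 1e-12]
sign_changes = sum(1 for a, b in zip(crossings, crossings[1:]) if a * b < 0)
```

With these values the characteristic roots are complex with modulus sqrt(c*v) < 1, so the series oscillates but the oscillation is damped – exactly the property that later troubled Frisch.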

1930s development of the Accelerator Principle

Frisch’s work was inspired by a mathematical critique of Clark’s verbal model, and his famous paper on business cycles was primarily an attempt to demonstrate how mathematical and modular models of the whole economy could be built.  In many ways it marked the founding of macroeconomics.  Frisch presented Clark’s model as an equation and demonstrated, firstly, that it was underdetermined – one equation with two unknowns – and therefore needed an expression for aggregate demand; secondly, that through not expressly including the period of production, if investment was instantaneous it could not generate turning points and cycles.  On the second point he seems to have been influenced by a Norwegian tradition, following Tugan-Baranovsky, on the importance of the period of production of fixed capital.  Clark acknowledged the problem and called upon mathematical economists to answer it.  Frisch and others took up this challenge.

Three other major figures, working in parallel and influencing each other on the very first full macroeconomic models of the cycle, were Tinbergen, Schumpeter and Kalecki.  Tinbergen studied variations in output of the shipbuilding industry.

The basic problem of any theory on endogenous trade cycles may be expressed in the following question: how can an economic system show fluctuations which are not the result of exogenous, oscillating forces, that is to say fluctuations due to some ‘inner’ cause?

When freight rates are high, ships are commissioned; however they take two years to complete, and freight rates are lowered by the volume of ships, the lag creating a business cycle.  This Tinbergen modelled using complex roots which produced a sine function.  The result was an endogenous model of the cycle, one which did not rely on sunspots or suchlike, as some previous models of the cycle had.
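The mechanism can be sketched with a simple delay equation, df/dt = -a·f(t-θ): a stylised stand-in for Tinbergen's estimated model, with parameter values that are my own assumptions.  Without the two-year construction lag the series simply decays to equilibrium; with it, the series swings above and below.

```python
# Euler simulation of the delay equation df/dt = -a * f(t - theta),
# a toy version of the shipbuilding cycle (illustrative parameters).

def simulate_delay(a=0.785, theta=2.0, dt=0.1, t_max=40.0):
    lag = int(round(theta / dt))          # the lag measured in steps
    n = int(round(t_max / dt))
    f = [1.0] * (lag + 1)                 # constant pre-history f(t <= 0) = 1
    for _ in range(n):
        f.append(f[-1] - dt * a * f[-1 - lag])
    return f[lag:]                        # drop the pre-history

with_lag = simulate_delay(theta=2.0)      # two-year delivery lag
no_lag = simulate_delay(theta=0.0)        # instantaneous delivery

def sign_changes(series):
    vals = [x for x in series if abs(x) > 1e-12]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)
```

With a·θ near π/2 the lagged series oscillates with a period of roughly four times the lag (about eight years here), while the unlagged series never crosses equilibrium at all – the lag is doing all the cyclical work.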

Capital investment can be split into two components.  In simple reproduction without growth all you are doing is replacing the existing capital stock – depreciation.  This is not as simple as it seems, as the economic life of the machine – the period of depreciation – varies with the interest rate.  If there is a change in investment, which can be created by innovations, by changes in interest rates changing the economic life of capital goods, or by shifts in aggregate demand, then this changes the wage bill for those producing the delta in the capital stock.  Frisch split this into two: changes in investment in capital goods producing consumer goods, and changes in investment in capital goods producing other capital goods.  This produces second- and higher-order effects – waves of ever-decreasing magnitude in aggregate demand.  This was the ingenious treatment of depreciation in the models of Frisch and Kalecki.  This metronomic view of capitalism lies at the heart of all current schools of economics, though its origins and significance have mostly been forgotten.  The Austrian school presents a bottom-up approach prioritizing the period of production of capital goods as the driver; Keynesians reversed the causality, with investment driven by spending.

Tinbergen’s method was taken up in Kalecki’s investment-driven and Marx-inspired models of the 1930s, estimating the parameters of the sine function from empirical data.  The results were highly sensitive to initial assumptions.  From today’s perspective, this should not have been a surprise: the interaction between the capital stock and the demand for investment involved a feedback loop and hence was a non-linear system, and such systems are highly sensitive to initial conditions.

Slutsky – Order from Randomness

Both Kalecki and Frisch were invited to a seminar in honor of Cassel’s 70th birthday in 1933.  Frisch had been working on problems concerning the business cycle for several years, intending to publish them in a full article in Econometrica.  The paper to the conference was informal and had no end notes.  Frisch approached his business cycle investigations from an econometric perspective.  This led to an understanding of the need for a theory to explain several statistical aspects of cycles: their general regularity, but with variations in period and intensity.  Frisch rejected the Tinbergen approach used by Kalecki of two equations, one for price and one for investment.  He used just one, for investment, making his theory linear; it generated a cycle, but the cycle died out.

Frisch was one of the first to set out a systematic macroeconomic model; he laid out a Tableau économique, after Quesnay.

Land presented a non-linear feedback loop.  To obtain a linear system Frisch eliminated land – as well as assuming all consumer goods were consumed immediately.

This linearization of the system was probably his key mistake, and it is interesting to speculate why he took this approach.  At the conference he remarked:

‘Since the Greeks it has been accepted that one cannot say an empirical quantity is exactly equal to a precise number’ (Frisch, quoted in Goodwin 1989)

Kalecki had simplified Tinbergen’s equations, replacing a non-linear equation very sensitive to initial conditions with a precise fixed amplitude generated by a sine function.  The criticism was valid; however, rather than developing a non-linear system, Frisch developed a linear one.  It is likely he felt more comfortable with a linear system, as its verification was much more amenable to his emerging statistical econometric techniques.  A second factor may have been that this was analogous to the rocking horse model popularized by Wicksell.

“If you hit a rocking horse with a stick, the movement of the horse will be very different from the stick. The hits are the cause of the movement, but the system’s own equilibrium laws condition the form of movement” Wicksell 1918

This was essentially the classical view of capitalism held by Ricardo rather than Marx: of stabilizing, equilibrating forces, with disturbances coming from outside the system.  This may have been easier to digest than the potentially radical concept that cycles and crises came from within the forces of capitalism itself.

In abandoning a non-linear system, he had created a problem: the oscillations died away.  This led him to distinguish between what he termed the impulse – the shocks providing the energy that created and sustained the cycle – and propagation – the mechanism that shaped it – hence the title of his paper.

Propagation Problems and Impulse Problems in Dynamic Economics

To solve this puzzle he turned to an idea from Slutsky, a Russian economist who had turned to pure theory in the wake of the Russian Revolution.  Slutsky looked at the sequence of draws in the Russian lottery, summed runs of subsequent numbers, and plotted the result.  The result was cycles that looked much like a business cycle.  Much as adding the results of two dice produces a bell curve, successive moving sums share most of their terms, so the summed series drifts in smooth waves around a central point.
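Slutsky's experiment is easy to reproduce: take pure white noise, form a moving sum of the last few draws, and the smoothed series develops wave-like swings even though nothing cyclical went in.  The window length and draw distribution below are my own choices, not Slutsky's.

```python
import random

# Reproduce the Slutsky effect: a moving sum of independent random draws
# is strongly serially correlated, and so looks cyclical when plotted.
random.seed(42)                      # fixed seed for reproducibility
draws = [random.uniform(-1, 1) for _ in range(500)]
window = 10
summed = [sum(draws[i:i + window]) for i in range(len(draws) - window)]

def lag1_autocorr(xs):
    """Correlation between the series and itself one step later."""
    n = len(xs) - 1
    mean = sum(xs) / len(xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n))
    var = sum((x - mean) ** 2 for x in xs)
    return cov / var
```

The raw draws are serially uncorrelated; the moving sum, because adjacent sums share nine of their ten terms, is highly correlated step to step – which is precisely what makes it look like a business cycle without being one.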

Frisch then argued that it was this random exogenous factor which provided the impulse – the energy that maintained the swings – in a system which he modelled as a pendulum.  There is some irony in that, as the pendulum is a classic non-linear system.  So, to replace an endogenous non-linear driver of the cycle he introduced an exogenous one.

This was not the only possible mechanism mentioned in the article.  He also mentioned Schumpeterian innovations acting like a release valve, operating like a second pendulum at particular points in the cycle.  Therefore one important surviving feature of Frisch’s approach, despite his errors, is an exemplary way of modelling the economy as a system, exploring with an open mind several possible causative forces.

The Perpetual Motion Machine

The problem with the distinction between impulse and propagation is that it does not exist in nature.  There is no such thing as a directionless driving force: all forces have momentum and direction.  An exogenous force able to generate a cycle must have a different direction at every point in that cycle; a pendulum, to keep swinging, must receive a different direction of force when it swings left from when it swings right.  A purely random force like Brownian motion is unable to generate an oscillation of broadly constant magnitude, as forces in one direction cancel out forces in another.  Once set in motion a pendulum requires an input of energy to keep swinging, otherwise friction will bring it to a halt; if it did not, it would be a perpetual motion machine.  A child on a swing adds energy and shifts their body mass to maintain motion.  That is an endogenous force, relying on their knowledge of the timing of the cycle.  It is like the price of ships in Tinbergen’s model.

Frisch suggested that the stable property of a market economy could be described by a damped oscillator, and that persistent business cycles could be maintained by persistent shocks… physicists solved the problem of the harmonic oscillator under Brownian motion analytically in 1930 and refined it in the 1940s (Uhlenbeck and Ornstein 1930, Chandrasekhar 1943, Wang and Uhlenbeck 1945). The classical works on Brownian motion were well known among mathematicians through the influential book on stochastic processes (Wax 1954). Since 1963, the discoveries of deterministic chaos further indicate that only the nonlinear oscillator is capable of generating persistent cycles (Lorenz 1963, Hao 1990). It was a great mystery why the economic community has for more than six decades ignored these fundamental results in stochastic process and adhered to the mistaken belief of noise-driven cycles. A harmonic oscillator under Brownian motion does not lead to a diffusion process. Its oscillation will have become rapidly damped into residual fluctuations without apparent periodic motion…. The fantasy of the Frisch model is quite similar to a second kind of perpetual motion machine in the history of thermodynamics. Schumpeter considered business cycles like a heartbeat, which is the essence of the organism (Schumpeter 1939). According to nonequilibrium thermodynamics, a biological clock can only emerge in dissipative systems with energy flow, information flow, and matter flow (Prigogine 1980). Therefore, nonlinear mechanism under nonequilibrium condition is the root of business cycles (Chen 2000). – Chen 1999
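Chen's contrast can be illustrated directly: a damped linear oscillator – the rocking horse without the stick – loses its swing, while a non-linear oscillator such as the van der Pol equation settles onto a persistent limit cycle.  The equations and parameters below are standard textbook choices of mine, not taken from Frisch or Chen.

```python
# Compare a damped linear oscillator with a van der Pol oscillator,
# integrated by semi-implicit Euler (illustrative parameters).

def simulate_oscillator(accel, x0=2.0, v0=0.0, dt=0.01, steps=6000):
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        v += dt * accel(x, v)   # update velocity from the acceleration law
        x += dt * v             # then update position
        xs.append(x)
    return xs

# Damped linear oscillator: x'' = -x - 0.2*x'  (the 'rocking horse')
damped = simulate_oscillator(lambda x, v: -x - 0.2 * v)

# Van der Pol oscillator: x'' = (1 - x^2)*x' - x  (non-linear limit cycle)
vdp = simulate_oscillator(lambda x, v: (1 - x * x) * v - x)

late_damped = max(abs(x) for x in damped[-1000:])
late_vdp = max(abs(x) for x in vdp[-1000:])
```

By the end of the run the linear swing has died to almost nothing, while the non-linear oscillator keeps swinging at an amplitude of about two regardless of where it started – persistent cycles need the non-linearity, not the shocks.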

Chen speculates that Frisch had realized his mistake and quietly abandoned this model.

Historically, Frisch quietly abandoned his model as early as 1934. Frisch’s promised paper, “Changing harmonics studied from the point of view of linear operators and erratic shocks,” was advertised three times under the category “papers to appear in early issues” in Econometrica, in Issues No. 2, 3, and 4 of Volume I (April, July, and October 1933). The promised paper was never published in Econometrica, where Frisch himself was the editor of the newly established flagship journal of the Econometric Society. Surprisingly, Frisch never mentioned a word about his prize-winning model in his Nobel speech in 1969 (Frisch 1981). – Chen 2010

Indeed, in 1934, in his paper on the ‘Encapsulating Phenomenon’, he had developed a fuller theory involving the finance sector.

“violent depressions may be caused by the mere fact that the parties involved have a certain typical behaviour in regard to buying and loaning” (Frisch, 1934a, 271; Frisch’s emphasis).

The origin of crises lay in the behaviour of the banking and monetary authorities, with variations in interest and credit driving the accelerator principle.

Although Frisch’s 1933 model depended on depreciation for its propagation force, there was no term for interest; depreciation was a fixed factor.  If however one models the size of the required depreciation fund through the net present value of the capital stock, we get a firmly non-linear model where changes to the supply of and demand for financial capital vary the interest rate.

This was the road not taken, as such models can generate cycles purely endogenously through changes to the impulse (second derivative) of the supply of credit.  The impulse of credit replaces the impulse of a god with a stick.  Frisch’s 1934 and subsequent models are not rocking horse models; they are non-linear models able to generate cycles endogenously.
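To see why pricing the depreciation fund off the interest rate makes the model non-linear, consider the standard capital recovery (annuity) factor: the annual charge needed to amortise a unit of capital over its life.  The formula is standard finance; its application to Frisch's model here is my own gloss.

```python
# The capital recovery factor is a non-linear (convex) function of the
# interest rate, so a depreciation fund priced this way cannot enter a
# model linearly in r.

def capital_recovery_factor(r, n=20):
    """Annual payment per unit of capital to amortise it over n years."""
    if r == 0:
        return 1 / n                      # limiting case: straight-line
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

low, mid, high = (capital_recovery_factor(r) for r in (0.0, 0.05, 0.10))
# Convexity: the charge at 5% lies below the midpoint of the charges at
# 0% and 10% - equal moves in the interest rate have unequal effects.
```

A model in which the required fund responds this way to credit conditions is inherently non-linear in the interest rate, which is exactly what the linearized 1933 model threw away.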



Lucas and the Revival of Frisch’s Method

Frisch’s paper led to little follow-on work; it was too far ahead of its time.  The Keynesian revolution led the economics profession to see the world through partial equilibrium, rather than dynamic general equilibrium, models.  With the Arrow-Debreu general equilibrium model and the rational expectations hypothesis, all prices were set in advance forever.  In the Jorgenson neoclassical model of investment there is no time lag at all between investment decisions and spending, and with rational expectations all cycles are perfectly predicted.  For those developing the Real Business Cycle models upon which modern New Keynesian models are based this presented a conundrum, as with all cycles perfectly predicted there is no unexpected excess supply or excess demand – there can be no cycle.  This led to a revival of the Frisch-Slutsky hypothesis, but with one crucial difference: with an endogenous model of the cycle ruled out, it relied purely on Slutsky – on exogenous, random shocks to the system.  With such models dominant in the years prior to 2007, they were unable to explain the Crisis.  It was an exogenous shock, produced, for example, by purely random behaviours in factors such as how much holiday workers chose to take.  In a perpetual motion machine only an act of god can vary its course.

An Endogenous or Exogenous Cycle?

One should not be too dismissive – as I believe Chen is – of random exogenous factors.  What matters is the weight to be given to exogenous events and how they are mediated through the expectations and behaviour of economic actors.  If the sample of random events is small, and the economic actors interpreting and mediating them are not infinitely large in number, then those actors will vary their behaviour depending on the past pattern of prices.  The economy is not an ergodic system where all events are independent; it is a non-ergodic, path-dependent system where agents’ ability to bear losses and bet on gains depends on their past performance.  As such it is not like pure Brownian motion or a random walk: exogenous random events can generate cycles.

If, then, both endogenous and exogenous factors can generate cycles, and both are of the same non-linear mathematical form, how can one distinguish between them?  Both matter, but in large liquid markets it is likely that the endogenous systemic factors will be longer-term and larger, and the random factors shorter in duration and smaller in amplitude.


Frisch made two errors: excluding land, and excluding the feedback of the supply of capital goods on the price of consumer goods.  These turned the non-linear model (as pursued by Kalecki prior to 1933) into a linear model.  This may have been the greatest mistake in economics, as the resultant model is not able to generate cycles.  By introducing the economics profession to the Slutsky process as a false means of generating cycles, it ignored issues with the timing of consumption and investment, and led the profession to assume that the economy, bar external shocks, was basically stable – without immanent endogenous processes at times of growth which can produce depression and crisis.

Concept for a Cubitt Cycle-Only Bridge at Blackfriars

In cities radically improving conditions for cyclists, building new bridges primarily for cyclists has become the norm.  Moreover they are light and relatively cheap; it was the mass of earth interfering with LT subsurface work which ultimately did for the Garden Bridge.

The new analytical work by TfL has shown just where such a bridge should be.  There are huge flows and potential flows over Blackfriars Bridge – connecting as it does Clapham and the City of London, it is by far the main origin-destination point.  It is the route of the cycle superhighway, and 70% of traffic across Blackfriars Bridge is now cycles.  Additional capacity here is needed because of the introduction of anti-terrorist barriers.

Closing Blackfriars Bridge would be hard because of bus routes, so why not build a cycle- and pedestrian-only bridge here?  It is made much easier because the supports from the Cubitt rail bridge (now demolished) are still in place.  A proposal for a combined green bridge and cycle route at Blackfriars was submitted to the ‘High Line for London’ ideas competition in 2012.

Britain’s Two Biggest Rail Projects will Cross but not Interchange – That’s a Disgrace @Andrew_Adonis


Near the village of Steeple Claydon in the Vale of Aylesbury is the former station of Calvert.  Here England’s two biggest rail projects will cross.  I stress cross: there are no plans for an interchange.  In this section HS2 will follow the route of the former Great Central Railway, England’s last main line, which embodied the visionary Edward Watkin’s plan for a purpose-built high-speed main line linking London and the North.  At Calvert it crosses the soon-to-be-reopened Varsity Line between Oxford and Cambridge.  They will not interchange.  A chord was previously built between them to serve a nearby brickworks and army camp.

Why is this?  Why are no intermediate stations proposed on HS2 when, for comparison, the Japanese Shinkansen system has stations every 30-40 km or so, especially around Tokyo?

The reason is a misconception among British railway engineers over capacity.  I can say this with some assurance, having worked closely with Japanese engineers on high-speed rail in India.  The concept of headway is crucial.  The benefit-cost ratio of HS2 depends on time savings from travel – hence run the trains as fast as possible and as tightly spaced (the headway) as possible.  It now seems the 18 trains per hour ambition of HS2 is too ambitious; the maximum the Japanese run is 12-14 an hour.  Ironically, the faster you run the less the capacity, because the headway is dependent on braking and signalling technology.  The sweet spot seems to be around 300 kph, as per Chinese railways.  Because HS2 is being planned around heavy Spanish and French trains and less advanced signalling technology, its headway is far worse than that of lightweight Shinkansen trains, which being much lighter take less time to brake.

The Japanese also have a different philosophy around benefits and costs.  They realise that commuters are able to pay a premium, and that most commuters will be around intermediate stations, even if the houses for those commuters are not yet built.  If you can add commuters without losing headway you can dramatically increase the farebox and the BCR of the project, as the fixed costs of serving the major cities will already have been paid for.  They do this through interleaving trains and advanced train control.  You can switch a train onto the opposite track to overtake a slower ‘stopping’ service, though unless carefully timed this can reduce capacity.  The better solution is to have acceleration/deceleration/overtaking lanes before and after intermediate stops – which at around 320 kph need to be around 16 km in length.  Rather than every train stopping at Old Oak Common, Birmingham Interchange and Crewe, I recommend that such lanes are created around those stations, enabling faster services between London and Manchester direct.
This would also open up the potential for intermediate stations at Amersham, Princes Risborough, Calvert, Stoneleigh (for Leamington and Warwick), Lichfield, Stafford and Manchester Airport – all potential major growth locations.  The future strategic plans for the London and Birmingham regions and Oxford-MK-Cambridge should be based around this, in the same way Greater Tokyo’s regional plan is based around the Shinkansen.  Indeed there is a reason Tokyo-Osaka is the world’s largest urban corridor, experiencing the economic benefits without the disbenefits that have choked other megacities – you can commute farther and faster by high-speed rail.
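The headway argument can be made concrete with a back-of-envelope calculation.  All numbers below are my own illustrative assumptions (not HS2, JR or Chinese figures), and the model ignores signalling margins, junctions and stopping patterns; it shows only the direction of the effects: higher speed costs capacity, and better braking buys it back.

```python
# Stylised line capacity: treat minimum headway as the time to cover the
# braking distance (d = v^2 / 2a, so d/v = v / 2a) plus a fixed
# operational buffer. All parameters are illustrative assumptions.

def trains_per_hour(speed_kph, decel_ms2, buffer_s=90.0):
    v = speed_kph / 3.6                   # line speed in m/s
    braking_time = v / (2 * decel_ms2)    # time over the braking distance
    headway_s = braking_time + buffer_s   # minimum safe spacing in seconds
    return 3600.0 / headway_s

fast_heavy = trains_per_hour(360, 0.6)    # faster line, weaker braking
slow_heavy = trains_per_hour(300, 0.6)    # slower line, same train
slow_light = trains_per_hour(300, 0.9)    # lighter train, better braking
```

However crude the parameters, the ordering is robust: slowing from 360 to 300 kph raises the trains-per-hour figure, and a lighter train with better braking raises it again – which is the capacity case for the 300 kph sweet spot and lightweight rolling stock.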

Calvert is a particular opportunity for large, Garden City-scale development – the best in England.  The landscape is flat with few villages.  The location is midway between Oxford and Milton Keynes, on one of the three routes of the potential Oxford-Milton Keynes expressway.  Indeed development here could pay for this, and for improved A-road upgrades to Aylesbury and its proposed northern loop road, and east and west between Oxford and Luton, linking the M40 to the M1 and linking several growth areas.  Indeed I don’t think it is really a choice between one expressway-grade road or the other; two routes to dual A-trunk-road standard with grade separation would perform a better network function and serve a wider range of growth areas.

Future decisions on new settlements in the Aylesbury Vale local plan have been put on hold pending a decision on the expressway route.  Currently Winslow (on the Varsity Line) and Haddenham (on the Chiltern Line) are candidates.  Both are potential Garden Village/Town locations and should go ahead – but only Calvert has the opportunity and potential for a Garden City.  Were Aylesbury Vale to meet only its own need and that of the constrained south Buckinghamshire and Chilterns districts it would not be needed; however there is also the need to meet overspill from London and Greater Birmingham.  Oxford and Cambridge have a net inflow of commuters, as well as their own problems, so they are not good locations for this overspill.  The appropriate locations are where people could commute from: on the WCML, whose capacity will increase with HS2; at MK and new Garden City locations north and south of it; at Northampton and Rugby; and at stations along the Marylebone-Snow Hill line and the suggested stations along the HS2 route.

I suggest calling this new Garden City at Calvert Junction/Steeple Claydon ‘Watkin’, after the great engineer whose vision was to link the North and Midlands to Europe via high-speed rail.

No Legislation to Capture Land Value Uplift in Queen’s Speech

Despite being in all major parties’ manifestos, Land Value Capture to increase housing affordability does not feature in the Queen’s Speech.  The only DCLG bill relates to tenants’ fees – no housing act, no planning act.

The reference in the guidance notes seems to relate to policy measures only, excluding land value capture, which requires amendments to primary legislation.  With a two-year Queen's Speech, that means at least a two-year delay.

We will deliver the reforms proposed in the White Paper to increase transparency around the control of land, to "free up more land for new homes in the right places, speed up build-out by encouraging modern methods of construction and diversify who builds homes in the country" (p.70).

I rang the DCLG press office – they knew nothing about whether the proposal had been dropped or not – they are due to get back to me.

Did the Conflicted and Privatised BRE Lull DCLG into a False Sense of Security over the #Grenfell Fire?

In 2015 the BRE were commissioned by the DCLG to carry out research on the flammability of external cladding.  It was only today that it was published, alongside an extension till 2018 (following a gap of nearly two years).  Why is that?  Blame passing, surely not.

BRE Global, through the contract with DCLG, investigate fires that may have implications for Building Regulations. With the exception of one or two unfortunate but rare cases, there is currently no evidence from these investigations to suggest that the current recommendations, to limit vertical fire spread up the exterior of high-rise buildings, are failing in their purpose. However, as the need to improve energy efficiency becomes increasingly urgent, more innovative ways to insulate buildings to improve their sustainability and energy efficiency are changing the external surfaces of buildings with an increase in the volume of potentially combustible materials being applied. A number of significant fires, such as those discussed previously, have demonstrated the potential risks.
It was agreed with DCLG to carry out three experiments, to assess the performance of different external façades including non-fire rated double glazing, when exposed to a fire from below, representative of the external face of some buildings.

The experiments conducted basically involved pasting internal plasterboard to the outside of a building – not aluminium composite material (ACM) with a combustible polyethylene fill.  Nor did they involve a cavity.  The tests showed that unprotected windows could fail and that this could lead to fire spread and a breach of compartmentalisation.  There was no evidence the BRE had looked at international evidence of fire spread from use of such cladding.  So why then did they conclude that 'media attention' about the risks of such fires was a 'common misconception'?

In 2014, according to the BBC, Liberal Democrat MP Stephen Williams – who was then a minister in the department – replied to the all-party parliamentary fire safety group:

“I have neither seen nor heard anything that would suggest that consideration of these specific potential changes is urgent and I am not willing to disrupt the work of this department by asking that these matters are brought forward.”

The group replied to say they “were at a loss to understand, how you had concluded that credible and independent evidence, which had life safety implications, was NOT considered to be urgent”.

Why did Williams consider it was not urgent?  The DCLG no longer had a chief construction advisor, it relied on the BRE for expert evidence and the BRE had been privatised in 1997.

Independent of Government ties, BRE was also now able to certify and approve products that it tested, and so BRE Certification was born in 1999.

Certification is independent confirmation by an expert third party that a product, system or service meets, and continues to meet, appropriate standards.

The Loss Prevention Certification Board (LPCB) has been working with industry and government for more than 100 years to set the standards needed to ensure that fire and security products and services perform effectively. LPCB offers third-party approval confirming that products and services have met and will continue to meet these standards. This benefits both specifiers and manufacturers:

Specifiers selecting LPCB approved products reduce fire safety and security risks and demonstrate due diligence (the use of approved products is encouraged by insurers). They also avoid wasting money on purchasing inappropriate equipment, and save time spent on searching for and assessing products and services.

BRE certifies specialist fire safety equipment but NOT building materials.  That is the responsibility of the independent British Board of Agrément, which certified the panels in question – certified materials have enhanced status under the building regs.  Though separate, the BBA and BRE effectively share the same campus at Bricket Wood, Watford, and BRE has commercial arrangements with the BBA.

The privatisation of BRE raises many potential conflicts of interest.  The case of a BRE fire investigator finding BRE fire-safety-approved systems on a site is even mentioned on their website.

I suspect however the issue is much more cultural – the BRE, being a bastion of the building industry, would be unwilling to rock the boat and suggest that a widely used construction material should be removed and retrofitted years after its installation.

Savills – Planning Permissions for New Homes not Concentrated in Most Unaffordable Areas


Planning permissions granted for new homes are being concentrated in the wrong areas, where there is less need for housing, according to new research by Savills.

It found that there is a shortfall of 90,000 planning consents for homes in the least affordable and most in-demand areas of the country.

Only 20pc of planning consents in 2016 were in the most unaffordable places, where the lowest priced homes are at least 11.4 times income. However, 40pc of the country’s total need for new homes is in these markets, while there is a surplus of consents in the most affordable locations.

Research found that in areas where the house price to earnings ratio is over 11.4, which includes London and much of the South East, there is a shortfall of 73,000 planning consents for homes.

Since the National Planning Policy Framework was launched four years ago, with the aim of simplifying the system, there has been a 56pc increase in the number of consents granted.

But analysis shows that there has not been any increase in the areas where affordability is most stretched and where housing need is the greatest.

The Savills report said: “This means we are not building enough homes in areas where they are most needed to improve affordability and support economic productivity.”
Only 41pc of local authorities have a housing plan which sets out housing need and a five-year plan of how to cater for it.

Savills also modelled the potential impact of the Housing Delivery Test, which was announced in the Housing White Paper last February and would assess need based on market strength in an attempt to build “homes in the right places”. It found that it would double London’s housing need to more than 100,000 homes.

Chris Buckle, Savills research director, said: “There continues to be a massive shortfall in London and its surrounds and it is this misalignment of housing need versus delivery which could ultimately hinder economic growth.”

See my reports here – on planning by resistance – and here – on why both regulation and delivery matter.

DCLG Press Office Claims it is Contrary to BREGS to Fit #Grenfell Panels on High Rises

  • Cladding using a composite aluminium panel with a polyethylene core would be non-compliant with current Building Regulations guidance. This material should not be used as cladding on buildings over 18m in height.


As background –

  • This has been independently backed up by experts including Sir Ken Knight (former commissioner of London Fire Brigade, -Number witheld-) and Martin Conlon (Chairman of the Building Control Alliance), who has already made a similar statement to the BBC.

  • Building Regulations guidance, Part B: Chapter 12 (p93/94) – external wall construction:

  • We can not comment on what type of cladding was used on Grenfell Tower building – this will be subject to investigations.

BS 8414-1: 2002 – Fire performance of external cladding systems.

Test method for non-loadbearing external cladding systems applied to the face of a building.

BS 8414-2: 2005 – Fire performance of external cladding systems.

Test method for non-loadbearing external cladding systems fixed to and supported by a structural steel frame.

But neither was reflected in BREGS Part B.

Two Gaps in Planning – Delivery and Regulatory

RTPI Dr Michael Harris

‘Planning reform’ was never going to resolve the housing crisis, because ‘planning restrictions’ were never its cause.

As the title of the UK Government’s White Paper “Fixing Our Broken Housing Market” (February 2017) suggests, we’ve overlooked the complex range of reasons why we aren’t building enough homes.

…the fact that the ongoing gap between housing supply and demand is equivalent to the house building that used to be done by local authorities before the 1980s.

Looked at this way, the solution to housing crisis may not actually be that complex: we need local authorities to build more houses.

If that is the solution, where?  If it is simply on the inadequate land zoned for housing it won't solve the crisis.  Housing will be more affordable, and with land value capture it may both be more affordable and fund infrastructure, but it won't be enough and may simply displace private housebuilding, making market housing more expensive.

Both sufficient land zoned for housing in the right places and mechanisms for delivery, including increased public sector land holding and building, are needed to solve the housing crisis and fix the broken housing market.  Each is necessary but not sufficient.  Anyone focusing on just one is being defensive – housebuilders over their oligopoly and under-delivery, and the RTPI in suggesting planners (as opposed to planners under political direction) are not part of the problem.

Hence a complete solution to the housing crisis must address both the spatial gap (where) and the delivery gap (who builds and how it is funded).  Only an integrated solution – such as Garden Cities/New Towns – can do so.

Planning By Resistance – How The Powerful Divert Development to Poor Areas

In an outstanding PhD thesis from 2013 Robert Morrow coins the term 'Planning by Resistance'.

this dissertation explains the origins and impact of Los Angeles’s slow-growth, Community planning era between the Watts (1965) and Rodney King (1992) civil unrests. …
The dissertation explains how the slow-growth movement was facilitated by the shift from top-down planning during the progrowth, post-war period to a more bottom-up Community planning…

The project illustrates the dramatic land use changes that occurred during this period – first, the down-zoning of the City by 60% in the initial community plans in the 1970s, and the subsequent shifts in residential densities as homeowners shaped local community plans. These shifts were strongly correlated to socioeconomic characteristics and homeowner activity, such that areas with well-organized homeowner groups with strong social capital were able to dramatically decrease density as a means of controlling population growth, and areas with few to no homeowner groups (strongly correlated with Latinos, non-citizens, and large family sizes) dramatically increased in density. As such, density followed the path of least resistance.

I argue that this process has produced a phenomenon of “planning by resistance” – where those communities with time, money, and resources (including social capital) can resist change while those unable to mobilize bear the burden of future growth.

The changes meant the future growth of Los Angeles was absorbed by low-income, minority communities – communities that were least able to accommodate that growth since they already had overcrowded housing, under-performing schools, lacked park space and other amenities, and in many cases were not served by mass transit.

At heart, the findings illustrate the dark side of social capital and the dangers of equating local planning with more democratic planning. It also illustrates in vivid detail the motivations and impacts of adopting restrictive land use policies. As this case demonstrates, exclusively local planning may empower those with the loudest voices and strongest political connections, at the expense of the silent majority, leading to unexpected outcomes, including a less socially just, economically secure, and environmentally healthy city. This, in turn, has important implications for planning theory, which has long positioned planners as adjudicators of communicative action.

The homeowner revolution in Los Angeles and the devastating impacts it has had on the City’s social, economic, and environmental sustainability, demonstrates the need for the re-assertion of a professional role for planners, a better balance between local and regional concerns, and the critical importance of implementing a planning process that reflects the will of the majority of a City’s residents, rather than empower only its most strident voices.

It is clear from a UK perspective that even in a primarily discretionary planning system the distribution of protected areas and the ability of homeowners to object to development has produced much the same outcomes.  This can produce the perverse outcome that campaigners in poor areas see development in their areas as pushing up house prices, whereas the cause is the lack of development in more well-off areas.

Morrow powerfully attacks the community planning paradigm that has been dominant on the left in planning since the 1960s.  If planners are simply advocates and arbiters, not upholders of the public interest, then they are conspiring in this structural bias against development to meet social needs in the wider public interest, and failing to produce social plans that meet those needs, rather than leaving them to the chaos of market forces weakly resisted on a piecemeal basis.