Barnet Scheme for 1,350 homes takes 2 years to get to committee – Refused Against Officers’ Advice

Barnet and Whetstone Press

Councillors have refused to grant a planning application for a development of 1,350 homes on the 17 hectare North London Business Park site (NLBP) in Brunswick Park ward.

The application, submitted by Comer Homes, included high rise blocks up to 9 storeys and a new free school.

But it was considered by Barnet Council’s planning committee to be an overdevelopment and out of character with the area.

A total of 228 local residents objected to the proposals.

Obviously not an overdevelopment; they must be hoping the Mayor directs approval to avoid costs on appeal.  As the site backs onto the railway and is surrounded by generous open space, almost nobody is visually affected.  A good example of councillors being moved by objections regardless of planning merits.


The Fisher Equation, Monetary Targets and Austerity

Take the Fisher Equation in its full non-linearised form.

1+i=(1+r)(1+π)

where i denotes the nominal interest rate, r the real interest rate, and π the inflation rate.

Is this a good target for monetary policy?

It is not, because it does not measure the full numeraire effect of the money in circulation.  Its use has led to inflation consistently undershooting central bank targets.

A better target would be

1+i=(1+r)(1+π)(1+g)

Where g is the ratio of the deficit to the current broad money stock in circulation.

After two Taylor expansions, dropping the second-order cross terms, this linearises to the approximation

i=r+π+g
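
Read multiplicatively, the relation is 1 + i = (1 + r)(1 + π)(1 + g), and the quality of the first-order approximation can be checked numerically. A minimal sketch (the example rates below are hypothetical):

```python
# Check how close i = r + pi + g is to the full multiplicative relation
# 1 + i = (1 + r)(1 + pi)(1 + g) for small rates.
def exact_nominal_rate(r, pi, g):
    """Nominal rate from the full, non-linearised relation."""
    return (1 + r) * (1 + pi) * (1 + g) - 1

def approx_nominal_rate(r, pi, g):
    """First-order Taylor approximation: cross terms like r*pi are dropped."""
    return r + pi + g

r, pi, g = 0.02, 0.02, 0.01            # hypothetical real rate, inflation, deficit ratio
print(round(exact_nominal_rate(r, pi, g), 6))   # 0.050804
print(round(approx_nominal_rate(r, pi, g), 6))  # 0.05 – the gap is the dropped cross terms
```

For rates of a few per cent the error is under a tenth of a percentage point, which is why the linearised form is the one usually quoted.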

This matters because of the accounting identity that the public sector deficit is equal to the private sector surplus.

So if the deficit is expanding at less than the rate of growth, the deficit is sustainable.  Note it is the first derivative that matters.  When you have a delay between the point of money destruction – the point in time at which taxation takes place – and the point in time at which it is spent, which you always have with annual budgets, you will get a classic accelerator effect: a mere change in the rate of change of the deficit is sufficient to cause a change in effective demand, even though the deficit may still be growing.   The reduction in the rate of growth of the deficit has led to an accelerated reduction in the numeraire effect, causing deflation.
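
The accelerator point can be shown with a toy series (the deficit figures below are hypothetical, purely to illustrate the arithmetic of first and second differences):

```python
# Illustrative sketch: even while the deficit keeps growing every year,
# a slowdown in its *rate* of growth makes the second difference negative,
# which in the accelerator story shows up as a fall in effective demand.
deficits = [100, 120, 135, 145, 150]  # hypothetical annual deficits, still rising
first_diff = [b - a for a, b in zip(deficits, deficits[1:])]       # rate of change
second_diff = [b - a for a, b in zip(first_diff, first_diff[1:])]  # acceleration
print(first_diff)   # [20, 15, 10, 5] – the deficit grows in every year
print(second_diff)  # [-5, -5, -5]    – but it is decelerating throughout
```

The deficit never shrinks, yet the negative second difference is what the accelerator mechanism responds to.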

@MayorofLondon Plans Secret Plan to release 20,000 Green Belt Homes to fund Crossrail 2

In 2013 London First set up a task force to look at funding options for Crossrail 2, which included releasing land for 70,000 homes in Chessington, raising £1.6 billion.  I wrote about that here.

Of course Mayor Khan stated in his election campaign that the Green Belt was ‘inviolate’, so presumably he would be prepared to let Crossrail 2 fail if there was a choice?

Not quite.  The current hold-up is between the Treasury and TfL, according to City A.M.:

Rising tension between the government and Transport for London (TfL) over funding plans for Crossrail 2 is threatening to shunt the £31bn infrastructure project into the sidings.

TfL submitted its business and funding case for the new cross-London rail route to the government in March, but transport secretary Chris Grayling has yet to give the plan the go ahead.

Crossrail 2 planners had wanted a decision on it by the end of May to keep the timeline of the key railway link on track. In the mayor’s transport strategy, unveiled yesterday, Sadiq Khan said the government “must immediately” give the go-ahead for Crossrail 2. But City A.M. understands the Department for Transport (DfT) remains unconvinced by TfL’s current funding proposals…

Michele Dix, TfL’s managing director for Crossrail 2, said: “TfL submitted a revised business case to the secretary of state in March. It includes detailed proposals for a funding package where London funding streams pay for half of the total cost of Crossrail 2 as was agreed with the government. Our case is robust, and we will continue to work closely with the secretary of state as we develop our plans.”

A DfT spokesperson said: “As with all transport scheme proposals a thorough analysis is being carried out by the department to ensure it is a robust scheme. This includes examining whether the National Infrastructure Commission’s detailed recommendations on the scheme have been met. These considerations and further discussions are part of a normal ongoing process.”

Let’s have a look at the NIC report from March 2016.

Page 56

TfL has undertaken extensive work to explore how Crossrail 2 can facilitate the
delivery of housing. The following planning policy changes underpin Crossrail 2’s
housing case:

– Industrial land release: An increased rate of Strategic Industrial Location (SIL) release for housing development.
– Density: An increase in the housing density levels applied by the London Plan (including the intensification of existing housing estates).
– Metropolitan Open Land/Green Belt release: Densification around Crossrail 2 stations; including, where appropriate in specific cases, the limited release of Metropolitan Open Land (MOL) and Green Belt land.

The report goes on.

Again, in relation to Green Belt release, changes are already being considered.

While the Green Belt is protected under national planning policy as well as the London Plan (MOL is protected by the London Plan), a number of local authorities – including some on the Crossrail 2 route – are already reviewing Green Belt designations. The Crossrail 2 Growth Commission notes that the future role of the Green Belt is not an issue confined to Crossrail 2 and will need to be considered further as part of the London Plan and other local and national planning processes.

The scope for development and densification along the line, in southwest London and outside London to the north-east and north-west, is also large and includes areas such as Chessington and Tolworth. Significant opportunities exist outside the Greater London boundary: 75,000 of the 200,000 homes potentially unlocked by Crossrail 2 are outside Greater London, in Surrey and Hertfordshire. New housing will come from both new developments and the intensification of existing housing areas and town centres along the route.

The release of limited parcels of such land around Crossrail 2 and
connecting stations currently contributes at least 10% to Crossrail 2’s housing goal of 200,000 new homes, but a co-ordinated approach across local authorities on the release of land for development is again needed. (page 58).

The reference here is to TfL, Crossrail 2 Business Case, 2015, which is not publicly available.  A question the Treasury is surely asking is why 20,000 and not the 70,000 the Crossrail 2 Growth Commission identified, especially as the potential loss of over £1 billion in funding could mean key stations such as King’s Road may now be dropped (after all, it is the former Chelsea–Hackney line).  Now where, Mayor, are these 20,000, and where are the other 50,000 you dropped?    The likelihood is the Mayor is only considering relatively small releases around Chessington South station.

National Infrastructure Commission Launches Oxford to Cambridge Corridor Competition

Malcolm Reading

At the moment it is just a subscription link run by a specialist firm of architectural competition consultants.

Some notable points however:

As we recommended, it is being commissioned by the NIC itself and stresses:

CAMBRIDGE • MILTON KEYNES • NORTHAMPTON • OXFORD

However according to the accompanying notes

Initiative to launch at the end of June and offer competitors opportunity to influence strategic development and sustainable placemaking within the UK’s leading economic growth corridor.

The National Infrastructure Commission (NIC) is seeking visionary ideas for development typologies across the corridor encompassing Cambridge, Milton Keynes, Oxford and Northampton that can contribute to delivering the homes the area needs, integrate the delivery of infrastructure with high quality places and maintain the environmental and cultural character of the corridor.

This will be an open call for: forward-thinking ideas and proposals. We want visions that encompass a range of development typologies, which balance placemaking with efficient use of infrastructure. These need to frame sustainable development that maintains and/or protects the environment and cultural character within the arc encompassing four of the UK’s fastest-growing and most productive centres.

The competition welcomes: broad multidisciplinary teams – including international ones – of urban designers; architects; planning, policy, and community specialists; landscape designers; development economists;
and others – this can include non-specialists who bring insights and new thinking.

COMING SOON: THE CAMBRIDGE TO OXFORD CONNECTION – IDEAS COMPETITION

The social need here: is to creatively link new road and rail infrastructure with placemaking in the UK’s leading growth corridor. The corridor connecting Cambridge, Milton Keynes and Oxford has the potential to be a distinctive world-renowned network for science, technology and innovation. But its future success is not guaranteed.

A lack of sufficient and suitable housing presents a fundamental risk to the success of the area. Without a joined-up strategy for housing, jobs and infrastructure across the corridor, it will be left behind by its international competitors.
The NIC aims to unlock housing sites, improve land supply, and support well connected and sensitively-designed new communities, whilst bringing productive towns and cities closer together.

The free-to-enter competition: launches in June, will have two-stages, and concludes in the autumn. An honorarium of £10,000 will be paid to four shortlisted teams to develop their concept into a creative vision. Finalists’ entries used in the NIC’s report will be fully credited and these teams may be given continuing roles as the wider project develops.

To receive automatic notification of launch, please return to the competition website:
https://competitions.malcolmreading.co.uk/cambridgeoxfordconnection

Further details on the Cambridge – Milton Keynes – Oxford Growth Corridor project can be found here:

https://www.nic.org.uk/our-work/growth-corridor/

This competition is commissioned by the National Infrastructure Commission and is being run by Malcolm Reading Consultants

What is worrying is that the competition is being launched without a firm government commitment, and there is no clear prize, contract or role for the winning team.  As a competition rather than a competitive tender with a firm RFP and draft contract, it runs the risk of attracting the flashy and impractical rather than a bold, practical but still visionary strategy.

Why there is no chance of any Successful Prosecutions over the #Grenfell Fire

The building regulations set out the regulations themselves as general principles – such as fire not being able to spread – and then the approved documents, which only have the legal status of guidance.  Where there is any vagueness or doubt about the approved documents there is no chance of a successful prosecution: a defending counsel would run rings around it.  If you have any doubt about this, just check out this thread from the editor of Building Design and the responses, where even professionals with many years of experience are unclear about the interpretation and meaning of the regs, and where two tables appear to be in direct contradiction.

Reynobond PE, the panel which features a polyethylene core has been tested according to the standards set out in BS EN 13501-1: Fire classification of construction products and building elements. According to the certificate the panels have a Class 0 rating for the surface spread of flame which is the highest rating.

According to Part B, the building regulation which deals with fire safety, the external envelope of a building should not provide a medium for fire spread if it is likely to be a risk to health and safety. Any insulation product or filler material used in the external wall construction of a building over 18m tall should be of limited combustibility. A table in Part B, diagram 40 shows what classification of material can be used in different parts of a building. It states that materials with a classification of class C-s3 – combustible with a limited contribution to fire – can be used in buildings up to 18m tall. A building over 18m tall must use materials with a classification of B-s3 or better. This suggests both types of Reynobond panel would meet the requirements of Part B for the flammability of external cladding.

There appears to be a contradiction in Part B. Table A7 in the appendix defines materials of limited combustibility which must be used on buildings over 18m tall. This states a material tested to BS EN 13501-1 must have a rating of A2 – s3 or better, in other words non-combustible.

A commentator summarises a presentation on fire risk of external cladding given by Dr Sarah Caldwell of BRE

On page 39, on the UK regulations for buildings above 18 metres, it clears some confusion up for us:

– External Surfaces should conform to diagram 40 – which means the rainscreen cladding must meet class 0 (national class) or Class B-s3, d2 or better (European class)

– All insulation and filler materials should be A2-s3, d2 or better (EN13501-1)

OR

– Test the complete system to BS 8414 and classify in accordance with BR135

And

– All cavity barriers and fire stopping guidance needs to be followed

So Reynobond PE meets Class 0 for BS 476: Part 6 (fire propagation) and meets Class 1 for BS 476: Part 7 (surface spread of flame). It also meets the European Class B-s2, d0 (s being smoke, d droplets, lower number better). So its use was according to current regulations. Class 1 of Part 7 is overridden by Class 0 of Part 6, as explained in a Probyn Miers journal article last December:

http://www.probyn-miers.com/perspective/2016/02/fire-risks-from-external-cladding-panels-perspective-from-the-uk/

There are a lot of questions – such as why the sales literature states one standard and the BBA certificate another, and why past BBA tests were met but more recent BRE tests were not.  But overall, any defence could make a strong argument that there was reasonable doubt over whether Reynobond PE met the building regs.

There are lessons to be learned – in particular simplifying the regulations.  But simply ‘banning’ material x or y is not a satisfactory approach.  A material meeting one fire test standard may be combustible when part of a poorly designed system.  The way ahead is to take a whole-building approach, with desktop studies involving independent fire safety engineers as part of the design process and routine large-scale testing of external panel mock-ups.

The Perpetual Motion Assumption in New Keynesian Models

Note:  There is nothing original in this article as it relies on the work of Ping Chen – what I hope to do here is highlight what I think is the biggest mistake in the history of Macroeconomics for those not of a mathematical bent.

The Problem – Reconciling Equilibrium and Business Cycles

Classical economics had a very clear view of the operation of the economy as a whole – the invisible hand.  Though individual agents would compete and accumulate, in free capital markets investment would flow to the most profitable occupations and unprofitable occupations would die off.  The analogy was of a process of ‘gravitation’.  The problem was reconciling this with business cycles and the irregular periods of crisis that capitalism underwent.  Hence such problems were treated as outside – exogenous to – the system, like bad weather, wars and the occasional disastrous economic decision by monarchs.

The Great Depression firmly shook the belief in stability, and the 1930s were a fecund period for new ideas.  In this febrile climate an informal paper was presented to a conference that sought to explain this instability not as something within – endogenous to – capitalism but as something caused by the accumulation of random outside shocks.  This was hailed as a ‘Copernican’ revolution in economics by Samuelson; Lucas said “That was a hell of an idea. It was just a huge jump from what anyone had done.”  This formed the basic assumption of modern Real Business Cycle models and the NK (New Keynesian) models which arose from them: a basically stabilising capitalist system buffeted by something outside the realm of economics.  The problem was that the originator of the theory – Ragnar Frisch – had made an error.  His promised journal write-up of the idea never appeared, and he did not even refer to it in his acceptance speech for the first Nobel Prize in Economics, even though it was referred to in his citation.  As the primary theoretical underpinning of the neoclassical model of the business cycle, it is probably the greatest mistake in economics.  For in seeing the capitalist economy as basically stable and stabilising it had nothing to say about the Great Recession.

Early Dynamic Theories of The Business Cycle – Marx, Tugan-Baranovsky and Aftalion 

Though in the era of classical economics there were occasional thinkers, such as Lauderdale and Malthus, who advanced the concept that cycles and crises were caused by a disproportionality between investment and consumption, the first thinker who advanced a systematic and dynamic theory of this was Marx. This may come as a surprise to many, but Frisch and Kalecki – who advanced ideas which were to become the foundation of all modern theories of the business cycle – took their cue from a line of thinking advanced by Marx.

Marx had several theories of crisis in several notebooks – none of which were published in his lifetime.  He never had the opportunity to draw together a unified theory of growth, the cycle and long-run crisis. His main cyclical theories, however, are in Theories of Surplus Value (in the section on Ricardo, Ch 17) and Capital Vol 2.

For Marx – building on earlier work by Sismondi – the starting point was a rejection of Say’s Law: that producers trade with others demanding consumption goods, and as such supply creates its own demand.  For Marx this may be true in a barter economy but not a monetary one, where the motivation of holders of money is not to consume but to accumulate.

“no one can sell unless someone else purchases.  But no one is forthwith bound to purchase just because he has sold”.

The potential for money to act as a brake on consumption through non-circulation, and the time gap between investment and consumption, was suggested by a number of authors, notably J.S. Mill.  Marx however criticised such purely monetary theories:

the separation between purchase and sale [in a monetary economy]… explain the possibility of crises… [but not] their actual occurrence.  They do not explain why the phases of the … come into such conflict

For Marx the driver was the tendency to overproduction in capitalism – for production to expand beyond the limits of the market.  As such he developed Ricardo’s concept of the disproportion between production and consumption.  This was never worked out mathematically, leaving hints to later authors – notably his schemes of reproduction between consumption and production sectors.  Marx seems to have been developing an idea from Sismondi: the purchasing power available to purchase consumer goods is equal to last year’s income, either invested or spent on wages or capitalist consumption, so any increase in investment and capital intensity will result in a surplus of commodities. Increases in machinery are therefore responsible for market gluts.  This idea, which also underlay Marx’s theory of the falling rate of profit, has been picked up by many authors from Major Douglas to Foster and Catchings, but in its simple form it is both incomplete and based on a fallacy.  It is incomplete because it explains a systematic downturn, but not upswings in the cycle.  Marx verbally explained upswings by increased opportunities for cheaper credit and investment at low interest rates, but never developed a unified theory of money and production.  It is fallacious because, if production is continuous and even, or even expands evenly, and no money is hoarded, then the wages and profits from those producing intermediate goods, plus expenditure on ‘unproductive’ labour (luxury goods and services for capitalists), will exactly match any ‘gap’ in surplus value (profits) necessary to purchase final consumption goods – there is no cycle.  It was later authors who developed the theory in a more satisfactory form: because credit and investment are not even – the intuition being that the decision to invest and the decisions of consumers to spend are taken at different times, which is the cause of this unevenness – the possibility of cycles arises.
Again it was Marx who provided the hint, with his emphasis that it was capitalism’s own urge to expand which created these conditions. As Schumpeter commented, the important point was not that Marx had the right answers but that he asked the right questions.

The next major leap was made by Tugan-Baranovsky, credited by Keynes as the ‘first and most original’ modern writer on business cycles. His publication of The Industrial Crises in England in 1894 caused something of a sensation, being translated into several European languages. Tugan-Baranovsky took from Marx the centrality of investment as the driver of capitalism, developing Marx’s reproduction schemes (and adding the missing luxury goods and services sector) into a model where investment is driven by credit, which he envisaged like a steam engine, building pressure until it is released and the cycle repeats – credit availability being the steam.

Panic is the death of credit. However, credit has the ability to return to life, and its life cycle is the modern industrial cycle. The first period of a credit cycle (the post-panic period) immediately follows the end of a panic. At this time, the discount rate becomes low and the supply of loan capital on the money market exceeds demand. (Tugan-Baranovskij, 2002a, pp. 39-40).

Tugan-Baranovsky was stronger on the empirical elaboration of the cycle than on mathematical precision.  He did set out a primitive version of the multiplier theory, as additional free capital via banks was opened up by additional investment.  However, though this verbal model could be elaborated into a model of upswings prompting upswings and downswings prompting downswings, it did not explain turning points – how upswings turned to downswings and vice versa.

The breakthrough came from the French Jewish economist Albert Aftalion.  In Les crises périodiques de surproduction (1913) he developed the accelerator principle. Following Marx and Tugan-Baranovsky, the driver of the cycle is investment in fixed capital and the time lag to bring products to consumption.  Aftalion developed a function of total spending which included consumption of final consumption goods plus investment in fixed capital and intermediate goods.

In Aftalion’s model the starting point is simple reproduction: an economy which doesn’t grow, where the capital stock is simply replaced (depreciation).  In a growing economy, the rate of investment (assuming no change in capital/output ratios) is therefore proportionate to the rate of change of income. A corollary of this is the importance of the second derivative of investment: if the rate of increase slows – its acceleration – then there will be a downturn in income.  A second corollary is that a rise (or fall) in demand for consumer goods will result in a greater (or lesser) rise in demand for capital goods. The existence of a time lag between production and consumption alone is able to mathematically generate cycles – even before consideration of the availability of real resources and credit – solving the puzzle of a demand shortfall from Sismondi and Marx.
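
Aftalion’s verbal mechanism was only formalised later; the sketch below uses a multiplier–accelerator difference equation in that later spirit (all parameter values are illustrative, not Aftalion’s) to show that the lag alone produces turning points:

```python
# Multiplier-accelerator sketch: consumption follows last period's income,
# while investment responds to the *change* in consumption. The lag
# structure alone generates oscillation around equilibrium - no shocks.
def simulate(c=0.8, v=1.0, autonomous=10.0, periods=40):
    Y = [100.0, 100.0]                                 # initial income levels
    for t in range(2, periods):
        consumption = c * Y[t - 1]
        investment = v * (c * Y[t - 1] - c * Y[t - 2])  # accelerator term
        Y.append(autonomous + consumption + investment)
    return Y

path = simulate()
equilibrium = 10.0 / (1 - 0.8)   # = 50: income the multiplier alone would give
# Income overshoots downward past 50 and then turns back up: endogenous
# turning points emerge from the production lag itself.
```

With these parameters the cycle is damped; larger accelerator coefficients make it explosive, which is exactly the sensitivity later authors wrestled with.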

Aftalion’s accelerator model was independently discovered and popularized by J.M. Clark in 1917.

1930s development of the Accelerator Principle

Frisch’s work was inspired by a mathematical critique of Clark’s verbal model, and his famous paper on business cycles was primarily an attempt to demonstrate how mathematical and modular models of the whole economy could be built.  In many ways it marked the founding of macroeconomics.  Frisch presented Clark’s model as an equation and demonstrated firstly that it was underdetermined – one equation with two unknowns – and therefore needed an expression for aggregate demand; and secondly that, by not expressly including the period of production, it could not generate turning points and cycles if investment was instantaneous. On the second point he seems to have been influenced by a Norwegian tradition, after Tugan-Baranovsky, on the importance of the period of production and fixed capital.  Clark acknowledged the problem and called upon mathematical economists to answer it.  Frisch and others took up this challenge.

Three other major figures, working in parallel and influencing each other on the very first full macroeconomic models of the cycle, were Tinbergen, Schumpeter and Kalecki.  Tinbergen studied variations in output of the shipbuilding industry.

The basic problem of any theory on endogenous trade cycles may be expressed in the following question: how can an economic system show fluctuations which are not the result of exogenous, oscillating forces, that is to say fluctuations due to some ‘inner’ cause?

When freight rates are high, ships are commissioned; however, they take two years to complete, and freight rates are then lowered by the volume of ships, the lag creating a business cycle.  This Tinbergen modelled using complex roots, which produced a sine function.  The result was an endogenous model of the cycle, which did not rely on sunspots or suchlike, as some previous models of the cycle had.
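
The mathematics here is a second-order linear recurrence with complex characteristic roots; a brief sketch (coefficients chosen for illustration, not Tinbergen’s estimates) shows how such roots yield a sine-wave cycle:

```python
import math

# The recurrence y[t] = 2*r*cos(w)*y[t-1] - r**2*y[t-2] has complex
# characteristic roots r*e^(+/- i*w), so its solution is a sine wave,
# damped when r < 1: a cycle generated entirely inside the system.
r, w = 0.95, 2 * math.pi / 8          # modulus and angle: an 8-period cycle
y = [0.0, 1.0]
for t in range(2, 40):
    y.append(2 * r * math.cos(w) * y[t - 1] - r ** 2 * y[t - 2])

# Closed-form solution for y[0]=0, y[1]=1: a damped sine wave.
closed = [r ** t * math.sin(w * t) / (r * math.sin(w)) for t in range(40)]
# The iterated recurrence and the sine-wave formula coincide term by term.
```

The lag of two periods in Tinbergen’s shipbuilding model plays the role of the second-order term here: it is what makes the roots complex and the motion oscillatory.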

Capital investment can be split into two components.  In simple reproduction without growth all you are doing is replacing the existing capital stock – depreciation. This is not as simple as it seems, as the economic life of the machine – the period of depreciation – varies with the interest rate.  If there is a change in investment – which can be created by innovations, by changes in interest rates changing the economic life of capital goods, or by shifts in aggregate demand – then this changes the wage bill for those producing the delta in the capital stock.  Frisch split this into two: changes in investment in capital goods producing consumer goods, and changes in investment in capital goods producing other capital goods.  This produces second and higher order effects – waves of ever decreasing magnitude in aggregate demand.  This was the ingenious treatment of depreciation in the models of Frisch and Kalecki. This metronomic view of capitalism lies at the heart of all current schools of economics, though its origins and significance have mostly been forgotten.  The Austrian school presents a bottom-up approach prioritising the period of production of capital goods as the driver; Keynesians reversed the causality, with investment driven by spending.

Tinbergen’s method was taken up in Kalecki’s investment-driven and Marx-inspired models of the 1930s, estimating the parameters of the sine function from empirical data. The results were highly sensitive to initial assumptions.  From today’s perspective this should not have been a surprise: the interaction between the capital stock and demand for investment involves a feedback loop, and hence is a non-linear system – and such systems are highly sensitive to initial conditions.

Slutsky – Order from Randomness

Both Kalecki and Frisch were invited to a seminar in honour of Cassel’s 70th birthday in 1933.  Frisch had been working on problems concerning the business cycle for several years, intending to publish the results in a full article in Econometrica; the paper he presented to the conference was informal and had no end notes.  Frisch approached his business cycle investigations from an econometric perspective. This led to an understanding of the need for a theory to explain several statistical aspects of cycles: their general regularity, but with variations in period and intensity.  Frisch rejected the Tinbergen approach used by Kalecki of two equations, one for price and one for investment. He used just one, for investment, making his model linear; it generated a cycle, but the cycle died out.

Frisch was one of the first to set out a systematic macroeconomic model; he laid out a Tableau Économique, after Quesnay.  In it, land presented a non-linear feedback loop.  To obtain a linear system Frisch eliminated land, as well as assuming all consumer goods were consumed immediately.

This linearization of the system was probably his key mistake, and it is interesting to speculate why he took this approach.  At the conference:

‘Since the Greeks it has been accepted that one cannot say an empirical quantity is exactly equal to a precise number’ (Frisch Quoted in Goodwin 1989)

Kalecki had simplified Tinbergen’s equations, replacing a non-linear equation very sensitive to initial conditions with a precise fixed amplitude generated by a sine function. The criticism was valid; however, rather than developing a non-linear system Frisch developed a linear one.  It is likely he felt more comfortable with a linear system, as its verification was much more amenable to his emerging statistical econometric techniques.  A second factor may have been that this was analogous to the rocking horse model popularised by Wicksell.

“If you hit a rocking horse with a stick, the movement of the horse will be very different from the stick. The hits are the cause of the movement, but the system’s own equilibrium laws condition the form of movement” Wicksell 1918

This was essentially the classical view of capitalism held by Ricardo rather than Marx: of stabilising, equilibrating forces with disturbances coming from outside the system.  This may have been easier to digest than the potentially radical concept that cycles and crises came from within the forces of capitalism itself.

In abandoning a non-linear system, he had created a problem.  The oscillations died away. This led him to distinguish between what he termed the impulse – what created the cycle – and propagation – what provided the energy to continue it – hence the title of his paper.

Propagation Problems and Impulse Problems in Dynamic Economics

To solve this puzzle he turned to an idea from Slutsky – a Russian economist who had turned to pure theory in the wake of the Russian Revolution.  Slutsky looked at the sequence of draws in the Russian lottery, added up successive numbers and then plotted them.  The result was cycles that looked much like a business cycle.  This is like adding the results of two dice and producing a bell curve, the law of large numbers meaning that results oscillate around a central point.
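
Slutsky’s experiment is easy to reproduce in miniature; in the sketch below (random digits standing in for the lottery draws) pure noise turns into slow waves once summed:

```python
import random

# Moving sums of independent random digits: the input has no cycle at all,
# but the sums drift in smooth, cycle-like waves because adjacent sums
# share nine of their ten terms.
random.seed(42)                       # fixed seed so the sketch is repeatable
draws = [random.randint(0, 9) for _ in range(200)]
window = 10
moving_sums = [sum(draws[i:i + window]) for i in range(len(draws) - window + 1)]
# Adjacent sums can differ by at most 9 (one digit dropped, one added),
# so the summed series changes slowly - order out of randomness.
```

Plotting `moving_sums` gives exactly the Slutsky effect: apparent waves with irregular period and amplitude, generated from nothing but summed noise.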

Frisch then argued that it was this random exogenous factor which provided the propagation force – the energy that maintained the swings – which he modelled as a pendulum.  There is some irony in that, as the pendulum is a classic non-linear system.  So, to replace an endogenous non-linear driver of the cycle he introduced an exogenous one.

This was not the only possible mechanism for propagation mentioned in the article. He also mentioned Schumpeterian innovations acting like a release valve, operating like a second pendulum at particular points in the cycle.  Therefore one important surviving feature of Frisch’s approach, despite his errors, is an exemplary way of modelling the economy as a system, exploring with an open mind several possible causative forces.

The Perpetual Motion Machine

The problem with the distinction between impulse and propagation is that it does not exist in nature.  There is no such thing as a directionless propagation force: all forces have momentum and direction.  An exogenous force that generates a cycle must point in different directions at every point in that cycle; a pendulum, to keep swinging, needs a force directed one way when it swings left and the other way when it swings right.  A purely random force like Brownian motion is unable to generate an oscillation of broadly constant magnitude, as forces in one direction cancel out forces in the other.  Once set in motion, a pendulum requires an input of energy to keep swinging, otherwise friction will bring it to a halt; if it did not, it would be a perpetual motion machine.  A child on a swing adds energy and shifts their body mass to maintain motion.  That is an endogenous force relying on their knowledge of the timing of the cycle.  It is like the price of ships in Tinbergen’s model.

“Frisch suggested that the stable property of a market economy could be described by a damped oscillator, and that persistent business cycles could be maintained by persistent shocks… physicists solved the problem of the harmonic oscillator under Brownian motion analytically in 1930 and refined it in the 1940s (Uhlenbeck and Ornstein 1930, Chandrasekhar 1943, Wang and Uhlenbeck 1945). The classical works on Brownian motion were well known among mathematicians through the influential book on stochastic processes (Wax 1954). Since 1963, the discoveries of deterministic chaos further indicate that only the nonlinear oscillator is capable of generating persistent cycles (Lorenz 1963, Hao 1990). It was a great mystery why the economic community has for more than six decades ignored these fundamental results in stochastic process and adhered to the mistaken belief of noise-driven cycles… a harmonic oscillator under Brownian motion does not lead to a diffusion process. Its oscillation will have become rapidly damped into residual fluctuations without apparent periodic motion… The fantasy of the Frisch model is quite similar to a second kind of perpetual motion machine in the history of thermodynamics. Schumpeter considered business cycles like a heartbeat, which is the essence of the organism (Schumpeter 1939). According to nonequilibrium thermodynamics, a biological clock can only emerge in dissipative systems with energy flow, information flow, and matter flow (Prigogine 1980). Therefore, nonlinear mechanism under nonequilibrium condition is the root of business cycles (Chen 2000).” – Chen 1999

Chen speculates that Frisch had realized his mistake and quietly abandoned this model.
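Chen’s contrast can be checked directly. The sketch below – using a van der Pol oscillator as a stand-in nonlinear system, with damping and step-size figures that are illustrative choices of mine – integrates Frisch’s damped linear “rocking horse” alongside a nonlinear oscillator. The linear swings die away, while the nonlinear system settles onto a persistent limit cycle with no shocks at all:

```python
import numpy as np

def simulate(accel, x0, v0, dt=0.01, steps=20000):
    """Integrate x'' = accel(x, v) with semi-implicit Euler steps."""
    x, v = x0, v0
    xs = np.empty(steps)
    for i in range(steps):
        v += accel(x, v) * dt
        x += v * dt
        xs[i] = x
    return xs

# Frisch's rocking horse: a damped linear oscillator. Once hit, its
# oscillation decays -- without fresh shocks, friction wins.
damped = simulate(lambda x, v: -x - 0.1 * v, x0=1.0, v0=0.0)

# A van der Pol oscillator: damping is negative for small swings and
# positive for large ones, so it sustains a limit cycle endogenously.
vdp = simulate(lambda x, v: (1 - x**2) * v - x, x0=0.1, v0=0.0)

print(np.abs(damped[-2000:]).max())  # tiny residual: the cycle has died
print(np.abs(vdp[-2000:]).max())     # roughly 2: a self-sustained cycle
```

Feeding random shocks into the damped system produces only irregular jitter, not swings of broadly constant amplitude – which is Chen’s point against the Frisch model.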

“Historically, Frisch quietly abandoned his model as early as 1934. Frisch’s promised paper, ‘Changing harmonics studied from the point of view of linear operators and erratic shocks,’ was advertised three times under the category ‘papers to appear in early issues’ in Econometrica, in Issues No. 2, 3, and 4 of Volume I (April, July, and October 1933). The promised paper was never published in Econometrica, where Frisch himself was the editor of the newly established flagship journal of the Econometric Society. Surprisingly, Frisch never mentioned a word about his prize-winning model in his Nobel speech in 1969 (Frisch 1981).” – Chen 2010

Indeed, in a 1934 paper he had developed a fuller theory involving the finance sector.

“violent depressions may be caused by the mere fact that the parties involved have a certain typical behaviour in regard to buying and loaning” (Frisch, 1934a, 271; Frisch’s emphasis).

The origin of these crises lay in the behaviour of the banking and monetary authorities, with variations in interest and credit driving the accelerator principle.

Although Frisch’s 1933 model depended on depreciation for its propagation mechanism, there was no term for interest: depreciation was a fixed factor.  If, however, one models the size of the required depreciation fund through the net present value of the capital stock, we get a firmly non-linear model in which changes to the supply of and demand for financial capital vary the interest rate.

This was the road not taken, as such models can generate cycles purely endogenously through changes to the impulse (second derivative) of the supply of credit.  The impulse of credit replaces the impulse of God with a stick.  Frisch’s 1934 and subsequent models are not rocking-horse models; they are non-linear models able to generate cycles endogenously.

Lucas and the Revival of Frisch’s Method

Frisch’s paper led to little follow-on work; it was too far ahead of its time.  The Keynesian revolution led the economics profession to see the world through partial equilibrium rather than dynamic general equilibrium models.  With the Arrow-Debreu general equilibrium model and the rational expectations hypothesis, all prices were set in advance forever.  In the Jorgenson neoclassical model of investment there is no time lag at all between investment decisions and spending, and with rational expectations all cycles are perfectly predicted. For those developing the Real Business Cycle models upon which modern New Keynesian models are based this presented a conundrum: with all cycles perfectly predicted there is no unexpected excess supply or excess demand, so there can be no cycle.  This led to a revival of the Frisch-Slutsky hypothesis, but with one crucial difference.  With an endogenous model of the cycle ruled out, it relied purely on Slutsky – on exogenous, random shocks to the system. With such models dominant in the years prior to 2007, they were unable to explain the Crisis.  It had to be treated as an exogenous shock, produced, for example, by purely random variation in factors such as how much holiday workers chose to take.  In a perpetual motion machine only an act of God could vary its course.

An Endogenous or Exogenous Cycle?

One should not be too dismissive – as I believe Chen is – of random exogenous factors.  What matters is the weight to be given to exogenous events and how they are mediated through the expectations and behaviour of economic actors.  If the sample of random events is small, and the economic actors interpreting and mediating them are not infinitely many, then those actors will vary their behaviour depending on the past pattern of prices.  The economy is not an ergodic system in which all events are independent; it is a non-ergodic, path-dependent system in which agents’ ability to bear losses and bet on gains depends on their past performance.  As such it is not like pure Brownian motion or a random walk: exogenous random events can generate cycles.

If both endogenous and exogenous factors can generate cycles, and both are of the same non-linear mathematical form, how can one distinguish between them? Both matter, but in large liquid markets it is likely that the endogenous systemic factors will be longer term and larger, and the random factors shorter term and smaller in effect and amplitude.

Conclusion

Frisch made two errors.  First, by excluding the feedback of the supply of capital goods on the price of consumer goods, he turned a non-linear model (as pursued by Kalecki prior to 1933) into a linear one.  This may have been the greatest mistake in economics, as the resulting model is not able to generate cycles.  Second, by introducing the economics profession to the Slutsky process as a false means of generating cycles, he led it to ignore issues of timing in consumption and investment, and to assume that the economy, bar external shocks, was basically stable, without immanent endogenous forces at times of growth that can produce depression and crisis.

Concept for a Cubitt Cycle-Only Bridge at Blackfriars

In cities radically improving conditions for cyclists, building new bridges primarily for cyclists has become the norm.  Moreover, they are light and relatively cheap; it was the mass of earth interfering with LT sub-surface works which ultimately did for the Garden Bridge.

The new analytical work by TfL has shown just where such a bridge should be.  There are huge flows and potential flows over Blackfriars Bridge, which connects Clapham and the City of London, by far the main origin-destination pair.  It is the route of the Cycle Superhighway, and 70% of traffic across Blackfriars Bridge is now cycles.  Additional capacity here is needed because of the introduction of anti-terrorist barriers.

Closing Blackfriars Bridge would be hard because of bus routes, so why not build a cycle- and pedestrian-only bridge here?  It would be made much easier because the supports of the Cubitt rail bridge (now demolished) are still in place.  A proposal for a combined green bridge and cycle route at Blackfriars was submitted to the ‘High Line for London’ ideas competition in 2012.

Britain’s Two Biggest Rail Projects will Cross but not Interchange – That’s a Disgrace @Andrew_Adonis

Near the village of Steeple Claydon in the Vale of Aylesbury is the former station of Calvert.  Here England’s two biggest rail projects will cross.  I stress cross: there are no plans for an interchange.  In this section HS2 will follow the route of the former Great Central Railway, England’s last main line, built to Edward Watkin’s vision of a purpose-built high-speed main line linking London and the North.  At Calvert it crosses the soon-to-be-reopened Varsity Line between Oxford and Cambridge.  They will not interchange.  A chord was previously built between them to serve a nearby brickworks and army camp.

Why is this?  Why are no intermediate stations proposed on HS2 when, for comparison, the Japanese Shinkansen system has stations every 30-40 km or so, especially around Tokyo?

The reason is a misconception among British railway engineers over capacity.  I can say this with some assurance, having worked closely with Japanese engineers on high-speed rail in India.  The concept of headway is crucial.  The benefit-cost ratio of HS2 depends on time savings from travel – hence run the trains as fast as possible and as tightly spaced (the headway) as possible.  It now seems the 18-trains-per-hour ambition of HS2 is too ambitious; the maximum the Japanese run is 12-14 an hour.  Ironically, the faster you run the less the capacity, because headway depends on braking and signalling technology.  The optimum sweet spot seems to be around 300 kph, as on Chinese railways.  Because HS2 is being planned around heavy Spanish and French trains and less advanced signalling technology, its headway is far longer than that of lightweight Shinkansen trains, which, being much lighter, take less time to brake.  The Japanese also have a different philosophy around benefits and costs.  They realise that commuters are able to pay a premium and that most commuters will be around intermediate stations, even if the houses for those commuters are not yet built.  If you can add commuter capacity without losing headway you can dramatically increase the farebox and the BCR of the project, as the fixed costs of serving the major cities will already have been paid for.  They do this through interleaving trains and advanced train control.  You can switch a train onto the opposite track to overtake a slower ‘stopping’ service, though unless carefully timed this can reduce capacity.  The better solution is to have acceleration/deceleration/overtaking lanes before and after intermediate stops – which at around 320 kph need to be around 16 km in length.  Rather than every train stopping at Old Oak Common, Birmingham Interchange and Crewe, I recommend that such lanes are created around these stations, enabling faster direct services between London and Manchester.
This would also open up the potential for intermediate stations at Amersham, Princes Risborough, Calvert, Stoneleigh (for Leamington and Warwick), Lichfield, Stafford and Manchester Airport – all potential major growth locations.  The future strategic plans for the London and Birmingham regions and the Oxford-MK-Cambridge corridor should be based around this, in the same way Greater Tokyo’s regional plan is based around the Shinkansen.  Indeed, there is a reason the Tokyo-Osaka corridor is the world’s largest conurbation, enjoying the economic benefits without the disbenefits that have choked other megacities: you can commute farther and faster by high-speed rail.
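The headway arithmetic above can be sketched in a few lines. All the figures here – the braking rates, signalling margin and line speeds – are illustrative assumptions of mine, not published rolling-stock or HS2 specifications; the point is the shape of the relationships, not the absolute numbers.

```python
def headway_s(speed_kmh, brake_ms2, margin_s=60.0):
    """Minimum time separation between trains: the time to cover the
    brake-to-stop distance at line speed, plus a fixed signalling margin."""
    v = speed_kmh / 3.6                      # line speed in m/s
    braking_distance = v**2 / (2 * brake_ms2)
    return braking_distance / v + margin_s

def trains_per_hour(speed_kmh, brake_ms2):
    return 3600.0 / headway_s(speed_kmh, brake_ms2)

# A lighter train can brake harder, shortening headway and adding paths.
heavy = trains_per_hour(320, 0.5)
light = trains_per_hour(320, 0.8)

# Running faster *reduces* capacity: braking distance grows with v squared.
faster = trains_per_hour(360, 0.5)
slower = trains_per_hour(300, 0.5)

# Overtaking-lane length at an intermediate stop: the distance to brake
# from line speed to rest, plus the distance to accelerate back again
# (taking acceleration to roughly match braking).
v = 320 / 3.6
lane_km = 2 * (v**2 / (2 * 0.5)) / 1000
print(light > heavy, faster < slower, round(lane_km, 1))
```

With these assumed figures the overtaking lane comes out at around 16 km, matching the figure quoted above, and the harder-braking train gets materially more paths per hour at the same speed.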

Calvert is a particular opportunity for large, Garden City-scale development – the best in England.  The landscape is flat with few villages.  The location is midway between Oxford and Milton Keynes, on one of the three routes of the potential Oxford-Milton Keynes expressway.  Indeed, development here could pay for this, and for improved A-road upgrades to Aylesbury and its proposed northern loop road, and east-west between Oxford and Luton, linking the M40 to the M1 and serving several growth areas.  Indeed, I don’t think it is really a choice between one expressway-grade road or the other; two routes to dual A-trunk-road standard with grade separation would perform a better network function and serve a wider range of growth areas.

Future decisions on new settlements in the Aylesbury Vale local plan have been put on hold pending a decision on the expressway route.  Currently Winslow (on the Varsity Line) and Haddenham (on the Chiltern Line) are candidates.  Both are potential Garden Village/Town locations and should go ahead – but only Calvert has the opportunity and potential for a Garden City.  Were Aylesbury Vale to meet only its own need and that of the constrained South Buckinghamshire and Chilterns districts it would not be needed; however, there is also the need to meet overspill from London and Greater Birmingham.  Oxford and Cambridge have a net inflow of commuters, as well as their own problems, so they are not good locations for this overspill.  The appropriate locations are where people could commute from: on the WCML, whose capacity will increase with HS2; at MK and new Garden City locations north and south of it; at Northampton and Rugby; and at stations along the Marylebone-Snow Hill line and the suggested stations along the HS2 route.

I suggest calling this new Garden City at Calvert Junction/Steeple Claydon ‘Watkin’, after the great engineer whose vision was to link the North and Midlands to Europe via high-speed rail.

No Legislation to Capture Land Value Uplift in Queen’s Speech

Despite featuring in all major parties’ manifestos, land value capture to increase housing affordability does not appear in the Queen’s Speech.  The only DCLG bill relates to tenants’ fees; there is no housing bill and no planning bill.

The reference in the guidance notes seems to relate to policy measures only, excluding land value capture, which requires amendments to primary legislation.  With a two-year Queen’s Speech, that means at least a two-year delay.

We will deliver the reforms proposed in the White Paper to increase transparency around the control of land, to “free up more land for new homes in the right places, speed up build-out by encouraging modern methods of construction and diversify who builds homes in the country” (p.70).

I rang the DCLG press office – they knew nothing about whether the proposal had been dropped or not – they are due to get back to me.