Decision Theory for Planners #110 Why doing it tomorrow is quicker than doing it today

This section is on prioritisation.

This is particularly important as the techniques set out in previous sections, such as avoiding bad multitasking, will only work if there is a clear prioritisation of workflow.

An excellent way of looking at things is called the Urgency/Importance Matrix (from Stephen Covey, A. Roger Merrill and Rebecca R. Merrill, First Things First: To Live, to Love, to Learn, to Leave a Legacy. New York: Simon and Schuster, 1994). My own version of it, including nuances added by various contributors over the years, is below.


Urgent and Important – Commitments (‘firefighting’)

  • emergencies, complaints and crisis issues
  • demands from superiors or customers
  • planned tasks or project work now due
  • meetings and appointments
  • reports and other submissions
  • staff issues or needs
  • problem resolution, fixes

Not Urgent but Important – ‘Plan for Work’

  • planning, preparation, scheduling
  • research, investigation, designing, testing
  • networking and relationship building
  • thinking, creating, modelling, designing
  • systems and process development
  • anticipation and prevention
  • developing change, direction, strategy

Urgent but Not Important – Reject and Explain

  • trivial requests from others
  • ad-hoc interruptions and distractions
  • routine but time-consuming administrative tasks
  • misunderstandings appearing as complaints
  • pointless routines or activities
  • accumulated unresolved trivia
  • boss’s whims or tantrums

Not Urgent and Not Important – Cease and Desist

  • ‘comfort’ activities: computer games, Facebook, excessive cigarette breaks, celebrity gossip
  • daydreaming, doodling, over-long breaks
  • reading nonsense or irrelevant material
  • unnecessarily adjusting equipment etc.
  • embellishment and over-production

Covey and the Merrills see traditional time management thinking as dominated by the ‘clock’ of scheduling. To this needs to be added the ‘compass’, to find the true north of what is really important. Imagine filling in a timesheet with these categories!

The key is to get as much work as possible into the top right, which also requires us to set aside time, and rigidly ringfence that time, to plan to plan – to project plan.

Interruptions are one of the main causes of ‘bad multitasking’. You will note that I have added ‘routine’ administrative tasks to the list. Whether administration is an interruption or not depends on how important that individual’s time is as a resource, i.e. whether or not the time of that individual acts as a resource constraint which could become a bottleneck.

If administrative work costs an organisation £10 an hour there is no point in getting a £20-an-hour person to do it, especially if it means the project taking longer and costing more in staff time.

This issue is at the heart of the often repeated but deeply fallacious ‘doing your own admin’ concept. Nothing is more disruptive of productivity than having to stop what you are doing to spend half an hour checking attendee and meeting-room availability for a meeting, especially if the organisation does not have automatic online tools for this.

What it means is professional staff spending most of their time doing process work and not planning work. If you find this in an organisation, don’t be surprised if the productivity of the planning work has fallen through the floor. Far from being a cost saving, it has actually raised the cost of the project and reduced the productivity of the business process.

But what about the top left?  Do we drop everything?

Not everything that is a priority is an emergency. Only emergencies need to be done today. Do emergencies today and priorities tomorrow.

This concept is at the heart of the ‘Do It Tomorrow’ doctrine: that, paradoxically, you can get work done quicker by doing it tomorrow instead of today.

The idea comes from Mark Forster’s book ‘Do It Tomorrow and Other Secrets of Time Management’.

Forster’s ideas start from a simple observation: time management problems come from having more on your to-do list every day than you can get done. You have an overflow from one day to the next. The simple answer is to bring the two into balance.

Forster says never prioritise tasks, since you should aim to accomplish all of them. Instead, prioritise commitments. The goal is to do everything that you’ve committed to, at which point it doesn’t matter too much what order you do it in (particularly if you use a short commitment horizon of a day).

Forster suggests getting a short pocket book – I find a black-and-red flipbook is ideal. At the end of every day, make a list of all the things you have committed to do, even to yourself, that would ideally be completed tomorrow. That list is your list of priority actions for the day; draw a line under it. It works: if you get those commitments done as soon as possible in the day, you have the rest of it left over for ‘plan for work’ activities.

As the day goes on, cross off the commitments as they are delivered. If new commitments arise, put them under the line. At the end of the day that is your new list for tomorrow. If a commitment is still hanging around after two or three days you need to ask whether it should still be there: is it still a priority, can it be broken down, should part or all of it go to normal ‘plan for work’?

Rather than an ‘open’ to-do list which grows and grows, this is a closed list – a will-do list. As items get crossed off it gives you a real sense of accomplishment. Also, because you aren’t forever being thrown off course by panic measures done ‘now’, you get a lot more done. People like me who have adopted it swear by it; it really does double your productivity and your sense that the day was worthwhile.
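The closed-list mechanics described above can be sketched in a few lines of code (a toy sketch; the class and method names are my own, not Forster’s):

```python
from dataclasses import dataclass

@dataclass
class Commitment:
    name: str
    age_days: int = 0          # day-ends this item has survived

class ClosedList:
    """Forster-style closed list: today's items sit above the line,
    new arrivals go below it, and the list rolls over at day's end."""

    def __init__(self, commitments):
        self.above_line = [Commitment(n) for n in commitments]  # will-do today
        self.below_line = []                                    # arrived today

    def complete(self, name):
        self.above_line = [c for c in self.above_line if c.name != name]

    def new_commitment(self, name):
        self.below_line.append(Commitment(name))  # never jumps the line

    def end_of_day(self):
        for c in self.above_line:
            c.age_days += 1
        # Items that have hung around for 2-3 day-ends should be questioned.
        stale = [c.name for c in self.above_line if c.age_days >= 3]
        self.above_line += self.below_line   # tomorrow's closed list
        self.below_line = []
        return stale
```

An item that survives three day-ends is flagged as stale, prompting exactly the question above: is it still a priority, can it be broken down?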

As Forster admits, it really is a clever application of the theory of constraints to levelling workflow:

What I am suggesting is that … we impose a buffer on all the bits of work which arrive in a random way over the course of a day. That means we can deal with them in an orderly fashion instead of rushing from one thing to another.

Forster recommends we do the task on the list we least want to do first

Our natural way of working is to follow the path of least resistance. If we are given a list of tasks, we will tend to do the easy ones first. The problem with this is that when we get to a certain level of difficulty, there is a tendency to invent more easy tasks to avoid having to do the more difficult tasks. That is one of the reasons people get submerged in a sea of trivia. If we reverse this and do the tasks we least want to first, then our day will get progressively easier and there will be no need to invent any more “busy work”.

Forster also cautions against prioritising a task as important if it doesn’t affect the end result.

For example, if you are building a car, which is more important – the engine or the rear windscreen wiper? Obviously the engine is, but customers are not going to be very pleased if you deliver cars without the rear windscreen wiper if that’s what they ordered. So it really doesn’t matter which is more important – you have to do the lot!

So the level at which you decide what you are going to do and what you are not going to do must be at the level of commitments. It’s no good identifying which tasks are important – that’s too late. You have to keep your commitments well audited.

Decision Theory for Planners #109 Multitasking is a Waste of Time

We all know the situation. You are working on something important. Then a minor office crisis occurs. Your boss asks you to work on it. ‘Shall I drop what I’m doing?’ ‘No, there’s a deadline – please try and do both.’

The problem is that nothing destroys productivity more, or leads to things being late more often, than multitasking.

When goals go unmet (or are delayed beyond promised due dates), the first place I look to place blame is at the feet of multitasking.  Jeff Johnah

Why is this?

Bad Multitasking is the act of dropping a primary function or activity for any length of time, in order to take up another task, simply to show that progress is being made on more than one project.

The trouble is the effect this has on the end completion date of all tasks.

Task A alone should take 9 hours; with multitasking, after 18 hours it is still not complete. Tasks A and B together should take 16 hours, but they are not done after 18. All three tasks should take 23 hours to complete; instead C finishes after 21 hours, B after 22 and A after 20.

When you are scheduling work to hand over to others it creates chaos. Remember the previous talks about bottlenecks and the importance of even throughput: we have created a bottleneck.

Also, when you switch from task to task you introduce overhead into the equation – whether setting up a machine or reading back into a subject.
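The effect can be reproduced with a toy round-robin schedule (illustrative 9-, 7- and 7-hour tasks in 1-hour slices; no switching overhead is modelled, which in reality makes multitasking even worse):

```python
# Toy schedule comparison: three tasks done one at a time versus
# interleaved round-robin in 1-hour slices.

def finish_times(durations, slice_hours=None):
    """Return {task: completion hour}; slice_hours=None means sequential."""
    remaining = dict(durations)
    done, t = {}, 0
    while remaining:
        for name in durations:               # fixed round-robin order
            if name not in remaining:
                continue
            work = remaining[name] if slice_hours is None \
                   else min(slice_hours, remaining[name])
            t += work
            remaining[name] -= work
            if remaining[name] == 0:
                del remaining[name]
                done[name] = t
    return done

tasks = {"A": 9, "B": 7, "C": 7}
finish_times(tasks)                 # {'A': 9, 'B': 16, 'C': 23}
finish_times(tasks, slice_hours=1)  # {'B': 20, 'C': 21, 'A': 23}
# Interleaving finishes nothing early: A slips from hour 9 to hour 23.
```

Nothing is delivered sooner, the total elapsed time is unchanged, and the first deliverable arrives far later – which is exactly the chaos it creates for anyone waiting downstream.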

Note we have only referred to bad multitasking – that is, multitasking that slows you down and is avoidable.

Good multitasking may be an essential part of your job. Or you may be working on something where you really need to switch to something else after a little while in order to recharge your batteries. Bad multitasking is the enemy.

Bad multitasking is created by lack of focus and planning – or, even worse, by trying to use the ‘level resources’ function in Microsoft Project.

The brain finds it hard to multitask. Many researchers believe that the human brain can only perform one task at a time. Psychiatrist Edward Hallowell has described multitasking as a “mythical activity in which people believe they can perform two or more tasks simultaneously” (CrazyBusy: Overstretched, Overbooked, and About to Snap! Strategies for Handling Your Fast-Paced Life. 2007. Ballantine Books).

Professor Earl Miller at MIT has studied multitasking with brain imaging.

He scanned volunteers’ heads while they performed different tasks and found that when there is a group of visual stimulants in front of you, only one or two things tend to activate your brain, indicating we’re really only focusing on one or two items.

In other words, our brains have to skitter to and fro inefficiently between tasks.
But the real problem occurs when we try to concentrate on the two tasks we are dealing with, because this then causes an overload of the brain’s processing capacity.

This is particularly true when we try to perform similar tasks at the same time – such as writing an email and talking on the phone – as they compete to use the same part of the brain. As a result, your brain simply slows down.

Elinor Ochs of UCLA believes habitual multitasking may condition brains to an overexcited state, making it difficult to focus even when people want to.

This may explain research from Stanford University showing that the people who multitask the most are the people who are worst at it.

But to avoid multitasking we have to be ruthless about prioritisation – the next section.

Multitasking causes car crashes – think about it.

Decision Theory for Planners #108 – 95% on time

Some well known and frightening statistics about major projects, from the Standish Group.

Typical challenges and symptoms with projects – mean performance (1998):

  • Late – only 44% of all projects finish on schedule or before; the rest tend to be very late. On average, projects are 222% longer than planned.
  • Over budget – by 189% on average
  • Fall short of planned technical content – 70% of projects
  • Cancelled before finished – 30% of projects

If 2004 is taken as a base, then the failure of around 80% of local planning authorities to have an adopted core strategy must rank as one of the greatest project management failures in history, alongside the passport office computer system and the Mars probe.

I actually think that the situation is even worse than this, because core strategies often have inadequate testing by end users: lots of testing by inspectors, but very little by development management staff prior to publication. With continuing reliance on other documents (such as RSS and old saved policies) there is little guarantee that the new plan is more effective than the old. This lack of testing would be unthinkable in any other sphere with projects of this cash value and scale.

So if projects take twice as long and cost nearly twice as much as planned what can we do about it?

With these rates of failure it is surprising that the field of project management saw no major innovations for almost 40 years after the late 1950s.

However, in the late 1990s Eliyahu M. Goldratt – whom we met in the last section, and who, we sadly learned today, died on Saturday – turned his attention to how the Theory of Constraints could be extended from business processes to project management.

“A smart man learns from his mistakes, but a wise man learns from the mistakes of others.”

His starting point was the programme: a series of multiple interrelated projects. The problem is that programmes share resources, such as people. Traditional critical path approaches do not examine whether these resources are balanced between projects. So if, for example, the person who was going to do your SEA is called away to another project, you can rip up that Gantt chart on the wall.

In the real world processes don’t all move along at the same pace; some move faster than others, and if there is a resource problem at any one stage it becomes a bottleneck and work piles up behind it. This was his first insight enabling the application of the theory of constraints to project management.

His second insight was that the time taken to complete tasks follows a bell curve (remember the previous section on estimation): at the mean, 50% of the time a task will be late and 50% of the time it will be early.

Now make an estimate of the time for a task with a near-100% chance of completion, and halve it. We will be at the point of a 50% chance of task completion. This is the time you assign. You shouldn’t be shocked if you realise that people are then expected to complete tasks on time not 100% of the time but only 50% of the time. Performance is measured by variance from the mean.

The innovation in critical chain is what it does with that spare 50%. It uses it to create a buffer, exactly as in the previous lecture.

In the traditional approach if a task finishes early then it causes a problem, your project plan might not be ready to accept the saving and so there may be no saving in the project completion time overall.

But if resources can be flexibly reassigned then you can achieve savings overall.

But what if a task is late? This is where the concept of buffers comes in. The 50% cut from the task estimates is split (50:50) between:

A)  Task buffers

B) Project Buffers

Now if a task is late it is not critical – you can even reassign resources from tasks that finished early. If all of a task buffer is used up, you then eat into the project buffer. Only if the buffers are being eaten into at more than the expected rate is the project behind schedule. This concept of buffer management allows you to accurately assess where and why a project is running late. It enables you to distinguish progress on the project constraint (i.e. on the critical chain) from progress on non-constraints (i.e. on other paths).
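The buffer arithmetic above can be sketched as follows (a toy sketch with illustrative figures; real critical-chain tools often size buffers by sum-of-squares methods rather than a flat 50%):

```python
# Toy critical-chain buffer sizing, using the simple "halve the safe
# estimate" rule described above.

def critical_chain(safe_estimates):
    """safe_estimates: hours per task at ~100% confidence."""
    aggressive = [h / 2 for h in safe_estimates]  # 50%-confidence estimates
    cut = sum(safe_estimates) - sum(aggressive)   # safety removed from tasks
    task_buffer = cut / 2                         # half protects task paths
    project_buffer = cut / 2                      # half pooled at project end
    return aggressive, task_buffer, project_buffer

agg, task_buf, proj_buf = critical_chain([8, 12, 20])
# 40 hours of padded tasks become 20 hours of scheduled work plus a
# 10-hour project buffer; lateness is then judged by buffer consumption,
# not by whether each individual task hit its date.
```

The safety that used to be hidden inside every individual estimate is pooled where it actually protects the end date.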

Case studies report 95% on-time and on-budget completion when it is applied correctly.

Loss of a Giant – Dr. Eliyahu M. Goldratt 1947-2011

A sad loss, he died on Saturday with his family around him.

He revolutionised how we think about business processes and project management, and taught us that focussing on saving money rather than making money will lead to failure.  He really made a breakthrough in the economic theory of the firm, and hopefully one day the economics profession will realise that.

Decision Theory for Planners #107 Bungs and How to Unbung Them

A chain is only as strong as its weakest link.

A bit of a cliché, but the trouble is we can’t always wave the weakest link goodbye – we have to manage it.

This is where the Theory of Constraints (TOC) comes in (constraints in the sense of problems bunging up a process, not a spatial designation). It was introduced in the 1984 book The Goal by Dr. Eliyahu M. Goldratt. It was based on an earlier idea, the theory of bottlenecks, but Goldratt greatly popularised it in the English-speaking world. It really is one of the very few books I’d say any manager has to read.

The book is, unusually, written as a novel about a manager struggling to save a failing factory. Through a series of dialogues the protagonist works out the solutions himself by challenging conventional wisdom. He works out that the business really has only one goal: not to maximise output but to make money. He then sets out to work out how to measure how the money he was spending on costs translated into sales.

He figured out he needed to measure three things (I’ll use the modern treatment rather than the early version in the book):

  • Throughput – the rate at which the system generates ‘goal units’, e.g. money through sales.
  • Investment – the money tied up in the system: money associated with inventory, machinery, buildings, and other assets and liabilities.
  • Operational expense – the money the system spends in generating ‘goal units’, i.e. translating investment into throughput.

The same approach can be used in public services if the concept of sales is replaced by one of the ‘goal output’ of the service.

At the heart of this is a critique of traditional concepts of cost accounting.  If a unit needs to save money cut costs.  The problem is that across the board cuts can just as easily lose you money – or of course make the service much less value for money.

When cost accounting was developed in the 1890s, labour was the largest fraction of product cost, and hours of work could be highly variable. Cost accountants therefore concentrated on how efficiently managers used labour, since it was their most important variable resource, and many managers are still evaluated on their labour efficiencies; many “downsizing”, “rightsizing” and other labour reduction campaigns are based on them.

Now, however, workers who come to work on Monday morning almost always work 35-40 hours or so; their cost is fixed rather than variable.

This approach is important because it allows a focus on how constrained resources restrict the achievement of an organisation’s goals.

Imagine a body has to process thousands of forms – not an unfamiliar example for planners – before it can progress a plan.

Now imagine there was only one workstation and only one person to enter them – in the language of the Theory of Constraints, both the number of workstations and the number of operators would be bottlenecks – constraints.

Now let’s say it was found that to process the forms within one month you needed an extra 5 workstations and 5 operators, and imagine the operators were hired on a temporary contract. The constraint would then shift to the workstations and no longer be the operators. Let’s now say that 6 workstations were leased but sat in a box because there was no one to set them up.

Along comes a cost accountant keen to downsize. They send the workstations back – they are overhead. They sack the operators. Now, by their books, they have radically improved efficiency. However, the one original person will now take 6 months to enter all the data. During the extra 5 months all of the co-workers’ salaries have to be paid, and the output is 5 months late. The real costs, measured in terms of achieving the goal, have not gone down; they have gone up dramatically. Those staff and workstations were not an operational expense but an investment.
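The arithmetic behind this example can be made explicit (a toy sketch; the rates and salaries below are made-up illustrative figures, not from the original):

```python
# Toy version of the forms example above (assumed figures: 6,000 forms,
# 1,000 forms per operator per month, £1,500 per operator-month, and
# £20,000 a month in co-workers' salaries while the project is stalled).

FORMS, RATE = 6000, 1000     # forms to enter; forms per operator per month
OPERATOR_COST = 1500         # £ per operator-month
WAITING_COST = 20000         # £ per month the wider team waits on the data

def goal_cost(operators):
    months = FORMS / (RATE * operators)   # data entry is the constraint
    return months, operators * months * OPERATOR_COST + months * WAITING_COST

goal_cost(6)   # (1.0, 29000.0):  one month, £29,000
goal_cost(1)   # (6.0, 129000.0): six months, £129,000
# Sacking the operators cuts the visible wage bill but multiplies
# the real, goal-measured cost of the project.
```

The operators’ wages barely change the total; the cost that dominates is everyone else’s time spent waiting on the constraint.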

This is one of the reasons why organisations should operate on the principle that everything is a project, and of zero-based budgeting. Otherwise the overhead costs of staff are not properly accounted for.

The Theory of Constraints is based on the premise that the rate of goal achievement is limited by at least one constraining process. Only by increasing flow through the constraint can overall throughput be increased.

Assuming the goal of the organisation has been articulated, the steps are:

  1. Identify the constraint (the resource or policy that prevents the organization from obtaining more of the goal)
  2. Decide how to exploit the constraint (get the most capacity out of the constrained process)
  3. Subordinate all other processes to above decision (align the whole system or organization to support the decision made above)
  4. Elevate the constraint (make other major changes needed to break the constraint)
  5. If, as a result of these steps, the constraint has moved, return to Step 1.
The concept of subordination means that once a bottleneck has been identified, improving throughput anywhere else in the business process won’t increase overall output one iota. Things will still stack up in front of the bottleneck.

Elevating the constraint means that if you have done all of the above and it is still a constraint, you need to invest to increase capacity – but only to the extent that it remains the constraint, otherwise the investment is wasted.
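The five focusing steps can be sketched as a loop over a toy three-stage pipeline (hypothetical stage names and capacities; ‘exploit’ and ‘elevate’ are collapsed into a single capacity increase for brevity):

```python
# Toy run of the five focusing steps. Capacities are forms/day; the
# whole line can only run at the pace of its slowest stage.

stages = {"validate": 120, "data entry": 40, "decide": 90}
TARGET = 100                                 # desired throughput

def constraint(caps):
    return min(caps, key=caps.get)           # step 1: identify

def throughput(caps):
    return min(caps.values())                # everything subordinates to the constraint

while throughput(stages) < TARGET:
    stages[constraint(stages)] += 10         # steps 2-4: exploit/elevate
    # step 5: go back to step 1 - the constraint may have moved

# "data entry" is elevated from 40 to 100; along the way the binding
# constraint switches to "decide", which is elevated from 90 to 100.
```

Note how the loop naturally demonstrates step 5: once data entry catches up with the decide stage, the constraint moves, and further investment in data entry would be wasted.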

Using these principles you will find that constraints will often switch and switch back, you need to be constantly improving.

A major way to avoid constraints being a problem is to utilise the concepts of ‘pull’ management introduced in the previous lecture.   This is through the idea of ‘Drum Buffer Rope‘.

The rope is the ‘pull’ of resources from upstream.  The drum is the rate of the business process.  The buffer is what protects the drum from being held up by the constraint.

Think of poor dad in this diagram.

To keep going he can’t go faster than the pull of the rope. Unless he slows to a crawl, the rate he goes will be determined by the variability of how quickly the toddler behind him goes, although the impact of this will be significantly reduced if the toddler is on the end of an elastic cord, in which case his speed will be governed by the length of the cord.

The idea of a buffer is to protect a process from a variable constraint.
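A toy simulation makes the point (assumed figures: the drum works off one unit per tick, and ‘push’ releases work 50% faster than the drum can handle):

```python
# Toy drum-buffer-rope comparison. The drum (the constraint) works off
# one unit per tick; "push" releases 1.5 units per tick, while the
# rope limits release to the drum's own rate.

def wip_after(release_rate, ticks=100):
    buffer = 3.0                       # initial buffer in front of the drum
    for _ in range(ticks):
        buffer += release_rate         # upstream release (the rope sets this)
        buffer = max(0.0, buffer - 1)  # the drum works off one unit
    return buffer

wip_after(1.5)   # 53.0 - pushing faster than the drum just piles up WIP
wip_after(1.0)   # 3.0  - roped release keeps the buffer at its set size
# Releasing work faster than the constraint adds inventory, not throughput.
```

In both cases the drum processes exactly 100 units; the only difference is the pile of half-done work sitting in front of it.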

It is surprising perhaps that this concept has not been used more in planning, especially in infrastructure planning, as infrastructure is a classic constraint.

Here’s an example where $85 million was spent on removing a bottleneck, to no effect on the level of service.

One good example of where it has been used successfully concerns the upgrading of the East Coast Main Line. Conventional wisdom held that the main constraint was the notoriously narrow Welwyn Viaduct. This study found it wasn’t, and that upgrades elsewhere on the network were needed.

These ideas are just as powerful when looking at constraints in time – project plans – the next section.

Decision Theory for Planners #106 Push and Pull – Spanners in the Works

If the last section was a bit theoretical, it’s time to get very practical.

For most of the last 10 years planning has been obsessed with process. About getting a ‘product’ – be it a planning application, or a development plan, from step a to step b as smoothly as possible.

Yet despite this focus on the production line of bureaucracy there have been increasing problems of the production line crashing to a halt – of a ‘spanner in the works’.

A development plan might be found unsound. An unexpected issue might arise with an application, such as the finding of a ‘rare’ newt. The shifting sands of national policy and process might create new grounds for judicial review.

There has been a view that the problems come from complexity. That we are living in a more complex world. The issues we have to deal with are increasing. The public is becoming more aware of them, and of bureaucratic processes, and knows just when and how to throw spanners in the works when it wants to stop things.

All of this is true, but it is all controllable if anticipatable.

Every process will smoothly run forwards as long as information smoothly runs backwards.

This is the push and pull concept: if the two are kept in balance then a business process is in balance.

In economics there have been push schools and pull schools.

The push school has focussed on production – the classic example being Ricardo, and his influence on Marx. The emphasis is on costs and on stocks. From this view you can focus on the producer rather than the consumer, because in the ‘long run’ a process of competition will weed out those that don’t meet requirements.

The opposite school is focussed on consumption – on demand. The classic examples are Jevons and the Austrian Wieser. The focus is on consumer demand and on flows. Wieser used the concept of ‘imputation’: you can focus on end demand and not production, because every aspect of production ‘in the long run’ will have to meet the wants of consumers, and the prices of production can be ‘imputed back’ to those wants.

You will notice of course that both are making exactly the same ‘long run’ assumption – that economic balance will be achieved if a process is allowed to run its course. Success will be achieved if there is sufficient failure. The signals of success will be picked up by others, who will adjust. The economy is a process of gathering and diffusing information.

What, then, if you can anticipate what is working, or what will work – if you can increase the flow of information?

A body – whether a firm or public sector body – has to supply its products and services to its customers. Markets are just one of many ways of diffusing information and supplying economic resources to keep products and services moving. There are underlying laws that apply to all production processes in any economic system.

Business processes run forward in time, but information runs backwards.

Imagine a one step process. With one consumer and one producer. The producer creates a process output, the consumer can say yes or no. Every output is matched by a piece of information. Yes or no in its simplest form. If the answer is yes then the producer will produce a second output and so on.

From then on you can add additional steps, although the ‘consumer’ in these intermediate steps might be an internal one within a body. For example a shop, or a one-stop-shop.

After the Second World War, Japan enormously increased its comparative advantage as a nation by revolutionising its production. The rest of the world, especially now China, has copied those ideas, and that advantage is now eroded.

One of the key ideas it introduced, more specifically in Toyota and then more widely, was Kanban – which literally means ‘signboard’. Kanban is a ‘pull’ system of manufacturing.

In the late 1940s, Toyota began studying supermarkets.

Supermarkets were developing store- and shelf-stocking techniques: stocking only what they believed they would sell, while customers would tend to take only what they needed because future supply was assured.

Toyota figured that if in a supermarket customers get what they need, at the time they need it, and in the amount they need it, why not on the factory floor?

As in supermarkets, signboards – originally physical, now computerised – were used to guide ‘customers’ to specific restocking locations.

“Kanban” uses the rate of demand to control the rate of production, rather than guessing it and relying on signals from competition in output – and possible firm failure – to supply that information. You don’t need to fail if you plan to succeed.

Flexibility also reduces costs wasted in overproduction through reducing inventory – the ‘just in time‘ concept.
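A minimal kanban loop can be sketched as follows (a toy sketch; the card limit and function names are illustrative assumptions, not Toyota’s system):

```python
# Toy kanban pull loop: production is triggered only by consumption,
# so inventory can never exceed the number of cards in circulation.

from collections import deque

CARDS = 5                        # kanban cards = maximum inventory
shelf = deque(["unit"] * CARDS)  # start fully stocked
free_cards = 0                   # a card is freed each time a unit is taken

def consume():
    global free_cards
    if shelf:
        shelf.popleft()
        free_cards += 1          # the freed card signals upstream: replenish

def produce():
    global free_cards
    while free_cards:            # produce only against a free card
        shelf.append("unit")
        free_cards -= 1

for _ in range(3):
    consume()                    # demand pulls three units off the shelf
produce()                        # upstream replenishes exactly three
# The shelf is back to 5 units: output is matched to the rate of demand.
```

Information (the freed card) flows backwards while product flows forwards, and the card limit is exactly the ‘just in time’ cap on inventory.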

Demand signals immediately propagate through the supply chain. A risk though is that supply shocks (such as those caused by natural disasters) propagate forward much more quickly because of low inventories.

If information on a potential spanner in the works can be propagated backwards through business processes it can be anticipated and avoided. Planning backwards.

I’ll give several examples.

If you are in an area where protected species are common (such an irony) then you should not be surprised if a process is halted because of a failure to consider the impacts and evidence earlier. Hence if proper screening processes are in place at pre-application and application stage the chances of a future JR can be avoided.

A second example is the soundness and lawfulness tests for a development plan. What will an inspector want to see? You need to think about that from the outset and plan your programme backwards from the anticipated point of the plan being found sound, not just leave the checking to a long pre-submission list. For example, if the inspector will want to see alternatives being tested, test those alternatives early on. If an inspector would dismiss an alternative for not being reasonable, then it can be dismissed early on, provided the reason is explained.

Following Stafford and Lichfield, I recommended to the department that, rather than waiting a year for unsound plans to come to them, they needed to get out to local authorities and help them head off problems. I’m glad that a version of such a ‘pull’ system was introduced. By and large, plans now only come forward when the signals from the inspectorate say they are ready.

There are numerous other ways in which customer information can be propagated backwards and business processes adjusted to match going forward. I know some firms are making a lot of money on such advice. But there is no mystique about it. ‘Business Process Reengineering’ is just jargon for thinking this way.

Think of all planning case files, real or virtual, as inventory. Remember Heseltine’s charge of ‘jobs sitting in filing cabinets’ – time to prove him wrong.

Decision Theory for Planners #105 Bias Bingo – The Cost of Being Wrong

We have a tendency to favour information that confirms our existing point of view – this is called confirmation bias. For example, if you don’t like the EU you are more likely to read the Daily Express.

A fun game is to place all of the logical fallacies and psychological biases used in a speech on a grid and tick them off one by one. The winner gets ‘bias bingo’. This song has a good list.

Rather than searching through all the relevant evidence, people look for the consequences they would expect if their hypothesis were true, rather than what would happen if it were false. Even our memory appears to show this bias. We even tend to cling to some beliefs when we have seen evidence that comprehensively disproves them. We create a bias blind spot in our own minds. The purpose of scientific method is to protect us from this – to protect us from ourselves.

The order in which we receive evidence becomes important for this reason. If a piece of evidence is presented earlier rather than later, it is more likely to be believed than later evidence, even if the reason for the ordering of the evidence is unimportant.

The term “confirmation bias” was coined by English psychologist Peter Wason, for an experiment published in 1960.

The conclusions and inferences he drew from this experiment have more recently been shown to be flawed. The basis of this critique has led to a new theory of how we test and confirm ideas that fundamentally challenges planning theories based on ‘critical rationalism’, and suggests that being rational is more about avoiding the consequences of being wrong than about trying to be right every time.

In Wason’s experiment he challenged his subjects to identify a rule applying to a set of three numbers (a triple).

First, they were told that (2,4,6) fits the rule. Subjects could generate their own triples and the experimenter told them whether or not each triple conformed to the rule.

Wason’s actual rule was simply “any ascending sequence”, but his subjects had a great deal of difficulty in getting at it. They often announced rules that were far more specific, such as “the middle number is the average of the first and last”.

The subjects seemed to test only positive examples—triples that obeyed their hypothesised rule. For example, if they thought the rule was, “Each number is two greater than its predecessor”, they would offer a triple that fit this rule, such as (11,13,15) rather than a triple that violates it.
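The asymmetry is easy to demonstrate in code (a toy rerun of the task; the example triples are my own):

```python
# Toy rerun of the 2-4-6 task: testing only triples that FIT the
# hypothesis can never reveal that the true rule is broader.

ascending = lambda t: t[0] < t[1] < t[2]                    # Wason's true rule
step_two = lambda t: t[1] - t[0] == 2 and t[2] - t[1] == 2  # subject's hypothesis

positive_tests = [(11, 13, 15), (2, 4, 6), (100, 102, 104)]  # fit the hypothesis
negative_test = (1, 2, 10)            # violates the hypothesis, still ascending

# Every positive test satisfies BOTH rules, so none can tell them apart:
informative = [t for t in positive_tests if ascending(t) != step_two(t)]
# informative == []; only the negative test separates the rules, because
# ascending(negative_test) is True while step_two(negative_test) is False.
```

However many positive tests the subject runs, the two rules agree on all of them; only a triple that breaks the hypothesis can expose the difference.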

Wason’s interpretation was challenged in a paper by Joshua Klayman and Young-Won Ha in 1987.

They argued that the Wason experiments had not actually demonstrated a bias towards confirmation. Instead they interpreted the results in terms of a tendency to make tests that are consistent with the working hypothesis, a heuristic which is imperfect but easy to compute.

Wason had assumed that his subjects tested their hypotheses through falsification.  This concept came from the philosopher Popper, and is based on the view that you cannot prove a hypothesis through induction, you can only disprove it with a contrary ‘black swan’ – that knowledge is a process of deduction, not induction.

This philosophy, critical rationalism, has been highly influential amongst some planning thinkers.  Notably Andreas Faludi and his ‘decision-centred’ view of planning: that planning is about complex decisions, and about rules for making complex decisions. ‘A decision is rational if it results from an evaluation of all alternatives in the light of their consequences‘. Clearly a generation of planners educated on this principle framed the way in which the 2004 reforms were implemented, with their exhaustive focus on evidence, justification and alternatives.

I would argue that we cannot hold to this philosophy as a modus operandi in the light of the findings of decision theory. We need a better way.

Klayman and Ha, by contrast, based their ideas on Bayesian probability and information theory: put simply, how we approach problems based on our prior beliefs and the knowledge we hold and gain through making decisions.

Klayman and Ha argued that when people think about realistic problems, they are looking for a specific answer with a small initial probability. In this case, positive tests are usually more informative than negative tests.

However, in Wason’s rule discovery task the answer—three numbers in ascending order—is very broad, so positive tests are unlikely to yield informative answers. Klayman and Ha supported their analysis by citing a revised experiment that avoided implying that the aim was to find a low-probability rule. Subjects had much more success with this version of the experiment.

Look at this diagram.

If the true rule is broad, any number of tightly defined hypotheses might fall within the ‘true area’, so a positive test alone will tell us little about the true hypothesis, and a falsification will only tell us that a hypothesis is false.  We need prior information and a sequence of tests, looking at the problem from different perspectives, to grope towards the truth.  What matters is whether people test hypotheses in an informative way, rather than through simple confirmation or falsification.
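Klayman and Ha’s point about rule breadth can be made concrete with a rough count over all triples of the digits 1 to 9 (the digit range is my assumption, chosen just to keep the count small):

```python
# How informative is a "yes" to a positive test?  It depends on how
# broad the true rule is.  Count how many of all possible triples
# each candidate rule covers.

triples = [(a, b, c) for a in range(1, 10)
                     for b in range(1, 10)
                     for c in range(1, 10)]

def broad_rule(t):
    return t[0] < t[1] < t[2]            # Wason's rule: any ascending

def narrow_rule(t):
    return t[1] == t[0] + 2 and t[2] == t[1] + 2   # "+2 steps" rule

for name, rule in (("broad (ascending)", broad_rule),
                   ("narrow (+2 steps)", narrow_rule)):
    share = sum(map(rule, triples)) / len(triples)
    print(f"{name}: {share:.3f} of all triples fit")
```

The broad rule covers a large share of all triples, so a “yes” to a positive test carries little information; the narrow rule is rare, so a “yes” there is genuinely surprising and informative.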

Psychological explanations of confirmation bias are based on limitations in people’s ability to handle complex tasks, and on the need for shortcuts, called “heuristics”, to get at the truth.  Decision theory tells us that heuristics are not necessarily irrational if they produce results, often better results than an exhaustive and often impossible assessment of every consequence of every decision.  In this view Faludi’s view of rational planning is deeply irrational. We can avoid confirmation bias with better heuristics.

People do not just test hypotheses in a disinterested way; they assess the costs of different errors.  This idea comes from evolutionary psychology.  James Friedrich suggests that people do not primarily aim at truth in testing hypotheses, but try to avoid the most costly errors. A good example is employers asking one-sided questions in job interviews because they are focused on weeding out unsuitable candidates.

Yaacov Trope and Akiva Liberman have refined this idea, assuming that people compare the two different kinds of error, accepting a false hypothesis or rejecting a true hypothesis, and the consequences of each.

This concept has great parallels with ideas advanced in the 1950s by the economist G.L.S. Shackle, who put forward a non-probabilistic conceptualisation of decision under uncertainty. He criticised conventional views of rational choice because they did not adequately deal with the issue of ‘surprise’ by events caused by our ‘unknowledge’.

In Shackle’s view, individual choices made in real-world (non-experimental) conditions are non-replicable. You cannot undo the decision.

Shackle put forward the concept of potential surprise: the degree of surprise we would feel at the consequences of an action. The potential surprise values of the various outcomes do not add up to one; they are non-additive.

Let’s take an example. Suppose an entrepreneur is asked to make an exhaustive list of the specified distinct events which could affect the value of alternative investments, as required by the application of probability theory and critical rationalism. The entrepreneur, Shackle contended,

“will in the end run out of time for its compiling, will realize that there is no end to such a task, and will be driven to finish off his list with a residual hypothesis, an acknowledgement that any one of the things he has listed can happen, and also any number of other things unthought of and incapable of being envisaged before the deadline of decision have come: a Pandora’s box of possibilities beyond reach of formulation.”

Familiar ideas from the opening lecture. Shackle viewed us as evaluating a gain-loss pair when we take decisions. We focus on the degree of surprise with which the loss and the gain would occur. If we would be less surprised by a gain of 1,000 pounds on an investment than by a loss of 1,000 pounds, we are more likely to make that investment. If the extent of the gain is not known, nor the probability of it occurring, we are likely to try to minimise risk, to minimise losses. With ‘unknowledge’ of future events we cannot rationally make decisions on the basis of expected gains, since the evidence we need hasn’t even occurred yet, but instead on the basis of minimising the maximum expected loss. Of reducing the cost of being wrong. This is the minimax rule.
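A minimal sketch of the minimax rule in Python, with an invented payoff table (the figures and scenario names are mine, not Shackle’s):

```python
# Minimax: with no reliable probabilities over states of the world,
# choose the action whose worst-case outcome is least bad.

# payoffs[action][state] = gain in pounds (negative = loss); invented figures
payoffs = {
    "risky investment":    {"boom": 1000, "slump": -1000},
    "cautious investment": {"boom": 300,  "slump": -100},
    "do nothing":          {"boom": 0,    "slump": 0},
}

def minimax_choice(payoffs):
    # For each action, find its worst outcome across states...
    worst = {action: min(outcomes.values())
             for action, outcomes in payoffs.items()}
    # ...then pick the action whose worst outcome is highest.
    return max(worst, key=worst.get)

print(minimax_choice(payoffs))  # prints: do nothing
```

Note how conservative the rule is: because the risky option has the worst worst-case, minimax rejects it regardless of how likely the slump is, which is exactly the point when probabilities cannot be pinned down.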

In an uncertain world, then, where we cannot pin down probabilities, we can at least reduce the consequences of getting it badly wrong, to which our human tendencies of optimism and confirmation bias might otherwise lead us.

Decision Theory for Planners #104 Playing Planning Poker

Charlie Brown loved sport.

Despite all experience he ran up to the ball believing that this time, for once, just this time, Lucy would not pull the ball away.

Oh dear. His behaviour is a classic example of optimism bias, one of several types of bias that slant our decision making.

Optimism bias is the demonstrated systematic tendency for people to be overly optimistic about the outcome of planned actions. This includes over-estimating the likelihood of positive events and under-estimating the likelihood of negative events.

There is a specific type of this bias which affects planning. It is known as the Planning Fallacy, used in the sense of a project-planning fallacy: a tendency for people to underestimate how much time is needed to complete a task, even when they have past experience of similar tasks over-running.

The term was first proposed in a 1979 paper by Daniel Kahneman and Amos Tversky.

In 2003, Lovallo and Kahneman proposed an expanded definition where the planning fallacy results in not only time overruns, but also cost overruns and benefit shortfalls (Lovallo, Dan; Daniel Kahneman (July 2003). “Delusions of Success: How Optimism Undermines Executives’ Decisions”. Harvard Business Review: 56–63).

“Hofstadter’s Law: it always takes longer than you expect, even when you take into account Hofstadter’s Law.” — Douglas Hofstadter, cognitive scientist and Pulitzer Prize–winning author of Gödel, Escher, Bach: An Eternal Golden Braid

The term ‘fallacy’ implies that it is a logical fallacy, which it is not. I prefer to think of it as an optimistic planning bias.

There is even a concept called the rule of pi: multiply the time you think something will take by pi.  (This was invented, half as a joke, by the NASA scientist von Tiesenhausen, who invented the lunar rover – perhaps from having to run around in circles.)

So what should a project manager do then: add a buffer of ×2 or ×3.14159… to each task?  The problem is that this is the very worst thing you can do.

Eliyahu M. Goldratt is one of the key thinkers on how to make things run on time and on budget.  His solution is ingenious and I’ll look at it fully in another lecture.  For now I’ll only deal with the issue of estimation.

Goldratt noticed that whilst project managers tended to be over-optimistic, those specifically responsible for tasks tended to be pessimistic. Was there a connection between the two?  The connection, of course, is blame avoidance.

As Steve Jobs observed, you can’t make excuses the closer you get to vice-president; that privilege is only for lowly workers.

If someone thinks a task might take a day, they might estimate for it to take longer in the project plan.  But this creates problems.  Goldratt noticed that it led to people starting to fully apply themselves to a task only at the last possible moment before a deadline; wryly, he called this the Student Syndrome. And of course there is Parkinson’s Law, that work expands to fill the time available, as people seek to create the impression they are invaluable. Economists would also say this is because the undervaluing of the cost of time leads to demand meeting supply.

Effectively people are adding a buffer before each task. What then if someone starts a task early? If they do, the buffer can be wasted, especially if the project planner doesn’t know this and cannot assign a new task.

There is also an equivalent of the Student Syndrome with costs, Parkinson’s Law of Finance: ‘work expands to fulfil the available budget’.

It is too simplistic, then, to say that things have gone wrong because we have underestimated, if the ‘solution’ of overestimating can itself lead to things going badly wrong.

One of the problems is that the longer an estimate is, the more uncertainty it contains.  You are asking someone to give a single-point estimate on a probability distribution with a fixed minimum (zero) and an unknowably long tail.
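This skew can be made concrete with a quick simulation. The lognormal shape and its parameters below are assumptions for illustration, not an empirical claim about real task data:

```python
# Task durations bounded below by zero with a long right tail: the
# intuitive "most likely" answer systematically understates the average.
import math
import random

random.seed(42)
# Simulated durations: most tasks finish quickly, a few drag on.
samples = [random.lognormvariate(1.0, 0.8) for _ in range(100_000)]

mode = math.exp(1.0 - 0.8 ** 2)        # most likely single duration
mean = sum(samples) / len(samples)     # long-run average duration
print(f"most likely ~ {mode:.2f} days, average ~ {mean:.2f} days")
```

If you estimate the intuitive “how long will it usually take?” value, you will systematically come in under the long-run average, which is half the planning fallacy in one line of arithmetic.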

A brilliant solution to this is to play what is called ‘Planning Poker’. This involves a pack of cards using a modified Fibonacci sequence – 0, ½, 1, 2, 3, 5, 8, 13, 20, 40, 100 – which represent days for a task.  Players in a team secretly make an estimate of task length, then reveal their cards, and defend them even if the task is not theirs.

This avoids anchoring: someone saying “I think this is an easy job, I can’t see it taking longer than a couple of weeks” and no one wanting to disagree. The sequence of numbers, through clever maths, deals well with the uncertainty issue. Estimation is also a learnable skill, and as with anything else, social learning is quicker than individual learning.
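One round of the game might be sketched like this. The names, estimates and the snap-to-nearest-card rule are my assumptions; real decks and house rules vary:

```python
# One round of planning poker: secret estimates snap to the nearest card,
# everyone reveals at once, and a wide spread triggers more discussion.

CARDS = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]  # modified Fibonacci deck

def nearest_card(estimate):
    return min(CARDS, key=lambda c: abs(c - estimate))

def reveal(estimates):
    """Simultaneous reveal; consensus when everyone shows the same card."""
    cards = sorted(nearest_card(e) for e in estimates.values())
    return cards, cards[0] == cards[-1]

round1 = {"Alice": 3, "Bob": 13, "Carol": 5}
print(reveal(round1))   # wide spread: Alice and Bob must defend their cards

round2 = {"Alice": 5, "Bob": 5, "Carol": 5}  # after discussion
print(reveal(round2))   # consensus reached
```

The coarse, widening gaps between the cards stop people arguing over false precision: there is no card for “11 days”, so a long task is forced to be recognised as uncertain.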

Moløkken-Østvold, K. & Haugen, N.C. (13 April 2007). “Combining Estimates with Planning Poker – An Empirical Study”. IEEE. They found that estimates obtained with this method were less optimistic and more accurate than estimates obtained through individual self-assessments of the same tasks.

Better estimates do not, however, mean that these should be translated into targeted task lengths in a project plan, for the reasons we have discussed.

Estimates are not commitments; they are not plans to meet a target.  Decoupling the two, and adopting new techniques to ensure commitments are met, is often the key to solving this dilemma.  You can only make an accurate commitment when you know you can deliver.

One important technique, now widely used in infrastructure planning to avoid optimism bias in estimating the economic benefits of projects, is called Reference Class Forecasting, developed by Daniel Kahneman and Amos Tversky; it helped Kahneman win the Nobel Prize.

The idea is that a reference class of past, similar projects is assembled and studied. Taking this outside view helps avoid the optimism bias of those designing a scheme, who are understandably biased to stress its benefits.
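A hedged sketch of the mechanics, with invented overrun figures: place the inside-view estimate against the empirical distribution of outcomes in the reference class, and read off the uplift needed for an acceptable risk of overrun.

```python
# Reference class forecasting: uplift the inside-view estimate using the
# empirical distribution of outcomes from similar past projects.

# Actual overruns (final cost / estimated cost) for a reference class of
# similar past projects; these figures are invented for illustration.
reference_class = [1.05, 1.10, 1.20, 1.25, 1.40,
                   1.45, 1.60, 1.80, 2.10, 2.50]

def uplift(acceptable_risk):
    """Overrun factor such that the chance of exceeding the uplifted
    budget is roughly the given risk, using the empirical quantile."""
    ranked = sorted(reference_class)
    index = int((1 - acceptable_risk) * (len(ranked) - 1))
    return ranked[index]

inside_view = 100  # the scheme promoter's own estimate, in millions, say
for risk in (0.5, 0.2):
    print(f"{risk:.0%} risk of overrun -> budget {inside_view * uplift(risk):.0f}m")
```

The point is that the uplift comes from what actually happened to comparable projects, not from the promoter’s own (optimistic) assessment of this one.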

In America it is now officially endorsed by the APA.

“APA encourages planners to use reference class forecasting in addition to traditional methods as a way to improve accuracy. The reference class forecasting method is beneficial for non-routine projects … Planners should never rely solely on civil engineering technology as a way to generate project forecasts” (American Planning Association, 2005).

It has been used in the UK to estimate the true benefits of the proposed Edinburgh Tram.

It could be used far more: for example, to estimate the true infrastructure costs and true time to completion for different housebuilders in different areas when looking at the deliverability and phasing of major housing sites.

Decision Theory for Planners #103 How to get an Elephant through a Keyhole

I compare getting a development plan adopted to trying to fit an elephant through a keyhole.

Exactly how do you get an elephant through a keyhole?

The answer is easy when you think laterally: get a bigger keyhole. It’s easier than shrinking the elephant.

What is preventing the elephant from fitting through the keyhole?

‘The elephant is causing a blockage’

Why is there a blockage?

‘The elephant is large’.

Now, you can do nothing about the size of an elephant, but you can change the hole.

I use this example for two reasons. Firstly, it’s an example of a process requiring a project plan to complete, and what tends to cause delay in project plans is blockages. I’ll return to this concept in the next lecture though.

The second reason is that it is an example of spatial-temporal thinking – let’s just say spatial thinking for short.

Spatial thinking is the key skill for a planner, and not all planners have it.

I’ll give you an example. I was advising one authority where the boss had a very poor opinion of their housing policy bod. I was talking to him about the problems of one market town, and it became apparent that the key issue was that everything you wanted in the town centre would not fit, and hence lots of previous projects had failed for lack of viability. Talking it through with him, it became obvious that something had to give: the shibboleth of a poor-quality open space dominating the centre (in area, though not in character) had to be moved or partially developed.

He went to an event organised by the local town council (not the District) where people were split into teams, some led by architects. My colleague’s team’s ideas were voted the best by the town council and won a prize.

A few days later I pinned up on the wall the front page of the local newspaper, showing my friend brandishing a cup. The look on the manager’s face was the look of death.

Despite not having good verbal skills, their spatial thinking skills were outstanding.

Spatial thinking is a creative process, but it is not about creating something from nothing. It is often about unlocking a solution which is immanent in a place, which any one of a number of people can unlock, including a local community if given half the support and chance.

People think in different ways. Some have skills in verbal thinking, some in logical, some in spatial. Each of these ways of thinking requires different parts of the brain. We need all kinds of people, all kinds of brains.

About a third of people think spatially, about the same proportion of the population that can read a map – surely no coincidence. A much smaller proportion use ‘picture thinking’, laying out ideas in the form of pictures. Famous and well-documented examples of picture thinkers were Albert Einstein and Winston Churchill.

Spatial thinking is not quite the same as spatial-temporal awareness; it can also be used abstractly. Logical thinkers may use it to think symbolically in terms of logical connections, verbal thinkers to think in terms of metre and rhyme. Creativity often turns out to come from linking different parts of the brain to spatial thinking.

We have a term for overdeveloping spatial, logical or verbal reasoning at the expense of the others: autism. It is no coincidence that those with strong spatial thinking skills are often seen as lying somewhere on the autistic spectrum. One group in the Netherlands is trying to change attitudes to, and understanding of, spatial thinkers.

Think about the ideas of Dr Temple Grandin ‘The Woman who thinks like a Cow’ – who has transformed thinking on animal welfare by realising that the way we experience our environment as human beings is not universal.


Consider too the experience of the thinker I wrote about yesterday, Giancarlo De Carlo. He first lit up the spatial parts of his brain when, as a very small child, he saw a dog with unusually long legs on his stairs. He realised then that proportions did not have to be as they are; he mapped himself onto the walls and imagined himself taller and shorter, and the walls and floors mapped back again. He understood his place in the universe and how to shape it.

Another example: my friend Mike Hayes had a problem when Chief Executive at Watford. Both Watford Football Club and the local hospital wanted to expand. If you know Vicarage Road, you know what a keyhole it is. His solution was simple: lock them in a room together (metaphorically) and have them sort each other’s problems out.

So there you have it: the way to get through that keyhole is to think like a mouse, a very big frustrated mouse with a hacksaw.

Decision Theory for Planners #102 Cleaning up a social mess

What a social mess.

The term comes from Russell Ackoff: “Every problem interacts with other problems and is therefore part of a set of interrelated problems, a system of problems…. I choose to call such a system a mess.” (Redesigning the Future, 1974)

They have been called a variety of things, including “wicked problems” (by Horst Rittel and Melvin Webber) and “ill-structured problems” (by Ian Mitroff).

A tame problem (Conklin, J., 2001, p.11):

  • has a relatively well-defined and stable problem statement.
  • has a definite stopping point, i.e. we know when the solution or a solution is reached.
  • has a solution which can be objectively evaluated as being right or wrong.
  • belongs to a class of similar problems which can be solved in a similar manner.
  • has solutions which can be tried and abandoned.

Wicked problems/Social Messes have none of these things.

Different writers have emphasised different aspects, but all describe essentially the same problem: it’s not just a problem, it’s a mess.

Problems, tame problems, well structured problems, have solutions. Messes do not have straightforward solutions.

You will know you have a social mess if, the first time you mention a controversial issue, you immediately face two or more points of view: from different value systems, different perceptions, and even different views on the nature, causes and boundaries of the problem and the range of solutions on offer to the policy maker.

Think of issues such as nuclear power, wind farms, genetically modified crops, housebuilding etc. etc.

Complexity is part of the problem. Robert Horn says that a Social Mess is ‘a set of interrelated problems and other messes’. This complexity makes social messes so resistant to clear structuring and definition, and hence to clear analysis and resolution.

In their classic paper of planning theory, Rittel and Webber understood that this also undermines traditional rationalist approaches to planning:

“The classical …approach … is based on the assumption that a planning project can be organized into distinct phases: ‘understand the problems’, ‘gather information,’ ‘synthesize information and wait for the creative leap,’ ‘work out solutions’ and the like. For wicked problems, however, this type of scheme does not work. One cannot understand the problem without knowing about its context; one cannot meaningfully search for information without the orientation of a solution concept; one cannot first understand, then solve.”

In a much quoted paper (though almost unknown to town planners) Richard Buchanan used the concept as a basis for a theory of design – in essence that design was a creative act because it was the art of solving wicked problems through spatial forms.

Why are design problems indeterminate and, therefore, wicked? Neither Rittel nor any of those studying wicked problems has attempted to answer this question, so the wicked-problems approach has remained only a description of the social reality of designing rather than the beginnings of a well-grounded theory of design…
Design problems are “indeterminate” and “wicked” because design has no special subject matter of its own apart from what a designer conceives it to be. The subject matter of design is potentially universal in scope, because design thinking may be applied to any area of human experience. But in the process of application, the designer must discover or invent a particular subject out of the problems and issues of specific circumstances. This sharply contrasts with the disciplines of science, which are concerned with understanding the principles, laws, rules, or structures that are necessarily embodied in existing subject matters.

So for Buchanan the mess is a liberation – because it creates the field and space for creative expression. Sorting out social messes is the very act of being a creative person.

Jonathan Rosenhead, of the London School of Economics, has presented the following criteria for dealing with complex social planning problems (Rosenhead, J. (1996). “What’s the problem? An introduction to problem structuring methods”. Interfaces 26(6): 117–131):

  • Accommodate multiple alternative perspectives rather than prescribe single solutions
  • Function through group interaction and iteration rather than back office calculations
  • Generate ownership of the problem formulation through stakeholder participation and transparency
  • Facilitate a graphical (visual) representation of the problem space for the systematic, group exploration of a solution space
  • Focus on relationships between discrete alternatives rather than continuous variables
  • Concentrate on possibility rather than probability

In other words a process of group social learning.

There is a very large literature on techniques for group social learning, and also on the complementary tools for multi-criteria decision making. I’ll look at these in future chapters.

Ultimately though improvements will only arise through creative planning.  I use the term improvements rather than solutions, to planning problems, because planning problems, being messy, never go away and often generate new problems.

This requires a modesty and an immersion when dealing with our clients – certainly not an Eraserhead approach.

We can learn from the great Italian architect and planner Giancarlo De Carlo, who approached these problems from an anarchist perspective.

De Carlo did not believe in a rigid separation of disciplines, in a solution imposed on an historic backdrop, or in a designer separating themselves from the needs and understanding of the client. He began with a deep “reading of the territory”. He has described this as an iterative process, involving tentative design and feedback.

“All barriers between builders and users must be abolished, so that building and using become part of the same planning process… Wise plans fail because the collectivity had no reason to defend them, since it did not participate in their formulation.” (Architecture’s Public, 1969)

How to break down such barriers?

In a university really worthy of the name, every citizen should be free to enter and listen to a lecture. You could say, “well, what stops anyone from attending a lecture now?”[this was before fees] I believe the answer is the architecture itself. Thresholds, for instance, are the expression of authority and institutionalization. And the most important barriers are those thresholds which you cannot touch.
The issue of easing access should be much more important than simply concern for disabled entrances. In a way, we are all disabled when we cannot use a particular space. Thresholds built up in words are more powerful than physical thresholds.

Think of the barriers that planning departments erect to the understanding and availability of planners and planning information, with their one-stop shops designed like estate agents: really designed to sell a ‘service’ and keep the public outside the door marked private. The false hermetic seal between the order behind and the social mess out front.