Seriously, there could be a JR (judicial review) because of the way Stratford DC has ‘filtered’ its survey results in the latest round of its core strategy consultation.
It asked respondents to rank their preferred locations for its ‘Big Lump’ of a strategic allocation, including the previous preferred option (a new settlement at Gaydon) but also four more, including another new settlement. Not everyone ranked all of the options 1 to 5. Some, for example, ranked them 1, 2, or 1, 2, 3 etc. It was not until midway through the consultation that Stratford DC indicated you had to rank all five, and even then it did not say that incomplete rankings would render a response invalid.
But now their committee report ‘throws away’ incomplete rankings, treating them as what is known in statistics as a ‘non-response’. This has seriously affected the headline result: Gaydon would not have been top of the list, but now it is. So this decision could seriously sway councillors and lead them to select Gaydon or not. Naturally the anti-Gaydon local group is furious. It could end in the courts.
Now why would you want to do that? Does it introduce distortions if you don’t? Well, in terms of statistical theory, no: if you are only considering top-ranked responses, that is rank 1, then you can report the top rank without any distortion. But then you lose all the information in ranks 2, 3 etc. It is the same as asking a single-response question on which option you prefer.
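To make the distortion concrete, here is a minimal sketch in Python with made-up responses (the option names other than Gaydon are hypothetical). Counting first preferences across every response, complete or incomplete, loses nothing at rank 1; dropping the incomplete responses, as the committee report does, can flip the headline winner.

```python
from collections import Counter

# Hypothetical responses: each maps an option to the rank the respondent gave it.
# Several respondents ranked fewer than five options, as happened in the real survey.
responses = [
    {"Gaydon": 1, "B": 2, "C": 3, "D": 4, "E": 5},  # complete
    {"Gaydon": 1, "C": 2, "B": 3, "E": 4, "D": 5},  # complete
    {"B": 1, "Gaydon": 2},                          # incomplete
    {"B": 1, "C": 2, "D": 3},                       # incomplete
    {"B": 1},                                       # incomplete
]

def top_choices(resps):
    """Count how often each option is ranked first (rank 1)."""
    return Counter(min(r, key=r.get) for r in resps)

all_first = top_choices(responses)
complete_only = top_choices([r for r in responses if len(r) == 5])

print(all_first)       # B leads 3 to 2 when every response counts
print(complete_only)   # Gaydon 'wins' 2 to 0 once incomplete responses are dropped
```

Every respondent here expressed a clear first preference, so throwing away the incomplete rankings discards valid rank-1 information and changes the answer.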
If you do want to use the rank 2, 3 etc. responses in your analysis, you get serious distortions if you don’t deal with the incomplete rank-order responses. That is because, as we have all learned, people aren’t just ranking their top preferred options. In reality many respondents are not ranking top preferences but least preferred ones, ranking lowest the area furthest away from them. Here, if one person ranks 1 to 3 and another ranks 1 to 5, the third rank of the first is not the same as the fifth rank of the other: you can’t just add them up. If you want to do serious statistical analysis, to find out whether people are ranking dispreference rather than preference, then you can’t treat all responses the same. You need to re-weight the response scores, and there are several techniques to do so, mostly pioneered in the marketing industry. Then you can, for example, use spatial autocorrelation and other tests to see if people are simply always ranking worst the area closest to them, which of course is not an unbiased preference.
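One simple re-weighting scheme (an illustrative sketch only; market-research practice offers more sophisticated ones, and the data here is invented) rescales each respondent’s ranks by how many options they ranked, so that a partial and a full ranking become comparable before aggregation:

```python
def reweighted_scores(resps):
    """Rescale each respondent's ranks before aggregating.

    A respondent who ranked n options gives their top choice a score
    of 1.0 and their last ranked choice a score of 1/n. So a 3rd-of-3
    rank scores 1/3, which is not treated the same as a 5th-of-5 rank
    (1/5), unlike naive summing of raw rank numbers.
    """
    totals = {}
    for r in resps:
        n = len(r)  # how many options this respondent ranked
        for option, rank in r.items():
            totals[option] = totals.get(option, 0.0) + (n - rank + 1) / n
    return totals

# Hypothetical data: one partial ranking (3 of 5) and one full ranking.
responses = [
    {"Gaydon": 1, "B": 2, "C": 3},
    {"B": 1, "Gaydon": 2, "C": 3, "D": 4, "E": 5},
]
totals = reweighted_scores(responses)
print(totals)
```

The choice of rescaling is itself a modelling decision, which is exactly why the raw data should go to a statistician rather than be silently filtered.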
But there is no indication that Stratford DC is doing this. They are half aware of the problem, but have chosen the clumsiest and worst method of all to ‘compensate’: throwing away all incomplete responses, and as a result they have seriously distorted the headline result. They didn’t need to anyway, as they don’t seem to be really analysing the results below rank 1. So they have committed a serious, though unintentional, statistical blunder and risk misleading cllrs at the forthcoming ‘final’ meeting on submission.
I, and the entire English planning profession, thought we were so clever in pioneering this kind of question. Instead we were idiots. Questionnaire design and analysis without at least some grasp of statistics 101 is worse than not doing a questionnaire at all.
Stratford should pull the report and give the raw data to a professional statistician to analyse first. It would be much cheaper than a ‘sadly’ well-deserved JR. It should also publish the raw anonymised results on its website (good practice). (The DPA means you can’t include the full postcode, but you can and should include the postcode district.)
After all, how can you be sure of what is ‘locally led’ if you don’t even know what locals actually prefer or not?