
Flag dynamics

Nor did you mention it is a sample of only 2043 people, or 0.003% of the UK population (some poll services use a sample size of 5000 or even 10000 as a minimum to qualify as nationwide representation)
That changed
Nor did you mention it is a sample of only 2043 people, or 0.003% of the UK population (some poll services use a sample size of 2000 as a minimum to qualify as nationwide representation)

I didn't see the sample size (and couldn't find it after looking).

Yes it did. Years ago I had a few dealings with polls, and the 2 services I had detail of used 2000; searching after I made that statement highlighted the fact that I may be out of date, so I made the amendment.
Since there is no 'right and wrong' about this, neither Sunray's original nor his revised figures were 'wrong'.

Any sample size (even N=1 :-) ) will provide "an" estimate of the proportion in the sampled population, the question being how good/useful one wants/needs the estimate to be.

Provided that the sample is a fairly small proportion of the total population being sampled, and provided that the true (in the entire population) proportion being estimated is not very high or very low (i.e. fairly close to 0% or 100%), then a sample size of 2,000 will give a 'margin of error' of roughly ±2.19% (usually thought of as "±2%"), with 95% confidence. That sample size (2,000) is very commonly used for 'routine' political opinion polls (of voting intention, views about politicians/policies etc. etc.).

A sample size of 5,000 will give a 'margin of error' of roughly ±1.38% (usually thought of as "±1%") with 95% confidence, or roughly ±1.82% (usually thought of as "±2%") with 99% confidence.

Sample sizes of 10,000 or above are rarely used for that sort of opinion poll.

In other words, "one pays one's money and takes one's choice". The chosen sample size will never be 'wrong', but it may be too small for the resulting estimate to be a useful estimate of the 'true' proportion (in the entire population) for the purpose in question, or it might be unnecessarily large (hence costly) to achieve the degree of precision that one actually wants/needs.
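
For anyone who wants to check those figures: they come from the usual normal approximation to the binomial, z·√(p(1−p)/n), evaluated at the worst case p = 0.5. A minimal sketch, assuming Python 3.8+ (the function name is my own, for illustration):

from statistics import NormalDist
from math import sqrt

def margin_of_error(n, confidence=0.95, p=0.5):
    # Normal-approximation margin of error for an estimated proportion.
    # p = 0.5 is the worst case, which is what pollsters usually quote.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for 95%
    return z * sqrt(p * (1 - p) / n)

print(f"n=2000, 95%: ±{margin_of_error(2000):.2%}")                   # ±2.19%
print(f"n=5000, 95%: ±{margin_of_error(5000):.2%}")                   # ±1.39% (the ±1.38% above, rounding aside)
print(f"n=5000, 99%: ±{margin_of_error(5000, confidence=0.99):.2%}")  # ±1.82%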
 
There's one thing I've never really understood about the 'polling organisations', particularly given that they employ plenty of very competent statisticians....

.... Whenever I am involved in this sort of thing (and it's nearly always the same when anyone else is involved with 'serious research'), I start by deciding (usually 'being told'!) what degree of precision is required (i.e. what 'margin of error' and what confidence level) and from that calculate the required sample size.

These polling organisations seem to just go for very round numbers for their design sample size (very commonly 2,000 or 5,000) [and, for logistical/operational reasons, usually end up with a slightly larger sample than they designed for], and then calculate what 'margin of error' that will result in for a given degree of confidence (usually 95%). That is, in my opinion, all a bit 'back-to-front', and leads to them having to state that their margin of error is "about ±2%" or "about ±1%".
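
Working in the direction John describes, precision first, is just the inverse of the same formula: n = (z/E)²·p(1−p) for a desired margin E. A hedged sketch under the same assumptions as above (Python, normal approximation, worst-case p = 0.5; the function name is again illustrative):

from statistics import NormalDist
from math import ceil

def required_sample_size(margin, confidence=0.95, p=0.5):
    # Smallest n whose normal-approximation margin of error is <= margin.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z / margin) ** 2 * p * (1 - p))

print(required_sample_size(0.02))  # ±2% at 95% -> 2401, not the round 2,000
print(required_sample_size(0.01))  # ±1% at 95% -> 9604, not the round 5,000

Which rather illustrates the point: a round 2,000 buys you ±2.19%, whereas a true ±2% needs 2,401.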
 
A lot also depends on where and how the poll is collected, and thence how the results are harvested/analysed, and the service's reason for taking the poll and their interest in the matter, i.e. the sponsor.
I'm not in any way saying I disbelieve the More in Common results, but since Thursday evening I've asked around 20 or so people what the flags on lampposts mean, and only two instantly came up with anything to do with immigrants; a small group of half a dozen started a discussion and managed to bring it to the fore. Accordingly I feel the result is affected by how the question was asked.

As an example of this: I remember a TV programme demonstrating how results could be influenced, by showing several different tests with food.
The most obvious was 20 cats being offered 2 plates of food, just like the adverts at the time, with a can of cat food standing beside each saucer. I don't know what they put in the 2 saucers, but 10 cats went one side and the other 10 went the other way, and photos were taken of each. At that point the statistics were 50:50 of the 20 cats.
However, the photos were then arranged showing 4:1, 5:1 (I think), & 9:1, completely ignoring the other 15, 14 & 10 pictures, to deliberately demonstrate how statistics can be tailored to suit one's needs, which is of course deliberate corruption of the facts.

One that really held my attention was mashed potato: getting 12 people to choose which they preferred, actually done in 2 blocks of six, the second block shown after the explanation of the first block to the TV audience. The test was carried out seated in a black cubicle (think polling-station sort of thing, but big enough to sit in completely) with a lifting flap in front of them. The light was turned off.
In complete darkness they were told the flap would open (which happened to be 'fanlight' format) and they would see 2 similar-shaped dishes; 'so they didn't get mixed up', one was clear and the other white, containing a similar quantity of potato, each with an identical fork so they didn't get contaminated. All they had to do was compare the 2 different varieties and choose the best; they weren't told what 'best' meant. The whole test was then repeated with a second similar pair. During the test they could pick a dish up, but only one at a time, smell it etc., but the other had to stand in its place while they handled one.
Of course, all of the dishes contained identical potato; the surface they stood on had a dip so the dishes could only stand fairly accurately in one position.
The only light was directly above the flap, illuminating the dishes but leaving the subjects in darkness.

They predicted at least ¾ would choose the clear dish in one test and a different ¾ the white in the other test.
The result was 10 and 9, as predicted.

The only difference was the position of the light: in one test it illuminated the whole dish; in the other, the front rim of the dish was in shadow. After the test was completed and results taken, the flap was closed, the light moved and the flap reopened, and the whole cycle repeated; only one of the subjects noticed the difference until it was pointed out to them.

It really is that easy to influence such results.

Just changing one word in a question will significantly influence a survey, and of course the old classic is asking the negative question.
The language used by different age generations/teaching changes will always affect them; older people will remember the state of the country in different decades, under different governments etc.
 
However, the photos were then arranged showing 4:1, 5:1 (I think), & 9:1, completely ignoring the other 15, 14 & 10 pictures, to deliberately demonstrate how statistics can be tailored to suit one's needs.

I've seen such stats on TV, where they have said something similar to "97% of 45 people asked agreed that A was better than B".
 
I've seen such stats on TV, where they have said something similar to "97% of 45 people asked agreed that A was better than B".
Recently there was a hair shampoo advert doing it.

and sometimes the percentages don't work: 97% of 45 people would be 43.65 people, so the nearest achievable figures are 43/45 ≈ 95.6% or 44/45 ≈ 97.8%.

Something I can never work out is 'twice as cheap'
 
A lot also depends on where and how the poll is collected, and thence how the results are harvested/analysed, and the service's reason for taking the poll and their interest in the matter, i.e. the sponsor.

On their website they don't identify a sponsor/commissioner for that poll, but they do for others, and would have no reason to conceal one, so it seems they did it on their own initiative, as part of their research activities.

I'm not in any way saying I disbelieve the More in Common results, but since Thursday evening I've asked around 20 or so people what the flags on lampposts mean, and only two instantly came up with anything to do with immigrants; a small group of half a dozen started a discussion and managed to bring it to the fore. Accordingly I feel the result is affected by how the question was asked.

We can see how the questions were asked: https://www.moreincommon.org.uk/media/u4xinys3/august-flag-polling.xlsx

Your figure of 10% is a lot lower than the range in the poll results (31-58%). But what question did you ask? Did you offer this multiple choice:

  • Pride in Britain and support for the flag
  • A political statement against immigrants
  • Don’t know

or just ask "what do you think it means"?

Can you be sure that nobody was anxious about saying "A political statement against immigrants" for fear of how you or others would react?

And we wait for JohnW2 to tell us how much reliance we can place on results from a sample of 20.


It really is that easy to influence such results.

Just changing one word in a question will significantly influence a survey, and of course the old classic; the language used by different age generations/teaching changes will always affect them, older people will remember the state of the country in different decades, under different governments etc.

All of those things are true.

But given what we know about More in Common I'd be amazed if they were doing anything underhand.
 
Something I can never work out is 'twice as cheap'
I've always assumed they mean "half the price".
I'm with Sunray here: "twice as cheap" is a very odd statement, on a par with the commonly seen "100% less" etc. (which I suspect is usually meant to mean '50% less') or, even worse, "at least 100% reduction" :-)
 
A lot also depends on where and how the poll is collected, and thence how the results are harvested/analysed, and the service's reason for taking the poll and their interest in the matter, i.e. the sponsor.
Very little of this is anything to do with (mathematical) statistics, per se. In fact, everything I've written relies only on the sample being a true 'random sample' of the population of interest (in practice, as close to that as is possible in the real world).

However, as you highlight, there are many other crucial questions. A true random sample will provide an estimate, with quantifiable confidence, of the proportion of the entire population who would give a certain answer to the question which was asked, in the manner that it was asked. For the result to be meaningful, the question asked has to be precise, unambiguous, relevant and asked in a totally unbiased way.

The maths of sample surveys is relatively trivial and 'easy', but choosing the appropriate questions and wording/asking them in a totally unbiased way can be extremely challenging, and is very often not done in an ideal way. The questions to be asked in a decent survey should be (and often are) extensively 'field tested' before use in anger, to make sure that they are being posed, interpreted and answered in the manner that was intended. As examples of the sort of issues that have to be considered, you later write:
Just changing one word in a question will significantly influence a survey, and of course the old classic is asking the negative question. The language used by different age generations/teaching changes will always affect them; older people will remember the state of the country in different decades, under different governments etc.
... all of which is very true. Another is that, in our cosmopolitan country, we need to consider the fact that people from different cultures, particularly those whose first language was not English, may interpret and/or answer questions differently, maybe 'incorrectly'.

In a similar fashion, as you go on to illustrate, when one conducts 'experiments' there are countless factors (many of which one might not even think of or expect) that can markedly influence results but which really need to be considered in designing a valid and meaningful experiment.

It goes without saying that everything to do with the sampling, the questions, how the questions are asked and how the data is analysed and conclusions drawn from it all have to be done 'objectively' - and that if those who commission the survey in any way 'bias' any of those processes to suit their own 'interests', then that is just plain fraud, essentially no different from simply lying about the results.

I obviously cannot say that such 'fraud' never happens, since it sometimes does (there are some very infamous historical examples), but one can only have sensible discussions by excluding those cases, and I think one can be reasonably sure that all the major polling organisations in 'respectable' countries will be very 'objective', and will do all they can to avoid being influenced/biased by those paying them to undertake the survey.
 
..... since Thursday evening I've asked around 20 or so people what the flags on lampposts mean, and only two instantly came up with anything to do with immigrants ...
And we wait for JohnW2 to tell us how much reliance we can place on results from a sample of 20.
Yes, I can have a stab at that :)

However, I have to start with the caveats implicit in what I have recently written - namely, what I can tell you relies on the assumption that the 20 people were something approaching a random sample of the entire UK population (I fear probably far from true) and that the question was asked very clearly, with no sort of bias or 'putting words into people's mouths' resulting from the manner in which people were asked.

A couple of other points, which I also forgot to mention in my recent post .... Firstly, it's desirable that respondents to survey questions are not aware of the answers given by other surveyed people, which could bias them. Secondly, 'secret'/anonymous surveys are much safer since, particularly in relation to some issues, some people would be embarrassed to give a truthful answer to a question posed face-to-face (in which case they are more likely to give the 'expected'/'decent' answer, even if not truthfully :-) ).

Anyway, after all that preamble ....

If it were a true random sample (N=20) of the entire UK population (which, as above, I very much doubt), then if 2 of those 20 "instantly came up with something to do with immigrants", we could be 95% confident that between about 1.2% and 31.7% of the entire UK population would come up with that 'instant answer'.

In reality, given my doubts about the randomness of the sample (in terms of the entire UK population), I think it would be fairer to say that we can be 95% confident that such an answer would come from between 1.2% and 31.7% of the population available to Sunray to sample from (in terms of geography, ethnicity, gender, nationality, socioeconomic status etc. etc.).

.... a small group of half a dozen started a discussion and managed to bring it to the fore,
I'm not quite sure what Sunray means by that, but if he's saying that he ended up (after discussions) with 8/20 answers which were "something to do with immigrants", that would then equate to 95% confidence that the true proportion (in whatever population was being sampled from) who would give such an answer (eventually, after discussion) would be between about 19% and 64%.

Kind Regards, John
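
John doesn't say which method he used, but his figures are consistent with the exact (Clopper-Pearson) interval for a binomial proportion. A minimal sketch that reproduces them, assuming Python with SciPy installed (the function name is illustrative):

from scipy.stats import beta

def clopper_pearson(k, n, confidence=0.95):
    # Exact (Clopper-Pearson) confidence interval for a binomial
    # proportion, via quantiles of the beta distribution.
    alpha = 1 - confidence
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

print(clopper_pearson(2, 20))  # ~(0.012, 0.317): the "1.2% and 31.7%" above
print(clopper_pearson(8, 20))  # ~(0.191, 0.639): the "about 19% and 64%" above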
 
It is difficult to find a survey that is completely unbiased, whether deliberate or careless bias is at play.
I also suspect that many who pay for a survey will disregard its findings if it does not fit into the results they were hoping for but place great credence upon it if it seems to further their cause.
 
since Thursday evening I've asked around 20 or so people what the flags on lampposts mean, and only two instantly came up with anything to do with immigrants; a small group of half a dozen started a discussion and managed to bring it to the fore. Accordingly I feel the result is affected by how the question was asked.

I'm not quite sure what Sunray means by that, but if he's saying that he ended up (after discussions) with 8/20 answers which were "something to do with immigrants", that would then equate to 95% confidence that the true proportion (in whatever population was being sampled from) who would give such an answer (eventually, after discussion) would be between about 19% and 64%.

Kind Regards, John
The rest of the post I have no real response to, other than you seem to indicate my ad hoc survey may have been more representative than I would have given it credence for. Much of it was over 'a cup of tea' during AmDram rehearsals and far from secret, especially the half dozen, which was ironically brought into the conversation by a six-year-old pulling a hand-waving flag from a box of props and asking if she could put it in her bedroom window; the conversation drifted into lamp posts being on the news etc. The reality is none of that cluster knew enough, but the few-minute conversation brought enough snippets together to associate them with the Bell Hotel etc.

My sample has included only one person between the ages of 13 and late 30s, and he has no idea there is a world outside Fortnite, YouTube football, music and films. The 13-year-old thinks the St George flag only means England football, so the only meaningful sector was the >40 age group. My question was not multiple choice, more along the lines of "what are the flags on lamp posts for?" or something similar; I feel a number of those asked would have ended up in the DK category if they had taken the More in Common poll.

Despite my statement
...since Thursday evening I've asked around 20 or so people what the flags on lampposts mean, and only two instantly came up with anything to do with immigrants...
Being a conversation rather than an examination, replies went on to include other comments. If I were to translate my verbal, undocumented results into the More in Common poll I think I'd struggle, as I had mostly sport answers and that isn't in the poll; as such I think it would be a fairly even split across the 3 categories, with a toss-up between DK and pride (and of course that in itself is on the way to meaning something else) taking the lead.
E&OE
 
It is difficult to find a survey that is completely unbiased, whether deliberate or careless bias is at play here.
Indeed. That's true of any survey or any experiment (e.g. things like Clinical Trials) and it's probably true to say that in the (imperfect) real world, it's not just 'difficult' but actually impossible to achieve 'perfection'. However, with very careful design, conduct, analysis and reporting one can get reasonably close.
I also suspect that many who pay for a survey will disregard its findings if it does not fit into the results they were hoping for but place great credence upon it if it seems to further their cause.
Very much so - known in the trade as one of the forms of 'publication bias'. However, although it often is, it's not necessarily the deliberate fault of those 'paying for' the survey or experiment. In terms of 'real publication' (e.g. in journals, or even by popular media), even when those involved attempt to get results 'published', it can be essentially impossible to interest journals/media etc. in reports which are 'negative' (e.g. "experimental treatment X does not help with disease Y").

I suppose one can't really blame the journals/media, since they would probably regard regular publication of such 'negative' results as being similar to telling us on the 6 o'clock News every day about all the planes that haven't crashed, all the people who have not been assassinated etc. etc.!

In some fields, all ongoing research (and subsequent results) is meant to be 'registered' in a publicly accessible database - as such, although no-one can enforce 'publication', results are there to be seen by anyone who goes looking for them.

Kind Regards, John
 