Since there is no 'right and wrong' about this, neither Sunray's original nor his revised figures were 'wrong'.
Any sample size (even N=1) will provide "an" estimate of the proportion in the sampled population, the question being how good/useful one wants/needs the estimate to be.
Provided that the sample is a fairly small proportion of the total population being sampled, and provided that the true proportion being estimated (in the entire population) is not very high or very low (i.e. not close to 0% or 100%), a sample size of 2,000 will give a 'margin of error' of roughly ±2.19% (usually thought of as "±2%") at 95% confidence. That sample size (2,000) is very commonly used for 'routine' political opinion polls (of voting intention, views about politicians/policies, etc.).
A sample size of 5,000 will give a 'margin of error' of roughly ±1.38% (usually thought of as "±1%") at 95% confidence, or roughly ±1.82% (usually thought of as "±2%") at 99% confidence.
Sample sizes of 10,000 or above are rarely used for those sorts of opinion polls.
In other words, "one pays one's money and takes one's choice". The chosen sample size will never be 'wrong', but it may be too small for the resulting estimate to be a useful estimate of the 'true' proportion (in the entire population) for the purpose in question, or it might be unnecessarily large (hence costly) for the degree of precision that one actually wants/needs.
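For anyone who wants to check those figures, here is a minimal sketch (in Python, with function and variable names of my own choosing) of the standard margin-of-error approximation for an estimated proportion, assuming simple random sampling from a much larger population and taking the worst case of a true proportion around 50% (the tiny difference from the ±1.38% quoted above is just rounding):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Approximate margin of error for an estimated proportion.

    Assumes simple random sampling from a much larger population.
    p is the true proportion; p=0.5 is the worst case.
    z is the normal critical value: roughly 1.96 for 95% confidence,
    2.576 for 99% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Worked examples matching the figures above:
print(f"n=2000, 95%: ±{margin_of_error(2000) * 100:.2f}%")          # ≈ ±2.19%
print(f"n=5000, 95%: ±{margin_of_error(5000) * 100:.2f}%")          # ≈ ±1.39%
print(f"n=5000, 99%: ±{margin_of_error(5000, 2.576) * 100:.2f}%")   # ≈ ±1.82%
```

Because the margin shrinks only with the square root of the sample size, halving it needs roughly four times as many respondents, which is why going from 2,000 to 10,000 buys relatively little extra precision for the money.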
A lot also depends on where and how the poll is conducted, how the results are then gathered and analysed, and the reason the poll was commissioned and the interest of whoever paid for it, i.e. the sponsor.
I'm not in any way saying I disbelieve the More in Common results, but since Thursday evening I've asked around 20 or so people what the flags on lampposts mean, and only two instantly came up with anything to do with immigrants; a small group of half a dozen started a discussion and eventually brought it to the fore. Accordingly, I feel the result is affected by how the question was asked.
As an example of this, I remember a TV programme demonstrating, with several different tests involving food, how results could be influenced.
The most obvious was 20 cats being offered two saucers of food, just like the adverts at the time, with a can of cat food standing beside each saucer. I don't know what they put in the two saucers, but 10 cats went to one side and the other 10 went the other way, and photos were taken of each. At that point the statistics were 50:50 across the 20 cats.
However, the photos were then arranged to show 4:1, 5:1 (I think) and 9:1, completely ignoring the other 15, 14 and 10 pictures, to deliberately demonstrate how statistics can be tailored to suit one's needs, which is of course deliberate corruption of the facts.
One that really held my attention involved mashed potato, with 12 people asked to choose which they preferred, actually done in two blocks of six, the second block run after the first block's results had been explained to the first group and the TV audience. The test was carried out seated in a black cubicle (think polling booth, but big enough to sit in completely) with a lifting flap in front of them. The light was turned off.
In complete darkness they were told the flap would open (it happened to be in 'fanlight' format) and they would see two similarly shaped dishes; 'so they didn't get mixed up', one was clear and the other white, each containing a similar quantity of potato and an identical fork so they didn't get contaminated. All they had to do was compare the two 'different varieties' and choose the best; they weren't told what 'best' meant. The whole test was then repeated with a second similar pair. During the test they could pick a dish up, but only one at a time, smell it etc., while the other had to stay in its place.
Of course, all of the dishes contained identical potato, and the surface they stood on had a dip so the dishes could only stand in one fairly precise position.
The only light was directly above the flap, illuminating the dishes but leaving the subjects in darkness.
They predicted that at least ¾ would choose the clear dish in one test, and a different ¾ the white dish in the other test.
The result was 10 and 9 as predicted.
The only difference was the position of the light: in one test it illuminated the whole dish, in the other the front rim of the dish was in shadow. After the test was completed and the results taken, the flap was closed, the light moved, the flap reopened and the whole cycle repeated; only one of the subjects noticed the difference until it was pointed out to them.
It really is that easy to influence such results.
Just changing one word in a question will significantly influence a survey, and of course the old classic is asking the negative question.
The language used by different generations, and changes in how they were taught, will always affect the answers; older people will remember the state of the country in different decades, under different governments, etc.