Is Constituency Polling Worth It?

Constituency-level polling commissioned by Michael Ashcroft is a major new source of information about the upcoming U.K. general election. For the past year, Ashcroft has been vocal on Twitter about how important it is to understand what’s going on at the constituency level, rather than just following the “mood music” of the national polls.

But is this true? Clearly national polls do take into account what’s happening in the marginal constituencies that will decide the election; it’s just that they also take into account what is happening everywhere else. Thus, the extent to which they tell us what’s happening in the marginal seats depends on whether different seats are seeing very different changes over time or whether they’re generally moving up and down together. That is, when Labour goes up 2 percent nationally, is the party gaining 2 percent in every constituency, or is the underlying distribution of changes across constituencies more complicated than that?

In past U.K. elections, there was no way to know whether support for a given party was moving up and down in parallel across different constituencies because there was so little constituency-level polling. But now Ashcroft’s polls enable us to assess this question. In recent months, he has started to revisit constituencies that he polled in the second half of 2014. This enables us to assess whether there is evidence that party support at the constituency level is following the national trends up and down or whether there is more going on (and if so, how much more).

The degree to which constituencies shift in parallel has implications for Ashcroft’s polling: Is it worth revisiting these constituencies rather than exploring unpolled ones? It also has implications for our forecasting methodology: Should we continue to use the old polls in our general election predictions once there are new ones for a given constituency?

Ashcroft has polled 38 English constituencies at least twice. Since only a small number of these have more than two polls, we focus on the most recent pair of polls for each.1

Consider the following thought experiment. Imagine that nothing had actually changed in how anyone in any constituency intended to vote between when the first polls were conducted and when the second ones were conducted. If this were true, any difference in the polls for a given constituency would be entirely due to sampling variability. So how much variability between polls in the same constituency should we expect? If the actual degree of variability is greater than the expected sampling variability, that indicates that levels of support for the parties really are changing across the polled constituencies. If not, it suggests that nothing much is actually changing and that we are just seeing sampling variation.

We will omit the technical details here, but we are able to use some standard assumptions about sampling variability and the normal distribution to calculate what fraction of the variation in polling within the same constituencies is due to sampling variation and how much is real variation due to changes in voting intention.2
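To make the logic concrete, here is a minimal sketch of this kind of decomposition. The vote share `p` and per-poll sample size `n` are illustrative placeholders, not the article’s actual data, and the calculation leans on the same simple-random-sampling assumption described above.

```python
def real_change_fraction(diffs, p=0.30, n=525):
    """Estimate the share of observed poll-to-poll variation that reflects
    real change in opinion. `diffs` are poll2 - poll1 vote shares for one
    party across constituencies; p and n are illustrative assumptions."""
    # Sampling variance of a single poll's vote-share estimate
    var_one = p * (1 - p) / n
    # Variance of the difference between two independent polls
    var_sampling = 2 * var_one
    # Observed mean-square difference (centered at 0 under the
    # no-change null hypothesis)
    var_observed = sum(d * d for d in diffs) / len(diffs)
    # Fraction of observed variation not attributable to sampling error
    return max(0.0, 1 - var_sampling / var_observed)
```

If the observed differences are no larger than sampling noise alone would produce, the function returns a value near zero; larger observed swings push it toward one.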

When we do this calculation, we find that 64 percent of the variation between Ashcroft’s polling results is likely to be due to real changes in opinion and the remaining 36 percent to sampling variation.

When we break this out by party, we see that a lot of the change that Ashcroft is measuring involves changes in support for the U.K. Independence Party (UKIP). For all the other parties, the fraction of variation due to real change in support is 45 percent to 49 percent — for UKIP, it’s 84 percent. This makes sense: UKIP’s support has changed the most of any party’s, both in the past year and since the last election. At the same time, UKIP’s gains have been drawn from a range of parties, and so none of the other parties has seen changes as large as those for UKIP.

But this analysis does not yet answer the question we started out with — whether the constituency polls provide clear evidence about how the election is shifting beyond what you would have learned from the traditional national polls. The extra variation we saw may have been simply because of a uniform national swing. To account for this, we have to redo the same analysis, subtracting the national shift in support for each party from the shift we calculated for each pair of polls.

With these adjustments for national trends, we find that a bit less than half (44 percent) of the reported changes in the poll pairs is associated with change in support at the constituency level, relative to what is happening nationally. The fraction of variation due to constituency-level change is 27 percent for the Conservatives, 36 percent for Labour, 51 percent for the Liberal Democrats, 48 percent for the Greens, and 52 percent for UKIP. Voting intention for Labour and the Conservatives has been following a more uniform swing across constituencies; voting intention for the other parties has been much more in flux over time and across constituencies.
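The national-swing adjustment can be sketched the same way: subtract the party’s national shift from each constituency’s observed shift before comparing against the expected sampling variance. As before, the default values are illustrative assumptions, not the article’s data.

```python
def constituency_change_fraction(diffs, national_shift, p=0.30, n=525):
    """Fraction of poll-pair variation left after removing a uniform
    national swing, under the simple-random-sampling assumption.
    `national_shift` is the party's change in national vote share."""
    # Remove the uniform national swing from each constituency's shift
    adjusted = [d - national_shift for d in diffs]
    # Expected variance of a poll-pair difference from sampling alone
    var_sampling = 2 * p * (1 - p) / n
    # Observed mean-square of the adjusted differences
    var_observed = sum(d * d for d in adjusted) / len(adjusted)
    # Share of variation attributable to constituency-specific change
    return max(0.0, 1 - var_sampling / var_observed)
```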

As a rough summary of these calculations, we can say that 36 percent of the change in repeated Ashcroft polls of the same constituencies has been due to sampling variability, 20 percent has been due to national swings, and 44 percent has been due to constituency-specific swings.

On the one hand, one could argue that this means that re-polling the same constituency does not bring much value. Less than half the variation will be due to the thing that constituency polling ostensibly provides: information about how constituencies are moving relative to the national polls. On the other hand, 44 percent is not nothing, and there would be value in simply doubling the effective sample size from a given constituency even if there were no real change.
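The point about doubling the effective sample size is just the usual square-root law: pooling two equally sized polls of an unchanged quantity shrinks the standard error by a factor of the square root of 2. A quick check, using an assumed 30 percent vote share and 525 respondents per poll:

```python
import math

p, n = 0.30, 525                              # illustrative share and per-poll size (assumed)
se_one = math.sqrt(p * (1 - p) / n)           # standard error of one poll
se_pooled = math.sqrt(p * (1 - p) / (2 * n))  # pooling two polls doubles n
ratio = se_pooled / se_one                    # 1/sqrt(2), about 0.707
```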

Ashcroft has also re-polled five Scottish constituencies in the past week that he polled previously this year. This is a very small set, so the corresponding calculations for these are very uncertain, but it appears that 80 percent of the changes in these polls can be attributed to relative changes in support at the constituency level. This is true even though the time elapsed between the two polls is shorter than it was for the average English pair. We suspect that this is because Scotland has seen much larger changes since the last election, with the rise of the SNP, and so there is simply a lot more change occurring across constituencies as the new reality of Scottish politics settles in.

These results have some implications for whether it makes sense for Ashcroft (or anyone else) to repeatedly poll the same constituencies. For the most part, he has aimed for breadth of constituency coverage rather than repeated measurement, and our results indicate that this was a good choice. The one exception might be in Scotland, where there seems to be considerable continuing turmoil across different constituencies and recent follow-up polling has revealed substantial changes even since January 2015. Outside of Scotland, the swings over the past year have been relatively modest. But, of course, we would not have known this if Ashcroft had not gone back to poll the same constituencies a second time!

These results also matter for forecasting because they tell us whether we should keep using older polls of a constituency when new ones are released. If it appeared that there was a lot of change occurring out in individual constituencies, we would want to stop using old polling data as soon as we had any newer data to replace it. But because a lot of the change is due to sampling variation, it makes sense to keep using the old polls. Two polls are better than one, as long as they are measuring the same thing, and it appears that they mostly are.


Check out our 2015 general election predictions and full U.K. election coverage.

Footnotes

  1. The first poll in each pair was conducted at some point between May and December 2014, with typical polls in the field for about one week. The second poll in each pair was conducted at some point between August 2014 and April 2015. On average, there are 151 days between the polls in each pair we consider, with a range of 74 to 293 days.

  2. We calculate the difference between Poll 1 and Poll 2 for each constituency. We then calculate the standard error of polls the size of Ashcroft’s (about 500 to 550 respondents, excluding undecided voters). This assumes that Ashcroft’s polls have roughly the level of sampling variability that a simple random sample would have, even though real polling methodologies are more complicated. Using these and the rule for the variance of the difference between two independent normally distributed quantities, we construct z-scores from the ratio of the actual differences for each party to the theoretical standard deviation of the differences if there were no underlying change in support at the constituency level. If there were only sampling variation, the root mean square of these z-scores would be 1. If it is greater than 1, that indicates there is some real change. From these values, we compute r-squared statistics indicating the fraction of the observed variation across poll pairs that is not attributable to sampling error.
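  The footnote’s z-score calculation can be sketched as follows, assuming the simple-random-sampling standard error described above; `p_bar` and `n` are placeholder values, not the article’s actual inputs.

```python
import math

def rms_z(diffs, p_bar=0.30, n=525):
    """Root mean square of z-scores for poll-pair differences under the
    no-change null. A value near 1 is consistent with pure sampling
    variation; values above 1 indicate real change."""
    # Theoretical sd of a poll-pair difference if nothing changed
    sd_null = math.sqrt(2 * p_bar * (1 - p_bar) / n)
    return math.sqrt(sum((d / sd_null) ** 2 for d in diffs) / len(diffs))

def real_change_share(diffs, p_bar=0.30, n=525):
    """R-squared-style share of observed variation not attributable to
    sampling error: 1 - 1/rms^2, floored at zero."""
    r = rms_z(diffs, p_bar, n)
    return max(0.0, 1 - 1 / r ** 2)
```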

Ben Lauderdale is an associate professor of social research methods at the London School of Economics.
