What Else We Got Wrong In Our U.K. Election Model

With the benefit of a few more days to examine the data — and a lot more hours of sleep — we can make a few additional points about what went wrong with our U.K. election forecasting model.

On Friday, we had two initial diagnoses. First, we had a problem with the level and structure of the uncertainty in our national vote share predictions. And second, we may have chosen to use the wrong question from the constituency polls. On further inspection, the first of these turns out to have been the more serious issue.

The question choice we made looks more reasonable in hindsight. As we said earlier, we would have had a better seat forecast for the Liberal Democrats had we used the generic format of the question rather than the more specific question. But that would have been a case of two errors in opposite directions canceling each other out, rather than a real improvement in the model: our national vote share model overstated the Lib Dem share, and the generic questions understated the party's strength in its incumbent seats.

We have now done the analysis necessary to say that the Ashcroft polls with the specific-question format were indeed more accurate at predicting the outcomes in the seats that were polled, both early and late in the campaign. There are many ways to measure this, but if we use the Euclidean distance of each poll's party shares from the ultimate seat-level results, in percentage points, the mean distance was 13.6 percentage points for the generic-format polls and 11.8 percentage points for the specific-format polls. The specific questions really were better than the generic questions.
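To make that comparison concrete, here is a minimal sketch of the distance calculation in Python. The poll and result figures are illustrative placeholders, not the actual Ashcroft data; only the averaging of seat-level Euclidean distances mirrors the metric described above.

    import numpy as np

    # Hypothetical seat-level data: each row is one polled constituency,
    # columns are party vote shares in percentage points (Con, Lab, LD).
    # These numbers are placeholders, not the real Ashcroft polls.
    poll_specific = np.array([
        [34.0, 31.0, 18.0],
        [42.0, 28.0, 12.0],
    ])
    poll_generic = np.array([
        [33.0, 33.0, 12.0],
        [40.0, 31.0,  9.0],
    ])
    results = np.array([
        [38.0, 30.0, 15.0],   # actual seat-level results
        [45.0, 27.0, 10.0],
    ])

    def mean_euclidean_distance(polls, results):
        """Mean, over seats, of the Euclidean distance (in percentage
        points) between a poll's party shares and the actual result."""
        return np.linalg.norm(polls - results, axis=1).mean()

    print("generic: ", mean_euclidean_distance(poll_generic, results))
    print("specific:", mean_euclidean_distance(poll_specific, results))

A smaller mean distance means the poll format came closer, seat by seat, to the eventual result; on the real data, the specific format won by 11.8 points to 13.6.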

So the problem was primarily with our national vote share model. There was nothing we could do about the national miss in the reported polls, so our focus here has to be on what kinds of errors we anticipated. The key point is that we expected we might be wrong about the national vote shares, just not by as much as we were, nor in that particular combination of errors across parties. It was the magnitude and combination of those errors that led to our poor seat prediction.

The 90 percent prediction intervals in our final pre-election forecast put the Conservatives between 31.8 percent and 37.1 percent of the vote, Labour between 30.0 percent and 35.6 percent, and the Liberal Democrats between 9.8 percent and 13.9 percent. The actual results in Great Britain (i.e., excluding Northern Ireland) were 37.6 percent, 31.2 percent and 8.1 percent, respectively. The Conservatives did slightly better than the top end of our prediction interval, and the Liberal Democrats did considerably worse than the bottom end of theirs. This was a big problem because it was the poor performance of the Liberal Democrats, combined with the poor performance of Labour, that enabled the Conservatives to get from the top end of our predicted seat interval (305 seats) to the narrow majority (330 seats, excluding the speaker) that they secured.
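As a quick check of how the actual results compared with those intervals, here is a small sketch using only the numbers quoted above:

    # The 90 percent prediction intervals and Great Britain results
    # quoted in the paragraph above.
    intervals = {
        "Conservative":     (31.8, 37.1),
        "Labour":           (30.0, 35.6),
        "Liberal Democrat": ( 9.8, 13.9),
    }
    actual = {"Conservative": 37.6, "Labour": 31.2, "Liberal Democrat": 8.1}

    for party, (lo, hi) in intervals.items():
        share = actual[party]
        if lo <= share <= hi:
            status = "inside the interval"
        elif share > hi:
            status = f"{share - hi:.1f} points above the top"
        else:
            status = f"{lo - share:.1f} points below the bottom"
        print(f"{party}: {share}% was {status}")

Running this shows the Conservatives 0.5 points above their interval, Labour inside its interval, and the Liberal Democrats 1.7 points below the bottom of theirs.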

The reason that we are now comfortable putting most of the blame on the inadequacies of our national vote share model is that we have been able to run some further hypotheticals since the election. The most revealing is simply to plug the correct national vote shares into the model and then see what seat predictions we would have generated. The true seat counts were 330 for the Conservatives, 232 for Labour, eight for the Lib Dems, and 56 for the Scottish National Party. If we plug the true Great Britain vote shares into our model for seats, we get 310 for the Conservatives, 242 for Labour, 16 for the Lib Dems, and 57 for the SNP. In this scenario, our predictions were wrong in only 34 of 632 individual seats. That is still not perfect, but it is a lot better. Most of the remaining error comes from the fact that the Conservatives outperformed precisely in the marginal seats where it mattered most to their seat total, which can be seen in the charts from our original post-mortem as a kink upward for the Conservatives at around 30 percent to 40 percent vote share.

[Chart: seat-level Conservative performance by vote share, from our original post-mortem]
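A minimal sketch of the seat-level comparison behind this hypothetical is below. The constituency lists are toy placeholders standing in for the 632 Great Britain seats; the logic simply counts the seats where the predicted winner differs from the actual winner and tallies party totals.

    from collections import Counter

    # Toy placeholders: the winner the model predicts in each seat
    # (after plugging in the true national vote shares) and the
    # actual winner. The real exercise covers 632 seats.
    predicted_winners = ["Con", "Lab", "LD", "SNP", "Con", "Lab"]
    actual_winners    = ["Con", "Lab", "Con", "SNP", "Con", "Con"]

    # Count seats where the predicted winner was wrong.
    wrong = sum(p != a for p, a in zip(predicted_winners, actual_winners))
    print(f"{wrong} of {len(actual_winners)} seats predicted incorrectly")

    # Party seat totals under each scenario, analogous to comparing the
    # counterfactual totals (310/242/16/57) with the true ones.
    print("Predicted totals:", Counter(predicted_winners))
    print("Actual totals:   ", Counter(actual_winners))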

So our conclusion is that if we had put a reasonable amount of probability on the right national vote shares, we would have also had a reasonable probability of the right seat totals. If we could have figured out some way to capture the Conservatives’ performance in the marginals, that would have helped, too. As it is, we have a lot more work to do refining the national vote share analysis to protect us against the possibility of the kind of error we saw in 2015.

Ben Lauderdale is an associate professor of social research methods at the London School of Economics.
