The U.K. Election Wasn’t That Much Of A Shock

Despite betting markets and expert forecasts that predicted Theresa May’s Conservatives would win a large majority in the U.K. parliamentary elections, the Tories instead lost ground on Thursday, resulting in a hung parliament. As we write this in the early hours of Friday morning, Conservatives will end up with either 318 or 319 seats, down from the 330 that the Tories had in the previous government. A majority officially requires 326 seats.1

Conservatives will wind up with the plurality of seats and the plurality of the popular vote. It’s still possible — indeed, probable — that Conservatives will form a government, either as a minority government or as part of a coalition, most likely with the Democratic Unionist Party, which won 10 seats in Northern Ireland. By contrast, a coalition between Labour, Liberal Democrats and the Scottish National Party (which lost a significant number of seats) would have about 310 seats — short of a majority. It’s also probable that May will continue as Conservative leader and prime minister, the BBC reports.

With 42 to 43 percent of the vote, in fact, the Tories should wind up with their largest vote share since Margaret Thatcher was in power. But the outcome isn’t being interpreted as any sort of moral victory for May; instead, it’s being portrayed in the British media as a disaster. That’s because at the time May unexpectedly called a “snap” election seven weeks ago,2 polls showed Conservatives leading Labour by 17 percentage points and poised to win as many as 400 seats in parliament. Instead, they went backward and are at best hanging on by a thread. It’s even plausible that there could be another election later this year.

And yet, the results should not have been all that surprising to anyone who followed this year’s polling and the polling history of the U.K. closely. The final polling average showed Conservatives ahead by 6.4 percentage points. In fact, Conservatives should wind up winning the popular vote by 2 to 3 percentage points. That means the polling average will have been off by about 4 percentage points. (The table below lists YouGov twice because they polled the race using two different methods.)

The U.K. polls missed, but not by that much

POLLSTER CON. LAB. UKIP LIB. DEM. OTHER LEAD
Qriously 39% 41% 3% 6% 11% Lab. +3
Survation 41 40 2 8 8 Con. +1
SurveyMonkey 42 38 4 6 10 Con. +4
Norstat 39 35 6 8 12 Con. +4
YouGov.co.uk 42 38 3 9 7 Con. +4
Kantar Public 43 38 4 7 8 Con. +5
YouGov (The Times) 42 35 5 10 8 Con. +7
Opinium 43 36 5 8 7 Con. +7
Ipsos MORI 44 36 4 7 7 Con. +8
Panelbase 44 36 5 7 8 Con. +8
ORB 45 36 4 8 7 Con. +9
ComRes 44 34 5 9 7 Con. +10
ICM 46 34 5 7 7 Con. +12
BMG Research 46 33 5 8 9 Con. +13
Average 42.9 36.4 4.3 7.7 8.3 Con. +6.4

Percentages are rounded.

Sources: UK Polling Report, Wikipedia, @britainelects
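
As a quick check on the arithmetic, here is a minimal Python sketch (our own illustration, using the rounded figures from the table above and the roughly 2.4-point preliminary Conservative margin cited in the historical table below) that recomputes the final polling average and the size of the miss:

```python
# Final 2017 polls: (pollster, Conservative %, Labour %), as listed in the table above.
final_polls = [
    ("Qriously", 39, 41),
    ("Survation", 41, 40),
    ("SurveyMonkey", 42, 38),
    ("Norstat", 39, 35),
    ("YouGov.co.uk", 42, 38),
    ("Kantar Public", 43, 38),
    ("YouGov (The Times)", 42, 35),
    ("Opinium", 43, 36),
    ("Ipsos MORI", 44, 36),
    ("Panelbase", 44, 36),
    ("ORB", 45, 36),
    ("ComRes", 44, 34),
    ("ICM", 46, 34),
    ("BMG Research", 46, 33),
]

margins = [con - lab for _, con, lab in final_polls]
poll_average = sum(margins) / len(margins)

actual_margin = 2.4  # preliminary Great Britain estimate, per the historical table below
polling_error = poll_average - actual_margin

print(f"Average polled Con.-Lab. margin: {poll_average:+.1f}")  # roughly +6.4
print(f"Miss versus the actual result: {polling_error:.1f} points")  # roughly 4
```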

While a 4-point error would be fairly large in the context of a U.S. presidential election, it’s completely normal in the case of the U.K. On average in U.K. elections since World War II, the final set of polls has missed the Conservative-Labour margin by 3.9 percentage points, almost exactly in line with this year’s error.

U.K. polls haven’t been very accurate historically

YEAR POLLING AVERAGE ACTUAL RESULT ACTUAL V. POLLS
2017 Con. +6.4 Con. +2.4* Lab. +4.0
2015 Con. +0.6 Con. +6.5 Con. +5.9
2010 Con. +7.9 Con. +7.2 Lab. +0.7
2005 Lab. +6.2 Lab. +2.9 Con. +3.3
2001 Lab. +14.2 Lab. +9.4 Con. +4.8
1997 Lab. +17.5 Lab. +12.8 Con. +4.7
1992 Lab. +1.5 Con. +7.6 Con. +9.1
1987 Con. +8.1 Con. +11.7 Con. +3.6
1983 Con. +20.3 Con. +15.2 Lab. +5.1
1979 Con. +5.9 Con. +7.2 Con. +1.3
1974 (Oct.) Lab. +9.2 Lab. +3.6 Con. +5.6
1974 (Feb.) Con. +2.9 Con. +0.6 Lab. +2.3
1970 Lab. +4.1 Con. +3.4 Con. +7.5
1966 Lab. +11.2 Lab. +6.0 Con. +5.2
1964 Lab. +1.5 Lab. +0.8 Con. +0.7
1959 Con. +3.2 Con. +5.6 Con. +2.4
1955 Con. +3.3 Con. +3.2 Lab. +0.1
1951 Con. +4.5 Lab. +0.8 Lab. +5.3
1950 Con. +0.7 Lab. +2.8 Lab. +3.5
1945 Lab. +6.0 Lab. +8.0 Lab. +2.0

* 2017 results reflect a preliminary estimate. For elections since 1974, results reflect Great Britain (England, Scotland and Wales) only and not Northern Ireland. Most UK pollsters have excluded Northern Ireland from their samples in recent years.

Source: Report of the Inquiry into the 2015 British general election opinion polls, Researchbriefings.parliament.uk
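
The 3.9-point figure is simply the mean of the absolute misses listed in the table above; here is a minimal sketch of that calculation, using the “ACTUAL V. POLLS” column as printed:

```python
# Absolute error on the Con.-Lab. margin in each election's final polls, 1945-2017,
# taken from the "ACTUAL V. POLLS" column above.
margin_errors = {
    "2017": 4.0, "2015": 5.9, "2010": 0.7, "2005": 3.3, "2001": 4.8,
    "1997": 4.7, "1992": 9.1, "1987": 3.6, "1983": 5.1, "1979": 1.3,
    "1974 (Oct.)": 5.6, "1974 (Feb.)": 2.3, "1970": 7.5, "1966": 5.2,
    "1964": 0.7, "1959": 2.4, "1955": 0.1, "1951": 5.3, "1950": 3.5,
    "1945": 2.0,
}

average_miss = sum(margin_errors.values()) / len(margin_errors)
print(f"Average absolute miss: {average_miss:.1f} points")  # roughly 3.9
```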

As has been the case in several recent elections, therefore, the problem was not so much with the polls, or at least not with all of the polls. The problem was with how people were interpreting them. Betting markets on Thursday morning showed Conservatives with only about a 15 percent chance of failing to achieve a majority. In our view, this was unrealistically low given that their lead in the polling average was close to the Tories’ winning margin in 2015 (6.5 percentage points), which had barely been enough for a majority. Taken at face value, the polls suggested a result in which Conservatives took barely over half the seats, not a blowout.
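
To see why we regard 15 percent as unrealistically low, here is a back-of-the-envelope sketch (our own illustration, not a full seat model): assume the error on the Conservative-Labour margin is roughly normal with a standard deviation of about 5 points, consistent with the roughly 4-point average absolute miss in the table above, and ask how often a 6.4-point polled lead would shrink below a few illustrative cutoffs at which a majority starts to look doubtful.

```python
from math import erf, sqrt

lead = 6.4  # final polling average, Con. minus Lab.
sd = 5.0    # assumed polling-error SD, consistent with a ~4-point average absolute miss

def prob_margin_below(threshold):
    """P(actual Con.-Lab. margin < threshold) under a normal polling-error model."""
    z = (threshold - lead) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))

# Illustrative cutoffs below which a Conservative majority starts to look doubtful.
for cutoff in (6.5, 5.0, 4.0):
    print(f"P(margin < {cutoff} points) = {prob_margin_below(cutoff):.0%}")
```

Under those assumptions, every plausible cutoff puts the chance of the majority being in doubt well above 15 percent, in the same ballpark as the roughly 1-in-3 chance of a hung parliament we mention below.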

In that way, this result was much like the 2016 U.S. presidential election, when Donald Trump was just a “normal polling error” away from winning, and yet people seemed to have a lot more confidence than they should have had that Hillary Clinton was going to win. The Brexit vote provides another example of this groupthink; the London-based media found a “Leave” vote so “unthinkable” that they ignored polls showing the race almost tied. The confirmation bias wasn’t as bad in this election as it was for Brexit or Trump, but there were signs of it here and there: YouGov took a huge amount of abuse ahead of the election for daring to publish a model that (correctly, it turned out) showed a hung parliament, for example.3 Therefore, this was the latest in a string of failures for the conventional wisdom that has given rise to our half-sarcastic (but half-serious) “first rule of polling errors”: whenever the pundits try to outguess the polls, you should assume that the polls will miss in the opposite direction of what they expect.

But while pollsters had middling results on the whole, some did much better than others. Indeed, the polls in the lead-up to this election showed a wide range of outcomes, with final polls showing margins that ranged from Labour +3 to Conservatives +13. A number of pollsters had results very close to the final outcome, including Survation, Kantar Public, Norstat and SurveyMonkey. That’s far different from what occurred two years ago in the 2015 U.K. election, when all but two polls had the election within a narrow range between Labour +1 and Conservatives +1. That had been a sign of pollsters “herding” toward a consensus instead of behaving independently.
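
One rough way to check for herding is to compare the spread of the final poll margins with the spread you would expect from sampling noise alone; here is a minimal sketch (our own illustration, which assumes samples of roughly 1,500 respondents rather than any pollster's actual sample size):

```python
from statistics import pstdev

# Final 2017 Con.-Lab. margins, rounded, from the first table above.
margins_2017 = [-2, 1, 4, 4, 4, 5, 7, 7, 8, 8, 9, 10, 12, 13]

# Rough sampling noise for the margin of a single poll of ~1,500 respondents,
# treating Con. (~43%) and Lab. (~36%) shares as multinomial proportions.
n, p_con, p_lab = 1500, 0.43, 0.36
sampling_sd = 100 * ((p_con * (1 - p_con) + p_lab * (1 - p_lab) + 2 * p_con * p_lab) / n) ** 0.5

print(f"Observed spread of 2017 final margins: {pstdev(margins_2017):.1f} points")
print(f"Spread expected from sampling noise alone: {sampling_sd:.1f} points")
```

The 2017 spread (about 4 points) comfortably exceeds what sampling noise alone would produce (about 2.3 points), which is what independently behaving polls should look like. In 2015, by contrast, nearly every final poll landed between Labour +1 and Conservatives +1, a spread far smaller than sampling noise alone would generate, which is the classic signature of herding.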

Pollsters may not have herded in this election, but some of them were guilty of another sin: distrusting their own data. After the 2015 election, many pollsters applied much more aggressive turnout models, which weighted down the number of younger voters, who are generally more favorable to Labour. In turn, they boosted the percentage of older voters, who are generally more friendly toward the Conservatives.

Given the substantial age gap in this election, the turnout weighting had an especially big effect. Seven of the final polls adjusted their results in this way, in all but one case yielding a more favorable result for the Conservatives. You can see this in the table below, which compares the raw results for these seven pollsters with their adjusted results. The average difference was nearly 6 percentage points in the Tories’ favor. That’s a huge gap. In 2015, such adjustments had shifted the results by only about 1 point toward Conservatives. And likely voter models in U.S. presidential elections typically move the numbers by only a couple of percentage points in either direction (usually toward Republicans).

Pollster adjustments shifted the results toward Conservatives

POLLSTER RAW VOTING INTENTION HEADLINE VOTING INTENTION
Ipsos MORI Tied Con. +8
BMG Research Con. +1 Con. +13
Survation Con. +1 Con. +1
ICM Con. +5 Con. +12
YouGov Con. +2 Con. +7
ComRes Con. +5 Con. +10
Opinium Con. +4 Con. +7
Average Con. +2.6 Con. +8.3

Source: Huffington Post Pollster, BMG Research

These adjustments proved to be counterproductive in the U.K. The average “raw” result from these polls — which applied demographic weights but otherwise relied on voters’ self-reported likelihood to vote instead of complicated turnout models — showed a 2.6-percentage-point lead for the Conservatives, very close to their actual results. But the “headline” version of the polls, which applied the turnout models and removed or reallocated undecided voters, showed an 8.3-percentage-point lead for the Conservatives instead.
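
To make the mechanics concrete, here is a toy sketch of how a turnout model that downweights younger respondents shifts a poll's headline margin toward the Conservatives; the age groups, vote shares and turnout rates below are invented for illustration and are not any pollster's actual figures:

```python
# Hypothetical raw poll: (share of sample, Con. %, Lab. %) by age group.
# All figures are invented for illustration only.
sample = {
    "18-34": (0.28, 28, 58),
    "35-54": (0.35, 42, 38),
    "55+":   (0.37, 54, 31),
}

def weighted_margin(turnout):
    """Con.-Lab. margin after reweighting each age group by an assumed turnout rate."""
    total = sum(share * turnout[g] for g, (share, _, _) in sample.items())
    con = sum(share * turnout[g] * c for g, (share, c, _) in sample.items()) / total
    lab = sum(share * turnout[g] * l for g, (share, _, l) in sample.items()) / total
    return con - lab

# "Raw" poll: take respondents' self-reported likelihood to vote at face value.
raw = weighted_margin({"18-34": 1.0, "35-54": 1.0, "55+": 1.0})

# Turnout model: assume the young stay home at 2015-style rates while older voters turn out heavily.
modelled = weighted_margin({"18-34": 0.45, "35-54": 0.70, "55+": 0.80})

print(f"Raw margin:      Con. {raw:+.1f}")       # roughly Con. +1.5
print(f"Headline margin: Con. {modelled:+.1f}")  # roughly Con. +6.0
```

If younger voters then turn out at the higher rate they reported, as they apparently did for Corbyn, the headline number overstates the Conservative lead while the raw number was closer to the truth all along.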

Although polling the U.K. is hard, we’re not very sympathetic to the pollsters who made these adjustments. That’s because turnout models were probably not the cause of the Conservative underestimation in 2015. According to a report prepared by the British Polling Council and the Market Research Society, the error had more to do with the fact that the initial samples were unrepresentative of the overall population.

The 2017 election therefore seems to be a case of overcorrection. The pollsters apparently did a good job of weighting their raw samples, which got them fairly close to the right outcome. Then, on top of that, some of them gave extra weight to the Conservatives through their turnout models. As a result, they discounted signs of a youth-driven Labour turnout surge. As was the case in the U.S. with Bernie Sanders, younger voters turned out in a big way for Labour’s left-wing leader, Jeremy Corbyn. It’s one thing for a pollster to get an outcome wrong because voters fail to turn out when they say they will. But if voters tell you they’re going to turn out, and you ignore them, and they show up to vote anyway, you really don’t have much of a defense.

The overall theme is that many people who covered the U.K. election (whether as pollsters or pundits or journalists) were guilty of fighting the last war. It’s true that Conservatives have some history of outperforming their polls in the U.K. and had done so in 2015. Yet, those previous underestimations may have occurred just by chance or for reasons peculiar to each election. Many people were so preoccupied with not underestimating Conservatives that they failed to consider how they might be underestimating Labour instead. Thus, they largely ignored the possibility of a hung parliament despite its being an entirely realistic outcome. (We estimated the chances of it to be about 1 in 3.)


But if the results ought not to have been much of a surprise given where the polls stood on Thursday morning, what about where they stood in April, when May called the election as Conservatives led by 17 percentage points? While this is a trickier case, we’re still not sure the outcome should be considered all that shocking.

When May called the election, we noted that she was undertaking a risky move because the U.K. polls had historically been volatile (in addition to not being very accurate even at the end of the campaign). And there were some reasons to expect an especially large amount of volatility in this race. The election was unexpected, so voters had only about 50 days to get to know the candidates. May had only become prime minister last July and might still have been in her “honeymoon period” when she skated above the partisan fray — something that was bound to change once she asked voters to go to the polls again. A better economy traditionally boosts the incumbent party, but the British economy wasn’t doing all that well. And U.K. elections are typically fairly close — only three of them since World War II have been decided by double-digit popular vote margins — so there was some risk of reversion to the mean.

There were also a lot of events during the campaign, but the compressed time frame makes them hard to sort out from one another. How much did the Conservative manifesto hurt the Tories? Did terrorist attacks in Manchester and London work against them? Was May’s perceived softness toward President Trump a factor, especially after Trump began to attack London Mayor Sadiq Khan? Given the results of the French election, is there an overall resurgence toward liberal multiculturalism in Europe, perhaps as a reaction to Trump? We don’t know the answers to these questions, although we hope to explore some of them in the coming days. We do know that elections around the world are putting candidates, pollsters and the media to the test, and there isn’t a lot they can be taking for granted.

Footnotes

  1. Although there are some ambiguities on account of Sinn Fein, the Irish nationalist party, which traditionally does not take its seats in parliament, and the Speaker of the House of Commons, who traditionally does not vote.

  2. The next regularly scheduled U.K. election wasn’t until 2020.

  3. YouGov used a technique called multilevel regression and post-stratification (MRP) for its model, which estimated the results on a constituency-by-constituency basis. We think this is a good technique for an election such as this one, where few constituencies are polled individually. Critically, however, YouGov’s model was derived from polling that showed Conservatives ahead by only 4 percentage points nationally, close to their actual margin. If their polling had shown the Tories up by 10 points instead, no amount of modelling could have salvaged a decent seat forecast. By contrast, some of the models that called for Conservatives to win 350 or more seats assumed that they’d do better than the 6- or 7-point margin that the polling average showed. One model put Conservatives on 366 seats and had them winning the popular vote by almost 11 points, for example.
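
For readers curious about the mechanics, here is a deliberately simplified sketch of the post-stratification half of MRP. The real method first fits a multilevel regression to national polling to estimate support in each demographic cell; the cell estimates, constituency names and census counts below are all invented for illustration.

```python
# Simplified post-stratification: combine modelled support for each demographic cell
# with each constituency's census makeup to get a constituency-level estimate.
# In real MRP the cell-level estimates come from a multilevel regression fitted to
# national polling; the figures here are invented stand-ins.

cell_support = {  # P(votes Conservative | age, education)
    ("18-34", "degree"): 0.22, ("18-34", "no degree"): 0.35,
    ("35-64", "degree"): 0.40, ("35-64", "no degree"): 0.48,
    ("65+", "degree"): 0.52,   ("65+", "no degree"): 0.62,
}

# Hypothetical census counts of eligible voters in each cell for two made-up constituencies.
constituencies = {
    "Uniton North": {
        ("18-34", "degree"): 18000, ("18-34", "no degree"): 9000,
        ("35-64", "degree"): 16000, ("35-64", "no degree"): 14000,
        ("65+", "degree"): 5000,    ("65+", "no degree"): 8000,
    },
    "Shiresby": {
        ("18-34", "degree"): 6000,  ("18-34", "no degree"): 8000,
        ("35-64", "degree"): 10000, ("35-64", "no degree"): 20000,
        ("65+", "degree"): 7000,    ("65+", "no degree"): 19000,
    },
}

for name, cells in constituencies.items():
    total = sum(cells.values())
    con_share = sum(count * cell_support[cell] for cell, count in cells.items()) / total
    print(f"{name}: estimated Conservative share {con_share:.1%}")
```

The payoff is that the same national-level model yields different answers in constituencies with different demographic mixes, which is why the approach can produce a sensible seat count even when few constituencies are polled directly.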

Harry Enten was a senior political writer and analyst for FiveThirtyEight.

Nate Silver is the founder and editor in chief of FiveThirtyEight.
