Sunday’s French presidential election was the latest in a trend. The centrist candidate, Emmanuel Macron, beat Marine Le Pen by a considerably wider margin than most observers predicted: a 32-percentage-point landslide, larger than the 24-point margin that the final polls showed.
But the trend isn’t that center-left globalism is making a comeback — that’s too early to say.1 Instead, it’s this: When the conventional wisdom tries to outguess the polls, it almost always guesses in the wrong direction. Many experts expected Le Pen to beat her polls. Currency markets implied that she had a much greater chance — perhaps 20 percent — than you’d reasonably infer from the polls. But it was Macron who considerably outperformed his numbers instead.
While this was somewhat amusing — the one time the experts decided to take the nationalist candidate’s chances really seriously was the time she lost by 32 points — it should actually worry you, even if you’re a “fan” of polling and data-driven election forecasting. It’s a sign that the polls may be catering to the conventional wisdom, and becoming worse as a result.
This French election was part of a pattern that I began to notice two years ago in elections in the U.S. and elsewhere in the world. Take the 2012 U.S. presidential election as an example. Most of the mainstream media concluded that the race was too close to call, despite a modest but fairly robust Electoral College lead for then-President Barack Obama. But on Election Day, it was Obama who beat his polls and not Mitt Romney.2
The 2014 U.S. midterms provided another example, only in reverse. That time, the mainstream media was full of articles suggesting that polls might be “skewed” against Democrats, perhaps because they underestimated minority turnout. Republicans beat their polls by several percentage points, however, and gained nine seats in the U.S. Senate.
The pattern also replicated itself in the three highest-profile elections around the world in the past year. Ahead of the U.K.’s vote to leave the European Union, polls showed a razor-close race, with “Remain” ahead by only a percentage point or two in a country notorious for inaccurate polling. It would have been reasonable to call the race a toss-up. And yet the London-based media was highly confident that Remain would prevail and bookmakers assigned Remain about a 90 percent chance on the day of the vote. It was “Leave” that won instead, of course, by 4 percentage points.
In the U.S. last year, when Hillary Clinton’s lead narrowed following James Comey’s letter to Congress in late October, mainstream media accounts mostly waved away the tightening polls. Early voting data proved that her lead was safer than the polls implied, it was said. The campaigns’ internal polls suggested that Clinton still had a clear lead, it was reported. And pundits on the Sunday before the election boldly predicted that Clinton would beat her polls and cruise to a 5-percentage-point win. Instead, it was Donald Trump who outperformed his polls by enough to win the Electoral College.
That brings us back to France. As if to atone for past sins, the mainstream media went out of its way to indulge the possibility of a Le Pen victory. So did financial advisory firms and regional experts, some of whom put Le Pen’s odds as high as 40 percent in March and April. Her odds faded down the stretch, retreating to a range of 10 percent to 20 percent in betting and financial markets by a few days before the vote. Still, those probabilities were considerably higher than the roughly 3 percent chance that the most cautious statistical model gave her.
Forecasters are overconfident more often than they might realize — and there’s a lot to be said for media outlets erring on the side of caution until a vote has taken place. But France was the wrong hill for anything-can-happen-because-Trump! punditry to die upon. Whereas Clinton led Trump by just 3 to 4 percentage points in national polls (and by less than that in the average swing state), and “Remain” led “Leave” by only a point or so, Le Pen had consistently trailed Macron by 20 to 25 points.
Despite their vastly different polling, however, Trump, Brexit and Le Pen had all been given a 10 to 20 percent chance by betting markets — a good proxy for the conventional wisdom — on the eve of their respective elections. Experts and bettors were irrationally confident about a Clinton victory and a “Remain” victory — and irrationally worried about a Macron loss. In each case, the polls erred in the opposite direction of what the markets expected.
My purpose here isn’t to settle scores between pundits and pollsters and prediction markets, however.3 Instead, it’s to raise a concern that pollsters are being influenced by the conventional wisdom and issuing less accurate polls as a result. Here’s how I expressed it two years ago:
So … the pundits are so bad that you should literally bet against whatever they say? Even I wouldn’t go that far. I’m happy to mostly ignore them instead. The problem, though, is that because of herding, it’s become harder to get what I really want — an undiluted sample of public opinion. Instead, there’s sometimes quite a bit of conventional wisdom baked into the polls.
Here’s an example of what I mean by that. Suppose that in a certain election, the Republican really leads by 8 percentage points. The conventional wisdom, for reasons that aren’t entirely clear, wrongly insists that the race is a tie. The pollsters mostly stick to their guns, but they compromise by publishing a poll showing the Republican with a 6-point lead instead. Under conditions like these, the conventional wisdom will pull the polls slightly in the wrong direction. So if you think the conventional wisdom is worthless,4 you should guess that polls will err in the opposite direction of what the conventional wisdom expects.
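To make that arithmetic concrete, here’s a minimal simulation of the scenario above, written in Python. The herding weight and the noise level are invented for illustration; the point is only that blending a raw estimate with the conventional wisdom pulls the published average toward the wrong answer.

```python
import random

TRUE_MARGIN = 8.0      # the Republican really leads by 8 points
CW_MARGIN = 0.0        # the conventional wisdom insists the race is tied
HERDING_WEIGHT = 0.25  # how much the published number leans on the CW (assumed)

def published_poll() -> float:
    """One poll: an unbiased raw estimate, then blended toward the CW."""
    raw = random.gauss(TRUE_MARGIN, 3.0)  # true margin plus sampling noise
    return (1 - HERDING_WEIGHT) * raw + HERDING_WEIGHT * CW_MARGIN

polls = [published_poll() for _ in range(10_000)]
print(f"average published margin: R+{sum(polls) / len(polls):.1f}")  # ~R+6, not R+8
# The published polls err toward the conventional wisdom, so the actual
# result lands on the opposite side of the polls from what the CW expected.
```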
What was that about the pollster being influenced by the conventional wisdom? Aren’t pollsters supposed to be objective? Well, yes, they’re supposed to be. And the best pollsters trust their data even when it comes to an unpopular conclusion. (By “unpopular,” I mean a conclusion that differs from what journalists and other elites expect.) But pollsters also have a lot of choices to make about which turnout model to use, how to conduct demographic weighting, what to do with undecided voters, and so forth. This can make more difference than you might think. An exercise conducted by The New York Times’s Upshot blog last year gave pollsters the same raw data from a poll of Florida and found that they came to varied conclusions, showing everything from a 4-point lead for Clinton in Florida to a 1-point lead for Trump.
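Here’s a toy sketch of how those choices move the topline. This is not the Upshot’s actual method, and every number below is invented; it just shows that identical raw responses can support noticeably different published margins depending on the turnout model.

```python
# Support for each candidate among three (hypothetical) respondent groups.
support = {
    # group: (Clinton share, Trump share)
    "young":  (0.60, 0.32),
    "middle": (0.47, 0.46),
    "older":  (0.40, 0.54),
}

def margin(turnout: dict[str, float]) -> float:
    """Clinton's lead in points under a turnout model whose weights sum to 1."""
    clinton = sum(w * support[g][0] for g, w in turnout.items())
    trump = sum(w * support[g][1] for g, w in turnout.items())
    return 100 * (clinton - trump)

model_a = {"young": 0.35, "middle": 0.40, "older": 0.25}  # high youth turnout
model_b = {"young": 0.20, "middle": 0.35, "older": 0.45}  # older electorate

print(f"Model A: Clinton {margin(model_a):+.1f}")  # about +6.7
print(f"Model B: Clinton {margin(model_b):+.1f}")  # about -0.4, a hair for Trump
```

Same respondents, same answers; a pollster choosing between these models is choosing which topline to publish.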
Now suppose you’re conducting a poll of France. It’s a tricky election — lots of voters are undecided or say they’ll abstain, and neither candidate is from one of France’s traditional major parties. With one reasonable set of assumptions, you might show Macron ahead by 23 percentage points. With another, he might be up by 30 points. Which set of numbers are you going to publish? The 23-point lead is in line with the consensus of other pollsters and opinion-makers. The 30-point lead would stick out, by contrast. Do you really want to take the chance of publishing that number, especially after Brexit and Trump and when lots of smart people say that there’s a risk of underestimating Le Pen’s chances? It’s easier to go with the turnout model that shows the smaller lead.
While this is a contrived example, there are some similar real-world cases. In advance of the 2015 U.K. general election, the firm Survation declined to publish a poll showing Conservatives with a 6-percentage-point lead, fearing that it would be out of line with the consensus; other polls of the race had shown the parties in a near-tie. In fact, Conservatives won by 6.5 points over Labour, so the poll would have been spot-on. There have been similar problems with pollsters sitting on results in U.S. Senate races.
There’s also lots of statistical evidence for pollster “herding,” meaning that pollsters avoid publishing results that differ too much from what other polls show. There was considerable evidence of herding in the first round of the French election, for example, when polls for the leading candidates were remarkably consistent with one another, more so than they “should” be given sampling error. And in the second round, the range of polls narrowed as the election neared. In mid-April — before the first-round vote — polls of a prospective Macron-Le Pen runoff had shown everything from a 17-point Macron lead to a 42-point one. Among polls conducted after the candidates’ lone debate on May 3, however, the range spanned only from a 23-point Macron lead to a 26-point one.
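To see what “more consistent than they should be” means in practice: under simple random sampling, a poll of n respondents has a standard deviation of about √(p(1 − p)/n) around the true share, so even perfectly conducted independent polls should disagree by at least that much. Here’s a back-of-the-envelope check, with invented poll numbers standing in for the real French first-round data:

```python
from math import sqrt
from statistics import pstdev

polls = [23.5, 24.0, 23.8, 24.2, 23.9, 24.1, 23.7, 24.0]  # one candidate's share (%)
n = 1000  # assumed sample size of each poll

p = sum(polls) / len(polls) / 100          # average share as a proportion
expected_sd = 100 * sqrt(p * (1 - p) / n)  # spread from sampling error alone
observed_sd = pstdev(polls)                # spread actually seen across polls

print(f"expected SD from sampling error alone: {expected_sd:.2f} points")  # ~1.35
print(f"observed SD across the polls:          {observed_sd:.2f} points")  # ~0.21
# An observed spread far below the sampling-error floor is the statistical
# signature of herding: the polls agree more closely than chance would allow.
```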
Pollsters have a difficult and essential job, but they’re under a lot of pressure from media outlets that don’t understand all that much about polling or statistics and that often judge the polls’ performance incorrectly.5 They’re also under scrutiny from voters, pundits and political parties looking for reassurance about their preferred candidates. Social media can encourage conformity and groupthink and reinforce everyone’s prior beliefs, leading people to see a surfeit of evidence for a flimsy conclusion. Under these conditions, it’s easy for polls to be contaminated by the conventional wisdom instead of serving as a check on elites’ views — and to be the worse for it.