Can You Trust Trump’s Approval Rating Polls?

For perhaps the first time since last November's election, polls are making headlines again. Surveys from CNN, Gallup, ABC News and The Washington Post, NBC News and The Wall Street Journal, and Quinnipiac University all have Donald Trump with net-negative numbers on handling his presidential transition and duties as president-elect. On average across the five surveys, 41 percent of Americans approve of Trump's transition performance while 52 percent disapprove.

January polls give Trump poor marks for transition

  Pollster                          Approve   Disapprove
  CNN                                 40%        52%
  Gallup                              44%        51%
  ABC News / Washington Post          40%        54%
  Quinnipiac University               37%        51%
  NBC News / Wall Street Journal      44%        52%
  Average                             41%        52%

While those numbers wouldn’t be all that out of line for a sitting president’s job approval rating — almost every recent commander-in-chief has endured a slump or two — they’re unusual for a president-elect. Newly elected presidents typically enjoy high approval ratings as they transition into office. Even George W. Bush, who won the 2000 election only after a contentious recount and despite losing the popular vote, had about two-thirds of Americans approving of his transition as he prepared to take the oath of office.

And so Trump — in what’s almost certainly a sign of things to come — has pushed back against the approval-ratings polls on Twitter. Let’s let @realDonaldTrump take the dais:

One might be inclined to make a variety of rebuttals here. For instance, these particular pollsters aren’t especially good targets for Trump’s ire. Of the five pollsters, only ABC News and NBC News issued late national polls, and they were both fairly close to the mark, projecting Hillary Clinton to win the popular vote by 4 percentage points (in fact, Clinton won the popular vote by 2.1 points). Quinnipiac did issue late polls that showed Clinton narrowly ahead in Florida and North Carolina, although Trump’s narrow win in Florida was comfortably within the poll’s margin of error.

Also, these approval ratings polls are measuring opinions among all adults, instead of registered or likely voters. In theory, polls that sample all adults are less error-prone, since a pollster doesn’t have to worry about projecting who will turn out to vote.

But all of that may be getting too much into the weeds. There’s no doubt that polls took a trust hit during the campaign and that Trump is going to exploit it.

Here’s the thing. The loss of trust mostly isn’t the pollsters’ fault. It’s the media’s fault. Oh, yes, I’m going there. The loss of trust in polls was enabled, in large part, by reporting and analysis that incorrectly portrayed the polls as showing an almost-certain Clinton win when in fact they showed a close and highly uncertain Electoral College race, especially after FBI Director James B. Comey’s letter to Congress on Oct. 28.

As my colleague Harry Enten put it a few days before the election, Trump was only a normal-size polling error away from winning. Clinton would win if the polls were spot on — and she’d win in a borderline landslide in the event of an error in her favor. But the third possibility — if the polls underestimated Trump, even slightly — would probably be enough for Trump to win the Electoral College. (That’s why FiveThirtyEight’s forecast during the final week of the campaign showed Trump with roughly a 1-in-3 chance of winning the Electoral College, dipping slightly to 29 percent on Election Day itself.)

That third possibility is pretty much exactly what happened. Trump beat the final FiveThirtyEight national polling average by only 1.8 percentage points. Meanwhile, he beat the final FiveThirtyEight polling average in the average swing state — weighted by its likelihood of being the tipping-point state — by 2.7 percentage points. (The miss was larger than that in Wisconsin, Michigan and Pennsylvania, but Clinton met or slightly exceeded her polls in several other swing states.) This was nothing at all out of the ordinary. The polls were about as accurate as they'd been, on average, in presidential elections since 1968. They were somewhat more accurate than they'd been in the most recent federal election, the 2014 midterms. But the errors were enough to tip the election to Trump because Clinton had been in a precarious position to begin with.

Yep, this is opening up a can of worms. And get ready, because we're going to be uncorking a giant, industrial-size vat of worms later this week, with a series of articles that ask why the conventional wisdom was so sure Clinton would win when (in our view, anyway) that conclusion wasn't justified based on the polls. The answers turn out to be pretty interesting — and complicated — so I'll save the detail for later.

In the meantime, with polls in the news again, I’d urge my journalistic colleagues to do a better job of reporting on uncertainty when they report on polling data. Not only do polls have a margin of sampling error — for instance, the margin of sampling error on CNN’s poll of 1,000 adults is plus or minus 3 percentage points — but they also have other types of errors, such as nonresponse bias. The people who respond to polls — often under 10 percent of the population contacted — may not be representative of the population as a whole, and that creates a lot of challenges.
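The plus-or-minus-3-point figure for a 1,000-person poll follows from the standard formula for the 95 percent margin of sampling error. A minimal sketch (the function name and the worst-case assumption of a 50/50 split are mine, not the article's):

```python
import math

def sampling_moe(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a simple
    random sample of size n at proportion p (p=0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# CNN's poll of 1,000 adults: about plus or minus 3 points
print(round(sampling_moe(1000), 1))  # -> 3.1
```

This covers only sampling error; as the text notes, nonresponse bias and other methodological problems come on top of it.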

These types of errors are harder to quantify, but as an empirical matter[1] they probably work out to an additional margin of error of 2.5 to 3 percentage points for national polls. Call this figure the margin of methodological error (this is my term, not one in common use). The error can be a lot higher under some circumstances, such as when measuring voter preferences during a low-turnout primary, or for subsamples of hard-to-reach populations. But since our focus is on national approval ratings polls, we'll ignore those complications for now.

So then, to calculate the overall error in a poll — what I'll call the true margin of error — you add the margin of sampling error and the margin of methodological error together, right? No, not quite. Because the two error sources are roughly independent, you instead combine them with a root-sum-of-squares formula: square each one, add them, and take the square root. To save you some math, here are a few useful benchmarks:

  • For a high-quality, 1,000-person national poll, a good estimate of the true margin of error is about plus or minus 4 percentage points.
  • For a national polling average, meanwhile, the true margin of error is about plus or minus 3 percentage points. It’s true that polling averages can greatly reduce sampling error, by aggregating thousands of interviews together. But polling averages don’t necessarily eliminate methodological error. When the polls are wrong, they tend to miss in the same direction.
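The benchmarks above can be reproduced with a few lines of arithmetic. This sketch assumes a methodological error of 2.75 points (the midpoint of the article's 2.5-to-3-point range) and roughly 1 point of residual sampling error for a polling average — both figures are my assumptions for illustration:

```python
import math

def true_moe(sampling, methodological):
    # Independent error sources combine in quadrature
    # (root sum of squares), not by simple addition.
    return math.sqrt(sampling**2 + methodological**2)

# Single high-quality 1,000-person poll:
# ~3 points sampling error + ~2.75 points methodological error
print(round(true_moe(3.0, 2.75), 1))  # -> 4.1, i.e. roughly plus or minus 4

# National polling average: sampling error mostly averaged away (~1 point),
# but methodological error largely remains
print(round(true_moe(1.0, 2.75), 1))  # -> 2.9, i.e. roughly plus or minus 3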

There’s one other critical distinction that people often miss. The margin of error, as traditionally described, applies only to one candidate’s vote share (“Clinton has 47 percent of the vote”) or one side of a yes/no question (“41 percent of voters approve of Trump’s performance”). The margin of error for the difference between two candidates (“Clinton leads Trump by 5 percentage points”) — or a candidate’s net approval rating (“Trump has a negative-10 approval rating”) — is roughly twice as high:

  • That means for a high-quality, 1,000-person national poll, the true margin of error for the margin between candidates — or a candidate’s net approval rating — is about 8 percentage points.
  • And for a national polling average, the true margin of error for the margin between candidates, or a candidate’s net approval rating, is about 6 percentage points.
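The doubling rule comes from the fact that polling error tends to shift support from one side directly to the other, so a point subtracted from one share is roughly a point added to the other. A quick sketch applying it to the net-approval figure from the polling average above (the function is mine, for illustration):

```python
def net_range(approve, disapprove, share_moe):
    """Plausible range for a net approval rating, given a per-share true
    margin of error. The error on the net figure is roughly twice the
    per-share error, since the two shares tend to move in opposition."""
    net = approve - disapprove
    net_moe = 2 * share_moe
    return net - net_moe, net + net_moe

# Polling average above: 41% approve, 52% disapprove,
# plus or minus 3 points of true error per share
print(net_range(41, 52, 3))  # -> (-17, -5): net approval -11, give or take 6
```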

Whenever you see an article that cites polling data, you should add or subtract the true margin of error and consider how the story would change. For instance, the polling average we calculated above had Trump's approval rating at 41 percent. The true margin of error on this number, based on the rules of thumb above, is about plus or minus 3 points. What if Trump's approval rating were really 44 percent? Or 38 percent? How much would this change the story? In this case, I'd suggest, it wouldn't change the story all that much. Trump would still be unusually unpopular for a president-elect.

By contrast, national polling averages during the final week of the campaign had Clinton up by 3 to 4 percentage points. By the rules above, the true margin of error on this number was about plus or minus 6 points. That means Clinton could really have been ahead by 9 to 10 percentage points — or that Trump could have been up by 2 to 3 points.[2] The story would be completely different, in other words, based on even modest errors in the polling. But very little of the horse-race coverage that I read conveyed that sense of uncertainty.

You can, of course, seek to describe uncertainty with probabilities (“Clinton has a 71 percent chance of winning”) instead of with words (“Clinton’s favored, but it’s anybody’s race”). Obviously, probabilities are our preferred way of doing things around here, and they can work great for certain readers while producing confusion, or misinterpretation, among others. That’s another theme I’ll explore in our upcoming series.

But so many of the articles I read toward the end of last year’s campaign didn’t convey any sense of uncertainty at all. A small Clinton lead was misreported as a sure thing. And then a small polling error was misreported as a massive failure of the data. It’s a fairly minor part of the puzzle, but if journalists want to rebuild trust in their reporting, ending the boom-and-bust cycle in how they report on polling — first overrating its precision and then being shocked when it’s even a couple of percentage points off — would be one way to start. Doing so would make it harder for Trump, or other politicians, to undermine confidence in polls they don’t like.


  [1] As measured by the historical accuracy of final national polling averages for the presidential popular vote, for example.

  [2] And there's even a small chance the true result could have fallen outside that range. The margin of error is intended to cover 95 percent of possible outcomes, but not 100 percent.

Nate Silver is the founder and editor in chief of FiveThirtyEight.