How Accurate Are State Polls? And What Could That Mean For November?

Our latest recap of the accuracy of polling in American elections reinforced something we’ve been saying for a while: The polls are all right. In particular, polls of presidential general elections have historically been quite accurate.

Polls conducted in the final 21 days before the last five presidential general elections had a weighted average error of 4.0 points.1 (We define error as the absolute difference between a poll’s margin between the top two candidates and the actual vote share margin. For example, if a poll gave the Republican candidate a lead of 3 percentage points but the Democrat won the election by 2 points, that poll had a 5-point error.) And even in 2016, when many people blamed the polls for not predicting President Trump’s victory, the polls within 21 days of the election performed decently well, with a weighted average error of 4.9 points.
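The error metric defined above can be sketched in a couple of lines (a hypothetical illustration of the arithmetic, not FiveThirtyEight's actual code; here positive margins favor the Republican):

```python
def poll_error(poll_margin, actual_margin):
    """Absolute difference between a poll's top-two margin and the
    actual vote-share margin, in percentage points."""
    return abs(poll_margin - actual_margin)

# The article's example: a poll shows the Republican up 3 points,
# but the Democrat wins by 2 (an actual margin of -2 from the
# Republican's side), for a 5-point error.
print(poll_error(3, -2))  # prints 5
```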

But in the United States, the Electoral College picks the winner of presidential elections, which means state polls are what really matter. And state polls did have an off year in 2016 (though their error was still pretty close to the long-term average): They had a weighted average error of 5.3 points, compared with 3.2 points for national polls. Plus, that error systematically overestimated Democrats: State polls had a weighted average statistical bias (a metric that tells us which direction the error ran) of 3.5 points toward Hillary Clinton.
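Unlike error, statistical bias keeps the sign of the miss, which is how it can tell us which party the polls favored. A minimal sketch (a hypothetical sign convention for illustration: positive values mean the poll overstated the Democrat):

```python
def poll_bias(poll_dem_margin, actual_dem_margin):
    """Signed miss in percentage points: positive means the poll
    overstated the Democrat's margin; negative means it overstated
    the Republican's."""
    return poll_dem_margin - actual_dem_margin

# A poll that had the Democrat up 5 points in a race the Democrat
# actually won by 1.5 points was biased 3.5 points toward Democrats.
print(poll_bias(5.0, 1.5))  # prints 3.5
```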

State polls are more error-prone than national polls

Weighted average error and statistical bias of national and state polls in the final 21 days before presidential general elections, among polls in FiveThirtyEight’s pollster ratings database

                 Weighted Avg. Error      Weighted Avg. Statistical Bias
Cycle            National    State        National    State
2000             3.8         4.6          R+3.1       R+2.2
2004             2.2         3.5          D+0.9       D+1.2
2008             2.2         3.8          0.0         D+1.2
2012             3.3         3.7          R+3.2       R+2.3
2016             3.2         5.3          D+2.4       D+3.5
All years        2.9         4.3          R+0.5       D+0.6

Averages are weighted by the square root of the number of polls that a particular pollster conducted in that particular cycle. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.

However, it is not unusual for state polls to be less accurate than national ones (although they did diverge a lot in 2016). Since 2000, state polls have a weighted average error of 4.3 points, while national polls have a weighted average error of 2.9 points. Some states have more accurate polls than others, though. Thanks to our pollster ratings data set, we can quantify which states’ presidential general election polls are the most and least accurate, which can help us better understand the state polls we’ll get later this year.

Below is a table of the weighted average error and statistical bias of each state’s polls in the last five presidential general elections for states that had at least 15 polls conducted during the last 21 days of the campaign. This eliminates states where the sample size of polls is too small to draw meaningful conclusions and narrows the list to each election’s main battleground states (which tend to get polled the most often anyway).

Swing state polls are usually pretty good

Weighted average error and statistical bias of state polls in the final 21 days before presidential general elections, for states with at least 15 such polls, among polls in FiveThirtyEight’s pollster ratings database

2000 Election
State            No. of Polls   Wtd. Avg. Error   Wtd. Avg. Bias
Michigan         21             4.1               R+3.6
Pennsylvania     21             3.8               R+2.3
New York         17             7.7               R+7.7
Florida          15             3.3               D+1.0

2004 Election
State            No. of Polls   Wtd. Avg. Error   Wtd. Avg. Bias
Florida          28             4.9               D+4.3
Ohio             25             3.4               D+2.5
Pennsylvania     24             2.0               R+0.1
Iowa             16             2.8               R+0.5

2008 Election
State            No. of Polls   Wtd. Avg. Error   Wtd. Avg. Bias
Ohio             31             3.5               D+1.1
Florida          29             2.1               R+0.3
North Carolina   28             2.4               D+1.6
Pennsylvania     28             2.4               R+0.8
Virginia         24             2.2               D+0.1
Missouri         18             1.4               0.0
Colorado         16             3.4               R+1.9
New Hampshire    16             4.8               D+1.1
Nevada           15             7.2               R+7.2

2012 Election
State            No. of Polls   Wtd. Avg. Error   Wtd. Avg. Bias
Ohio             44             1.8               R+0.2
Florida          32             2.4               R+1.9
Virginia         27             3.4               R+3.0
Colorado         21             4.0               R+3.9
Wisconsin        21             2.9               R+2.5
Iowa             19             3.8               R+3.6
New Hampshire    19             3.6               R+3.3

2016 Election
State            No. of Polls   Wtd. Avg. Error   Wtd. Avg. Bias
Florida          36             3.8               D+2.9
Pennsylvania     29             4.8               D+4.6
North Carolina   26             5.7               D+5.2
New Hampshire    21             4.8               D+3.2
Nevada           21             3.1               R+1.8
Michigan         20             4.7               D+4.4
Virginia         19             2.8               D+1.3
Ohio             18             5.8               D+5.8
Arizona          17             2.4               D+1.5
Colorado         16             2.7               R+1.3
Georgia          16             3.0               D+1.8

Averages are weighted by the square root of the number of polls that a particular pollster conducted in that particular cycle. Polls that are banned by FiveThirtyEight because we know or suspect they faked data are excluded from the analysis.

The first thing that might jump out at you is the trouble spots in the 2016 election — many state polls had a statistical bias toward Democrats, for instance. In Michigan, polls had a weighted average statistical bias of 4.4 points toward Democrats, and in Pennsylvania, polls had a weighted average statistical bias of 4.6 points toward Democrats. North Carolina and Ohio got less attention but were even more inaccurate.

But setting 2016 aside, the most consistent story the table tells is how good swing-state polls typically are. Their weighted average error is usually not that high — even in the states that tripped us up in 2016. In 2008, for example, North Carolina polls had a weighted average error of just 2.4 points. And while Ohio polls missed the mark in 2016, they were dead on in 2012 (1.8 points). The weighted average error of Florida polls has been below 4.0 in four out of the five elections. Pennsylvania polls exhibited very low weighted average errors of 2.0 (in 2004) and 2.4 (in 2008) before their off year in 2016. And the polls of other likely 2020 swing states, such as Arizona (2.4 in 2016) and Wisconsin (2.9 in 2012), have also boasted very low weighted average errors in the past.

None of these states appears to be systematically difficult to poll either. Just as polls overall are not getting any more or less accurate over time, the error and statistical bias of state polling bounce around unpredictably from year to year. While there are some reasons to be concerned that some flaws in state-level 2016 polling, including a lack of weighting by education, have not been fixed, the accuracy of a state's polls in one election does not appear to have any bearing on their accuracy in the next.

So as we enter the thick of the 2020 general election, we can be secure in the knowledge that polls of swing states are about as trustworthy as polls can get — though of course, polls aren’t perfect. Even an error of 3 points can be the difference in a close election.


Footnotes

  1. To avoid giving any one prolific pollster too much influence, we weight our average by the number of polls each pollster conducted. Specifically, the weights are based on the square root of the number of polls that a firm conducted. For instance, a pollster that conducted 16 polls of a given type of election in a given cycle would be weighted four times as heavily as a pollster that conducted just one poll. Pollsters that are banned by FiveThirtyEight because we know or suspect that they faked data are excluded from all calculations in this article. The cutoff for a poll’s inclusion is whether its median field date fell within 21 days of the election.
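The square-root weighting described in this footnote can be sketched like so (a hypothetical illustration with made-up pollster data, not FiveThirtyEight's actual code):

```python
import math

def weighted_average_error(errors_by_pollster):
    """errors_by_pollster maps each pollster to a list of its per-poll
    errors. Each pollster's mean error is weighted by the square root
    of its poll count, so a 16-poll firm counts four times as much as
    a one-poll firm, not sixteen times as much."""
    num = den = 0.0
    for errors in errors_by_pollster.values():
        weight = math.sqrt(len(errors))
        num += weight * (sum(errors) / len(errors))
        den += weight
    return num / den

# Made-up data: a prolific firm (16 polls, 2-point average error)
# and a one-off firm (one poll, 7-point error).
polls = {"Firm A": [2.0] * 16, "Firm B": [7.0]}
print(weighted_average_error(polls))  # (4*2.0 + 1*7.0) / 5 = 3.0
```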

Nathaniel Rakich is FiveThirtyEight’s elections analyst.
