Election Update: Check Out How The Forecast Works For All 435 House Races

Welcome to our Election Update for Wednesday, Aug. 29!


Our 2018 House forecast is … complicated. The model considers a lot of information. And when you’re looking at what the model says about any particular district, you might wonder, “How in the hell is it coming to that conclusion?!”

You can find the answer on our district pages, which we published on Wednesday — 435 detailed forecasts (one for every House race) that show how the model is processing all the data that goes into it. You should click around, check out your district and explore interesting races, but first let’s walk through one race as an example.

Take California’s 25th Congressional District, where Democrat Katie Hill is facing incumbent Republican Rep. Steve Knight. First up, the top-line forecast (we’re using the Classic version of the model here):

This is all pretty self-explanatory. Hill has a 3 in 4 chance of winning the seat (a 75.1 percent chance, if you want to be more precise). Below those top-line odds, you can see the model’s projected vote share for each candidate — Hill gets an average of 52.5 percent of the vote, according to the model, and Knight gets 47.5 percent. But that’s just the average; we’re also showing a confidence interval for these vote share forecasts: In 80 percent of the model’s simulations, Hill’s vote share falls within that blue band and Knight’s falls within that red band. As you can see, those two bands have quite a bit of overlap, meaning the outcome of this race is still very much up in the air.
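If you like seeing this kind of thing spelled out, here’s a minimal sketch (in Python, with made-up simulation output — this is not FiveThirtyEight’s actual code) of how a win probability and an 80 percent vote-share band fall out of a pile of simulations:

```python
import numpy as np

# Stand-in for the model's simulations: hypothetical Democratic vote shares
# for one district (the real forecast runs many simulated elections).
rng = np.random.default_rng(0)
dem_share = rng.normal(loc=52.5, scale=3.7, size=20_000)

win_prob = (dem_share > 50).mean()            # chance the Democrat carries the seat
avg_share = dem_share.mean()                  # average projected vote share
lo, hi = np.percentile(dem_share, [10, 90])   # 80 percent of simulations fall in [lo, hi]

print(f"Win probability: {win_prob:.1%}")
print(f"Projected share: {avg_share:.1f}% (80% interval: {lo:.1f}% to {hi:.1f}%)")
```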

If you scroll down a little, you’ll see how the model arrives at those forecasted vote shares. In short, the three versions of our House model use four types of data for forecasting a race’s outcome: the polls of that district (more on these in a moment); CANTOR (a system that infers results for districts that have little or no polling available by looking at what’s going on in similar districts that do have polling); the “fundamentals” (non-polling factors like fundraising and a district’s voting history); and experts’ race ratings (from Inside Elections, Cook Political Report and Sabato’s Crystal Ball). The model calculates a forecasted margin according to each of these inputs and then combines the results:

The arrows connecting each type of data to the final forecast are sized according to how much weight the model is putting on each method. (The Classic version of our model, shown above, doesn’t use expert ratings, which is why there’s no arrow connecting the “Experts” input to the output area.) The weights the model uses for each forecasting method are based on how much data is available (if a district has only one poll, for example, the model won’t put much stock in the polling “average” there), and how reliable a predictor that method has been historically.

So, in the example above, the Classic forecast relies most on the fundamentals, which show the Democrat up by 8.9 points (that’s mostly because Hill has outraised Knight and the generic ballot favors Democrats). Then it’s adding in a healthy dollop of the district’s polls, which show the Republican up by 1 point. And finally it takes a smidgen of CANTOR, which shows the Republican up by 2 points. Combining the fundamentals, polls and CANTOR in those proportions — and then making a slight adjustment that accounts for long-term patterns in how past midterms have unfolded, which is shown in the small number toward the bottom of the arrow — the Classic forecast comes out showing the Democrat ahead by 5 points.
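To make that arithmetic concrete, here’s a back-of-the-envelope version in Python. The weights below are invented — picked only so the blend lands near the chart’s D+5 — but the inputs are the ones described above: fundamentals at D+8.9, district polls at R+1 and CANTOR at R+2, plus that slight historical adjustment.

```python
# Margins from the Democrat's point of view (positive = Democrat ahead), per the article.
inputs = {
    "fundamentals": +8.9,   # Hill's fundraising edge plus the generic ballot
    "polls":        -1.0,   # district polls: Knight up 1
    "cantor":       -2.0,   # CANTOR: Knight up 2
}

# Hypothetical weights -- chosen only so the numbers land near D+5; the real
# model sets these based on how much data each method has in this district
# and how accurate that method has been historically.
weights = {"fundamentals": 0.62, "polls": 0.28, "cantor": 0.10}

blended = sum(weights[k] * inputs[k] for k in inputs)
historical_adj = 0.0    # placeholder for the slight midterm-pattern adjustment
forecast_margin = blended + historical_adj

print(f"Classic forecast margin: D+{forecast_margin:.1f}")   # roughly D+5
```

Again, the real model estimates those weights from the amount of polling in the district and each method’s track record; the sketch just shows how the pieces add up.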

Using the left-hand navigation, you can switch between versions of our model. Here, for instance, is the Lite version:

The Lite version tries to make the best forecast it can using only the polls. Combining polls of California 25 with polls of similar districts, the forecast projects Knight to win by 1.5 points. (You can see in this case how much the fundamentals help Hill.)
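The Lite arithmetic is even simpler. As a purely illustrative guess — the actual weights aren’t published — an even split between the district’s own polls (Knight up 1) and polls of similar districts (Knight up 2) lands right on the forecast’s number:

```python
district_polls = -1.0   # Knight up 1 in California 25's own polls (Dem-minus-GOP margin)
similar_polls = -2.0    # Knight up 2 per polls of similar districts

# Hypothetical 50/50 split -- chosen only because it reproduces the article's
# Lite number; the real weights depend on how much district polling exists.
lite_margin = 0.5 * district_polls + 0.5 * similar_polls
print(f"Lite forecast margin: Knight +{-lite_margin:.1f}")   # Knight +1.5
```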

Finally, the Deluxe version, which adds in a forecasted vote margin inferred from the race ratings put together by Cook, Inside Elections and Sabato’s Crystal Ball:

The Deluxe forecast is barely looking at CANTOR in this case, and it’s instead relying on the fundamentals, the experts and the district’s polls, in that order.

Scroll down the page again and you’ll get to the last element: the polls.

This part is also pretty self-explanatory. But two things are worth noting: First, in the “Weight” column, you can see how much the model relies on each survey. The model gives more weight to better polls, more recent polls and nonpartisan polls. Second, the model makes three adjustments to each poll. You can read about those in detail here, but in short, they are (with a rough code sketch after the list):

  1. A likely voter adjustment, which translates results from polls of registered voters or all adults to an equivalent for likely voters.
  2. A timeline adjustment, which tweaks a poll’s result based on its timing and the generic ballot. If a poll came out two weeks ago and the generic ballot vastly improved for Republicans over those two weeks, the model will move the survey’s result to be a bit more favorable to the GOP candidate.
  3. A house effects adjustment, which corrects for a given pollster’s persistent statistical bias.
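
If it helps to see those adjustments as code, here’s a rough sketch. The function names, sign conventions and adjustment sizes are all hypothetical — the real adjustments are estimated from far more data — but the order follows the list above:

```python
def likely_voter_adjustment(margin, population):
    """Translate a registered-voter or all-adult poll to a likely-voter equivalent.
    The 1.5-point shift toward the GOP here is purely illustrative."""
    if population in ("registered voters", "adults"):
        return margin - 1.5
    return margin

def timeline_adjustment(margin, generic_ballot_shift_since_poll):
    """Nudge an older poll by how much the generic ballot has moved since it was
    taken (positive = movement toward Democrats)."""
    return margin + generic_ballot_shift_since_poll

def house_effects_adjustment(margin, pollster_lean):
    """Subtract a pollster's persistent lean (positive = historically too Democratic)."""
    return margin - pollster_lean

# Example: a two-week-old registered-voter poll with the Democrat up 3, from a
# pollster that historically runs about half a point too Democratic, during a
# stretch when the generic ballot moved half a point toward the GOP.
margin = 3.0                                          # Democrat +3, as published
margin = likely_voter_adjustment(margin, "registered voters")
margin = timeline_adjustment(margin, -0.5)
margin = house_effects_adjustment(margin, +0.5)
print(f"Adjusted margin: D+{margin:.1f}")             # D+0.5 after all three adjustments
```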

And that’s it! Simple, right? Please go explore the forecasts for yourself, and if you find interesting/puzzling/amazing things, send them to @538politics.



FiveThirtyEight House forecast update for August 29, 2018


Micah Cohen is FiveThirtyEight’s former managing editor.
