How Our NBA Predictions Work

The Details

FiveThirtyEight’s NBA predictions have gone through quite an evolution over the years.

Our first iteration simply relied on Elo ratings, the same old standby rating system we’ve used for college and pro football, college basketball, baseball, soccer, Formula One racing and probably some other sports we’re forgetting. Basic Elo is generally useful — particularly when tracking teams’ trajectories throughout history — but it only knows who won each game, the margin of victory and where the game was played. So if a player is injured or traded — or resting, as is increasingly the case in the NBA — Elo wouldn’t be able to pick up on that when predicting games or know how to account for that in a team’s ratings going forward. In fact, even if a team simply made a big offseason splash (such as signing LeBron James or Kevin Durant), Elo would take a long time to figure that out, since it must infer a change in team talent from an uptick in on-court performance.

To try to address that shortcoming, in 2015 we introduced a system we called “CARM-Elo.” This still used the Elo framework to handle game results, but it also used our CARMELO player projections to incorporate offseason transactions into the initial ratings for a given season. In a league like the NBA, where championships now feel like they’re won as much over the summer as during the season itself, this was an improvement. But it still had some real problems knowing which teams were actually in trouble heading into the playoffs and which ones were simply conserving energy for the games that matter. Since a team’s underlying talent is sometimes belied by its regular-season record — particularly in the case of a superteam — an Elo-based approach to updating ratings on a game-to-game basis can introduce more problems than it actually solves.

Moving beyond Elo

One attempt to salvage CARM-Elo was to apply a playoff experience adjustment for each team, acknowledging the NBA’s tendency for veteran-laden squads to play better in the postseason than we’d expect from their regular-season stats alone. This also helped some, but CARM-Elo still had problems with mega-talented clubs (such as the 2017-18 Golden State Warriors) that take their foot off the gas pedal late in the NBA’s long regular season. It was clear our prediction system needed a major overhaul, one that involved moving away from Elo almost completely.

As we hinted at in our preview post for the 2018-19 season, we made some big changes to the way we predict the league that year. Chief among them is that our team ratings are now entirely based on our player forecasts (which come from the projection algorithm formerly known as “CARMELO”). Specifically, each team is judged according to the current level of talent on its roster and how much that talent is expected to play going forward. Here’s how each of those components works:

Talent ratings

At their core, our player projections forecast a player’s future by looking to the past, finding the most similar historical comparables and using their careers as a template for how a current player might fare over the rest of his playing days. After running a player through the similarity algorithm, we produce offensive and defensive ratings for his next handful of seasons, which represent his expected influence on team efficiency (per 100 possessions) while he’s on the court.

The player ratings are currently based on our RAPTOR metric, which uses a blend of basic box score stats, player tracking metrics and plus/minus data to estimate a player’s effect (per 100 possessions) on his team’s offensive or defensive efficiency. Historical RAPTOR ratings are estimated for players before 2014 using a regression to predict RAPTOR from the more basic stats that were kept in the past. For current players, you can find their RAPTOR metrics in the individual forecast pages under the player’s offensive rating and defensive rating. During the 2019-20 season, we used a “predictive” variant of RAPTOR to generate the player ratings, but subsequent testing showed that standard RAPTOR is much better to use for this purpose. So now we use just one version of RAPTOR for both measuring performance and predicting it going forward. (Model tweak: Dec. 17, 2020)

These RAPTOR ratings provide a prior for each player as he heads into the current season. But they must also be updated in-season based on a player’s RAPTOR performance level as the year goes on. To do that, we assign a weight to the prior that is relative to 1 minute of current-season performance, varying based on a player’s age and previous experience. (Young players and/or rookies will see their talent estimates update more quickly than veterans who have a large sample of previous performance.)

These talent ratings will update every day throughout the regular season and playoffs, gradually shifting over time based on how a player performs during the season. As a change from the 2019-20 season, we have tweaked the updating process slightly to make the talent ratings more stable during the early stages of the regular season and playoffs. Now, we don’t adjust a player’s rating based on in-season RAPTOR data at all until he has played 100 minutes, and the current-season numbers are phased in more slowly between 100 and 1,000 minutes during the regular season (or 750 for the playoffs). (Model tweak: Dec. 17, 2020)
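As a rough sketch of that phase-in, here is a hypothetical blending function. The exact weighting curve isn't published, and the per-player prior weight varies by age and experience, so the linear ramp and the `prior_weight` value below are assumptions:

```python
def blended_talent(prior, in_season, minutes, prior_weight=750.0, full_phase_in=1000.0):
    """Blend a player's preseason RAPTOR prior with his in-season rating.

    Assumptions: a linear ramp between 100 minutes and `full_phase_in`
    (1,000 in the regular season, 750 in the playoffs), and a prior
    weight expressed in minute-equivalents (larger for veterans with a
    big sample of past performance, smaller for rookies).
    """
    # Below 100 minutes, the prior is used alone.
    if minutes < 100:
        return prior
    # Phase the current-season sample in linearly up to full_phase_in minutes.
    ramp = min(1.0, (minutes - 100) / (full_phase_in - 100))
    effective_minutes = minutes * ramp
    # Minutes-weighted mix of the prior and the current-season rating.
    return (prior_weight * prior + effective_minutes * in_season) / (
        prior_weight + effective_minutes
    )
```

A young player would get a smaller `prior_weight`, so his in-season numbers would move the estimate faster.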

Overnight updates

Because our data sources for player ratings don’t update individual statistics immediately after the end of every game, we added a function to preliminarily estimate the changes to a team’s rating as soon as a game ends. For each player in our database, we adjust his offensive and defensive ratings up or down very slightly after each game based on his team’s margin of victory relative to our forecast’s expectation going into the game. These numbers add up at the team level to reflect how we predict that a team’s ratings will change in the wake of a given result.

The advantage of this is that we can provide an instant update to the model as soon as a game ends. However, since these estimates are stopgaps, they will be changed to the full RAPTOR-based ratings from above when the data from those sources updates. After any given game, these differences should be small and generally barely noticeable. But we think this change will be particularly worthwhile in the playoffs, when team odds can shift dramatically based on a single game’s result.

Playing-time projections

Now that we have constantly updating player ratings, we also need a way to combine them at the team level based on how much court time each player is getting in the team’s rotation.

For CARM-Elo’s preseason ratings, we used to accomplish this by manually estimating how many minutes each player would get at each position. Needless to say, this is a lot more work to do in-season (and it requires a lot of arbitrary guesswork). So as part of our move toward algorithmizing our predictions in a more granular way, we developed a program that turns simple inputs into a matrix of team minutes-per-game estimates, broken down by position.

This system requires only three inputs: a categorized list of players on a given team, grouped by playing-time preference; a list of eligible positions each player is allowed to play (the system will assign minutes at every player’s “primary” position or positions first, before cycling back through and giving minutes at any “secondary” positions when necessary to fill out the roster); and a set of minutes constraints based largely on our updating forecasted minutes-per-game projections.

For that last part, we have developed an in-season playing-time projection similar to the one we use to update our individual offensive and defensive ratings. For each player, our player forecasts will project a preseason MPG estimate based on his own history and the record of his similar comparables. We then adjust that during the season by applying a weight of 12.6 games to the preseason MPG projection, added to his current-season minutes and divided by 12.6 plus his current-season games played. (Interestingly, this implies that the amount of weight the MPG prior receives is the same regardless of whether the player is a fresh-faced rookie or a grizzled veteran.)
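That update can be written directly from the description above:

```python
def updated_mpg(preseason_mpg, season_minutes, games_played, prior_games=12.6):
    """In-season minutes-per-game projection.

    The preseason MPG projection gets a fixed weight of 12.6 games,
    regardless of the player's experience level; that is added to his
    current-season minutes and divided by 12.6 plus games played.
    """
    return (prior_games * preseason_mpg + season_minutes) / (prior_games + games_played)
```

For example, a player projected for 30 MPG who has actually averaged 36 over his first 10 games would project at about 32.7 MPG going forward.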

Those minutes are used as the default for our program, which then automatically creates a team’s depth chart and assigns minutes by position according to its sorting algorithm. The defaults, however, can and will be tweaked by our staffers to help the program generate more accurate rosters. For instance, we can mark certain games in which a player is injured, resting, suspended or otherwise unavailable, which will tell the program to ignore that player in the team’s initial rank-ordered list of players before allocating minutes to everyone else. (We also have a method of penalizing a player’s talent ratings if he is forced to play significantly more MPG than his updated player projection recommends.) As of the 2020-21 season, there is even a “load management” setting that allows certain stars to be listed under a program of reduced minutes during the regular season.

For the 2022-23 season (model tweak: Oct. 14, 2022), we added another component to this process: a “history-based” tally of recent MPG for each player (based on how much he’s been seeing the court in the past 15 days, including up to five games of data, and his projected availability for the forecasted game). This rolling average is then blended with the depth chart-based algorithmic MPG projection on a game-to-game basis, based on how soon the game in question is being played. For a game being played today, for instance, the history-based forecast will get 60 percent weight when projecting a player’s minutes, while our classic depth chart-based projection will get 40 percent weight. This gradually changes over time until, for games 15 days in the future and beyond, the history-based forecast gets 0 percent weight and the depth chart-based projections get 100 percent weight. (This rolling average resets at the beginning of the regular season and playoffs.)

Also new for 2022-23 (model tweak: Oct. 14, 2022), short-term injuries and player movement will be automated using ESPN’s data, helping us better stay on top of daily roster changes. Through this system, we will be able to account for most injuries, trades and other player movement throughout the season on a game-by-game basis.

Because of the differences between a team’s talent at full strength and after accounting for injuries, we list two separate team ratings on our interactive page: “Current Rating” and “Full-Strength Rating.” Current is what we’re using for the team’s next game and includes all injuries or rest days in effect at the moment. Full-strength is the team’s rating when all of its key players are in the lineup, even including those who have been ruled out for the season. This will help us keep tabs on which teams are putting out their best group right now, and which ones have room to improve at a later date (i.e., the playoffs) or otherwise are more talented than their current lineup gives them credit for.

Game predictions

Because we can generate separate depth charts for every team on a per-game basis, we can calculate separate strength ratings for the teams in a matchup depending on who is available to play.

For a given lineup, we combine individual players’ talent ratings into a team rating on both sides of the ball by taking the team’s average offensive and defensive rating (weighted by each player’s expected minutes) multiplied by 5 to account for five players being on the court at all times. This number is then multiplied by a scalar — 0.8 for the regular season and 0.9 for the playoffs — to account for diminishing returns between a team’s individual talent and its on-court results. We also estimate a team’s pace (relative to league average) using individual ratings that represent each player’s effect on team possessions per 48 minutes.
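In code, that combination step looks roughly like this (the function and parameter names are ours; player ratings represent per-100-possession influence on one side of the ball):

```python
def team_rating(player_ratings, expected_minutes, playoffs=False):
    """Combine per-player ratings into a team rating on one side of the ball.

    Minutes-weighted average of player ratings, times 5 (five players on
    the court at all times), times a diminishing-returns scalar of 0.8
    in the regular season or 0.9 in the playoffs.
    """
    total_minutes = sum(expected_minutes)
    weighted_avg = sum(
        r * m for r, m in zip(player_ratings, expected_minutes)
    ) / total_minutes
    scalar = 0.9 if playoffs else 0.8
    return weighted_avg * 5 * scalar
```

A lineup of five players each rated +1.0 and playing equal minutes would grade out as a +4.0 regular-season unit, or +4.5 in the playoffs.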

Those numbers are then converted into expected total points scored and allowed over a full season, by adding a team’s offensive rating to the league average rating (or subtracting it from the league average on defense), dividing by 100 and multiplying by 82 times a team’s expected pace factor per 48 minutes. Finally, we combine those projected points scored and allowed into a generic expected “winning percentage” via the Pythagorean expectation. In the regular season, the exponent used is 14.3:

\(\text{Winning Percentage} = \frac{(\text{Projected Points Scored})^{14.3}}{(\text{Projected Points Scored})^{14.3}+(\text{Projected Points Allowed})^{14.3}}\)

In the playoffs, the exponent is 13.2. The league ratings come from efficiency and pace data; in 2018-19, the league average offensive efficiency was 108.44 points per 100 possessions and the average pace was 101.91 possessions per 48 minutes. In the playoffs, we multiply the average pace factor by 0.965 to account for the postseason being slightly slower-paced than the regular season.

After arriving at an expected winning percentage, that number is then converted into its Elo rating equivalent via:

\(\text{Elo Team Rating} = 1504.6 - 450 \times \log_{10}((1 / \text{Winning Percentage}) - 1)\)
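The two formulas above translate directly:

```python
import math

def pythagorean_wpct(points_for, points_against, exponent=14.3):
    """Pythagorean expected winning percentage.

    The exponent is 14.3 in the regular season and 13.2 in the playoffs.
    """
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)

def elo_from_wpct(wpct):
    """Convert an expected winning percentage into its Elo rating equivalent."""
    return 1504.6 - 450 * math.log10(1 / wpct - 1)
```

A team projected to score exactly as many points as it allows comes out at a .500 winning percentage, which maps to an Elo rating of 1504.6.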

In a change starting with 2020-21 (model tweak: Dec. 17, 2020) — and a bit of a return to our roots — we also mix in our old friend, the standard Elo rating, to complement each team’s pure player-ratings-based talent estimate. Extensive testing during the 2020 offseason showed that giving Elo about 35 percent weight (and RAPTOR talent 65 percent) produces the best predictive results for future games, on average. But this varies by team, depending on how much the current roster contributed to that Elo rating. So we vary the weight given to Elo by anywhere from 0 to 55 percent, based on the continuity between a team’s current projected depth chart and its recent lineups.

From there, we predict a single game’s outcome the same way we did when CARM-Elo was in effect. That means we not only account for each team’s inherent talent level, but we also make adjustments for home-court advantage (the home team gets a boost of about 70 rating points, which is based on a rolling 10-year average of home-court advantage and changes during the season; model tweak: Oct. 14, 2022),1 fatigue (teams that played the previous day are given a penalty of 46 rating points), travel (teams are penalized based on the distance they travel from their previous game) and altitude (teams that play at higher altitudes are given an extra bonus when they play at home, on top of the standard home-court advantage). A team’s odds of winning a given game, then, are calculated via:

\(\text{Win Probability} = 1 /\left(10^{-(\text{Team Rating Differential} + \text{Bonus Differential}) / 400} + 1\right)\)

Where Team Rating Differential is the team’s Elo talent rating minus the opponent’s, and the bonus differential is just the difference in the various extra adjustments detailed above.
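Putting the rating differential and the situational bonuses together:

```python
def win_probability(team_elo, opp_elo, team_bonus=0.0, opp_bonus=0.0):
    """Single-game win probability from Elo-style talent ratings.

    Bonuses follow the adjustments described above: roughly +70 for home
    court, -46 for playing the previous day, plus travel and altitude
    adjustments.
    """
    diff = (team_elo - opp_elo) + (team_bonus - opp_bonus)
    return 1 / (10 ** (-diff / 400) + 1)
```

Two evenly rated teams on neutral terms are a coin flip; give one side the ~70-point home bonus and its win probability rises to roughly 60 percent.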

Season simulations and playoff adjustments

Armed with a list of injuries and other transactions for the entire league, our program can spit out separate talent ratings for every single game on a team’s schedule. For instance, if we know a player won’t be available until midseason, the depth-chart sorting algorithm won’t allow him to be included on a team’s roster — and therefore in the team’s talent ratings — until his estimated return date.

Those game-by-game talent ratings are then used to simulate out the rest of the season 50,000 times, Monte Carlo-style. The results of those simulations — including how often a team makes the playoffs and wins the NBA title — are listed in our NBA Predictions interactive when it is set to “RAPTOR Player Ratings” mode.

It’s important to note that these simulations still run “hot,” like our other Elo-based simulations do. This means that after a simulated game, a team’s rating is adjusted upward or downward based on the simulated result, which is then used to inform the next simulated game, and so forth until the end of the simulated season. This helps us account for the inherent uncertainty around a team’s rating, though the future “hot” ratings are also adjusted up or down based on our knowledge of players returning from injury or being added to the list of unavailable players.
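A toy version of a “hot” simulation, using an illustrative Elo-style update with K = 20 and a flat 70-point home-court bonus (the real model’s in-simulation adjustments, including injury-driven rating changes, are more involved):

```python
import random

def simulate_season(ratings, schedule, n_sims=1000, k=20.0):
    """Monte Carlo season simulation that runs 'hot'.

    After each simulated game, the winner's rating rises and the loser's
    falls, feeding into later simulated games. `schedule` is a list of
    (home, away) team keys; returns each team's share of simulated titles
    (here, simply most wins; ties broken arbitrarily in this sketch).
    """
    titles = {t: 0 for t in ratings}
    for _ in range(n_sims):
        r = dict(ratings)            # fresh copy of ratings per simulation
        wins = {t: 0 for t in ratings}
        for home, away in schedule:
            p_home = 1 / (10 ** (-(r[home] - r[away] + 70) / 400) + 1)
            home_won = random.random() < p_home
            # 'Hot' update: adjust ratings based on the simulated result.
            delta = k * ((1.0 if home_won else 0.0) - p_home)
            r[home] += delta
            r[away] -= delta
            wins[home if home_won else away] += 1
        titles[max(wins, key=wins.get)] += 1
    return {t: titles[t] / n_sims for t in ratings}
```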

For playoff games, we make a few special changes to the team rating process explained above. For one thing, teams play their best players more often in the playoffs, so our depth-chart algorithm has leeway to bump up a player’s MPG in the postseason if he usually logs a lot of minutes and/or has a good talent rating. As part of the forecasting process, our algorithm outputs a separate recommended-minutes-per-game projection for both the regular season and the playoffs.

We also have added a feature whereby players with a demonstrated history of playing better (or worse) in the playoffs will get a boost (or penalty) to their offensive and defensive talent ratings in the postseason. For most players, these adjustments are minimal at most, but certain important players — such as LeBron James — will be projected to perform better on a per-possession rate in the playoffs than the regular season. (Truly, he will be in “playoff mode.”) These effects will also update throughout the season, so a player who has suddenly performed better during the postseason than the regular season will see a bump to his ratings going forward.

And we continue to give a team an extra bonus for having a roster with a lot of playoff experience. We calculate a team’s playoff experience by averaging the number of prior career playoff minutes played for each player on its roster, weighted by the number of minutes the player played for the team in the regular season. For every playoff game, this boost is added to the list of bonuses teams get for home court, travel and so forth, and it is used in our simulations when playing out the postseason.
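That experience calculation is a simple weighted average:

```python
def team_playoff_experience(players):
    """Average prior career playoff minutes across a roster.

    `players` is a list of (career_playoff_minutes, regular_season_minutes)
    pairs; each player's playoff minutes are weighted by the minutes he
    played for the team in the regular season.
    """
    total_rs_minutes = sum(rs for _, rs in players)
    return sum(po * rs for po, rs in players) / total_rs_minutes
```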

Live in-game win probabilities (new feature: April 11, 2023)

As games are being played, our NBA interactive displays real-time charts for each game that show the current score and how likely both teams are to win. These live in-game win probabilities are calculated using two separate models, depending on how much time is remaining:

  1. A Poisson model — used for all but the last minute of the game — that generates score distributions for each team over the remainder of the game and uses those distributions to calculate win probabilities.
  2. A tree-based endgame model — used during the last 90 seconds of the game — that generates every possible sequence of possessions in the remaining time, along with the likelihood of each sequence, and uses that tree to calculate win probabilities.

We start running the endgame model with 1:30 remaining in the game. With between 1:30 and 1:00 remaining, we blend it with the Poisson model, and at 1:00, our forecast is fully based on the endgame model.

Poisson model

Before a game starts we estimate two main metrics about the game that are used to run our Poisson model:

  1. The number of points we expect each team to score per possession based on its projected lineup and each player’s RAPTOR ratings.
  2. The number of possessions we expect in the game based on the pace factor ratings of the two lineups.

Using those values, we generate two independent Poisson distributions, and use them to calculate the probability each team will score a given number of points in the rest of the game (with the part of the distribution where one team ends up with more points than the other representing the probability it will win).
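A bare-bones version of that calculation, truncating each Poisson distribution at 150 remaining points and splitting the tie probability evenly between the teams (the real model instead values ties using an overtime win probability):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability a Poisson(lam) variable equals k."""
    return lam ** k * exp(-lam) / factorial(k)

def rest_of_game_win_prob(lead, exp_pts_for, exp_pts_against, max_pts=150):
    """P(team finishes regulation ahead), with ties credited at 0.5.

    `lead` is the team's current margin; the two expected-points inputs
    are the Poisson means for each side over the rest of the game.
    """
    pmf_for = [poisson_pmf(k, exp_pts_for) for k in range(max_pts)]
    pmf_against = [poisson_pmf(k, exp_pts_against) for k in range(max_pts)]
    win = tie = 0.0
    for a, pa in enumerate(pmf_for):
        for b, pb in enumerate(pmf_against):
            margin = lead + a - b
            if margin > 0:
                win += pa * pb
            elif margin == 0:
                tie += pa * pb
    return win + 0.5 * tie
```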

For example, this is what the Poisson model looked like at halftime of the Mavericks-Hawks game on April 2, 2023:

There are two ways each team’s points per possession values are updated during the game:

  1. If a player fouls out or is ejected, each team’s points per possession is adjusted based on the ejected player’s offensive and defensive RAPTOR ratings.
  2. A team that is losing is given a bonus to its scoring rate relative to the size of its deficit, while the winning team is given a penalty of the same amount. For example, if a team goes into the fourth quarter with a 10-point deficit, it gets a boost of about 0.03 points per possession, and the winning team has its scoring rate reduced by the same amount. In this scenario, the scoring-rate adjustment increases the probability of a comeback by the trailing team by roughly five percentage points.

Endgame model

During the last 90 seconds of the game, our model builds out a tree of every possible way the game can end, assigns a probability to each of these endings, and uses the tree to calculate overall win probabilities.

To build the tree, we simplify the game of basketball into a sequence of possessions where each possession: 1) lasts a certain amount of time, and 2) results in a certain number of points for the offensive team.

For example, if the team with the ball is down by one point with five seconds left in the game, there are a finite number of ways the possession can end: the possession can end the game with the team scoring zero points, it can end the game with the team scoring one point, or two points, etc. The possession could also end with one second left and zero points scored,2 and so on. The likelihood of each of these possession outcomes is based on historical distributions of time per possession and points per possession in past possessions with the same score margin.

After we’ve generated all possible outcomes for the current possession, we generate all possible outcomes for the next possession and assign a likelihood to each outcome, repeating the process until the game ends for every branch of the tree. The trees can be gigantic, so we use a technique called dynamic programming to efficiently build up the tree from the bottom.
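A stripped-down version of the endgame recursion, using memoization as the dynamic-programming technique. The possession-outcome distribution below is made up for illustration (the real model conditions outcomes on the score margin and allows variable possession lengths), and ties at the buzzer are valued at 0.5 here rather than with an overtime win probability:

```python
from functools import lru_cache

# Hypothetical (points, probability) outcomes for a single possession.
OUTCOMES = [(0, 0.52), (1, 0.05), (2, 0.28), (3, 0.15)]

@lru_cache(maxsize=None)
def endgame_win_prob(margin, bins_left, offense_has_ball):
    """P(the reference team wins), via DP over remaining 2-second bins.

    `margin` is the reference team's score minus the opponent's;
    `offense_has_ball` is True when the reference team has possession.
    This sketch assumes each possession consumes exactly one time bin.
    """
    if bins_left == 0:
        # Game over: ties valued at 0.5 in this simplification.
        return 1.0 if margin > 0 else (0.5 if margin == 0 else 0.0)
    total = 0.0
    for pts, p in OUTCOMES:
        new_margin = margin + pts if offense_has_ball else margin - pts
        total += p * endgame_win_prob(new_margin, bins_left - 1, not offense_has_ball)
    return total
```

Memoization makes this efficient because many branches of the tree converge on the same (margin, time remaining, possession) state.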

For example, this is what the endgame model looked like when the Spurs had the ball and were up by 2 with 1.8 seconds left in their game against the Mavericks on March 15, 2023:

Any branches of the tree that end in a tie use the probability of each team winning an overtime period, which is calculated using our live in-game win probability model for a tie game with 5:00 left.

Both of our models assume one team has possession of the ball, but there are situations in the game where we must account for possession not being known. For example, before a jump ball, we run our model twice — once assuming each team has possession — and average the results. Or, if a player is shooting his last free throw, we run our model three times — once assuming he makes the free throw, once assuming he misses it and the defense gets the rebound, and once assuming he misses and the offense gets the rebound — and weight the results based on the likelihood of each scenario playing out. When free throws are being shot, we use the free-throw percentage of the shooter and a generic offensive rebounding percentage (about 14 percent), except in late-game situations where the model thinks the shooter is likely to intentionally miss the free throw. In those cases, the free-throw percentage is reduced to about 25 percent, and the offensive rebounding percentage is increased to about 35 percent.
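The free-throw branching reduces to a weighted average of the three scenario win probabilities (the three inputs would come from running the possession model under each assumption):

```python
def free_throw_win_prob(p_win_make, p_win_miss_def_reb, p_win_miss_off_reb,
                        ft_pct=0.75, off_reb_pct=0.14):
    """Weight the three outcomes of a final free throw.

    ft_pct is the shooter's free-throw percentage; off_reb_pct is a
    generic offensive rebounding rate (~14 percent, or ~35 percent when
    an intentional miss is likely, with ft_pct dropped to ~25 percent).
    """
    p_make = ft_pct
    p_miss_off = (1 - ft_pct) * off_reb_pct
    p_miss_def = (1 - ft_pct) * (1 - off_reb_pct)
    return (p_make * p_win_make
            + p_miss_def * p_win_miss_def_reb
            + p_miss_off * p_win_miss_off_reb)
```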

During live games, we also update our full NBA forecast — each team’s chance of making the playoffs or winning the Finals — in real time.

The complete history of the NBA

If you preferred our old Elo system without any of the fancy bells and whistles detailed above, you can still access it using the NBA Predictions interactive by toggling its setting to the “pure Elo” forecast.

This method still has the normal game-level adjustment for home-court advantage, but it doesn’t account for travel, rest or altitude; it doesn’t use a playoff-experience bonus; and it has no knowledge of a team’s roster — it only knows game results. It also doesn’t account for any offseason transactions; instead, it reverts every team ¼ of the way toward a mean Elo rating of 1505 at the start of every season. We use a K-factor of 20 for our NBA Elo ratings, which is fairly quick to pick up on small changes in team performance.
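The pure-Elo bookkeeping, sketched without the margin-of-victory adjustment the full system also applies to game results:

```python
def elo_update(winner_elo, loser_elo, k=20.0):
    """Post-game Elo update with K = 20 (margin of victory omitted here)."""
    expected_win = 1 / (10 ** (-(winner_elo - loser_elo) / 400) + 1)
    shift = k * (1 - expected_win)
    return winner_elo + shift, loser_elo - shift

def offseason_reversion(elo, mean=1505.0, fraction=0.25):
    """Revert a team one-quarter of the way toward the league mean of 1505."""
    return elo + fraction * (mean - elo)
```

An upset win over an equal opponent moves each team 10 points; a 1705-rated team reverts to 1655 over the offseason.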

You can also still track a team’s Elo rating in our Complete History of the NBA interactive, which shows the ebbs and flows of its performance over time. This number won’t be adjusted for roster changes, but it should remain a nice way to visualize a team’s trajectory throughout its history.

Version History

5.0 Adds live in-game win probabilities.
4.3 Adds a history-based component to create blended playing-time projections. Tweaks home-court advantage to reflect changes across the NBA in recent seasons.
4.2 The predictive version of RAPTOR has been retired, and team ratings are now generated from a mix of RAPTOR and Elo ratings.
4.1 Player projections now use RAPTOR ratings instead of RPM/BPM. Forecast and ratings rebranded to retire CARMELO name. New methodology is used to turn individual player ratings into team talent estimates. Depth chart algorithm now assigns minutes based on playing-time categories instead of a rank-ordered list of players.
4.0 CARMELO updated with the DRAYMOND metric, a playoff adjustment to player ratings and the ability to account for load management. Pure Elo ratings now use a K-factor of 20 in both the regular season and the playoffs.
3.1 Estimated overnight ratings added as a stopgap between game results and data updates.
3.0 CARMELO is introduced to replace CARM-Elo. Pure Elo ratings are adjusted to have variable K-factors depending on the stage of the season being predicted.
2.1 CARM-Elo is modified to include a playoff experience adjustment.
2.0 CARM-Elo ratings are introduced. Seasonal mean-reversion for pure Elo is set to 1505, not 1500.
1.0 Pure Elo ratings are introduced for teams going back to 1946-47.


  1. This number had originally been 92 rating points, but we reduced it after research showed the effect of home-court advantage has been declining in recent seasons. Previously, we had also reduced the home-court adjustment by 25 percent in 2020-21 to reflect the absence of in-person fans during the COVID-19 pandemic.

  2. We evaluate potential possessions in 2-second time bins — for instance, a play that ends with one second left on the clock would be in the same time bin as a play that ends with 1.5 or 0.3 seconds left. Only two possessions can occur in each 2-second time bin.

Neil Paine was the acting sports editor at FiveThirtyEight.