How Our NBA Predictions Work

## The Details

FiveThirtyEight’s NBA predictions have gone through quite an evolution over the years.

Our first iteration simply relied on Elo ratings, the same old standby rating system we’ve used for college and pro football, college basketball, baseball, soccer, Formula One racing and probably some other sports I’m forgetting. Basic Elo is generally useful — and we still track it for teams going back throughout history — but it only knows who won each game, the margin of victory and where the game was played. So if a player is injured or traded — or resting, as is increasingly the case in the NBA — Elo wouldn’t be able to pick up on that when predicting games or know how to account for that in a team’s ratings going forward. In fact, even if a team simply made a big offseason splash (such as signing LeBron James or Kevin Durant), Elo would take a long time to figure that out, since it must infer a change in team talent from an uptick in on-court performance.

To try to address that shortcoming, in 2015 we introduced a system we called “CARM-Elo.” This still used the Elo framework to handle game results, but it also used our CARMELO player projections to incorporate offseason transactions into the initial ratings for a given season. In a league like the NBA, where championships now feel like they’re won as much over the summer as during the season itself, this was an improvement. But it still had some real problems knowing which teams were actually in trouble heading into the playoffs and which ones were simply conserving energy for the games that matter. Since a team’s underlying talent is sometimes belied by its regular-season record — particularly in the case of a superteam — an Elo-based approach to updating ratings on a game-to-game basis can introduce more problems than it actually solves.

### Moving beyond Elo

One attempt to salvage CARM-Elo was to apply a playoff experience adjustment for each team, acknowledging the NBA’s tendency for veteran-laden squads to play better in the postseason than we’d expect from their regular-season stats alone. This also helped some, but CARM-Elo still had problems with mega-talented clubs (such as the 2017-18 Golden State Warriors) that take their foot off the gas pedal late in the NBA’s long regular season. It was clear our prediction system needed a major overhaul, one that involved moving away from Elo almost completely.

As we hinted at in our preview post for the 2018-19 season, we made some big changes to the way we predict the league that year. Chief among them is that our team ratings are now entirely based on our CARMELO player projections. Specifically, each team is judged according to the current level of talent on its roster and how much that talent is expected to play going forward. Here’s how each of those components works:

### Talent ratings

At their core, our CARMELO projections forecast a player’s future by looking to the past, finding the most similar historical comparables and using their careers as a template for how a current player might fare over the rest of his playing days. After running a player through the similarity algorithm, CARMELO spits out offensive and defensive ratings for his next handful of seasons, which represent his expected influence on team efficiency (per 100 possessions) while he’s on the court. You can think of these as being similar to ESPN’s Real Plus-Minus (RPM) or other adjusted plus-minus-style ratings.

Those numbers provide a prior for each player as he heads into the current season. But they must also be updated in-season based on a player’s performance level as the year goes on. To do that, we have two methods (for both offense and defense) depending on the data available:

• Using Real Plus-Minus and Box Plus/Minus. Ideally, both metrics will be published during a season, allowing us to use a blend (⅔ weight for RPM; ⅓ for BPM) on each side of the ball to update our prior ratings. When that happens, we assign a weight to the prior that is relative to 1 minute of current-season performance. On offense, that weight is calculated with a constant term of 416 minutes, plus 0.3 times a player’s minutes from the season before, plus 0.15 times his minutes from two seasons before, plus 0.05 times his minutes from three seasons before. That number is multiplied by his CARMELO preseason offensive rating, added to the product of his current-season minutes and current-season offensive plus/minus blend, and divided by the sum of current-season minutes and the prior weight to get an updated offensive rating. (The rating for players with 0 current-season minutes played is, by definition, the prior.) On defense, the weight has a constant of 60 minutes, plus 0.3 times a player’s minutes from the season before, plus 0.15 times his minutes from two seasons before, plus 0.05 times his minutes from three seasons before. This weight is combined with current-season performance in the same manner as on offense.
• Using Box Plus/Minus only. At a certain stage of each season, ESPN will not yet have released RPM data for the current season. During that time, we must update the in-season ratings using only BPM, which is usually available from the very start of the season via Basketball-Reference.com. Just like with our blended number from above, we assign a weight to the prior that is relative to 1 minute of current-season performance — but we must use different weights because BPM is not quite as reliable an indicator of player performance as RPM (or our RPM-BPM blend). On offense, the weight is calculated with a constant term of 703 minutes, plus 0.27 times a player’s minutes from the season before, plus 0.13 times his minutes from two seasons before, plus 0.04 times his minutes from three seasons before. That number is multiplied by his CARMELO preseason offensive rating, added to the product of his current-season minutes and current-season offensive BPM, and divided by the sum of the current-season minutes and the prior weight to get an updated offensive rating. On defense, the weight has a constant of 242 minutes, plus 0.48 times a player’s minutes from the season before, plus 0.24 times his minutes from two seasons before, plus 0.08 times his minutes from three seasons before. This weight is combined with current-season performance in the same manner as on offense.
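In arithmetic terms, both versions are the same weighted average of the prior and current-season performance; only the coefficients change. A minimal Python sketch (function names, constant names and the tuple layout are our own, not from FiveThirtyEight’s production code):

```python
# Prior-weight coefficients described above: (constant minutes, then
# weights on minutes played one, two and three seasons ago).
OFF_BLEND = (416, 0.30, 0.15, 0.05)  # offense, RPM/BPM blend available
DEF_BLEND = (60, 0.30, 0.15, 0.05)   # defense, RPM/BPM blend available
OFF_BPM = (703, 0.27, 0.13, 0.04)    # offense, BPM only
DEF_BPM = (242, 0.48, 0.24, 0.08)    # defense, BPM only

def prior_weight(coefs, min_1, min_2, min_3):
    """Weight (in minutes) given to the preseason CARMELO prior."""
    const, w1, w2, w3 = coefs
    return const + w1 * min_1 + w2 * min_2 + w3 * min_3

def updated_rating(prior, weight, cur_minutes, cur_plus_minus):
    """Weighted average of the prior and current-season plus/minus.
    cur_plus_minus is the 2/3 RPM + 1/3 BPM blend, or raw BPM early
    in the season (paired with the BPM-only coefficients)."""
    if cur_minutes == 0:
        return prior  # by definition, the prior
    return (weight * prior + cur_minutes * cur_plus_minus) / (cur_minutes + weight)
```

One consequence of the constants: with the blend available, a player’s defensive rating (60-minute constant) responds to new evidence much faster than his offensive rating (416-minute constant).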

Regardless of the version being used, these talent ratings will update every day throughout the regular season and playoffs, gradually changing based on how a player performs during the season.

Because our data sources for player ratings (ESPN and Basketball-Reference.com) don’t update individual statistics immediately after the end of every game, we added a function to preliminarily estimate the changes to a team’s rating as soon as a game ends. For each player in our database, we adjust his offensive and defensive ratings up or down very slightly after each game based on his team’s margin of victory relative to CARMELO’s expectation going into the game. These numbers add up at the team level to reflect how we predict that a team’s ratings will change in the wake of a given result.

The advantage of this is that we can provide an instant update to the model as soon as a game ends. However, since these estimates are stopgaps, they will be changed to the RPM/BPM-based ratings from above when the data from those sources updates. After any given game, these differences should be small and generally barely noticeable. But we think this change will be particularly worthwhile in the playoffs, when team odds can shift dramatically based on a single game’s result.

### Playing-time projections

Now that we have constantly updating player ratings, we also need a way to combine them at the team level based on how much court time each player is getting in the team’s rotation.

For CARM-Elo’s preseason ratings, we used to accomplish this by manually estimating how many minutes each player would get at each position. Needless to say, this is a lot more work to do in-season (and it requires a lot of arbitrary guesswork). So as part of our move toward algorithmizing our predictions in a more granular way, we developed a program that turns simple inputs into a matrix of team minutes-per-game estimates, broken down by position.

This system requires only three inputs: a rank-ordered list of players on a given team by playing-time preference (the default order is sorted by expected rest-of-season wins above replacement); a list of eligible positions each player is allowed to play (the system assigns minutes at every player’s “primary” position or positions first, before cycling back through and giving minutes at any “secondary” positions when necessary to fill out the roster); and some minutes constraints based largely on CARMELO’s updating minutes-per-game projections.

For that last part, we have developed an in-season playing-time projection similar to the one we use to update our individual offensive and defensive ratings. For each player, CARMELO will project a preseason MPG estimate based on his own history and the record of his similar comparables. We then adjust that during the season by multiplying the preseason MPG projection by a weight of 12.6 games, adding his current-season minutes, and dividing by the sum of 12.6 and his current-season games played. (Interestingly, this implies that the amount of weight the MPG prior receives is the same regardless of whether the player is a fresh-faced rookie or a grizzled veteran.)
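Written out, the update is the same prior-plus-evidence average as the talent ratings, just denominated in games instead of minutes. A sketch (the function name is our own):

```python
GAME_WEIGHT = 12.6  # games of weight given to the preseason MPG prior

def updated_mpg(preseason_mpg, season_minutes, games_played):
    """In-season minutes-per-game projection: the preseason prior
    counts as 12.6 games of evidence alongside actual minutes."""
    return (GAME_WEIGHT * preseason_mpg + season_minutes) / (GAME_WEIGHT + games_played)
```

Before any games are played, the projection is just the preseason number; as games accumulate, observed minutes gradually take over.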

Those minutes are used as the default for our program, which then automatically creates a team’s depth chart and assigns minutes by position according to its sorting algorithm. The defaults, however, can and will be tweaked by our staffers to help the program generate more accurate rosters. For instance, we can mark certain games in which a player is injured, resting, suspended or otherwise unavailable, which will tell the program to ignore that player in the team’s initial rank-ordered list of players before allocating minutes to everyone else. (We also have a method of penalizing a player’s talent ratings if he is forced to play significantly more MPG than his updated CARMELO projection recommends.) Through this system, we will be able to account for most injuries, trades and other player movement throughout the season on a game-by-game basis.

Because of the differences between a team’s talent at full strength and after accounting for injuries, we now list two separate CARMELO ratings on our interactive page: “Current CARMELO” and “Full-Strength CARMELO.” Current is what we’re using for the team’s next game and includes all injuries or rest days in effect at the moment. Full-strength is the team’s rating when all of its key players are in the lineup, even including those who have been ruled out for the season. This will help us keep tabs on which teams are putting out their best group right now, and which ones have room to improve at a later date (i.e., the playoffs) or otherwise are more talented than their current lineup gives them credit for.

### Game predictions

As a consequence of the way we can generate separate depth charts for every team on a per-game basis, we can calculate separate CARMELO ratings for the teams in a matchup depending on who is available to play.

For a given lineup, we combine individual players’ talent ratings into a team rating on both sides of the ball by taking the team’s average offensive and defensive ratings, weighted by each player’s expected minutes, and multiplying each by 5 to account for the five players on the court at all times. Those numbers are then combined into a generic expected “winning percentage” via the Pythagorean expectation:

$$\text{Winning Percentage} = \frac{(108 + \text{Team Offensive Rating})^{14}}{(108 + \text{Team Offensive Rating})^{14}+(108 - \text{Team Defensive Rating})^{14}}$$

That number is then converted into its Elo rating equivalent via:

$$\text{CARMELO Rating} = 1504.6 - 450 \times \log_{10}((1 / \text{Winning Percentage}) - 1)$$
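Stringing those steps together, here is a minimal Python sketch of the lineup-to-rating pipeline (function names and the tuple layout are illustrative inventions, not FiveThirtyEight’s code; per the formula, a positive defensive rating lowers opponents’ expected scoring):

```python
import math

def team_side_ratings(players):
    """players: list of (expected_minutes, off_rating, def_rating).
    Minutes-weighted average player impact, times 5 for the five
    players on the court at all times."""
    total_min = sum(m for m, _, _ in players)
    off = 5 * sum(m * o for m, o, _ in players) / total_min
    dfn = 5 * sum(m * d for m, _, d in players) / total_min
    return off, dfn

def pythagorean_pct(team_off, team_def):
    """Expected winning percentage, relative to a baseline of 108
    points per 100 possessions."""
    scored = (108 + team_off) ** 14
    allowed = (108 - team_def) ** 14
    return scored / (scored + allowed)

def carmelo_rating(win_pct):
    """Convert the expected winning percentage to the Elo scale."""
    return 1504.6 - 450 * math.log10(1 / win_pct - 1)
```

A perfectly average lineup (ratings of zero on both ends) yields a .500 Pythagorean expectation, which maps to a CARMELO rating of 1504.6.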

From there, we predict a single game’s outcome the same way we did when CARM-Elo was in effect. That means we not only account for each team’s inherent talent level, but we also make adjustments for home-court advantage (the home team gets a boost of about 92 CARMELO rating points), fatigue (teams that played the previous day are given a penalty of 46 CARMELO points), travel (teams are penalized based on the distance they travel from their previous game) and altitude (teams that play at higher altitudes are given an extra bonus when they play at home, on top of the standard home-court advantage). A team’s odds of winning a given game, then, are calculated via:

$$\text{Win Probability} = 1 / \left(10^{-(\text{CARMELO Differential} + \text{Bonus Differential}) / 400} + 1\right)$$

where the CARMELO differential is the team’s talent rating minus the opponent’s, and the bonus differential is the difference in the various extra adjustments detailed above.
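That win-probability formula is the standard Elo logistic curve; a sketch (the function name is ours):

```python
def win_probability(team_carmelo, opp_carmelo, bonus_diff):
    """Game win probability on the Elo scale. bonus_diff is the net
    of the adjustments above (home court, rest, travel, altitude),
    from the team's perspective."""
    rating_diff = team_carmelo - opp_carmelo
    return 1 / (10 ** (-(rating_diff + bonus_diff) / 400) + 1)
```

Two evenly rated teams split 50/50 on neutral terms, and the roughly 92-point home-court bonus alone pushes the host to about 63 percent.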

### Season simulations and playoff adjustments

Armed with a list of injuries and other transactions for the entire league, our program can spit out separate CARMELO ratings for every single game on a team’s schedule. For instance, if we know a player won’t be available until midseason, the depth-chart sorting algorithm won’t allow him to be included on a team’s roster — and therefore in the team’s CARMELO ratings — until his estimated return date.

Those game-by-game CARMELO ratings are then used to simulate out the rest of the season 50,000 times, Monte Carlo-style. The results of those simulations — including how often a team makes the playoffs and wins the NBA title — are listed in our NBA Predictions interactive when it is set to “CARMELO” mode.

It’s important to note that these simulations still run “hot,” like our other Elo-based simulations do. This means that after a simulated game, a team’s CARMELO rating is adjusted upward or downward based on the simulated result, which is then used to inform the next simulated game, and so forth until the end of the simulated season. This helps us account for the inherent uncertainty around a team’s CARMELO rating, though the future “hot” ratings are also adjusted up or down based on our knowledge of players returning from injury or being added to the list of unavailable players.
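A stripped-down illustration of running “hot,” for a single two-team series of simulated games (the exact update size used in the CARMELO simulations isn’t published, so the k value here is purely illustrative):

```python
import random

def simulate_hot(team_elo, opp_elo, n_games, k=20.0):
    """One 'hot' run of n_games between two teams: each simulated
    result nudges the ratings before the next game is drawn."""
    wins = 0
    for _ in range(n_games):
        p_win = 1 / (10 ** (-(team_elo - opp_elo) / 400) + 1)
        result = 1 if random.random() < p_win else 0
        wins += result
        shift = k * (result - p_win)  # winner gains what the loser gives up
        team_elo += shift
        opp_elo -= shift
    return wins
```

Averaged over many such runs, the win totals spread out more than they would with frozen ratings, which is the point of running hot.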

For playoff games, we make a few special changes to the team CARMELO process explained above. For one thing, teams play their best players more often in the playoffs, so our depth-chart algorithm has leeway to bump up a player’s MPG in the postseason if he usually logs a lot of minutes and/or has a good talent rating. The formula for recommended playoff MPG is:

$$\text{Projected Playoff MPG} = \exp\left(-1.34 + 1.38 \times \ln(\text{Projected Regular-Season MPG} + 1) + 0.006 \times \text{Plus-Minus Talent}\right)$$

Note: This value can never be lower than a player’s projected regular-season MPG.
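As a quick sketch with the floor applied (the function name is ours):

```python
import math

def projected_playoff_mpg(reg_season_mpg, plus_minus_talent):
    """Recommended playoff minutes per game, floored at the player's
    projected regular-season MPG."""
    raw = math.exp(-1.34
                   + 1.38 * math.log(reg_season_mpg + 1)
                   + 0.006 * plus_minus_talent)
    return max(raw, reg_season_mpg)
```

The exponent on the log term is greater than 1, so heavy-minutes players get bumped up while low-minutes players are simply held at their regular-season projection by the floor.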

We continue to give a team an extra bonus for having a roster with a lot of playoff experience. We calculate a team’s playoff experience by averaging the number of prior career playoff minutes played for each player on its roster, weighted by the number of minutes the player played for the team in the regular season. For every playoff game, this boost is added to the list of bonuses teams get for home court, travel and so forth, and it is used in our simulations when playing out the postseason.
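The experience measure itself is a minutes-weighted average. A sketch (the tuple layout is ours, and the article doesn’t specify how the average is scaled into a ratings bonus, so only the average is shown):

```python
def playoff_experience(roster):
    """roster: list of (regular_season_minutes, career_playoff_minutes).
    Minutes-weighted average of prior career playoff minutes across
    the roster."""
    total_min = sum(m for m, _ in roster)
    return sum(m * p for m, p in roster) / total_min
```

Weighting by regular-season minutes means a playoff-tested veteran buried at the end of the bench adds little to the team’s experience score.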

### The complete history of the NBA

If you preferred our old Elo system without any of the fancy bells and whistles detailed above, you can still access it using the NBA Predictions interactive by toggling its setting to the “pure Elo” forecast.

This method still has the normal game-level adjustment for home-court advantage, but it doesn’t account for travel, rest or altitude; it doesn’t use a playoff-experience bonus; and it has no knowledge of a team’s roster — it only knows game results. It also doesn’t account for any offseason transactions; instead, it reverts every team ¼ of the way toward a mean Elo rating of 1505 at the start of every season.
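That seasonal reversion step is a one-liner (names are ours):

```python
LEAGUE_MEAN_ELO = 1505.0

def preseason_revert(team_elo, mean=LEAGUE_MEAN_ELO):
    """Revert a team's Elo one-quarter of the way toward the league
    mean at the start of each season."""
    return team_elo + 0.25 * (mean - team_elo)
```

A 1705-rated contender opens the next season at 1655, while a team already at the mean stays put.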

We have recently made a slight tweak to the way pure Elo simulations work, relative to past seasons: We’ve optimized them to use several different K-factors depending on how far into the future a game is being played. (K is what Elo calls the multiplier that determines how sensitive the ratings are to recent results.) Two versions of pure Elo are listed on the interactive: A long-term “playoff Elo” with a K-factor of 10, which changes slowly in response to new results (this is the version we’re using to predict playoff games in the season simulations, when “pure Elo” mode is enabled), and a short-term “regular-season Elo” with a K-factor of 20, which is quicker to pick up on small changes in team performance (making it better for predicting run-of-the-mill regular-season games).

This mix is designed to be predictive for the state of the current NBA — where the regular season is fundamentally different from the playoffs and needs to be treated as such. Slight fluctuations in regular-season play can be predictive in the short term because they suggest an immediate absence (whether an injury, trade, rest, etc.) or decline in team focus, but we know that teams also tend to revert to their long-term form once the playoffs begin. This change will hopefully help balance between those two factors when predicting using Elo.

You can also still track a team’s short-term Elo rating (K-factor = 20) in our Complete History of the NBA interactive, which shows the ebbs and flows of its performance over time. This number won’t be adjusted for roster changes, but it should remain a nice way to visualize a team’s trajectory throughout its history.

## Version History

3.1 Estimated overnight ratings added as a stopgap between game results and data updates.
3.0 CARMELO is introduced to replace CARM-Elo. Pure Elo ratings are adjusted to have variable K-factors depending on the stage of the season being predicted.
2.1 CARM-Elo is modified to include a playoff experience adjustment.
2.0 CARM-Elo ratings are introduced. Seasonal mean-reversion for pure Elo is set to 1505, not 1500.
1.0 Pure Elo ratings are introduced for teams going back to 1946-47.

Neil Paine is a senior sportswriter for FiveThirtyEight.