Where Last Weekend Ranks Among College Football’s Craziest

There’s one word to describe Week 6 of the 2014 college football season: chaos.

Three of the top four teams in the Associated Press’s Top 25 poll fell this past weekend (including unranked Arizona’s upset of No. 2 Oregon on Thursday night). And for the first time ever, five of the AP’s top eight lost in the same week. It was a flurry of upsets that had some calling Week 6 the wildest in the sport’s history.

To objectively measure where this week’s disarray stands historically, I developed a quick-and-dirty hybrid computer rating that borrows elements from both the Elo rating system (yes, we admit it, we’re a little obsessed) and the Simple Rating System (SRS), applying it to all intra-FBS college football games since the advent of the Bowl Coalition in 1992. (Note: If you care about the particulars of my rating system, there’s a methodology section at the bottom of this post.)

Like Elo, my system can measure ratings points “exchanged” between teams after a given matchup. If a team wins by more or less than the pregame ratings said it “should” have, its rating is updated to reflect the new result. So, we can track which weekends saw the most points exchanged from the favorites to the underdogs on a per-game basis. (This ignores points exchanged toward the favorites, because all we care about is the frequency and magnitude of the upsets in a given week.)
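The per-game tally described above can be sketched as follows. This is a minimal illustration, not the article’s actual code: it assumes each game is represented as a pair of margins from the favorite’s perspective, and it uses the update formula from the methodology notes at the bottom of the post (with sign/absolute-value handling assumed for games where the favorite underperforms).

```python
import math

def exchange(actual_margin, predicted_margin, k=0.79):
    """Rating points gained by the favorite, per the methodology formula.
    Negative when the favorite does worse than the ratings predicted.
    Sign/abs handling for negative differences is an assumption."""
    diff = actual_margin - predicted_margin
    return math.copysign(k * math.log(abs(diff) + 1), diff)

def avg_points_to_underdogs(week_games):
    """week_games: list of (actual_margin, predicted_margin) pairs, both
    from the favorite's perspective. Points flowing toward favorites are
    ignored; only points exchanged toward underdogs are averaged."""
    gained = [max(0.0, -exchange(a, p)) for a, p in week_games]
    return sum(gained) / len(gained)
```

For example, a favorite predicted to win by 7 that only manages a tie hands its underdog 0.79 × ln(8) ≈ 1.64 rating points, while a favorite that covers its predicted margin hands over nothing.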

The weeks with the highest average points exchanged toward underdogs were ones such as Week 15 of the 1997 season, when relatively few games were played and the average was buoyed by a few huge outliers: in that case, eight-point underdog Michigan State beating Penn State by 35, and 19-point underdog Arizona winning by 12 at Arizona State. But most of the college football world was out of action that week; only 26 of the 112 FBS teams played games.

A better comparison for Week 6 of 2014 would be weekends in which at least half of all FBS teams played against other FBS teams. And by that standard, only two other weeks since 1992 made a bigger dent in the power-ratings landscape of college football than this past week’s games:


Of course, a major point of focus for the tumult was also how many highly ranked teams were upset, regardless of margin. To measure this, we can assign each upset a value for how unexpected it was, based on the pregame power ratings. For example, No. 8 UCLA had a 77 percent chance of beating unranked Utah at home Saturday. Utah’s victory, then, gets an “unexpected win” value of 0.77. To look at the degree of surprise with which the Top 25’s losses occurred in a given week, we can sum up the unexpected wins for each upset involving a Top 25 team and divide by the total number of games featuring Top 25 squads.
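The “unexpected wins” measure can be written out in a few lines. This is a sketch under the assumption that each Top 25 game is recorded as the favorite’s pregame win probability plus whether the favorite actually won; the function name is hypothetical.

```python
def unexpected_wins_per_game(top25_games):
    """top25_games: list of (favorite_win_prob, favorite_won) tuples, one
    per game involving at least one Top 25 team. An upset counts for the
    favorite's pregame win probability; a favorite's win counts for zero.
    Returns the average unexpected-win value per Top 25 game."""
    surprise = sum(p for p, favorite_won in top25_games if not favorite_won)
    return surprise / len(top25_games)
```

Under this scheme, Utah’s upset of UCLA (where UCLA was a 77 percent favorite) contributes 0.77 to the week’s total, which is then divided by the number of Top 25 games.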

By this measure, Week 6 of 2014 was again the third-most upset-heavy since 1992, with an average of 0.35 unexpected wins per Top 25 game (among weeks with at least 15 games featuring one or more Top 25 teams):


By the numbers, it’s probably an exaggeration to call this past week the most upset-laden in college football history. But it certainly ranks among the most chaotic of any in the past 22 years.

Methodology notes: Like the Elo ratings, my system begins with a base rating for each team — in this case, zero — and updates after every game based on the game’s outcome (including scoring margin). Unlike Elo, my ratings are scaled like a points-per-game differential, making direct predictions about the final score line for each game. To avoid Elo’s problem of autocorrelation, it is possible to lose points in a win if the margin of victory is worse than that which would have been predicted from the pregame ratings.

For a given game, the predicted margin for Team A is equal to Team A’s rating minus Team B’s rating, plus or minus a home-field factor of three points in favor of the home team. After the game, each team’s rating is adjusted by the amount its actual margin exceeded or fell short of its predicted margin, according to the following formula: Rating_post = Rating_pre + 0.79 * sign(actual_margin – predicted_margin) * LN(|actual_margin – predicted_margin| + 1). For preseason ratings, the final ratings from the previous season should be regressed 10 percent toward the mean rating of 0.0.
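The update rule above can be sketched directly. This is a minimal illustration of the described methodology, not the author’s code; the sign/absolute-value handling inside the logarithm is assumed so the formula stays defined when a team falls short of its predicted margin, and neutral-site games are ignored for simplicity.

```python
import math

K = 0.79          # scaling constant from the methodology notes
HOME_FIELD = 3.0  # home-field factor, in points

def predicted_margin(rating_a, rating_b, a_is_home):
    """Predicted scoring margin for Team A over Team B."""
    return rating_a - rating_b + (HOME_FIELD if a_is_home else -HOME_FIELD)

def update(rating_a, rating_b, actual_margin_a, a_is_home):
    """Post-game ratings for both teams. A team can lose points in a win
    if its margin was worse than the pregame ratings predicted."""
    diff = actual_margin_a - predicted_margin(rating_a, rating_b, a_is_home)
    delta = math.copysign(K * math.log(abs(diff) + 1), diff)
    return rating_a + delta, rating_b - delta

def preseason(rating, regress=0.10, mean=0.0):
    """Regress last season's final rating 10 percent toward the mean."""
    return rating + regress * (mean - rating)
```

A team that exactly matches its predicted margin exchanges nothing; a team that beats it by seven points gains 0.79 × ln(8) ≈ 1.64 points, with its opponent losing the same amount, so the mean rating across all teams stays at zero.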

These ratings are not as sophisticated as the SRS. They are not iterative, nor do they take into account a team’s strength of schedule using subsequent games played by a team’s opponents. But going back to 1992, the end-of-season versions of these ratings have a 0.96 correlation with the final SRS ratings for all FBS teams. Over the same span, the previous season’s final ratings also correctly predicted future wins and losses at a 68.8 percent rate (SRS correctly predicted 68.6 percent of games), with a root mean square error of 18.3 against the game’s scoring margin (SRS’s RMSE was 17.9). Finally, at-the-time ratings had an RMSE of 16.4 against scoring margin; research by Wayne Winston and Jeff Sagarin finds that Las Vegas point spreads have an RMSE of 16.

In other words, these ratings are only slightly less accurate than more complicated computer ratings, but are far easier to calculate for any given week of a college football season.

Neil Paine is a senior sportswriter for FiveThirtyEight.