Well, that was an absolutely bananas way to end the Oscars season.
The best movie of 2016 actually won best picture! This is wonderful. Sure, the win for “Moonlight” over “La La Land” meant that the FiveThirtyEight Oscars tracker missed the top prize and got only seven of eight picks right. But that’s the magic of Hollywood for you.
So, how did this season of Oscar tracking go?
Generally speaking, the model did better than I expected. I was convinced that it would miss one of the two lead acting prizes, but the Academy stuck to the script and went with front-runners Casey Affleck and Emma Stone. Our model isn’t super sophisticated when it comes to the best-director prize — it essentially backs the winner of the Directors Guild of America award — and once again the Academy went with the DGA, choosing Damien Chazelle of “La La Land.” And the two supporting acting prizes were slam dunks, as expected: Viola Davis and Mahershala Ali.
Most exciting of all, it turns out that our simple approach, where we assign points to nominees as they win awards that have historically been predictive of Oscar wins, can be applied to the best documentary and best animated feature categories. Our tracker called the wins for both “O.J.: Made in America” and “Zootopia.”
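That point-assignment approach can be sketched in a few lines. To be clear, the award list and weights below are hypothetical placeholders for illustration, not the tracker's actual scoring:

```python
# A minimal sketch of a points-based awards tracker. Each precursor award
# carries a weight meant to reflect how often its winner has gone on to win
# the Oscar; a nominee's score is the sum of weights for the precursors it
# won. These weights are made up for illustration.
PRECURSOR_WEIGHTS = {
    "Producers Guild": 0.30,
    "Directors Guild": 0.25,
    "BAFTA": 0.20,
    "Golden Globes (Drama)": 0.15,
    "Writers Guild": 0.10,
}

def score_nominees(wins_by_nominee):
    """Sum precursor weights for each nominee's wins, highest score first."""
    scores = {
        nominee: sum(PRECURSOR_WEIGHTS.get(award, 0.0) for award in awards)
        for nominee, awards in wins_by_nominee.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative inputs for the 2017 best-picture race
wins = {
    "La La Land": ["Producers Guild", "Directors Guild", "BAFTA"],
    "Moonlight": ["Writers Guild", "Golden Globes (Drama)"],
}
ranked = score_nominees(wins)
# Under any reasonable weighting, "La La Land" tops the list --
# which is exactly how a precursor-based model whiffs on an upset.
```

The design choice worth noting is that the model only ever ranks nominees by precursor wins; it has no input that could ever surface an upset.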
Which brings us back to best picture.
Both last year — when “Spotlight” beat out perceived front-runner “The Revenant” — and this year saw upsets in the top award, making it a rough two years for forecasters. But there’s a reason we don’t throw a margin of error or percentage chance on these picks — we’re not measuring voting directly, just historically predictive races. It’s ridiculous to assert with certainty what a film’s chances are unless you are literally polling Academy members. I love incorporating gambling odds into our articles because it’s one of the few legitimate ways to get an implied percentage, since money is actually changing hands.

Indeed, our Oscars tracker really just quantifies the same information that Oscars watchers know: “La La Land” was ahead because there wasn’t much of an argument in favor of an alternative. “Moonlight” grabbed awards at the Writers Guild and Golden Globes, sure, but was passed over by the Screen Actors Guild, Directors Guild, Producers Guild and British Academy of Film and Television Arts.
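Getting from posted odds to an implied percentage is simple arithmetic. Here is a minimal sketch, using hypothetical decimal odds rather than any real bookmaker's numbers; note that a real book's raw implied probabilities for a field sum past 100 percent, so they need rescaling:

```python
def implied_probability(decimal_odds):
    """Convert decimal (European) betting odds to an implied win probability."""
    return 1.0 / decimal_odds

def normalized_implied_probs(odds_by_nominee):
    """Bookmakers' raw implied probabilities sum to more than 1 (their
    built-in margin, the "overround"); rescale so the field sums to 1."""
    raw = {k: implied_probability(v) for k, v in odds_by_nominee.items()}
    total = sum(raw.values())
    return {k: p / total for k, p in raw.items()}

# Hypothetical pre-ceremony odds, not actual 2017 betting lines
odds = {"La La Land": 1.2, "Moonlight": 5.0}
probs = normalized_implied_probs(odds)
```

Because money changes hands at those prices, the normalized numbers are about as close to a market-tested percentage as you can get without polling voters.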
One thing I’d like to do with the model in the future is make it more attuned to the logistics of voting. A mathy Oscar model I admire is the one run by James England, which treats the Oscars race college-football style, as a series of head-to-head matchups. He asks visitors to his site to pick the better film in head-to-head comparisons, much as an Oscar voter does when ranking the nominees on the preferential best-picture ballot. And across these past two difficult-to-predict years, James’s is the only model I’m aware of that is both open about its methodology and correct on both best-picture winners. Other best-picture models might benefit from incorporating the logistics of Academy voting into their design, which on our end could mean tweaking the scoring to emphasize the larger Academy branches.
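Those voting logistics matter because the Academy counts best picture by preferential (instant-runoff) ballot, where a film that leads on first choices can still lose on transfers. A toy sketch of that count, with made-up ballots:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff count over ranked ballots: repeatedly eliminate the
    film with the fewest first-choice votes and redistribute its ballots to
    each voter's next surviving choice, until someone has a majority.
    (Ties for elimination are broken arbitrarily in this sketch.)"""
    ballots = [list(b) for b in ballots]
    while True:
        firsts = Counter(b[0] for b in ballots if b)
        total = sum(firsts.values())
        leader, votes = firsts.most_common(1)[0]
        if votes * 2 > total:          # strict majority of live ballots
            return leader
        loser = min(firsts, key=firsts.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Toy ballots: "A" leads on first choices (4 of 9), but once "C" is
# eliminated, its voters transfer to "B", which wins 5-4.
ballots = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
winner = instant_runoff(ballots)  # -> "B"
```

A precursor-points model sees only who is ahead on something like first choices; a matchup-style model like the one described above at least has a chance of seeing the transfers.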
On the whole, though, I don’t think any tweaking would have made our model — which can’t detect upsets because it assumes that precursor awards can forecast eventual Oscar winners — go for anything but the movie that cleaned up at other award shows. So we’ll just have to live with one whiff out of eight swings. All in all, a rather good night.