Why It’s Hard to Predict Oscar Winners

At the Academy Awards Sunday night, the front-runner in nearly every category took home the Oscar statue. “12 Years A Slave” won Best Picture, Alfonso Cuarón won Best Director for “Gravity,” Matthew McConaughey took Best Actor for his performance in “Dallas Buyers Club” and Cate Blanchett took Best Actress for her performance in “Blue Jasmine” — all as most Oscar-watchers expected.

This could lead you to think that because the winners were expected, they’re inherently predictable. But it’s much more complicated than that.

In certain areas, including politics and finance, prediction models can outperform the benchmark. If your model beats the market consensus — be it the price of a financial asset or the expected electoral vote count — you've made a very good forecast. Nate Silver predicted the outcomes of the 2012 elections more accurately than the political prediction markets did. Warren Buffett earns a better rate of return than the market average. It's the ability to add value through prediction that makes the effort worthwhile.
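
To make "beating the benchmark" concrete, here is a minimal sketch — with hypothetical probabilities, not real market or forecast data — of how you might score a forecaster against a prediction market using the Brier score, where lower is better:

```python
# Hedged sketch: compare a forecaster's probabilities against a prediction
# market's benchmark using the Brier score (mean squared error between
# predicted probabilities and 0/1 outcomes). All numbers are illustrative.

def brier_score(probabilities, outcomes):
    """Mean squared error between predicted probabilities and actual 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# Hypothetical win probabilities for four contests, plus what actually happened
# (1 = the favored side won, 0 = it didn't).
market_probs   = [0.60, 0.55, 0.80, 0.30]
forecast_probs = [0.75, 0.65, 0.90, 0.20]
actual         = [1,    1,    1,    0]

print("Market benchmark:", brier_score(market_probs, actual))
print("Forecast model:  ", brier_score(forecast_probs, actual))
# The forecast adds value only if its score comes in below the market's.
```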

But when it comes to the Oscars, having an edge — outperforming the benchmark set by prediction markets — is difficult for several reasons.

At first glance, the secret-ballot process by which Oscar winners are selected isn't all that different from a political election. And since we're pretty good at analyzing political elections, it's not unreasonable to expect that we could use the same techniques to predict the Oscars. However, there are three things that can be reliably integrated into a political forecast that we don't have in an Oscars forecast.

This year, 33 Senate seats are up for election. Right now, you could make a very basic prediction of who will win in each race, and you'd probably be right — most of the time. Take Sen. Dick Durbin, a Democrat in a heavily Democratic state, Illinois. Predicting that he will win his re-election bid in November is not exactly a bold stance. In some of the tighter races, you could take an average of the polls the day before the election and probably call a good share of them correctly. It's accurately predicting the few remaining races — and outperforming the market — that makes a forecast great.
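
A toy version of that poll-averaging baseline looks something like the sketch below; the races and margins are made up purely for illustration:

```python
# Hedged sketch of the "average the polls" baseline: take recent poll margins
# for each race, average them, and call the race for whoever leads on average.
# All races and numbers below are hypothetical.

polls = {
    # race: recent poll margins (Democrat minus Republican, in percentage points)
    "Safe seat (e.g., Illinois)": [18.0, 15.5, 20.0],
    "Toss-up race A": [1.5, -0.5, 2.0],
    "Toss-up race B": [-3.0, -1.0, -2.5],
}

for race, margins in polls.items():
    avg = sum(margins) / len(margins)
    call = "Democrat" if avg > 0 else "Republican"
    print(f"{race}: average margin {avg:+.1f}, call: {call}")
```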

Beyond polling data, a political forecast could also incorporate historical data and patterns as well as current factors motivating the electorate, such as the state of the economy or demographics. But when it comes to the movies, we can’t meaningfully measure these three things. And that’s why it’s much harder to predict the Oscars.

1. The voting process is secretive and fundamentally un-pollable. The Academy of Motion Picture Arts and Sciences has fewer than 6,000 voting members, and it is deliberately vague about exactly how many members it has and who they are. The Los Angeles Times shed light on the makeup of the membership two years ago, but even Hollywood insiders interviewed for the piece expressed intense frustration at their inability to learn the roster. It took a Herculean effort — by more than 20 reporters and researchers — to identify most of the purported members, and the paper still wasn't absolutely certain of the membership. In fact, the number of people who reported they were members exceeded the actual number of Academy members.

If we don’t even know who the voters are, it’s not possible to poll them. Indeed, this is one of the major instances where shoe-leather reporting is the only way to understand the state of the race. While it would be wonderful to get consistent polling of Academy members in the period preceding the Oscars, that’s clearly not in the interests of the Academy, whose main point of exposure is the live broadcast of the Oscars ceremony. So we’re out of luck if we want to approach Oscar predictions using the same polling methods that can be used in a conventional political race.

2. Statistical approaches using historical data have so far not proven decisive in predicting Oscar winners. For instance, this year's Best Picture, "12 Years A Slave," doesn't have much in common with previous winners, at least when it comes to one of the richest data sets in film analysis: IMDb.com, the Internet Movie Database. And while we can pretty consistently figure out what the Academy likes, turning those patterns into a Best Picture winner takes more than a film-by-numbers approach. With the exception of, say, "Crash" (the winner for 2005), the Academy tends to favor films that take risks, break new ground and move cinema forward.
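
For a sense of what that film-by-numbers approach looks like — and why it tends to fall short — here is a hedged sketch that represents each nominee with a few IMDb-style features and measures how closely it resembles past Best Picture winners. The feature values are illustrative, not real IMDb data:

```python
# Hedged sketch: compare a nominee to past winners on a handful of features of
# the kind IMDb exposes. All feature values below are made up for illustration.

import math

# Feature vectors: (runtime in minutes, is_drama, is_biopic, user rating)
past_winners = [
    (157, 1, 1, 7.7),
    (120, 1, 0, 7.8),
    (132, 1, 1, 8.0),
]
nominee = (134, 1, 1, 8.1)

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

avg_distance = sum(distance(nominee, w) for w in past_winners) / len(past_winners)
print(f"Average distance to past winners: {avg_distance:.2f}")
# The catch: a nominee can look nothing like past winners on features like
# these and still win, which is why this approach hasn't proven decisive.
```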

3. The reason people vote for Oscar winners is fundamentally different from why they vote for politicians or issues. Members of the Academy — who include entertainment executives, actors, and directors — might vote with their professional allegiances rather than based on the inherent artistic value of a film or performance. They may also seek to reward a nominee for a body of work rather than the film in question. And even if they're voting on the merits alone, which film is truly the "best" picture and which performer gave the "best" performance are fundamentally subjective questions. And that doesn't even factor in the campaigning.

The behind-the-scenes lobbying that civilians like us don’t have the opportunity to follow can have huge effects on the outcome of the Academy Awards — take, for instance, the last-minute surge in support for “Argo” last year. Reasons for supporting a film could be entirely artistic, or frankly political, and really the only ones who can perceive the momentum of the nominees are the same Hollywood insiders who are involved in the voting process. In the absence of reliable data, that’s what we’ve got.

All that means is that, for now, a market-beating Oscar prediction model is probably out of the picture.

Walt Hickey was FiveThirtyEight’s chief culture writer.
