Welcome back to our series in which we talk to people who think they’ve found a way to predict the Oscars using data! This week, we’re looking at two models at different ends of the complexity spectrum.
If you want to find a way to predict Oscar winners, how much data do you really need? One of these models is very simple and uses only two inputs to figure out the state of the best picture race. The other is more sophisticated and pulls details from dozens of sources to try to get perspective on who’s ahead in multiple categories. But first, a quick update on our own Oscar predictions.
An update to our Oscar predictions
“The Big Short” won the top prize at the Producers Guild Awards on Saturday, making it the new favorite in our model. Historically, the winner of that Producers Guild honor has gone on to win the Academy Award for best picture about 70 percent of the time. Even more importantly, many of the people who vote on it are also members of the Academy; that gives the Producers Guild prize extra predictive power.
In our model, Oscar nominees receive points for being nominated for or winning other awards that historically predict the Oscars. The better a historical predictor a given award is, the more points it’s worth. With 30 percent of possible points remaining to be distributed, “The Big Short” has 19 percent of possible points, “Spotlight” has 14 percent and “The Revenant” has 11 percent. “The Big Short” now has the highest potential final score in the field, followed closely by “Spotlight.” It’s increasingly looking like a two-picture race, but it’s too early to count anyone out.
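The mechanics of a "potential final score" can be sketched in a few lines of Python. The three current percentages come from the figures above; the remaining awards, their point weights and the nomination lists below are invented purely for illustration, since a film can only earn points from awards it's actually up for.

```python
# Toy sketch of the points model. Current scores are from the article;
# the remaining awards, weights, and nominations are hypothetical.

current = {"The Big Short": 0.19, "Spotlight": 0.14, "The Revenant": 0.11}

# Hypothetical remaining awards worth a combined 30 percent of all points.
remaining_awards = {"Guild A": 0.12, "Guild B": 0.10, "Critics C": 0.08}

# Hypothetical nominations: ceilings differ by film because a film can
# only collect points from awards it is nominated for.
nominated_for = {
    "The Big Short": {"Guild A", "Guild B", "Critics C"},
    "Spotlight": {"Guild A", "Guild B", "Critics C"},
    "The Revenant": {"Guild A", "Critics C"},
}

def potential_final_score(film):
    """Current points plus every remaining award the film could still win."""
    reachable = sum(weight for award, weight in remaining_awards.items()
                    if award in nominated_for[film])
    return current[film] + reachable

for film in sorted(current, key=potential_final_score, reverse=True):
    print(f"{film}: ceiling {potential_final_score(film):.0%}")
```

Under these made-up nominations, "The Big Short" and "Spotlight" can both sweep the remaining awards, so their ceilings stay close, which is the sense in which it's a two-picture race.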
Now, a conversation with our modelers! (These interviews have been lightly edited and condensed.)
The small model
Zach Wissner-Gross and Randi Goldman submitted a model that looks at two things — how much money a film made and the film’s score on review aggregator Rotten Tomatoes — to predict the Oscars with a minimalist approach. They are a husband-and-wife team working out of Boston, and they are precious.
Walt Hickey: Can you tell me a little bit about yourselves?
Zach Wissner-Gross: We got married last March. I got my Ph.D. in physics in 2012 and then started an online education company that was recently acquired. So now I’m working at Amplify Education in Brooklyn, but we still live up in Boston. And Randi is an OB-GYN — I’ll let her tell you about that!
Randi Goldman: I graduated from medical school in 2011, and then I did a residency in obstetrics and gynecology at Brigham and Women’s Hospital at Harvard Medical School. I’m now a fellow in reproductive endocrinology and fertility, which means I do a lot of IVF and help infertile couples have babies!
WH: How did y’all meet?
RG: We met online — we met on JDate. Although we grew up around 20 minutes away from each other in New York, we never knew each other until we were both in Boston.
WH: Cool — I didn’t know if there was a StatsPeopleMeet or something like that. So what’s behind your model?
RG: We wanted to pick something that could be used to predict [the eventual winners] right when the nominations came out, rather than waiting until all the other award shows. Obviously, some of the award shows have already happened, but we wanted something that was simple for people to understand and that would let them pick out commonalities among the winners.
ZWG: We take long car rides between Boston and New York when we visit our family, and we were trying to rattle off some of the parameters. So, first, it has to be a good movie — sure, we can parameterize that easily. If nobody’s seen it, that’s not good, and if it makes a lot of money — like the first two Lord of the Rings films or Star Wars this year — that’s a win [for the studio] right there — it doesn’t need a best picture Oscar. So our goal was to have as few parameters as possible to try to create something that’s simple but powerful, and it did pretty well in the past six years.
WH: How does it work?
ZWG: For best picture, it's just how much money the movie made and how it's rated on Rotten Tomatoes. For the money, it's quadratic: a movie gets punished if it makes too much or too little. Then for best actor and best actress, it's the same but with a few more parameters: We decided that when actors or actresses transform on screen, whether by changing their voice, appearance or body, that also makes a difference.
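A minimal sketch of the best-picture half of that model might look like the following. The sweet-spot gross, penalty width and review weighting are invented for illustration; the fitted values aren't published here.

```python
def best_picture_score(gross_musd, rt_score,
                       sweet_spot=130.0, width=150.0, rt_weight=1.0):
    """Two-input score: the Rotten Tomatoes rating plus a quadratic
    box-office term that punishes a film for earning too little or too
    much. sweet_spot, width and rt_weight are made-up coefficients."""
    money_penalty = ((gross_musd - sweet_spot) / width) ** 2
    return rt_weight * (rt_score / 100.0) - money_penalty

# With identical reviews, a film near the box-office sweet spot beats
# both a tiny indie and a blockbuster, per the quadratic penalty.
mid_sized = best_picture_score(140, 95)
indie = best_picture_score(5, 95)
blockbuster = best_picture_score(900, 95)
```

The quadratic shape is the key idea: the penalty grows with the square of the distance from the sweet spot, so being very far off in either direction hurts much more than being slightly off.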
WH: Any plans to try to predict other categories?
ZWG: I think for some of the other Oscars, we’ll try.
RG: I’m on call this week.
Randi and Zach sent in their initial picks a week ago and are now on their honeymoon. We’ll talk to them again once they’re back, but they said they don’t expect their model to change all that much over the season. Films that continue to earn money will have an impact on the model; besides that, their predictions are basically set until the big night.
Leonardo DiCaprio (“The Revenant”) is the major favorite to win best actor, with a 77 percent chance. He is followed by Michael Fassbender (“Steve Jobs”), with a 21 percent chance. In the best actress category, the simple model favors Jennifer Lawrence (“Joy,” a 79 percent favorite) and Cate Blanchett (“Carol”; 11 percent). The model likes “Mad Max: Fury Road” for best picture and its director, George Miller, for the direction prize.
Randi and Zach also sent along predictions for some other categories! “Inside Out” is their 59 percent favorite to win best animated feature; “Shaun the Sheep Movie” is in second place, with 36 percent.
The big model
Next up we have Brian Goegan, an economics professor at Arizona State University who’s a huge fan of the Academy Awards. Goegan’s model shares a lot of DNA with our own: It looks at guild and press awards. But Goegan is pulling in a lot more data, from local critics awards and down-ballot guild awards, going as far back as 1970. Is more data the source of better predictions? I talked to Brian to find out.
Walt Hickey: Hey! So what’s your story?
Brian Goegan: I finished graduate school and my Ph.D. in August 2014 and got a job at Arizona State University, where I went to undergrad. It’s been exciting to come back! I’m colleagues with some of the professors I had when I went to school. When I was in graduate school, I noticed Nate Silver predicting the Academy Awards. So I decided to take that on, mostly as a side project, to teach myself more about econometrics and give myself a project to learn the ins and outs of the software (I use Stata to do most of my analysis).
I’ve always been a big movie fan. I love movies, I love the Academy Awards — it’s sort of my NFL playoffs. This time of year, every time someone is going on about the playoffs, I’m going, “Did you see what happened at the Golden Globes? I can’t believe it!”
WH: So what’s your approach to Oscar prediction?
BG: The first category (and the one I think is the most important) is the previous awards that are given — that includes the Golden Globes but more importantly the different guilds. In the smaller categories, I look at the production designers guild and the editors guild, which give out awards as well. I look at those and see the overlap, because a lot of those voters are in the Academy. I would say in the end that accounts for about 40 percent to 50 percent of the model.
The next most important factor is the other nominations the film receives. That’s the first and only place we get the Academy’s preferences listed out, so a film that’s nominated for best picture but not nominated for best director or editing or cinematography or something probably isn’t going to win. Actors tend to have a better chance of winning if their movie is nominated for best picture, but for actresses, it’s sort of the opposite: The more obscure their film is, the more attention they seem to get.
The third component is the critics' awards; they're the least important but have become more important lately. Their preferences seem to be matching up with the Academy's these days. I think they're similar to endorsements in political primaries — critics give their awards early.
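Taken together, the three ingredients amount to a weighted blend. In the sketch below, the 0.45 weight mirrors Brian's "40 percent to 50 percent" estimate for the prior-awards component; the other two weights are assumptions for illustration only.

```python
# Hedged sketch of blending Brian's three signal groups into one score.
WEIGHTS = {
    "prior_awards": 0.45,       # guilds and Golden Globes ("40 to 50 percent")
    "other_nominations": 0.35,  # assumed weight for the Academy's own slate
    "critics_awards": 0.20,     # assumed weight; "least important" per Brian
}

def contender_score(signals):
    """signals maps each component name to a 0-1 strength for one contender;
    missing components count as zero."""
    return sum(WEIGHTS[key] * signals.get(key, 0.0) for key in WEIGHTS)

# A film that swept the guilds but earned few other nominations still
# scores well, because the prior-awards component dominates.
sweep = contender_score({"prior_awards": 1.0, "other_nominations": 0.2})
```

The point of the weighting is that guild voters overlap with Academy voters, so that component does most of the work; critics matter at the margin.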
WH: What do you consider success? What’s a good year of prediction?
BG: I would say for me a good year would be getting best picture — that’s the prime award, you want to get that right — and doing well in the major categories. But over the whole thing, out of the 19 or 20 categories, getting 15 is a really good year for my model.
Brian’s model, like ours, will move a lot over the next few weeks as more award shows announce their winners. These are his early picks, as of Sunday.
Brian has a three-film race for best picture: "The Big Short" has a 42 percent chance of winning, with "Spotlight" at 31 percent and "The Revenant" at 25 percent. For best director, it's neck and neck between George Miller ("Mad Max: Fury Road"; 49 percent) and Alejandro G. Iñárritu ("The Revenant"; 48 percent).
DiCaprio is a 98 percent favorite to win best actor, and Brie Larson ("Room") is a 99 percent favorite to win best actress. Sylvester Stallone ("Creed") has a 60 percent chance of winning best supporting actor, while both Mark Rylance ("Bridge of Spies") and Mark Ruffalo ("Spotlight") have around a 15 percent chance. Best supporting actress is the closest race, with Alicia Vikander ("The Danish Girl") at 33 percent, Kate Winslet ("Steve Jobs") at 31 percent and Rachel McAdams ("Spotlight") at 26 percent.

Brian pointed out that variables he included in his model to predict the acting categories may be partially responsible for the large leads that DiCaprio and Larson have. To try to capture the politics of voting, Brian incorporated both previous Oscar nominations and previous Oscar wins into the model. It turns out that if someone is "owed" an Oscar — think DiCaprio, who has been nominated previously but has never won — he or she has a higher chance of victory. On the other hand, someone who has won an Oscar previously has a lower chance of winning.
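That "owed an Oscar" effect can be sketched as a simple probability nudge. The coefficient values below are invented, since the fitted model isn't published; only the direction of each effect comes from Brian's description.

```python
def owed_adjustment(prior_nominations, prior_wins,
                    loss_bonus=0.04, win_penalty=0.06):
    """Hypothetical nudge to a contender's win probability: each past
    losing nomination adds a bonus, each past win subtracts a penalty.
    The loss_bonus and win_penalty coefficients are made up."""
    past_losses = prior_nominations - prior_wins
    return loss_bonus * past_losses - win_penalty * prior_wins

# A DiCaprio-style case (several past nominations, no wins) gets a
# positive nudge; a past winner gets a negative one.
overdue = owed_adjustment(prior_nominations=4, prior_wins=0)
past_winner = owed_adjustment(prior_nominations=1, prior_wins=1)
```

The two coefficients pull in opposite directions, matching the article's claim: being owed an Oscar helps, having already won one hurts.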
He also threw in that “Carol” and “The Hateful Eight” are early leaders in the best score category and that “Inside Out” is a slam dunk to win best animated feature.