In Oscar Predicting, Data Still Beats Expertise

The quants versus the critics, Year 3

You may judge a film by its heart, but for some folks, it’s just about the cold, hard numbers.

Since 2013, Popular Science has profiled the statisticians who have tried their hands (or rather, their algorithms) at Oscar predicting. These math models measure how quantifiable factors, such as Rotten Tomatoes scores and earlier awards, correlate with Oscar wins, then crunch the numbers for predictions free of intuition and sentiment. In past years, we’ve found the quants do slightly better than traditional movie critics at predicting the awards. We’ve taken to calling the comparison the quants versus the critics.
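The actual models vary from quant to quant, but the core idea can be sketched as a weighted score over measurable factors, with the nominee scoring highest picked as the winner. The films, factors, and weights below are invented purely for illustration; a real model would fit its weights to historical correlations.

```python
# Toy sketch of an Oscar-prediction model: score each nominee from
# measurable factors and predict the highest scorer. All data and
# weights here are hypothetical.

nominees = {
    "Film A": {"rt_score": 92, "precursor_wins": 4},
    "Film B": {"rt_score": 88, "precursor_wins": 1},
    "Film C": {"rt_score": 97, "precursor_wins": 0},
}

# Hypothetical weights, standing in for coefficients a real model
# would estimate from past ceremonies.
WEIGHTS = {"rt_score": 0.02, "precursor_wins": 0.5}

def score(features):
    """Linear score: weighted sum of the measurable factors."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# The prediction is simply the nominee with the highest score.
prediction = max(nominees, key=lambda name: score(nominees[name]))
print(prediction)
```

Here "Film A" wins despite the lower Rotten Tomatoes score, because its precursor-award wins carry more weight; that trade-off between critical reception and earlier awards is exactly what the fitted coefficients encode.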

This year, the quants win again. The three models we examined got, at most, one prediction wrong out of 10 popular Oscar categories. Meanwhile, the four major critics’ predictions we examined each got two or three predictions wrong. See a table of the comparisons here.

Sadly, we’ve seen a number of our favorite quants leave the game recently. Farsite Forecast, a consultancy that performed best among the quants in predicting the results of the 2013 show, didn’t offer predictions this year. Neither did Peter Gloor, an MIT professor who wrote his first Oscar-predicting algorithm in 2007. Farsite and Mr. Gloor: Please come back!

To learn more about how Oscar math modeling works, check out our profiles from 2013 and 2014.

Until next time.