More often than not, the NCAA men’s basketball tournament is defined by upsets that occur in its first two days. An average fan may struggle to recall all the teams that made last year’s Final Four but will remember Middle Tennessee, Mercer and Florida Gulf Coast for years to come. It happens like clockwork: In each of the past 10 years, at least five underdogs seeded No. 10 or worse have pulled out a first-round win.
Because of this, filling out a bracket becomes an exercise in sussing out who will be this year’s Cinderellas. And this is no simple task: Over that same 10-year span, 168 underdogs seeded No. 10 through No. 15 fell quietly into bracket oblivion, taking with them anyone who bet big on them.
The internet is filled with heuristics for deciphering which obscure mid-majors are dangerous in the round of 64 and which overseeded brand names are vulnerable. The more analytically minded of these often highlight particular statistics that show up in teams that have pulled off stunners in the past — such as solid offensive rebounding or turnover rates. But these attributes tend to correlate not with underdog success specifically but with success in general, and they don’t translate well across conferences and leagues.
Fortunately, this is nothing a little machine learning can’t fix.
Looking for a way to apply my graduate coursework in data science to March Madness upsets, I designed a model that was inspired by results from computer image recognition. My Localized Upset Classification model (LUC, pronounced “Luke”) is trained to find upsets using team-to-team similarities instead of raw statistics such as offensive rebounds or turnover rates.
In image recognition, if you’re trying to decide whether a certain image is a dog, you’re more interested in how similar it is to other images of dogs than you are in the image’s “raw statistics,” like brightness and hue. This sort of thinking lends itself to local insights, as opposed to global ones.
A global insight is one that attaches general importance to individual characteristics, as in, “As brightness increases, the image becomes more likely to be a dog,” or in basketball terms, “Teams with high offensive rebounding rates are more likely to win as a high seed.” By contrast, a local insight is much more modest: “Images similar to this image of a dog are likely to be dogs.” In computer vision and in LUC, similarities are calculated with something called the Gaussian kernel, and leveraging these similarities allows a modeler to capture signals that are present in small regions without making generalizations. By looking at local trends in basketball data, LUC is essentially searching for the mathematical equivalents of statements like, “These guys remind me of that George Mason team that reached the Final Four.”1
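In code, a Gaussian kernel similarity is a one-liner. The sketch below is a minimal illustration, not LUC's actual implementation: the three-number stat lines and the bandwidth value are hypothetical stand-ins for whatever feature set the model really uses.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Similarity between two feature vectors: 1.0 for identical teams,
    decaying toward 0 as their statistical profiles diverge."""
    sq_dist = np.sum((np.asarray(x) - np.asarray(y)) ** 2)
    return np.exp(-sq_dist / (2 * bandwidth ** 2))

# Hypothetical standardized stat lines (e.g., offensive rating, tempo, reb%).
murray_state = [1.2, 0.4, -0.1]
george_mason = [1.1, 0.5, 0.0]
power_team   = [-0.8, 1.5, 0.9]

print(gaussian_kernel(murray_state, george_mason))  # close to 1: similar profiles
print(gaussian_kernel(murray_state, power_team))    # near 0: dissimilar profiles
```

The bandwidth controls how "local" the insight is: a small bandwidth means only near-identical teams count as neighbors, while a large one blurs everything together and starts behaving like a global model.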
One added component of my model: LUC does not just calculate similarities, it also learns the predictive power of each similarity.2 In other words, it investigates whether teams that remind us of the George Mason team really do perform better than teams that don’t.
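The article doesn't spell out how LUC learns these weights. One standard way to do it is kernel logistic regression: represent each team by its vector of similarities to every historical team, then learn a coefficient per historical team, so a large positive weight on, say, the George Mason column means resemblance to George Mason genuinely predicts upsets. The sketch below uses random toy data and plain gradient descent purely to illustrate the mechanics.

```python
import numpy as np

def gaussian_kernel_matrix(X, Z, bandwidth=1.0):
    # Pairwise Gaussian similarities between rows of X and rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def fit_kernel_logistic(K, y, lr=0.5, steps=2000):
    """Learn one weight per historical team by gradient descent on the
    logistic loss; weight w[j] is the predictive power of similarity
    to historical team j."""
    w = np.zeros(K.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(K @ w + b)))
        w -= lr * (K.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))        # toy standardized stat lines
y = (X[:, 0] > 0).astype(float)     # stand-in "pulled an upset" labels
K = gaussian_kernel_matrix(X, X)
w, b = fit_kernel_logistic(K, y)

# Score a new team by its similarities to every historical team.
x_new = rng.normal(size=(1, 3))
k_new = gaussian_kernel_matrix(x_new, X)
p_new = 1 / (1 + np.exp(-(k_new @ w + b)))
prob = float(p_new[0])              # learned upset probability for the new team
```

This is only one plausible reading of "learns the predictive power of each similarity"; regularization, bandwidth tuning and the real feature set would all matter in practice.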
Back-testing LUC on the past four tournaments,3 we can see its propensity to pick upsets often and with prescience. Here are two metrics for judging the model: the percentage of the upsets predicted by LUC that turned out to be correct (upset precision) and the share of upsets predicted by LUC among all upsets that actually occurred (upset recall).
[Table: share of actual upsets predicted (upset recall) and accuracy of predicted upsets (upset precision), by year]
The early returns are promising. For those four tournaments, LUC scored a precision of 70.3 percent while also identifying 59.4 percent of all upsets, including those of No. 9 seeds over No. 8 seeds. The model had some risky calls that worked in its favor. It gave the 2014 Mercer team a 57.1 percent chance of beating No. 3 Duke. Two years later, it gave No. 13 Hawaii a 55.4 percent chance of beating Cal and No. 14 Stephen F. Austin a 61.5 percent chance of beating West Virginia. Any of these picks would be enough to catapult LUC to the top of most office-pool standings in the first week.
So what does LUC think of the 2018 tournament? The model favors 10 teams that the committee seeded as underdogs. Here they are ranked by the likelihood of an upset, which can be interpreted as the model’s confidence.
| Seed | Lower seed | Seed | Higher seed | Upset probability |
|---|---|---|---|---|
| 12 | Murray State | 5 | West Virginia | 64.8% |
| 9 | NC State | 8 | Seton Hall | 60.4% |
| 11 | San Diego St. | 6 | Houston | 57.9% |
| 12 | South Dakota St. | 5 | Ohio State | 56.0% |
| 14 | Stephen F. Austin | 3 | Texas Tech | 51.9% |
Since these predictions were based on measures of similarity among all teams in Division I, we can get a glimpse into the model’s internal logic by examining the “neighbors” of the underdogs that LUC promotes. As a demonstration, let’s look at what LUC identifies as the most likely upset of the 2018 first round: fifth-seeded West Virginia’s matchup against double-digit underdog Murray State, an automatic qualifier from the Ohio Valley Conference. Murray State’s three nearest neighbors, the teams that most resemble the Racers statistically, are the 2007-08 Xavier Musketeers, the 2004-05 Florida Gators and the 2010-11 Villanova Wildcats. That is startlingly good company. That Xavier team reached the Sweet 16, and the fourth-seeded Florida team included Al Horford and much of the core of the following year’s team that won the championship. Even that Villanova team, which was not one of Jay Wright’s best squads, still made the tournament as a No. 9 seed.
West Virginia’s neighbors, on the other hand, are a bit of a mixed bag: the 2012-13 Santa Clara Broncos, which didn’t make the tournament (but did win the CBI, which is like the NIT of the NIT); the Kyle Lowry-led 2005-06 Villanova Wildcats, which reached the Elite Eight; and the 2008-09 Marquette Golden Eagles, which won a single tournament game as a No. 6 seed. LUC compares each team with hundreds of historical cases, but just by looking at a few comparisons, we can see that the region of teams near West Virginia contains squads with a huge range of postseason success (CBI to Elite Eight) and the region around Murray State has more winners than its seed might suggest.
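A neighbor lookup like this just ranks the historical library by kernel similarity to the team in question. The team names come from the examples above, but the feature vectors below are invented purely for illustration:

```python
import numpy as np

def nearest_neighbors(target, library, names, k=3, bandwidth=1.0):
    """Return the k historical teams most similar to `target`
    under the Gaussian kernel, with their similarity scores."""
    d2 = ((library - target) ** 2).sum(axis=1)
    sims = np.exp(-d2 / (2 * bandwidth ** 2))
    order = np.argsort(-sims)[:k]
    return [(names[i], float(sims[i])) for i in order]

# Hypothetical standardized stat lines; the real model uses far more features.
names = ["2008 Xavier", "2005 Florida", "2011 Villanova", "2013 Santa Clara"]
library = np.array([
    [1.0, 0.2, -0.3],
    [0.9, 0.3, -0.2],
    [0.7, 0.1, -0.4],
    [-1.0, 1.2, 0.8],
])
racers = np.array([0.95, 0.25, -0.3])  # invented vector for Murray State
print(nearest_neighbors(racers, library, names))
```

Because the kernel decays with distance, a team's upset probability ends up dominated by how its closest neighbors fared, which is exactly the local behavior described above.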
The similarity scores sometimes tell an interesting story themselves. The 2018 Arkansas Razorbacks, for example, are most similar to the 2016 Arkansas Razorbacks, the 2013 Arkansas Razorbacks and the 2017 Arkansas Razorbacks. Arkansas’ fifth nearest neighbor? The 2007-08 Missouri Tigers coached by Mike Anderson, who’s been coaching since 2011 at — you guessed it — Arkansas. Unfortunately, these Razorbacks have been consistent in identity but inconsistent in the postseason, and LUC is betting instead on Butler.
Picking Butler this year might not make LUC seem risky or unique (the Bulldogs are a slight favorite in Las Vegas), but LUC is much more aggressive elsewhere. Right now FiveThirtyEight’s predictions give Stephen F. Austin, Buffalo and Murray State an 11, 15 and 16 percent chance to win, respectively, but LUC assigns probabilities over 50 percent to each of these underdogs, and even suggests that Murray State has nearly a 65 percent chance to win. This model picks with such brazenness because it was trained specifically to hunt for upsets.
LUC and the FiveThirtyEight model have comparable accuracy in picking first-round games, but they make mistakes differently. LUC captures most of the upsets, but it does occasionally recommend ill-timed bets against powerhouses. More conservative models like FiveThirtyEight’s often seem as if they are doing a better job because their mistakes align with outcomes that no one considered plausible. LUC’s false positives, in which an underdog is favored but defeated, seem avoidable. But that is not how fans bet on March Madness in their brackets. No one wants to be stuck rooting against Cinderella in the middle of a miracle.
Check out our latest March Madness predictions.