March Madness Kaggle Competition
This year I participated in Kaggle's March Madness competition. However, this isn't your typical office pool; instead, participants are tasked with assigning every possible match-up in the field of 64 a probability that a certain team will win.
In this post, I will describe my simple approach to building a machine learning model to predict winning percentages for March Madness games. Then, we'll see how it performed in this year's competition!
Here's a sneak peek of my predicted bracket using machine learning...
Introduction¶
The best part of the March Madness Kaggle competition is the data. Kaggle provides formatted regular-season stats, play-by-play data, Tourney match-ups and results, plus more, going back to 2003, and some data back to 1985.
Unlike normal bracket challenges, Kaggle doesn't ask you to predict just the winner of each game. Instead, they want the probability that a team will win. Furthermore, we have to predict win probabilities for every possible game, which for a $64$-team bracket can be calculated as $\sum_{i=1}^{64} (64-i) = \binom{64}{2}$, and comes to a grand total of $2016$ games.
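A quick sanity check on that count in Python:
from math import comb

# Every (unordered) pairing of 64 teams -- both expressions give 2016.
print(sum(64 - i for i in range(1, 65)), comb(64, 2))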
Submissions then comprise a $2016\times2$ table where each row represents one of the possible match-ups: the first column contains a string that defines a match-up, such as 2016_North Carolina_Clemson, and the second column gives the probability that the first team (UNC in this case) wins, which we call $y_p$.
import pandas as pd
final_predictions = pd.read_csv('/Users/bechtel/Downloads/adv_stats_elo_predictions.csv')
df_teams = pd.read_csv('/Users/bechtel/Kaggle/NCAA_mens_tourney/Stage2/Teams.csv')
final_predictions.head()
In the final submission format, the match-up ID uses team IDs, as defined below.
df_teams.head()
Measuring Success¶
Submissions are scored using the log loss. This is defined as
$$-\text{log} P(y_t|y_p) = -(y_t \text{log}(y_p) + (1 - y_t) \text{log}(1 - y_p))$$.
Here $y_t$ is the true outcome (1 when the first team wins), and $y_p$ is the predicted probability that $y_t=1$. Hence if you predict an outcome with $y_p=100\%$ and it turns out that the outcome is true, $y_t=1$, then $$-\text{log} P(y_t|y_p) = -1 \cdot \text{log}(1.0) = 0.$$ Thus a perfect log-loss score is 0.
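To make the definition concrete, here is a minimal sketch of the per-game log loss (the eps clipping is my own addition, mirroring what scoring implementations typically do to avoid taking log(0)):
import numpy as np

def game_log_loss(y_t, y_p, eps=1e-15):
    # Clip the prediction away from exactly 0 and 1 to avoid log(0).
    y_p = min(max(y_p, eps), 1 - eps)
    return -(y_t * np.log(y_p) + (1 - y_t) * np.log(1 - y_p))

print(game_log_loss(1, 1.0))  # ~0: a confident, correct prediction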
To get a better intuition of log loss let's remind ourselves of the shape of the log function.
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0.001,0.999,100)
plt.plot(x,-np.log(x),label='$y_t=1$')
plt.plot(x,-np.log(1-x),label='$y_t=0$')
plt.xlabel('$y_p$')
plt.ylabel('-log $P(y_t|y_p)$')
plt.legend()
plt.show()
This plots the log loss of a prediction $y_p$ (x-axis), which is the probability that the outcome is true ($y_t=1$). To find the log loss given $y_p$ when the outcome is true, follow the blue curve. As expected, the log loss is zero when our prediction of $y_p=100\%$ matches the outcome $y_t=1$. When the outcome is false ($y_t=0$), follow the orange curve instead.
As we can see, log loss severely penalizes high-confidence mistakes. If we predict a team wins with 99% probability and they lose, we incur a log loss of $-\text{log}(1 - 0.99) \approx 4.6$ for that single game.
What happens if we assume that every game is a coin toss? I.e. the probability of a team winning is just 50%.
Plugging into the formula: $-\text{log} P(y_t|y_p) = -1 \cdot \text{log}(0.5) = 0.6931$.
This sets a good baseline. Any model we build should predict winning percentages with a log loss below 0.6931; otherwise we may as well just assign 50% to every game...
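Using the game_log_loss sketch from above, we can confirm that a 50% prediction costs the same whether the team wins or loses:
print(game_log_loss(1, 0.5), game_log_loss(0, 0.5))  # both 0.6931...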
Building a Machine Learning Model¶
Now, given a boatload of data, how do we go about constructing a probabilistic classification model? Scikit-learn provides a useful flow chart to aid in model selection:
First, this is a classification problem, so the techniques that we explored in the last post are all at our disposal.
Although scikit-learn doesn't include it in the flow chart, logistic regression is a useful model for classification problems. In particular, fitting a logistic regression model corresponds to minimizing the log loss, which makes it a natural candidate. However, there are other possibilities such as neural nets, random forests, and support vector machines.
Feature Engineering¶
Before we settle on a classification algorithm, let's consider the feature space. That is, what data is most important in determining a game's outcome?
Brainstorming some possibilities:
1) Seed. We expect a 1-seed to defeat a 16-seed.
An obvious place to start is team seed. One of the Kaggle starter kernels employs this strategy, and it performs reasonably well. However, since we have so much data, I think it makes sense to feed our model more descriptors than just seed.
2) Season Stats. A team that averages 70 points a game might defeat a 60 ppg team.
There are many different stats in basketball, but some of the most predictive, so-called Advanced Stats, combine several raw statistics into a single number. Many of these advanced stats have been implemented by another Kaggler in this notebook, so we will use that as a starting point. However, since we are just interested in predicting Tourney results, we will use the season-averaged advanced stats, giving us a picture of a team's performance heading into March Madness.
3) Elo rating. A team that has a 25-6 record against strong opponents should defeat a team with a 25-6 record against weak opponents.
The Elo rating was first devised as a chess ranking system but has since spread to other sports and competitive games. Fortunately, Elo ratings were also implemented in a Kaggle notebook here, so we will calculate each team's Elo rating at the end of the regular season. A sketch of the update rule follows below.
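For intuition, here is a minimal sketch of the Elo update (the K-factor of 20 and the 400-point scale are common defaults, not necessarily the values used in the linked notebook):
def elo_expected(r_a, r_b):
    # Expected score of team A against team B.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_winner, r_loser, k=20):
    # The winner takes points from the loser in proportion to how
    # surprising the result was.
    shift = k * (1.0 - elo_expected(r_winner, r_loser))
    return r_winner + shift, r_loser - shift

print(elo_update(1600, 1500))  # the favored winner gains ~7 points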
Feature Vectors / Design Matrix¶
For each team we will concatenate a handful of season-averaged Advanced Stats and the team's Elo rating.
Then, the database of training data looks like:
# Example training data.
# Each row holds one team's season-averaged features for a given season.
training_data = pd.read_csv('/Users/bechtel/Kaggle/NCAA_mens_tourney/training_data.csv')
training_data.head()
Here, we see some of the stats that I used as descriptors. WScore is the average points scored in wins, while LScore is the average score in losses. Also, notice the last column, which lists the team's SeasonELO rating for that season.
A list of all of the training features is given below.
training_data.columns
Hence each team has many descriptors for its regular-season performance. Also note that TeamID and Season aren't actually part of the feature vector; they are just included to make the table more readable.
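In code, one might strip out those two bookkeeping columns and index the features by team and season, e.g. (a sketch using the columns above):
feature_cols = [c for c in training_data.columns if c not in ('TeamID', 'Season')]
team_features = training_data.set_index(['Season', 'TeamID'])[feature_cols]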
Set up Classification Problem¶
Now how do we set up our labels and features for this classification problem?
We are going to be training on all of the past tournaments for which we have data. This includes the 2003-2017 seasons.
For each Tourney game, we will describe it by the year, the name of the first team, t1, and the name of the second team, t2. The label describing the outcome is y, which equals 1 when the first team wins and 0 when the second team wins.
The feature vector, $x$, for this game is the difference of the first team's feature vector, $f_1$ and the second team's feature vector, $f_2$:
$$ \vec{x} = \vec{f_1} - \vec{f_2}.$$
Then we build our training data as a list of all of the $\vec{x}$'s, and the labels are the list of results, y.
To keep the model symmetric, we will also include a copy of every example with the first and second teams swapped (and the label flipped). A sketch of this construction follows.
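Roughly, the design matrix could be assembled like this (the games and team_features frames and their columns are hypothetical stand-ins for the real pipeline):
import numpy as np

def build_xy(games, team_features):
    # games: one row per Tourney game with columns Season, t1, t2, y.
    # team_features: feature vectors indexed by (Season, TeamID).
    X, y = [], []
    for _, g in games.iterrows():
        f1 = team_features.loc[(g.Season, g.t1)].values
        f2 = team_features.loc[(g.Season, g.t2)].values
        X.append(f1 - f2)   # x = f1 - f2
        y.append(g.y)
        X.append(f2 - f1)   # swapped copy, label flipped
        y.append(1 - g.y)
    return np.array(X), np.array(y)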
Train Model¶
Now we can use any of our favorite classification algorithms to build our model. I tried several, and settled on logistic regression, which naturally outputs the probability that a certain example will result in a specified outcome.
Logistic Regression, briefly.¶
Essentially, logistic regression maps a feature value $x\in\mathbb{R}$ to a probability $\mathbf{P}(x)$ using a logistic function:
$$ \mathbf{P}(x) = \frac{1}{1+e^{-(\beta_0 + \beta_1 x)}} $$
The parameters $\beta_0$ and $\beta_1$ are estimated while training the model.
# plot logistic function
x = np.linspace(-5,5,100)
plt.plot(x,1./(1.+np.exp(-x)))
plt.ylabel(r'$\mathbf{P}(x)$')
plt.xlabel('x, (feature value)')
plt.show()
Fitting and Validation¶
Of course, we want to avoid overfitting, so we use cross-validation when training our model, and we exhaustively search the space of hyperparameters to find appropriate values using the GridSearchCV class from scikit-learn.
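A rough sketch of what such a fit might look like (the scaling step, grid values, and fold count here are my assumptions; see the notebook for the actual setup):
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
param_grid = {'logisticregression__C': [0.01, 0.1, 1, 10]}
search = GridSearchCV(pipe, param_grid, scoring='neg_log_loss', cv=5)
search.fit(X, y)            # X, y built as described above
print(-search.best_score_)  # cross-validated log loss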
In the end, I achieved log losses around 0.5-0.6 using logistic regression, which is at least better than assigning 50% probabilities to every game (log loss 0.6931).
# Results of cross-validated fitting scheme
# check out the github notebook for full details
df_results = pd.read_csv('/Users/bechtel/Kaggle/NCAA_mens_tourney/fitting_results.csv')
df_results
I thought this was decent for the amount of time I was willing to invest, so I trained my model on all available data from 2003-2017, predicted the probabilistic outcomes for all possible 2018 match-ups, and uploaded the results to Kaggle!
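Generating the submission then comes down to a few lines (a sketch: X_2018 and game_ids are hypothetical names for the 2018 feature differences and Kaggle's match-up ID strings, and the ID/Pred columns follow the sample submission format):
probs = search.predict_proba(X_2018)[:, 1]  # P(first team wins)
submission = pd.DataFrame({'ID': game_ids, 'Pred': probs})
submission.to_csv('adv_stats_elo_predictions.csv', index=False)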
Track my Results¶
Then comes the exciting part... seeing how my model stacks up against the competition. Since this was my first Kaggle competition, I got pretty obsessed with checking my current standing on the leaderboard. While waiting for Kaggle to update the leaderboard, however, I simply fed the completed games into my model so that I could at least track my personal log-loss score with up-to-date game results. The code for that can be found on Kaggle.
Other apps that allow you to visualize your predictions include an HTML version and a Python module, bracketeer, which produced the bracket pictured at the top of the page.
Currently I am at least in the top 33%, even after the tough Virginia loss.
At the close of the tournament I finished at 288/934. Not bad for a first go!
Concluding Remarks¶
Participating in this competition was a fun experience. I encourage any readers to enter next year, and to explain your methodology in a blog post!