Game projection model – Pre-season Model (Part IIIA)

This is the continued saga of “how to build a game projection”. I’d recommend reading the three previous articles of the series before you continue reading this post:

The goal in this article is to lay the groundwork for the final game projection model. In the previous article I built the in-season model. Today I will focus on the pre-season model and combining the two.

Just to summarize: The in-season model is supposed to adjust player values based on performance in the present season. The pre-season model is supposed to give us the player values at the start of the season based on the performance in previous seasons.

Isolating pre-season models

To begin with, I’m only interested in the isolated performance of the potential pre-season models. So, I’m basically calculating a player value at the start of the season and keeping this value throughout. Then, based on these player values, I can calculate win probabilities for every game and use log loss to compare the model performance to the actual results.

For now, I will look at four models: Evolving-Hockey’s GAA and xGAA models, plus sGAA and GSAx. The sGAA model is a combination of Evolving-Hockey’s GAR and xGAR models. You can read about the sGAA model here:

I’m just using a 1-year sample size, so the player values are simply the sum of GAA, xGAA, sGAA or GSAx in the previous season. I then calculate the team strengths for every game the same way I did in the previous articles:

xWin% (team strength) = 0.5 + Player Value * x

The win probability is then:

Win Probability = xWin% / (xWin% + Opponent xWin%)

Then the log loss for every game is calculated:

Log loss = -ln(1 – ABS(Result – Win Probability))

If a team with a win probability of 60% loses, then the log loss is:

Log loss = -ln(1 – ABS(0 – 0.6)) = -ln(0.4) = 0.9163

The x-value in the first formula can now be found by minimizing the log loss – the lower the log loss the better the model!
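The steps above can be sketched in code. This is a minimal illustration, not the actual implementation behind the article: the example games and player values are made up, and the optimizer is a simple grid search over x rather than whatever fitting routine was actually used.

```python
import math

def team_strength(player_value_sum, x):
    """xWin% = 0.5 + (sum of player values) * x"""
    return 0.5 + player_value_sum * x

def win_probability(own_strength, opp_strength):
    """Normalize the two team strengths into a win probability."""
    return own_strength / (own_strength + opp_strength)

def log_loss(result, win_prob):
    """result is 1 for a win, 0 for a loss."""
    return -math.log(1 - abs(result - win_prob))

# The example from the text: a team with a 60% win probability loses.
print(round(log_loss(0, 0.6), 4))  # 0.9163

# Hypothetical games: (home player-value sum, away player-value sum, home result)
games = [(12.0, -5.0, 1), (3.0, 8.0, 0), (-2.0, 4.0, 1)]

def mean_log_loss(x):
    total = 0.0
    for home_pv, away_pv, result in games:
        p = win_probability(team_strength(home_pv, x),
                            team_strength(away_pv, x))
        total += log_loss(result, p)
    return total / len(games)

# Grid search for the x that minimizes mean log loss.
best_x = min((i / 10000 for i in range(200)), key=mean_log_loss)
```

In practice the search would run over all games in the sample, but the mechanics are the same: pick the x that minimizes the average log loss.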

With this procedure I found the x-values and corresponding log loss values for each model:

| Model  | x-value | Log loss |
|--------|---------|----------|
| Base   | –       | 0.6881   |
| Market | –       | 0.6714   |
| GAA    | 0.0043  | 0.6777   |
| xGAA   | 0.0047  | 0.6759   |
| sGAA   | 0.0051  | 0.6761   |
| GSAx   | 0.0019  | 0.6877   |

Here’s the visualization of the log loss as a function of game number. The lines are just the trendlines to make the graph easier to interpret.

We see that xGAA and sGAA are better pre-season models than GAA. We also see that GSAx is a really bad model. There’s nothing surprising in this. Other research has shown that goaltender performance in one year correlates poorly with goaltender performance the next year.

Based on these results, I want to continue working on an sGAA-based pre-season model. The first step is to combine sGAA (skaters) and GSAx (goaltenders). Here are the results:

| Model     | x1 (sGAA) | x2 (GSAx) | Log loss |
|-----------|-----------|-----------|----------|
| Base      | –         | –         | 0.6881   |
| Market    | –         | –         | 0.6714   |
| sGAA+GSAx | 0.00505   | 0.00065   | 0.6760   |

It’s worth noting how little weight is put on goaltending. The predictability of goaltending from season to season is very small.
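As a sketch, the split weighting can look like this. The weights are the fitted values from the table above; the example player values are made up, and the function name is my own.

```python
X1_SGAA = 0.00505  # fitted weight on skater value (sGAA)
X2_GSAX = 0.00065  # fitted weight on goaltender value (GSAx)

def preseason_strength(skater_sgaa_sum, goalie_gsax_sum,
                       x1=X1_SGAA, x2=X2_GSAX):
    """xWin% with separate weights for skaters and goaltenders."""
    return 0.5 + x1 * skater_sgaa_sum + x2 * goalie_gsax_sum

# Hypothetical team: skaters worth +20 sGAA and a goalie worth +10 GSAx
# last season. The goalie moves the needle far less than the skaters.
strength = preseason_strength(20.0, 10.0)
```

Note how a +10 GSAx goalie adds only 0.0065 to the team’s xWin%, while +20 of skater sGAA adds 0.101 — the model barely trusts last season’s goaltending.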

Combining pre-season model and in-season model

The next step is to combine the in-season model found in the previous post and the pre-season model. Here are the results:

| Model       | x1 (In-season) | x2 (Pre-season) | Log loss |
|-------------|----------------|-----------------|----------|
| Base        | –              | –               | 0.6881   |
| Market      | –              | –               | 0.6714   |
| Combination | 0.689          | 0.598           | 0.6710   |

Now we have a model that can compete with the closing betting lines. In fact, the log loss of the model is slightly below the log loss of the market.

Currently, the pre-season model is weighted the same throughout the season, but it might be preferable to decrease the weight put on the pre-season model as the season progresses — putting more weight on new data and less weight on old data.

I will decrease the weight by 1% or 2%, respectively, for every game a player has played:

Value (weighted 1%) = Value * (1 – 0.01 * Game No)

Value (weighted 2%) = Value * (1 – 0.02 * Game No)    (if Game No ≥ 50, the value is 0)

By doing this, the weight put on pre-season data decreases as the season progresses. With a 2% decrease, players playing in their 51st game or later will have all their value come from the in-season model.
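A minimal sketch of this weighting, with the clamp at zero made explicit (the function name is my own):

```python
def weighted_preseason_value(value, games_played, decay=0.01):
    """Scale a player's pre-season value down by `decay` per game played,
    never letting the weight go below zero."""
    return value * max(0.0, 1 - decay * games_played)
```

With decay=0.01, half the pre-season value remains after 50 games; with decay=0.02, nothing remains from that point on.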

Here are the results with these weightings:

| Model       | x1 (In-season) | x2 (Pre-season) | Log loss |
|-------------|----------------|-----------------|----------|
| Base        | –              | –               | 0.6881   |
| Market      | –              | –               | 0.6714   |
| 1% decrease | 0.747          | 0.862           | 0.6707   |
| 2% decrease | 0.910          | 1.028           | 0.6711   |

The 1% decrease appears to give the best results.

Theoretical betting results

What would happen if we used the calculated win probabilities to bet on the closing betting lines? The approach here is to bet using the Kelly Criterion with a 0.3 multiplier:

Bet size(Risk) = ((Win probability * (Odds – 1) – 1 + Win probability) / (Odds – 1)) * 0.3
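The formula above is the standard Kelly fraction scaled by 0.3. A sketch in code, assuming decimal odds; the clamp that skips bets with no edge is my own addition, since Kelly goes negative there:

```python
def kelly_bet_fraction(win_prob, decimal_odds, multiplier=0.3):
    """Fraction of bankroll to risk: Kelly Criterion scaled by `multiplier`.
    Returns 0 when the model sees no edge (assumption: no bet is placed)."""
    b = decimal_odds - 1  # net payout per unit staked
    edge = win_prob * b - (1 - win_prob)
    return max(0.0, edge / b) * multiplier

# e.g. the model says 60% at decimal odds of 1.90
risk = kelly_bet_fraction(0.60, 1.90)
```

The 0.3 multiplier is a common fractional-Kelly hedge: full Kelly maximizes long-run growth only if the probabilities are exactly right, so scaling down protects against model error.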

This is of course somewhat cheating, because the model is built on the very games we’re “betting” on… but since the sample size is six seasons, it should still be a good indicator of model performance. Here are the results:

| Season    | Risk   | Bet result | ROI    |
|-----------|--------|------------|--------|
| 2014-2015 | 6646%  | 429%       | 6.5%   |
| 2015-2016 | 5144%  | -135%      | -2.6%  |
| 2016-2017 | 4118%  | 110%       | 2.7%   |
| 2017-2018 | 4771%  | 359%       | 7.5%   |
| 2018-2019 | 6436%  | 372%       | 5.8%   |
| 2019-2020 | 5493%  | 722%       | 13.2%  |
| Total     | 32608% | 1858%      | 5.7%   |

We could also look at betting results on specific teams. Here are the results of bets placed on each team:

| Team | Risk  | Bet result | ROI    |
|------|-------|------------|--------|
| N.J  | 2282% | 20%        | 0.9%   |
| CBJ  | 1827% | 188%       | 10.3%  |
| OTT  | 1742% | -59%       | -3.4%  |
| NYI  | 1537% | 190%       | 12.4%  |
| CGY  | 1527% | 3%         | 0.2%   |
| NYR  | 1503% | 189%       | 12.6%  |
| WPG  | 1383% | 94%        | 6.8%   |
| MIN  | 1365% | 111%       | 8.1%   |
| PHI  | 1279% | 8%         | 0.6%   |
| EDM  | 1163% | 151%       | 13.0%  |
| DET  | 1099% | 45%        | 4.1%   |
| DAL  | 1099% | 145%       | 13.2%  |
| MTL  | 1072% | -252%      | -23.5% |
| FLA  | 1024% | 41%        | 4.0%   |
| VAN  | 994%  | 113%       | 11.4%  |
| STL  | 956%  | 195%       | 20.4%  |
| CAR  | 948%  | 48%        | 5.1%   |
| ARI  | 933%  | -116%      | -12.5% |
| ANA  | 925%  | 105%       | 11.4%  |
| NSH  | 881%  | 150%       | 17.0%  |
| S.J  | 863%  | -37%       | -4.2%  |
| TOR  | 828%  | -89%       | -10.7% |
| BOS  | 818%  | 76%        | 9.3%   |
| T.B  | 756%  | 109%       | 14.4%  |
| PIT  | 735%  | 66%        | 9.0%   |
| CHI  | 674%  | 63%        | 9.4%   |
| COL  | 653%  | -13%       | -2.0%  |
| WSH  | 536%  | 144%       | 26.8%  |
| L.A  | 482%  | 83%        | 17.3%  |
| VGK  | 403%  | 162%       | 40.2%  |
| BUF  | 324%  | -78%       | -23.9% |

So, the model was significantly higher on N.J and CBJ than the market. Here are the results of bets placed against each team:

| Team | Risk  | Bet result | ROI    |
|------|-------|------------|--------|
| BUF  | 3723% | 100%       | 2.7%   |
| CHI  | 1888% | -10%       | -0.5%  |
| ARI  | 1852% | 129%       | 7.0%   |
| L.A  | 1753% | 287%       | 16.4%  |
| PIT  | 1259% | 350%       | 27.8%  |
| BOS  | 1201% | 89%        | 7.4%   |
| WSH  | 1186% | -118%      | -10.0% |
| ANA  | 1184% | -25%       | -2.1%  |
| DET  | 1158% | 227%       | 19.6%  |
| VAN  | 1138% | 135%       | 11.9%  |
| COL  | 1128% | 33%        | 2.9%   |
| NSH  | 1005% | 42%        | 4.2%   |
| STL  | 996%  | 125%       | 12.5%  |
| T.B  | 993%  | -176%      | -17.7% |
| PHI  | 950%  | -22%       | -2.3%  |
| EDM  | 934%  | 189%       | 20.2%  |
| S.J  | 927%  | 289%       | 31.2%  |
| MTL  | 908%  | -207%      | -22.8% |
| FLA  | 838%  | -32%       | -3.8%  |
| TOR  | 791%  | 95%        | 12.0%  |
| CAR  | 775%  | 153%       | 19.8%  |
| NYR  | 739%  | -26%       | -3.5%  |
| MIN  | 728%  | 183%       | 25.1%  |
| DAL  | 708%  | 173%       | 24.5%  |
| VGK  | 699%  | 72%        | 10.4%  |
| NYI  | 663%  | 116%       | 17.5%  |
| CGY  | 648%  | -94%       | -14.6% |
| CBJ  | 635%  | -211%      | -33.2% |
| OTT  | 544%  | -8%        | -1.5%  |
| WPG  | 478%  | -2%        | -0.5%  |
| N.J  | 181%  | 3%         | 1.7%   |

Here we see that the model has been much lower on BUF than the market.

Finally, we can look at the ROI as a function of game number, to see whether the model performs better at the beginning or the end of a season:

We don’t really see any trends here, meaning the model performs equally well throughout the season.

Perspective

The model is not yet finished, but the initial results look very promising. There are still improvements to be made. I want to use a 3-year sample size for the pre-season model instead of a 1-year sample size. I could also add an age curve and define a value for rookies and players with small sample sizes. Right now, rookies are considered average by default.

…But before I make any improvements, I want to test a different pre-season model that’s closer to the in-season model in structure. That’s the goal of the next article. If that model isn’t better, then I will move forward with an sGAA-based pre-season model. Either way, I think the results are looking promising.

Data from www.Evolving-Hockey.com and www.sportsbookreviewsonline.com
