The goal here is to compare my projection model (read more here and here) to other models out there. All of my projections below are made well after the games were played, so the true test of the model will be its performance this upcoming season.
The model is the same for all ten seasons (from 10/11 to 19/20) and it's based solely on data from the previous three seasons. I have tuned the model to give the lowest average error and the highest correlation (R-squared) with actual results.
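For reference, those two evaluation metrics can be computed roughly like this (a minimal sketch; the function and variable names are my illustration, not the model's actual code):

```python
def evaluate_projections(projected, actual):
    """Score season point projections: returns (average error, R-squared).

    Average error is the mean absolute difference between projected and
    actual point totals; R-squared here is the squared Pearson correlation
    between the two. Illustrative helper only, not the model's real code.
    """
    n = len(projected)
    avg_error = sum(abs(p - a) for p, a in zip(projected, actual)) / n
    mean_p = sum(projected) / n
    mean_a = sum(actual) / n
    cov = sum((p - mean_p) * (a - mean_a) for p, a in zip(projected, actual))
    var_p = sum((p - mean_p) ** 2 for p in projected)
    var_a = sum((a - mean_a) ** 2 for a in actual)
    r_squared = cov ** 2 / (var_p * var_a)
    return avg_error, r_squared
```

A lower average error and a higher R-squared both mean the projections tracked the standings more closely.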
I'm using Dom Luszczyszyn's end-of-year reviews to compare my model to its peers.
Let's start off by looking at the 2016/2017 season. Here's how other predictions fared that season. The picture is taken from Dom Luszczyszyn's prediction review, which can be found here.
And here’s how my model would have projected the season:
The one team that jumps out at you right away is Colorado. They ended up with just 48 points, which was historically bad. I don't think anyone could have foreseen that kind of season; Dom's model was off by 37 points on Colorado.
Other than Colorado the model did okay, and it would have been the best prediction out there. Obviously, there's no fame or glory in predicting results four years after they happened.
The following season would turn out to be the toughest to predict. Here’s how the other models performed, and Dom’s review can be found here, if you have a subscription to The Athletic.
Here's how my model would have done. It would have been the second-best prediction, but well behind Corsica's prediction.
It’s probably fair to say that Vegas surprised everyone. My model would have been pretty high on them, but still way off. It’s also interesting that the model had no clear-cut contenders – MIN, NSH, PIT, S.J and WSH were all projected to get around 100 points.
In the end the model was wrong about most teams, but at least it was less wrong than most other predictions.
The picture below here shows the performance of other predictions, and Dom’s review can be found here.
My projection model would have been first by a tiny margin. A lot of the predictions had an error of around 8 points.
My model gave Tampa Bay the second-highest point projection, but it was still way off. Calgary and the NY Islanders were the two positive surprises. Overall the model did pretty well, but you would have liked it to be lower on Anaheim and higher on Boston, since both seemed predictable.
Let's jump to the most recent season. The review can be found here, if you have a subscription to The Athletic. All projections are prorated to 82 games.
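Prorating is just a linear scaling of each team's point total to a full 82-game schedule (a sketch; the helper name is my own, not anything from the model):

```python
def prorate_points(points, games_played, season_length=82):
    """Scale a point total from a shortened season to a full schedule.

    A team with 70 points through 68 games, for example, prorates to
    roughly 84.4 points over 82 games. Illustrative helper only.
    """
    return points * season_length / games_played
```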
Again, my model would have been first and by a decent margin. Overall, it was a fairly predictable season and most of the predictions were quite good.
There were a few surprises though. San Jose being this bad probably came as a shock to most, and Detroit ended up 10 points below replacement level. I don’t think either team was this bad, but sometimes losses lead to more losses. It can be a vicious circle.
The model was too low on Colorado and Boston, not just compared to the results but also compared to the consensus. Most were bullish on Colorado before the start of the season; my model wasn't.
Comparison with Dom’s model:
It’s also interesting to compare my projections with those from Dom’s model. The table below shows my projection (pPoints), Dom’s projection (Dom) and the difference between the two from the 2019/2020 season:
On average the two models are 3.4 points apart, so there is some difference. Some of that is probably due to goaltending. In my current model I expect goaltending to regress heavily toward average. For the most part that's a sound assumption, but it means that a team like Boston gets undervalued: they consistently get good or great goaltending, but the model expects them to regress every year.
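That regression step can be thought of as a weighted blend between what a team's goaltending actually did and the league average (a sketch with made-up numbers; the weight here is illustrative, not the model's actual parameter):

```python
def regress_to_mean(observed, league_average, weight):
    """Blend an observed rate with the league average.

    `weight` (between 0 and 1) is how much trust the observation gets;
    heavy regression means a small weight. All numbers are illustrative.
    """
    return weight * observed + (1 - weight) * league_average

# A .920 team save percentage regressed hard (weight 0.25) toward a
# .905 league average lands at .90875, which is how a team that gets
# consistently strong goaltending ends up undervalued.
```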
Explaining the differences between the two models would require a very thorough analysis, so for now I will just leave it as it is.
The observant reader might have noticed a difference between Dom’s projections in the previous article and this one. That’s because I used his projections from his team previews last time, but those were made well before the season started. The projections in this article are from opening night.
The projection model seems to predict results quite well, but the true test of the model will come next season. It will be interesting to see how well it predicts future results, both full-season results and single-game results.
The model definitely still needs some work. I would like the goaltender projections to work better, so I could put more weight on them. I would also like to add an age curve for each player, so the age adjustment isn't done at the team level.
I used articles from www.theathletic.com in this piece.