Below are the details of the process used to determine the ranking of the forecasters' predictions. For the reasoning behind why we did or didn't do the rankings a certain way (e.g., only the top 300 players, only using rank error, etc.), please see Our Rationale.


  •  The projections to be evaluated were selected:
    • Offensive – the top 300 (plus ties) ranked players for 3 statistical categories (Points, Goals, and Assists).
    • Defensive – the top 40 (plus ties) ranked players for 2 statistical categories (Wins, Shutouts).
  •  All ranks were adjusted (See Figure 1); a sketch of one plausible adjustment follows the figure caption below.

 

Figure 1. Adjusted rank for goals
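
The exact adjustment is shown in the Figure 1 spreadsheet, but one plausible reading of "adjusted rank" is that tied players receive the average of the rank positions they occupy, so a forecaster is not penalized for the arbitrary ordering among tied players. A minimal sketch under that assumption (the function name and sample values are illustrative only):

    def adjusted_ranks(stat_values):
        # Assign each value the average of the rank positions its tie group
        # spans; rank 1 corresponds to the highest stat value.
        ordered = sorted(stat_values, reverse=True)
        first_pos = {}
        for pos, value in enumerate(ordered, start=1):
            first_pos.setdefault(value, pos)          # first slot of each value
        counts = {v: ordered.count(v) for v in set(ordered)}
        return [first_pos[v] + (counts[v] - 1) / 2 for v in stat_values]

    print(adjusted_ranks([60, 55, 55, 50]))  # [1.0, 2.5, 2.5, 4.0]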

 

  •  The absolute error of the projected rank from the actual rank (final NHL rank) was calculated for each player (See Figure 2).
  •  In groups of 25 skaters (e.g., 1–25, 26–50, …, 276–300 plus ties), the total absolute error was calculated (See Figure 2, Cell H29). Total absolute error for goalie projections was calculated in groups of 10. A sketch of this calculation follows the Figure 2 caption below.

 

Figure 2. Absolute rank error calculations for top 25 goal ranks
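
As a concrete illustration of the two steps above, here is a minimal sketch, assuming projected and actual adjusted ranks are already paired per player (the data and group size are illustrative; the real calculation lives in the spreadsheet shown in Figure 2):

    def total_abs_error_by_group(projected, actual, group_size=25):
        # Sum |projected rank - actual rank| within consecutive rank groups.
        errors = [abs(p - a) for p, a in zip(projected, actual)]
        return [sum(errors[i:i + group_size])
                for i in range(0, len(errors), group_size)]

    # Toy example with a group size of 2 for readability.
    print(total_abs_error_by_group([1, 2, 3, 4], [2, 1, 4, 3], group_size=2))  # [2, 2]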

 

  •  The relative error for each player range was calculated using the projection with the lowest total absolute error as the standard for comparison (See Figure 3); a sketch follows the figure caption below.

 

Figure 3. Relative error calculation
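
Figure 3's exact cell formula is not reproduced here, but a formula consistent with the description (the best forecaster gets a relative error of 0, and every other forecaster is scored by how much more error they had than the minimum) is:

    def relative_errors(totals):
        # Relative error vs. the forecaster with the lowest total absolute error.
        best = min(totals.values())
        return {name: (t - best) / best for name, t in totals.items()}

    # Illustrative totals only.
    print(relative_errors({"TSN": 100, "Forecaster B": 125}))
    # {'TSN': 0.0, 'Forecaster B': 0.25}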

 

  •  The 1 – Relative Error calculation was used to determine the accuracy of the forecasters in comparison to each other (See Figure 4).
    • In the example in Figure 4, TSN had the lowest total absolute error, so they have a relative error of 0 and a 1 – Relative Error score of 1 (i.e., 1 − 0 = 1). The other forecasters had a greater total absolute error, so they have a greater relative error and a lower 1 – Relative Error score. In short, the best forecaster scores exactly 1, and every other forecaster scores below 1 in proportion to how much more absolute error they had than the forecaster with the minimum total absolute error. A numeric sketch follows the figure caption below.

Figure 4. Relative error comparison.
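
Continuing the illustrative totals from the previous sketch, the 1 – Relative Error scores would be:

    rel = relative_errors({"TSN": 100, "Forecaster B": 125})
    print({name: 1 - r for name, r in rel.items()})
    # {'TSN': 1.0, 'Forecaster B': 0.75}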

 

  •  Points were assigned by multiplying the share of the league-wide total for that statistic represented by that range of players (See Figure 5) by the 1 – Relative Error statistic, giving the Points the forecaster scored on their predictions for that player range (See Figure 4). A sketch of this step follows the Figure 5 caption below.

 

Figure 5. Point weighting determination.
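
A minimal sketch of the weighting step, assuming the weight for a player range is that range's share of the league-wide total for the statistic (the numbers are illustrative; the actual weights come from Figure 5):

    def range_points(league_share, one_minus_rel_error):
        # Points earned on a range = league share x (1 - relative error).
        return league_share * one_minus_rel_error

    # E.g., if ranks 1-25 account for 12% of all goals and a forecaster's
    # 1 - Relative Error for that range is 0.75:
    print(range_points(0.12, 0.75))  # 0.09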

 

  •  The Points for each player range were totaled.
  •  The forecaster with the greatest number of points across all player ranges for a statistical category was the winner of that category. See Rankings for a list of the winners.
  •  The Offensive Projection Winner was determined by totaling points from all three offensive statistical categories (Pts, G, A). The forecaster with the greatest number of points was the winner. Click here for the Overall Offensive Rankings.
  •  The Defensive Projection Winner was determined by totaling points from the two defensive statistical categories (W, SO). The forecaster with the greatest number of points was the winner. Click here for the Overall Defensive Rankings.
  •  The Overall Projection Winner was determined by averaging each forecaster's rankings across all statistical categories (except the SO rank). The forecaster with the lowest average ranking was the winner. Click here for the Overall Rankings. A sketch of this final step appears below.
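
Putting the final step together, a minimal sketch, assuming each forecaster's per-category rank is already known (the names and numbers are illustrative only):

    category_ranks = {  # rank earned in each statistical category
        "TSN":          {"Pts": 1, "G": 2, "A": 1, "W": 3, "SO": 4},
        "Forecaster B": {"Pts": 2, "G": 1, "A": 3, "W": 2, "SO": 1},
    }

    def overall_average_rank(ranks, exclude=("SO",)):
        # Average the category ranks, skipping excluded categories (SO here).
        kept = [r for cat, r in ranks.items() if cat not in exclude]
        return sum(kept) / len(kept)

    averages = {name: overall_average_rank(r) for name, r in category_ranks.items()}
    print(averages)                         # {'TSN': 1.75, 'Forecaster B': 2.0}
    print(min(averages, key=averages.get))  # lowest average rank wins -> 'TSN'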