Unlike the past two seasons, Rich Froning did not enter the final day of competition with a commanding lead. In fact, he didn't even enter the final event with a commanding lead. All it would have taken was a fifth-place finish by Froning and a first-place finish by Mathew Fraser on Double Grace for Froning to finish runner-up this season. But what you may not have realized is that it could have been even tighter.
In the Open and the Regionals, the scoring system is simple: add up your placements across all events, and the lowest cumulative total wins. At the Games, however, the scoring system changes to use a scoring table that translates each placement into a point value. The athlete with the highest point total wins. I've written plenty about this in the past (start here if you're interested), but the key difference is this: in the Games scoring system, there is a greater reward for finishes at the very top, and less punishment for finishes near the bottom. The reason is that the point differential between high places is much higher (5 points between 1st and 2nd) than between lower places (1 point between 30th and 31st).
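To make the difference concrete, here is a toy sketch of the two systems. The placements and most of the table values are made up for illustration; the only properties taken from the post are the 5-point gap between 1st and 2nd and the 1-point gap between 30th and 31st (the real points table is published by CrossFit).

```python
# Illustrative fragment of a Games-style points table (hypothetical values,
# except the 5-point gap at the top and 1-point gap near the bottom).
points_table = {1: 100, 2: 95, 3: 90, 5: 80, 30: 25, 31: 24}

def regional_score(placements):
    """Regional system: sum of placements, LOWEST total wins."""
    return sum(placements)

def games_score(placements):
    """Games system: translate each placement to points, HIGHEST total wins."""
    return sum(points_table[p] for p in placements)

# A steady athlete vs. a boom-or-bust athlete over four events:
steady = [5, 5, 5, 5]
spiky = [1, 1, 1, 30]

print(regional_score(steady), regional_score(spiky))  # 20 vs 33 -> steady wins
print(games_score(steady), games_score(spiky))        # 320 vs 325 -> spiky wins
```

The same four events flip the winner: the regional system rewards the steady athlete, while the Games table pays so much for event wins (and docks so little for a 30th) that the boom-or-bust athlete comes out ahead.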
So you know that small lead Froning had going into the final event? Well, under the regional scoring system*, he would actually have been trailing going into that event... BY 8 POINTS! And he would have made that deficit up, because he won the event while Fraser took 11th. I think it is safe to say that would have been the most dramatic finish to the Games we have seen (I guess Khalipa in 2008 was similar, but there were like 100 people watching, so...).
One reason the scoring would have been so close under this system is that Fraser's performance was remarkably consistent. His lowest finish was 23rd; every other athlete had at least one finish of 26th or below, and Froning finished lower than 26th twice. But Fraser won only one event and had four top-five finishes. Froning, on the other hand, won four events and finished second one other time.
I also looked at how the scoring would have turned out under two other scoring systems:
- Normal distribution scoring table - Similar to the Games scoring table, but the points are allocated 0-100 in a normal distribution. See my article here for more information.
- Standard deviation scoring** - This is based on the actual results in each event, rather than just the placement. Points are awarded based on how many standard deviations above or below average an athlete is on each event. More background on that in the article I referenced early on in this post.
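The standard deviation system boils down to a z-score per event: subtract the event mean from an athlete's result and divide by the event's standard deviation. A minimal sketch, using hypothetical results (the post converts time-based events to rates such as reps per minute, so higher is always better here):

```python
from statistics import mean, stdev

def sd_scores(results):
    """Score each result by how many standard deviations above (+) or
    below (-) the event average it sits. Assumes results are oriented
    so higher is better (times already converted to rates)."""
    m, s = mean(results), stdev(results)
    return [(r - m) / s for r in results]

# Hypothetical event results in reps per minute:
event = [4.2, 3.1, 3.0, 2.8, 2.5, 0.5]
for z in sd_scores(event):
    print(round(z, 2))
```

One consequence worth noting: the z-scores in any event sum to zero, so an athlete can only bank a big positive score when the result is far above the field, which is exactly why outlier performances matter so much under this system.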
Here is how the top 5 would have shaken out for men and women using all four of these systems (including the current system):
As far as the winners go, we would not have seen any changes. Clearly, Froning and Camille Leblanc-Bazinet were the fittest individuals this year. Generally, what you can observe here is that the athletes doing well in the standard deviation and normal distribution systems had some really outstanding performances, whereas the athletes doing well in the regional scoring system were the most consistent.
What is also nice about the standard deviation system is that it can tell us a little more about how each event played out. For each event, I had to calculate both the mean result and the standard deviation in order to get these rankings. That allowed me to see a few other things:
- Which events had the most tightly bunched fields (and the most widely spread fields)?
- Were there significant differences between men and women in how tightly scores were bunched on events?
- Which individual event performances were most dominant?
To measure the spread of the field in each event, I looked at the coefficient of variation, which is the standard deviation divided by the mean. For instance, the mean weight lifted by the women in Event 2 was 213.6 pounds and the standard deviation was 22.1 pounds, so the coefficient of variation was about 10%. The higher this value, the wider the spread was in the results. And remember, if the spread is wider, the better you have to be in order to generate a great score under the standard deviation system.
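The calculation itself is a one-liner, shown here with the Event 2 numbers from above:

```python
# Coefficient of variation for the women's Event 2 (figures from the post):
mean_lbs = 213.6   # mean weight lifted, in pounds
sd_lbs = 22.1      # standard deviation, in pounds

cv = sd_lbs / mean_lbs
print(f"{cv:.1%}")  # -> 10.3%
```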
To see which individual event performances were most dominant, I looked at the winning score on each event. Typically, this score was between 1.5 and 2.75 standard deviations above the mean; this is in the right ballpark if we assume a normal distribution, because there would be about a 7% chance of getting a result of 1.5 standard deviations above the mean and a 0.3% chance of getting a result of 2.75 standard deviations above the mean.
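You can sanity-check those tail probabilities with the standard normal CDF. A quick sketch using Python's standard library (`statistics.NormalDist`, Python 3.8+), also checking the 3.43 that comes up below for Sprint Sled 1:

```python
from statistics import NormalDist

def tail(z):
    """Probability that a standard normal result lands at least z
    standard deviations above the mean."""
    return 1 - NormalDist().cdf(z)

print(f"{tail(1.5):.1%}")   # -> 6.7%, roughly the 7% cited
print(f"{tail(2.75):.2%}")  # -> 0.30%
print(round(1 / tail(3.43)))  # about 1-in-3,000-something
```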
The chart below shows both the winning score (bars) and the coefficient of variation (line) for each event. Note that the Clean Speed Ladder is omitted because it was a tournament-style event and does not convert easily to the standard deviation system. For my calculations of points on the Clean Speed Ladder, I used a normal distribution assumption and applied points based on the rankings in this event.
The largest win was Neal Maddox's 3.43 in the Sprint Sled 1; a normal distribution would say this should occur about 1-in-3,000 times. For those who watched the Games, this performance was quite impressive. Maddox looked like he was pushing a toy sled compared to everyone else. Also, don't sleep on Nate Schrader's result in the Sprint Carry. It may not have appeared quite as impressive because the field was so tightly bunched (only a 9% coefficient of variation, compared to 23% on Sprint Sled 1).
The most tightly bunched event was the Triple-3 for both men (7%) and women (5%). The Sprint Carry was next (9% men, 7% women). The event with the largest spread was Thick-n-Quick, at 53% for men and 41% for women. Remember, Froning won this event in 1:40 (4.2 reps per minute), while some athletes finished only 2 reps (0.5 reps per minute).
The lesson, as always: Rich Froning is a machine.
*All of the alternate scoring scenarios here assume that the sprint sled events would each be worth half value.
**In order to do this, I had to convert all time-based events from a time score to a rate of speed score (reps per minute, for example). There are lots of intricacies to this, so another individual calculating these may have used slightly different assumptions. The main takeaways would be the same here, I think.