Sunday, June 30, 2013

Which Have Been the "Best" Events This Season?

If you've been reading my blog for any length of time, you know that one of the ways I like to evaluate the effectiveness* of a CrossFit event is by looking at how well the results from that event correlate to results from a variety of other events. I laid out the theory behind this in my post from last year titled "Are certain events 'better' than others?", but I'll recap it here:
  • In a competition setting, what we are trying to do is learn as much about each athlete's overall fitness level as possible.
  • We only have a limited number of events to do this, particularly in the Open and at Regionals. Therefore, we need to maximize the information we get from each event.
  • If athletes who score well on a particular event tend to score well across the board, then that event is probably a good indicator of overall fitness. Conversely, if the results from that event don't correlate at all with results in other events, then maybe that particular event did not really tell us much.
Overall, this year's Regionals and Open were set up well for me to do this type of analysis. I did this same analysis after last year's Regionals, but due to the cuts after Regional Event 5, I was left with only about 250-300 athletes of each gender who completed all the Open and Regional workouts, which limited the analysis to only the very elite athletes. I also did this analysis after the Open this year, which gave me a huge sample of athletes, but with only 5 events, I didn't really have a "wide variety" of events to evaluate.

This year, because we did not have any cuts at Regionals, I was left with 673 men and 512 women who completed all 12 events this season. Although these are all still very solid athletes, I got a lot more of the borderline Regional competitors in the mix than I did last year. Remember, a "good" event for the Games might not make a "good" event for a competition within your own box. So keep in mind with this analysis that we are evaluating these events based on how well they predict overall fitness for Regional competitors.

The methodology for performing this analysis this year was the following:
  • For athletes who completed all events this year, compile their results (not just their ranking) for each of those events. The Regionals results I used are those that have been adjusted to account for the week in which each athlete competed. See my previous post for more info on those adjustments.
  • Rerank the entire field in each of those events.
  • For each event, calculate the sum of each athlete's ranks in all other events.
  • Calculate the Pearson correlation (referred to from here on out as simply "correlation") between the ranks in each event and the sum of ranks in all other events. Higher correlations indicate "better" events. (A rough code sketch of this calculation follows this list.)
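
For anyone who wants to replicate this, here is a minimal sketch of the calculation in Python (pandas/SciPy). The data layout, column names, and score orientation are just illustrative assumptions, not the exact code I used:

```python
# Sketch of the rank-correlation calculation described above.
# `scores` has one row per athlete and one column per event, with every
# score oriented so that a larger value is a better performance (times
# would need to be negated first -- an assumption for this sketch).
import pandas as pd
from scipy.stats import pearsonr

def event_correlations(scores: pd.DataFrame) -> pd.Series:
    # Rerank the entire field within each event (rank 1 = best).
    ranks = scores.rank(ascending=False, method="min")

    corrs = {}
    for event in ranks.columns:
        # Sum of each athlete's ranks across all *other* events.
        other_sum = ranks.drop(columns=event).sum(axis=1)
        # Correlation between this event's ranks and that sum; a higher
        # value marks a "better" event by this measure.
        corrs[event], _ = pearsonr(ranks[event], other_sum)
    return pd.Series(corrs).sort_values(ascending=False)
```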
Below are the results for both men and women.
[Table: correlation of each event's ranks with the sum of ranks in all other events, men and women]
The pattern that emerges is one I've noticed pretty much across the board since I've been doing this type of analysis: events with more movements tend to be better tests of fitness. This makes sense intuitively, since they test more things, by definition. That doesn't mean we shouldn't have single-modality events in competition, it just means they probably should be used sparingly and only for movements that are deemed very important.

You may notice that Open Event 2, which had three movements, is bucking this trend by falling quite close to the bottom. The concerns with this event were pretty well documented during the Open: judging was very difficult, the weights were extremely light for top competitors, and the option of step-ups was utilized by a lot more athletes than was probably expected. So simply having 3 or 4 movements in an event doesn't make it a great one.

To get a visual interpretation of the concept I'm getting at here, below are scatter plots for some of the best and worst events from this year. On each graph, the x-axis represents each athlete's rank on that event and the y-axis represents each athlete's combined rank on all other events.
[Scatter plots: athlete's rank on the event (x-axis) vs. combined rank on all other events (y-axis), for selected best and worst events]
It should be fairly clear that in the first two graphs there is a strong relationship between the x- and y-values. Athletes who did well on these events generally did well across the board. In the third graph, the points are much more scattered, indicating that plenty of athletes who did well on Open Event 2 didn't fare well across the board, and vice versa.
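
For reference, one of these plots could be generated with something like the following, reusing the `ranks` DataFrame from the earlier sketch (the function and its arguments are just an illustration):

```python
import matplotlib.pyplot as plt

def plot_event(ranks, event):
    # x: rank on this event; y: sum of ranks on every other event.
    other_sum = ranks.drop(columns=event).sum(axis=1)
    plt.scatter(ranks[event], other_sum, s=10)
    plt.xlabel(f"Rank on {event}")
    plt.ylabel("Combined rank on all other events")
    plt.title(event)
    plt.show()
```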

Although I consider the results using the entire Regional field to be the most useful, I also performed this analysis on three other subsets of the Regional field:
  1. Games athletes only
  2. Top 292 men and top 258 women (same number as I had in my analysis last year)
  3. Random sample of 20% of the entire regional field

What I was interested in was how volatile my results were. For instance, is Regional Event 1 really better than Regional Event 4 for women (76% vs. 74%)? Would that hold up if I changed the group of athletes a bit?
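
In code terms, this check is just the same calculation run over different slices of the field. The subset masks below are hypothetical placeholders, not real column names:

```python
# Same correlation analysis, re-run on subsets of the Regional field.
# `games_mask` and `top_field_mask` are assumed boolean Series aligned
# with `scores`; they stand in for however the subsets are identified.
games_corrs  = event_correlations(scores[games_mask])
top_corrs    = event_correlations(scores[top_field_mask])
sample_corrs = event_correlations(scores.sample(frac=0.20, random_state=0))
```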

In general, the events near the very top and the very bottom stayed in that vicinity, with a couple of exceptions. Here are the main takeaways:
  • For men, Regional Event 4 was in the top 3 and Open Event 1 was in the top 5 across all the samples. For women, Regional Event 4 was in the top 4, Regional Event 1 and Open Event 1 were each in the top 5, and Open Event 4 was in the top 6 across all the samples.
  • Considering HQ is programming the same events for both men and women, I would conclude that Regional Event 4 ("The 100s") and Open Event 1 (the 17-minute AMRAP of burpees and snatches) were the best events this season. In one of my 2013 Open recap posts, I noted that 13.4 was generally the best event of the Open. I still feel that it was a very good event for the entire Open field, but it wasn't quite as strong when we look at just these stronger athletes.
  • Across both men and women, Open Event 2, Regional Event 5 and Regional Event 3 were each in the bottom 4 in all but one sample. I would conclude that these were generally the three weakest events this season. 
    • I mentioned some issues with Open Event 2 above.
    • For Regional Event 5, I think the issue is that we saw a lot of athletes near the bottom of the field do well on this simply because they could deadlift a house. If you could handle the deadlifts easily, you could generally do well even if you weren't particularly great at box jumps or had sub-par aerobic capacity.
    • For Regional Event 3, I think the issue was that the burpees did not really factor in much, making it basically just a muscle-up test. As far as single-modality tests go, this wasn't too bad of an event. But personally, I think this event and the overhead squat ladder (a true single-modality event) should have been worth only 50% of the points of the other events.
  • Two of the events featuring box jumps turned out to be relatively modest tests of fitness. Throw in all the complaints about Achilles problems we've seen popping up recently, and I think HQ may want to look into adjusting how they program box jumps. I think box jumps are a good test of fitness in general, but I'd personally love to see us go to box jump-overs (onto and over the box, with an option to jump straight over) in the future.
  • With the exception of Open Event 2 and Regional Event 5, I think the rest of the events were generally solid. As I mentioned above, I might consider adjusting the point value for a couple of the other ones.

I also did one final analysis, primarily out of curiosity. Using the entire Regional field, I looked at the correlation between each pair of events. Some of the interesting findings:
  • The most highly correlated pair was Regional Event 4/Regional Event 1 (71% for women and 67% for men). This is somewhat surprising given that the time domains were completely different, but both involved pull-ups and a light thruster-type movement (bar thrusters or wall-balls).
  • The two muscle-up workouts (Regional Event 3 and Open Event 3) were highly correlated (68% for women and 55% for men).
  • Regional Event 5 and Regional Event 7 were highly correlated (56% for women and 68% for men). Both were extremely heavy.
  • The least-correlated pair was Regional Event 3/Regional Event 5 (17% for women and 19% for men). That shouldn't be a surprise considering one was bodyweight-only and the other involved extremely heavy deadlifts. The pair of Regional Event 2/Regional Event 3 was also not very correlated (38% for women and 22% for men). Remember, those two occurred within 2 minutes of each other.
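
In code, this pairwise check is essentially a one-liner against the same rank table from the earlier sketch (event names in the lookup are illustrative):

```python
# Pearson correlation between every pair of events' ranks.
pairwise = ranks.corr(method="pearson")
# e.g. pairwise.loc["Regional Event 4", "Regional Event 1"] pulls out
# the correlation for one pair (assuming those column labels exist).
```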
That's about it for today. This is always one of my favorite analyses to work on, but with the Games fast approaching, I suppose it's about time to start tackling the tough questions and making some predictions. Will Froning three-peat? (Probably) Who will emerge on the women's side with Annie out? (It's wide-open) Will the first event of the Games take more or less than 4 hours? (God, I hope so) What bizarre contraption will Rogue unveil this year? (Potentially a flying bicycle, similar to the one in E.T.) Who will wear the shortest shorts this season? (Stacie Tovar still the champ until proven otherwise)

Anyway, until next time, good luck with your training!


*I am referring to the effectiveness of this event as it relates to competition. In other words, is this event a good test of fitness? This does not necessarily mean the event is good or bad for training purposes. For instance, I feel that 13.2 was not a good workout for testing (due to a lot of factors, like how difficult it was to judge and how light it was for the top competitors), but in training, I think it would be a good workout for building aerobic capacity (and it definitely left me hurting).
