Does that test what you think it tests?

Utilizing testing to monitor training adaptations and fitness is an important part of the training cycle. Many coaches dictate workloads by prescribing a percentage of a maximum; for example, on a given strength training day, an athlete might be prescribed to lift 75% of their maximum lift. In theory, this is all well and good, but what if the tests used don’t actually test what we think they test, to the extent we think they do?
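To make the arithmetic of percentage-based prescription concrete, here's a minimal sketch in Python. The helper function and the 2.5 kg plate-rounding increment are illustrative assumptions, not part of any specific program:

```python
def prescribed_load(one_rm_kg: float, percentage: float) -> float:
    """Return the training load for a given fraction of a 1RM.

    percentage is expressed as a fraction, e.g. 0.75 for 75%.
    """
    if not 0 < percentage <= 1:
        raise ValueError("percentage should be a fraction between 0 and 1")
    # Round to the nearest 2.5 kg, a common smallest plate increment
    raw = one_rm_kg * percentage
    return round(raw / 2.5) * 2.5

# An athlete with a 100 kg power clean 1RM, prescribed 75%:
print(prescribed_load(100, 0.75))  # 75.0
```

The catch, as discussed below, is that the `one_rm_kg` input is only as trustworthy as the test that produced it.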

A deeper look at 1RM tests

In the part of the world where I’m from, power cleans are, rightly or wrongly, well utilized in sprint training, and most sprinters will use one-repetition maximum (1RM) tests in the power clean as a proxy for improvements in strength. The bigger your number in the power clean, the stronger you are. If your number goes up over time, you’re getting stronger. Nice and easy.

Let’s subject this to a thought experiment. Let’s imagine we put as much weight on the bar as we can lift, and complete that lift with perfect technique, catching it in the right position. Good, we’re strong. Now let’s repeat that lift, but this time catch the bar without dropping under it into the catch position. Did you make this lift? Probably not. Are you suddenly much weaker? No. What’s happened here is that a technical change has altered the outcome of the test. So, whilst a lift can be a reasonable measurement of maximum strength, it is sensitive to factors not related to maximum strength, most notably athlete technique. Improvements in 1RM might, therefore, come from improvements in technique, so it’s worth keeping that in mind when it comes to utilizing 1RM tests in your program.

A deeper look at VO2max tests

What about a VO2max test, which measures the ability of your body to utilize oxygen during exercise? I’ve never done one of these, but I’ve seen them, and they look miserable. Let’s imagine that you do a VO2max test, pushing yourself really hard, and get a great score. Next week, you come back, but this time you’re a bit tired; perhaps you didn’t sleep quite as well as you normally would have, or perhaps your training volume this week has been a bit higher than normal. Because of this increased fatigue, you’re not able to push yourself as hard on the VO2max test as you usually would, and your score is lower. Are you suddenly less able to take oxygen from the air to your muscles, and then use that oxygen in the production of energy? No. Maximal exercise tests, especially longer-duration maximal exercise tests, are sensitive to changes in motivation. Improvements or decrements on such tests might not be completely down to the variable of choice, but also to motivational factors; something to consider next time you test.

A deeper look at agility tests

How about a test of agility, such as the Illinois Agility Test, which is often used to assess agility in football clubs? It doesn’t even test agility! Instead, it measures change of direction speed – a preplanned action of altering the direction of travel. Whilst this change of direction speed is an important component of agility, agility itself also includes perception and decision-making ability. For example, running to a cone at a set position, and then turning and sprinting once you reach that cone, tests change of direction speed. Running forward to an oncoming defender, perceiving his body position, and making the correct decision of which way to turn to retain possession is an example of agility.

This might sound like semantics, but it has important implications when it comes to athlete preparation. If you only train change-of-direction abilities, your athletes might not be well prepared to make in-game decisions based on their perception of an opponent’s movement.

» Related content: learn about the difference between agility and change of direction on Episode 94 of the podcast with Professor Ken Clark.

A framework for analyzing tests

So what does all of this mean when it comes to testing? In my experience, athletes (and even coaches) put a lot of stock into how they perform on tests. Because they want to be successful, any test that suggests success is likely is important to them. If they underperform in these tests, it can be really hard to take psychologically. The ability to place the result in the correct context, therefore, matters to the athlete, and knowledge of how closely the test matches performance is a large part of being able to do this.

For coaches, who make training decisions based on the results of these tests, the ability to utilize tests that are useful is crucial. That’s not to say that the tests mentioned here aren’t useful, just that there are factors outside the test itself that need to be considered when understanding the results.

Overall, there are a number of key questions you can ask yourself to ensure you’re getting the most out of your testing:

  1. Does this test assess what I think it does?
  2. Which external factors might alter the results of this test?
  3. How will I change my training based on the testing outcome?
  4. Is the aspect I’m testing important to performance in my sport of choice?

If the answers to the final two questions are “not at all”, then you clearly shouldn’t be using the test; testing should be used to inform training program design, not just to collect data for the sake of it. Striking the right balance between testing and using the results of those tests to drive training, without collecting wrong or irrelevant data, is crucial in being able to drive your athletes forward.