Posts

Microtesting: rethinking your testing strategy to improve your decision making

A hot topic in training is that of microdosing: how can we provide very small doses of stress to provide meaningful adaptations within a training session or program? That’s the theme on HMMR Media this month. But what if, instead of thinking of microdosing through the prism of training, we think about microdosing athlete testing? Would it be better to embed small tests into training regularly rather than doing a large slate of tests every month or two?

The traditional approach to testing

A common, and often important, aspect of developing training programs is that of athlete testing. Typically, we use physical tests to understand where the athlete is currently at, along with how much the training we have programmed for them has allowed them to develop. The information gained from these tests can be very useful; we can identify key areas for improvement, and assess the effectiveness of the training so far.

In most set-ups, however, this testing happens relatively infrequently—perhaps on a three- or four-week basis, at the end of a training block, or at the end of a two- or three-month GPP/SPP period. Testing in this manner has to be infrequent, because it can be highly taxing on the athlete. Asking someone to perform a test—usually maximally—massively increases fatigue, along with potentially increasing the risk of injury. Athletes also understand the importance of testing; when I was competing, for example, I became very invested in my flying 30m time. As a result, infrequent testing can be quite psychologically taxing for athletes, further increasing the need for recovery time.

» Related content: Our January 2018 site theme was testing. Click here for an overview of all of our resources on the topic.

This infrequent collection of data also hampers the use of said data in effective decision making. If we want to understand the fitness-fatigue status of an athlete, a physical test every three weeks doesn’t provide sufficient granularity; the athlete may have been fatigued for a long period prior to the testing taking place, and decisions around fatigue management could have been made sooner. Similarly, testing is often used to inform whether an athlete should be progressed or not; again, having weeks between tests means that the athlete may not be moved onwards quickly enough, hampering improvements.

From testing to microtesting

More frequent embedded testing offers an alternate way to incorporate testing into the training plan. The concept of microtesting isn’t all that new: daily measurement of throwing results has been the bedrock of Anatoliy Bondarchuk’s training methods for decades, and Vern Gambetta has long advocated that training is testing. More recently, coaches like Mladen Jovanovic have also criticized infrequent testing approaches and put forward more agile periodization methods that incorporate frequent feedback.

Embedded testing essentially refers to the use of various different tests as part of regular training. This could be in the warm-up as a method of understanding readiness to train, or in the session itself as a means of understanding fitness vs fatigue, as well as progressive adaptation. As a simple example, using sprint times during a session—something many coaches already do—is a form of embedded testing. Provided the conditions are more or less the same, then collecting, for example, flying 30-meter times within a session, and comparing across time, can be hugely informative:

  • Are the times in one session acutely worse than normal? Then the athlete is potentially fatigued/injured and could do with a rest;
  • Are the times continually tracking downwards over time? Then your training is likely effective;
  • Are the times continually tracking upwards over time? Then fatigue is possibly accumulating, and/or your training might need to be reconsidered;
  • Are the times in one session acutely better than usual? Perhaps take a closer look at what you have done over the preceding days; could this inform a future taper program?
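The decision rules above can be sketched as a simple flagging function that compares a session’s best time against the athlete’s recent baseline. This is a minimal illustration, not a validated protocol: the z-score threshold, the sample times, and the function name are all hypothetical.

```python
from statistics import mean, stdev

def flag_session(history, session_best, z_threshold=2.0):
    """Compare a session's best flying-30m time (seconds) against the
    athlete's recent baseline and return a simple traffic-light flag.
    The 2.0 z-score cut-off is illustrative, not a validated threshold."""
    baseline = mean(history)
    spread = stdev(history)
    z = (session_best - baseline) / spread
    if z >= z_threshold:
        return "acutely slow: possible fatigue, consider resting"
    if z <= -z_threshold:
        return "acutely fast: review the preceding days' training"
    return "within normal range"

recent_times = [3.72, 3.69, 3.74, 3.70, 3.71]  # last five sessions (s)
print(flag_session(recent_times, 3.85))  # well above baseline -> flagged slow
```

A longer-term trend check (e.g., a rolling average across weeks) could sit alongside this acute flag to cover the “tracking over time” questions in the list above.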

Better testing = better decision making

By taking this approach, it’s clear to see how more frequent data collection can support your decision making. Similarly, around resistance training, changes in bar velocity may be indicative of fatigue—collecting bar velocity data, especially when the adaptations you’re chasing require a high velocity of movement, can therefore inform your decisions in real time. Vertical jump height might also be used, perhaps straight after a warm-up, to quantify readiness to train and again allow for real-time decision making to occur—hopefully providing better outcomes. For field-based sports, a submaximal running test—which could form part of a pre-training warm-up—can be an effective way of monitoring aerobic fitness adaptations, and, if heart-rate measures are collected, may also give insights into fitness and fatigue.

Another example of this comes in the form of injury monitoring. From an epidemiological standpoint, there are three types of prevention. Primary prevention refers to taking steps to reduce the risk of injury; i.e., can we stop the athlete from becoming injured before an injury occurs? Tertiary prevention refers to reducing the risk of reinjury once an injury has occurred. These two forms of prevention are the most common within sport, but there is a third type we can utilize—secondary prevention—which refers to detecting an injury early and preventing it from getting worse; i.e., we’re injured, but we don’t yet know it.

Embedded testing may assist in better understanding when we’re in the secondary prevention zone, as identified in a study published in 2020. Here, a group of soccer players underwent regular in-season hamstring strength testing in the first training session after a match. If players demonstrated a 14% or greater reduction in isometric hamstring strength in this test, they underwent a second test in the afternoon; if strength in this second test was still below the desired level, the player was referred for medical examination. If strength scores were within the normal bounds, the player undertook training as per usual. When compared to a control group that did not utilize regular testing, the intervention group had five times fewer hamstring injuries.
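The two-stage screening rule described in that study can be sketched as follows. The 14% threshold is taken from the protocol above, but the function name, its inputs, and the example strength values are hypothetical illustrations.

```python
def hamstring_screen(baseline_n, morning_n, afternoon_n=None, drop_threshold=0.14):
    """Two-stage screening decision rule modeled on the protocol described
    above: a >=14% drop in isometric hamstring strength triggers an
    afternoon retest, and a second below-threshold score triggers medical
    referral. Names and numbers are illustrative, not from the study."""
    def below(value_n):
        # Fractional drop from the athlete's baseline strength
        return (baseline_n - value_n) / baseline_n >= drop_threshold

    if not below(morning_n):
        return "train as normal"
    if afternoon_n is None:
        return "retest in the afternoon"
    return "refer for medical examination" if below(afternoon_n) else "train as normal"

print(hamstring_screen(400, 330))       # 17.5% drop -> retest in the afternoon
print(hamstring_screen(400, 330, 350))  # 12.5% drop on retest -> train as normal
```

The value of a rule like this is that it is cheap to run every post-match session, so the “we’re injured but don’t yet know it” window gets checked continuously rather than once a block.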

Spread it out

Whilst we typically view microdosing through the lens of adding small amounts of work (i.e., training) to sessions, perhaps we should cast our net wider and consider what other useful practices we can spread across sessions—as in the example of embedded testing. It’s clear that more regular data collection can inform more effective decision making around aspects such as training monitoring, fitness, fatigue, and risk of injury. In contrast to more traditional testing regimes, embedded testing allows for quicker decisions to be made. It also reduces the impact of testing sessions on the training program, lowering the physical and psychological stress on the athlete—appearing to be a win-win.

Navigating the technology paradox in sport

Technology is a good thing, right? When we evaluate technology we tend to focus on the benefits: what it can add. But the paradox of technology is that it often has hidden costs we do not see up front. Determining whether technology is good or not can be harder than it looks. Read more

Penetrating the data smog through better visualization

As anyone who witnessed the UK government’s recent presentation on COVID-19 data will know, data is only useful if it can be read and understood clearly. For those who missed the broadcast, the presentation consisted of many slides of data with the BBC banner on the screen blocking out the titles! The audience of millions was left looking at lines and bar charts that had no context or explanation. Unfortunately such examples can easily be found in the world of sport too. Read more

GAINcast Episode 183: From data to speed (with Matt Rhea)

American football is a bastion of tradition. In some areas, such as strength and conditioning, that can hold the sport back. Many coaches, however, are working to change the traditions, and they start out by asking simple questions all over again: how do we make our players better? On this week’s GAINcast, Matt Rhea explains how the forward-thinking setup he installed at Indiana University helped turn their program around using a practical data-driven approach. Read more

Some final thoughts on the future of coaching

The HMMR site theme in February was the future of training. I’m a little late to the party, but wanted to add a few points about the future of coaching. Listening to other coaches, I heard many insights into the future of training, but the future of the profession is just as important. Read more

Finding the right feedback for reflective coaching

Finding the right time for reflective coaching is critical, as I wrote about last week. But reflection will not move your coaching forward if you do not have the right information. In order to properly reflect, you need to seek out the best information. Read more

Key questions in data science for sports

Over the last decade or so, there has been a Big Data revolution. This is true of our general lives: a good example is how Cambridge Analytica collected Facebook users’ data, both legally and potentially illegally, to target campaign advertisements. But it is also true within sport, where, in part thanks to advances in technology, there is a vast amount of data available to sporting teams. Read more

From big data to smart data

There is an arms race in today’s sporting environment. Teams, athletes, coaches and support staff aren’t just fighting for the best facilities and talent, they’re also seeing who can collect the most numbers. This is the era of big data, and the past decade has seen an unrivaled amount of information available in sports from a wide variety of methods, including the use of GPS systems, electronic timing gates, force platforms, blood testing, and general wellness questionnaires. The richness and vastness of information available, however, can also be seen as a curse; both teams and individuals can feel like they have to collect more and more data, in the hope that they can gain an edge over their competitors, and better enhance athletic performance. As with many things, we need to shift the focus on data from quantity to quality. Read more

GAINcast Episode 133: Getting better feedback

Having a process to collect feedback is critical, but even more important is the type of feedback you get. If the process is not producing actionable steps to improve or reinforce what you did, it is not doing its job. On this week’s GAINcast we look at how coaches can improve debriefing and other feedback processes to get the information they are looking for to make them better coaches. Read more

The quest for prediction in coaching

Within sport, everyone is now looking at prediction. Coaches, athletes, and support staff are all searching for methods to predict various outcomes, such as injury, talent, performance, or training adaptation. The ability to successfully predict within these areas would obviously be hugely advantageous. Injury prediction could allow you to make interventions to stop that from happening. Talent prediction can allow teams to better focus resources. Predicting adaptations would allow coaches to design better training blocks or alter them based on the predicted response. In other words, prediction is the holy grail of sports science. Read more