Competition season is starting up again in many sports. We start this week’s GAINcast off by looking at how to best structure training in the days before a competition and how to learn what works best for your individual athletes. In addition we also talk about pre-meet workouts, psychological preparation, GPS data, single-factor models, and much more.
The SMU Locomotor Performance Lab is at the cutting edge of researching practical questions on sprinting and jumping. On this week’s GAINcast Emily McClelland joins us to discuss her latest research on vertical jumping and sex differences in short sprints.
Testing is a crucial part of the training and coaching process. On this episode of the GAINcast Vern walks through his approach to testing and shares his best practices, including how he set up and implemented the testing program at the Chicago White Sox in the 1990s.
The California State Championships are perhaps the hardest competition in high school sports. This year Nick sent 6 throwers to the meet and walked away with 7 podium finishes. On this week’s podcast we take a look at the meet, how to prepare for it, and the unique path each athlete took to achieve their success this season.
We often think of explosiveness as just one quality, but the fastest sprinter isn’t always the best bobsledder or long jumper. Explosiveness can be expressed in different ways. Olympic bobsledder Ben Simons is well known on social media for his jumping exploits. On this week’s episode he talks about optimizing your plyometric training, the demands of bobsled training, and much more.
Jumping is one of the cornerstones of athleticism and plays a central role in athletic development for nearly every sport. On this week’s episode we look at introducing jump training to athletes, our progressions, how to make jump training sport-specific, and testing protocols.
With the spring season nearing an end, many coaches are looking toward summer training. With summer training comes testing as well. Several of the football and rugby players that I coach will be subjected to a battery of fitness tests that include various jumps: depth jump, countermovement jump, squat jump, and single leg jumps.
Over the last several decades Professor Warren Young has been at the forefront of redefining how coaches test jumping ability, reactive strength, and agility. The tools he developed, such as the reactive strength index, have helped coaches better measure and train the physical abilities needed in their sport. He joins us on this week’s podcast to discuss his career, best practices for assessment, and how to bridge the gap from testing to training design.
Coach Kelvin Giles once said “If your coach:athlete ratio is 1:25 then you are managing a crowd, not coaching.” It’s not ideal, but it’s reality for many coaches. As I’ve worked more with field sports I’m often tasked with working with up to 50 athletes at one time. In such a setting, you have to make concessions as you transition from theory to practice. But with the right adjustments it can still look like coaching rather than crowd management.
Martin Bingisser, 2021-11-27: The speed grid and training speed in a large group setting
A hot topic in training is that of microdosing: how can we provide very small doses of stress to provide meaningful adaptations within a training session or program? That’s the theme on HMMR Media this month. But what if, instead of thinking of microdosing through the prism of training, we think about microdosing athlete testing? Would it be better to embed small tests into training regularly rather than doing a large slate of tests every month or two?
The traditional approach to testing
A common, and often important, aspect of developing training programs is athlete testing. Typically, we use physical tests to understand where the athlete currently is, along with how much the training we have programmed for them has allowed them to develop. The information gained from these tests can be very useful; we can identify key areas for improvement and assess the effectiveness of the training so far.
In most set-ups, however, this testing happens relatively infrequently—perhaps on a three- or four-week basis, at the end of a training block, or at the end of a two- or three-month GPP/SPP period. Testing in this manner has to be infrequent, because it can be highly taxing on the athlete. Asking someone to perform a test—usually maximally—massively increases fatigue, along with potentially increasing the risk of injury. Athletes also understand the importance of testing; when I was competing, I became very invested in my flying 30m time. As a result, infrequent testing can be quite psychologically taxing for athletes, further increasing the need for recovery time.
This infrequent collection of data also hampers its use in effective decision making. If we want to understand the fitness-fatigue status of an athlete, a physical test every three weeks doesn’t provide sufficient granularity; the athlete may have been fatigued for a long period prior to the testing taking place, and decisions around fatigue management could have been made sooner. Similarly, testing is often used to inform whether an athlete should be progressed or not; again, having weeks between tests means that the athlete may not be moved onwards quickly enough, hampering improvements.
Embedded testing essentially refers to the use of various tests as part of regular training. This could be in the warm-up as a method of understanding readiness to train, or in the session itself as a means of understanding fitness vs fatigue, as well as progressive adaptation. As a simple example, using sprint times during a session—something many coaches already do—is a form of embedded testing. Provided the conditions are more or less the same, then collecting, for example, flying 30-meter times within a session, and comparing across time, can be hugely informative:
Are the times in one session acutely worse than normal? Then the athlete is potentially fatigued/injured and could do with a rest;
Are the times continually tracking downwards over time? Then your training is likely being effective;
Are the times continually tracking upwards over time? Then fatigue is possibly accumulating, and/or your training might need to be reconsidered;
Are the times in one session acutely better than usual? Perhaps take a closer look at what you have done over the preceding days; could this form part of a future taper program at all?
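The decision rules above can be sketched as a simple monitoring function. This is a minimal illustration, assuming we keep a rolling baseline of recent session times and treat a 2% band as normal variation—the function name and the tolerance value are illustrative choices, not a prescribed protocol:

```python
from statistics import mean

def classify_session(recent_times, session_time, tolerance=0.02):
    """Compare today's best flying-30m time against a rolling baseline.

    recent_times: times (in seconds) from the last few sessions.
    session_time: best time (in seconds) from today's session.
    tolerance: relative band treated as normal variation (here 2%).
    """
    baseline = mean(recent_times)
    change = (session_time - baseline) / baseline
    if change > tolerance:
        return "slower than usual: possible fatigue, consider rest"
    if change < -tolerance:
        return "faster than usual: review the preceding days' training"
    return "within normal variation"

# Baseline around 3.50 s, today's best 3.62 s -> flagged as slower
print(classify_session([3.51, 3.49, 3.50], 3.62))
```

Trend analysis (the "continually tracking" rules) would work the same way, just comparing successive baselines rather than a single session against one baseline.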
Better testing = better decision making
By taking this approach, it’s clear how more frequent data collection can support your decision making. Similarly, around resistance training, changes in bar velocity may be indicative of fatigue—collecting bar velocity data, especially when the adaptations you’re chasing require a high velocity of movement, can therefore inform your decisions in real-time. Vertical jump height might also be used, perhaps straight after a warm-up, to quantify readiness to train and again allow for real-time decision making to occur—hopefully providing better outcomes. For field-based sports, a submaximal running test—which could form part of a pre-training warm up—can be an effective way of monitoring aerobic fitness adaptations, and, if heart-rate measures are collected, may also give insights into fitness and fatigue.
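As a sketch of the post-warm-up jump check described above: compare today’s jump height against the athlete’s baseline and flag reduced readiness below some cutoff. The 90% cutoff and the numbers here are illustrative assumptions, not values from the article:

```python
def jump_readiness(baseline_cm, today_cm, cutoff=0.90):
    """Flag reduced readiness when today's vertical jump height
    falls below a given fraction of the athlete's baseline."""
    ratio = today_cm / baseline_cm
    return ratio >= cutoff, ratio

# Baseline 42 cm, today 36 cm -> ratio ~0.86, below the 90% cutoff
ready, ratio = jump_readiness(baseline_cm=42.0, today_cm=36.0)
print(ready, round(ratio, 2))
```

The same structure would apply to bar velocity: swap jump height for mean concentric velocity at a fixed load and adjust the cutoff to what you consider a meaningful drop-off.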
Another example of this comes in the form of injury monitoring. From an epidemiological standpoint, there are three types of prevention. Primary prevention refers to taking steps to reduce the risk of injury; i.e., can we stop the athlete from becoming injured before an injury occurs? Tertiary prevention refers to reducing the risk of reinjury once injury has occurred. These two forms of prevention are the most common within sport, but there is a third type we can utilize—secondary prevention—which refers to detecting an injury early and preventing it from getting worse; i.e., we’re injured, but we don’t yet know it.
Embedded testing may assist in better understanding when we’re in the secondary prevention zone, as identified in a study published in 2020. Here, a group of soccer players underwent regular in-season hamstring strength testing in the first training session after a match. If players demonstrated a 14% or greater reduction in isometric hamstring strength in this test, they underwent a second test in the afternoon; if strength in this second test was still below the desired level, the player was referred for medical examination. If strength scores were within the normal bounds, the player undertook training as per usual. When compared to a control group that did not utilize regular testing, the intervention group had five times fewer hamstring injuries.
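That two-stage screen can be expressed as a simple decision function. The 14% threshold comes from the protocol described above; the function name, units, and example numbers are illustrative assumptions:

```python
def hamstring_screen(baseline_n, morning_n, afternoon_n=None, threshold=0.14):
    """Two-stage post-match isometric hamstring strength screen.

    Stage 1: if the morning test drops >= threshold vs baseline, retest
    in the afternoon. Stage 2: if the retest is still below threshold,
    refer the player for medical examination; otherwise train as normal.
    Strength values are in newtons (illustrative units).
    """
    def drop(value):
        return (baseline_n - value) / baseline_n

    if drop(morning_n) < threshold:
        return "train as normal"
    if afternoon_n is None:
        return "retest in the afternoon"
    if drop(afternoon_n) >= threshold:
        return "refer for medical examination"
    return "train as normal"

# Baseline 400 N; morning 330 N (17.5% drop), afternoon 335 N (16.3% drop)
print(hamstring_screen(baseline_n=400, morning_n=330, afternoon_n=335))
```

Because the test slots into the first session after a match, the cost is a few minutes per player rather than a dedicated testing day—exactly the microdosing logic applied to monitoring.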
Spread it out
Whilst we typically view microdosing through the lens of adding small amounts of work (i.e., training) to sessions, perhaps we should cast our net wider, to consider what other useful practices we can spread across sessions—as in the example of embedded testing. It’s clear that more regular data collection can inform more effective decision making around aspects such as training monitoring, fitness, fatigue, and risk of injury. In contrast to more traditional testing regimes, embedded testing allows for quicker decisions to be made. It also reduces the impact of testing sessions on the training program, reducing physical and psychological stress on the athlete—appearing to be a win-win.
Craig Pickering, 2021-02-23: Microtesting: rethinking your testing strategy to improve your decision making