Sports Science Monthly – April 2016
Editor’s Note: We are testing out a new series called the Sports Science Monthly. The goal is to translate the latest sports science research into information that coaches can use. Do you like it? Is it helping your training? Please get in touch on Twitter or via email and let us know.
Welcome to a new monthly collection that I will be writing, looking at sports science for coaches. In today’s sporting climate, coaches aren’t just supposed to coach – they are expected to keep up-to-date with new trends within sports science, including (but not limited to) strength and conditioning, nutrition, biomechanics, and psychology. This creates a lot of problems: many coaches are too busy coaching to sit down and find the relevant research, and research is often presented without context, so the coach doesn’t quite know what a study means. Through this series of articles, I hope to create a resource that helps coaches find recent articles that are applicable to them, and place them in context. I will also report on research that isn’t always specifically applicable to a coach, but is a great example of the scientific method in action – including the limitations of science. This isn’t meant to be an overview of the whole sports science field, as time constraints mean I can only report on a small fraction of the research published each month. However, I aim to pick the studies that might be most relevant and applicable. As always, I would welcome your feedback going forward on how to improve the Sports Science Monthly.
This Month’s Topics
- Concurrent Training
- Training Load
- Overtraining Monitoring
- Injury Rehabilitation
- Stride Length and Stride Frequency of 100m in Training vs Competition
- Sled Pulls
- Strength Training and Endurance Athletes
Concurrent Training
Concurrent training is when strength training and endurance training occur in close proximity to each other. What has been known for a long time is that concurrent training can have an interference effect on adaptations to exercise – the first report on this that I could find came from 1980. In this initial study, the researchers had three training groups: strength training only (S), endurance training only (E), and a combined strength and endurance training group (S&E). After 10 weeks of training, the E and S&E groups had improved their VO2max to similar extents – i.e. strength training at the same time as endurance training did not reduce endurance-based adaptations. However, the strength group had greater improvements in strength than the S&E group, indicating that endurance training alongside strength training does reduce the adaptations to strength training. Over the years, much more research in this field has been done, and a review article published in 1999 summarised that the findings were consistent with that initial study from 1980. More recent research has attempted to explain why this is the case. One of the most well-established explanations is that adaptations to strength training and adaptations to endurance training may occur through different, competing pathways, meaning that whilst the body is recovering from endurance training it cannot maximally adapt to resistance training. Knowledge of this has created best practice recommendations, including doing endurance training first, in a period of low energy, then consuming carbohydrate post-training and before strength training. This is obviously of huge importance to sports that require athletes to have some amount of aerobic fitness alongside strength, like the vast majority of team sports.
However, not all research shows these effects, and it would be misleading of me to say that the results here are 100% clear. That is why more research in this field needs to be done, and brings us to our first study this month, entitled “Endurance Exercise Enhances the Effect of Strength Training on Muscle Fibre Size and Protein Expression of Akt and mTOR” (Kazior et al. 2016). In this study, the researchers recruited 16 males who were not regular exercisers in either endurance or resistance training (remember this, because it is important). These males were split into two groups. Group 1 carried out resistance training only (called group R from here on). Group 2 carried out resistance training and endurance training (called group ER from here on). The training period lasted seven weeks, with subjects training twice per week initially, then progressing eventually to four sessions by weeks six and seven. The R group performed only one exercise, the leg press, for various set/rep schemes. The ER group did exactly the same resistance training, but preceded by a session on a stationary bike lasting between 30 and 63 minutes per session. The ER group had 10 minutes’ recovery between the cycle and leg press sessions.
I should also add that after training, the R group were given 20g of protein. The ER group were given the same amount of protein, but also maltodextrin (a carbohydrate) in an amount relative to the amount of energy these subjects used during the endurance training.
So what did they find? Well, some surprising results and some not-so-surprising results. Firmly in the latter camp, group R did not see improvements in VO2max, but group ER did. However, a surprising result was that there was no difference in increases in muscle strength between the groups; group R saw an average improvement of 86kg, and group ER 85kg. In terms of fast twitch muscle fibres, group R saw an increase in type IIC (a type of fast-twitch fibre with fairly good fatigue resistance), whilst group ER saw an increase in type I and IIC fibres. An interesting finding here is that the mean fibre area only increased significantly in the ER group; i.e. only the ER group put on significant amounts of muscle. The ER group also had much higher activity levels of enzymes thought to be associated with increases in muscle size and strength.
What does this actually mean? Go back and look at the title of the study – Endurance Exercise Enhances the Effect of Strength Training on Muscle Fibre Size and Protein Expression of Akt and mTOR. Sounds pretty positive, doesn’t it? Endurance training can lead to larger muscles and a greater amount of anabolic signalling than strength training alone. Great, we should all go out and do concurrent training, shouldn’t we? Well, no. I can only think of one “sport” where having bigger muscles leads to success, and that is bodybuilding. In all other sports, the size of your muscles doesn’t actually matter. What does matter is how strong you are, and this study found that strength increases were exactly the same in both the R and ER groups. At best, we can say that concurrent training does not reduce the adaptation to strength training based on the results of this study. However, that would mean ignoring the majority of evidence that shows that concurrent training does dampen the effect of strength training.
And that is one of the reasons why I chose to profile this study. If you just look at the study alone, you might well start to programme endurance training for your strength athletes, because it doesn’t reduce adaptations to their strength training, and might even be favourable. That is why context is always important – where does this study sit within the evidence base? If we look at the existing evidence base, we can see that this study is one of a handful of studies that report differing results to the majority/consensus, which is that endurance training does reduce strength adaptations. The key here is that science is an ongoing conversation. This conversation can go one way or the other, depending on the results of studies, but there will always be a consensus. The more research that is done, the firmer and more accurate that consensus will be – but it is still just a consensus.
Let me use an example from outside of sports to illustrate this. Steven Avery (from Making a Murderer) is currently in jail for murder. At this point in time, he is guilty of this crime, even though many people think he is innocent. The evidence given was seen to be good enough to convict him. However, it is possible that Avery is innocent, and so the wrong conclusion was reached. Soon, Avery will likely get a new trial, and new evidence may be put forward. Avery may be found not guilty, and be released. However, Avery could still be guilty. There is no right or wrong answer; only one or two people know if Avery is really guilty (Avery, and the “real” murderer, if s/he exists). All everyone else has to go on is the strength of the evidence presented. The big problem is how the evidence is interpreted and understood; incorrect interpretation can lead to the wrong verdict – exactly the same as within science.
So what do the results of this study mean in the real world? Perhaps nothing. For a start, it’s unlikely that your athletes will be complete beginners like these subjects were, so we don’t know what the results would have been in a trained population. Secondly, the ER group here did much more training than the R group; in reality, you perhaps have 90 minutes each day to train your athletes. Does 90 minutes of resistance training outperform 90 minutes of resistance and endurance training? We don’t know, because it wasn’t studied. However, this study does add to the ongoing “conversation” regarding concurrent training, and possibly shows that endurance training isn’t as negative as previously thought. The study is a really good example, though, of why you need to look behind the headlines and fully examine each study you look at.
Staying on concurrent training, two more studies this month examined the effects of combined endurance and strength training on training adaptations. The first study showed that lower volume endurance training (as seen in the above study) had no negative effect on lower body strength gains, whilst high volume endurance training did. The second study found that 6 hours or less between sessions dampened adaptations during concurrent training. Both of these findings are similar to the body of research in this field, and yield important lessons for team sport coaches – try to separate strength and endurance sessions by as much time as possible, and potentially keep metabolic conditioning work higher in intensity and lower in volume to ensure optimal strength gains.
Training Load
How much training should athletes be doing on a regular basis? Does more training cause a greater number of injuries? Previous research tends to indicate that this could well be the case. For example, this study from 1999 in US military recruits found that “weekly injuries rates were significantly correlated with hours of vigorous physical training”, stating that the more physical training recruits did, the greater their chances of injury. Similarly, Tim Gabbett reported in the British Journal of Sports Medicine (BJSM) in 2004 how a reduction in pre-season training loads in rugby league players led to fewer injuries. Injury rates are often reported in terms of injuries per 1000 hours of training, and so it seems logical that the more training someone does, the greater their chance of injury. For example, in this study it was reported that the injury rate for marathon runners was 2.5 per 1000 hours of training. Therefore, if a marathon runner were to do 2000 hours of training, we might expect them to suffer from 5 injuries – essentially more training leading to more injuries. Intensity might also be a factor; the same study reported that sprinters suffer 5.6 injuries per 1000 hours, indicating that injury rates are much higher in sprinters than marathon runners. Of course, there is a huge confounding variable here, which is that sprint training can be completed in 90 minutes, whilst marathon training can take considerably longer; as such, accumulating 1000 hours of training would likely occur sooner in marathon runners compared to sprinters.
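The per-1000-hours arithmetic above is simple enough to write down. Here is an illustrative helper (my own, not from any of the cited papers), using the marathon and sprint rates quoted:

```python
def expected_injuries(rate_per_1000h: float, training_hours: float) -> float:
    """Expected injury count, given a rate expressed per 1000 hours of training."""
    return rate_per_1000h * training_hours / 1000

# Marathon runners: 2.5 injuries per 1000 h, over 2000 h of training
print(expected_injuries(2.5, 2000))  # → 5.0

# Sprinters: 5.6 injuries per 1000 h, over the same 2000 h exposure
print(expected_injuries(5.6, 2000))  # → 11.2
```

Note that this framing hides exactly the confounder mentioned above: a sprinter takes far longer (in calendar time) to accumulate 1000 training hours than a marathon runner does.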
However, some more recent research has looked at the time-distribution of injuries. This article from 2009 reported the hamstring injury rate in a group of sprinters from Hong Kong, which was 0.87 injuries per 1000 hours (this equates to roughly 1 hamstring injury per 1,150 hours of training). However, this research also showed that hamstring injuries were more likely to occur within the first 100 hours of training, illustrating how conditioning may play a preventative role against injuries.
All of this is useful background to have before examining our second paper this month, “The training-injury prevention paradox: should athletes be training smarter and harder?” (2016) by Tim Gabbett. This is a really, really interesting read, and fortunately the BJSM have made it open access, so you can all read it for yourselves. Gabbett begins by providing an overview of different training load monitoring methods. These are split into two categories, internal and external. An example of external load monitoring given is the use of GPS in team sports. Within track and field, this can simply be a record of cumulative distance run per session, throws per session, or weight lifted per session. Nice and quantifiable, and easy to use for coaches. Internal training load is how the athlete feels, and often rating of perceived exertion (RPE) is used here. This is useful because the same external load can lead to different internal loads for two athletes. For example, my 1RM bench press is 165kg, so a 100kg bench press is going to be a pretty low load for me. However, my mum, who to the best of my knowledge has never done a bench press, would likely find this external load incredibly hard. Same external load, very different internal load. Similarly, the internal training load can differ within the same athlete as a result of fatigue. Running a 200m rep at 25 seconds would have been a very low internal load to me at my peak; however, if I were to do it the day after a 9 x 300m session, it would have been much harder.
Gabbett then reviews the evidence looking at injury risk and internal and external loads, which, in summary show that greater external and internal loads are associated with an increased injury risk. He also looks at the relationship between increases in weekly training load and injury risk, concluding that lower increases in training load are associated with a smaller risk of injury.
The crux of Gabbett’s paper is where he looks at the injury-protective effects of training. Remember that, in the study I linked to earlier, the incidence of hamstring injuries in sprinters decreased the more they trained. Similarly, injury rates in rugby league players are lower when they have a large training base behind them. A lack of physical fitness is also associated with an increased risk of injury, such that under-training is perhaps as dangerous as fatigue, especially long term. Exposure to competition-level intensities is important in order to prepare the athlete for those intensities, and protecting the athlete from those intensities is counter-productive. One of Gabbett’s main points is that training can be seen as a vaccine against injuries, provided that the dose is appropriate.
What does this mean for real-world coaches? This likely adds to what you already know – too much training can cause injury. However, athletes need to be exposed to the intensities they are expected to compete at in order to reduce injury. Long term, a fitter athlete (however you may define fitness) is less likely to suffer from injuries, and so the training load needs to be sufficient to increase that fitness.
Let’s look at a hypothetical example of this. Sprinter A has a lot of natural talent, but a coach that favours high training loads. The sprinter consistently runs good early season times, but then breaks down with injury year after year, never fulfilling his potential. Clearly, this is an example of too much loading. This sprinter grows frustrated with his coach, and moves to another sprint group. In this group, the coach is very cautious about training volumes, and the overall training load is very low. Initially, Sprinter A has a lot of success, running personal bests and lasting whole competitive seasons. This is because his chronic workload is high – he has a good base fitness – but his acute workload is low, so his fatigue and injury risk are lower. However, after 18 months of success, the athlete starts to break down again, suffering from injuries that constantly disrupt his season. This is because any residual fitness left over from his previous coach has disappeared, and so the athlete is no longer adequately prepared for the demands of his sport. Whilst in the short term the drop in training volume helped this athlete, in the long term it has caused plenty of problems, to the point that this athlete has to be protected from competition and can only compete a handful of times each year. The key, as per Gabbett’s paper, is somewhere in the middle.
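The acute vs chronic workload distinction in the example above can be expressed as a single ratio: the most recent week’s load divided by a rolling average of recent weeks. As a minimal sketch (my own illustration of the general idea, in arbitrary load units such as RPE × minutes, using a 4-week chronic window – not Gabbett’s exact method or thresholds):

```python
def acute_chronic_ratio(weekly_loads):
    """Acute:chronic workload ratio: last week's load divided by the
    rolling mean of the last four weeks (arbitrary units, e.g. RPE x minutes)."""
    if len(weekly_loads) < 4:
        raise ValueError("need at least four weeks of load data")
    acute = weekly_loads[-1]
    chronic = sum(weekly_loads[-4:]) / 4
    return acute / chronic

# Steady loading: the ratio sits at 1.0
print(acute_chronic_ratio([400, 400, 400, 400]))  # → 1.0

# A sudden spike in the most recent week pushes the ratio up
print(acute_chronic_ratio([400, 400, 400, 700]))  # → ~1.47
```

The hypothetical Sprinter A’s two failure modes map onto this ratio: sharp spikes (acute well above chronic) under his first coach, and a chronic load too low to prepare him for competition under his second.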
On a side note, Gabbett has been somewhat prolific recently, and is co-author of this paper on workload in return to play from injury. Check it out.
Overtraining Monitoring
You might remember my blog from a month or so ago examining the effects of overtraining. In it, I discussed that diagnosing overtraining is actually pretty difficult, and tends to become a diagnosis of exclusion, which means that everything else has to be ruled out first. I concluded that prevention is better than cure, because recovery from overtraining is a very long process. As part of prevention, I discussed the need for some sort of monitoring programme to be in place, to catch an athlete sliding towards overtraining before it becomes problematic. The question is – what type of monitoring is better? Do we need expensive pieces of equipment, or should we get our athletes to monitor their heart rate and heart rate variability on a daily basis? Or can we just ask them how they feel? A recent systematic review published in the BJSM goes some way towards answering that question. A systematic review is where the researchers look at all the available data from previously published studies, decide which have used suitable methods, and then determine the strength of the evidence.
The study is called Monitoring the athlete training response: subjective self-reported measures trump commonly used objective measures: a systematic review (Saw et al. 2015). The authors looked at the results of 56 different papers that were relevant to the monitoring of fatigue from training. The most commonly used subjective measures of training response were the Profile of Mood States (POMS), Daily Analysis of Life Demands for Athletes (DALDA), and the Recovery-Stress Questionnaire for Athletes (REST-Q). Objective measures included measurement of cortisol, testosterone, immune function, and inflammatory biomarkers. In general, the subjective measures were more sensitive to training load, especially acutely, than objective measures; they were also more consistent across time. Over time, subjective measures were good at identifying acute increases in training load (i.e. subjective well-being went down), and also good at measuring improved well-being with a reduction in training load.
Overall, the researchers found that 85% of the studies favoured the use of subjective measures over objective measures for monitoring of athlete well-being. Of all the subjective measures, the researchers recommended the use of questions to determine motivation, symptoms of injury or illness, stress from outside of training, general health, recovery, and fatigue. These markers were thought to be useful for both acute and chronic monitoring of training load. Interestingly, the authors found that the following markers did not respond well to acute changes in training load, and so might not be useful: sleep quality, depression, emotional stress, and social recovery.
What does this mean for coaches? The good news is that it doesn’t necessarily seem to be crucial that we get athletes to undergo objective monitoring of training load, such as heart rate variability, resting heart rate, etc. This is useful, because objective measures such as those can involve expensive equipment, and can be time consuming for the athlete. Inconsistencies in using the equipment can also lead to quite a large variation in athletes’ scores. Using more subjective measures can reduce this intra-athlete variation, increasing consistency and accuracy. They are also potentially less time consuming, increasing athlete adherence, which is often one of the main stumbling blocks of athlete monitoring systems.
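A daily subjective check-in of the kind described above can be as simple as a handful of rated items trended as one number. Here is a minimal sketch; the item names and 1–5 scale are my own hypothetical choices, loosely based on the markers the review recommends, and the flag threshold is purely illustrative:

```python
# Hypothetical daily wellness check-in. Items loosely follow the markers the
# review recommends (motivation, illness symptoms, non-training stress,
# general health, recovery, fatigue), each rated 1 (very poor) to 5 (very good).
def wellness_score(responses: dict) -> float:
    """Mean of the item scores: a single number to trend over time."""
    return sum(responses.values()) / len(responses)

today = {
    "motivation": 4,
    "illness_symptoms": 5,
    "outside_stress": 3,
    "general_health": 4,
    "recovery": 2,
    "fatigue": 2,
}

score = wellness_score(today)
print(round(score, 2))  # → 3.33

# A crude flag: a drop below a baseline threshold prompts a conversation with
# the athlete, not a diagnosis. The 3.0 cut-off here is illustrative only.
if score < 3.0:
    print("check in with the athlete")
```

The appeal of something this lightweight is exactly the adherence point above: it takes the athlete under a minute a day, with no equipment.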
Injury Rehabilitation
Typically when we think about rehabilitation from injuries, we are interested in the physical side of it – what exercises should we be doing, at what volume and what intensity? That’s a good starting point, but we also have to recognise that rehabilitation from injury is not only physical, but also psychological. That was the focus of a recent review, again in the BJSM, entitled Psychosocial factors associated with the outcomes of sports injury rehabilitation in competitive athletes: a mixed studies systematic review (Forsdyke et al. 2016). I’ll describe the study findings first, and then give you my experience in this area.
The researchers identified 25 studies that were eligible for this review, which included a total of 942 athletes. The studies were very mixed; some focused exclusively on surgery, some contained data on very short term injury rehabilitation, others on very long term rehabilitation. Within these articles, there were three key themes regarding the psychosocial factors that affect the success of rehabilitation from sports injuries.
The first of these themes was related to the emotions felt by the injured athlete undergoing rehabilitation. Heightened levels of anxiety and fear were often reported, especially as return-to-play got closer. Many athletes feared re-injury, or under-performance, as they got closer to resuming normal training and competition. During rehabilitation, fear of falling behind peers was also present. Some of these studies reported that being unable to discuss these fears with a suitable practitioner increased the intensity of these feelings.
The injured athlete’s thoughts were also important. A successful return to sport was found to be associated with feelings of sport-related confidence, which obviously needs to be built up over time. Methods such as fitness testing also increased confidence, provided the results were positive. Seven studies reported that viewing an injury as a chance to improve was really important in increasing the chances of a successful rehabilitation.
The behaviour of the injured athlete was the third theme. The research isn’t clear as to whether avoidance strategies were useful or not during rehabilitation, but what was clear was the importance of social support. This could come from team mates, coaches, family, and also the medical/support staff. A feeling of being valued was also really important.
From these results, we can see how the coaching team can have a really positive impact on the rehabilitation process from a psychosocial point of view. There should be someone available for the athlete to speak to about their concerns, and this person should be able to place these concerns into context for the athlete. Framing the rehabilitation process as positive can also be useful – a chance to improve, or work on weaknesses. The correct use of physical rehabilitation methods can also be really important, as it builds confidence. Regular fitness testing, so the athlete can both see improvements and compare themselves to their previous self, can be really useful in increasing confidence. Exposure to gradually more challenging sport-specific tasks also increases the athlete’s self-confidence. Finally, it can be important for the athlete to still feel like part of the team, where appropriate. Where possible, the athlete should be able to train alongside their usual training group, to avoid feelings of isolation.
I’ll give you an example of all the above from my experience. I suffered a pretty bad back injury in 2012, and as a result had back surgery and a long rehabilitation process. I was fortunate enough to be able to spend some time at the British Olympic Association’s Intensive Rehabilitation Unit (IRU). The set up here is optimised to get athletes back to full training and competition as soon as possible. On my first day there, I underwent a variety of fitness tests, to see where I was at that specific point in time, and also what my biggest weaknesses were. I was retested halfway through, and again at the end of my time there. I saw that I had managed big improvements, which went a long way to increasing my confidence. I was also able to have regular meetings with a psychologist, to discuss my fears and anxieties, and have strategies in place in order to be able to keep on top of everything as my return to competition got closer. Working with the S&C coach there, I was gradually exposed to harder exercises in an environment where I had 100% confidence that re-injury could not occur, which again furthered my improvements. Outside of the IRU, other things helped my recovery. I was using it as a chance to improve and finally get healthy, after 11 years of back pain. I was gradually exposed to more intense sport-specific exercises over time, which helped my confidence to grow; first jogging, then sprinting over short distances, then longer, then adding spikes, then bends, and finally blocks. Eventually, I made my return to sport 9 months after surgery, without any complications. Hopefully you can see the value of a 360-degree approach to rehabilitation, and not just focusing on the physical side of it – especially when pain has such a large psychological component.
Stride Length and Stride Frequency of 100m in Training vs Competition
If you’re a sprint coach, you’re likely interested in sprint mechanics. And this means that you’re interested in stride length and stride frequency, the two critical determinants of sprint speed. You might also be monitoring stride frequency and stride length in your athletes’ training sessions, which leads to an interesting question – how does performance in training correlate with performance in a race? If in training you can see your athletes are not achieving the stride length or frequency parameters you would like, should you worry, or should you assume that an improvement will be made in the race? If the latter, how big will that improvement be? A recent article in the Journal of Strength and Conditioning Research helps to give us some answers.
In this study, the researchers recruited 19 male and female Japanese sprinters. The average 100m time for the males was 10.87 seconds, and for the females 12.42 seconds. Far from world class, which we must consider when drawing conclusions; but still, pretty good. They were all well trained, with a minimum of 6 years of sprint training, and two of them were international athletes. The researchers analysed the sprinters during both maximum velocity speed training sessions and races.
What they found was similar to what you might expect – sprinters run faster in a race than they do in training. But the question is: is this increase caused by changes in stride length, stride frequency, or a combination of the two? In this cohort, stride length did not change between training and the race; only stride frequency did. Stride frequency is essentially made up of time spent in the air and time spent on the ground. These athletes improved their stride frequency by spending less time on the ground, and the same amount of time in the air.
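The relationship described above can be written down directly: stride frequency is the reciprocal of ground contact time plus flight time, and speed is stride length multiplied by stride frequency. A quick sketch with illustrative numbers (mine, not taken from the study):

```python
def stride_frequency(contact_time: float, flight_time: float) -> float:
    """Strides per second, given ground contact and flight times in seconds."""
    return 1.0 / (contact_time + flight_time)

def speed(stride_length: float, contact_time: float, flight_time: float) -> float:
    """Running speed (m/s) = stride length x stride frequency."""
    return stride_length * stride_frequency(contact_time, flight_time)

# Illustrative figures: 2.20 m stride, 0.10 s on the ground, 0.12 s in the air
print(round(speed(2.20, 0.10, 0.12), 2))  # → 10.0

# Shaving 0.01 s off ground contact, with flight time unchanged, raises speed
# without any change in stride length - the pattern this cohort showed in races
print(round(speed(2.20, 0.09, 0.12), 2))  # → 10.48
```

This makes the study’s finding concrete: at a fixed stride length, even a hundredth of a second less on the ground per stride is worth a meaningful amount of velocity.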
What can this mean for coaches? Well, firstly, if you’re monitoring stride lengths and frequencies within training, it means that you should be stricter on the stride length variable than the stride frequency one. You should ensure your athletes are hitting the stride length optimised for them, whilst you can allow a slightly slower stride frequency than what you might expect in a race. Secondly, and this could be me over-extrapolating from the results slightly, I think this means that in the week leading up to a race, it might be a good idea to do all we can to ensure that stride frequency is optimised. This would mean ensuring good central nervous system recovery, and maybe utilising some sort of plyometrics or over-speed training, provided total volumes are low.
Of course, there are limitations to this study. We have no idea if the results would be the same in a group of 10.00 sprinters. Subject arousal is likely a big factor, as athletes tend to be much more motivated for races than for training, although a potential strength of this study is that it does mirror the real world somewhat with regards to arousal. Research by my university tutor Aki Salo also shows that stride length and stride frequency are individualised to each athlete – some run their best races when stride length is optimised, others when stride frequency is optimised. Still, food for thought, especially if you’re monitoring stride frequency in training.
Sled Pulls
If you use sleds for resisted acceleration/sprint work, it might be a good idea to consider the placement of the harness on your athletes. That’s according to a study published in the Journal of Strength and Conditioning Research, in which researchers tested the effects of shoulder and waist harnesses on sled drags. The 14 male subjects, who had experience of resistance training (but note, they were not sprinters), ran from a 3-point start position over 6m with a sled attached at the shoulder or waist, or with no sled at all. Velocity was slightly higher in the waist trial compared to the shoulder trial. Contact time was the same, although the braking phase was shorter in the waist trial. Both vertical and horizontal force production were higher in the waist trial compared to the shoulder trial, although only the net horizontal impulse was significantly different from the shoulder trial. There were differences in kinematics between the trials too, with the shoulder trial having a significantly smaller trunk range of movement than the waist trial.
What does all this mean? Well, firstly, it means that using a sled does significantly alter running technique, so there isn’t a huge technical crossover between sled drags and actual sprinting – but you likely knew that anyway. What this study adds is that if you are going to use a sled, you should use a waist harness rather than a shoulder harness, as it alters technique to a lesser degree and allows slightly more horizontal force production, which is crucial during the acceleration phase.
Strength Training and Endurance Athletes
And finally this month we have the results of a meta-analysis conducted on endurance athletes, measuring the effects of strength training on their running economy. Meta-analyses are great because they pool the results from a set of studies, so you get a good idea of what the evidence says on a given subject. This particular meta-analysis was useful because it looked at well-trained runners, so it isn’t a case of beginners improving simply because any training is new to them. I won’t spend ages analysing this paper simply because, if you’re an endurance athlete or coach, you should already know that you should be doing some type of strength training. The results of this meta-analysis confirm this; resistance training has a “large, beneficial effect” on running economy. The guidelines put forward by the authors were to use a mix of high- and low-intensity resistance training, as well as plyometrics, 2-3 times per week for 8-12 weeks. So there you go, no more excuses.