Earlier this month I discussed the scientific process and how to interpret results, but at some point we also have to ask whether the process that produced those results was valid in the first place. In my last article, I discussed a study on HMB supplementation and explained how the absence of female subjects and trained athletes limited the conclusions that could be drawn from the findings and applied to wider populations. But there was another, larger issue with the study design: the study had no control group.
Entries by Craig Pickering
Last week I wrote about the scientific process and what a scientific research paper entails. In this article, we will take a closer look at some of the issues we need to consider when evaluating the results of a given study. As we will see, taking results at face value can sometimes be misleading.
In recent years, coaching has become increasingly science-led. Where once coaching was primarily about designing a training programme, knowing the correct technique (as taught to you by someone else), and getting results, today's coach has an increasing number of responsibilities. The internet has opened up discussion of every aspect of training, with the result that the modern coach is expected by his or her athletes to be up to date with the latest research on training, periodization, strength training methods, nutrition, recovery modalities, and the biomechanical factors affecting technique, to name but a few.