Can You Repeat That Please?

You might not know it, but scientific research is facing a crisis. Swathes of previously accepted research findings are being called into question, as subsequent experiments have failed to reproduce the findings of the original papers. This replication crisis is strongest in psychology, especially social psychology, but has roots in, and implications for, all branches of science. And as more coaches look to the latest scientific research and social psychology findings for an edge, it has a large impact on coaching too.

Science Isn’t Perfect

Perhaps the most famous example of this replication crisis is the power pose saga. One of the most popular TED talks ever is by Amy Cuddy, a social psychologist from the Harvard Business School. In the talk, Cuddy discusses the idea of power poses: simple, brief stances that purportedly increase testosterone and decrease cortisol. These findings were published in 2010, and a book soon followed. However, in 2015, a second research group attempted to replicate these findings in a larger group of subjects and found no effect of power posing on hormones. In 2016, one of Cuddy’s co-authors publicly stated that she does “not believe that power pose effects are real.” The reasons for this are varied and linked to nuanced methodological issues; perhaps the key one is that it wasn’t possible to discern whether the poses themselves, or winning the game played in the experiment, boosted testosterone (winning tends to cause an increase in testosterone).

Similarly, doubts have recently been expressed regarding other popular research findings, such as the “ego depletion” hypothesis behind decision fatigue, and the delayed-gratification findings of the famous marshmallow test. These are two of many cases where a lack of replication could further damage the social psychology field.

Replication and the Scientific Method

So why is this important? Well, replication is crucial to the development and refinement of ideas through the scientific method. The idea is that theories and mechanisms are tested, and then tested again, to ensure that the initial findings weren’t a fluke. The more often a finding is replicated, the more confident you can be in its purported effects. This becomes even more important when incorrect results can cause damage, such as in medical drug research, where a false finding could cost someone their life. This is why drug approvals require numerous well-designed replications. Even in my day job, we require a minimum of two replications of an initial study before we utilize those findings.
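The value of replication can be sketched with some simple probability. Assuming the conventional 0.05 significance threshold and fully independent studies (a simplification, since real studies can share the same biases), requiring even one replication sharply cuts the odds of a fluke surviving:

```python
# Under the null hypothesis (no real effect), a single study still
# "finds" an effect at the 0.05 threshold about 5% of the time.
alpha = 0.05

# Chance that one null-effect study yields a false positive:
single_study = alpha                  # 0.05, i.e. 1 in 20

# Chance that a study AND an independent replication both yield one:
with_replication = alpha ** 2         # 0.0025, i.e. 1 in 400

# One independent replication cuts the false-positive rate
# twenty-fold; each further replication cuts it again.
```

The independence assumption is generous to real research, which is exactly why replication by different groups, with different methods and subjects, counts for more than a lab repeating its own work.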


Of course, non-replication itself does not necessarily mean that the initial study was wrong. There are a number of valid reasons why findings might not be replicated. To begin with, there might be differences in the underlying methodologies that different researchers used. If we step into the realm of sports science, one group might have used trained instead of untrained athletes, for example. Perhaps they used slightly different doses of a supplement – 3 mg/kg of caffeine as opposed to 6 mg/kg – or maybe they didn’t control for other extraneous variables, such as diet. Perhaps the subjects hadn’t slept as well the night before, and so couldn’t perform to as high a level on a maximal exercise test. The possible reasons for a lack of replication are many and varied; there could be an innocent explanation, or it could simply be that there is no effect where one was previously thought to exist.

Impact on Sports Science

When it comes to the athlete, coach, or support staff, what does this potential replication crisis mean? First, while it’s important to keep abreast of research, it is probably a good idea to wait for findings to be replicated, ideally by different research groups, before you add them into practice. Second, if you are using scientific research to underpin your training decisions, become scientifically literate; be able to critique a study, and understand basic statistical methods so you can tell whether the researchers are potentially over-egging their findings. A great example of this is statistical significance, often denoted by a p-value of <0.05. All this tells us is that, if there were actually no effect, the chance of observing a result as extreme as this one would be low. What it doesn’t tell us is the size of the effect, which, if you’re involved in sport, is what you should really be interested in.
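The gap between statistical significance and effect size can be illustrated with a hypothetical example (the numbers below are invented for illustration): a 0.1-second improvement on a 60-second test, against a standard deviation of 2 seconds, is a trivial effect, yet with enough subjects a simple two-sample z-test will still call it “significant”:

```python
import math

def cohens_d(mean_a, mean_b, sd):
    # Effect size: difference between group means, in standard-deviation units
    return (mean_a - mean_b) / sd

def two_sided_p(mean_a, mean_b, sd, n_per_group):
    # Two-sided p-value from a z-test for two equal-sized groups with a
    # known, shared standard deviation (a simplifying assumption)
    se = sd * math.sqrt(2 / n_per_group)
    z = abs(mean_a - mean_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Hypothetical: a 0.1 s improvement on a 60 s test, sd = 2 s
d = cohens_d(60.1, 60.0, 2.0)                  # d = 0.05, a trivial effect

p_small = two_sided_p(60.1, 60.0, 2.0, 20)     # 20 per group: p well above 0.05
p_large = two_sided_p(60.1, 60.0, 2.0, 5000)   # 5000 per group: p below 0.05
# Same tiny effect in both cases; only the sample size changed,
# yet the p-value crosses the "significance" threshold.
```

The point is not the exact numbers but the pattern: a p-value below 0.05 can coexist with an effect far too small to matter on the track, which is why effect sizes deserve at least as much of your attention.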

Of course, it’s also important to remain pragmatic when considering these issues. There is nothing inherently wrong with using techniques that have not been replicated, provided they don’t cause harm. This harm could be in terms of performance or injury, or in terms of the time and cost of using the new method. For example, many training programs probably contain a number of components that don’t actually have any positive physical effect – but if the athlete believes they do, that belief can be important in and of itself.

When considering research, it’s always worthwhile to ask what the worst outcome could be. For example, I believe that certain gene variants increase the risk of Achilles tendon injuries, and to offset this I recommend that athletes who run and who carry those risk variants undertake some form of prehabilitation work to protect themselves and reduce their risk of injury. If I’m wrong, what is the worst that happens? The athlete does some exercises that have been reliably shown to reduce injury risk; no damage occurs, so it’s hard to see a negative there. If I’m right, then I’ve potentially reduced that athlete’s chances of injury, which in turn supports performance. This pragmatism helps protect athletes from unnecessary risk, while ensuring they utilize cutting-edge research to improve their performance and get a step ahead of their rivals.
