This month’s theme on HMMRMedia is technology and sport. Over the past few years technology has often become synonymous with data. New technologies are allowing more data to be collected in sport. This information can then be utilized by coaches and support staff to understand where the athlete is at, and to make decisions on a future course of action.
Data, therefore, has a lot of potential upsides to its use, but the danger is that we become reliant on the data—which isn’t always accurate or able to give a true picture—at the expense of individual intuition and experience.
The value of intuition
A recent study, published in the International Journal of Sports Physiology and Performance, provides a nice illustrative example of this balancing act between intuition and data. Here, the authors wanted to understand how a coach’s subjective assessment of athlete performance aligned with that of an athlete monitoring tool. To do this, they recruited nine nationally competitive swimmers (one later withdrew due to external time commitments), all coached by a single coach in Australia. For a nine-month period, the swimmers recorded their morning HRV, and completed a subjective questionnaire on their perceived recovery and fatigue. Training sessions were also monitored, with data such as rating of perceived exertion, duration, and distance all collected. Prior to racing, the coach was asked to predict each athlete’s performance in terms of race time; a total of 93 races were “predicted” by the coach in this way.
It turns out that the coach was very good at predicting their athletes' competition performances, based on intuitive assessments of how each athlete's training had been progressing. Predictive ability in this study was calculated as "Area Under the Curve" (AUC), where a value of 1.0 indicates perfect prediction and 0.5 indicates prediction no better than chance. The coach's AUC was 0.93 when it came to identifying performance decrements, and 0.89 when it came to predicting performance improvements. Adding the athlete monitoring data to the coach's predictions did not improve the predictive power of any of the subsequent models.
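To make the AUC metric concrete, here is a minimal sketch, using made-up numbers rather than the study's data: AUC is the probability that a randomly chosen case where the outcome occurred was scored higher than a randomly chosen case where it didn't (ties counting as half), which is why 1.0 means perfect ranking and 0.5 means chance.

```python
# Illustrative sketch only: the scores below are hypothetical,
# not values from the study discussed above.

def auc(scores_pos, scores_neg):
    """AUC via pairwise comparison: the fraction of (positive, negative)
    pairs in which the positive case received the higher score,
    counting ties as half a win (the normalized Mann-Whitney U)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical coach "confidence of a decrement" scores for races
# that did (pos) and did not (neg) actually show a decrement.
pos = [0.9, 0.8, 0.85, 0.7]
neg = [0.2, 0.4, 0.3, 0.75]
print(auc(pos, neg))  # 0.9375: near-perfect ranking of the two groups
```

A coach who always ranked decrement races above non-decrement races would score 1.0 on this measure; random guessing converges to 0.5.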
This all suggests that experienced coaches—or at least this coach—can be very good at making intuitive assessments of athlete progression, and using this information to predict a competition performance outcome. The coach was likely able to “collect” data by observing their athletes in training sessions, as well as understanding their mood and psychological state, and use this data alongside their experience to accurately predict performance.
The problem with this approach is that it's actually quite difficult to "teach" insight and intuition. Developing intuition takes time and experience; we need to be exposed to different situations, contexts, people, and outcomes, and this isn't something that can be fast-tracked. It requires years of practice, and, beyond that, it requires reflection on the part of the coach: did what they thought would happen actually play out in reality? What did they miss? How can they use the information gained in this particular scenario to update their mental model and sharpen their future intuition?
Furthermore, intuition is a very human trait, and so is influenced by things such as bias and emotion, along with overconfidence. So, whilst intuition on the part of the coach appears really positive, being in a position to make accurate intuitive inferences takes time and experience, and even then, is heavily subject to bias. The coach in this study had 13 years’ experience in coaching national and international athletes, potentially demonstrating the time required to develop this intuition. Furthermore, as identified by the authors, swimming is a sport in which the environment is fairly constant, and race results are therefore rapidly available post-competition, and easily comparable across competitions. This allows for both rapid and accurate feedback, which can be used to update the mental prediction models used by coaches. Conversely, more complex sports, such as soccer, are harder to predict, as there are a substantially greater number of inputs that have to be taken into account, and luck can have a far greater effect.
Perhaps the best approach, then, is that of triangulation, which Paul Gamble has written about extensively on his own blog. Here, we can take information from multiple sources, and see whether they’re giving broadly the same message, even though they might differ slightly in the details. In the case of training load management, we could use both data and intuition; does the training load data match up with the athlete’s perceptions of load, and do these thoughts align with the coach’s intuition? If they do, we actually have more confidence in our decision; we’ve supported our gut feel of the situation with non-biased data. If not, we may need to reassess.
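The triangulation logic above can be sketched as a simple decision rule. Everything here is a hypothetical illustration, with assumed names and a crude True/False framing, not a method from the article or from Paul Gamble's writing: each source "votes" on whether the athlete's load is a concern, and agreement increases confidence while disagreement prompts reassessment.

```python
# Hypothetical sketch of triangulation across three sources.
# The flag names and the all-or-nothing rule are assumptions
# made for illustration only.

def triangulate(load_data_flag, athlete_flag, coach_flag):
    """Each flag is True if that source suggests the athlete's
    training load is a concern. Unanimous agreement (in either
    direction) supports acting on the shared conclusion;
    mixed signals suggest reassessing before deciding."""
    votes = [load_data_flag, athlete_flag, coach_flag]
    if all(votes) or not any(votes):
        return "agree: act with confidence"
    return "disagree: reassess"

print(triangulate(True, True, True))   # data, athlete, and coach align
print(triangulate(True, False, True))  # mixed signals
```

In practice each source would of course be richer than a boolean, but the structure is the same: confidence comes from convergence, not from any single stream of data.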
Finding the right role for data
Returning to the study, it's important to note that the data collected was not inaccurate or ineffective; it's just that, in this specific case, it did not add further predictive value to this particular coach's predictions. For a different coach, perhaps one much earlier on their developmental journey, the data may have added value. Nevertheless, I think this paper indicates a few important lessons:
- More data is not always better. Think about value as much as volume.
- Coaches can outperform collected data in terms of prediction when they have the required knowledge and experience.
- Data can still be used to check and challenge our intuitions.
- When collecting data, utilize measures that add value or support decision making, rather than collecting data merely because we think we should.