Sportlight Conversations: Richard Clarke

Richard Clarke is the lead strength and conditioning coach for the Bristol Flyers basketball club (British Basketball League), an S&C course leader at the University of Wales and an independent consultant who is also the creator and host of the Strength Coach Curriculums podcast. A recent conversation with Clarke focused on agility, performance tracking technology and the challenges of data:

On your Twitter account, you wrote, “Agility is the ultimate multidisciplinary problem.” Can you elaborate on that?

Agility, I think, is poorly boxed in with all the other components of fitness. Traditionally, what coaches are taught is that we have speed, strength, power, and then agility kind of gets fit into that – when in reality agility is an umbrella term for almost all of those other components of fitness manifesting themselves in a sport-specific situation. So it isn't "I am training, quote, unquote, agility." It's taking all of those independent things and producing a useful, sport-specific end product.

Agility is going to be the closest thing that we could focus on, from a strength-and-conditioning, physical-performance staff point of view, that is going to have a very direct and very important transfer to competition outcomes. Everything else contributes, of course, but it normally contributes via an agility-related action – apart from specific situations like collisions in rugby or very, very sport-specific, isolated, less-dynamic running situations, I suppose.

What's important with agility is that – if we're going to optimize that final outcome of the way that somebody performs in their sport on the pitch, on the court, on the field – we look at it from a range of different viewpoints. That's exactly what agility, as a component, essentially does: It takes a physical understanding, a perceptual-cognitive understanding and a sport-specific-requirement understanding, and then it says, "These are the kinds of movement skills that we need somebody to be able to use in this situation."

How close is the field of sports science to being able to marry the measurement technologies that exist right now with the human component of evaluation – analysis of all those dynamics you described within context?

I think we're getting there. It's more about the technology that's needed to optimally do that, to help us coaches guide our attention to other pieces of information, to broader and context-related things. That tech exists – it's just not mainstream yet. It’s been built, it's physically possible. It just hasn't been integrated into clubs, stadiums, courts, etc.

For me, that's something I'm really excited to see over the next five, eight, 10 years. Historically, we've had a real reliance on GPS to provide a very descriptive outline of some basic, primarily linear exposures to certain things, and I think there's going to be a huge boom in different types of technology coming together that talk to each other and give us much more detailed insight into the situation. That becomes really foundational.

I think that's where we're going: understanding dynamic human movement in much more detail. Too often we're still using a linear-speed, closed-skill thought process, and we're forgetting that we need to be better as coaches and sports scientists at understanding context, understanding task, thinking about attention. What was the athlete trying to achieve? What were all the external constraints that were influencing them at the time? At the moment, the only way for us to really do that effectively is by watching subjectively, seeing what happens. New technologies that begin to talk to each other and expand our capability of measurement, for me, will probably have the next huge impact on the way we think about things in sport – energetics, general time-motion analysis, in-play kinematics, proximity to defenders, and time-aligning all of that with video for the stuff that you can't measure.

When you think about those technologies and the problems you’d hope to solve, how important are the consistency and comprehensiveness of those measurements?

I can see that some of the technology being developed will start giving us greater insight into what's happening when we see certain movements, or exposure to certain movements. But until we understand that better, we can't use it to optimally monitor and really inform day-to-day training, or to establish a dose-response relationship. It's almost like we first have to measure it, and it will be useful for performance problems and particular isolated game-play situations; then, once we get a better handle on it and we've got more data and we understand the dose response, it may become a bit more of a mainstream monitoring-based process, where we understand that because this is happening, this is what we think is happening physiologically to the athlete.

But I think there's a massive benefit before we even get to that point of understanding things more. In an acute injury-risk situation, if we keep monitoring from a load-manipulation, injury-reduction standpoint, I think we'll start to understand ACL occurrence and things like that much better. Initially, we'll have two separate groups of users – one for performance, exploration, education and understanding, and one for "We know what this does and we know what we're getting, so let's track these numbers."

How much of the data that is being collected by existing tech is currently being used? How much of it is useful at all?

I can only speak from what I see and what I get the impression of. But I think there's a massive amount of data that gets collected and used and has no practical impact. So why am I excited for new technology and new data? There's still a risk of just getting a soup – so much information we do not know what to do with. That being said, I don't necessarily think that's a reason to not explore those new things.

When the data isn't getting used, I think it's often because there are issues with sensitivity or accuracy. GPS does some things very well, and there are some things it doesn't do particularly well in terms of how reliable and sensitive the numbers it produces are, and how useful those numbers are for dose-response monitoring, etc. So some of the data possibly exists and is useful but doesn't get used, just because of the amount of time and personnel needed to analyze it in a way that we understand very well. Some of the data, I think, exists but is just too noisy to have practical purposes – the collection methods just aren't quite there yet. There are areas where the data simply isn't there. And the fourth category, potentially, is where the data is there, but you would need six different independent pieces of technology to give you six independent things that aren't talking to each other. Then you come back to the personnel problem and a decision-making problem, because you'll just get information overload.

So I think that you've already got organizations that are very well resourced, that have a lot of staff with a lot of good critical understanding, and they are getting a high yield of information from the technology that they have, which they’re using very well. And for those with more tech, more information might be useful if they can effectively learn to manage it. But at the same time, then obviously you have these other environments where there just isn’t the time or resource to be able to data-mine and understand it.

So some of this data collection is exploratory in nature? The idea is to find out more about it over time?

Yeah, absolutely. When we think about the new tech that's starting to evolve, where it's going to be really challenging for clubs is that we're going to be faced not only with new technology and huge amounts of data, but also potentially with a bit of a mentality shift from dose-response information to much more contextualized biomechanics and detailed exposure. The general level of understanding of change-of-direction actions, braking actions, agility-based situations is much lower than how well we understand repeated sprint exposure, for example. Will it be cart-before-the-horse, because there's just not the underpinning staff understanding of 1) that area and 2) the fact that these variables are all contextually dependent on what's happening in the game – athlete attention, essential skills and information, and so on?

So I think that's going to be a challenge for clubs – to really think about both the volume of information they get and the context in which that information is coming. How they learn to use it – not just getting overwhelmed – and interpreting it appropriately. That's going to be the key to both successful implementation and successful products being integrated into practice.

Has the field of sports science arrived at a place where performance practitioners are able to kind of optimally balance the training of athletes for peak performance with the mitigation of injury risk? Has technology helped in that regard?

Yes, I think it has. The way sports science has progressed over the last 20, 30 years – much of it before my time – there's no doubt that tech has had a hugely positive impact on our understanding.

I still sometimes reflect on the fact that we're getting all of this objective information from technology but still need to use that information subjectively. These are human beings, athletes, performance systems. Performance isn't as simple as, well, the computer said "red light," so we won't do any more of this. The objective data gives us insight, and it helps us understand things that we can't rely on the eyeball test for. But you also draw on subjective pieces of information, and on a coach's intuition, when a decision gets made about performance improvements or managing the injury-reduction process. The decision ends up being a subjective one informed by a range of different things.

But, yes, the tech has moved things forward. It has really helped and removed a lot of bias, and it's helped us educate ourselves much, much more effectively. It's useful in improving practice if the practitioner understands it, interprets it and uses it to make an appropriate decision.
