Callum Walsh is a high performance practitioner who is currently completing his doctorate at Leeds Beckett University and is a former head of sports science at Newcastle United Football Club. A recent conversation with Walsh focused on player-tracking technology, evaluating performance practitioners and the usability of data:
What skill or area of performance is still difficult, or perhaps impossible, to accurately measure that you’re eager to see technology catch up with?
There's always the context behind the physical data. So when you're doing maximal testing – whether that be game data, jump testing, sprint testing – there's always that context behind it. Do they feel comfortable doing it? Do they think that data is going to be used against them? Because I think that if they set the bar too high, then all of a sudden, if they don't reach that mark again, then they’ll be judged badly.
I think the other aspect – and why, for example, Sportlight is so good – is the ability to measure the exact game profile of every single player. Every player plays the game in a completely different way: how they cover every meter can be different, how they manage themselves in terms of understanding their own physical capabilities, and how that then impacts what is required from them. So one of the really tough things to do, for example, is measuring deceleration. You might have someone who is an incredibly fast player, but over a period of time he's learned that if he gets to top speed, he struggles to slow down very quickly. So if he's going to press a player, and he knows that if he gets to full speed he's not going to be able to stop, he then doesn't reach full speed, because he knows he hasn't got the physical capabilities. So then there's also that metric of how important being this quick – let's say 10.5 meters per second – really is, if you can only slow down from 9.5 meters per second. That's important in terms of certain game models, like a high-press game model. I think that's a really interesting one.
And then some of the data that we were looking at was degrees of turns at certain speeds and how hard those cuts were and how comfortable some players were making them. Some players felt extremely comfortable doing 90-plus-degree turns at quite high speed, and others didn't. And this is really important when you're looking at building physical robustness in players, particularly if they're coming back from an injury. Let's say it's an ACL injury. Well, what is their game model? What sort of speeds do they usually get up to, and what sort of angles are they cutting at? So you can gradually periodize that up – 90 degrees, then 95, then 100. It just kind of gives you a point of reference. There's so much context within it that we have to be really smart and look to build individual athletes within a team sport.
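The gradual angle progression Walsh describes could be sketched as a simple return-to-play planner. This is purely illustrative: the starting angle, step size and target below are hypothetical numbers, not any club's actual protocol.

```python
# Illustrative sketch of periodizing cutting angles after an injury, as
# described in the interview (90 degrees, then 95, then 100, and so on).
# The start, target, and step values are hypothetical assumptions.

def cut_angle_progression(start_deg=90, target_deg=110, step_deg=5):
    """Return a list of (week, max_cut_angle_deg) targets, stepping up
    the permitted cutting angle gradually from start to target."""
    plan = []
    angle = start_deg
    week = 1
    while angle <= target_deg:
        plan.append((week, angle))
        angle += step_deg
        week += 1
    return plan

for week, angle in cut_angle_progression():
    print(f"Week {week}: cuts up to {angle} degrees")
```

In practice a plan like this would be layered with speed targets as well, since Walsh stresses matching both the angles and the speeds of a player's usual game profile.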
How much of the data currently being collected by sports organizations would you estimate is usable – or actively used – to guide training, performance and talent evaluation?
I think it's a constant learning aspect. It is really interesting to take one number and make two completely different decisions out of it. The perception of one number can be very different depending on how you look at it and the context you bring to it. For example, there was a game the other day. I looked at it and thought about the way the schedule worked and what we'd done – it was a bit of a crazy week. We'd had a long build-up, which was not like us. We equalize in the last minute, but physically we don't look great. Maybe the week was too long. But then we woke up and got the physical data, and it was the highest performance we'd had. So sometimes you can have one perception, but it's quite nice to see something else.
You can't just take one number. Maybe the physical stats are high, but maybe the technical stats were lower because they were a little bit more fatigued. So you can't take one number without the other. And I think a lot of data is used in terms of periodized weeks. I think we're getting quite good at that. For example, we don't want to do high speed too close to a game. Whether the coaches take that on board or not is always a completely different story; it’s a really complex matter within the industry – something I'm looking at for my PhD. Obviously, data helps you build pictures.
A lot of the data we collect now is looking not just day to day or week to week, but longitudinally – what the norms are for players. It means that we can start to make judgments. For example, I had one player who had three games with crazy, crazy high-speed running. I mean, it makes me dizzy just thinking about how every game he got higher and higher. But that was probably OK for him because of his chronic load, which we keep quite high. He was around 3,000 meters of high-speed running every week, which is astronomically high. But for him, that was the norm. So at the end of the week when he had the crazy games, he went maybe 20 percent above that norm. But if his norm had been lower, maybe that might have been a problem. It's not saying he would have had a problem, but I tend to think the body likes consistency in terms of training loads. Those sorts of things – big jumps or big drops – I don't think the body deals well with that.
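The kind of longitudinal check Walsh describes – comparing a week's high-speed running against a player's own chronic norm, and worrying about big jumps and big drops alike – could be sketched like this. The 20 percent tolerance mirrors his example; the function name and structure are illustrative assumptions, not an actual club system.

```python
# Illustrative sketch: flag a week's high-speed-running (HSR) distance
# against a player's individual chronic norm. The 20% tolerance echoes
# the interview example; it is an assumption, not a validated threshold.

def load_flag(weekly_hsr_m, norm_hsr_m, tolerance=0.20):
    """Compare weekly HSR meters to a player's chronic norm.
    Flags both spikes and drops, since the interview suggests the
    body dislikes big jumps and big drops equally."""
    deviation = (weekly_hsr_m - norm_hsr_m) / norm_hsr_m
    if abs(deviation) <= tolerance:
        return "within norm"
    return "spike" if deviation > 0 else "drop"

# The player in the example: a ~3,000 m weekly norm.
print(load_flag(3100, 3000))  # → "within norm"
print(load_flag(3700, 3000))  # → "spike"
print(load_flag(2200, 3000))  # → "drop"
```

The point of individualizing the norm, rather than using a squad-wide cutoff, is exactly Walsh's: 3,600 m might be a spike for one player and routine for another.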
On Twitter, you asked a poll question: How many performance practitioners clearly understand how their role is judged via clear objectives or KPIs by the club or manager? About a quarter answered that they have clear ideas on KPIs, while almost three quarters answered no. Were you surprised by this?
For me, personally, this is a huge, huge issue. How we judge the performance practitioner is clear: We judge them on how many injuries occur every season. And then if it's higher than the previous season, the question gets asked: Why? But if we don't get to make the choices, or if the manager doesn't listen to us and doesn't take our advice on board, can we be judged on that? And if we can't be judged on that, what are we judged on? In-game metrics such as distance covered? Well, that's highly dependent, again, on the physical profiles of the players you've recruited – their game models. And also tactics. If you're a man-to-man high-press team, those numbers are going to be high. If you're a low-block counterattack team, they're probably going to be a little bit lower.
So unless there's clear autonomy on roles and responsibilities, that practitioner cannot be judged. The best analogy I have is that we work with a nutritionist, and he gives advice to the cook. But the nutritionist doesn’t get to go in the kitchen and cook the food. It's a real underlying problem.
In an episode of the Boot Room Podcast, you discuss how the demands on modern footballers have become too much. Between training load, meals, sleep – all these variables – how does a performance practitioner make the necessary determinations to, for instance, back off training?
First of all, I don't think it's ever really one person's decision. You often get a group of people: the manager or the coaching staff, who are probably very keen for the player to play or to train. You then get a physiotherapist, who is probably a little bit more risk-averse, because they're the one who's going to have to deal with the injury. They have to answer to the board of directors, who may ask, "Why do we have so many injuries?" So they have to protect themselves in their job. To me, the performance coach sits in the middle to balance the risk versus the reward.
For example, is it a Cup final that weekend? Is it the last roll of the dice for the manager? Sometimes you get situations where you might have three strikers, and one is suspended and another is injured. When you have fewer numbers in one position, you probably take fewer risks. Again, there are lots of pictures to add in terms of where their data is at, what their norms are over a period of time, and where their aerobic conditioning is. Some players travel further than others. Some might get up at 6 in the morning to drive two hours because they don't want to move to the club location. Maybe they're chronically bad sleepers. There's also understanding that there might be a bit of a cognitive drain in terms of the game plan the manager has asked a player to take on. Others can be affected by high emotional drains. I had one player who, in three seasons, pulled up with only two minor soft-tissue injuries. Both happened in games against his previous club. It's never black and white. So you take all that human stuff and you take the data and try to see where things are.