Jared Powell:
Today's guest is Connor Gleadhill. Connor is a PhD candidate exploring causal inference, and is someone who speaks fluently about complex topics; it must be the smooth Irish accent. Causal inference is basically the pursuit of estimating how and why things happen. Do we determine causation strictly through the randomized controlled trial, or can we use observational evidence too? Causation is famously a dirty word in modern science, unless it is explicitly explored via the randomized controlled trial. However, Connor thinks this thought process is elitist and that other research methods can be used too, with some caveats, of course. Connor and I have many shared interests, including weird fascinations with quantum physics, complexity and consciousness science, so this was a conversation I thoroughly enjoyed, and I hope you do too. This show is sponsored by The Complete Shoulder online course, brought to you by yours truly. The Complete Shoulder online course will help you confidently treat people with shoulder pain while still acknowledging the complexity and uncertainty that is inherent to musculoskeletal healthcare, in over 16 hours of training delivered in an engaging, self-paced online format.
Jared Powell:
You'll learn everything you need to know to feel more at ease treating people with shoulder pain. Head to www.shoulderphysio.com for more information. One more announcement: new podcast episodes will now be released monthly instead of weekly. Whilst I thoroughly enjoy the podcast and conversing with guests on the show, the workload of releasing a weekly show is simply too much given my commitments to research and clinical practice, not to mention the impact on family time and golf, which is in desperate need of attention. This will in no way diminish my passion for bringing you the most meaningful conversations in the musculoskeletal sphere. As always, thanks for your continued support. Without any further delay, I bring to you my conversation with Connor Gleadhill. Okay, Connor Gleadhill, how are you? Thank you for coming on and having a conversation with me.
Connor Gleadhill:
Oh, thank you so much, Jared mate, for having me along. I'm super excited. Yeah, should be good.
Jared Powell:
Yeah, we've been talking on and off in the social media world for a couple of years, probably. And I really love some of the work that you've put out into the Instagram universe or Twitter universe or the internet in general. So I've admired some of your work, and we've been trying to catch up for a while now, so I'm glad we've finally made it happen. So could you give people a brief introduction to you and your history? Who are you? What does a week look like?
Connor Gleadhill:
We were always going to get to that, weren't we, Jared. So who is Connor? I am a researcher and strength coach and a physiotherapist, and I think that order is probably appropriate at the moment. Where I've gone and how I've gotten here: like most physiotherapists, I started off getting a lot of injuries as a kid growing up playing rugby union. So I'm Irish, and there are other codes in Ireland, there's football and the like, but we played a lot of rugby union. So, a lot of injuries, and I've got a very medical family, so I thought physiotherapy was a very good fit. I was rehabbed a lot as a kid as well, and, you know, that's maybe a strength of physiotherapy, that physios can have a real shaping role in kids' lives.
Connor Gleadhill:
So that story is pretty common. After that I did uni in Australia, so I moved out here for uni, and I worked publicly, and then I ran my own business, and then I ran a program for the New South Wales Police Force. I can say the name: it's a large law enforcement agency where we had a really awesome team of strength coaches and psychologists and dieticians, and it was a high performance unit for frontline officers. So that was really cool. After that finished, I ended up chasing my interests. I've always had roles in research throughout my clinical career, and so I delved into research a little bit more full time. I was a research clinician for a while, I ran my own clinic, and I've recently made the, people might say, silly decision to pursue a research career full time. So that's what I've done this year, and I'm eight months into a full-time PhD scholarship with a few amazing researchers: Dr Hopin Lee, who's in Oxford; Chris Williams, who's here with me in Newcastle; and Steve Kamper in Sydney. So a really fantastic team, and I'm very lucky to be where I'm at at the moment. I do a little bit of strength coaching, maybe half a day here and there, but I'm predominantly a researcher and strength coach at the moment.
Jared Powell:
That's great. That's quite a cool evolution there. Not a lot of cause and effect, actually, in that evolution; it would be hard to predict that you'd end up in a full-time research role after running a high performance unit in the police force. Mate, what's going on?
Connor Gleadhill:
Yeah, that's a good point. We can start to get into a discussion around prediction and causation there, Jared. That might be a good segue.
Jared Powell:
Interesting. You mentioned Steve Kamper. I'm talking to him next month, actually, in a similar format to this, so this is hopefully going to be a great build-up to that. He's just published some fantastic viewpoint, editorial-type articles in JOSPT on some basic statistics, which all physiotherapists hate and can't stand, but he really makes it quite accessible, which is a credit to him. So is he one of your supervisors?
Connor Gleadhill:
Yes. So Steve's Evidence in Practice series, I think, is the go-to, maybe entry-level, resource for clinicians who really want to know a bit more about research. And yeah, bang on, absolutely, he can make really complex topics very clear and simple. And I think that's what's led me into this with Steve and the supervisory group. I've just finished discussing this with them: we all think about the same ideas, and that's what I get from research that is a little bit different. It's the cerebral challenge of people that think quite similarly around the same broad ideas, and you all challenge each other to, I guess, improve and get better at thinking about those ideas. So embedding what's good about research culture into practice, I think, is the ultimate aim and goal, and a large part of my PhD is around that. So, you know, it's exciting.
Jared Powell:
Okay. Before we get into the dense, intellectual, cerebral topics, Connor, what book are you reading right now, or what TV series are you watching? It can be the trashiest TV show of all time, I don't mind, or it could be a dense book. So what are you gonna go for?
Connor Gleadhill:
Probably, I'm gonna go exactly that. So, to start with a trashy TV show: I'm a sucker for food shows. I almost exclusively watch food shows on, you know, whatever it is. Somebody Feed Phil is my new favorite. Yeah, it's really corny, but
Jared Powell:
It's so corny.
Connor Gleadhill:
It's so corny, but in a nice way. And it's real food porn, you know, you have fantastic dishes going on. So yeah, that's the trashy show. The book is dense, and I just finished it. And actually we were talking about complexity before, Jared. It's called Complexity: The Emerging Science at the Edge of Order and Chaos, by Mitchell Waldrop. Really
Jared Powell:
Good. Connor, I'm listening to that right now. That's the book I'm listening to
Connor Gleadhill:
You'd be listening to the same kind of book, yeah. And
Jared Powell:
Reading, yeah. So you finished it. Okay. I'm loving it so far; it's fantastic. So, without giving too much away, can you give a brief overview for the lay listener of what it is?
Connor Gleadhill:
Yeah, great. So it centers around the establishment of the Santa Fe Institute. I was actually referred this book by a colleague because we're doing the same kind of thing: part of my PhD is establishing a network of physiotherapists that come from different areas, and we're aiming to really generate clinically relevant research. So that's one part of it, the Santa Fe Institute, but largely it's talking about the science of complexity and how it relates to different fields, and how those fields are all interlinked. So economics, you know, used to be this really simple, clear, like I say, Newtonian field. We used to view the world in this really crystal clear way where cause and effect was very clear, you know, this happened and then that happened, and there were no bounds of uncertainty around things.
Connor Gleadhill:
So that was kind of the way that economics was viewed and physics was viewed, and physics has just moved on a lot quicker than other fields. Economics is kind of still stuck in this, they call it neoclassical, way of viewing the economic world. And so the book is watching these fields unfold and evolve. A little bit more recently we view economics and economic systems as these complex systems that emerge, and what I guess that means is there are various factors that contribute to a novel outcome that you probably couldn't accurately predict from previous cause and effect relationships. And so what that is is this tension between chaos, so the natural world is very chaotic, we know that from quantum physics, and the natural predilection for systems to want to self-order. It's getting really deep. So basically that's what the book is about, and they go through economics and a bit of neurobiology, don't they, Jared? Yep. So yeah, it's fantastic, it's a good book. I recommend it. For pain, it has big links to our field.
Jared Powell:
That's exactly right. And I think the applications, not just to pain, but to humans and human systems and behaviors, are everywhere to be found. I went down the rabbit hole of quantum physics a couple of years ago, and I'm still down it. And you mentioned a Newtonian view of the world; quantum physics obviously shatters that entirely. And this is why causal inference, which is what we're gonna talk about, and cause and effect... I've got some really interesting questions for you on whether we can ever actually have a strong causal reason for something occurring, or is it just an estimate? Based on quantum mechanics, is it just a statistical estimation that we make based on a number of different variables and what have you? So what you're saying really rings home to me. And I think if we look at humans as a complex biological system that is always teetering on the edge of chaos, the reason why we come back to this Newtonian view of the world, which I'm sure you're gonna talk about, is because it's coherent.
Jared Powell:
It makes sense, right? We drop a ball and it drops to the ground pretty consistently every single time. And we love to have causes and reasons for things, because without that the world is entirely uncertain and a very hard place to navigate and live in. So that sets us up beautifully, and that wasn't planned; we didn't even talk about that before being on air. We're reading the same book, it has a lot of applications to physiotherapy and to pain, but it also has a lot of applications to your PhD, which concerns causal inference, or the science of causal inference. So do you mind giving me, and the wider audience as well, a bit of an outline of what causal inference is, just in a minute or two if you can, and try to make it as digestible as you can?
Connor Gleadhill:
Yeah, great. So causal inference, you know, it sounds big and science-y and complex, but I think the first really simple way of outlining it, and a really important preface, is that it's a way of thinking that you need to adopt to understand, and I think you hit the nail on the head there, Jared, to understand the world in a little bit more appropriate detail, and in a little bit more accurate way of understanding how research unfolds, but then also how events unfold. So, a way of thinking. And I think there are a few important reasons why it is important. Just as a bit of background, I think we are all really good at cause and effect as human beings; if we weren't, we wouldn't get very far.
Connor Gleadhill:
So we intuitively know, and are trying to find, links between cause and effect throughout our whole lives. And you just mentioned the ball-dropping scenario. If we didn't have this way of thinking as human beings, we wouldn't get very far. Someone who doesn't know that a weight falls when it is dropped, you know, a baby who doesn't know that, might have a weight drop on their hand and other sorts of horrible things happen to them; they may not get very far, because they may not react and behave appropriately to those cause and effect relationships. The problem comes when, and I think this is getting into a second reason why causal inference is important, causal inference is really important as a way of thinking to think through what's called spurious association.
Connor Gleadhill:
So we were just talking, Jared, about the example of ice creams and shark attacks. If you plot ice creams and shark attacks on a graph, they rise in the same way and they fall in the same way, and without thinking clearly you may leap to the conclusion that increasing consumption of ice creams causes increasing shark attacks. But when you think about this in a clearer way, that's obviously not the case. That's a spurious association. And I think this is the trick with causal inference: making the thinking that goes into cause and effect very explicit, and then clarifying your thinking so as to avoid these spurious associations. I think the other reason why causal inference is really important is because, again, we do it so much as human beings, and a lot of the reporting on research and science reflects that. We probably just gave a little bit of an example off air, Jared, about where causal inference came from as a science.
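The ice cream and shark attack example can be sketched in a few lines of simulation. This is my own toy illustration with invented numbers, not anything from the episode: a hidden common cause (temperature) drives both series, so they look strongly correlated overall, yet the association largely vanishes once temperature is held roughly constant.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n_days = 1000
# Hot weather (the hidden common cause) drives both ice cream sales
# and the number of swimmers in the water, hence shark attacks.
temperature = [random.gauss(25, 5) for _ in range(n_days)]
ice_creams = [3 * t + random.gauss(0, 3) for t in temperature]
shark_attacks = [1 * t + random.gauss(0, 3) for t in temperature]

# Marginally, the two series look strongly related...
marginal = corr(ice_creams, shark_attacks)

# ...but within a narrow temperature band (holding the common cause
# roughly constant) the association largely disappears.
band = [(i, s) for t, i, s in zip(temperature, ice_creams, shark_attacks)
        if 24 <= t <= 26]
within_band = corr([i for i, _ in band], [s for _, s in band])

print(f"overall r = {marginal:.2f}, r at 24-26 degrees = {within_band:.2f}")
```

The specific coefficients are arbitrary; the point is only that conditioning on the common cause removes the apparent ice cream to shark attack relationship.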
Connor Gleadhill:
And it came about in reaction to this really simple way of viewing research and the science that's out there, where we have a really cut and dry way, and I think it's through our training as clinicians in evidence based practice, of saying that you can have cause or you can't have cause. And again, this is really Newtonian, right? That's pretty outdated. But importantly, even people with the training, so researchers and clinicians with evidence based practice training, can really leap to strong conclusions about cause, or the lack of cause, in things we read, and then this might filter right through to our patients. And this is a big issue in the media, you know, so the media might report on certain studies and either conflate things or make their causal claims a little bit stronger than they could be, or even the other way, where the media may not report on cause where some kind of stronger causal inference is possible.
Connor Gleadhill:
So where there isn't the training, and even where there is the training of evidence based practice and science, we have this really Newtonian view of cause and effect. Causal inference is about understanding that there is a science in cause and effect, and that this view can be a lot more complex and appropriate to the natural world than what is currently happening in science, the evidence base, and then clinical life. So that's it. That wasn't a minute, but I hope it gives a bit of an outline.
Jared Powell:
Yeah, that's great. So causal inference could perhaps be the study of cause and effect, and the reason why causal inference is important is that it maybe negates or minimizes these spurious associations, which can crop up without rigorous or thorough scientific examination. Is that roughly accurate?
Connor Gleadhill:
Yeah, absolutely. And then, going to the other side of this, understanding cause and effect is not just about spurious association, because I think what that might lead to is clinicians thinking that you can only have cause and effect in certain scenarios, which is probably a hangover of this decree in the eighties by statisticians saying that randomization is the only way to get at a causal relationship. That's probably going too far the other way. It's about understanding that there is another important way of thinking, and a gray area between cause and effect, even alongside our evidence based practice, top-of-the-pyramid type of studies. Yeah.
Jared Powell:
So, okay, let's get into that. If we're trying to determine causation, let's say we're gonna talk about the shoulder. Let's say we're trying to determine whether nonsurgical treatment for rotator cuff related shoulder pain is superior to surgical treatment of rotator cuff related shoulder pain, and let's just say it's a tendon repair. So we've got two groups: physiotherapy, or nonsurgical, and then surgical. If we're trying to determine which is more effective, then we would probably use a randomized controlled trial to do that, right? Let's say it's perhaps a three-month intervention of physiotherapy versus surgery, and they're followed up for one year. Could we interpret, if everything is done well within that randomized controlled trial, that that experiment will spit out which intervention is more effective, which could give rise to what causes the better effects? So I guess what I'm trying to say, in a bit of a discombobulated way, is: is a randomized controlled trial still the gold standard, or is it the only way of determining proper cause and effect in physiotherapy land?
Connor Gleadhill:
Yeah, this is brilliant, and absolutely one of the most important questions for the current clinical environment. So the simple answer is that randomized controlled trials are still the most well controlled and well established way to create more confidence in your cause and effect relationship. The cause there is the exposure to whatever you're trying to test, so in your example, whatever intervention you're interested in testing against a control group. So yes, the RCT is absolutely still the best way to do that with more confidence and with more certainty. And that's all we're doing with health research: we're trying to create more certainty, more confidence, around the estimate that we're getting from some exposure to some intervention.
Connor Gleadhill:
The important point is that it isn't the only way to do it. It is the best way that we currently have, but it is not the only way. There are other ways to establish a cause and effect relationship. Again, what you're trying to find out is whether your exposure, the intervention, has some kind of effect, and there are many other ways to do that. A cohort study is another nice, accessible example. In a cohort study you're following a group of people for a time period, and that has the benefit that you are able to measure them before and then after. And obviously what you want to do is examine this group that you've determined has the appropriate exposure.
Connor Gleadhill:
So the intervention whose effect you're interested in understanding and estimating is the exposure, and what you need to do is create an alternate scenario where they don't have that exposure. So maybe you select a control group that doesn't have that exposure, and this is a controlled cohort study. This is a study where you haven't randomized people, but you can still determine, to some level of confidence, to some level of certainty, what the effect of that exposure, your intervention, has been, as long as you have done an appropriate job of ensuring your control group is at least adequately matched to the group whose intervention you're trying to understand, your exposure group. There are important ways to be clear in how you're asking that question, and also, importantly, there are ways to determine how you statistically analyze the effect that you're measuring at the end of all of this exposure. So the short answer is that the RCT is still the best way; the long answer is that it's not the only way to determine cause and effect. But again, this comes into understanding more about causal inference and the science there, and how we all do it, how we do it as researchers and how we do it as clinicians.
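A small simulation can illustrate why matching the control group matters in a non-randomized cohort. This is my own hedged toy example (made-up numbers, not Connor's work): younger people both take up the exposure more often and improve more anyway, so age confounds the comparison. A naive contrast overestimates the effect, while even a crude age-matched comparison gets much closer to the true value.

```python
import random
import statistics

random.seed(7)
TRUE_EFFECT = 10.0  # improvement (in points) truly caused by the exposure

# Hypothetical non-randomized cohort: age confounds the comparison,
# because younger people choose the exposure more often AND recover
# better regardless of it.
people = []
for _ in range(2000):
    age = random.uniform(20, 70)
    exposed = random.random() < (0.8 if age < 45 else 0.2)
    outcome = 50 - 0.5 * age + (TRUE_EFFECT if exposed else 0) + random.gauss(0, 3)
    people.append((age, exposed, outcome))

exp_grp = [p for p in people if p[1]]
ctl_grp = [p for p in people if not p[1]]

# Naive contrast: biased upward, since the exposed group is younger.
naive = (statistics.fmean(y for _, _, y in exp_grp)
         - statistics.fmean(y for _, _, y in ctl_grp))

# Crude 1:1 matching: pair each exposed person with an unused control
# within 2 years of age. (Real studies use far more careful methods.)
used = set()
pair_diffs = []
for age, _, y in exp_grp:
    candidates = [(abs(age - a), i, cy)
                  for i, (a, e, cy) in enumerate(people)
                  if not e and i not in used and abs(age - a) <= 2]
    if candidates:
        _, i, cy = min(candidates)
        used.add(i)
        pair_diffs.append(y - cy)
matched = statistics.fmean(pair_diffs)

print(f"naive = {naive:.1f}, matched = {matched:.1f}, truth = {TRUE_EFFECT}")
```

The greedy caliper matching here is deliberately simplistic; it only shows the direction of the idea, that comparing like with like reduces confounding bias when you cannot randomize.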
Jared Powell:
Wonderful. So you mentioned a controlled cohort trial, or I guess we'll call it a trial. Does that still come under the banner of an observational study? Or, because you're doing an intervention, does it become more of an interventional type study? Because classically it's divided into interventional and observational: the interventional ones you do with an RCT or what have you, and that's establishing causation, and then in observational research you just observe a cohort of people, you don't really intervene, but you look for associations, and you could theorize, or perhaps hypothesize, whether there are causative events happening. Is that a fair distinction? And within classic observational research, apart from what you've just mentioned in terms of the controlled cohort study, are there any other observational ways of establishing causation, or is that purely correlational or...
Connor Gleadhill:
Associational? This is such a good question. So I'm gonna put a caveat there, Jared, that this is actually some work that is currently part of my PhD, so I can't give too much away about where we have results or anything like this. It's gonna be great, I guess, to follow on with this. But to answer your question: observational research and interventional research, which I might call experimental research, so that's maybe some semantics. Observational research and causal inference together are actually really common in different fields. It's just that with our evidence based practice and clinical teaching, we kind of put observational research into a completely non-causal realm. You can definitely still determine cause to some level of confidence, and again, what we're talking about there is the confidence level in your cause and effect relationships; it's just stronger with your experimental designs.
Connor Gleadhill:
Again, at the top of the pyramid is your randomized controlled trial, so that's where you can be stronger in your inference of cause and effect. But causal inference from observational research is still possible, and it's done very commonly in epidemiology. So, you know, large public health issues where you're not able to ethically randomize the population, but you'll still, importantly, be asking and looking at a causal question and a causal issue. For example, if you want to understand exposure to lead in water and what it does, and we've known for hundreds of years it's not good, right? You can't ethically randomize some people to have lead in their water and some people not; you just can't do it. But that was epidemiology back in the 1800s; they were figuring this stuff out.
Connor Gleadhill:
And that was through observing: an observational research study, observing people over time and figuring out what the effect of some kind of exposure is. And again, in the clinical world we think less in terms of these big population-level exposures; we think about doing things to people, about interventions. But our interventions are still exposures, if that makes sense. So, going back to your question: in observational research there are plenty of ways to get at a causal relationship. Through a cohort study, as we discussed, but even with cross-sectional studies you can still try to get at some kind of causal relationship, at cause and effect, through simply being clear in your questions, in what you're trying to ask, and then in how you do it, how you statistically work with your data to get at that question.
Connor Gleadhill:
So that's probably number one. And then, going to your experimental, your interventional research, I think we again have this really, it's quite parochial actually, the way we view this, but it's a really elitist view that you can only do an RCT to determine cause and effect. But randomizing is just a research method. It's a tool you put into your study simply to try to make your causal understanding of the phenomena that you're looking at stronger. If you view randomization that way, you can set up experimental trials in all kinds of ways that mimic a randomized controlled trial; you just don't randomize people to one thing or the other. The cohort study without the randomizing is simply a prospective controlled cohort; put a randomizer in there and you've simply jumbled up the exposure and made your causal inference a little bit stronger.
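That "jumbling up the exposure" can also be sketched with toy numbers (again my own hedged illustration, not from the episode): if a cohort of the same hypothetical kind has its exposure assigned by coin flip, age can no longer track the exposure, and the plain group contrast recovers the true effect.

```python
import random
import statistics

random.seed(3)
TRUE_EFFECT = 10.0

# Hypothetical cohort where age strongly affects the outcome, but the
# exposure is now assigned by coin flip, ignoring age entirely.
rows = []
for _ in range(2000):
    age = random.uniform(20, 70)
    exposed = random.random() < 0.5  # the "randomizer"
    outcome = 50 - 0.5 * age + (TRUE_EFFECT if exposed else 0) + random.gauss(0, 3)
    rows.append((age, exposed, outcome))

age_gap = (statistics.fmean(a for a, e, _ in rows if e)
           - statistics.fmean(a for a, e, _ in rows if not e))
effect = (statistics.fmean(y for _, e, y in rows if e)
          - statistics.fmean(y for _, e, y in rows if not e))

# Randomization balances age (and every other confounder, measured or
# unmeasured) across arms, so the simple contrast estimates the causal
# effect without any matching or adjustment.
print(f"age gap between arms = {age_gap:.1f} years, "
      f"estimated effect = {effect:.1f} (truth = {TRUE_EFFECT})")
```

The design point is that randomization buys its strength by breaking every exposure-confounder link at once, which matching or adjustment can only do for the confounders you have thought of and measured.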
Jared Powell:
Beautiful. So if we look at it like a confidence spectrum or confidence continuum, you have your RCT as the design we're most confident in for establishing a cause and effect association, and perhaps that gets a little weaker as we go along into the observational type studies. But we can certainly speculate with observational research as to whether there may be some causal associations, if indeed the research question is set up well and good statistics are done.
Connor Gleadhill:
Yeah, completely. I love that way of laying it out. That's great. You should put that...
Jared Powell:
In your paper. It's alright. Just...
Connor Gleadhill:
Well, so in the causal inference world, and that's probably a good point, we talk about strong causal inference and weak causal inference. That's why I explain causal inference as a way of thinking: if you start to talk about cause and effect in that way, it takes away the burden of having to treat associations as something dirty. It's just so elitist, this way of thinking about research methods and design and cause and effect that we currently have. So if you can start to use different language, it really helps: you can have an inference of cause and effect that's very strong in an RCT, and an inference of cause and effect that's very weak in a cross-sectional study. So, yeah.
Jared Powell:
That's interesting. So maybe if you get a result coming up in a basic observational type paper, and it leads to what looks like an association or a correlation, and you hypothesize as to whether it's causative, is that a good way of then leading into an RCT, to test that hypothesis in a stronger way? Does that happen often at all?
Connor Gleadhill:
I really like this question. And again, part of my PhD is dealing with this, so I can't give too many specifics, but I do think that's a common-sense way to progress, right? You're maybe trying to really just descriptively understand something first. And again, if you're talking to Steve next, his Evidence in Practice series deals with the three general types of questions that you want to answer in science and in understanding the world. A descriptive question is maybe the first broad level of trying to understand something, and this is largely what we're doing with cross-sectional studies and observational studies. But I think the key is then not to be too black and white and say that a descriptive, observational study, a cross-sectional study, can never get at cause and effect, because if it's set up well enough, it actually may be, for certain issues...
Connor Gleadhill:
It may be the only way to understand cause and effect. We have this idea that we can't use the word cause with certain questions, and that's what Miguel Hernán, who's the real proponent of causal inference, addresses. I would recommend listeners follow his work and read all of his papers, but I guess the easiest, most accessible is the C-word paper, from 2018. So, listeners, go and read that. The C-word is a nice euphemism there and a nice title, because we tend to think about cause as a really dirty word in certain research methods and certain areas. But to go back to your question: it's a sensible way to progress, to describe something first. And then, once you have described something, make sure you don't go too far and read too much into your data, because I think that's an issue: you might have literally just described something.
Connor Gleadhill:
So you have a picture of the world, and you don't want to start drawing links where there are none. You don't start talking about cause and effect where you're not sure where that causal relationship has come from, because then the next step is probably to design a causal study and start thinking in a causal way and asking a causal question. But I think doing something at a really broad level to describe it is an important way to understand the world, and it is currently done; we do see that. The issue I'm talking about is reading too much into that as the researchers, the study's author team: talking about a strong causal link where you haven't designed the study for it, where you haven't asked a causal question, you're just trying to describe. Does that make sense?
Jared Powell:
Yeah, I love that. Sometimes you're just not able to do an experimental study on things, as you mentioned a moment ago; there may be some ethical issues, or it's just too hard to do. And that's okay. So then you should just focus on getting the best observational data you can. And it's fascinating, because you sent me that C-word paper, which is a great title, by Hernán in 2018. It made me think: that's so true. Whenever I see an observational or cross-sectional paper come out, the first thing they say is, careful, correlation doesn't equal causation. It's like everybody's afraid of saying the word causation in this day and age, out of fear of being shot down on social media or by colleagues or whatever. So I see, by your line of inquiry, that you're not a fan of that, and you think that in fact we should be more confident in speculating on cause-and-effect relationships within cross-sectional research, if indeed it's done well. Would that be a fair synopsis?
Connor Gleadhill:
Absolutely fair, well said, Jared. I think the issue here is that this idea of being overly cautious with the word cause, and with what you think might be causing something, renders so much of the world inaccessible to scientific inquiry. Whereas if we are all a little bit happier to be clearer about what question we're asking and how we're doing it, and to be more open with that, then what we're left with is a world where we can start to understand things better. Whereas, as you say, currently we're in a situation where no one can talk about what they speculate. So I think the idea with causal inference is: state your assumptions, and state what you think the relationship is that you're dealing with.
Connor Gleadhill:
And that's a large part of it. For many of the other relationships that feed into, say, the primary exposure we're looking at, we have a good idea about how those relationships work, and that's based off data. We just want to know about the one relationship we're most interested in. But importantly, when that's not the case, when there isn't data, we're still left to simply make an educated guess. And that happens in all other areas of the world: we make educated guesses on things based in logic and common sense. But we seem to be in this weird space in health, and in science more broadly, where we can't say that, because we can't use the word cause. So yeah, absolutely, the short answer is yes.
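Stating your assumptions about the other relationships in play is what makes adjustment possible. A minimal Python sketch of the trap being described here, using purely simulated data (the variable names, effect sizes, and the zero true effect are all invented for illustration, not drawn from any study): a confounder drives both the exposure and the outcome, so the naive association is strong even though the true causal effect is zero, and conditioning on the confounder recovers the null.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A confounder Z drives both the "exposure" X and the "outcome" Y.
# By construction, the true causal effect of X on Y is zero.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 3.0 * z + rng.normal(size=n)

def slope_of_first(outcome, *predictors):
    """Least-squares coefficient of the first predictor, given the rest."""
    design = np.column_stack([np.ones(len(outcome)), *predictors])
    beta, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return beta[1]

naive = slope_of_first(y, x)        # ignores Z: spurious association
adjusted = slope_of_first(y, x, z)  # conditions on Z: recovers the null

print(f"naive slope:    {naive:+.2f}")     # roughly +1.20, pure confounding
print(f"adjusted slope: {adjusted:+.2f}")  # roughly  0.00, the true effect
```

The point is not the arithmetic but the assumption: you only know that Z belongs in the adjustment set because you stated, up front, how you believe the variables relate.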
Jared Powell:
Yeah. We seem to be in a funny place, especially in healthcare. Physiotherapy has been trending towards evidence-based medicine for a few decades now, which is a good thing; I think it's accelerated our profession in an untold manner. It's been fantastic. But it gets to a point where there has to be some logic, some intuition, some nuance to it, because it's very hard to get empirical evidence on human experience. In fact, those two are almost polar opposites in how you set them out. Empiricism is third-person science, which is, you know, thanks to Galileo some 400 years ago. And first-person experience is phenomenology, right? That's the subjective; you can't get access to the person directly.
Jared Powell:
We have to rely on subjective information. So in trying to marry empiricism and phenomenology together, we're in that tension right now. We're in that struggle, that seesaw. I'm such an advocate of empiricism and science, but recently I'm starting to see its shortcomings, and I'm starting to drift to the other side a little bit. But then I see gurus and quacks on that side, and I drift back to, well, we need a randomized controlled trial. So I totally understand this push-pull, and I don't think there's an easy answer. This may come back to the complexity conversation we opened with. Do you have anything to add to that? Is there a way to marry these two together, so we can understand both sides of the divide equally and not be drawn to these binary, sensationalist assumptions where it has to be one or the other? Have you thought about this, Connor?
Connor Gleadhill:
All the time. It's such a good conversation. I think for your listeners it's probably going to end up sounding really philosophical, because, I agree, it's so tough, and this is a realm where there isn't a lot of data, so you can't talk about it with too much confidence. But a hundred percent. I think the risk is that we get stuck separating the two, particularly when we're talking about pain. I'm not going to name names, and I think most of your listeners would be aware of the pain-as-sensation-versus-perception debate, which was quite prominent, at least in certain circles. The issue there is that you are actively trying to separate phenomenology and the human experience from scientific inquiry.
Connor Gleadhill:
This may be rose-tinted for me, because I'm romantically involved with science; I do really enjoy it. There are limitations to science, absolutely, but a large part of that is that the way we've practised science in the past has been a certain way. And we're talking about causal inference on this call, and that's come about through certain failures in science and scientific inquiry. When we're talking about science, it's literally just a way to understand the world. And I think there are aspects of phenomenology that science will be able to measure; it's just that we don't have those methods yet. So I think where I'm getting to...

Jared Powell:
You think we will?
Connor Gleadhill:
Yeah. I think where I'm getting to is that consciousness, we're going there, is the ultimate expression of something that's currently out of the realm of scientific inquiry. I don't think we're going to be having this conversation, I don't think we're going to be thinking in this way, in 500 years. Even in 200 years. Science has moved on so much in the last 200 years already that you can imagine what we will understand in 200 more. So I don't think the two are mutually exclusive. The risk, definitely in pain and in the pain world, is that we continue to separate the two. I think, as one moves closer to the other, we're going to have more certainty and more accurate ways of at least measuring some aspects of phenomenology, in the short term.
Connor Gleadhill:
And in the very long term, in 200 years, I'm not sure where we'll be, but I don't think we're having this conversation.

Jared Powell:
Fascinating. That's a strong line, isn't it? It's a strong line. I like your confidence.

Connor Gleadhill:
There's confidence there. So let me temper that statement: it's my opinion that the two aren't mutually exclusive. I think that being able to measure and understand things on an objective scale, there's nothing about consciousness that will preclude that, in my opinion, in time.
Jared Powell:
So, David Chalmers, who's an Australian, have you heard of David Chalmers? The fabulous consciousness researcher. I think he's from Adelaide; he's now over in New York, and he's a genius. He more or less coined the strong/weak emergence distinction, at least as it relates to consciousness. He thinks there's only one strongly emergent phenomenon in the world, and that is consciousness; everything else, like pain for example, is weakly emergent. Strong emergence means there is no way of explaining consciousness by looking at the brain, or at a neuron, or at any structure of our nervous system: you look at it and you don't think that consciousness would emerge from that. That's strong emergence, and weak emergence is a little bit different.
Jared Powell:
We can talk about that in a moment. On that definition, strong emergence kind of means that we need new physical laws, and this is probably what you're getting at, to actually account for consciousness. So we need another quantum revolution, or another Einstein, or perhaps that's quantum gravity, which they're working on; we need more physical laws to actually account for consciousness. And perhaps that's where you're going with your 200 years. Look, Einstein only published the special theory of relativity in 1905, so that's just over a hundred years ago. So as I'm talking, I think what you're saying is maybe not as absurd as I thought it was a moment ago. It's very interesting to think about: we're going to need new ways of understanding the world at a fundamental, granular level. We're talking about how we perceive the laws of nature as they are now. So it's very, very interesting to think about. Do you still want to stick to that, Connor?
Connor Gleadhill:
I'm sticking to it. And if I had my life again, life is long, I would have loved to be a physicist, purely for the reason that through physics we've been able to really understand the fundamental makeup of our existence, our consciousness. And yes, I do think quantum physics is around where it will prove to be very important for consciousness, and I think that's why there are a lot of quantum theorists doing consciousness research. So yeah, I'm sticking to it.
Jared Powell:
Good, good. I like a man with convictions, Connor. Great to hear. All right, we're going to have one final question, and who knows where this is going to go. Are you happy to accept that human beings, and the world in fact, are more quantum mechanical than Newtonian? Would you be happy with that assumption, or assertion, I should say?
Connor Gleadhill:
I'm confident with that at a high level.
Jared Powell:
Then, as it is right now, how can we ever understand pain, which is a weakly emergent phenomenon, with our current research and scientific methods, given that the very definition of quantum is that it's almost entirely uncertain? That's why we use wave functions: we can estimate where a particle is going to be, but we can never know for certain where it is. So if we extrapolate that to causation, can we ever say with certainty that some variable, some factor, causes an event? Even simply in shoulder research: strengthening versus a control group, and strengthening is shown to be superior, but is it because the person got stronger, or is it because of a thousand other intrinsic or inherent variables, or sociocultural variables, at play? So what are your thoughts? There's no real question there. What do you think about that? I know this is at the heart of your very existence, so I'm intrigued.
Connor Gleadhill:
Oh, Jared, I love this question. You went deep. So yes, you're right, I think about this a lot, and trying to explain this to people is really tough; you did a really good job there, mate, I'm very impressed. So maybe I'll try. I think what you're saying is: we want to understand cause and effect a lot of the time, and obviously with pain we try to reduce things to cause and effect a lot. But is that even possible, given the essentially quantum nature of the world? Again, I think when we're talking about aspects of existence like pain, where you're talking about a weakly emergent phenomenon, we will get more confident with how these things work.
Connor Gleadhill:
But that comes into, so I'll go back. I do think about what breaks all of these systems: it's when we go to the future, and I'm not talking about time travel. What we're talking about is trying to predict some kind of effect with some level of certainty, when you have complex systems with myriad factors all interacting in multiple different ways together. Does this make our prediction useless? Prediction is really tough at the best of times, and we are terrible at it as scientists, as researchers, like really, really bad. So I think our prediction is pretty terrible at the moment, but that doesn't mean it's going to be terrible in many years to come. One thing I might make clear for your listeners as well is that when we're talking about prediction, it doesn't mean you can intervene on the things that predict an outcome of pain.
Connor Gleadhill:
So, you know, you can't intervene on someone's age or previous pain, which have very strong predictive relationships with their future pain. You can't intervene on those. So we need to be clear when we're reading the literature, and Steve will expand on this more: when we're talking about prediction, that's not a causal issue; we're simply trying to predict something. But what you're talking about is different. So I think we need to get clear first on what we actually want to intervene on, and that shifts us towards...
Jared Powell:
So modifiable factors.
Connor Gleadhill:
Perfect, perfect: modifiable factors. We're intervening on those. I think our prediction for those will get better, but I don't think we'll ever know for certain; that's the nature of the world. We'll just get more confident as we understand things better. That's my answer. It's such a good question; I think about it pretty much all the time.
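The distinction drawn here, that a variable can predict an outcome strongly yet be useless as an intervention target, can be sketched with simulated data. Everything below is invented for illustration (the coefficients, the variable names, the mechanism); the point is only the contrast between correlating with a non-modifiable predictor and intervening on a modifiable one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def future_pain(previous_pain, exercise, noise):
    # The (simulated) causal mechanism; coefficients are made up.
    return 0.8 * previous_pain - 0.2 * exercise + noise

previous_pain = rng.normal(size=n)   # strong predictor, but not a lever
exercise = rng.normal(size=n)        # weaker predictor, but modifiable
noise = rng.normal(scale=0.5, size=n)

observed = future_pain(previous_pain, exercise, noise)

# Prediction: previous pain correlates strongly with the outcome...
r = np.corrcoef(previous_pain, observed)[0, 1]

# ...but the causal question is the counterfactual contrast under an
# intervention we can actually make: do(exercise = exercise + 1).
intervened = future_pain(previous_pain, exercise + 1.0, noise)
mean_drop = observed.mean() - intervened.mean()

print(f"correlation with previous pain: {r:.2f}")         # high, yet not actionable
print(f"mean drop under intervention:   {mean_drop:.2f}")  # 0.20 by construction
```

In real data the mechanism is unknown and the intervention effect has to be estimated, which is exactly why the causal question has to be stated before the analysis, not read off the strongest predictor afterwards.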
Jared Powell:
It's good, but it's impossible to answer at this point in time, in 2020. Maybe if we can upload our consciousness to Elon Musk's Neuralink, we might be able to have this discussion in 200 years. It makes me think: in physiotherapy, and this is a paper I've just published, or it's just been accepted and will come out soon, we just lurch or reel from one reductionist miracle treatment to another. We've gone from manual therapy to corrective exercise; now we're in strengthening; we've been through stretching and all of this, right? And we haven't learned our lesson. We keep reducing these complex emergent phenomena, such as pain, or even injury, and I think injury is probably emergent as well, to: it's because he was weak.
Jared Powell:
His external rotation strength was 80% of the other side, or hers, or whatever. We haven't learned our lesson, and we've been obsessed with strength and conditioning for a decade. Which has been great, because I think a really strong understanding of strength is vital, but to then extrapolate that to being the cause of everybody's musculoskeletal pain, and therefore the fix for it, I think is shortsighted. We've got to understand this complexity of pain, and we have to be at peace with it. At the moment we don't have solid prediction rules for what causes pain in the first place, or in fact what's going to fix it or manage it in the future. It's so individual, so variable; it's different from one continent to the next, probably one age group to the next, one occupation to the next, one socio-demographic to the next.
Jared Powell:
There are so many dimensions to it, and this is why I love the term: pain is a multidimensional experience. I don't think we can articulate one factor, at scale, that's causing this pain epidemic we supposedly have. So that's what I'm thinking, and why I love this quantum mechanical view of the world, because I think it does have applications to physio. And I kind of like what you're saying, that we may get there in the future; I hadn't really thought about that. Given how far we've come: humans have been around for, what, maybe a million years; the universe has been around for 14 billion. Look how far we've come in a hundred. So I totally accept that line of thought. Do you have any closing thoughts or musings or ramblings you'd like to get off your chest before you go and think about this over...
Connor Gleadhill:
I need 60,000 years,
Jared Powell:
60,000?
Connor Gleadhill:
It's a blink of an eye. So absolutely. And I think what you actually just did there, Jared, to bring it full circle, is to explain the importance of causal inference. Because causal inference is not talking about does X cause Y; it's talking about: what's the relationship of all of these factors together that then results in our outcome, Y? (The shorthand for the exposure is generally A.) So it's having a more complex view of these things. And I think the typical response, when you've got so much of this knowledge, this quantum appreciation of the world that I don't think a lot of physiotherapists would have, is to go: well, we can't understand or predict anything, so why bother? That's not the point. It's about getting a more complex view of what you're trying to predict. And I think that's causal inference.
Jared Powell:
Awesome. That's actually a beautiful finish; that's poetic. And I think we've done well to wrestle with some pretty complex topics, let's be honest. The topics are probably above our pay grade. We're in no way claiming to be philosophers or physicists; we're just physios with an interest in this stuff. We're both doing a PhD, so we've got an interest in research. I hate research, mate, I'll say it out loud right now, but I'm forcing myself to get into it. Anyway, we're just two blokes trying to have a chat and figure some stuff out. We may be wrong, but I think it's interesting to think about some of these epistemological kinds of things in the world.
Connor Gleadhill:
Yeah, absolutely. And that's a great way to say it. We are just a couple of physios; we're not experts in this, so seek out the experts. That's what we should all do, all the time, right?
Jared Powell:
Yeah, read some books. There are so many good books out there. The book we both alluded to was Complexity by Mitchell Waldrop; I'll link to it anyway. And go YouTube David Chalmers if you're interested in consciousness; he is a genius of a man, so I highly recommend you listen to him. He's got a paper on emergence as well, so read that. Anyway, Connor, thanks so much for joining me and having a conversation. Really nice to meet you and talk to you, and all the best of luck with yours too.
Connor Gleadhill:
Likewise. Thank you, mate. Thanks so much for having me on
Jared Powell:
Cheers, mate. Thank you for listening to this episode of the Shoulder Physio podcast with Connor Gleadhill. If you want more information about today's episode, check out our show notes at www.shoulderphysio.com. If you liked what you heard today, don't forget to follow and subscribe on your podcast player of choice and leave a rating or review; it really helps the show reach more people. Thanks for listening. I'll chat to you soon. The Shoulder Physio podcast would like to acknowledge that this episode was recorded on the lands of the Yuba people. I also acknowledge the traditional custodians of the lands on which each of you are living, learning and working every day. I pay my respects to Elders past, present and emerging, and celebrate the diversity of Aboriginal and Torres Strait Islander peoples and their ongoing cultures and connections to the lands and waters of Australia.