TRANSCRIPT
Deborah Borfitz:
Hello and welcome to the Scope of Things podcast, a no-nonsense look at the promise and problems of clinical research, based on a sweep of the latest news and emerging trends in the field and what I think is worthy of your 30 or so minutes of time. I'm Deborah Borfitz, senior science writer for Clinical Research News, which means I spend a lot of time with my ear to the ground on your behalf and a lot of hours every week speaking to top experts from around the world. Please consider making this your trusted go-to channel for staying current on things that matter, whether they give us hope or cause for pause. In another five or six minutes or so, I'll be talking to Bethany Kwan and Heather Smyth, who hail from the University of Colorado Anschutz Medical Campus, a leading contributor to the advancement of pragmatic clinical trials and to ensuring that research findings are relevant and impactful in real-world healthcare settings. But first, the latest news, including a trio of pragmatic clinical trials: one specific to lung cancer treatment, implementation of pharmacogenomic testing on a national scale, and an impressively efficient approach to comparing commonly used intravenous fluids. Plus improving access to gene therapy trials for a progressive heart condition, the landscape for Alzheimer's disease studies, clinical trials designed to predict the most effective therapy, and the creation of AI agents for clinical research. Today's guests have no doubt heard of the so-called Pragmatica-Lung trial that learned in just over two years that a treatment combining a monoclonal antibody, Cyramza, with an immunotherapy drug, Keytruda, did not significantly extend survival in advanced non-small cell lung cancer patients as had been seen with this regimen in a smaller phase 2 trial.
However, the speed at which Pragmatica-Lung opened, the ease of conducting the study, and its rapid enrollment of a highly representative population reportedly make it a paradigm-shifting model for the design and conduct of future large randomized studies, including both the pragmatic variety and those with FDA registrational intent. Notably, it was designed with relatively few eligibility restrictions, so the ultimate enrolled population looked much like the US population overall.
Deborah Borfitz:
In the UK, interim results of a pragmatic clinical trial suggest pharmacogenomic testing has substantial value in improving prescribing precision. The so-called PROGRESS trial of the National Health Service aims to implement pharmacogenomic-guided prescribing in routine clinical practice across the country by integrating genomic data into the multiple electronic health records used by primary care practices and hospitals. PROGRESS recruited patients from 20 sites following prescription of common medicines, specifically statins, opioids, antidepressants and proton pump inhibitors, and pharmacogenomic guidance was returned to their clinicians with a median turnaround time of seven days. Tellingly, 95% of patients had an actionable variant, and just over one in four participants had their prescription adjusted to achieve safer or more effective treatment. In yet another pragmatic-style trial, researchers in Canada have demonstrated a powerful, efficient approach for comparing different standard treatments. The trial compared two intravenous fluids, normal saline and Ringer's lactate, that have been commonly used for decades in hospitalized patients. Unlike traditional trials that randomly assign patients to receive one fluid or the other, this cluster randomized trial randomly assigned entire hospitals to use one fluid for three months, then switch to the other fluid. Since clinical data were downloaded directly from health administrative sources, no individual patient recruitment was required, allowing the team to quickly collect data from more than 43,000 patients in seven hospitals. The cost of enrolling a single patient was less than $10, versus more than $1,000 for a traditional trial.
Deborah Borfitz:
As part of its Get With the Guidelines program, the American Heart Association has launched an initiative that will improve education, outreach and access to clinical trials of gene editing therapies for transthyretin amyloid cardiomyopathy. This is a progressive and often underdiagnosed condition that can impair the heart's ability to pump blood, leading to heart failure, and it disproportionately affects older adults and certain racial and ethnic groups. The initiative will, among other things, activate a referral network of non-trial sites and develop tools that leverage clinical data to help identify potentially eligible participants. An annual review of clinical trials for Alzheimer's disease reports on 182 active studies worldwide assessing 138 drugs for patients at all stages of the disease continuum. Among the encouraging developments are significantly more Phase I trials (48, compared to 27 a year ago) and several drugs that look promising enough to warrant further study. Since early 2024, 56 new trials have begun, including 10 Phase III trials. A dozen of the late-stage trials conclude this year, a list that notably includes one studying the effectiveness of the diabetes medication semaglutide as a preventive agent.
Deborah Borfitz:
With a world-first study, researchers in Switzerland have paved the way for new clinical trials that, instead of testing individual drugs, predict the most effective therapy. Over four weeks, nine different molecular biological technologies were used to precisely measure the properties of melanoma tumors at the individual cell level to enable a precise treatment decision for patients with cancer, demonstrating how tumor profiling can be implemented in clinical practice. Individual treatment recommendations were derived from 43,000 data points per sample and, in 75% of cases, the treating specialists found the information helpful for the choice of therapy. And finally, NVIDIA and IQVIA have teamed up to build AI agents for clinical research tasks that notably include trial startup, target identification and clinical data review, exclusively for customers of IQVIA, a top-ranked contract research organization. The overall model, trained on IQVIA life sciences data, has so-called orchestrator AI agents acting as supervisors for specialized sub-agents, routing tasks such as speech-to-text transcription, clinical coding, structured data extraction and data summarization to other sub-agents. These pre-approval AI agents are expected to accelerate trial timelines, extract insights and reduce the data review process to possibly two weeks.
Deborah Borfitz:
It is now time to bring to the mic Bethany Kwan and Heather Smyth, a dynamic duo of PhDs from the University of Colorado Anschutz Medical Campus, to fill us in on the wonderful and ever-widening world of pragmatic clinical trials. Bethany is director of the Dissemination and Implementation Research Corps with the Colorado Clinical and Translational Sciences Institute, and Heather is a research associate with the Center for Innovative Design and Analysis in the Colorado School of Public Health. They are both affiliated with the University of Colorado's Adult and Child Center for Outcomes Research and Delivery Science, aka ACCORDS. Welcome to the show, you two. I can't wait to dig in on this very timely topic.
Bethany Kwan:
Wonderful, thank you so much, Deborah. Glad to be here.
Deborah Borfitz:
I do not want to assume that everyone listening to today's episode knows exactly what pragmatic clinical trials are, including how they differ from traditional studies and their relevance in real-world clinical practice. So let's back up here for just a minute and simply define it. Bethany, what's your sort of go-to description, as I'm sure you hear this question a lot.
Bethany Kwan:
Yes, absolutely. In my mind and application, pragmatic research broadly is principally about studying effectiveness in real-world settings and populations versus efficacy in more controlled clinical trial contexts. Pragmatic research spans a wide variety of research activities, not just clinical trials, and research can be more or less pragmatic in different ways; we often think about it on a continuum. Pragmatic research includes activities ranging from pragmatic clinical trials, as you were just describing, focusing on testing interventions and procedures at the patient level, to testing more complex interventions and behavioral interventions and models of care implemented at the practice or system level, with consideration of health system and policy contexts. Ideally, pragmatic research is designed to support decisions by service and care providers, policymakers, patients and other interested parties on whether and in what context to adopt, deliver or make use of an intervention. Pragmatic research also ideally includes a large interdisciplinary team with expertise in biostatistics and study design (like Heather), technology integration, data and informatics tools, dissemination and implementation science (like myself), qualitative and mixed methods, partnership development, and community and stakeholder engagement.
Bethany Kwan:
I also wanted to note some myth busting. Some people think that a pragmatic trial means lacking in rigor, that it's messy, but that's certainly not the case. We have very rigorous methods for pragmatic trials, maybe even more so than traditional clinical trials, because you do have to account for so many extraneous variables. It also doesn't necessarily mean practical in the sense of doing whatever is easiest. It means study design and research conducted in ways that best reflect usual care systems and processes.
Deborah Borfitz:
Sounds very practical; I think that's what we're getting at, right? So why is it that we have been hearing so much more about these pragmatic clinical trials over the past decade? I would imagine it has something to do with the ready availability of digitized health data and ongoing concerns about improving care quality while bringing down costs, and doing so as quickly as possible. We've been hearing about that forever. Does that sort of wrap it up, Heather, or is the rationale a lot deeper than that?
Heather Smyth:
So, yeah, I think definitely the increase in technologies like electronic health records or personal wearable devices that track, you know, glucose monitoring or physical activity or sleep, all of those things definitely give us opportunity to ask questions and answer questions that we wouldn't necessarily have been able to do a few decades ago.
Heather Smyth:
Maybe I'm a little bit of an idealist, but I think the availability of this data is more serendipity. I think the reason we're looking at this explosion of pragmatic trials is really that it's a natural extension of human curiosity and the scientific endeavor. We always start out asking how and why things work the way they do, and then we start to realize: wait, we have the ability to take this knowledge and shape the world around us. So even though I have a very deep philosophical respect for basic research and bench science, and I think it's important to ask questions for the sake of asking questions, I also feel, as a pragmatic researcher, that there's a very unique satisfaction in watching the application of this rigorous science in real-world settings, and that can pave the way for real-world change.
Heather Smyth:
So if you ask me, this current focus on pragmatic trials and effectiveness studies reflects that we as a society have recognized that there are aspects of our social world that can be improved, and pragmatic researchers recognize that we can use scientific principles to address those areas. It's really exciting and rewarding to work in this area.
Deborah Borfitz:
I bet.
Bethany Kwan:
And you know, 20 years ago we called this translation of research into practice. Pragmatic trials is a newer term for something that, in a lot of ways, has existed for a while, and it has really evolved into dissemination and implementation science and pragmatic research. There's also increasing recognition of the overlaps between quality improvement and evidence-based medicine as part of the learning health system concept, as you noted earlier. Yes, pragmatic research is meant to be practical, with results meant to be directly applicable and to quickly inform practice and policy. I also want to acknowledge that inherent to this application is recognition that healthcare is a business. We need to align with how businesses and the consumers of their services, that is, us, the patients, make decisions, and we need to collect data on outcomes that truly matter to all those decision makers. And we need everyone to agree that a new innovation improves outcomes that matter, satisfies unmet needs and can be sustained using available resources. And how do we do that? This is where that community and stakeholder engagement aspect of pragmatic research becomes so critical.
Deborah Borfitz:
Thank you so much for that addition, Bethany. That was really helpful, and I want to stick with you here for a second. I believe the National Institutes of Health was a big supporter of pragmatic clinical trials at one time, but of course we have new leadership now, so I have to ask is that still the case? Last I looked, the website for the NIH Pragmatic Trials Collaboratory was still up and multiple pragmatic trials were still enrolling. So I'm guessing the goals of the study approach are well aligned with the administration's priorities and have perhaps been spared funding cuts, at least to some degree.
Bethany Kwan:
I can't speak to NIH priorities per se, as they are still evolving, but we do know that the NIH has now, as it always has had, an explicit emphasis on improving the health of Americans. And in my experience as a reviewer of NIH grants myself, there is an emphasis on impact and research significance as a major driver of reviews and, ultimately, funding decisions. Pragmatic trials give us that opportunity to identify evidence-based approaches that will work in real-world healthcare settings and truly improve health and healthcare for us all.
Deborah Borfitz:
Okay, good, thank you for that. Since pragmatic trials tend to have unique design features and might utilize multiple EHR systems or data sources, it sounds like conducting these sorts of studies requires a fair amount of specialized knowledge and data science skills, as well as a lot of flexibility. So how does one find or acquire those skills and that mindset, and what are the biggest design and data obstacles faced by investigators and study teams? I don't know, Bethany, Heather, one or both of you may have something to say on this front. I'll let you duke it out here.
Bethany Kwan:
Yeah, absolutely. It does require a wide array of specialized knowledge and skills, but no one person needs all of that expertise. Team science is critical, and team science is in itself a skill. On my teams we have data scientists, informatics, health economics, implementation science, and various community and clinical partners that are all part of the team. And I believe Heather has some comments on design and data obstacles too.
Heather Smyth:
Okay, yeah. So I should introduce this first: my training is as a quantitative psychologist, so concerns about the measurement of our variables are always forefront in my mind. From that context, understand that when we collect data, we can be collecting it for a lot of different reasons. Sometimes it's specifically planned for research. In the case of EHR data, it's clinical information that is used for the purposes of making clinical decisions and keeping those records, or maybe administrative data, and all of those types of data are collected for a specific purpose. With pragmatic trials, when we look to utilize the data that is available, often from these clinical records, and to use it as quickly as possible, sometimes from a measurement perspective there can be a little bit of noise, or it can be not quite as ideal as what a statistician would want to be working with. So I think it's important to think about the quality of the data and what the data is meant to be used for when you then use it for research and try to make interpretations from it.
Heather Smyth:
And I know earlier Bethany had mentioned this difference between effectiveness in the real world versus efficacy in a lab. In a similar way, I kind of frame the difference between traditional and pragmatic research as a difference between internal and external validity. I'm going to steal a phrase from Dave MacKinnon, who was my graduate school mentor, and he would always say that measurement is the soft underbelly of all of our statistical models, meaning that the models we use are really made for optimizing internal validity, with robust and precise measurements obtained through well-controlled studies. And as I'm sure our listeners would guess, in pragmatic trials we don't always have all of those controls, and that does have implications for which models we use statistically and how we interpret those models. So as a statistician, that lack of control can sometimes make me a little bit nervous.
Heather Smyth:
But if we switch our mindset and say we're not focused on efficacy, we're focused on effectiveness; it's not internal validity, it's external validity; then those data issues, where the data is not perfectly clean or not very controlled, just become a really interesting design characteristic, bringing nuance to the project and to the interpretations rather than being seen as a scientific limitation. And at the end of the day, for me, this just means that we get to experience the joy that comes with thoughtfully thinking about our research questions, understanding that our questions exist within a multifaceted context with multiple layers of interested parties. I think that's just a really fun way of doing good science.
Deborah Borfitz:
That was super helpful, Heather, thank you so much for that. I have used those terms, effectiveness and efficacy, interchangeably myself, so I'm going to be careful moving forward to consider the nuances of that.
Announcement:
Are you enjoying the conversation? We'd love to hear from you. Please subscribe to the podcast and give us a rating. It helps other people find and join the conversation. If you've got speaker or topic ideas, we'd love to hear those too. You can send them in a podcast review.
Deborah Borfitz:
I wanted to pivot. A few years ago I was at your Colorado Pragmatic Research in Health Conference, and one of the keynote speakers said that the yardsticks of success of a pragmatic trial are whether an intervention improves outcomes and whether the study accounts for the heterogeneity of patients, what he referred to as the other 85% that the strict eligibility criteria of a traditional clinical trial would exclude from participation. So my question is this: is that still the mantra, and do we know how often we are in fact capturing that other 85%? Either of you.
Heather Smyth:
Yes, emphatically yes. We do still care about that heterogeneity in our study designs, and we are intentional about building it into not just the designs but our research teams and the study participants that we recruit. I know just a few moments ago I was saying how having this variation in our data can be a little bit noisy and make it difficult for our statistical models, but on the flip side, this also gives us a chance to investigate things like heterogeneity of treatment effects. If you have a lot of variety in the participants that you are studying, you're able to ask: does this work differently for one group than it does for another? Often you might hear terms like mediation effects and moderation effects, and that's really what I'm talking about: that heterogeneity of treatment effect, or those interaction terms for my statistical fans out there, or the mechanisms, the mediation questions. But I really want to give you some, if I can say it, pragmatic advice on dealing with that.
Heather Smyth:
Okay. When we include this heterogeneity in our study designs, you have to remember that mathematically those effects are going to be smaller than whatever your main treatment effect is going to be. So if you are interested in looking at those effects, you need to think about your sample size. Oftentimes I'll have somebody writing a grant and they'll say, how many people do I need to find this main effect? But in reality they're interested in heterogeneous effects, and I have to tell them: you're going to need a larger sample than that if you're just powering on your main effect. And so I think, because of some of the inherent limitations of pragmatic trials, it's important to think about the difference between statistical significance and clinical significance.
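Heather's point, that an interaction (heterogeneity-of-treatment) effect is typically smaller than the main effect and therefore needs a much larger sample, can be sketched with a quick simulation. This is a minimal illustration only; the effect sizes (a standardized main effect of 0.5 and a subgroup effect half that size) and sample sizes are assumed numbers, not figures from any trial discussed here:

```python
# Hypothetical power simulation: a smaller (interaction-sized) effect
# needs roughly four times the sample to reach the same power.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power_sim(effect, n_per_group, n_sims=2000, alpha=0.05):
    """Estimate two-sample t-test power for a standardized effect size."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)     # control arm
        b = rng.normal(effect, 1.0, n_per_group)  # treated arm
        _, p = stats.ttest_ind(a, b)
        hits += p < alpha
    return hits / n_sims

main_effect = 0.5    # assumed standardized main treatment effect
interaction = 0.25   # assumed subgroup (interaction) effect, half as large
n = 64               # per-group n giving roughly 80% power for d = 0.5

print(f"main effect, n={n}: power ~ {power_sim(main_effect, n):.2f}")
print(f"interaction, n={n}: power ~ {power_sim(interaction, n):.2f}")
print(f"interaction, n={4*n}: power ~ {power_sim(interaction, 4*n):.2f}")
```

Because power depends on the effect size times the square root of the sample size, halving the effect roughly quadruples the sample needed, which is why powering only on the main effect leaves heterogeneity analyses underpowered.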
Heather Smyth:
If you have a nice heterogeneous sample and you want to look at heterogeneous treatment effects, you may or may not be able to get the sample size that you need; you might have to purposively oversample in certain groups. You might get to a place where you can't reach statistical significance, which is kind of like a magic cutoff, and it's because of your sample size. But you can look at your treatment effects, your effect sizes, and have a discussion about the clinical significance. So the bottom line is, when you're working with your statistician, don't ask them simply for a p-value. Also ask them about the effect sizes and have a discussion about that, because that is a creative way to get the most out of the data and information that we're collecting.
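The advice to ask for effect sizes alongside p-values can be illustrated with a short sketch; the subgroup size (15 per arm) and true effect (0.5 standard deviations) are hypothetical numbers chosen for demonstration:

```python
# Hypothetical sketch: report the effect size (Cohen's d) alongside the
# p-value, since a small subgroup can show a sizeable effect that still
# misses the p < 0.05 cutoff.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled_sd

# Simulated small subgroup: 15 patients per arm, true effect of 0.5 SD
control = rng.normal(0.0, 1.0, 15)
treated = rng.normal(0.5, 1.0, 15)

t_stat, p = stats.ttest_ind(control, treated)
d = cohens_d(control, treated)
print(f"p = {p:.3f}, Cohen's d = {d:.2f}")
```

With a group this small, a clinically meaningful Cohen's d can come with p above 0.05, so reporting the effect size keeps a real signal from being dismissed on significance alone.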
Deborah Borfitz:
Thank you so much for that, Heather. That was a super interesting tip, and it gets to my final question for each of you, and maybe we can move beyond that a little bit. Planning a pragmatic trial is one thing and implementing it is another, and obviously that's no doubt the harder part. So I'm hoping that each of you, Bethany and Heather, can leave us today with a few tips on how to deal with some of the usual expected challenges, maybe three or four key takeaways on how to successfully conduct pragmatic trials. What do you have to offer on this?
Heather Smyth:
Well, just real quickly, in my mind it all comes down to project management. When you've got a large, interdisciplinary team, multiple practice sites, and various perspectives and priorities, then organization, explicit communication and role clarity are the secret ingredients to a successful project. I would say make sure you have a dedicated project manager and give yourself permission to do the administrative tasks.
Deborah Borfitz:
So that everybody's on the same page. Sounds great. Bethany, how about you?
Bethany Kwan:
Yes, absolutely, being well organized is one of the few things we can control in pragmatic research. Many things come up during the conduct of a pragmatic trial. EHR systems, on which we depend for our data and in some cases for delivery of our interventions, will change. They will be upgraded, they will be modified. There will be staff turnover, both on the research team and at the clinical sites. You might lose your champion at different clinical sites. It's really vital to anticipate that turnover and have a backup plan for staff trainings; budget this into your timeline and into your financial budget, I would say. Also, investigators need to have a willingness to pivot, a flexible mindset and different ways of accomplishing the project goals. These unplanned adaptations should be tracked systematically, both for explaining the findings as well as for potentially identifying novel hypotheses that can be tested in future research. You asked about usual or expected challenges. Well, one usual or expected challenge is unusual and unexpected challenges, like COVID.
Bethany Kwan:
The ability to shift in delivery presented an opportunity to study different modalities, different ways of delivering interventions during COVID. It was really a natural experiment in a lot of ways around telehealth, and this flexible mindset, this ability to adapt to these unexpected challenges, is one of the key differences from a traditional clinical trial, and one that I find makes classically trained clinical trialists very uncomfortable at times.
Deborah Borfitz:
This has all been so very enlightening for me and, I'm sure, for our listeners as well. There's really no arguing that the need for pragmatic clinical trials is real and increasingly well appreciated, given their enormous potential value to the overall healthcare enterprise. Thank you, Bethany and Heather, for enlightening us all on that point, as well as for your pragmatic tips on how it can be done right to create lasting and meaningful change in real-world healthcare settings. And, as always, a big thank you to everyone out there for listening in. If you're not subscribed to this podcast yet, please consider going to Apple Podcasts and doing so right now so you don't miss your monthly dose of news and perspectives you'll be hard-pressed to find anywhere else. And, if you're up for it, I'd also be so very grateful if you'd leave a rating and review while you're there.
Deborah Borfitz:
One more thing before we go. If you liked today's conversation, it is only a glimpse of what you can expect from Scope Europe presenters and panelists. Please plan to join us October 14th and 15th in Barcelona, when clinical operations executives will be exploring the latest trends in clinical innovation, planning and operations. Save an additional 10% off any current rate by using the code SOT10. For more information, visit scopesummiteurope.com. Bye for now.